<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Milos writes about stuff]]></title><description><![CDATA[There's lots of stuff but it's mostly software.]]></description><link>https://blog.levacic.net/</link><image><url>https://blog.levacic.net/favicon.png</url><title>Milos writes about stuff</title><link>https://blog.levacic.net/</link></image><generator>Ghost 3.40</generator><lastBuildDate>Tue, 31 Mar 2026 21:48:58 GMT</lastBuildDate><atom:link href="https://blog.levacic.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Server setup guide]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is a guide to setting up and installing PHP and Apache or Nginx on an Ubuntu 20.04.1 LTS system running on an AWS EC2 instance (though it's probably almost exactly the same elsewhere). It should probably work more-or-less the same on other Ubuntu releases too, which was</p>]]></description><link>https://blog.levacic.net/2020/12/19/server-setup-guide/</link><guid isPermaLink="false">5fde312b7c82ae4336695818</guid><category><![CDATA[infrastructure]]></category><category><![CDATA[apache]]></category><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[aws]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Milos Levacic]]></dc:creator><pubDate>Sat, 19 Dec 2020 17:03:04 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is a guide to setting up and installing PHP and Apache or Nginx on an Ubuntu 20.04.1 LTS system running on an AWS EC2 instance (though it's probably almost exactly the same elsewhere). 
It should work more or less the same on other Ubuntu releases too - for example, this guide was more or less identical while Ubuntu 18.04.3 LTS was the latest LTS release, and even back on Ubuntu 16.04.3 LTS - which is good, because it means the guide is pretty stable over time and will likely remain so.</p>
<p>It's recommended to read through this guide before attempting to follow it, so you know what to expect and which alternative options are suggested for certain steps.</p>
<p>Not everything here is absolutely required for setting up a system - as always, it depends on the specific situation. This just documents the most common stuff I've personally needed when setting up servers, so that I could easily refer to it when needed.</p>
<p>The projects I usually deployed on these servers are Laravel applications, either running directly on the server in an Apache+PHP setup, or within a Docker container, proxied through NGINX.</p>
<p>This guide went through many updates and iterations over the years, but I've just now gotten to managing it in a <a href="https://github.com/levacic/server-setup-guide">GitHub repo</a> (it previously lived within a private Gist for a very long time).</p>
<p>There is also a mirror of this guide <a href="https://blog.levacic.net/2020/12/19/server-setup-guide/">on my blog</a>.</p>
<p>All right, let's go.</p>
<h2 id="updates">Updates</h2>
<p>Before doing anything else, update all packages on the system:</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get upgrade
</code></pre>
<p>Sometimes you might get a message regarding GRUB - I've yet to figure out why and when it happens - but ignoring it and skipping the GRUB reinstall seems to work.</p>
<h2 id="editor">Editor</h2>
<p>Most Ubuntu images on EC2 use <code>nano</code> as the default editor. If you'd like to change it to <code>vim</code> instead, run the following:</p>
<pre><code class="language-sh">sudo update-alternatives --config editor
</code></pre>
<p>You'll be presented with several choices, including <code>vim.basic</code>, which is the one you want.</p>
<h2 id="fail2ban">Fail2ban</h2>
<p>Fail2ban is a security tool which monitors log files for common services running on servers, and when it detects suspicious behavior from specific IP addresses (e.g. too many consecutive failed login attempts), it bans those IP addresses for a certain amount of time.</p>
<p>This is needed almost always on any publicly accessible servers as an important security precaution - but might be applicable in other situations as well, depending on the vulnerability profile. In cloud hosting setups specifically, this is a must, as the available IP addresses are almost guaranteed to be reused and publicly known, and thus a common target for brute force hacking attempts.</p>
<p>To install, just run the following:</p>
<pre><code class="language-sh">sudo apt-get install fail2ban
</code></pre>
<p>Fail2ban's configuration is located in <code>/etc/fail2ban</code>, and by default on Debian-based distributions, includes SSHD monitoring, which you can confirm by checking the contents of <code>/etc/fail2ban/jail.d/defaults-debian.conf</code>, which should look something like this:</p>
<pre><code class="language-ini">[sshd]
enabled = true
</code></pre>
<p>Monitoring can be enabled for other services as well, but this is a baseline security precaution. In a setup with a bastion SSH proxy server, the bastion <em>should</em> have Fail2ban installed and configured to monitor SSH connections.</p>
<p>In this case, Fail2ban will monitor <code>/var/log/auth.log</code> (which is where SSHD logs SSH actions and logins) and track the IP addresses attempting to log in.</p>
<p>Fail2ban has its own log file in <code>/var/log/fail2ban.log</code> where it's possible to review what it's doing.</p>
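<p>For example, the SSH jail's thresholds can be tuned with a local override file - the values below (all in seconds) are illustrative, not recommendations:</p>
<pre><code class="language-ini"># /etc/fail2ban/jail.local - example override; values are illustrative
[sshd]
enabled = true
# Ban after 5 failures within 10 minutes, for 1 hour.
maxretry = 5
findtime = 600
bantime = 3600
</code></pre>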
<h2 id="iptables">iptables</h2>
<p>Depending on the hosting environment, it might be possible to filter traffic using platform-provided features, such as Security Groups, which is a common mechanism available on cloud platforms.</p>
<p>It's possible to set up a similar traffic filtering mechanism within the server itself by using <code>iptables</code>.</p>
<p>The following command can be used at any moment to view the current <code>iptables</code> configuration:</p>
<pre><code class="language-sh">sudo iptables --list --verbose
</code></pre>
<p>Or its shorter version:</p>
<pre><code class="language-sh">sudo iptables -L -v
</code></pre>
<p>At a minimum, you want the following configuration:</p>
<pre><code class="language-sh"># Accept incoming localhost connections.
sudo iptables --append INPUT --in-interface lo --jump ACCEPT

# Accept existing connections, to avoid dropping the current SSH connection in
# cases of misconfiguration.
sudo iptables --append INPUT --match conntrack --ctstate RELATED,ESTABLISHED --jump ACCEPT

# Accept incoming SSH, HTTP, and HTTPS connections.
sudo iptables --append INPUT --protocol tcp --dport 22 --jump ACCEPT
sudo iptables --append INPUT --protocol tcp --dport 80 --jump ACCEPT
sudo iptables --append INPUT --protocol tcp --dport 443 --jump ACCEPT

# Drop all other traffic.
sudo iptables --append INPUT --jump DROP
</code></pre>
<p>It's possible to insert a rule at a specific position like this:</p>
<pre><code class="language-sh">sudo iptables --insert chain [rule-num] rule-specification
</code></pre>
<p>For example, to add a rule to accept MySQL traffic to position 6, you can do this:</p>
<pre><code class="language-sh">sudo iptables --insert INPUT 6 --protocol tcp --dport 3306 --jump ACCEPT
</code></pre>
<p>To delete a rule in a specific position, you can do:</p>
<pre><code class="language-sh">sudo iptables --delete chain rule-num
</code></pre>
<p>For example, to delete the rule in position 6:</p>
<pre><code class="language-sh">sudo iptables --delete INPUT 6
</code></pre>
<h3 id="persistingiptablesrules">Persisting iptables rules</h3>
<p>By default, the <code>iptables</code> configuration will clear after a server restart. To persist it, you want to install <code>iptables-persistent</code>, or <code>netfilter-persistent</code> (which is the new name for the same program).</p>
<pre><code class="language-sh">sudo apt-get install netfilter-persistent
</code></pre>
<p>During installation, you will be prompted to persist the current IPv4 and IPv6 rules, which will be saved into <code>/etc/iptables/rules.v4</code> and <code>/etc/iptables/rules.v6</code> respectively.</p>
<p>To update the rules, you can use:</p>
<pre><code class="language-sh">sudo iptables-save | sudo tee /etc/iptables/rules.v4
</code></pre>
<p>Finally, you can restart the service (note that the service itself is named <code>netfilter-persistent</code>):</p>
<pre><code class="language-sh">sudo service netfilter-persistent restart
</code></pre>
<p>To check the service status, run:</p>
<pre><code class="language-sh">sudo service netfilter-persistent status
</code></pre>
<h3 id="nft">NFT</h3>
<p>Finally, you might want to use the newer <code>nftables</code> framework (the <code>nft</code> tool) instead of <code>iptables</code> - it's the recommended default firewall today. See the following links for installation and configuration info:</p>
<ul>
<li><a href="https://wiki.debian.org/nftables">https://wiki.debian.org/nftables</a></li>
<li><a href="https://wiki.nftables.org/wiki-nftables/index.php/Moving_from_iptables_to_nftables">https://wiki.nftables.org/wiki-nftables/index.php/Moving_from_iptables_to_nftables</a></li>
</ul>
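<p>For reference, here's a rough <code>nftables</code> sketch of the same baseline rules as the <code>iptables</code> configuration above - treat it as a starting point rather than a tested configuration:</p>
<pre><code class="language-plaintext"># /etc/nftables.conf - sketch mirroring the iptables rules above
table inet filter {
    chain input {
        # Default policy replaces the final DROP rule.
        type filter hook input priority 0; policy drop;
        iif &quot;lo&quot; accept
        ct state related,established accept
        tcp dport { 22, 80, 443 } accept
    }
}
</code></pre>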
<h2 id="dotfiles">Dotfiles</h2>
<blockquote>
<p><strong>Note:</strong> Specific to my personal setup, feel free to skip.</p>
</blockquote>
<p>Assuming you're logged into the server, the first thing you want to do is set up an SSH key so you can later clone the repos you need. If you don't already have a public/private key pair (e.g. <code>id_rsa</code> and <code>id_rsa.pub</code> in <code>~/.ssh</code>), generate one:</p>
<pre><code class="language-sh">ssh-keygen -t rsa -b 4096
</code></pre>
<p>Clone the dotfiles repo (add the public key - <code>cat ~/.ssh/id_rsa.pub</code> - to the approved access keys for that repo), and configure it as per the instructions in the repo's README file. This provides a more readable prompt and some useful project-administration commands.</p>
<p>Log out and back in to get the nicer prompt.</p>
<h2 id="bashhistoryperuserlogging">Bash history per-user logging</h2>
<blockquote>
<p><strong>Note:</strong> This section of the guide differentiates between a &quot;user&quot; (an account on the server, such as the default <code>ubuntu</code> user on Ubuntu systems) and a &quot;person&quot;/&quot;people&quot; (which are actual humans, most often developers or system administrators, who need access to the server).</p>
</blockquote>
<p>Most of the time, we have a single user with root permissions doing the setup (usually <code>ubuntu</code> in the context of this guide - other non-AWS providers might use a different default user in their base images), which multiple people are authorized to log in as, using their SSH keys. This is accomplished by adding those people's keys to the <code>ubuntu</code> user's <code>~/.ssh/authorized_keys</code> file.</p>
<p>A useful setup is to have each person's Bash commands logged into a different Bash history file, so it's easy to track <em>who did what</em> on the server. This is not a foolproof solution, nor is it meant to be - these still need to be people who are trusted not to act maliciously on the server, and with this setup, they have the opportunity to delete any traces of their activity.</p>
<p>If you're looking for a full-blown solution, what you need is a logging/auditing bastion host, and there are both free and paid enterprise-grade solutions to accomplish this. One such free solution can be implemented by following this guide:</p>
<ul>
<li><a href="https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/">https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/</a></li>
</ul>
<p>So, in order to log SSH history separately for each person who logs into the server as the <code>ubuntu</code> user, we first need to configure the SSH server to allow users to set their own environment variables while SSH sessions are being established. This is a potential security risk when access restrictions are limited (e.g. if we were setting up a bastion host with limited access), but doesn't really change anything in a root-login scenario like we have here, where people can already do everything once logged into the server.</p>
<p>This is achieved by editing the <code>/etc/ssh/sshd_config</code> file and configuring:</p>
<pre><code class="language-plaintext">PermitUserEnvironment yes
</code></pre>
<p>After this you need to restart the SSH daemon (a reload probably works as well):</p>
<pre><code class="language-sh">sudo service sshd restart
</code></pre>
<p>This allows you to add pre-configured environment variables into the <code>authorized_keys</code> file - e.g. instead of the file looking like this:</p>
<pre><code class="language-plaintext"># user-foo
ssh-rsa AAAAB3NzaC1y...

# user-bar
ssh-rsa AAAAB3NzaC1y...
</code></pre>
<p>you can do this:</p>
<pre><code class="language-plaintext"># user-foo
environment=&quot;LOGGED_IN_USER=user-foo&quot; ssh-rsa AAAAB3NzaC1y...

# user-bar
environment=&quot;LOGGED_IN_USER=user-bar&quot; ssh-rsa AAAAB3NzaC1y...
</code></pre>
<p>after which, any time a person logs in with a specific key, the respective <code>LOGGED_IN_USER</code> environment variable will be set accordingly. This further allows us to configure a custom Bash history file by adding the following to the <code>~/.bashrc</code> file:</p>
<pre><code class="language-sh"># Enable timestamps in history, and format them nicely for display.
HISTTIMEFORMAT=&quot;%F %T &quot;

# Append history, and update it after every command.
shopt -s histappend
PROMPT_COMMAND=&quot;history -a;$PROMPT_COMMAND&quot;

# Track SSH logins and per-key history.
if [ &quot;$LOGGED_IN_USER&quot; != &quot;&quot; ]
then
  logger -ip auth.notice -t sshd &quot;Accepted publickey for $LOGGED_IN_USER&quot;
  HISTFILE=&quot;$HOME/.$LOGGED_IN_USER.bash_history&quot;
fi
</code></pre>
<p>The above configures a few additional things, to make the history more reliable and easier to use.</p>
<h2 id="awscloudwatchagent">AWS CloudWatch Agent</h2>
<p>If you're in an AWS environment and want your server to send additional metrics to CloudWatch (which is recommended in order to track some additional metrics not included by default, e.g. disk and memory usage), you need to install the CloudWatch Agent.</p>
<p>The server running the agent needs to have an IAM role assigned which has the <code>CloudWatchAgentServerPolicy</code> policy attached - so add that, in addition to any other policies the server's role needs. The alternative option is to create an IAM user with this policy, and configure the user's access key ID and secret access key when setting up the CloudWatch Agent - however, this is out of the scope of this guide, and not the recommended way of setting things up anyway.</p>
<p>Note that monitoring a server using the CloudWatch Agent will incur additional costs - consult the AWS pricing pages and documentation for more info.</p>
<h3 id="installation">Installation</h3>
<p>For Ubuntu on AMD64, download the following installer package:</p>
<pre><code class="language-sh">wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
</code></pre>
<p>You can also use a region-specific URL, e.g. <code>s3.{region}.amazonaws.com</code> to potentially speed up the download - although it's not a major difference anyway.</p>
<p>For other systems, review the AWS documentation.</p>
<p>Install the agent like this:</p>
<pre><code class="language-sh">sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
</code></pre>
<h3 id="configuration">Configuration</h3>
<p>Before configuring and running the agent, you must ensure the correct region will be in use - by default, the agent will publish the metrics to the same region in which the EC2 instance is located. The <code>region</code> entry in the <code>[default]</code> section of your AWS configuration file (i.e. <code>~/.aws/config</code>) will take precedence over that default, and the <code>region</code> entry in the <code>[AmazonCloudWatchAgent]</code> section of the AWS configuration file (if it exists) will have the highest precedence.</p>
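<p>For example (the region names here are placeholders - use your own):</p>
<pre><code class="language-ini"># ~/.aws/config - region values are placeholders
[default]
region = eu-central-1

# This section, if present, takes precedence for the agent.
[AmazonCloudWatchAgent]
region = us-east-1
</code></pre>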
<p>You can run the configuration wizard by entering the following:</p>
<pre><code class="language-sh">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
</code></pre>
<p>This will ask a series of questions, based on which a configuration will be created and stored in <code>/opt/aws/amazon-cloudwatch-agent/bin/config.json</code>.</p>
<p>Alternatively, you can just create that file manually with the following configuration which makes sensible assumptions about the logging requirements:</p>
<pre><code class="language-json">{
    &quot;agent&quot;: {
        &quot;metrics_collection_interval&quot;: 60,
        &quot;run_as_user&quot;: &quot;root&quot;
    },
    &quot;metrics&quot;: {
        &quot;append_dimensions&quot;: {
            &quot;AutoScalingGroupName&quot;: &quot;${aws:AutoScalingGroupName}&quot;,
            &quot;ImageId&quot;: &quot;${aws:ImageId}&quot;,
            &quot;InstanceId&quot;: &quot;${aws:InstanceId}&quot;,
            &quot;InstanceType&quot;: &quot;${aws:InstanceType}&quot;
        },
        &quot;aggregation_dimensions&quot;: [
            [
                &quot;InstanceId&quot;
            ]
        ],
        &quot;metrics_collected&quot;: {
            &quot;cpu&quot;: {
                &quot;measurement&quot;: [
                    &quot;cpu_usage_idle&quot;,
                    &quot;cpu_usage_iowait&quot;,
                    &quot;cpu_usage_user&quot;,
                    &quot;cpu_usage_system&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60,
                &quot;totalcpu&quot;: false
            },
            &quot;disk&quot;: {
                &quot;measurement&quot;: [
                    &quot;used_percent&quot;,
                    &quot;inodes_free&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60,
                &quot;resources&quot;: [
                    &quot;*&quot;
                ]
            },
            &quot;diskio&quot;: {
                &quot;measurement&quot;: [
                    &quot;io_time&quot;,
                    &quot;write_bytes&quot;,
                    &quot;read_bytes&quot;,
                    &quot;writes&quot;,
                    &quot;reads&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60,
                &quot;resources&quot;: [
                    &quot;*&quot;
                ]
            },
            &quot;mem&quot;: {
                &quot;measurement&quot;: [
                    &quot;mem_used_percent&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60
            },
            &quot;netstat&quot;: {
                &quot;measurement&quot;: [
                    &quot;tcp_established&quot;,
                    &quot;tcp_time_wait&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60
            },
            &quot;swap&quot;: {
                &quot;measurement&quot;: [
                    &quot;swap_used_percent&quot;
                ],
                &quot;metrics_collection_interval&quot;: 60
            }
        }
    }
}
</code></pre>
<p>Feel free to adapt this configuration to your own needs.</p>
<h3 id="runningtheagent">Running the agent</h3>
<p>The following command starts the CloudWatch Agent on an EC2 instance running Linux:</p>
<pre><code class="language-sh">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
</code></pre>
<p>The agent should automatically start after system reboots as well, but in case you encounter issues with that, you might want to configure this command to run on reboot. Note that <em>this should not be necessary</em>, but if it is, the easiest way to do that is to add it to the <code>root</code> account's crontab:</p>
<pre><code class="language-sh">sudo crontab -e
</code></pre>
<p>Add the following:</p>
<pre><code class="language-plaintext">@reboot /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
</code></pre>
<p>If setting up an automated system installation via tools like Puppet or something else, you'll probably prefer to configure a new file under <code>/etc/cron.d/</code> or something similar, however that's also out of the scope of this guide.</p>
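<p>The <code>/etc/cron.d/</code> variant of the same entry would look something like this (the filename is arbitrary) - note that <code>cron.d</code> files require an extra user field after the schedule:</p>
<pre><code class="language-plaintext"># /etc/cron.d/cloudwatch-agent
@reboot root /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
</code></pre>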
<h2 id="apache">Apache</h2>
<p>Most likely you want Apache. If not, an alternative setup with an NGINX reverse proxy is provided further below.</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get install apache2
</code></pre>
<p>Allow traffic through the firewall (usually not needed, but just in case; more info on this later):</p>
<pre><code class="language-sh">sudo ufw app list
</code></pre>
<p>This should show a few options including &quot;Apache Full&quot;. Enable that:</p>
<pre><code class="language-sh">sudo ufw allow in &quot;Apache Full&quot;
</code></pre>
<p>That's it for now, we'll configure it later.</p>
<h2 id="mysql">MySQL</h2>
<p>If you're using AWS, you're probably going to use RDS as a database without installing MySQL on the server instance.</p>
<p>In that case, you'll usually want at least the client-side MySQL programs <code>mysql</code> and <code>mysqldump</code>, as they're useful for obvious reasons.</p>
<p>To install them, just run:</p>
<pre><code class="language-sh">sudo apt-get install mysql-client
</code></pre>
<p>If, however, you do need the MySQL server installed on the instance, you should skip installing only the client, and run something like:</p>
<pre><code class="language-sh">sudo apt-get install mysql-server
</code></pre>
<p>Note that this will automatically install the client as well.</p>
<p>Following this, run the secure MySQL setup program:</p>
<pre><code class="language-sh">mysql_secure_installation
</code></pre>
<p>Most of the steps to secure the installation should be obvious, use your best judgement and security awareness - be sure to also store the root password somewhere safe.</p>
<p>If you need to create a database for the project, here's a quick four-liner:</p>
<pre><code class="language-sql">CREATE DATABASE example CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'example'@'localhost' IDENTIFIED BY 'randomthirtytwocharacterpassword';
GRANT ALL PRIVILEGES ON example.* TO 'example'@'localhost';
FLUSH PRIVILEGES;
</code></pre>
<p>If connecting to an external database, such as RDS, you want to use <code>%</code> instead of <code>localhost</code> (i.e. allow all hosts for that user, unless you know the exact IP address from where the client will connect, which you probably don't). Here's that version, for easier copy-pasting:</p>
<pre><code class="language-sql">CREATE DATABASE example CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'example'@'%' IDENTIFIED BY 'randomthirtytwocharacterpassword';
GRANT ALL PRIVILEGES ON example.* TO 'example'@'%';
FLUSH PRIVILEGES;
</code></pre>
<p>You're welcome.</p>
<h2 id="php7480">PHP 7.4/8.0</h2>
<p>If we're installing PHP, we want the latest version. On Ubuntu 20.04.1 LTS you need to add additional repos for this:</p>
<pre><code class="language-sh">sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
</code></pre>
<p>Now install PHP and a bunch of extensions we might need:</p>
<pre><code class="language-sh">sudo apt-get -y install \
    php7.4 \
    php7.4-apc \
    php7.4-cli \
    php7.4-bcmath \
    php7.4-curl \
    php7.4-dom \
    php7.4-gd \
    php7.4-gmp \
    php7.4-imagick \
    php7.4-imap \
    php7.4-intl \
    php7.4-json \
    php7.4-ldap \
    php7.4-mailparse \
    php7.4-mbstring \
    php7.4-memcached \
    php7.4-mysql \
    php7.4-opcache \
    php7.4-pgsql \
    php7.4-pspell \
    php7.4-redis \
    php7.4-soap \
    php7.4-sqlite3 \
    php7.4-tidy \
    php7.4-xml \
    php7.4-xmlrpc \
    php7.4-xsl \
    php7.4-zip \
    unzip
</code></pre>
<p>Some of these packages will be installed by default, some might overlap a bit with their dependencies, but in general, this should cover you pretty well with most commonly used packages - or at least the ones I personally used in different environments and for different projects.</p>
<p>The command also installs the <code>unzip</code> program, which is recommended as it lets Composer install dependencies faster.</p>
<p>One thing you should <em>NOT</em> install in your production environment is <code>xdebug</code>, because it can slow everything down.</p>
<p>For PHP 8.0, everything should probably work just as well by simply replacing <code>php7.4</code> with <code>php8.0</code> in all of the packages listed in the install command above.</p>
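<p>The swap is mechanical, so if you keep the package list in a script, something like this sketch does it (the list here is abbreviated to a few packages):</p>
<pre><code class="language-sh"># Swap the version prefix in the package list (abbreviated example).
printf '%s\n' php7.4 php7.4-cli php7.4-mbstring php7.4-mysql \
    | sed 's/php7\.4/php8.0/'
# Prints: php8.0, php8.0-cli, php8.0-mbstring, php8.0-mysql
</code></pre>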
<h2 id="project">Project</h2>
<blockquote>
<p><em>NOTE:</em> This part of the guide is specific to the projects and setup I used on the projects my team and I worked on, and relies on some internal knowledge about directory structure. I might add that documentation later in case it could be useful for someone. It also relies on some commands from a custom set of scripts. For the most part, you can skip this if you're only interested in the server setup itself.</p>
</blockquote>
<p>Clone your project. We like to use <code>~/apps</code>, but feel free to decide what you'd like to use - for the rest of this guide, we'll assume it's <code>~/apps</code>.</p>
<p>Let's say your project is &quot;Example&quot;. Create your project folder within <code>~/apps</code>:</p>
<pre><code class="language-sh">mkdir -p ~/apps/example
</code></pre>
<p>Within it, recreate the structure we commonly use that relies on symlinking, e.g. <code>data</code> and <code>repo</code> folders, and a <code>production</code> folder within <code>repo</code> - and any other environments you might want to deploy.</p>
<p>Add the server's public key into the project repo's access keys and clone it with <code>clone-project</code>.</p>
<h3 id="dockerbasedprojectfolderstructure">Docker-based project folder structure</h3>
<p>If we're doing a Docker-based setup behind an NGINX proxy, we don't need the repo folder and the symlink to a specific checkout - we just want to clone the project into e.g. <code>~/apps/example/repo</code>. Our usual workflow with this is not to run the project from the host system's filesystem, but rather to build a fully-contained Docker image and run that instead - optionally mounting volumes from the host filesystem for some common files we might need to retain, such as logs or file uploads (though we would usually stream logs into an external service, and use a cloud-based file storage mechanism such as S3).</p>
<p>Project updates in that scenario would be done by just doing a <code>git pull</code> and triggering a rebuild/restart of the Docker images/containers - the latter of which would usually be performed by a script provided along with the project.</p>
<h2 id="apachepart2">Apache part 2</h2>
<h3 id="additionalmodules">Additional modules</h3>
<p>So now that we have everything ready, we need to configure the virtual host for the project.</p>
<p>Let's first enable a few modules we'll need:</p>
<pre><code class="language-sh">sudo a2enmod \
    headers \
    rewrite \
    ssl
</code></pre>
<p>You don't need the <code>ssl</code> module if your server is running behind a Load Balancer (which it almost certainly should) that performs SSL termination.</p>
<p>Restart the server:</p>
<pre><code class="language-sh">sudo service apache2 restart
</code></pre>
<h3 id="directorypermissions">Directory permissions</h3>
<p>Configure the directory permissions:</p>
<pre><code class="language-sh">sudo vim /etc/apache2/apache2.conf
</code></pre>
<p>Find the <code>&lt;Directory /var/www/&gt;</code> entry and below that block add a new one:</p>
<pre><code class="language-apache">&lt;Directory /home/ubuntu/apps&gt;
    AllowOverride all
    Require all granted
&lt;/Directory&gt;
</code></pre>
<p>If deploying the websites from a different directory, specify that one instead.</p>
<h3 id="loggingbehindanawsloadbalancer">Logging behind an AWS load balancer</h3>
<p>If running the server behind a load balancer, by default Apache will log the load balancer's IP address, which will be fairly useless when reviewing log files. To override this, you want to change the default <code>LogFormat</code> to include the header containing the actual client IP address. In an AWS environment, you need to edit the <code>/etc/apache2/apache2.conf</code> file and find these lines:</p>
<pre><code class="language-apache">LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %O \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot;&quot; combined
LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %O&quot; common
</code></pre>
<p>and replace them with:</p>
<pre><code class="language-apache">LogFormat &quot;%{X-Forwarded-For}i %h %l %u %t \&quot;%r\&quot; %&gt;s %b \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot;&quot; combined
LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b&quot; common
</code></pre>
<p>This basically prepends the value of the <code>X-Forwarded-For</code> header, which is the one used by AWS load balancers, to each log entry, and also replaces <code>%O</code> (the total bytes sent, including headers) with <code>%b</code> (the total bytes sent, excluding headers).</p>
<p>This assumes you will use either the <code>combined</code> or <code>common</code> log formats when configuring a specific Virtual Host's <code>CustomLog</code> configuration.</p>
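<p>For reference, a minimal virtual host using the <code>combined</code> format might look like this - the domain and paths are placeholders:</p>
<pre><code class="language-apache">&lt;VirtualHost *:80&gt;
    ServerName example.com
    DocumentRoot /home/ubuntu/apps/example/production/public

    ErrorLog ${APACHE_LOG_DIR}/example-error.log
    # Uses the (modified) combined LogFormat from above.
    CustomLog ${APACHE_LOG_DIR}/example-access.log combined
&lt;/VirtualHost&gt;
</code></pre>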
<p>More information about configuring Apache's <code>LogFormat</code>s can be found here:</p>
<ul>
<li><a href="http://httpd.apache.org/docs/current/mod/mod_log_config.html">http://httpd.apache.org/docs/current/mod/mod_log_config.html</a></li>
</ul>
<p>More information about configuring Apache's <code>LogFormat</code> to correctly handle operating behind an AWS load balancer can be found here:</p>
<ul>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/elb-capture-client-ip-addresses/">https://aws.amazon.com/premiumsupport/knowledge-center/elb-capture-client-ip-addresses/</a></li>
</ul>
<p>Note that this is potentially dangerous in non-AWS setups, as it depends on the exact header which the load balancer uses to pass the client's real IP address, and this may not be the same in other setups.</p>
<p>After updating the configuration, you need to reload Apache:</p>
<pre><code class="language-sh">sudo service apache2 reload
</code></pre>
<h3 id="defaultvirtualhosts">Default virtual hosts</h3>
<p>Edit the default virtual host to return a 404 instead of the Apache welcome page:</p>
<pre><code class="language-sh">sudo vim /etc/apache2/sites-available/000-default.conf
</code></pre>
<p>The contents should be something like:</p>
<pre><code class="language-apache">&lt;VirtualHost *:80&gt;
    RedirectMatch 204 /healthcheck
    Redirect 404 /
&lt;/VirtualHost&gt;
</code></pre>
<p>Yes, these are the only two directives you want: the <code>RedirectMatch 204 /healthcheck</code> responds with a <code>204 No Content</code> status for requests to <code>/healthcheck</code>, and the <code>Redirect 404 /</code> returns a <code>404 Not Found</code> for everything else. Comment out the rest if you're worried about losing the original/default configuration.</p>
<p>You don't even need the healthcheck directive if you're not using some kind of a load balancer that needs to be able to check if the server is up and running.</p>
<p>This is needed so that requests to the server that don't match any virtual host defined later (e.g. the one for <code>example.com</code> you're about to set up) fall back to the first virtual host defined - which will usually be <code>000-default</code> - and you don't want these requests to return anything except a 404 page. This situation can occur when someone accesses the server's IP address directly, or points their own domain to the server's IP address - since we don't want to respond to hosts other than those we explicitly define as virtual hosts, a 404 makes sense for those requests.</p>
<p>You might require this if either your server is directly serving internet traffic - ie. not behind a load balancer - or is behind a load balancer configured to proxy all requests to the server (as opposed to explicitly configuring the hostnames you want to match and only proxying those requests - which is generally a much better idea anyway).</p>
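<p>Once you've reloaded Apache, you can sanity-check this setup with <code>curl</code> from another machine (<code>server-ip</code> here is a placeholder for your instance's address):</p>
<pre><code class="language-sh"># Should return an empty &quot;204 No Content&quot; response.
curl -i http://server-ip/healthcheck

# Any other path should return a &quot;404 Not Found&quot; response.
curl -i http://server-ip/whatever
</code></pre>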
<p>To do the same for the default SSL website (in case you're actually serving SSL traffic from this server; if your load balancer is doing SSL termination, just skip this), rename the <code>default-ssl.conf</code> file:</p>
<pre><code class="language-sh">sudo mv /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/000-default-ssl.conf
</code></pre>
<p>Now edit it:</p>
<pre><code class="language-sh">sudo vim /etc/apache2/sites-available/000-default-ssl.conf
</code></pre>
<p>The contents should look mostly like:</p>
<pre><code class="language-apache">&lt;IfModule mod_ssl.c&gt;
    &lt;VirtualHost _default_:443&gt;
        #ServerAdmin webmaster@localhost

        #DocumentRoot /var/www/html

        # Nothing to see here.
        Redirect 404 /

        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn

        #ErrorLog ${APACHE_LOG_DIR}/error.log
        #CustomLog ${APACHE_LOG_DIR}/access.log combined

        # For most configuration files from conf-available/, which are
        # enabled or disabled at a global level, it is possible to
        # include a line for only one particular virtual host. For example the
        # following line enables the CGI configuration for this host only
        # after it has been globally disabled with &quot;a2disconf&quot;.
        #Include conf-available/serve-cgi-bin.conf

        #   SSL Engine Switch:
        #   Enable/Disable SSL for this virtual host.
        SSLEngine on

        #   A self-signed (snakeoil) certificate can be created by installing
        #   the ssl-cert package. See
        #   /usr/share/doc/apache2/README.Debian.gz for more info.
        #   If both key and certificate are stored in the same file, only the
        #   SSLCertificateFile directive is needed.
        SSLCertificateFile  /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

        #   Server Certificate Chain:
        #   Point SSLCertificateChainFile at a file containing the
        #   concatenation of PEM encoded CA certificates which form the
        #   certificate chain for the server certificate. Alternatively
        #   the referenced file can be the same as SSLCertificateFile
        #   when the CA certificates are directly appended to the server
        #   certificate for convinience.
        #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

        #   Certificate Authority (CA):
        #   Set the CA certificate verification path where to find CA
        #   certificates for client authentication or alternatively one
        #   huge file containing all of them (file must be PEM encoded)
        #   Note: Inside SSLCACertificatePath you need hash symlinks
        #        to point to the certificate files. Use the provided
        #        Makefile to update the hash symlinks after changes.
        #SSLCACertificatePath /etc/ssl/certs/
        #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

        #   Certificate Revocation Lists (CRL):
        #   Set the CA revocation path where to find CA CRLs for client
        #   authentication or alternatively one huge file containing all
        #   of them (file must be PEM encoded)
        #   Note: Inside SSLCARevocationPath you need hash symlinks
        #        to point to the certificate files. Use the provided
        #        Makefile to update the hash symlinks after changes.
        #SSLCARevocationPath /etc/apache2/ssl.crl/
        #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

        #   Client Authentication (Type):
        #   Client certificate verification type and depth.  Types are
        #   none, optional, require and optional_no_ca.  Depth is a
        #   number which specifies how deeply to verify the certificate
        #   issuer chain before deciding the certificate is not valid.
        #SSLVerifyClient require
        #SSLVerifyDepth  10

        #   SSL Engine Options:
        #   Set various options for the SSL engine.
        #   o FakeBasicAuth:
        #    Translate the client X.509 into a Basic Authorisation.  This means that
        #    the standard Auth/DBMAuth methods can be used for access control.  The
        #    user name is the `one line' version of the client's X.509 certificate.
        #    Note that no password is obtained from the user. Every entry in the user
        #    file needs this password: `xxj31ZMTZzkVA'.
        #   o ExportCertData:
        #    This exports two additional environment variables: SSL_CLIENT_CERT and
        #    SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
        #    server (always existing) and the client (only existing when client
        #    authentication is used). This can be used to import the certificates
        #    into CGI scripts.
        #   o StdEnvVars:
        #    This exports the standard SSL/TLS related `SSL_*' environment variables.
        #    Per default this exportation is switched off for performance reasons,
        #    because the extraction step is an expensive operation and is usually
        #    useless for serving static content. So one usually enables the
        #    exportation for CGI and SSI requests only.
        #   o OptRenegotiate:
        #    This enables optimized SSL connection renegotiation handling when SSL
        #    directives are used in per-directory context.
        #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
        &lt;FilesMatch &quot;\.(cgi|shtml|phtml|php)$&quot;&gt;
                SSLOptions +StdEnvVars
        &lt;/FilesMatch&gt;
        &lt;Directory /usr/lib/cgi-bin&gt;
                SSLOptions +StdEnvVars
        &lt;/Directory&gt;

        #   SSL Protocol Adjustments:
        #   The safe and default but still SSL/TLS standard compliant shutdown
        #   approach is that mod_ssl sends the close notify alert but doesn't wait for
        #   the close notify alert from client. When you need a different shutdown
        #   approach you can use one of the following variables:
        #   o ssl-unclean-shutdown:
        #    This forces an unclean shutdown when the connection is closed, i.e. no
        #    SSL close notify alert is send or allowed to received.  This violates
        #    the SSL/TLS standard but is needed for some brain-dead browsers. Use
        #    this when you receive I/O errors because of the standard approach where
        #    mod_ssl sends the close notify alert.
        #   o ssl-accurate-shutdown:
        #    This forces an accurate shutdown when the connection is closed, i.e. a
        #    SSL close notify alert is send and mod_ssl waits for the close notify
        #    alert of the client. This is 100% SSL/TLS standard compliant, but in
        #    practice often causes hanging connections with brain-dead browsers. Use
        #    this only for browsers where you know that their SSL implementation
        #    works correctly.
        #   Notice: Most problems of broken clients are also related to the HTTP
        #   keep-alive facility, so you usually additionally want to disable
        #   keep-alive for those clients, too. Use variable &quot;nokeepalive&quot; for this.
        #   Similarly, one has to force some clients to use HTTP/1.0 to workaround
        #   their broken HTTP/1.1 implementation. Use variables &quot;downgrade-1.0&quot; and
        #   &quot;force-response-1.0&quot; for this.
        # BrowserMatch &quot;MSIE [2-6]&quot; \
        #       nokeepalive ssl-unclean-shutdown \
        #       downgrade-1.0 force-response-1.0

    &lt;/VirtualHost&gt;
&lt;/IfModule&gt;
</code></pre>
<p>Compared to the original file, we commented out the <code>ServerAdmin</code>, <code>DocumentRoot</code>, <code>ErrorLog</code>, and <code>CustomLog</code> directives and added a <code>Redirect 404 /</code> directive.</p>
<p>We should of course enable that site and reload the Apache configuration:</p>
<pre><code class="language-sh">sudo a2ensite 000-default-ssl
sudo service apache2 reload
</code></pre>
<h2 id="projectspecificvirtualhosts">Project-specific virtual hosts</h2>
<p>Now create a new virtual host for the project:</p>
<pre><code class="language-sh">sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf
sudo vim /etc/apache2/sites-available/example.com.conf
</code></pre>
<p>The file oughta look something like this:</p>
<pre><code class="language-apache">&lt;VirtualHost *:80&gt;
    ServerName example.com
    ServerAlias www.example.com

    ServerAdmin admin@example.com
    DocumentRoot /home/ubuntu/apps/example/production/public

    ErrorLog ${APACHE_LOG_DIR}/example.com.error.log
    CustomLog ${APACHE_LOG_DIR}/example.com.access.log combined
&lt;/VirtualHost&gt;
</code></pre>
<p>Now enable it:</p>
<pre><code class="language-sh">sudo a2ensite example.com
</code></pre>
<p>And reload Apache:</p>
<pre><code class="language-sh">sudo service apache2 reload
</code></pre>
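<p>Before relying on DNS, it's worth confirming that Apache actually picked up the new virtual host - <code>apache2ctl</code> can both validate the configuration and dump the parsed virtual hosts:</p>
<pre><code class="language-sh"># Check the configuration for syntax errors.
sudo apache2ctl configtest

# List the parsed virtual hosts - example.com should show up here, with
# 000-default acting as the default for port 80.
sudo apache2ctl -S
</code></pre>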
<h3 id="additionalpermissionconfiguration">Additional permission configuration</h3>
<p>Now here's the deal - Apache (usually, by default) runs as the <code>www-data</code> user, which doesn't have permission to access <code>ubuntu</code>'s home folder. To fix this you need to grant the correct permissions. However, don't do something crazy and irresponsible like <code>chmod 777 all-the-things</code> - instead, use <code>setfacl</code> (instructions mostly courtesy of WebFaction), while logged in as the <code>ubuntu</code> user:</p>
<pre><code class="language-sh"># Allow www-data to access ubuntu's home folder
setfacl -m u:www-data:--x $HOME

# Grant read/write/execute access to the apps folder
setfacl -R -m u:www-data:rwx $HOME/apps

# Grant default read/write/execute access for any future files/folders here
setfacl -R -m d:u:www-data:rwx $HOME/apps

# Set the setgid bit so any future files/folders inherit this folder's group
chmod g+s $HOME/apps

# Grant ubuntu full access to any future files/folders
setfacl -R -m d:u:ubuntu:rwx $HOME/apps
</code></pre>
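<p>To verify the resulting ACLs, run <code>getfacl</code> - assuming the commands above succeeded, the output should include <code>www-data</code> in both the effective and the default entries:</p>
<pre><code class="language-sh">getfacl $HOME/apps

# The output should contain lines like:
#
#     user:www-data:rwx
#     default:user:www-data:rwx
</code></pre>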
<p>You should now be able to open your website assuming the DNS records are configured correctly (if not, edit your local <code>/etc/hosts</code> file so you can try it out). You'll probably get a styled 500 error from the application but you can go ahead and view the log file in order to debug the app - you probably need to configure your application's <code>.env</code> file correctly, maybe migrate the database etc.</p>
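<p>If you're going the <code>/etc/hosts</code> route, the entry on your local machine would look something like this (<code>203.0.113.10</code> is a placeholder - substitute your server's public IP address):</p>
<pre><code class="language-plaintext">203.0.113.10 example.com www.example.com
</code></pre>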
<p>Of course, if the server runs behind a load balancer, you'll want to configure the load balancer's proxying configuration accordingly.</p>
<h2 id="nginx">NGINX</h2>
<p>Another common setup we've used in our team is to run Docker containers through an NGINX reverse proxy - either directly from the instance or through an AWS ALB.</p>
<p>The first step is installing NGINX:</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get install nginx
</code></pre>
<p>Then we want to adjust the firewall, which is similar to how we would do it for Apache. First run:</p>
<pre><code class="language-sh">sudo ufw app list
</code></pre>
<p>This should show a few options, including &quot;Nginx Full&quot;, &quot;Nginx HTTP&quot;, and &quot;Nginx HTTPS&quot;. If the instance is directly serving public traffic, we want to enable &quot;Nginx Full&quot;. If it's behind an AWS ALB, we would usually do SSL termination on the load balancer and plain HTTP between the load balancer and the instance, in which case &quot;Nginx HTTP&quot; is enough. Either way, we want to allow access through the firewall, which would be accomplished, e.g. in the former case, by running:</p>
<pre><code class="language-sh">sudo ufw allow in &quot;Nginx Full&quot;
</code></pre>
<p>After that you can run the following to check the configuration:</p>
<pre><code class="language-sh">sudo ufw status
</code></pre>
<blockquote>
<p><em>NOTE:</em> The official Ubuntu AMIs available via AWS usually have the firewall disabled completely - which is probably fine, as we'll have a sensible networking/security group configuration anyway.</p>
</blockquote>
<p>You can check that Nginx is running properly:</p>
<pre><code class="language-sh">systemctl status nginx
</code></pre>
<h3 id="virtualhostsetup">Virtual host setup</h3>
<p>We won't be providing instructions on running projects directly via NGINX here, only a reverse proxy configuration.</p>
<p>In short, you want to have an <code>/etc/nginx/includes/proxy.conf</code> file with something like the following. (At this point the file most likely doesn't exist yet, nor does its parent <code>includes</code> folder - just create them. If the file does exist, pick a different name and reference that in the server configuration files documented further below.)</p>
<pre><code class="language-nginx">proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_bypass $http_upgrade;
</code></pre>
<p>This particular configuration is applicable when running the server behind a load balancer, due to the included <code>X-Forwarded-For</code> header configuration (which might be a vulnerability in other setups, e.g. when NGINX is directly serving internet traffic).</p>
<p>Backup the default website configuration <code>/etc/nginx/sites-available/default</code> and change it so it looks like this:</p>
<pre><code class="language-nginx">server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    # Respond with a 204 on this endpoint, for AWS target group
    # health checks.
    location /healthcheck {
        return 204;
    }

    location / {
        return 404;
    }
}
</code></pre>
<p>Similar to the previously documented default Apache configuration, this just always returns 404s by default (ie. if a request doesn't specify a hostname for which we have a specific configuration block), except for a <code>204</code> response on the <code>/healthcheck</code> URL that can be used for AWS target group health checks - feel free to remove this if you don't need it.</p>
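<p>As with any NGINX configuration change, validate the syntax before reloading:</p>
<pre><code class="language-sh">sudo nginx -t
sudo service nginx reload
</code></pre>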
<p>Finally, for a specific project such as <code>example-website</code> - served at <code>example.com</code> and <code>www.example.com</code>, and handled by a Docker container running on the local system on port <code>8080</code> - you want this configuration:</p>
<pre><code class="language-nginx">upstream example-website {
    server localhost:8080;
}

server {
    listen 80;
    server_name www.example.com example.com;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://example-website;
    }

    access_log /var/log/nginx/example-website.access.log combined;
    error_log /var/log/nginx/example-website.error.log error;
}
</code></pre>
<p>Adapt to your local use-case, sudo-symlink it into the <code>/etc/nginx/sites-enabled/</code> folder, and reload NGINX:</p>
<pre><code class="language-sh">sudo service nginx reload
</code></pre>
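<p>If DNS isn't configured yet, you can still test the proxy from the instance itself by sending the expected <code>Host</code> header manually - this request should reach the Docker container instead of the default 404 server block:</p>
<pre><code class="language-sh">curl -i -H &quot;Host: example.com&quot; http://localhost/
</code></pre>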
<p>You're done.</p>
<h2 id="docker">Docker</h2>
<p>If you need Docker, it's best to follow the official instructions found here: <a href="https://docs.docker.com/engine/install/ubuntu/">https://docs.docker.com/engine/install/ubuntu/</a> (in case this link changes in the future, it shouldn't be hard to just find the new URL). In short, these are the steps:</p>
<pre><code class="language-sh"># Ensure no older Docker versions are on the system - this shouldn't be
# necessary in most cases on a fresh system, but it really depends on where your
# base system image comes from and how it was installed - if you're not sure,
# just run this.
sudo apt-get remove docker docker-engine docker.io containerd runc

# Install dependencies required for installation.
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Import Docker's official GPG key - this is potentially dangerous so best to
# check with the official installation guide if this is still the correct way to
# do this.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Make sure the key's fingerprint is:
#
#     9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
#
# This is the correct fingerprint at the time this tutorial was written, but
# again, best to check the official guide for this one as well.
sudo apt-key fingerprint 0EBFCD88

# Add the stable repository.
sudo add-apt-repository \
   &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable&quot;

# Install Docker Engine.
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Make sure it works. This should download an example image and run it, which
# should result in sensible output.
sudo docker run hello-world

# Check if running Docker as a non-root user works - it shouldn't.
docker run hello-world

# You want it to work, so you also want to run the following commands (the first
# one will probably report that the `docker` group already exists - that's fine
# and you should run the second command anyway):
sudo groupadd docker
sudo usermod -aG docker $USER

#
# Logout of the system and log back in for the group changes to take effect.
#

# This should work now:
docker run hello-world

# We almost certainly want Docker Compose too, which can be installed like this:
sudo apt-get install docker-compose

# Make sure Docker will start on boot:
sudo systemctl enable docker
</code></pre>
<p>You now have a working Docker Engine on your system.</p>
<h2 id="letsencrypt">Let's Encrypt</h2>
<p>If you're not behind a load balancer (which I strongly suggest you should be within an AWS setup), and you want to set up Let's Encrypt for certificates, that's easy-peasy!</p>
<p>The best choice is to follow the official instructions for configuring Certbot, which are available <a href="https://certbot.eff.org/lets-encrypt/ubuntufocal-apache">here</a> for the setup we're working with. In short, however, we need to perform the steps outlined below.</p>
<p>Install Certbot:</p>
<pre><code class="language-sh"># Ensure snapd is up-to-date.
sudo snap install core
sudo snap refresh core

# Install Certbot
sudo snap install --classic certbot

# Link Certbot into /usr/bin (alternatively, add /snap/bin to $PATH, but it's a
# better idea to explicitly symlink programs as needed).
sudo ln -s /snap/bin/certbot /usr/bin/certbot
</code></pre>
<p>After that, if using Apache, run:</p>
<pre><code class="language-sh">sudo certbot --apache
</code></pre>
<p>If using NGINX, you should run this instead:</p>
<pre><code class="language-sh">sudo certbot --nginx
</code></pre>
<p>This will guide you through step-by-step instructions for configuring an SSL certificate for one of the available websites. That's it!</p>
<p>Automatic renewals are performed by a Cron job that runs:</p>
<pre><code class="language-sh">sudo certbot renew
</code></pre>
<p>This needs to run as root though, and isn't trivial to set up manually. Luckily, this gets configured automatically during Certbot installation - on Ubuntu 20.04.1 LTS, you should be able to find the Cron configuration in the following file (snap-based installs may instead schedule renewals via a systemd timer, which you can check with <code>systemctl list-timers</code>):</p>
<pre><code class="language-sh">cat /etc/cron.d/certbot
</code></pre>
<p>You don't need to do anything here, it's just to confirm that renewals will be automated.</p>
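<p>If you want extra reassurance, Certbot also supports a dry run, which exercises the whole renewal process against Let's Encrypt's staging environment without saving any certificates:</p>
<pre><code class="language-sh">sudo certbot renew --dry-run
</code></pre>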
<h2 id="additionalusers">Additional users</h2>
<p>If you need to add a new user <code>username</code> with passwordless <code>sudo</code> permissions, you should generally follow the steps outlined below.</p>
<p>First, create the new user:</p>
<pre><code class="language-sh">sudo adduser username
</code></pre>
<p>When prompted for the password, enter any random password - we'll lock it later so it won't be usable anyway unless unlocked, which only a user with <code>sudo</code> permissions can do - and if someone already has that, there are worse things they can do anyway.</p>
<p>You'll be prompted for the new user's information, which you most likely want to just leave blank. After that, lock the user's password so it cannot be used:</p>
<pre><code class="language-sh">sudo passwd -l username
</code></pre>
<p>Now, add the user to the sudo group:</p>
<pre><code class="language-sh">sudo usermod -aG sudo username
</code></pre>
<p>Next, in order to enable passwordless <code>sudo</code>, open the <code>/etc/sudoers</code> file using the special <code>visudo</code> program which validates the syntax and makes sure stuff will work after making changes - unlike if you were to use a regular text editor such as <code>vim</code> or <code>nano</code>:</p>
<pre><code class="language-sh">sudo visudo
</code></pre>
<p>At the <strong>end</strong> of the file, add the following line:</p>
<pre><code class="language-plaintext">username ALL=(ALL) NOPASSWD:ALL
</code></pre>
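<p>You can verify the result without switching accounts - <code>sudo -l</code> accepts a <code>-U</code> option for listing another user's privileges, and the output should include the <code>NOPASSWD: ALL</code> entry you just added:</p>
<pre><code class="language-sh">sudo -l -U username
</code></pre>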
<p>Finally, in order for this user to be able to log in via SSH, you want to edit their <code>authorized_keys</code> file and add a public SSH key to it.</p>
<p>The easiest way to do this is to switch into the new user's account and set everything up:</p>
<pre><code class="language-sh">sudo su - username
</code></pre>
<p>Next, create the necessary path and file:</p>
<pre><code class="language-sh"># Create the `.ssh` path in `$HOME` if it doesn't already exist, and assign the
# correct permissions to it.
mkdir --parents --mode=700 &quot;$HOME/.ssh&quot;

# Create an `authorized_keys` file.
touch &quot;$HOME/.ssh/authorized_keys&quot;

# Assign the correct permissions to it.
chmod 600 &quot;$HOME/.ssh/authorized_keys&quot;
</code></pre>
<p>Finally, edit the file with your favorite editor and add your public key of choice, to enable the user to log in via SSH.</p>
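<p>Each entry in <code>authorized_keys</code> is simply one public key per line - for example (this is a truncated placeholder, paste the actual contents of e.g. your local <code>~/.ssh/id_ed25519.pub</code>):</p>
<pre><code class="language-plaintext">ssh-ed25519 AAAA...rest-of-the-key... you@your-machine
</code></pre>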
<p>Don't forget to do</p>
<pre><code class="language-sh">exit
</code></pre>
<p>after you're done, in order to switch back to your own user account.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Xdebug 3 + Docker + VS Code setup guide on Ubuntu]]></title><description><![CDATA[<p>So here's my second annual (as in once-per-year) blog post. I hoped this would've happened more often, but oh well.</p><p>After many years of Sublime Text usage, I recently switched to VS Code, as the Sublime Text ecosystem for PHP development seems to be somewhat less active lately, based on</p>]]></description><link>https://blog.levacic.net/2020/12/19/xdebug-3-docker-vs-code-setup-guide-on-ubuntu/</link><guid isPermaLink="false">5fdd3a887c82ae4336695668</guid><category><![CDATA[php]]></category><category><![CDATA[xdebug]]></category><category><![CDATA[vscode]]></category><category><![CDATA[laravel]]></category><dc:creator><![CDATA[Milos Levacic]]></dc:creator><pubDate>Sat, 19 Dec 2020 00:42:01 GMT</pubDate><content:encoded><![CDATA[<p>So here's my second annual (as in once-per-year) blog post. I hoped this would've happened more often, but oh well.</p><p>After many years of Sublime Text usage, I recently switched to VS Code, as the Sublime Text ecosystem for PHP development seems to be somewhat less active lately, based on my personal experience.</p><p>I was also recently starting work on a new Laravel project and decided to setup Xdebug so I could, well, debug. Most of the online guides I've found while hoping for a quick copy and paste configuration didn't end up working, and were in fact aimed at Xdebug 2 - whereas the new Xdebug 3 version changed some of the configuration setting keys. I spent a couple of hours getting everything to work nicely, and encountered a weird (but in hindsight, sensible) issue.</p><p>"Running in Docker" is not very specific, so I'll start by explaining my project setup. 
It's a Laravel app with a <code>docker-compose.yml</code> file in the project root, which looks a bit like this (documentation, unrelated services, and unrelated configuration settings removed for readability):</p><pre><code class="language-yml">version: '3'

services:

  web:
    container_name: project--web
    build:
      context: ./docker/web
      args:
        HOST_UID: ${DOCKER_HOST_UID}
        HOST_GID: ${DOCKER_HOST_GID}
    restart: unless-stopped
    ports:
      - "${DOCKER_WEB_PORT}:80"
    volumes:
      - ./:/opt
    extra_hosts:
      - "host.docker.internal:host-gateway"

  php:
    container_name: project--php
    build:
      context: ./docker/php
      args:
        HOST_UID: ${DOCKER_HOST_UID}
        HOST_GID: ${DOCKER_HOST_GID}
    volumes:
      - ./:/opt
    extra_hosts:
      - "host.docker.internal:host-gateway"
    tty: true
</code></pre><p>The <code>HOST_UID</code> and <code>HOST_GID</code> are configured to be used by Apache in the <code>web</code> image and PHP in the <code>php</code> image so as not to mess up file permissions and ownership on the host machine.</p><p>For those curious, I also have the following services as a starting point for most Laravel projects:</p><ul><li>a MySQL service as the app's database</li><li>a queue worker service (based on the same Docker image as the <code>php</code> service) for running <code>artisan queue:work</code></li><li>a Node service for Node-related stuff (Yarn dependencies and the front-end build process)</li><li>Mailhog for local email testing</li><li>Beanstalk as a messaging queue</li><li>Beanstalk Aurora as a Beanstalk UI</li></ul><p>Both the <code>web</code> and <code>php</code> services run a similar Docker image in terms of PHP - with the difference being that <code>web</code> also includes Apache through which the web application is served. I also have a local Nginx setup for proxying requests to various projects so that I can use <code>https://project.localhost</code>, <code>https://mailhog.project.localhost</code> etc. instead of having to remember the per-project ports for individual exposed services. For SSL in local development I use <a href="https://github.com/FiloSottile/mkcert">mkcert</a>, though I guess none of this is too relevant for today's topic of Xdebug, and I should just make a separate blog post documenting that whole setup if anyone's interested.</p><p>Importantly though, the Docker images for running the web-app and CLI commands are based on <code>ubuntu:bionic</code> - I've been wanting to switch to the <code>php</code> images but just haven't gotten around to it yet. 
The <code>Dockerfile</code>s for the <code>web</code> and <code>php</code> images install PHP 7.4 from <code>ppa:ondrej/php</code>, along with a bunch of extensions including <code>php7.4-xdebug</code> - which, as of recently, ends up with Xdebug 3.0.1 installed in the image.</p><p>This whole guide should work just as well for PHP 8.0 as well as for older versions, though you probably shouldn't be using older versions anyway.</p><p>If you have a very specific setup that's similar to mine, you're gonna want to do the following:</p><h2 id="variables-order">Variables order</h2><p>In my Docker images, PHP's default variables order was set to <code>GPCS</code> - which is not good because Xdebug looks for an environment variable in <code>$_ENV</code> when attempting to detect a trigger from a CLI command. Thus, my <code>Dockerfile</code> copies a file with the following contents to <code>/etc/php/7.4/cli/conf.d/99-variables-order.ini</code>:</p><pre><code class="language-ini">; By default this is GPCS - we also want E so that $_ENV would be populated, and
; we could trigger Xdebug using an environment variable on the command line.
variables_order = "EGPCS"
</code></pre><p>This was giving me trouble and actually took me more time than everything else here to figure out why Xdebug wasn't working for me when attempting to trigger it from the command line.</p><h2 id="xdebug-configuration">Xdebug configuration</h2><p>The <code>Dockerfile</code> also copies a file with the following contents to <code>/etc/php/7.4/cli/conf.d/99-xdebug.ini</code> (and <code>/etc/php/7.4/apache2/conf.d/99-xdebug.ini</code> in the <code>web</code> image):</p><pre><code class="language-ini">xdebug.mode=debug
xdebug.client_host=host.docker.internal</code></pre><p>The first line configures step debugging, and also affects how some other Xdebug configuration works by default (notably the default value of the <code>xdebug.start_with_request</code> setting).</p><p>The second line tells Xdebug which address to use to connect to the IDE - which is running on the host machine, and <code>host.docker.internal</code> is a special hostname which resolves to the host machine's IP address.</p><p>Note that my <code>Dockerfile</code> configuration which installs <code>php7.4</code> and (among others) <code>php7.4-xdebug</code> from <code>ppa:ondrej/php</code> using <code>apt</code> will automatically enable the extension as well, so I don't need to explicitly do that. If you do, you'll want to also add <code>zend_extension=/path/to/xdebug.so</code> in this file.</p><h2 id="host-docker-internal">host.docker.internal</h2><p>As a sidenote, this wasn't supported on Linux at all for a long time until <a href="https://github.com/moby/moby/pull/40007">this PR</a> was merged - and with it, Linux users still have to explicitly map <code>host.docker.internal</code> to the magic <code>host-gateway</code> value, which Docker then resolves to the host machine's IP address when starting the container and saves it in the container's <code>/etc/hosts</code> file. That's why the <code>extra_hosts</code> setting is needed in the <code>docker-compose.yml</code> file - and hopefully this shouldn't cause any issues in non-Linux (ie. 
Mac or Windows) Docker environments, though I haven't been personally able to test it yet.</p><h2 id="vs-code">VS Code</h2><p>In VS Code you want to install the <code>felixfbecker.php-debug</code> extension (<a href="https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-debug">VS Code Marketplace</a>, <a href="https://github.com/felixfbecker/vscode-php-debug">GitHub</a>):</p><pre><code>ext install felixfbecker.php-debug</code></pre><p>You can follow the <a href="https://github.com/felixfbecker/vscode-php-debug#vs-code-configuration">VS Code configuration</a> section of that extension's installation instructions, but what worked for my setup was to add the following into my project's <code>project.code-workspace</code> file:</p><pre><code class="language-json">{
    // other stuff
    "launch": {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Listen for XDebug",
                "type": "php",
                "request": "launch",
                "port": 9003,
                "pathMappings": {
                    "/opt": "${workspaceFolder}"
                },
                "ignore": [
                    "**/vendor/**/*.php"
                ],
                "xdebugSettings": {
                    "max_children": 10000,
                    "max_data": 10000,
                    "show_hidden": 1
                }
            }
        ]
    }
}
</code></pre><p>The extension also suggests adding a "Launch currently open script" launch configuration, but I don't see how that would be useful in the context of a framework like Laravel, so I skipped it.</p><p>As for the other stuff, here's what the configuration settings mean:</p><ul><li><code>name</code> - just the name of the configuration which will be displayed in the debugger and the taskbar</li><li><code>type</code> - should be <code>php</code> to tell VS Code that this configuration should run debugging with the PHP Debug extension we just installed</li><li><code>request</code> - should be <code>launch</code> as that's appropriate for how we debug PHP with Xdebug</li><li><code>port</code> - should be <code>9003</code> for Xdebug 3, which is the new default port that Xdebug will connect to - in older versions, the default port was <code>9000</code>, but we want to minimize our configuration work, so best to stick with the defaults when we can</li><li><code>pathMappings</code> - this is important because Xdebug is running in the Docker container, where the app's files are under a different path; this is very configuration-specific so adjust it to your own setup, but my <code>Dockerfile</code>s use <code>/opt</code> as the <code>WORKDIR</code> and that's the volume I bind my project's directory to - hence my value for this configuration setting</li><li><code>ignore</code> - optional paths from which errors will be ignored</li><li><code>xdebugSettings</code> - consult the extension's documentation for more info, but I needed to set this higher than the default values because the defaults were too small for my use-case</li></ul><p>To expand on the last point, one issue I had with the low default <code>xdebugSettings</code> was that my debugger would show, for example, that <code>$_SERVER</code> is an <code>array()</code> with ~60ish items, but when I expanded that variable, it would always only show the first 32 items and nothing else.
Increasing these values fixed the problem.</p><p>Now that we're done configuring Xdebug and VS Code, how do we actually debug?</p><p>There are two contexts in which it makes sense to debug PHP code - browser requests and CLI commands. Both are explained in <a href="https://xdebug.org/docs/step_debug">Xdebug's documentation</a>, but here's a short summary.</p><h2 id="browser">Browser</h2><p>You want to install one (or more) of the following extensions, depending on which browser you're working with:</p><ul><li><a href="https://addons.mozilla.org/en-GB/firefox/addon/xdebug-helper-for-firefox/">Xdebug Helper for Firefox</a> (<a href="https://github.com/BrianGilbert/xdebug-helper-for-firefox">source</a>)</li><li><a href="https://chrome.google.com/extensions/detail/eadndfjplgieldjbigjakmdgkmoaaaoc">Xdebug Helper for Chrome</a> (<a href="https://github.com/mac-cain13/xdebug-helper-for-chrome">source</a>)</li><li><a href="https://apps.apple.com/app/safari-xdebug-toggle/id1437227804?mt=12">XDebugToggle for Safari</a> (<a href="https://github.com/kampfq/SafariXDebugToggle">source</a>)</li></ul><p>Once installed, you'll get an extension icon/button in your browser which enables you to select the Xdebug feature you want to trigger. I'm only interested in debugging, so when I want to do that, I'll click the icon and select the "debug" mode with the green bug icon.</p><p>Then I'll start a debugging session in VS Code by pressing F5 (which should be the default keybinding - if not, you can always open your "Run" panel in VS Code and click the little "play" icon next to the dropdown menu where "Listen for XDebug" should be selected).</p><p>I'll add a breakpoint where I need it (by simply clicking next to the line number), and refresh my browser page. 
What should happen is that Xdebug will connect to VS Code, which will in turn inform Xdebug about the breakpoint that was set - then, if the executing code reaches that line, it will pause, and you can step-debug in VS Code. Yay!</p><p>Note that depending on how the web server serving your PHP application is configured, you might get timeouts in your browser - but this article won't deal with that as it's already getting too long.</p><p>Don't forget to switch the browser extension's Xdebug mode back to "Disable" (gray bug icon) after you're done with it, to avoid invoking Xdebug on every request and slowing down performance.</p><h2 id="command-line">Command Line</h2><p>As with the browser-based flow, this again requires you to first start a debugging session in VS Code and set a breakpoint on a line of code.</p><p>To trigger Xdebug when running command-line applications (such as when unit testing or running an Artisan command), you need to configure a specific environment variable.</p><p>As I'm working with Docker Compose, I would usually run unit tests like this:</p><pre><code class="language-sh">docker-compose run --rm php artisan test</code></pre><p>The environment variable you need to set is <code>XDEBUG_SESSION</code> and with the Xdebug configuration described above, it can be set to any value, as long as it's set. 
Xdebug's documentation suggests <code>XDEBUG_SESSION=1</code> so we'll go with that:</p><pre><code class="language-sh">docker-compose run -e XDEBUG_SESSION=1 --rm php artisan test</code></pre><p>That's it - when the code reaches a line where you've set a breakpoint, it should pause execution and focus that line within VS Code, where you can proceed to step through the code and track what's going on.</p><h2 id="conclusion">Conclusion</h2><p>While this might seem like a long post compared to the actual amount of work that needs to be done to just configure Xdebug and VS Code and start debugging, I was attempting to describe my environment and setup in more detail so it can be more helpful to anyone running a similar setup - as well as trying to explain <em>why</em> all of the configuration is required in the first place, and what it does.</p><p>Hopefully this will work for you, but if it doesn't, feel free to let me know and I'll try to help out.</p><p>Stay healthy!</p>]]></content:encoded></item><item><title><![CDATA[Optional Laravel service providers]]></title><description><![CDATA[<p>Sometimes in development, you want to use some Composer packages that you don't want to have installed in live environments, such as <code>staging</code> or <code>production</code>. 
Usually, you would just install these as dev dependencies, with something like:</p><pre><code class="language-shell">composer require --dev vendor/package</code></pre><p>Such dependencies are then declared as "development-only", and,</p>]]></description><link>https://blog.levacic.net/2019/12/07/optional-laravel-service-providers/</link><guid isPermaLink="false">5deb0ac15eebda1dfec80584</guid><category><![CDATA[php]]></category><category><![CDATA[laravel]]></category><dc:creator><![CDATA[Milos Levacic]]></dc:creator><pubDate>Sat, 07 Dec 2019 03:19:19 GMT</pubDate><content:encoded><![CDATA[<p>Sometimes in development, you want to use some Composer packages that you don't want to have installed in live environments, such as <code>staging</code> or <code>production</code>. Usually, you would just install these as dev dependencies, with something like:</p><pre><code class="language-shell">composer require --dev vendor/package</code></pre><p>Such dependencies are then declared as "development-only", and, in general, you don't install those in any environments except locally.</p><p>And sometimes, these packages are Laravel-specific and include their own service providers so they can register themselves within the application and function properly. 
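</p><p>Under the hood, a package advertises its providers through the <code>extra.laravel</code> section of its own <code>composer.json</code> - for example, <code>barryvdh/laravel-debugbar</code> declares roughly the following:</p><pre><code class="language-json">"extra": {
    "laravel": {
        "providers": [
            "Barryvdh\\Debugbar\\ServiceProvider"
        ]
    }
}</code></pre><p>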
Since <a href="https://medium.com/@taylorotwell/package-auto-discovery-in-laravel-5-5-ea9e3ab20518">version 5.5</a>, Laravel has had a nifty feature called Package Auto-Discovery (please don't install dependencies with a <code>dev-master</code> version specified, as that link suggests - use stable versions only), enabling package developers to declare their service providers within their <code>composer.json</code> files, which Laravel will later automatically register when they are installed.</p><p>Laravel is, however, slightly opinionated about the order in which service providers load, and this logic can be found in <a href="https://github.com/laravel/framework/blob/483481519adca80b4f58617940c1b00fbf1696a0/src/Illuminate/Foundation/Application.php#L571-L587"><code>Illuminate\Foundation\Application::registerConfiguredProviders()</code></a>:</p><pre><code class="language-php">public function registerConfiguredProviders()
{
    $providers = Collection::make($this-&gt;config['app.providers'])
                    -&gt;partition(function ($provider) {
                        return Str::startsWith($provider, 'Illuminate\\');
                    });

    $providers-&gt;splice(1, 0, [$this-&gt;make(PackageManifest::class)-&gt;providers()]);

    (new ProviderRepository($this, new Filesystem, $this-&gt;getCachedServicesPath()))
                -&gt;load($providers-&gt;collapse()-&gt;toArray());
}</code></pre><p>What this does, basically, is read all of the providers declared in the <code>providers</code> section of the primary <code>config/app.php</code> configuration file, extract all providers starting with <code>Illuminate\</code> (i.e. all core framework providers) as the first group of providers to load, and then inject all auto-discovered packages' providers in-between, resulting in the following order in which providers get registered and booted:</p><ol><li><code>Illuminate\</code> service providers declared in <code>app.providers</code></li><li>Auto-discovered packages' service providers</li><li>Other service providers declared in <code>app.providers</code></li></ol><p>This might not be ideal, because it somewhat relies on magic behavior, and makes it less obvious which providers will ultimately get registered in your application. I've found that it's best to just disable auto-discovery completely, which you can achieve by adding a section like the following in your <code>composer.json</code> file:</p><pre><code class="language-json">"extra": {
    "laravel": {
        "dont-discover": [
            "*"
        ]
    }
},</code></pre><p>After that, you'll just explicitly register the service providers you need in <code>app.providers</code>, and make it less confusing for yourself and other developers working on the same codebase - otherwise, the only obvious ways to figure out which providers are registered are to manually inspect your dependencies and see which ones are developed with auto-discovery in mind, to inspect the cached packages file usually located in <code>bootstrap/cache/packages.php</code>, or to do something silly like editing the previously mentioned <code>registerConfiguredProviders()</code> method to temporarily dump the compiled list of providers to load - and none of those options are as easy as simply reading the list in <code>app.providers</code>.</p><p>This introduces a new problem - you need to figure out how to load some service providers only in your development environment, but not in others. One option I've found recommended online is to use a package such as <a href="https://github.com/percymamedy/laravel-dev-booter"><code>percymamedy/laravel-dev-booter</code></a>, which allows you to define additional service provider groups, and configure them to be loaded only in certain environments. Such an approach, however, suffers from the same issue we were trying to resolve in the first place - explicitly declaring the order in which service providers get loaded - as it will still load providers in groups.</p><p>The best solution I was able to find that solves the issue to my satisfaction is to create a kind of "proxy" provider, which checks whether the real service provider class is available, and registers it if so.</p><p>It's an easy solution, so here's an example featuring <a href="https://github.com/barryvdh/laravel-debugbar"><code>barryvdh/laravel-debugbar</code></a>, a package commonly used in development to help out with debugging:</p><pre><code class="language-php">&lt;?php

declare(strict_types=1);

namespace App\Debugbar;

use Barryvdh\Debugbar\ServiceProvider as DebugbarServiceProvider;
use Illuminate\Support\ServiceProvider as BaseServiceProvider;

class ServiceProvider extends BaseServiceProvider
{
    /**
     * @inheritDoc
     */
    public function register(): void
    {
        if (class_exists(DebugbarServiceProvider::class)) {
            $this-&gt;app-&gt;register(DebugbarServiceProvider::class);
        }
    }
}
</code></pre><p>Then, just add <code>App\Debugbar\ServiceProvider</code> to the <code>app.providers</code> list, and it will load the actual package's service provider only if it's installed, while still enabling us to always be explicit about all of the service providers we register, and the order in which they will be loaded.</p><p>You could also just add this same logic into the <code>App\Providers\AppServiceProvider</code>, which is provided by default in Laravel, but then once again you lose the clarity of having the service provider explicitly listed in <code>app.providers</code>, which was one of the original goals for me.</p><p>If you would like to generalize the above solution, one option is to do something like this:</p><pre><code class="language-php">&lt;?php

declare(strict_types=1);

namespace App\Providers;

use Illuminate\Support\ServiceProvider as BaseServiceProvider;

abstract class OptionalServiceProvider extends BaseServiceProvider
{
    /**
     * The class name of the optional service provider.
     *
     * @var string
     */
    protected $optionalProviderClassName;

    /**
     * @inheritDoc
     */
    public function register(): void
    {
        if (class_exists($this-&gt;optionalProviderClassName)) {
            $this-&gt;app-&gt;register($this-&gt;optionalProviderClassName);
        }
    }
}</code></pre><p>And then, for actual service providers (such as the one previously shown), you could do:</p><pre><code class="language-php">&lt;?php

declare(strict_types=1);

namespace App\Debugbar;

use App\Providers\OptionalServiceProvider;
use Barryvdh\Debugbar\ServiceProvider as DebugbarServiceProvider;

class ServiceProvider extends OptionalServiceProvider
{
    /**
     * @inheritDoc
     */
    protected $optionalProviderClassName = DebugbarServiceProvider::class;
}</code></pre><p>I also tinkered with the idea of being able to add something like <code>optional(SomePackage\ServiceProvider::class)</code> directly into <code>app.providers</code>, where the <code>optional()</code> function would be some helper that dynamically creates a service provider like the one above, though that one seems like a lot more work.</p><p>Ultimately, the overhead of figuring out a generalized solution to this problem just wasn't worth the effort for me, as I'll usually have no more than a few such "optional" providers in an application, where the non-generalized solution is just good enough as it is.</p>]]></content:encoded></item></channel></rss>