Posts
Suspend and Resume Processes in Linux
Managing processes in Linux is an essential skill for any user. This post covers how to suspend, resume, and manage jobs using simple commands.
A Nod to 'Golang: Testing Cobra CLI Applications with Dependency Injection'
Ahnii, fellow Gophers! Today, I want to bring your attention to a blog post that I stumbled upon while flailing about, trying to figure out how to test my Dependency Injected Cobra CLI app.
Setting Up a DevContainer in VSCode
Visual Studio Code (VSCode) has become one of the most popular code editors due to its extensive features and capabilities. One such feature is the ability to use DevContainers, which allows developers to define their development environment as code. This blog post will guide you through the process of setting up a DevContainer in VSCode.
Start Developing with Laravel in Ubuntu 20.04
First and foremost, I find Ubuntu the easiest Linux distribution to install and the best supported for learning Web Development. I’m sure that’s open for debate, but that’s what the comments are for.
Imposter Syndrome
Ahnii! Notice the exclamation point? The Ojibwe greeting for ‘Hello’?
Quickly view nodejs project 'scripts' on the cli
Ahnii! I previously wrote a command line utility named ‘packages’ which simply prints a list of project dependencies on the command line.
Quickly view project dependencies on the cli
Ahnee! I frequently find myself on the command line wanting to know which dependencies and devDependencies are in the package.json file.
Use DDEV to locally develop with Drupal
I’ve been developing with Drupal for over 10 years. It’s never been known to be quick and easy to install, but with the rise of containers it’s now as easy as executing a few commands in a terminal.
Whalebrew
Docker Images as ‘Native’ Commands
Ahnee! If you’re from the Mac World you’ve probably used, or at least heard of, Homebrew. For the uninformed, Homebrew is The missing package manager for macOS. Or more accurately, it’s a package management system for macOS comparable to Red Hat’s RPM, Debian’s APT, and Windows’ Chocolatey.
Package managers make installing software easy by automagically fetching a pre-compiled binary and its dependencies, then copying them into your $PATH.
Depending on the software, compiling from source code is often difficult and time-consuming. Package managers let you get on with using the software.
Installing With APT
I’ll demonstrate installing a package with APT in Ubuntu 18.10:
$ sudo apt install figlet
As you can see in the output, APT downloads the figlet package (figlet_2.2.5-3_amd64.deb), unpacks it, then finally installs it to /usr/bin/figlet.
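You can confirm where the binary landed with which; the path should match what APT reported:
$ which figlet
/usr/bin/figlet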
$ figlet "p4ck4g3's 4 l1fe\!"
I Whale Always Love You
Whalebrew is an inevitable side effect of container proliferation. Their ease of use, speed, and low resource consumption make them ideal vehicles for single command or function execution.
As I’ve previously written, containers can be started, perform a task, then stopped in a matter of milliseconds. And that’s exactly what Whalebrew allows you to do in the form of Docker images aliased in your $PATH.
Now let’s put a magnifying glass up to Whalebrew by walking through its installation and then “installing a package”.
Whalebrew Demonstration
By creating an alias for running a Docker container and storing it in $PATH, running a command within a container is seamless and virtually indistinguishable from running a command directly in the environment.
What does that look like exactly? Assuming you already have Docker installed, we’ll start by installing Whalebrew (from https://github.com/bfirsh/whalebrew):
$ sudo curl -L "https://github.com/bfirsh/whalebrew/releases/download/0.1.0/whalebrew-$(uname -s)-$(uname -m)" -o /usr/local/bin/whalebrew; sudo chmod +x /usr/local/bin/whalebrew
Now let’s install figlet again, but this time with Whalebrew:
$ sudo whalebrew install whalebrew/figlet
Now let’s run figlet again and adore the glorious results (We’ll use the full path in case the APT figlet is first in $PATH):
$ /usr/local/bin/figlet "It's a whale of a time\!"
Tada! We’ve just run figlet from within a container. You may have noticed it took a bit longer to execute, depending on your computer’s runtime juice.
So what just happened? Before we wrap it up we’ll take a quick look under the hood and examine the difference between running a native binary and a Whalebrew command.
Native vs. ‘Native’
Maazhichige, wrong ‘native’! The figlet program installed with APT is an ELF executable compiled from C source code, and it runs directly on your system.
The Whalebrew alias looks like this:
$ cat /usr/local/bin/figlet
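With Whalebrew 0.1.0, the installed “package” should be little more than a small text file along these lines: a shebang that hands execution off to whalebrew, plus the Docker image to run:
#!/usr/bin/env whalebrew
image: whalebrew/figlet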
When a package is executed, Whalebrew will run the specified image with Docker, mount the current working directory in /workdir, and pass through all of the arguments.
And this is essentially what Whalebrew executes:
$ docker run -it -v "$(pwd)":/workdir -w /workdir whalebrew/figlet "It's a whale of a time\!"
And well, that’s it, move along. Baamaapii.
Docker for Legacy Drupal Development
Leveraging Linux containers for Migrating Drupal 6 to Drupal 8
Ahnee. Let me start by saying this article/tutorial (artorial, tutarticle!?), this artorial could be titled “Docker for Development, Leveraging Linux containers” and be applied to virtually any stack you want.
I’m using Drupal because I recently began a Drupal 6 (D6) to Drupal 8 (D8) website migration.
Drupal is a free, open-source content management system (CMS) with a large, supportive community. It’s used by millions of people and organizations around the globe to build and maintain their websites.
Both versions run on a LAMP stack but with different versions of PHP. D6 reached its end-of-life in early 2016, almost a year before PHP 7 was released. Consequently it requires PHP 5.6 or lower to run.
The folks at myDropWizard.com are bravely supporting D6 until the cows come home, props to them! I have no affiliation with them, I’m just thunderstruck by their level of commitment.
According to the docs D8 will run on PHP 5.5.9+, but any version less than 7.1 is not recommended. If running Drupal 8 on PHP 5.6 you go, only pain will you find.
So how do you run PHP 5 and PHP 7 simultaneously on the same host? Spin up a pair of VMs? Slip in Nginx and PHP-FPM alongside Apache? The former option is acceptable. The latter borders on sadomasochism.
The answer is, of course, Docker.
This Guy’s Setup
I use Linux as my primary Operating System (OS). Ubuntu 18.04 loaded with the latest packages of Apache 2.4, MySQL 5.7, and PHP 7.2 from Ubuntu’s official repositories.
Drupal 8
My Ubuntu host is similar enough to the production environment where D8 is to be deployed that I created an Apache Virtual Host (vhost) and MySQL database then downloaded D8 using a composer template and installed it with Drupal Console.
What is the Drupal Console? The Drupal CLI. A tool to generate boilerplate code, interact with and debug Drupal. From the ground up, it has been built to utilize the same modern PHP practices which were introduced in Drupal 8.
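For reference, and allowing that the exact commands vary by version, the download-and-install dance was something along these lines (the project name here is made up):
$ composer create-project drupal-composer/drupal-project:8.x-dev my-d8-site --no-interaction
$ cd my-d8-site
$ drupal site:install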
Drupal 6
This is where the fun begins. But first I’ll explain the differences between a VM and a Container.
VMs and Containers Compared
VM
There are many VM providers. VirtualBox, QEMU, and VMWare to name a few. A VM contains a full OS and kernel running in isolation (so lonely) from the host. It is indistinguishable from a proper desktop or server.
Before booting, VMs are allocated resources such as RAM and CPU cores. The VM provides a hardware emulation layer between the guest OS and the host, which looks and feels like bare metal as far as the guest OS is concerned.
Because they resemble physical desktops and servers, VMs require significant amounts of the host’s system resources. In contrast to Containers, this severely limits the number of VMs that can run concurrently on a single host. Boot-up and shutdown times are also the same as a physical machine’s; another significant difference.
Containers
Containers offer the advantages of VMs without the overhead. By virtualizing at the kernel level containers share resources with the host. Many more containers can run simultaneously on a single machine compared to VMs.
Containers are concerned with resource prioritization rather than resource allocation. In other words, a container asks “When will you run this process for me, niijikiwenh?” rather than “How much CPU do I have to run this process?”.
Finally, starting up or shutting down a Container is super fast *whoooooosh*. Because Containers share a fully loaded kernel with the host, they can be started, perform a task, then shut down within milliseconds. Containers are the mayflies of the tech world. On the flip side, they can last until an act of God brings them down along with your house.
Docker
I messed with Docker years ago but only recently gave it a prime time slot in my regularly scheduled programming.
Docker makes it easier to create, deploy, and run an application in a lightweight and portable Container by packaging it up with its dependencies and shipping them out as an image.
I’ve only skimmed the surface of Docker and don’t fully understand how it works under the hood. I’m also anxious to check out a competitor such as Canonical’s LXD or CoreOS/Red Hat’s rkt. All in good time.
Docker Images
Docker loads an image containing an OS and the software needed to do a job into a container. In other words, an image contains your application’s runtime environment.
Creating an image is rather painless, depending on the complexity of your requirements. You write a set of instructions in a Dockerfile, then run docker build. Our tutorial requirements are simple and can be met with pre-existing images pulled from Docker Hub, a Docker image registry service.
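For illustration only (this image isn’t used anywhere in the tutorial), a minimal Dockerfile that bakes figlet into an Ubuntu base could contain:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y figlet
ENTRYPOINT ["figlet"]
You would then build and run it with:
$ sudo docker build -t my-figlet .
$ sudo docker run --rm my-figlet "d0ck3r 4 l1fe"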
While I can find an image which contains Apache, PHP, and MySQL all together, we’re going to follow best practices and separate the web server from the database into 2 containers where they will communicate through an internal subnet created by Docker.
Persisting Data
Finally, containers are designed to be disposable, with the ability to run as a single instance on a single host, or to be scaled as multiple instances distributed over server clusters. By default, data is written to a container’s writable layer and is disposed of along with the container.
Volumes and Bind Mounts are Docker’s two options for persisting data. I can, and maybe will, write an entire post to fully explain them. But to keep it brief I will say Volumes are managed by Docker, isolated from the host, can be mounted into multiple containers, and can be stored on remote hosts or with a cloud provider.
Bind Mounts are a file or directory on the host machine mounted into a container. They are a good option for sharing configuration data, source code, and build artifacts during development. In production, build artifacts are best copied directly into the image, configuration kept in the environment, and source code left out entirely.
Volumes are recommended for storing data used and generated by containers. Bind mounts depend on the host machine’s directory structure, hampering container portability.
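As a quick illustration (the volume name and image here are arbitrary), the two look almost identical on the command line; whether the first part of -v is a name or a path decides which you get:
$ sudo docker volume create mydata
$ sudo docker run --rm -v mydata:/data ubuntu:18.04 ls /data # named volume, managed by Docker
$ sudo docker run --rm -v "$(pwd)":/data ubuntu:18.04 ls /data # bind mount, host directory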
In this tutorial we will get by with a bind mount.
Summary
That’s Docker so far as I understand it. I hope you find it beneficial and are encouraged to begin developing with Docker. I invite you to join in on the fun below and follow the step-by-step instructions to get down and dirty with Docker.
Tutorial
Let’s setup Drupal 6 within containers in Ubuntu. If you are not using Ubuntu don’t fret, the only step you need to change is “Install Docker”. In that case refer to https://docs.docker.com/install/#supported-platforms for instructions to install Docker on your OS.
If you catch any mistakes or see room for improvement please contact me. Otherwise, wacka wacka.
Prerequisites
sudo (or root) — Required to install and run Docker. To run docker commands without sudo or root you must add your user account to the docker group.
Table of Contents
- Install Docker
- Add user to docker group
- Start Docker
- Pull MySQL image
- Start container
- Download Drupal 6
- Pull Apache/PHP image
- Enable mod_rewrite
- Allow Overrides
- Start container with a bind mount
- Install Drupal
- Cleanup
Biminizha’.
Install Docker
Open a terminal and ensure your package lists are up to date then install Docker (aka Docker Engine):
$ sudo apt update
$ sudo apt install docker.io -y
Output:
<heaps of output>
Processing triggers for systemd (237-3ubuntu10.3) ...
Docker Engine consists of three major components:
- dockerd (Server) — a daemon that is a long-running background process
- docker (Client) — a command line interface
- REST API — specifies interfaces that programs can use to communicate with the daemon
Start Docker
Kick-start the aforementioned long-running background process:
$ sudo systemctl start docker
Optionally, tell systemd to start docker on system boot:
$ sudo systemctl enable docker
Docker is now installed and ready for use. Check if docker is running:
$ systemctl is-active docker
Output:
active
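Once the daemon is up, docker itself can report both the client and server components:
$ sudo docker version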
Pull MySQL image
Now that you have docker running you can pull your first image. Start with MySQL version 5.6 (without :5.6 specified, :latest is implied):
$ sudo docker pull mysql:5.6
Output:
5.6: Pulling from library/mysql
802b00ed6f79: Pull complete
30f19a05b898: Pull complete
3e43303be5e9: Pull complete
94b281824ae2: Pull complete
51eb397095b1: Pull complete
3f6fe5e46bae: Pull complete
b5a334ca6427: Pull complete
115764d35d7a: Pull complete
719bba2efabc: Pull complete
284e66788ee1: Pull complete
0f085ade122c: Pull complete
Digest: sha256:4c44f46efaff3ebe7cdc7b35a616c77aa003dc5de4b26c80d0ccae1f9db4a372
Status: Downloaded newer image for mysql:5.6
Start MySQL
Start the DB container, options are explained below:
$ sudo docker run -d \
--name="drupal-mysql" \
-e MYSQL_ROOT_PASSWORD=drupalroot \
-e MYSQL_DATABASE=drupal6 \
-e MYSQL_USER=drupal \
-e MYSQL_PASSWORD=drupal6pass \
mysql:5.6
- -d — Start the container as a background process.
- --name — Will be referenced during the Drupal install. A random name will be assigned if one isn’t provided.
- -e — Sets an environment variable. MySQL will be configured with the values passed in from the environment.
Output (will differ):
de99c912e3fbeb4f113889c145b5fab82787259c21d51962c9186e90c27d2857
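Before moving on, you can verify the database container is actually up; drupal-mysql should appear in the list:
$ sudo docker container ls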
Download Drupal 6
D6 is available for download from the official Drupal site packaged as a gzipped tarball. You can grab it with wget:
$ cd ~
$ wget https://ftp.drupal.org/files/projects/drupal-6.38.tar.gz
$ tar -xzf drupal-6.38.tar.gz
Verify drupal-6.38 exists in your home directory:
$ if test -d ~/drupal-6.38; then echo "It exists"; fi
Output:
It exists
Pull Apache/PHP image
Now pull a docker image of Ubuntu 14.04 LTS with Apache 2, PHP 5, and Composer from https://hub.docker.com/r/nimmis/apache-php5/:
$ sudo docker pull nimmis/apache-php5
Output:
Using default tag: latest
latest: Pulling from nimmis/apache-php5
c2c80a08aa8c: Pull complete
6ace04d7a4a2: Pull complete
f03114bcfb25: Pull complete
99df43987812: Pull complete
9c646cd4d155: Pull complete
5c017123b62e: Pull complete
8f95d9abec41: Pull complete
c46de42c66c3: Pull complete
9a19620cecad: Pull complete
5c62abdf642f: Pull complete
Digest: sha256:712d35d5cc30e6a911e260e871f08f77d5684edcc50cba21163535714c547ff5
Status: Downloaded newer image for nimmis/apache-php5:latest
DocumentRoot and Incoming Port
The containerized Apache’s default DocumentRoot is /var/www/html, which we will bind mount to the D6 files in ~/drupal-6.38.
Because I already have Apache on the host I have to bind the container’s port 80 to something else. I’m using 10080 but you can choose almost any other free port.
$ sudo docker run -d \
-p 10080:80 \
-v ~/drupal-6.38:/var/www/html \
--name="drupal-app" \
--link="drupal-mysql" \
nimmis/apache-php5
Output:
0398890ab8e0a082f68373c8e7fd088e925f9bac0eca178399b883091919ee77
An explanation of what’s between run and nimmis/apache-php5:
- -d — Daemonize, run in background.
- -p 10080:80 — Bind host port 10080 to container port 80.
- -v ~/drupal-6.38:/var/www/html — Bind host directory to container directory.
- --name="drupal-app" — Name the container instance for convenience.
- --link="drupal-mysql" — Link to the MySQL container so Drupal can communicate with the database.
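If the site refuses to come up in the next step, the web container’s output can be inspected with docker logs:
$ sudo docker logs drupal-app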
Install Drupal
Open http://localhost:10080 in a browser (xdg-open is a program that will open a file or URL in the preferred application as set in your OS):
$ xdg-open http://localhost:10080
Tada! The Drupal 6 installation page should be open in a browser, served from within a set of Docker containers.
To complete the installation use the database name (drupal6), username (drupal), and password (drupal6pass) as set in the Start MySQL step. Under Advanced Options, set the Database host to the name of your MySQL container, drupal-mysql.
Cleanup
When you have finished with Drupal 6 shut down the containers and delete them from the host.
Stop the containers:
$ sudo docker container stop drupal-app drupal-mysql
Output:
drupal-app
drupal-mysql
Remove the containers:
$ sudo docker container rm drupal-app drupal-mysql
Output:
drupal-app
drupal-mysql
Verify the containers have been deleted:
$ sudo docker container ls
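Optionally, if you’re completely done with this setup, you can also delete the downloaded images to reclaim disk space:
$ sudo docker image rm mysql:5.6 nimmis/apache-php5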
That’s it, move along. Baamaapii.
Bonus: docker group
Displaying the list of groups you belong to is simple:
$ groups
Output:
roosta adm cdrom sudo dip plugdev lpadmin sambashare
Add your user account to the docker group:
$ sudo usermod -aG docker $USER
You must log out then log back in before it takes effect.
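If you’d rather not log out right away, newgrp starts a new shell with the docker group applied so you can test immediately:
$ newgrp docker
$ docker container ls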
Scaffold and Deploy a Jekyll GitHub Pages Blog in 5 Minutes
Ahnee! Static websites have made a comeback. Innovations in content generation, the adoption of Markdown in workflows, deployment technology, and free hosting have made static websites an attractive option for those who don’t need the capabilities of a framework or content management system.
Jekyll is a static site generator that made a big splash in the world of static websites. And GitHub has become the de facto standard in social coding, with GitHub Pages offered as an attractive option for free static website hosting.
Jekyll is a blog-aware static site generator in Ruby
My experience in static
18 years ago, in my first job as a web developer, we generated and maintained static websites. The shop I worked for built an in-house Perl templating “engine” which recursively crawled a directory looking for files with a custom file extension to parse.
These files contained content in the form of HTML and XML-esque tags which were essentially variables that set the active section in the navigation, helped generate breadcrumbs, figured out which shared content blocks to display in the sidebars, and so on. The meat of each file was the content for the page being generated. Using regex, the contents of a master template were populated with the contents of the parsed files and then output into an HTML file.
The websites were generally 100+ pages and the engine wasn’t great. There was no easy way to generate a single page, or a subset of pages. And it was slow. We would inevitably edit the HTML files directly when doing maintenance work. Site-wide changes ended up being a scary Perl regexp which ran independently of the engine, and our master template would quickly become obsolete.
Today we have much better tooling. Written in Ruby, Jekyll is arguably the most popular static site generator.
Why develop a static website?
Why indeed. I can think of a few pros to creating and maintaining a static website.
- No server side programming required
Creating a website usually begins with a framework. The most popular software for creating websites today is WordPress. Written in PHP, WordPress began as a blogging platform but has since progressed to a full-fledged Content Management System. A WordPress website requires a server which supports PHP, which you can set up yourself or pay a hosting provider like https://wordpress.com/. It’s definitely possible to create a WordPress website without knowing PHP, but to customize your site you will eventually have to get your hands dirty with a bit of coding.
- Your website will be fast
A static web page is written in HTML and is served by a web server such as Apache or Nginx. Because the HTML document does not require any additional processing, it is served directly to the browser with minimal effort on the web server’s part. Web servers such as Nginx are optimized for serving static assets, resulting in web pages which load near instantaneously.
- Search engines favor fast web pages
Google provides us with guidelines for Search Engine Optimization. In 2012, Google revealed that speed is a factor in rankings.
Tutorial
Enough talk, let’s get down to generating our new site.
Pre-requisites
- Terminal (a command-line) — How to open Terminal on Ubuntu Bionic Beaver 18.04 Linux
- RubyGems — RVM package for Ubuntu
- Git — How To Install Git on Ubuntu 18.04
- A GitHub account — Join GitHub
Installing ‘gem’ and ‘git’ is outside the scope of this tutorial, but the links provided above are great for Ubuntu 18.04. Drop me a line or comment below if you need help.
Create and Preview Your Site
To kick off, first you need to open a terminal and install the Jekyll ruby gem:
$ gem install bundler jekyll
Use jekyll to generate your awesome new site:
$ jekyll new my-awesome-site
If all went well you can use jekyll to preview your new site:
$ cd my-awesome-site
$ bundle exec jekyll serve
Point your browser to http://localhost:4000.
When you are finished previewing press Ctrl-C to shut down the preview server.
Deploy to GitHub Pages
GitHub Pages is an excellent FREE web hosting service for your GitHub repositories.
Websites for you and your projects.
Hosted directly from your GitHub repository. Just edit, push, and your changes are live.
Create a New Repository
A repository (or repo for short) is a place to manage a project and track changes to project files over time.
Visit https://github.com/new
Under “Repository name” type “my-awesome-site”
Optionally fill in a “Description”:
Leave “Public” selected (“Private” repos are only available with paid GitHub plans), and leave all other options as is.
To continue, click “Create repository”. When your repo is created you will land on its page. You now have an empty repo and may continue with configuration.
Configure Repository for GitHub Pages
To link your repository to GitHub Pages you must specify which branch will be published.
A git branch is essentially a series of snapshots of your code over time. Branching allows you, or multiple users, to modify code in an isolated work area that doesn’t affect other branches or the project as a whole.
For the purpose of this tutorial we will assign the “master” branch to be published.
From your “my-awesome-site” repository page click the “Settings” tab which brings you to https://github.com/username/my-awesome-site/settings.
Scroll down to “GitHub Pages” where you will see “Source” with a drop-down set to “None”.
Click the drop-down and select “master branch”:
With “master branch” selected, click “Save”.
Once saved, scroll back down to see the message “Your site is ready to be published at https://username.github.io/my-awesome-site/.”
Configure your Gemfile
A Gemfile is used by Bundler to declare the gem dependencies a Ruby project needs. A Gemfile was automatically generated by Jekyll in the previous steps.
You must update your Gemfile to instruct GitHub to build your gem for deployment to GitHub Pages.
Use whichever editor you are comfortable with but here I’m using “vim”:
$ vim Gemfile
Our changes are straightforward. Simply comment out (prefix the line with #):
gem "jekyll", "~> 3.8.4" (line 11 in the generated Gemfile; 3.8.4 is the version at the time of writing and may have changed by now)
and un-comment:
# gem "github-pages", group: :jekyll_plugins (line 18)
Your Gemfile should now have a ‘#’ added to the beginning of line 11 and the ‘#’ removed from the beginning of line 18.
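In plain text (version numbers will likely differ), the two edited lines should end up roughly like this:
# gem "jekyll", "~> 3.8.4"
gem "github-pages", group: :jekyll_plugins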
Now Save and close the Gemfile (instructions for vim):
:wq
Update _config.yml
_config.yml contains variables specific to your website. Before you deploy you may want to set the title and description, and possibly your Twitter and GitHub usernames.
You must set the baseurl to ensure URLs are generated correctly, otherwise your assets and page links will not work.
$ vim _config.yml
Make whatever edits you like, but be sure to edit the “baseurl” (line 22) so it reads:
baseurl: "/my-awesome-site" # the subpath of your site, e.g. /blog
Save and close.
:wq
Commit Your Site Files
You have your site files, configured for GitHub Pages, and you have an empty GitHub repository named “my-awesome-site”.
The next step is to “check-in” your files to the repo (replace username with your GitHub username):
$ git init
$ git add .
$ git commit -m "first commit"
$ git remote add origin https://github.com/username/my-awesome-site.git
$ git push -u origin master
Verify your files have been published to GitHub by visiting https://github.com/username/my-awesome-site:
Tada! Visit your site
When you committed your files to the repository, GitHub automagically built a set of HTML files and deployed them to GitHub Pages.
Verify your blog is deployed by visiting https://username.github.io/my-awesome-site/.
Conclusion
You are now initiated into the world of Jekyll and GitHub Pages. To create a new blog post, add a new file to the “_posts” directory and check it into your repo. Be sure to examine the default post and stick to its naming convention and the template it uses within. Once checked in, visit your site and your new post should appear in the homepage list.
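For example (the filename, date, and title here are made up), a file added as _posts/2018-11-05-my-second-post.md with front matter like the following will appear on the homepage once pushed:
---
layout: post
title: "My second post"
date: 2018-11-05
categories: jekyll update
---
Post content written in Markdown goes here.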
I hope this post was helpful, drop me a line or comment with any questions, corrections, or just to say Ahnee!
Gabekana!