I think, therefore I am. I am, therefore I sail


Retire early with the F.I.R.E. method

I do not know many people who want to work until they are old, just to retire and not have the physical ability to do the things they have dreamt of doing. It is easier said than done though…

There is currently a minimalist trend. Some have taken it to extremes of living in their car to save money, but there must be something better.

I ran across this YouTube video that explains the F.I.R.E. method, and Enough.


Enough: define what you need and do not live beyond that.

Watch this 26-minute video for an explanation from someone who has been doing it for 20 years.

Photo by Chepe Nicoli on Unsplash

How to Secure Docker Containers with a Read-Only Filesystem

A compromised website sucks. A compromised website that an attacker can insert code into to manipulate your visitors is even worse! Out of the box, Docker containers provide some security advantages over running directly on the host, but Docker also provides additional features to increase security. There is a little-known flag in Docker that will convert a container’s filesystem to read-only.

My father runs an e-commerce site for his dahlia flower hobby business at https://lobaughsdahlias.com. Many years ago, a hacker was able to gain access to the site files through FTP, and then injected a bit of code into the head of the theme. That code was capturing data about customers and sending it to a server in Russia. Luckily the issue was caught before any sensitive data was sent out, but it served to highlight the many layers it takes to secure a website.

Security is like an onion, peel back one layer and another reveals itself beneath. If the filesystem had been read-only, the attacker still would have been able to get in, but they would have been unable to alter anything on the filesystem, thus rendering their end goal moot.

To simplify securing a container, Docker provides a read-only runtime flag that will enforce the filesystem into a read-only state.

In the remainder of this article, I am going to show you how to utilize the read-only flag in Docker. This is not a theory lesson, but rather a walkthrough as I show you exactly the steps I took to deploy https://lobaugh.fish, a live production site, with a read-only filesystem.

Prerequisites:


  • Working knowledge of Docker
  • Docker installed
  • Docker-compose installed
  • An existing website to play with

The https://lobaugh.fish site is a constantly evolving site where I can share my passion for fishkeeping. The site is built on top of WordPress and hosted in a VPS at Digital Ocean. The webserver in use is Apache, which will be important to know for later.

There are two ways to add the read-only flag: via the docker cli tool, and via docker-compose.

When using the docker cli tool, simply add the `--read-only` flag, and presto, you have a read-only filesystem in the container.


docker run --read-only [image-name]

Docker-compose is a wrapper for the cli tool that automatically fills in the flags for you. I prefer to use docker-compose because all the parameters that need to be passed to the container are stored in the docker-compose.yml file and I do not need to remember which I had used last time a container was started. This file can also be shared with other developers to ensure a consistent setup amongst the team, and even deployed directly to a server.

The flag for docker-compose is just as simple as the command line. All that needs to be added to the service is:

read_only: true

Yep, that simple. Restart the service and the container’s filesystem will be read-only.

Uh-oh! Danger, Danger, Danger! We have an issue!

When running `docker-compose up` we encountered the following error and the Apache server died:

lobaugh.fish | Sun Jan 10 04:29:45 2021 (1): Fatal Error Unable to create lock file: Bad file descriptor (9)

As it turns out, some server applications do need to write to the filesystem, for any variety of reasons. In particular, Apache writes a pid file to `/run/apache2` and a lock file in `/run/lock`. Without the ability to write those files, the apache2 service will not start.

To combat the issue of required writes by some applications, Docker has a `tmpfs` flag that can be supplied. The tmpfs flag points to a filesystem location and will allow writes to that single location on the filesystem. It must be kept in mind that these locations will be erased when the container ends. If you need the data to persist between container runs, use a volume.

When using the Docker cli, the flag is `tmpfs`:

docker run --read-only --tmpfs /run/apache2 --tmpfs /run/lock [image]

Similarly, docker-compose contains a `tmpfs` entry under the service:

tmpfs:
    - /tmp
    - /run/apache2
    - /run/lock

Note: I added `/tmp` to enable uploads and other temporary filesystem actions from WordPress.

Re-run the `docker-compose up` command and the site comes alive.

Now for a couple of quick tests…

I am going to get a shell inside the container with:

docker exec -it -u 0 lobaugh.fish /bin/bash

That gets me in with root privileges, which provides the ability to write to any area of the system that a regular user would not have access to, such as `/etc`.

Let’s see if we can write a file to `/etc`. It should not be possible.

root@0c13e6454934:/var/www/html# touch /etc/fstest
touch: cannot touch '/etc/fstest': Read-only file system

Perfect! That is exactly what we wanted to see! The root user cannot write to the filesystem, which means it is truly loaded read-only.

Now let’s try in `/tmp`, which we should be able to write to.

root@0c13e6454934:/var/www/html# touch /tmp/fstest
root@0c13e6454934:/var/www/html# ls /tmp
fstest

Perfect again! The file was successfully written.

At this point we could perform more tests, and initially I did, but I will leave further testing to you.

One other thing to consider: mounted volumes will retain the abilities they were mounted with, i.e. if you used the default volume command, that location will be writable. I recommend that whenever possible the files running your app are built into the container image, and volumes not be used in production. It will make your app more secure and scalable. Volumes can also incur a performance hit, and may not work well on all container platforms, such as Kubernetes.
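If a volume is unavoidable in production, it can at least be mounted read-only by appending the `:ro` flag, so a compromised container still cannot alter the files. A minimal sketch, with a hypothetical `./html` directory on the host:

```yaml
# Mount the site files into the container read-only.
# The ./html host path is hypothetical; adjust for your project.
volumes:
  - ./html:/var/www/html:ro
```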

Here is the final docker-compose.yml entry for https://lobaugh.fish:

version: "3.5"

services:
  lobaugh.fish:
    image: blobaugh/php:7.4-apache
    restart: always
    read_only: true
    tmpfs:
      - /tmp
      - /run/apache2
      - /run/lock

That is all it takes! One simple little trick that may dramatically increase the security of your site.

Build Highly Performant WordPress Sites with Minio and WP Offload Media

WordPress is the leading content management system and it is often thought that WordPress cannot scale. That assertion is a bit misleading. It is true that out of the box WordPress does not scale well, however, WordPress has a flexible hooks system that provides developers with the ability to tap into and alter many of its features. With a little effort, WordPress can indeed be made to be highly performant and infinitely scalable.

One of the first steps that must be taken when building any highly performant and scalable website is to offload the uploaded media files from the web servers. Amazon’s S3 is the top contender in this area, but there are many reasons why it may be desirable to host the files on your own server. Minio is an open source project that allows you to run and host your own S3 compliant service. Minio can be run as a standalone or fully distributed service and has been used to power sites with many terabytes of media.

In this article I am going to show you how I was able to leverage the WordPress WP Offload Media plugin to host uploaded media files on a Minio service.

Both WordPress and Minio have well-written installation guides. You will need both set up if you are following along.
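If you just want to experiment before committing to a full install, a standalone Minio instance can be run in Docker. The snippet below is a minimal sketch: the credentials and data path are placeholders, and the environment variable names vary between Minio releases (older builds use MINIO_ACCESS_KEY and MINIO_SECRET_KEY instead).

```yaml
# Minimal standalone Minio service; credentials and paths are placeholders.
version: "3.5"

services:
  minio:
    image: minio/minio
    command: server /data
    restart: always
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio-key        # older releases: MINIO_ACCESS_KEY
      MINIO_ROOT_PASSWORD: minio-secret # older releases: MINIO_SECRET_KEY
    volumes:
      - ./minio-data:/data
```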

There are two versions of WP Offload Media: Lite and Pro.

The Lite version is free and provides everything you need to host new images on Minio. The Pro version has some nice additional features, such as the ability to migrate existing files in the Media Library. Both versions are excellent; which you need will depend upon your use case.

For the rest of this article, I will be using one of my own websites for demonstration purposes. The site is live and available at https://lobaugh.fish. Raising fish has been a hobby of mine since I was a child. This is a site I have been working on that will allow anyone to share my passion, and get a glimpse into my tanks.

At the time this article was published, the site architecture was:

The site already existed when I decided to add support for Minio. Because there were existing media files that needed to be offloaded, I chose to go with WP Offload Media Pro version, for its ability to offload existing content. To save disk space on the web server, I also opted to remove all uploaded files from the web server as soon as they transferred to the Minio server.

After the initial installation, WP Offload Media will present you with a storage provider page where you can fill in an Amazon S3 access key and secret.

Because I am hosting on Minio, I am going to ignore the storage provider form completely and manually add the keys to the wp-config.php file. Even if you are using S3, I recommend putting the configuration into the wp-config.php file. Using the database option will cause delays as the site queries the database, and have a negative performance impact, especially on high traffic sites.

Place the following in the wp-config.php file:

define( 'AS3CF_SETTINGS', serialize( array(
    'provider' => 'aws',
    'access-key-id' => 'minio-key',
    'secret-access-key' => 'minio-secret',
) ) );

Notice the provider key says aws and there is no mention of our Minio server? Out of the box, WP Offload Media does not have an option for Minio, however, because Minio is fully S3 compatible, we can alter the URL for the provider away from S3 and to our own services with the following code:

function minio_s3_client_args( $args ) {
    $args['endpoint'] = 'http://lobaugh.fish.minio:9000';
    $args['use_path_style_endpoint'] = true;

    return $args;
}
add_filter( 'as3cf_aws_s3_client_args', 'minio_s3_client_args' );

I put that code into an mu-plugin, to ensure it runs all the time and cannot easily be disabled by plugin deactivation or theme changes.
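An mu-plugin is just a PHP file dropped into `wp-content/mu-plugins`; WordPress loads everything in that directory on every request with no activation step. A quick sketch of the installation (the `minio-offload.php` file name is hypothetical, and the path assumes a standard WordPress layout):

```shell
# mu-plugins load automatically and cannot be deactivated from wp-admin.
mkdir -p wp-content/mu-plugins

# Create the mu-plugin file; place the as3cf filter code from above inside it.
cat > wp-content/mu-plugins/minio-offload.php <<'EOF'
<?php
// as3cf filter code for the Minio endpoint goes here.
EOF

ls wp-content/mu-plugins
```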

The URL http://lobaugh.fish.minio:9000 is not a web accessible domain. It is how the web server Docker container communicates with the other container that is running Minio. In your use case, it may be a web accessible domain that is pointed to.

Media files are accessible at https://assets.lobaugh.fish. This is managed by another small snippet of code that creates the URL string for the media file:

function minio_s3_url_domain( $domain, $bucket ) {
    return 'assets.lobaugh.fish/' . $bucket;
}
add_filter( 'as3cf_aws_s3_url_domain', 'minio_s3_url_domain', 10, 2 );

Back on the storage provider page, pick the bucket you would like to save the images in. Then update the rest of the settings to your liking.

My config is:

  • Provider: Amazon S3
  • Bucket: media
  • Copy files to Bucket: true
  • Path: off
  • Year/Month: off
  • Object Versioning: off
  • Rewrite Media URLs: on
  • Force HTTPS: off (The webserver handles this for us)
  • Remove Files from Server: Yes

I then clicked the offload existing media button and began to see files appear on the Minio server immediately.

Load the site and validate the media URLs are pointing to the Minio URL.

That is all it took to offload the WordPress Media Library to Minio!

Offloading the Media Library is one of the components required to build a highly performant and scalable WordPress site. This will allow you to increase the number of web server instances near limitlessly, without worrying about file system replication or syncing issues. You have now seen how easily this can be accomplished with Minio. I challenge you to go forth and conquer Minio on your own WordPress powered site!

How to Use the Maxmind Javascript API to Control Content by City, State, or Country

Controlling what a website visitor sees, based on their geolocation, is a fairly common activity today. Most often, this happens on the server, before the content is generated for the visitor to see, but what if you do not have access to manipulate server side code and can only update the Javascript that the site is using? Using the MaxMind Javascript API makes this much easier than you might think.

In this article I am going to show you how I accomplished this, in a way that you can easily replicate. What I am not going to do is show you how to build a website or teach Javascript principles.

Let’s consider this scenario:

  • HTML has already been generated by the server
  • There is no access to change the server side code
  • We do have the ability to add Javascript to the site
  • All elements that must be hidden have the class attribute of “geo-hide”
  • Visitors from Washington State, USA must not see the specified content

Prerequisites:


  • Working website with edit access for javascript
  • MaxMind account. The free trial will suffice

We will be using the MaxMind GeoIP2 Javascript Client API and the GeoIP2 Precision Service.

Architecting the solution

  • When a page loads, determine what state the visitor is making the request from
  • If the state is not Washington, allow them to see all the content
  • If the state is Washington, remove all HTML elements with the class attribute of “geo-hide” from the DOM

Note: This solution does require javascript be enabled on the visitor’s browser. It is rare that javascript is disabled these days. If you are concerned, look up any of the methods of requiring javascript to be enabled in a browser.
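One common approach is the standard `<noscript>` element, which browsers render only when scripting is disabled, so you can at least tell those visitors what is missing:

```html
<noscript>
  <p>This site requires Javascript. Please enable it in your browser to continue.</p>
</noscript>
```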

Another Note: This solution does not provide any protection against bots. MaxMind charges per query to their API; to save money, be sure you only query MaxMind on legitimate visits.

Final Note: This solution will query MaxMind on every page load. It is advised to use some caching method to prevent unnecessary calls. For example, you could cache the geolocation response as a session cookie. If the cookie is set, do not call MaxMind; if it is not, call MaxMind and cache the response.

Include the GeoIP2 Javascript client library

MaxMind has already built a Javascript library that contains the functionality we need. All that needs to be done is to include it.

Add the following script include:

<script src="//geoip-js.com/js/apis/geoip2/v2.1/geoip2.js" type="text/javascript"></script>

Set up the content control code

The Javascript API has a single object method that lets us easily retrieve the visitor location:

geoip2.city( onSuccess, onError );

The two parameters are callbacks: the first receives the geolocation data, the second receives an error, in the event that a geolocation could not be determined.

Plug that into a simple object and we have the following:

var geoipcheck = (function () {
    var onSuccess = function (geoipResponse) {
        var state = geoipResponse.subdivisions[0].iso_code;
        if ( 'WA' == state ) {
            var elements = document.getElementsByClassName('geo-hide');
            // The collection is live, so keep removing the first element
            while ( elements.length > 0 ) {
                elements[0].parentNode.removeChild(elements[0]);
            }
        }
    };
    var onError = function (error) {
        // Error control code here
    };
    return function () {
        geoip2.city( onSuccess, onError );
    };
}());
geoipcheck();

As you can see in the onSuccess function, if the visitor’s state is listed as WA, or Washington, the HTML elements with the class of “geo-hide” will be removed from the DOM.

This is by no means a foolproof, or complete, solution, but it will get you up and running with the ability to control content via geolocation. This is particularly useful on services such as Shopify which do not allow you to alter what is rendered on the server side of things.


JWT User Authentication API with Lumen

Lumen is a great framework to build an API off of, but it does not come with user authentication or authorization. I needed to create a small API that allowed users to create an account and access the service with a JWT. Quality information on how to pull that off with Lumen is hard to come by; this article will provide a single reference point on building a simple user authentication and authorization system with JWTs on Lumen.

In this article I will teach you how to set up user authentication and authorization in Lumen. I will not teach you what Lumen or JWT is. I am going to assume you know what they are or you would not be reading this article.

If you are following along, the prerequisite for what follows is:

  • Running Lumen project

To see the complete code from this article, visit https://github.com/blobaugh/lumen-api-jwt-auth-example

This example includes a docker-compose.yml file that will get you up and running quickly.

For the JWT portion, we will be utilizing the excellent library from https://github.com/tymondesigns/jwt-auth

Install the JWT Library

We will be utilizing the JWT library by Sean Tymon. The library is installable as a composer package, and can be installed with the following command:

composer require tymon/jwt-auth

A secret needs to be generated to configure the JWT library, and added to the .env file. It can be generated with the following artisan command.

php artisan jwt:secret

The .env file was automatically updated with the key. The key will be used to sign all the JWT tokens.
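Every token the library issues is a standard JWT: three base64url-encoded segments separated by dots, with the claims readable by anyone who holds the token (only the signature depends on the secret). As a quick sanity check, the claims segment can be decoded from the shell. The token below is an illustrative sample, not one issued by this app, and its signature segment is a placeholder:

```shell
# A JWT is header.payload.signature, each segment base64url-encoded.
# Sample token whose payload is {"sub":"1"}; the signature is a dummy value.
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxIn0.c2ln'

# Extract the middle (claims) segment
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2)

# base64url uses -_ instead of +/ and drops padding; this payload needs one '='
CLAIMS=$(echo "${PAYLOAD}=" | tr '_-' '/+' | base64 -d)
echo "$CLAIMS"
```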

Prep Lumen

There are now some steps we need to take to prep Lumen, before we can implement the user authentication portion.

To begin, open up the file bootstrap/app.php, then add or uncomment the following:

$app->routeMiddleware([
    'auth' => App\Http\Middleware\Authenticate::class,
]);

Add the JWT Service Provider

Still in bootstrap/app.php, register the library’s service provider so the jwt driver is available:

$app->register(Tymon\JWTAuth\Providers\LumenServiceProvider::class);

Set up the Auth Config

This part bit me at first: Lumen does not come with the config directory like Laravel does. You will need to create it and a file called auth.php.

Create the file config/auth.php and add the following:

<?php

return [
    'defaults' => [
        'guard' => 'api',
        'passwords' => 'users',
    ],

    'guards' => [
        'api' => [
            'driver' => 'jwt',
            'provider' => 'users',
        ],
    ],

    'providers' => [
        'users' => [
            'driver' => 'eloquent',
            'model' => \App\Models\User::class,
        ],
    ],
];

Create the user table

Lumen does not come with the user tables out of the box, so we will need to create them ourselves. We are going to create the same tables that Laravel uses.

Run the following two artisan commands to generate the migration files:

php artisan make:migration create_users_table
php artisan make:migration create_password_resets_table

Place the following code in the up() method of the create_users_table migration:

Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('email')->unique();
    $table->timestamp('email_verified_at')->nullable();
    $table->string('password');
    $table->rememberToken();
    $table->timestamps();
});

Place the following code in the up() method of the create_password_resets_table migration:

Schema::create('password_resets', function (Blueprint $table) {
    $table->string('email')->index();
    $table->string('token');
    $table->timestamp('created_at')->nullable();
});

Finally, run the migrate command to create the tables!

php artisan migrate

Set up the User Model

The User model will allow us to create a representation of the user that can manipulate the database table, and manage a user’s JWT tokens.

Open your User model and add the following use statement:

use Tymon\JWTAuth\Contracts\JWTSubject;

Add JWTSubject to the class implements clause, which will make it similar to:

class User extends Model implements AuthenticatableContract, AuthorizableContract, JWTSubject

Now add the following methods for JWT handling:

/**
 * Retrieve the identifier for the JWT key.
 *
 * @return mixed
 */
public function getJWTIdentifier()
{
    return $this->getKey();
}

/**
 * Return a key value array, containing any custom claims to be added to the JWT.
 *
 * @return array
 */
public function getJWTCustomClaims()
{
    return [];
}

Create Authentication Controller

The Authentication controller will handle both the registration of new users and creation/refreshing of the JWT tokens.

I am not going to go through the AuthController line by line. It will be similar to any other auth controller, with a few tweaks for the JWTs.

Create the app/Http/Controllers/AuthController.php file and place in the following code:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Models\User;
use Illuminate\Support\Facades\Auth;

class AuthController extends Controller
{
    public function __construct()
    {
        $this->middleware('auth', ['except' => ['login', 'register']]);
    }

    /**
     * Attempt to register a new user to the API.
     *
     * @param Request $request
     * @return Response
     */
    public function register(Request $request)
    {
        // Are the proper fields present?
        $this->validate($request, [
            'name' => 'required|string|between:2,100',
            'email' => 'required|string|email|max:100|unique:users',
            'password' => 'required|string|min:6',
        ]);

        try {
            $user = new User;
            $user->name = $request->input('name');
            $user->email = $request->input('email');
            $plainPassword = $request->input('password');
            $user->password = app('hash')->make($plainPassword);
            $user->save();

            return response()->json(['user' => $user, 'message' => 'CREATED'], 201);
        } catch (\Exception $e) {
            return response()->json(['message' => 'User Registration Failed!'], 409);
        }
    }

    /**
     * Attempt to authenticate the user and retrieve a JWT.
     *
     * Note: The API is stateless. This method _only_ returns a JWT. There is not an
     * indicator that a user is logged in otherwise (no sessions).
     *
     * @param Request $request
     * @return Response
     */
    public function login(Request $request)
    {
        // Are the proper fields present?
        $this->validate($request, [
            'email' => 'required|string',
            'password' => 'required|string',
        ]);

        $credentials = $request->only(['email', 'password']);

        if (! $token = Auth::attempt($credentials)) {
            // Login has failed
            return response()->json(['message' => 'Unauthorized'], 401);
        }

        return $this->respondWithToken($token);
    }

    /**
     * Log the user out (Invalidate the token). Requires a login to use as the
     * JWT in the Authorization header is what is invalidated.
     *
     * @return \Illuminate\Http\JsonResponse
     */
    public function logout()
    {
        // Invalidate the token from the Authorization header
        Auth::logout();

        return response()->json(['message' => 'User successfully signed out']);
    }

    /**
     * Refresh the current token.
     *
     * @return \Illuminate\Http\JsonResponse
     */
    public function refresh()
    {
        return $this->respondWithToken( auth()->refresh() );
    }

    /**
     * Helper function to format the response with the token.
     *
     * @return \Illuminate\Http\JsonResponse
     */
    private function respondWithToken($token)
    {
        return response()->json([
            'token' => $token,
            'token_type' => 'bearer',
            'expires_in' => Auth::factory()->getTTL() * 60,
        ], 200);
    }
}

Set up the Routes

Next up is some routing! We are almost done!

Open up the routes/web.php file and add the following to it:

$router->post( '/login', 'AuthController@login' );
$router->post( '/register', 'AuthController@register' );

/*
 * Routes that require an authenticated user
 */
$router->group( [
    'middleware' => 'auth',
], function( $router ) {
    $router->post( '/logout', 'AuthController@logout' );
    $router->get( '/refresh', 'AuthController@refresh' );
    $router->post( '/refresh', 'AuthController@refresh' );
} );

Note: You can do this in routes/api.php if you would like. That will cause a prefix of `api/` in the URL


That is it! You now have user authentication and authorization set up on your Lumen API. Congrats!

