Ben Lobaugh Online

I think, therefore I am. I am, therefore I sail

Turn Me Up! How I Control My Own In-Ear Monitor Mix

“I need more of myself! Turn me up!” – Musicians Worldwide

It is an age-old dilemma for the sound crew: how to provide the volume levels each musician wants in order to hear themselves. I recall when floor monitors were ubiquitous, and how much time bands would spend arguing over their monitor levels. In smaller venues it sometimes got to the point that the stage monitors were louder than the house speakers! The drop in price of good in-ear monitors to a range the average musician could afford was a game changer.

As a drummer, I love in-ear monitors because they put the monitor audio closer to my eardrums. I do not have to strain to hear the floor monitor over the sounds of enthusiastic banging.

As a keyboardist, in-ear monitors have allowed me to increase the lushness and variety of sounds I can use. Getting the floor monitor volume up where I wanted it would often keep the rest of the band from hearing their own parts, due to the covering nature of pad sounds.

One problem solved, but there is still the issue of many mixing consoles having a limited number of aux channels for monitors. What I commonly encountered was four aux channels, split something along the lines of:

  • Piano
  • Singers
  • Drums and bass
  • The rest of the band

That still left everyone bickering over their own volume.

Nearly two decades ago I came up with a deceptively simple solution that allowed me to both hear as much of myself as I want and control my own volume. All without affecting anyone else sharing the monitor mix. In fact, I could be removed from the monitor mix entirely if desired and still hear myself. 

The solution: Run the monitor send from the board and the output from your instrument into a personal mixer.

Here is a diagram of the idea:

The Personal Mixer sits next to your instrument and allows you to turn up and down the volume of your instrument independently of the monitor mix coming from the soundboard.

During setup, connect a splitter to the output of the instrument, in my case a keyboard. One side runs into the direct box and out to the house, the other into the personal mixer. The monitor send from the house runs into another channel on the personal mixer. Headphones plug into the personal mixer and voilà! I now have control to hear myself as loudly as I would like.

For the splitter, go with something like

My current go-to personal mixer is the Rolls MX28, due to its compact design and how easy it is to mount on a flat surface.

Other mixers I have used with great success:

And, of course, everyone wants to know which earbuds I use. For the price and sound quality, I have never found any better than the KZ lineup.

How to Detect Mobile Devices Accessing a Shopify Store

This article shows you how to detect mobile devices accessing a Shopify store.

Shopify is a great platform for many ecommerce stores. Developing themes is simple with the Liquid template language; however, it does have some drawbacks. Liquid templates are pre-rendered on the server and cannot respond dynamically to the device accessing the store. This means that whether the customer is visiting from a desktop, tablet, or phone, they will receive the exact same HTML content. There is no way to send custom content based upon the device the customer is connecting with.

I hear you thinking, “But wait, isn’t this article about mobile devices and Shopify?” Yes it is. In order to detect and serve different content to mobile devices, another tool will be used: JavaScript.

JavaScript runs in the browser, meaning it executes after the Shopify store has delivered the content to the customer. With a little care, it is possible to dynamically load content based on the customer’s device.

Here is the JavaScript you can use. It detects several mobile browsers; customize the list as needed.

var isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|webOS|BlackBerry|IEMobile|Opera Mini)/i);

if (isMobile) {
    // Mobile functionality
} else {
    // Desktop functionality
}

It should be noted that Apple updated iPadOS to request desktop versions of pages by default. This means this method may not work 100% of the time for detecting an iPad.
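One way to keep the check testable, and to catch iPads running in desktop mode, is to wrap it in a small function. This is a sketch of my own: the `detectMobile` name and its parameters are assumptions, and in the browser you would pass in `navigator.userAgent` and `navigator.maxTouchPoints`.

```javascript
// Wraps the user agent check in a function that can be unit tested
// outside the browser. Parameters mirror navigator.userAgent and
// navigator.maxTouchPoints.
function detectMobile(userAgent, maxTouchPoints) {
  var uaMatch = /(iPhone|iPod|iPad|Android|webOS|BlackBerry|IEMobile|Opera Mini)/i.test(userAgent);
  // iPadOS in desktop mode reports a Macintosh user agent, but the
  // hardware still exposes multiple touch points.
  var iPadDesktopMode = /Macintosh/i.test(userAgent) && maxTouchPoints > 1;
  return uaMatch || iPadDesktopMode;
}

// In the browser:
// var isMobile = detectMobile(navigator.userAgent, navigator.maxTouchPoints || 0);
```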

Photo by Josh Sorenson on Unsplash

How to Make a Domain Proxy for Digital Ocean Spaces

Digital Ocean has been my go-to solution for hosting for many years. When the Spaces service, an S3-compatible object store, was introduced, I jumped on board right away. The service performs well and allows me to manage all the web infrastructure from one location.

The drawback with Spaces, to me, is how custom domains are handled. It is possible, but you have to turn over DNS control of the domain to Digital Ocean. That is not always possible or practical. For a couple of years I have run various sites with the domain provided by Digital Ocean.

A default Spaces domain has the format:
For my personal blog this looks like:

A usable, but not very attractive, domain. I decided to revisit the topic.

Nginx has some powerful proxy capabilities, and it turns out they work quite well for creating a domain proxy.

With the proxy enabled, visiting the custom domain will return the corresponding file from the Spaces URL.

Running my own domain proxy does introduce additional complexity and slight overhead, but I am comfortable with it.

I will present the nginx.conf file in its entirety here, then walk through it below.

log_format upstream '[$time_local] Requested: $host$uri - Proxied $proxy_host - Response time remote $upstream_response_time request $request_time';

proxy_cache_path /tmp/nginxcache levels=1:2 keys_zone=my_cache:10m max_size=2g inactive=600m use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_cache my_cache;
        # Replace with your own Space's endpoint
        proxy_pass https://your-space-name.region.digitaloceanspaces.com;
        add_header X-Proxy-Cache $upstream_cache_status;
        access_log /dev/stdout upstream;
    }
}

That’s it! The configuration is fairly simple.

* log_format upstream — (optional) Establishes the format of the log file. Not needed if logging is disabled. Turning off logging may help performance.
* proxy_cache_path — Configures the nginx caching of the files from Spaces. A 10 minute cache, with a max of 2 gigabytes is created. Though not necessary, this will help save on server resources and wait time for clients.
* server.listen — Establish the web service
* server.location — Configure the web service
* proxy_cache — Sets up the previously configured cache
* proxy_pass — This is the meat and potatoes. It passes the call to the Spaces service and retrieves the file
* add_header — (optional) Adds a simple header item that allows us to inspect whether the response was cached. Can be safely left out
* access_log — Send the output of the log to /dev/stdout, based on the upstream format. Not needed if logging is disabled. Turning off logging may help performance

This is v1 of the configuration. Nginx provides a lot of neat options that can tweak and optimize it. To learn more about the options, here are a few helpful links:
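As an example of where a v2 could go, a few cache-tuning directives can be added inside the `location` block. This is a sketch of options I have not settled on, and the values are illustrative:

```nginx
location / {
    proxy_cache my_cache;
    # Cache successful responses for an hour, misses for a minute
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    # Serve a stale copy if Spaces is slow or unreachable
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    # Collapse concurrent requests for the same uncached file into one upstream fetch
    proxy_cache_lock on;
}
```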

I am satisfied with this setup for now. It has allowed me to achieve the custom domain I wanted, and has minimal performance impact.

Retire early with the F.I.R.E. method

I do not know many people who want to work until they are old, just to retire and not have the physical ability to do the things they have dreamt of doing. It is easier said than done though…

There is currently a minimalist trend. Some have taken it to extremes of living in their car to save money, but there must be something better.

I ran across this YouTube video that explains the F.I.R.E. (Financial Independence, Retire Early) method, and Enough.


Enough: define what you need and do not live beyond that.

Watch this 26 minute video for an explanation from someone who has been doing it for 20 years.

Photo by Chepe Nicoli on Unsplash

How to Secure Docker Containers with a Read-Only Filesystem

A compromised website sucks. A compromised website that an attacker can use to inject code and manipulate your visitors is even worse! Out of the box, Docker containers provide some security advantages over running directly on the host; however, Docker provides additional features to increase security further. There is a little-known flag in Docker that will convert a container’s filesystem to read-only.

My father runs an e-commerce site for his dahlia flower hobby business. Many years ago, a hacker gained access to the site files through FTP and injected a bit of code into the head of the theme. That code was capturing data about customers and sending it to a server in Russia. Luckily the issue was caught before any sensitive data was sent out, but it served to highlight the many layers it takes to secure a website.

Security is like an onion, peel back one layer and another reveals itself beneath. If the filesystem had been read-only, the attacker still would have been able to get in, but they would have been unable to alter anything on the filesystem, thus rendering their end goal moot.

To simplify securing a container, Docker provides a read-only runtime flag that will enforce the filesystem into a read-only state.

In the remainder of this article, I am going to show you how to utilize the read-only flag in Docker. This is not a theory lesson, but rather a walkthrough of exactly the steps I took to deploy a live production site with a read-only filesystem.


  • Working knowledge of Docker
  • Docker installed
  • Docker-compose installed
  • An existing website to play with

The site is constantly evolving and is where I share my passion for fishkeeping. It is built on top of WordPress and hosted on a VPS at Digital Ocean. The webserver in use is Apache, which will be important to know for later.

There are two ways to add the read-only flag: via the docker cli tool, and via docker-compose.

When using the docker cli tool, simply add the `--read-only` flag, and presto, you have a read-only filesystem in the container.


docker run --read-only [image-name]

Docker-compose is a wrapper for the cli tool that automatically fills in the flags for you. I prefer to use docker-compose because all the parameters that need to be passed to the container are stored in the docker-compose.yml file and I do not need to remember which I had used last time a container was started. This file can also be shared with other developers to ensure a consistent setup amongst the team, and even deployed directly to a server.

The flag for docker-compose is just as simple as the command line. All that needs to be added to the service is:

read_only: true

Yep, that simple. Restart the service and the container’s filesystem will be read-only.

Uh-oh! Danger, Danger, Danger! We have an issue!

When running `docker-compose up`, we encountered the following error and the Apache server died:

[Sun Jan 10 04:29:45 2021] (1): Fatal Error Unable to create lock file: Bad file descriptor (9)

As it turns out, some server applications do need to write to the filesystem, for any variety of reasons. In particular, Apache writes a pid file to `/run/apache2` and a lock file in `/run/lock`. Without the ability to write those files, the apache2 service will not start.

To combat the issue of required writes by some applications, Docker has a `tmpfs` flag that can be supplied. The tmpfs flag points to a filesystem location and will allow writes to that single location on the filesystem. It must be kept in mind that these locations will be erased when the container ends. If you need the data to persist between container runs, use a volume.

When using the Docker cli, the flag is `--tmpfs`:

docker run --read-only --tmpfs /run/apache2 --tmpfs /run/lock [image]

Similarly, docker-compose contains a `tmpfs` entry under the service:

tmpfs:
  - /tmp
  - /run/apache2
  - /run/lock

Note: I added `/tmp` to enable uploads and other temporary filesystem actions from WordPress.

Re-run the `docker-compose up` command and the site comes alive.

Now for a couple of quick tests…

I am going to get a shell inside the container with:

docker exec -it -u 0 [container-id] /bin/bash

That gets me in with root privileges, which provides the ability to write to any area of the system that a regular user would not have access to, such as `/etc`.

Let’s see if we can write a file to `/etc`. It should not be possible.

root@0c13e6454934:/var/www/html# touch /etc/fstest
touch: cannot touch '/etc/fstest': Read-only file system

Perfect! That is exactly what we wanted to see! The root user cannot write to the filesystem, which means it is truly loaded read-only.

Now let’s try in `/tmp`, which we should be able to write to.

root@0c13e6454934:/var/www/html# touch /tmp/fstest
root@0c13e6454934:/var/www/html# ls /tmp

Perfect again! The file was successfully written.

At this point, we could perform more tests, and I did initially, but I will leave that to you to test more if you would like.

One other thing to consider: mounted volumes retain the abilities they were mounted with, i.e. if you used the default volume command, that location will be writable. I recommend that whenever possible the files running your app are built into the container image, and volumes not be used in production. It will make your app more secure and scalable. Volumes can also incur a performance hit, and may not work well on all container platforms, such as Kubernetes.
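When a volume genuinely cannot be avoided, docker-compose can mount it with the `:ro` option so the container still cannot write to it. A hypothetical sketch, where the service name and paths are illustrative and only an uploads volume stays writable:

```yaml
services:
  www:
    image: blobaugh/php:7.4-apache
    read_only: true
    volumes:
      # Code is mounted read-only with the :ro option
      - ./site-code:/var/www/html:ro
      # Only the uploads volume remains writable, and persists between runs
      - uploads:/var/www/html/wp-content/uploads

volumes:
  uploads:
```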

Here is the final docker-compose.yml entry for the site:

version: "3.5"

services:
  www:  # service name is illustrative
    image: blobaugh/php:7.4-apache
    restart: always
    read_only: true
    tmpfs:
      - /tmp
      - /run/apache2
      - /run/lock

That is all it takes! One simple little trick that may dramatically increase the security of your site.

