Ben Lobaugh Online

I think, therefore I am. I am, therefore I sail

How to Detect Mobile Devices Accessing a Shopify Store

In this article I will show you how to detect mobile devices accessing a Shopify store.

Shopify is a great platform for many ecommerce stores. Developing themes is simple with the Liquid template language; however, it does have some drawbacks. Liquid templates are pre-rendered on the server and cannot respond dynamically to the device accessing the store. This means that whether the customer is visiting from a desktop, tablet, or phone, they will receive the exact same HTML content. There is no way to send custom content based on the device the customer is connecting with.

I hear you thinking, “But wait, isn’t this article about mobile devices and Shopify?” Yes, it is. In order to detect and serve different content to mobile devices, another tool will be used: JavaScript.

JavaScript runs in the browser, meaning it takes effect after the Shopify store has delivered the content to the customer. With a little care, it is possible to dynamically load content based on the customer's device.

Here is the JavaScript you can use. It detects several mobile browsers and platforms; customize the list as needed.

<script>
var isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|webOS|BlackBerry|IEMobile|Opera Mini)/i);

if (isMobile) {
    // Mobile functionality
} else {
    // Desktop functionality
}
</script>
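
To make that a bit more concrete, here is a minimal sketch of the kind of thing the mobile branch could do, such as swapping a desktop banner for a mobile one. The element IDs are hypothetical and not part of any real theme:

<script>
var isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|webOS|BlackBerry|IEMobile|Opera Mini)/i);

if (isMobile) {
    // Hypothetical example: show a mobile-only banner and hide the desktop hero image
    var mobileBanner = document.getElementById('mobile-banner'); // hypothetical element ID
    var desktopHero = document.getElementById('desktop-hero');   // hypothetical element ID

    if (mobileBanner) {
        mobileBanner.style.display = 'block';
    }
    if (desktopHero) {
        desktopHero.style.display = 'none';
    }
}
</script>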

It should be noted that Apple updated iPadOS to request the desktop versions of pages by default. This means that this method may not work 100% of the time for detecting an iPad.
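
If iPad detection matters for your store, one common workaround is to check for a Mac-like user agent that also reports touch support, since iPadOS in desktop mode identifies itself as a Mac. The following is only a sketch of that general technique, not something guaranteed to catch every case:

<script>
// Heuristic: iPadOS in desktop mode reports a Macintosh user agent but still exposes touch points
var isIpadDesktopMode = /Macintosh/i.test(navigator.userAgent) && navigator.maxTouchPoints > 1;

if (isIpadDesktopMode) {
    // Treat the visitor as a tablet/mobile device
}
</script>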


How to Make a Domain Proxy for Digital Ocean Spaces

Digital Ocean has been my go-to solution for hosting for many years. When the Spaces service, an S3-compatible object store, was introduced, I jumped on board right away. The service performs well and allows me to manage all the web infrastructure from one location.

The drawback with Spaces, to me, is how custom domains are handled. It is possible, but you have to turn over DNS control of the domain to Digital Ocean, which is not always possible or practical. For a couple of years I have run various sites with the default domain provided by Digital Ocean.

A default Spaces domain has the format: account.datacenter.digitaloceanspaces.com
For my personal blog this looks like: lobaugh.sfo2.digitaloceanspaces.com

A usable, but not very attractive, domain. I decided to revisit the topic.

Nginx has some powerful proxy capabilities built in, and it turns out they work quite well for creating a domain proxy.

With the proxy enabled, visiting
https://assets.lobaugh.net/image.png
will return the file from
https://lobaugh.sfo2.digitaloceanspaces.com/image.png

Running my own domain proxy does introduce additional complexity and slight overhead, but I am comfortable with it.

I will present the nginx.conf file in its entirety here, then walk through it below.

log_format upstream '[$time_local] Requested: $host$uri - Proxied $proxy_host - Response time remote $upstream_response_time request $request_time';

proxy_cache_path /tmp/nginxcache levels=1:2 keys_zone=my_cache:10m max_size=2g inactive=600m use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_cache my_cache;
        proxy_pass https://lobaugh.sfo2.cdn.digitaloceanspaces.com$uri$is_args$args;
        add_header X-Proxy-Cache $upstream_cache_status;
        access_log /dev/stdout upstream;
    }
}

That’s it! The configuration is fairly simple.

* log_format upstream — (optional) Establishes the format of the log file. Not needed if logging is disabled. Turning off logging may help performance.
* proxy_cache_path — Configures the nginx caching of the files from Spaces. Cached files may use up to 2 gigabytes of disk, and entries are evicted after 600 minutes without being accessed. Though not necessary, this will help save on server resources and wait time for clients.
* server.listen — Establish the web service
* server.location — Configure the web service
* proxy_cache — Sets up the previously configured cache
* proxy_pass — This is the meat and potatoes. This passes the call to assets.lobaugh.net to the Spaces service, and retrieves the file
* add_header — (optional) Adds a simple header that allows us to inspect whether the response was served from the cache, as shown in the check below. Can be safely left out.
* access_log — Send the output of the log to /dev/stdout, based on the upstream format. Not needed if logging is disabled. Turning off logging may help performance
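
To confirm the proxy and cache are behaving, a quick header check from the command line is enough. This is just a sketch; substitute any object that actually exists in your Space for image.png:

# The first request is typically a cache MISS; repeating it should return a HIT
curl -sI https://assets.lobaugh.net/image.png
curl -sI https://assets.lobaugh.net/image.png | grep X-Proxy-Cache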

This is v1 of the configuration. Nginx provides a lot of neat options that can tweak and optimize it. To learn more about the options, here are a few helpful links:

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
https://dev.to/danielkun/nginx-everything-about-proxypass-2ona 
https://www.digitalocean.com/community/tutorials/understanding-nginx-http-proxying-load-balancing-buffering-and-caching
https://www.nginx.com/blog/nginx-caching-guide/ 
https://dev.to/shameemreza/accelerate-your-website-with-nginx-as-a-reverse-proxy-cache-a9o

I am satisfied with this setup for now. It has allowed me to achieve the custom domain I wanted, and has minimal performance impact.

Retire early with the F.I.R.E. method

I do not know many people who want to work until they are old, just to retire and not have the physical ability to do the things they have dreamt of doing. It is easier said than done though…

There is currently a minimalist trend. Some have taken it to the extreme of living in their car to save money, but there must be something better.

I ran across this YouTube video that explains the F.I.R.E. method and the concept of Enough.

F.I.R.E. stands for Financial Independence, Retire Early.

Enough: define what you need and do not live beyond that.

Watch this 26-minute video for an explanation from someone who has been doing it for 20 years.


How to Secure Docker Containers with a Read-Only Filesystem

A compromised website sucks. A compromised website that an attacker can insert code into to manipulate your visitors is even worse! Out of the box, Docker containers provide some security advantages over running directly on the host; however, Docker provides additional features to increase security further. There is a little-known flag in Docker that will convert a container's filesystem to read-only.

My father runs an e-commerce site for his dahlia flower hobby business at https://lobaughsdahlias.com. Many years ago, a hacker was able to gain access to the site files through FTP, and then injected a bit of code into the head of the theme. That code was capturing data about customers and sending it to a server in Russia. Luckily the issue was caught before any sensitive data was sent out, but it served to highlight the many layers it takes to secure a website.

Security is like an onion: peel back one layer and another reveals itself beneath. If the filesystem had been read-only, the attacker still would have been able to get in, but they would have been unable to alter anything on the filesystem, rendering their end goal moot.

To simplify securing a container, Docker provides a read-only runtime flag that will enforce the filesystem into a read-only state.

In the remainder of this article, I am going to show you how to utilize the read-only flag in Docker. This is not a theory lesson, but rather a walkthrough as I show you exactly the steps I took to deploy https://lobaugh.fish, a live production site, with a read-only filesystem.

Prerequisites

  • Working knowledge of Docker
  • Docker installed
  • Docker-compose installed
  • An existing website to play with

The https://lobaugh.fish site is a constantly evolving site where I can share my passion for fishkeeping. The site is built on top of WordPress and hosted in a VPS at Digital Ocean. The webserver in use is Apache, which will be important to know for later.

There are two ways to add the read-only flag: via the Docker CLI tool and via docker-compose.

When using the Docker CLI tool, simply add the `--read-only` flag, and presto, you have a read-only filesystem in the container.

For example:

docker run --read-only [image-name]

Docker-compose is a wrapper for the CLI tool that fills in the flags for you automatically. I prefer to use docker-compose because all the parameters that need to be passed to the container are stored in the docker-compose.yml file and I do not need to remember which ones I used the last time a container was started. This file can also be shared with other developers to ensure a consistent setup amongst the team, and even deployed directly to a server.

The flag for docker-compose is just as simple as the command line. All that needs to be added to the service is:

read_only: true

Yep, that simple. Restart the service and the container’s filesystem will be read-only.

Uh-oh! Danger, Danger, Danger! We have an issue!

When running `docker-compose up` we encountered the following error and the Apache server died:

lobaugh.fish | Sun Jan 10 04:29:45 2021 (1): Fatal Error Unable to create lock file: Bad file descriptor (9)

As it turns out, some server applications do need to write to the filesystem, for a variety of reasons. In particular, Apache writes a pid file to `/run/apache2` and a lock file in `/run/lock`. Without the ability to write those files, the apache2 service will not start.

To combat the issue of required writes by some applications, Docker has a `tmpfs` flag that can be supplied. The tmpfs flag points to a filesystem location and will allow writes to that single location on the filesystem. It must be kept in mind that these locations will be erased when the container ends. If you need the data to persist between container runs, use a volume.
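
For example, if you wanted WordPress uploads to survive container restarts instead of offloading them elsewhere, a named volume could be mounted alongside the read-only root. This is only a sketch with a hypothetical volume name, not part of my actual setup:

services:
  lobaugh.fish:
    read_only: true
    volumes:
      # Hypothetical named volume so uploads persist across container runs
      - uploads-data:/var/www/html/wp-content/uploads

volumes:
  uploads-data: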

When using the Docker CLI, the flag is `--tmpfs`:

docker run --read-only --tmpfs /run/apache2 --tmpfs /run/lock [image]

Similarly, docker-compose contains a `tmpfs` entry under the service:

tmpfs:
- /tmp
- /run/apache2
- /run/lock

Note: I added `/tmp` to enable uploads and other temporary filesystem actions from WordPress.

Re-run the `docker-compose up` command and the site comes alive.

Now for a couple of quick tests…

I am going to get a shell inside the container with:

docker exec -it -u 0 lobaugh.fish /bin/bash

That gets me in with root privileges, which provides the ability to write to any area of the system that a regular user would not have access to, such as `/etc`.

Let’s see if we can write a file to `/etc`. It should not be possible.

root@0c13e6454934:/var/www/html# touch /etc/fstest
touch: cannot touch '/etc/fstest': Read-only file system

Perfect! That is exactly what we wanted to see! The root user cannot write to the filesystem, which means it is truly loaded read-only.

Now let's try `/tmp`, which we should be able to write to.

root@0c13e6454934:/var/www/html# touch /tmp/fstest
root@0c13e6454934:/var/www/html# ls /tmp
fstest

Perfect again! The file was successfully written.

At this point we could perform more tests, and I did when I first set this up, but I will leave further testing to you if you would like to dig deeper.

One other thing to consider: mounted volumes will retain the abilities they were mounted with. That is, if you use the default volume options, that location will be writable. I recommend that, whenever possible, the files running your app are built into the container image and volumes are not used in production. It will make your app more secure and scalable. Volumes can also incur a performance hit, and may not work well on all container platforms, such as Kubernetes.
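
As a rough sketch of what baking the files into the image looks like, a Dockerfile along these lines copies the site into the image at build time. The base image and paths here are illustrative assumptions, not the exact Dockerfile behind this site:

# Illustrative only: copy the site files into the image at build time
FROM php:7.4-apache
COPY . /var/www/html/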

Here is the final docker-compose.yml entry for https://lobaugh.fish:

version: "3.5"

services:
lobaugh.fish:
image: blobaugh/php:7.4-apache
restart: always
read_only: true
tmpfs:
- /tmp
- /run/apache2
- /run/lock

That is all it takes! One simple little trick that may dramatically increase the security of your site.

Build Highly Performant WordPress Sites with Minio and WP Offload Media

WordPress is the leading content management system and it is often thought that WordPress cannot scale. That assertion is a bit misleading. It is true that out of the box WordPress does not scale well, however, WordPress has a flexible hooks system that provides developers with the ability to tap into and alter many of its features. With a little effort, WordPress can indeed be made to be highly performant and infinitely scalable.

One of the first steps that must be taken when building any highly performant and scalable website is to offload the uploaded media files from the web servers. Amazon’s S3 is the top contender in this area, but there are many reasons why it may be desirable to host the files on your own server. Minio is an open source project that allows you to run and host your own S3 compliant service. Minio can be run as a standalone or fully distributed service and has been used to power sites with many terabytes of media.

In this article I am going to show you how I was able to leverage the WordPress WP Offload Media plugin to host uploaded media files on a Minio service.

Both WordPress and Minio have well-written installation guides. You will need both set up if you are following along.
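
If you just want a local Minio instance to experiment with, a single Docker container is enough. The following is a hedged quick-start sketch rather than the production setup behind this site; the host path is arbitrary, and on newer Minio releases the credential variables are MINIO_ROOT_USER and MINIO_ROOT_PASSWORD instead:

docker run -p 9000:9000 \
    -e MINIO_ACCESS_KEY=minio-key \
    -e MINIO_SECRET_KEY=minio-secret \
    -v /srv/minio/data:/data \
    minio/minio server /data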

There are two versions of WP Offload Media:

The Lite version is free and provides everything you need to host new images on Minio. The Pro version has some nice additional features, such as the ability to migrate existing files in the Media Library. Both versions are excellent. Which you need will depend on your use case.

For the rest of this article, I will be using one of my own websites for demonstration purposes. The site is live and available at https://lobaugh.fish. Raising fish has been a hobby of mine since I was a child. This is a site I have been working on that will allow anyone to share my passion, and get a glimpse into my tanks.

At the time this article was published, the site ran as a set of Docker containers on a Digital Ocean VPS, with WordPress on Apache in one container and Minio in another.

The site already existed when I decided to add support for Minio. Because there were existing media files that needed to be offloaded, I chose to go with the WP Offload Media Pro version, for its ability to offload existing content. To save disk space on the web server, I also opted to remove all uploaded files from the web server as soon as they were transferred to the Minio server.

After the initial installation, WP Offload Media will present you with a storage provider page where you can fill in an Amazon S3 access key and secret.

Because I am hosting on Minio, I am going to ignore the storage provider form completely and manually add the keys to the wp-config.php file. Even if you are using S3, I recommend putting the configuration into the wp-config.php file. Using the database option will cause delays as the site queries the database and will have a negative performance impact, especially on high-traffic sites.

Place the following in the wp-config.php file:

define( 'AS3CF_SETTINGS', serialize( array(
    'provider' => 'aws',
    'access-key-id' => 'minio-key',
    'secret-access-key' => 'minio-secret',
) ) );

Notice the provider key says aws and there is no mention of our Minio server? Out of the box, WP Offload Media does not have an option for Minio, however, because Minio is fully S3 compatible, we can alter the URL for the provider away from S3 and to our own services with the following code:

function minio_s3_client_args( $args ) {
    $args['endpoint'] = 'http://lobaugh.fish.minio:9000';
    $args['use_path_style_endpoint'] = true;

    return $args;
}
add_filter( 'as3cf_aws_s3_client_args', 'minio_s3_client_args' );

I put that code into an mu-plugin, to ensure it runs all the time and cannot easily be disabled by plugin deactivation or theme changes.
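
For anyone unfamiliar with mu-plugins, they are plain PHP files dropped into wp-content/mu-plugins/ that WordPress loads automatically. A minimal file holding the snippet above might look like this; the file name and header are just an example:

<?php
/**
 * Plugin Name: Minio endpoint for WP Offload Media
 * Description: Points WP Offload Media at the local Minio service.
 */
// Example location: wp-content/mu-plugins/minio-offload.php

function minio_s3_client_args( $args ) {
    $args['endpoint'] = 'http://lobaugh.fish.minio:9000';
    $args['use_path_style_endpoint'] = true;

    return $args;
}
add_filter( 'as3cf_aws_s3_client_args', 'minio_s3_client_args' );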

The URL http://lobaugh.fish.minio:9000 is not a web-accessible domain. It is how the web server Docker container communicates with the other container that is running Minio. In your use case, it may be a web-accessible domain that is pointed to.

Media files are accessible at https://assets.lobaugh.fish. This is managed by another small snippet of code that creates the URL string for the media file:

function minio_s3_url_domain( $domain, $bucket ) {
    return 'assets.lobaugh.fish/' . $bucket;
}
add_filter( 'as3cf_aws_s3_url_domain', 'minio_s3_url_domain', 10, 2 );

Back on the storage provider page, pick the bucket you would like to save the images in. Then update the rest of the settings to your liking.

My config is:

  • Provider: Amazon S3
  • Bucket: media
  • Copy files to Bucket: true
  • Path: off
  • Year/Month: off
  • Object Versioning: off
  • Rewrite Media URLs: on
  • Force HTTPS: off (The webserver handles this for us)
  • Remove Files from Server: Yes

I then clicked the offload existing media button and began to see files appear on the Minio server immediately.

Load the site and validate the media URLs are pointing to the Minio URL.
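
A quick way to spot-check this from the command line is to pull the page source and look for the offloaded asset domain; this one-liner is just a convenience sketch:

curl -s https://lobaugh.fish/ | grep -o 'https://assets.lobaugh.fish[^"]*' | head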

That is all it took to offload the WordPress Media Library to Minio!

Offloading the Media Library is one of the components required to build a highly performant and scalable WordPress site. This will allow you to increase the number of web server instances almost limitlessly, without worrying about file system replication or syncing issues. You have now seen how easily this can be accomplished with Minio. I challenge you to go forth and conquer Minio on your own WordPress powered site!
