Magento 2.4.x, Elasticsearch 7.9.1 on Alpine Linux 3.12, jdk/bin/java: No such file or directory


While setting up a new local test environment specifically for Magento 2.4.x with Elasticsearch on Alpine Linux (Docker), I had some problems getting Elasticsearch up and running.

The first issue I had was a missing JAVA_HOME path. Because of that, I wasn't able to start Elasticsearch at all.

I fixed it by setting the missing JAVA_HOME path, which I later added to my profile settings: https://stackoverflow.com/questions/35325856/where-to-set-system-default-environment-variables-in-alpine-linux
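The exact value depends on where the JDK lives in your container; here is a minimal sketch, assuming Alpine's openjdk11 package, which installs to /usr/lib/jvm/java-11-openjdk:

```sh
# Set JAVA_HOME for the current shell -- adjust the path to your JDK location.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"

# Persist it system-wide, as described in the Stack Overflow answer linked above.
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk' >> /etc/profile
```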

The second issue was related to the Java version inside the Docker container.

Alpine Linux has the required version in its repositories, which made it easy to install the missing package with apk.
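A sketch of the install; on Alpine 3.12 the openjdk11 package lives in the community repository:

```sh
# Install OpenJDK 11 and verify the version Elasticsearch will pick up.
apk add --no-cache openjdk11
java -version
```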

GCE – Open Up Port 8025 in VPC for MailHog


If you want to use MailHog (https://github.com/mailhog/MailHog) on your Google Compute Engine (GCE) instances, you probably have to create a new VPC firewall rule in your project. In this post, I will skip the installation and configuration of MailHog with Supervisord, because there are so many different ways to set it up.

By default, the MailHog web interface runs on port 8025.

[Screenshot: the MailHog web interface]

Once the MailHog process is up and running, you probably won't be able to access your server on port 8025. This is because the port is not open in the VPC firewall. To get access from outside, you must create a new firewall rule in your GCE project. This can be done in the Google console or simply via the command line. Here is how I created a new rule custom-allow-mailhog for port 8025.
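Roughly, the command looks like this; the network and source range shown are assumptions, so adjust them to your project:

```sh
# Allow inbound TCP traffic on port 8025 for the MailHog web interface.
gcloud compute firewall-rules create custom-allow-mailhog \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8025 \
    --source-ranges=0.0.0.0/0
```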

Once the rule has been created, you should be able to open the MailHog interface in your browser.

GCE – SSH Directly Into Your Docker Container


In this article, I am going to change the SSH port from 22 to 52241 and create a new firewall rule in VPC. Please do your own research and make sure you understand the impact, especially in a production environment.

When you deploy a new Docker container on GCE, you can SSH into the host machine (VM). This is simply because the IP address points to the host machine. But what about the Docker container that runs on it? In this case, you must use docker exec or the gcloud command-line tool to get access to the container, which can become pretty time-consuming.
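For reference, the indirect route looks roughly like this; the instance name, zone and container name are placeholders:

```sh
# SSH to the host VM first, then exec into the container from there.
gcloud compute ssh my-instance --zone=europe-west1-b
docker exec -it my-container /bin/sh
```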

I am working on a project where I need direct access to my Alpine Docker container on port 22 or any other port. As you can see in the check below, no ports are exposed for my container.
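A quick way to confirm this; the format string just narrows the docker ps output to the relevant columns:

```sh
# List container names, ports and status -- with host networking, PORTS stays empty.
docker ps --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'
```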

This is mainly because the Docker container shares the network with the host. Source:

Containerized VMs launch containers with the network set to host mode. A container shares the host network stack, and all interfaces from the host are available to the container.

This is very convenient because it allows you to access your Docker container on pretty much any port. However, only a few ports are actually open and configured in the VPC Network > Firewall settings for security reasons. For the SSH connection I decided to open a custom port, 52241. You can do this in your GCP Console under VPC Network > Firewall or with the gcloud command-line tool, as sketched below.
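A sketch of the command; the rule name is hypothetical and 0.0.0.0/0 is wide open, so restrict the source range where possible:

```sh
# Allow inbound TCP traffic on the custom SSH port 52241.
gcloud compute firewall-rules create custom-allow-ssh-52241 \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:52241 \
    --source-ranges=0.0.0.0/0
```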

Once completed, here is how the new firewall rule looks in the console. If you scroll down, you will also see all affected instances.

[Screenshot: the custom firewall rule in the GCP console]

Before you can SSH into your container on port 52241, you must change the port in the sshd configuration /etc/ssh/sshd_config within your container or container image, not on the VM.
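A sketch of the change, assuming a stock sshd_config where the Port directive may still be commented out:

```sh
# Inside the container or in the Dockerfile: switch sshd from port 22 to 52241.
sed -i 's/^#\?Port 22/Port 52241/' /etc/ssh/sshd_config
```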

You may have to restart your SSH service before you can use the new port. Once this is done, you can SSH directly into your Docker container.
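How you restart sshd depends on the image; here is a sketch, with VM_EXTERNAL_IP as a placeholder for your instance's address:

```sh
# Restart sshd inside the container so it listens on the new port...
kill "$(pidof sshd)" && /usr/sbin/sshd

# ...then connect straight to the container from your machine.
ssh -p 52241 root@VM_EXTERNAL_IP
```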

AWS – ECS Keeps Recreating Docker Container


Today was a strange day. I launched a new version of my Docker image within an ECS cluster, which usually works fine. However, today my container kept getting recreated every 5 minutes for no apparent reason.

[Screenshot: ECS service events showing the container being recreated]

The new Docker image that I had pushed to the repository had no significant changes, except a redirect from non-SSL (80) to SSL (443) for all loaded virtual hosts. So when I had a look at the processes, everything looked fine; there was nothing that could put the container into an unhealthy state.

Even the log output of the Docker container didn't show any errors. After a bit of research, I ended up on the following AWS developer guide page: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

It was not exactly what I was looking for, but it pretty much explained the problem I had. One of my colleagues suggested I take a look at the load balancer's health checks.

Indeed, when I had a look at the settings in EC2 > Load Balancing > Target Groups > Health Checks, the health check expected a success code of 200 on port 80, but my new redirect returned 301. Because of that, the container was marked unhealthy every 5 minutes.
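There are two ways out: point the health check at HTTPS, or accept the redirect code as healthy. Here is a sketch of the latter with the AWS CLI; the target group ARN is a placeholder:

```sh
# Treat both 200 and the redirect status 301 as a healthy response.
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/my-tg/abc123 \
    --matcher HttpCode=200,301
```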

[Screenshot: target group health check settings in EC2]

I hope this will point someone in the right direction.

Octopus – Couldn’t find a valid ICU package installed on the system


This morning I created a new ECS environment and tried to deploy a Laravel project on it.

So far so good, until one of the deployment steps got stuck in Octopus because of the error from the title: "Couldn't find a valid ICU package installed on the system."

While looking for a solution, I noticed that I had forgotten to install the libicu package, which is required during the deployment process, on the new ECS machine.
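A sketch of the fix, assuming a yum-based machine image; on Alpine the equivalent package is icu-libs, and Debian/Ubuntu ships versioned libicu packages:

```sh
# Install the ICU package the deployment step complained about.
yum install -y libicu
```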

After installing the missing package, I was able to deploy again without errors.

October CMS – How To Fix Broken Thumbnails


I recently started setting up a dockerized October CMS environment based on Alpine Linux, MariaDB and PHP-FPM, which was actually a very straightforward process. However, for some reason the integrated image resizer didn't create thumbnails of uploaded images.

[Screenshot: broken thumbnail in October CMS]

Unfortunately, there was nothing in the log files that could help me identify the cause of the problem. After checking the file permissions, I decided to trace the issue back and found the reason in the file vendor/october/rain/src/Database/Attach/Resizer.php.

The method getMimeType() always returned null, which broke the entire image manipulation process.

After some quick research I found out that getMimeType() requires the fileinfo PHP extension, which I had simply forgotten to add to my Dockerfile. After adding php7-fileinfo and rebuilding the container, the issue was finally fixed.
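For reference, a sketch of the fix, both as a one-off inside the running container and as the Dockerfile line used for the rebuild:

```sh
# One-off inside the running Alpine container:
apk add --no-cache php7-fileinfo

# The equivalent Dockerfile line for the rebuild:
# RUN apk add --no-cache php7-fileinfo
```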


Notice: Use of undefined constant T_CURLY_OPEN - assumed 'T_CURLY_OPEN' in /var/www/src/setup/src/Magento/Setup/Module/Di/Code/Reader/FileClassScanner.php on line 72

Today I finally had some time to clean up my Dockerfile for Magento 2. I am using Alpine Linux for most of my Magento 2 projects, usually together with NGINX, PHP-FPM and MariaDB.

While testing my updated Dockerfile I came across the PHP notice quoted in the title, which prevented the deployment scripts from finishing the Magento 2 setup.

The problem was simply a missing PHP extension, php7.1-tokenizer; installing it and re-creating the container with --build resolved the issue.
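A sketch of the fix, assuming a docker-compose based setup; the package name follows the php7.1 naming above, so adjust it to your PHP version and repository:

```sh
# Dockerfile line for the missing extension:
# RUN apk add --no-cache php7.1-tokenizer

# Re-create the container so the new image layer is actually built and used.
docker-compose up -d --build
```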


PHP message: PHP Fatal error: Interface 'SessionHandlerInterface' not found in /var/www/src/vendor/laravel/framework/src/Illuminate/Session/FileSessionHandler.php on line 10

Today I successfully installed my first October CMS project locally via composer and the command line. The installation was pretty straightforward and worked surprisingly well in my Alpine Linux Docker container, which includes NGINX, PHP-FPM and MariaDB.

However, when I tried to open the front-end I got the fatal error quoted in the title.

After some quick research I found the solution on Stack Overflow: I had simply forgotten to install the PHP extension php7-session inside my Docker container. I manually installed the missing extension, but also updated my Dockerfile in case I have to rebuild the container.
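A sketch of both steps, the manual install and the matching Dockerfile line for future rebuilds:

```sh
# Install the session extension inside the running Alpine container:
apk add --no-cache php7-session

# And the matching Dockerfile line for rebuilds:
# RUN apk add --no-cache php7-session
```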

After that I manually killed all PHP processes to make sure php-fpm loads the new extension with the next page reload.
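A sketch; the exact process name varies between images (php-fpm, php-fpm7, ...):

```sh
# Kill all running php-fpm workers so the new extension is loaded on the next start.
killall php-fpm7
```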


Nginx Docker Container and getpwnam(“www-data”) problem

I am using nginx:1.10.1-alpine to run Nginx as a Docker container. Today I replaced the Nginx container and kept getting the status Restarting (1) Less than a second ago after starting it.

With the command docker logs nginx I got the getpwnam("www-data") error from the title.

The error message basically means that the user www-data, which I have defined in /etc/nginx/nginx.conf, doesn't exist in /etc/passwd. That makes sense, because nginx:1.10.1-alpine comes with a minimal setup. In my case I need the www-data user for other linked php-fpm containers. To fix the problem, I just added the missing user during the container build.
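A sketch of the user creation during the image build; UID/GID 82 is the common www-data convention on Alpine, so match whatever your php-fpm containers use:

```sh
# These two lines go into the Dockerfile as RUN steps.
addgroup -g 82 -S www-data
adduser -u 82 -D -S -G www-data www-data
```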

Voilà.

Reload Nginx Inside Docker Container

I am using Docker for Nginx, which is running with multiple virtual hosts. Sometimes I have to reload my updated Nginx configurations, but I don't want to restart the container each time. Here is how you can reload Nginx without any downtime and without interrupting any connections.

1. Find your container name

Use docker ps to find your nginx container.
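For example:

```sh
# List running containers, filtered by name.
docker ps --filter name=nginx
```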

2. Reload Nginx

With docker exec -it {container_name} {command} you can directly access your container and execute commands. In my case, the container is named nginx-server.
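Here is a sketch matching the container name above; testing the configuration first is cheap insurance:

```sh
# Validate the updated configuration, then tell Nginx to reload it gracefully.
docker exec -it nginx-server nginx -t
docker exec -it nginx-server nginx -s reload
```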