Magento 2.4.x, Elasticsearch 7.9.1 on Alpine Linux 3.12, jdk/bin/java: No such file or directory


While setting up a new local test environment for Magento 2.4.x with Elasticsearch on Alpine Linux (Docker), I ran into a few problems getting Elasticsearch up and running.

The first issue was a missing Java home path. Because of that, I wasn’t able to start Elasticsearch at all.

I fixed it by setting the missing JAVA_HOME path, which I later added to my profile settings: https://stackoverflow.com/questions/35325856/where-to-set-system-default-environment-variables-in-alpine-linux
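A minimal sketch of what that looks like, assuming the JDK lives under /usr/lib/jvm/default-jvm (the usual symlink for Alpine's OpenJDK packages; adjust the path to your installation):

```sh
# /etc/profile.d/java.sh -- picked up by login shells on Alpine
# NOTE: the JDK path is an assumption; point it at your actual installation.
export JAVA_HOME=/usr/lib/jvm/default-jvm
export PATH="$JAVA_HOME/bin:$PATH"
```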

The second issue was related to the Java version inside the Docker container.

Alpine Linux has the required version in its package repository, which made it easy to install the missing package with apk.
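Something along these lines, assuming the OpenJDK 11 package from the Alpine community repository:

```sh
# Install a compatible JDK from the Alpine package repository
apk add --no-cache openjdk11-jre-headless
```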

GCE – Open Up Port 8025 in VPC for MailHog


If you want to use MailHog ( https://github.com/mailhog/MailHog ) on your Google Compute Engine (GCE) instances, you probably have to create a new VPC firewall rule in your project. In this post, I will skip the installation and configuration of MailHog with Supervisord, because there are many different ways to set it up.

By default, the MailHog interface is running on port 8025.

[Screenshot: the MailHog web interface]

Once the MailHog process is up and running, you probably still won’t be able to reach your server on port 8025. This is because the port is not open in the VPC firewall. In order to get access from outside, you must create a new firewall rule in your GCE project. This can be done in the Google console or simply via the command line. Here is how I created a new rule "custom-allow-mailhog" for port 8025.
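Roughly the following gcloud command creates the rule (the source range 0.0.0.0/0 opens the port to everyone; restrict it to your own IP where possible):

```sh
gcloud compute firewall-rules create custom-allow-mailhog \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8025 \
    --source-ranges=0.0.0.0/0
```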

Once the rule has been created, you should be able to open the interface of MailHog.

GCE – SSH Directly Into Your Docker Container


In this article, I am going to change the SSH port from 22 to 52241 and create a new firewall rule in VPC. Please do your own research and make sure you understand the impact, especially on a production environment.

When you deploy a new Docker container on GCE, you can SSH into the host machine (VM). This is simply because the IP address points to the host machine. But what about the Docker container that runs on it? In this case, you must use docker exec or the gcloud command-line tool to get access to the container, which can become pretty time-consuming.

I am working on a project where I need direct access to my Alpine Docker container on port 22 or any other port. As you can see in the output below, no ports are exposed for my container.
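Reconstructed for illustration (the values are made up), the docker ps output looked something like this, with an empty PORTS column:

```sh
$ docker ps
CONTAINER ID   IMAGE                      COMMAND        CREATED       STATUS       PORTS   NAMES
3f2a1b9c8d7e   gcr.io/my-project/alpine   "/sbin/init"   2 hours ago   Up 2 hours           klt-my-instance-abcd
```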

This is mainly because the Docker container shares the network with the host. From the Google Cloud documentation:

Containerized VMs launch containers with the network set to host mode. A container shares the host network stack, and all interfaces from the host are available to the container.

This is very convenient because it allows you to access your Docker container on pretty much any port. However, only a few ports are actually open and configured in the VPC Network > Firewall settings, for security reasons. For the SSH connection, I decided to open a custom port, 52241. You can do this in your GCP Console under VPC Network > Firewall or with the gcloud command-line tool, as shown below.
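Here is a sketch of the gcloud equivalent (the rule name is an assumption; scope the rule to specific instances with --target-tags if you use network tags):

```sh
gcloud compute firewall-rules create custom-allow-ssh-52241 \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:52241 \
    --source-ranges=0.0.0.0/0
```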

Once completed, here is what the new firewall rule looks like in the console. If you scroll down, you will also see all affected instances.

[Screenshot: the custom firewall rule in the GCP console]

Before you can SSH into your container on port 52241, you must change the port in the sshd configuration /etc/ssh/sshd_config within your container or container image, not on the VM.
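The relevant directive in the sshd configuration:

```sh
# /etc/ssh/sshd_config (inside the container, not on the VM)
Port 52241
```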

You may have to restart your SSH service before you can use the new port. Once this is done, you can SSH into your Docker container.
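On an Alpine-based container this could look like the following (the service manager and user name are assumptions for your image):

```sh
# Inside the container: restart sshd so it picks up the new port
rc-service sshd restart    # or send SIGHUP to sshd if OpenRC isn't running

# From your machine: connect straight to the container through the host IP
ssh -p 52241 root@<instance-external-ip>
```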

AWS – ECS Keeps Recreating Docker Container


Today was a strange day. I launched a new version of my Docker image in an ECS cluster, which usually works fine. Today, however, my container kept getting recreated every five minutes for no apparent reason.

[Screenshot: ECS service events showing the container being recreated]

The new Docker image that I had pushed to the repository contained no significant changes, except a redirect from non-SSL (80) to SSL (443) for all loaded virtual hosts. When I had a look at the processes, everything looked fine. There was nothing that could put the container into an unhealthy state.

Even the log output of the Docker container didn’t show any errors. After a bit of research, I ended up on the following AWS developer guide page: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

It was not exactly what I was looking for, but it pretty much explained the problem I had. One of my colleagues suggested that I take a look at the load balancer’s health checks.

Indeed, when I looked at the settings in EC2 > Load Balancing > Target Groups > Health Checks, the health check expected a success code of 200 on port 80, but the new redirect returned a 301. Because of that, the container was marked unhealthy and recreated every five minutes.
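The fix is either to point the health check at a path that returns 200 or to accept the redirect as a success code. With the AWS CLI, allowing 301 looks roughly like this (the target group ARN is a placeholder):

```sh
aws elbv2 modify-target-group \
    --target-group-arn <your-target-group-arn> \
    --matcher '{"HttpCode":"200,301"}'
```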

[Screenshot: EC2 target group health check settings]

I hope this will point someone in the right direction.

Download Magento 1 Extensions after M1 EOL


Magento 1 extensions were removed from https://marketplace.magento.com about a month ago. According to Adobe’s recent blog posts, all extensions will be removed from their repository in early August, which causes a lot of frustration among developers who are still in the middle of moving towards Magento 2 or simply need the code to translate it into Magento 2 modules.

Source: https://community.magento.com/t5/Magento-DevBlog/How-Extension-Developers-Can-Prepare-for-M1-End-of-Life/ba-p/446216

Source: https://magento.com/blog/magento-news/support-magento-1-software-ends-june-30-2020

So, in order to download an M1 package, you simply have to point to a specific URL. Luckily, they haven’t turned off https://connect20.magentocommerce.com yet, but I believe Magento will get rid of it very soon.

I’ve spent a few minutes writing a "quick and dirty" PHP script that downloads the most recent version of each extension in https://connect20.magentocommerce.com/community/packages.xml, including package details such as name, description, and author. For example: https://connect20.magentocommerce.com/community/BankPayment/1.2.0/package.xml. A sketch of the script is included below.

If you download all packages, you will end up with about 1300 extensions.

However, the packages.xml file seems to list a lot of extensions that are no longer available, which leads to a 404.

Other good resources for downloading M1 extensions are https://packages.firegento.com and http://freegento.com/ddl-magento-extension.php.
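Here is a minimal sketch of such a downloader. The XML element names (p, n, r) and the .tgz archive URL pattern are assumptions inferred from the package.xml URL above; inspect packages.xml yourself and adjust them if the schema differs:

```php
<?php
// Quick-and-dirty Magento Connect 2.0 downloader (sketch).
// ASSUMPTIONS: packages.xml lists <p> entries with a name (<n>) and the
// latest release (<r>); archives live at /{name}/{version}/{name}-{version}.tgz.
$channel = 'https://connect20.magentocommerce.com/community';
$packages = simplexml_load_file($channel . '/packages.xml');

foreach ($packages->p as $p) {
    $name = (string) $p->n;
    $version = (string) $p->r;
    $url = sprintf('%s/%s/%s/%s-%s.tgz', $channel, $name, $version, $name, $version);

    // Many listed packages are gone and return a 404, so skip failures.
    $archive = @file_get_contents($url);
    if ($archive === false) {
        echo "Skipped: $url\n";
        continue;
    }
    file_put_contents("$name-$version.tgz", $archive);
    echo "Downloaded: $url\n";
}
```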

Feel free to change or share the code.

Octopus – Couldn’t find a valid ICU package installed on the system


This morning I created a new ECS environment and tried to deploy a Laravel project on it.

So far so good, until one of the deployment steps got stuck in Octopus because of the following error.
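The message, which also gives this post its title, comes from the .NET Core runtime that the Octopus deployment tooling runs on and reads along these lines:

```
Couldn't find a valid ICU package installed on the system.
Set the configuration flag System.Globalization.Invariant to true if you want
to run with no globalization support.
```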

While looking for a solution, I noticed that I had forgotten to install the libicu package on the new ECS machine, which is required during the deployment process.
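On an Amazon Linux based machine (an assumption; use your distribution's package manager otherwise), that is a one-liner:

```sh
sudo yum install -y libicu
```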

After installing the missing package, I was able to deploy again without errors.

Magento 2.3.x – Unit Tests with ScopeConfigInterface


Imagine you have a helper class that is responsible for downloading and processing JSON files. This is something you don’t want to test manually all the time. I didn’t implement many unit tests in the past, simply because it wasn’t necessary or just not scoped in a project. However, there are cases where you actually save a lot of valuable time with just a few simple unit tests.

In this article, I would like to show you how to inject store configurations (ScopeConfigInterface) into your mocked classes, which is necessary to test multiple scenarios. In the following example, I am going to test a helper class and make sure the following scenarios work as expected.

  1. Is the module enabled in Stores > Configuration > General?
  2. Is a URL defined?
  3. Does the URL return a correct HTTP response?
  4. Is the downloaded content a valid JSON string?

Here is what my module looks like. I basically created a new helper Helper\Curl.php and a unit test file Test\Unit\Helper\CurlTest.php.

I’ve added some basic methods to the helper, such as isEnabled(), getFileUrl(), getStatusCode(), isJson() and downloadFile(). Nothing fancy, just a helper that can download a file.
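The helper is not reproduced here in full, but a sketch of what it might look like follows. The namespace and the system configuration paths are assumptions:

```php
<?php
namespace Vendor\Module\Helper;

use Magento\Framework\App\Helper\AbstractHelper;
use Magento\Store\Model\ScopeInterface;

class Curl extends AbstractHelper
{
    // Assumed configuration paths; match them to your system.xml
    const XML_PATH_ENABLED  = 'vendor_module/general/enabled';
    const XML_PATH_FILE_URL = 'vendor_module/general/file_url';

    public function isEnabled(): bool
    {
        return $this->scopeConfig->isSetFlag(self::XML_PATH_ENABLED, ScopeInterface::SCOPE_STORE);
    }

    public function getFileUrl(): ?string
    {
        return $this->scopeConfig->getValue(self::XML_PATH_FILE_URL, ScopeInterface::SCOPE_STORE);
    }

    public function getStatusCode(string $url): int
    {
        // HEAD request via cURL; returns only the HTTP status code
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_exec($ch);
        $status = (int) curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        return $status;
    }

    public function isJson(string $content): bool
    {
        json_decode($content);
        return json_last_error() === JSON_ERROR_NONE;
    }

    public function downloadFile(string $url)
    {
        // Returns the file contents, or false on failure
        return file_get_contents($url);
    }
}
```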

Usually, if you want to use system variables from core_config_data, you have to inject the ScopeConfigInterface into your constructor. In unit tests, it is quite similar. First, you have to create a mock of the ScopeConfigInterface.
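A sketch, using PHPUnit's createMock() inside setUp() (on the PHPUnit version bundled with newer Magento releases you would add a void return type to setUp()):

```php
// Test\Unit\Helper\CurlTest (sketch, property declarations omitted)
protected function setUp()
{
    // All interface methods return null until we configure them.
    $this->scopeConfigMock = $this->createMock(
        \Magento\Framework\App\Config\ScopeConfigInterface::class
    );
}
```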

Next, you need a mock of the Context, although this is not always required.
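Because the helper extends AbstractHelper, it pulls the scope config out of the Context, so the Context mock has to hand back the mock we just created:

```php
$this->contextMock = $this->getMockBuilder(\Magento\Framework\App\Helper\Context::class)
    ->disableOriginalConstructor()
    ->getMock();

// AbstractHelper reads the scope config from the Context.
$this->contextMock->method('getScopeConfig')
    ->willReturn($this->scopeConfigMock);
```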

Now you can inject these two mocks into the helper by using setConstructorArgs, as in the example below.
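A sketch of that step. Passing null to setMethods() tells PHPUnit to keep all original method bodies, so the real helper code runs against the mocked configuration:

```php
$this->helper = $this->getMockBuilder(\Vendor\Module\Helper\Curl::class)
    ->setConstructorArgs([$this->contextMock])
    ->setMethods(null) // keep the original method bodies
    ->getMock();
```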

Now you can define values for the scope configuration. This allows you to enable or disable your module or set arbitrary values and see how your code behaves. In the example below, I simply enable the module ($isEnabled) and set the file URL ($fileUrl) to https://raw.githubusercontent.com/ljharb/json-file-plus/master/package.json, which I will test with assertEquals and assertNotContains.
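Roughly like this; the mocked return values stand in for the settings that would normally come from core_config_data:

```php
$isEnabled = true;
$fileUrl = 'https://raw.githubusercontent.com/ljharb/json-file-plus/master/package.json';

// Pretend the module is enabled and a file URL is configured.
$this->scopeConfigMock->method('isSetFlag')->willReturn($isEnabled);
$this->scopeConfigMock->method('getValue')->willReturn($fileUrl);

$this->assertTrue($this->helper->isEnabled());
$this->assertEquals($fileUrl, $this->helper->getFileUrl());
$this->assertNotContains('localhost', $this->helper->getFileUrl());
```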

I also define an expected value for the cURL HTTP status and an expected return value of isJson(). Okay, let’s run the test and see what happens.
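The remaining assertions might look like this:

```php
$content = $this->helper->downloadFile($this->helper->getFileUrl());

$this->assertEquals(200, $this->helper->getStatusCode($this->helper->getFileUrl()));
$this->assertTrue($this->helper->isJson($content));
```

And the run itself, using Magento's unit test configuration:

```sh
vendor/bin/phpunit -c dev/tests/unit/phpunit.xml.dist \
    app/code/Vendor/Module/Test/Unit/Helper/CurlTest.php
```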

All assertions within the test have succeeded. Let’s see what happens when I change the URL slightly. It should return a 404.

Yes, the test fails because I’ve received a status code 404 instead of 200.

That’s it. If you have questions or need help, please let me know in the comments.

Octopus – Basic "Staging First" Lifecycle


In this post, I am going to show you how to prevent a release from being deployed directly to production if it hasn’t been deployed to staging. Octopus has a great feature called Lifecycles that allows you to create advanced deployment workflows for any type of project.

[Screenshot: example Lifecycles in Octopus]

This way, you are able to force your team to follow a strict deployment process that will definitely improve your test routine and the way you deploy code changes. No more untested code changes on production!

Okay, let’s start. Here is how you can create a custom Lifecycle.

  • From the Lifecycle page, click on the ADD LIFECYCLE button.
  • Give the Lifecycle a name and add a description. For example "Project X".
  • You can leave the Retention Policy unchanged (Keep all) for now.

Scroll down and continue with the next tab.

  • Click ADD PHASE to explicitly define the phases of the lifecycle.
  • Give the phase a name, for example "Staging", because I want you to deploy and review your changes on staging first.
  • Click ADD ENVIRONMENT to define which environments can be deployed to during this phase of the lifecycle. Choose your staging environment from the dropdown list.

[Screenshot: selecting an environment in Octopus]

If you haven’t added an environment yet, you must create at least two (staging and production). Learn more about Environments.

Scroll up again and save the new Lifecycle. Now go back to your project and click the Process link in the left sidebar. Next, hit the Change button on the right-hand side, select the Lifecycle you’ve just created, and save the changes.

[Screenshot: the Process page with the selected Lifecycle]

After that, try to deploy a new release. You will see that there is no option to deploy to production. You must deploy your code successfully to your staging environment first in order to be able to push changes to production.

[Screenshot: the Octopus deploy release page]

This is just one of many examples of how you can improve your deployment routine with Octopus.

Octopus – How to fix "Missing deployment buttons"


I saw a few comments on Stack Overflow where people ask how to set up or re-enable deployment buttons for staging or production. The answer is not quite straightforward, because you have to review a few things. Here is a quick checklist that may help you identify the cause of missing deployment buttons.

1. Process

Make sure you have selected the right Lifecycle in Projects > Your Client > Deployments > Process.

[Screenshot: the Process page in the Octopus sidebar]

2. Lifecycles

If you believe a phase or environment is missing, go to Library > Lifecycles and try to add the missing environment in the Phases section.

[Screenshot: Lifecycle details with phases]

3. Environments

In order to be able to add a new phase, make sure all required environments are listed in Infrastructure > Environments.

[Screenshot: the environments list in Octopus]

Learn more about Environments.

October CMS – Execute Console Commands Sequentially


In this tutorial, I am going to show you how to execute long-running PHP processes sequentially in October CMS from anywhere in your code.

First, you must define a default driver, which is essentially a handler that manages how a queued job is run, identifies whether the job succeeded or failed, and retries the job if necessary. In October CMS, the driver settings are located in ./config/queue.php.
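A minimal sketch of that file with the database driver as default (the values follow the stock Laravel layout that October builds on):

```php
<?php
// ./config/queue.php (sketch)
return [
    // 'sync' would run jobs inline; 'database' queues them in the jobs table.
    'default' => 'database',

    'connections' => [
        'database' => [
            'driver'      => 'database',
            'table'       => 'jobs',
            'queue'       => 'default',
            'retry_after' => 90,
        ],
    ],
];
```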

If you are new to this, I recommend starting with the database driver. This driver is good enough to execute your console commands sequentially in the background without slowing down your PHP calls. As a next step, you have to register a custom console command in your Plugin.php file, for example company.pluginname.

For test purposes, I created a console command company.pluginname which executes the method backupList, as shown below.
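A sketch of both pieces. The vendor and plugin names (Company.PluginName) and the class name BackupList are assumptions:

```php
// Plugin.php: make the command known to Artisan
public function register()
{
    $this->registerConsoleCommand(
        'company.pluginname',
        \Company\PluginName\Console\BackupList::class
    );
}
```

```php
<?php namespace Company\PluginName\Console;

use Illuminate\Console\Command;

class BackupList extends Command
{
    /** @var string The console command name. */
    protected $name = 'company:pluginname';

    /** @var string The console command description. */
    protected $description = 'Build the customer list and store it on disk.';

    public function handle()
    {
        $this->backupList();
    }

    protected function backupList()
    {
        // The long-running work goes here: call the API, build the list,
        // write the result to disk.
    }
}
```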

Let’s assume you have a front-end controller http://yourpage.dev/customer/list that calls an API, creates a list and saves it on disk. Executing all these steps can be time-consuming and simply slow down your controller. Instead, you can run such processes in the background with Artisan::queue.
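Inside the controller action this could look as follows (the method name is an assumption):

```php
use Illuminate\Support\Facades\Artisan;

public function customerList()
{
    // Push the console command onto the queue; the response returns
    // immediately while a background worker does the heavy lifting.
    Artisan::queue('company:pluginname');
}
```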

As soon as you hit the controller, Artisan will queue a new record in the jobs table and wait until a queue worker executes the job. You just have to make sure that a queue worker is running in the background, otherwise jobs won’t get executed.
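For a quick local test, you can start a worker by hand:

```sh
php artisan queue:work --sleep=3 --tries=3
```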

I recommend starting a queue worker with Supervisor, which is well explained at https://medium.com/@rohit_shirke/configuring-supervisor-for-laravel-queues-81e555e550c6