GCE – Open Up Port 8025 in VPC for MailHog

If you want to use MailHog (https://github.com/mailhog/MailHog) on your Google Compute Engine (GCE) instances, you probably have to create a new VPC firewall rule in your project. In this post I will skip the installation and configuration of MailHog with Supervisord, because there are many different ways to set it up.

By default, the MailHog web interface runs on port 8025.

Once the MailHog process is up and running, you probably still won't be able to reach your server on port 8025. That's because the port is not open in your VPC firewall. To get access from outside, you must create a new firewall rule in your GCE project. This can be done in the Google Cloud Console or simply via the command line. Here is how I created a new rule custom-allow-mailhog for port 8025.
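
A minimal sketch with the gcloud CLI; the network name (default) and the open source range (0.0.0.0/0) are assumptions, so restrict them to your setup, e.g. with --target-tags:

    gcloud compute firewall-rules create custom-allow-mailhog \
        --network=default \
        --allow=tcp:8025 \
        --source-ranges=0.0.0.0/0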

Once the rule has been created, you should be able to open the interface of MailHog.

AWS – ECS Keeps Recreating Docker Container

Today was a strange day. I launched a new version of my Docker image within an ECS cluster, which usually works fine. Today, however, my container kept getting recreated every five minutes for no apparent reason.

The new Docker image I had pushed to the repository contained no significant changes, except a redirect from non-SSL (80) to SSL (443) for all loaded virtual hosts. When I had a look at the processes, everything looked fine. There was nothing that could put the container into an unhealthy state.

Even the log output of the Docker container didn't show any errors. After a bit of research, I ended up on the following AWS developer guide page: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

It was not exactly what I was looking for, but it pretty much explained the problem I had. One of my colleagues then suggested I have a look at the health checks of the load balancer.

Indeed, when I looked at the settings in EC2 > Load Balancing > Target Groups > Health Checks, the health check expected success code 200 on port 80, but the new redirect now returned a 301. Because of that, the container was marked unhealthy and replaced every five minutes.

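One fix is to accept the redirect status as a success code (or to health-check a path that still returns 200). A sketch with the AWS CLI; the target group ARN is a placeholder:

    aws elbv2 modify-target-group \
        --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/0123456789abcdef \
        --matcher HttpCode=301
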
I hope this will point someone in the right direction.

Invalid value for field resource.machineType in Google Compute Engine

Currently I am working on a lightweight PHP class that should help me create, list, and delete VM instances on Google's Compute Engine. While writing the class, I came across a few issues I would like to share with you. In the following example I tried to create a micro instance, because I didn't need much computing power. As you probably know, in the dashboard you would simply select the machine type (e.g. f1-micro) from a drop-down list.

When I tried to create a micro instance via the PHP API with the correct machine type, I received the following error message.
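
I no longer have the exact response at hand, but the error looked roughly like this (a reconstruction, so treat the wording as approximate):

    Invalid value for field 'resource.machineType': 'f1-micro'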

Here is part of the PHP file I am using for API connection tests.
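
A minimal sketch with the official google/apiclient package; the project ID, zone, instance name, and credentials file are placeholders, and disks/network interfaces are omitted for brevity, so the call won't fully succeed as-is:

    <?php
    require 'vendor/autoload.php';

    $client = new Google_Client();
    $client->setAuthConfig('service-account.json'); // hypothetical credentials file
    $client->addScope('https://www.googleapis.com/auth/compute');

    $compute  = new Google_Service_Compute($client);
    $instance = new Google_Service_Compute_Instance();
    $instance->setName('test-instance');

    // The plain machine type name is exactly what triggers the error:
    $instance->setMachineType('f1-micro');

    // 'my-project-id' and 'us-central1-a' are placeholders
    $compute->instances->insert('my-project-id', 'us-central1-a', $instance);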

As you can see, I set the plain string f1-micro, which is obviously not valid here. I double-checked the API guides and found that the machineType field expects a full or partial machine type URL rather than just the name.

So I defined the value for the machine type as follows, which fixed the invalid value for field resource.machineType error.
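
A sketch of the corrected call; project ID and zone are placeholders:

    // Full machine type URL instead of the plain name; the zone-relative
    // form 'zones/us-central1-a/machineTypes/f1-micro' should work as well.
    $instance->setMachineType('https://www.googleapis.com/compute/v1/projects/my-project-id/zones/us-central1-a/machineTypes/f1-micro');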

Bulk Update Customizable Product Options in Magento2

Today I had to bulk-update the Price and Max Characters values in the Customizable Options tab for hundreds of simple products, based on the option title.

I knew that it could be done programmatically, but I figured out that it can also be done just by updating the tables catalog_product_option and catalog_product_option_price. First I got a comma-separated list of option_id values based on a specific title.
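
A sketch of the lookup, assuming the option titles live in catalog_product_option_title; 'My Option Title' stands in for the real title:

    SELECT GROUP_CONCAT(option_id)
    FROM catalog_product_option_title
    WHERE title = 'My Option Title';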

With that comma-separated list I was able to update the values of Price and Max Characters as follows.

1. Price
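
For example, setting the price to 9.99 (an arbitrary value) for the collected option IDs:

    UPDATE catalog_product_option_price
    SET price = 9.99
    WHERE option_id IN (1,2,3); -- paste the comma-separated list here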

2. Max Characters
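
And the same idea for the max_characters column, here with an assumed limit of 255:

    UPDATE catalog_product_option
    SET max_characters = 255
    WHERE option_id IN (1,2,3); -- same list of option IDs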

It is possible that you have to flush the cache (php bin/magento cache:flush) and re-index everything (php bin/magento indexer:reindex) to see the results.

AWS RDS – You Need (at least one of) The SUPER Privilege(s)

Today I had the opportunity to move a local Magento database to Amazon RDS. Based on the well-documented article Creating a DB Instance Running the MySQL Database Engine, I was optimistic that the migration would be finished pretty soon. But as you know, there is always something that goes wrong.

After creating the database instance (class db.m3.large), I decided to run some test imports before changing any database credentials in Magento.

Good decision, because the first import didn't run through.
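
The import aborted with MySQL error 1419; I don't have the original output anymore, but it looks roughly like this:

    ERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is
    enabled (you *might* want to use the less safe log_bin_trust_function_creators
    variable)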

I was able to fix the problem by setting the missing parameter log_bin_trust_function_creators, as explained here. Before the next import, I had to delete the database and create a new one.
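
On RDS you can't set this variable per session; it has to go into a custom DB parameter group attached to the instance. A sketch with the AWS CLI, where the group name custom-mysql56 and the family mysql5.6 are assumptions:

    aws rds create-db-parameter-group \
        --db-parameter-group-name custom-mysql56 \
        --db-parameter-group-family mysql5.6 \
        --description "Custom parameters with trusted function creators"

    aws rds modify-db-parameter-group \
        --db-parameter-group-name custom-mysql56 \
        --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"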

Unfortunately, there was another problem: a DEFINER which tried to create a trigger as a different user. On Amazon RDS only a user with SUPER privileges can do that. Therefore, I had to remove the DEFINER statements from the MySQL dump, like the one you see below.
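
A typical DEFINER clause in a mysqldump file, and a one-liner to strip such clauses; dump.sql is a placeholder, and you should test this on a copy first:

    /*!50017 DEFINER=`root`@`localhost`*/

    # remove all DEFINER clauses in place
    sed -i 's/\sDEFINER=`[^`]*`@`[^`]*`//g' dump.sql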

That’s it. Now I was able to import my DB on Amazon RDS without errors.

Find Malware in Image Files

It's a nightmare: your production environment was compromised, and you don't know how, or how much data was stolen. There are certainly different ways to be compromised. In this post I just want to explain how attackers can get full access to your production environment by using image files.

As explained on Snapfast and Sucuri, it is pretty easy to store any kind of PHP code in EXIF headers. Oftentimes a simple script is all it takes to create an administrator account. If you are using third-party extensions with an image upload function, or if you are late with the latest security update, the risk of being compromised is high.

You can use commands like svn status or git status to find changes, but this is no guarantee of finding malicious code, because oftentimes your /media/ or similar folders are not under version control.

Here is one simple command you can use to find infected image files.
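
A minimal sketch that greps uploaded images for typical PHP payload markers; the media/ path and the pattern list are assumptions you should adapt:

    find media/ -type f -iregex '.*\.\(jpg\|jpeg\|png\|gif\)' \
        -exec grep -l 'base64_decode\|eval(' {} \;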

If any file is infected, its path will show up in the search result.

5 useful commands for your daily work with Magento

1. Backup your media/catalog

Create a manual backup of media/catalog without the /cache/ folder.
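
A sketch with GNU tar, run from the Magento root and assuming the Magento 1 layout where the image cache lives in media/catalog/product/cache:

    tar -czf catalog_backup.tar.gz --exclude='media/catalog/product/cache' media/catalog/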

The same with the current date in the filename.
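
Same assumptions as above, with the date embedded in the archive name:

    tar -czf catalog_backup_$(date +%Y-%m-%d).tar.gz \
        --exclude='media/catalog/product/cache' media/catalog/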

2. Files not owned by apache

Find files which are not owned by the apache user or group. This is useful if you want to detect possible permission issues.
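
A sketch, assuming the web server runs as apache (on Debian/Ubuntu it is usually www-data):

    find . ! -user apache -o ! -group apache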

3. SQL / CSV files in your document root

You should never leave database dumps or other export files with sensitive information in your document root, because there is a potential risk that those files can be downloaded from outside, depending on your security settings. To make sure your production environment is safe, just run a simple find with the option -iregex.
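
A sketch run from the document root; extend the extension list as needed:

    find . -type f -iregex '.*\.\(sql\|csv\)'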

4. List report files created in the last 24 hours
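
A sketch, assuming the Magento 1 layout where reports live in var/report/:

    find var/report/ -type f -ctime -1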

Here are some more options you can use for -ctime.
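
For example (times refer to the file's last status change):

    find var/report/ -type f -ctime +7   # changed more than 7 days ago
    find var/report/ -type f -ctime 1    # changed between 24 and 48 hours ago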

5. Truncate log files

A fast-growing system.log or exception.log can be the cause of a server outage. If you want to empty log files on a production environment, the command truncate will help you.
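
Assuming Magento 1 log paths:

    truncate -s 0 var/log/system.log var/log/exception.log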

For further debugging you can keep some log data by specifying a size instead.
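
This keeps the first 10 MB and discards the rest (the size is an arbitrary choice):

    truncate -s 10M var/log/system.log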

Practical grep command examples for your Magento log file audit

Improve your bug-fixing routine with some simple grep commands. Run these commands before any production deployment, just to make sure your new code is working properly. To reduce the output on your terminal, extend my examples as you can see below.

Count the results with wc -l at the end.
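
For example, counting occurrences of an assumed pattern in system.log:

    grep 'Undefined variable' var/log/system.log | wc -l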

Or limit the input to the last 1000 rows with tail -n 1000 at the beginning.
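
For example, scanning only the most recent entries:

    tail -n 1000 var/log/system.log | grep 'Undefined variable'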

1. System.log

The individual grep patterns for 1.1 to 1.8 are collected in one sketch after this list.

1.1 Access to an array by a key that doesn’t exist

1.2 Use a variable that wasn’t previously defined

1.3 Debug output generated by Mage::log()

1.4 Any kind of permission issues

1.5 Missing Magento template files

1.6 Access to a non-object

1.7 Missing argument in a function call

1.8 Invalid argument supplied to a loop

Find all:
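
The patterns below match the typical PHP messages Magento writes to var/log/system.log for cases 1.1 to 1.8, followed by a combined search; the exact wording can vary between PHP versions, so treat them as a starting point:

    grep 'Undefined index'                       var/log/system.log  # 1.1
    grep 'Undefined variable'                    var/log/system.log  # 1.2
    grep 'DEBUG'                                 var/log/system.log  # 1.3 Mage::log() output
    grep -i 'Permission denied'                  var/log/system.log  # 1.4
    grep 'Not valid template file'               var/log/system.log  # 1.5
    grep 'non-object'                            var/log/system.log  # 1.6
    grep 'Missing argument'                      var/log/system.log  # 1.7
    grep 'Invalid argument supplied for foreach' var/log/system.log  # 1.8

    # 1.1 - 1.8 combined
    grep -iE 'Undefined (index|variable)|DEBUG|Permission denied|Not valid template file|non-object|Missing argument|Invalid argument supplied for foreach' var/log/system.log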

2. Exception.log

The patterns for 2.1 to 2.4 are collected in one sketch after this list.

2.1 Any kind of XML issues

2.2 Specific Authorize.net gateway issues

2.3 Specific PayPal gateway issues

2.4 Find general curl issues

Find all:
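
Again a sketch for 2.1 to 2.4, plus a combined search over var/log/exception.log; adjust the patterns to your payment setup:

    grep -i 'xml'       var/log/exception.log  # 2.1
    grep -i 'authorize' var/log/exception.log  # 2.2 Authorize.net
    grep -i 'paypal'    var/log/exception.log  # 2.3
    grep -i 'curl'      var/log/exception.log  # 2.4

    # 2.1 - 2.4 combined
    grep -iE 'xml|authorize|paypal|curl' var/log/exception.log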

3. Reports

Find the last 10 error report files and grep them for any kind of error message, as in the sketch after this list.

3.1 MySQL connection issues

3.2 Invalid config field

3.3 Unable to read response

3.4 External urls redirect

Find all:
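
A sketch for the report files in var/report/ covering 3.1 to 3.4; the patterns for 3.2 to 3.4 are guesses at the typical Magento wording, so refine them for your codebase:

    # the 10 newest report files
    ls -t var/report/ | head -10

    grep -rl 'SQLSTATE'                var/report/  # 3.1
    grep -rl 'Invalid config field'    var/report/  # 3.2
    grep -rl 'Unable to read response' var/report/  # 3.3
    grep -rli 'external url'           var/report/  # 3.4

    # 3.1 - 3.4 combined
    grep -rliE 'SQLSTATE|Invalid config field|Unable to read response|external url' var/report/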

4. Apache error logs

4.1 Find missing files, whatever the cause
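
Apache logs missing files with the message "File does not exist"; the log path below is the Debian/Ubuntu default and may differ on your system:

    grep 'File does not exist' /var/log/apache2/error.log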

Set up a cron job for the apache user

As long as you don't need root permissions in your PHP application or script, it is recommended to set up cron jobs with the web server's own user.

1. Find the apache user

Get the current apache user from the process status.
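
A sketch; depending on the distribution the process is called httpd or apache2:

    ps -eo user,comm | egrep 'httpd|apache2' | grep -v root | sort -u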

2. Edit the crontab

Edit the crontab with the option -u and the specific apache user. This command will create a user-specific cron file in /var/spool/cron/crontabs/.
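
Assuming the user found in step 1 is www-data:

    sudo crontab -u www-data -e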

Set up your crontab and save the changes with CTRL-O (if your editor is nano).
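
A hypothetical entry that runs a PHP script every five minutes (path and schedule are placeholders):

    */5 * * * * php /var/www/html/shell/my_script.php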

3. Check the syslog file

After that, check the syslog file to make sure that the cronjob is running correctly.
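
On Debian/Ubuntu, cron logs to /var/log/syslog:

    grep CRON /var/log/syslog | tail -n 20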

Query to get size of database tables in MB

If your hosting environment has no database monitoring, you should monitor it manually once in a while, or with your own shell script, to catch rapidly growing MySQL tables. Large MySQL tables (I am talking about 2-10 GB with millions of rows) often cause random performance issues, especially if you are using community modules with poorly written MySQL queries.
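
The query reads the sizes from information_schema; replace my_database with your schema name:

    SELECT table_name AS `Table`,
           ROUND((data_length + index_length) / 1024 / 1024, 2) AS `Size in MB`
    FROM information_schema.TABLES
    WHERE table_schema = 'my_database'
    ORDER BY (data_length + index_length) DESC;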

The output of the query lists each table together with its size in MB, the largest tables first.