Octopus – Basic "Staging First" Lifecycle


In this post, I am going to show you how to prevent a release from being deployed directly to production if it hasn't been deployed to staging first. Octopus has a great feature called Lifecycles that allows you to create advanced deployment workflows for any type of project.

Octopus Lifecycles Example

This way you can force your team to follow a strict deployment process, which will improve your test routine and the way you deploy code changes. No more untested code changes in production!

Okay, let’s start. Here is how you can create a custom Lifecycle.

  • From the Lifecycles page, click on the ADD LIFECYCLE button.
  • Give the Lifecycle a name and add a description, for example "Project X".
  • You can leave the Retention Policy unchanged (Keep all) for now.

Scroll down and continue with the next tab.

  • Click ADD PHASE to explicitly define the phases of the lifecycle.
  • Give the phase a name, for example "Staging", because I want you to deploy and review your changes on staging first.
  • Click ADD ENVIRONMENT to define which environments can be deployed to during this phase of the lifecycle. Choose your staging environment from the dropdown list.

Octopus Select Environment

If you haven't added an environment yet, you must create at least two (staging, production). Learn more about Environments.

Scroll up again and save the new Lifecycle. Now go back to your project and click the Process link in the left sidebar. Next, hit the Change button on the right-hand side, select the Lifecycle that you've just created, and save the changes.

Octopus Process Lifecycle

After that, try to deploy a new release. You will see that there is no option to deploy to production. You must first deploy your code successfully to your staging environment before you are able to push changes to production.

Octopus Deploy Release

This is just one of many examples of how you can improve your deployment routine with Octopus.

Octopus – How to fix "Missing deployment buttons"


I saw a few comments on Stack Overflow where people ask how to set up or re-enable deployment buttons for staging or production. The answer is not quite straightforward, because you have to review a few things. Here is a quick checklist that may help you identify the cause of missing deployment buttons.

1. Process

Make sure you have selected the right Lifecycle in Projects > Your Client > Deployments > Process.

Octopus Lifecycles Sidebar

2. Lifecycles

If you believe a Phase / Environment is missing, go to Library > Lifecycles and try to add the missing environment in the Phases section.

Octopus Lifecycle Details

3. Environments

In order to be able to add a new phase, make sure all required environments are listed in Infrastructure > Environments.

Octopus Environments List

Learn more about Environments.

Octopus – How to fix "There must be at least one enabled healthy machine to deploy"


I am pretty new to Octopus and just started implementing a deployment process for a new PHP project. After creating a new machine in Infrastructure > Deployment Targets, I went back to Projects > My Project > Create Release and tried to start a deployment to our dev environment, which failed with the error from the title.

As you can see in the screenshot below, I tried to deploy a new release to the target Development.

Octopus Deploy Release Error Healthy Machine

The error message was strange, because the status of my Deployment Target was flagged as healthy. After comparing the settings I noticed that the environment Development was simply missing from my deployment target settings.

Octopus Deployment Target Settings

After adding the environment Development to the list of Environments, I was able to proceed with the above deployment.

Notice: Use of undefined constant T_CURLY_OPEN – assumed 'T_CURLY_OPEN' in /var/www/src/setup/src/Magento/Setup/Module/Di/Code/Reader/FileClassScanner.php on line 72

Today I finally had some time to clean up my Dockerfile for Magento 2. I am using Alpine Linux for most of my Magento 2 projects, which usually include NGINX, PHP-FPM and MariaDB.

While testing my updated Dockerfile, I came across the PHP notice from the title, which prevented the deployment scripts from finishing the Magento 2 setup.

The problem was simply the missing PHP extension php7.1-tokenizer. After installing it and re-creating the container with --build, the issue was resolved.
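For reference, here is roughly what the fix looks like in an Alpine-based Dockerfile. The exact package name depends on your PHP version and repository, so treat php7-tokenizer as an assumption and adjust it to your setup:

    # Dockerfile excerpt: install the tokenizer extension next to the other PHP packages
    RUN apk add --no-cache php7-tokenizer

After rebuilding the image, re-create the container (for example with docker-compose up -d --build) so the new layer is actually used.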


Invalid value for field resource.machineType in Google Compute Engine


Currently I am working on a lightweight PHP class which should help me create, list and delete VM instances on Google Compute Engine. While writing the class, I came across a few issues I would like to share with you. In the following example I tried to create a micro instance, because I didn't need much computing power. As you probably know, here is how you can select the machine type via the dashboard.

Google Compute Engine – selecting the micro machine type

When I tried to create a micro instance via the PHP API with the correct machine type, I received the Invalid value for field resource.machineType error.

Here is a part of a PHP file I am using for API connection tests.

As you can see, I set the plain string f1-micro, which is obviously not valid. I double-checked the API guides and found this URL.

So I defined the value for the machine type as a full machine type URL, which fixed the invalid value for field resource.machineType error.
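In short, the machineType field expects a machine type URL instead of a plain name. The project and zone below are just placeholders:

    # not accepted
    f1-micro

    # accepted: partial URL, relative to the Compute Engine API endpoint
    zones/europe-west1-b/machineTypes/f1-micro

    # accepted: full URL
    https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-b/machineTypes/f1-micro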


AWS RDS – You Need (at least one of) The SUPER Privilege(s)

Today I had the opportunity to move a local Magento database to Amazon RDS. Based on the well-documented article Creating a DB Instance Running the MySQL Database Engine, I was optimistic that I would finish the migration pretty soon. But as you know, there is always something that goes wrong.

After creating the database instance (class db.m3.large) I decided to proceed with some test imports before changing any database credentials in Magento.

Good decision, because the first import didn't run through; it failed with the SUPER privilege error from the title.

I was able to fix the problem by adding the missing parameter log_bin_trust_function_creators, as explained here. Before the next import I had to delete the database and create a new one.
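If you prefer the AWS CLI over the console, the parameter can be enabled roughly like this. The parameter group name my-magento-params is a placeholder, and it has to be a custom parameter group attached to your instance, since the default group can't be modified:

    aws rds modify-db-parameter-group \
      --db-parameter-group-name my-magento-params \
      --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"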

Unfortunately, there was another problem: a DEFINER which tried to create a trigger as a different user. On Amazon RDS, only a user with SUPER privileges can do that. Therefore, I had to remove the DEFINER from the MySQL schema you see below.
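A common way to do that is to strip the DEFINER clauses from the dump before importing, for example with sed. The dump file name is just an example:

    # remove DEFINER=`user`@`host` clauses from the dump before importing it into RDS
    sed -i 's/DEFINER=`[^`]*`@`[^`]*`//g' magento_dump.sql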

That’s it. Now I was able to import my DB on Amazon RDS without errors.


Nginx Docker Container and getpwnam(“www-data”) problem

I am using nginx:1.10.1-alpine to run Nginx as a Docker container. Today I replaced the Nginx container and kept getting the status Restarting (1) Less than a second ago after starting the container.

With the command docker logs nginx I got the getpwnam("www-data") error from the title.

The error message basically means that the user www-data, which I have defined in /etc/nginx/nginx.conf, doesn't exist in /etc/passwd. That makes sense, because nginx:1.10.1-alpine comes with a minimal setup. In my case I need the user www-data for other linked PHP-FPM containers. To fix the problem I just added the missing user during the container build.
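As a sketch, the relevant part of the Dockerfile looks roughly like this. The UID/GID 82 matches what the official PHP Alpine images use for www-data; adjust it to whatever your php-fpm containers expect:

    FROM nginx:1.10.1-alpine

    # create the www-data user expected by /etc/nginx/nginx.conf
    # (the www-data group may already exist on Alpine, hence the || true)
    RUN addgroup -g 82 -S www-data || true \
     && adduser -u 82 -D -S -G www-data www-data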

Voilà.

Nginx and Cached CSS / JS files

Yesterday I had a pretty hard time with cached CSS / JS files in my development environment. I was supposed to review layout changes, but I always got the old CSS files. I spent about two hours finding the reason. In the end, the solution was to turn off sendfile in my virtual host configuration.

If you use Nginx as a reverse proxy to serve pages from an application server, you can leave sendfile turned off.
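For completeness, here is a minimal sketch of such a virtual host; the server name and document root are placeholders:

    server {
        listen 80;
        server_name dev.example.com;
        root /var/www/html;

        # serve files without sendfile to avoid stale CSS / JS on the dev box
        sendfile off;
    }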


Reload Nginx Inside Docker Container

I am using Docker for Nginx, which is running with multiple virtual hosts. Sometimes I have to reload my updated Nginx configuration, but I don't want to restart the container each time. Here is how you can reload Nginx without any downtime and without interrupting any connections.

1. Find your container name

Use docker ps to find your nginx container.

2. Reload Nginx

With docker exec -it {container_name} {command} you can directly access your container and execute commands. In my case the name of the container is nginx-server.
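Putting it together, a config test followed by a reload looks like this (assuming the container is really called nginx-server):

    docker ps --filter "name=nginx"           # find the container name
    docker exec nginx-server nginx -t         # optional: validate the configuration first
    docker exec nginx-server nginx -s reload  # reload Nginx without restarting the container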