Preamble
CI/CD is a set of practices for packaging, testing and deploying an application continuously, at any time.
It relies on an integration pipeline that automatically deploys the application the same way every time.
Continuous integration (CI) covers all the steps needed to publish a new version of a site or application: automatically running tests when code is added, checking its quality, and so on.
This pipeline is often connected to tools such as SonarQube for code quality, the project's testing framework, etc.
This makes it possible to test the application as a whole and to verify that it meets our quality and reliability requirements before it reaches production.
Continuous deployment (CD) is the continuation of this same integration chain: it only deploys versions that have been validated by the continuous integration pipeline. Continuous deployment also assumes that the package is reliable.
It also implies being able to roll back at any time.
These practices are part of the DevOps shift, which merges software development with the infrastructure side, in particular deployment here.
The movement accelerated with the arrival of Docker-style containers for hosting applications.
Return to 2010
In 2010, the web landscape was very different from the one we know today. All the tools we now take for granted either did not exist or were in their infancy.
At the time, we followed WordPress's own practices:
- SVN for versioning
- Versioning of everything in the code, whether it's ours or not
- Development branch, trunk, tags etc.
But also:
- Development on a central server, uploads via FTP
- Updates to WordPress, plugins and themes done by hand, with all the problems that can cause
Deployment was of the same ilk: as an agency we had to adapt to different customers, each with their own IT environment.
Depending on the case, we might have FTP access, a zip to drop somewhere, or a site to send by email, etc.
Moreover, each hosting environment was different, with its own limitations and access.
This led to one deployment policy per project, scattered credentials, and written procedures that were not always easy to follow on the spot.
Each site update depended on the workstation's Internet connection and the FTP software used.
The process was not reliable: files deleted in SVN were not automatically removed from the remote site, so old files lingered on the hosting.
Every update was an opportunity to break things: a badly uploaded file, a network cut, a full remote disk, or any other problem.
On top of that, the maintenance team also had to know and learn how to deploy its changes.
I can't count the number of times we heard:
Can you deploy for me? I don't know how to do it.
I couldn't deploy, access wasn't up to date.
And then there were all the security problems this could bring, with the same credentials often shared by everyone. Since access was almost always SFTP with a user/password pair, sharing and storing credentials was a problem.
2013: Software change, paradigm change
In 2013, the switch from SVN to Git changed how deployments were done: it became possible to connect to the server and do a git pull, or check out a branch.
This change alone gave rise to the first deployment scripts, allowing pre- and post-deployment actions.
Deployment moved onto the client's server, but the procedure still depended on a human.
Credentials had to be kept up to date, and connections to the servers had to be allowed.
Security was also a problem: it is very easy to end up with the .git directory accessible at the web root, letting anyone check out the whole project locally.
This process remains manual, uncertain, and hardly reversible.
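The exposed .git directory mentioned above can at least be blocked at the web-server level. As a minimal sketch, an nginx rule such as the following denies all web access to it (an equivalent exists for Apache with a deny directive):

```nginx
# Block web access to the .git directory anywhere under the document root,
# so the repository cannot be cloned through HTTP.
location ~ /\.git {
    deny all;
}
```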
New tools: Composer, Grunt, Gulp, Webpack
To maintain and manage sites at a certain level, you have to start industrializing development. Not editing other people's code is a first step, and Composer meets that need. The way CSS and JS are written, compiled and optimized also had to change.
Compilation and dependency management tools became more and more common and helped solve many problems.
However, these tools bring a complexity that did not exist before, as they depend on:
- The machine of the person using them
- The versions of Node and PHP installed
- Different ways of fetching dependencies
This does not produce a package ready to deploy to production.
It is easy to forget to compile the CSS/JS, or to have the wrong software version locally, which makes deployments slower, riskier and more complex.
These tools quickly raised questions such as:
- Who compiles?
- How?
- When?
- Do we ship the compiled code?
Several teams work on the final product: some on pure PHP, others on HTML/CSS/JS, and they are not the same people.
Those people shouldn't need to know that there are 3 scripts to run before the code can be deployed.
This intermediate state quickly led to problems: badly compiled or uncompiled code, and scripts that worked 6 months ago but that an update to one tool or another had since broken.
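Part of the version problem can at least be declared in the project itself. For example, Composer lets you pin the PHP version its dependency resolver assumes via `config.platform` (the version number below is illustrative), and npm offers a similar `engines` field in package.json:

```json
{
    "config": {
        "platform": {
            "php": "8.1.0"
        }
    }
}
```

This does not build the package for you, but it does make dependency resolution independent of whichever PHP happens to be installed on the developer's machine.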
2017: Automation with Buddy
The need for a deployment tool was strongly felt: it was no longer possible to do everything locally or on the client's server, and too much time was being lost. Whether in development or maintenance, no one was satisfied.
Tools existed, but they seemed too technical, with interfaces that were too complicated or downright incomprehensible.
The tool also had to be flexible enough to adapt to the many tools we use, customer environments, hosting providers, etc.
In 2017, during a discussion with Alexandre, one of our developers, Buddy.works came up.
The solution seemed magical: deploy everywhere, all the time, whatever the web technologies involved.
Buddy's principle is the same as other CI/CD software, with a twist: the actions of a pipeline run inside Docker machines.
The central pivot is a git repository: GitHub, GitLab, Bitbucket, or any provider that offers webhooks.
Pipelines are executed according to various conditions, but they can simply be tied to repository commits.
This simple principle adapts to almost all projects, hosting setups and situations.
A typical pipeline would be:
- Install the PHP dependencies with Composer, using the production PHP version
- Compile the CSS and JS with webpack, using the project's Node version
- Send the compiled sources to the production server with rsync
- Empty the caches on the production server
- Send a Slack notification to say everything went well
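A pipeline like the one above can be sketched as a plain script. Hostnames, paths and the Slack webhook below are placeholders, and the cache command assumes a WordPress site with WP-CLI installed; with `DRY_RUN=1` (the default) the commands are only printed:

```shell
#!/usr/bin/env bash
# Sketch of the typical pipeline as a plain script. Set DRY_RUN=0 to
# actually execute the commands instead of printing them.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi
}

run composer install --no-dev --optimize-autoloader       # PHP deps, prod mode
run npx webpack --mode production                         # compile CSS/JS
run rsync -az --delete ./ deploy@prod.example.com:/var/www/site/
run ssh deploy@prod.example.com "wp cache flush"          # empty remote caches
run curl -sf -X POST -d '{"text":"Deploy OK"}' "${SLACK_WEBHOOK_URL:-}"
```

The value of a tool like Buddy is precisely that each of these lines becomes a pipeline action running in its own Docker image, pinned to the right PHP or Node version.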
Integration
Buddy's strength is its ability to adapt to many different situations.
Do you use AWS and its Beanstalk apps to deploy? An action exists.
Need to run tests before deploying? Actions exist.
Want to ping external or internal services (New Relic, Sentry...) during deployments? Actions exist.
Do you need to create a zip, send it to a server, send an email to someone? Actions exist.
This change did not happen in a single day: new projects moved to the tool quickly, and the deployment method evolved as we went along.
But once the tool was in place, the teams asked for it every time.
One of its greatest strengths is the interface: setting up a deployment pipeline takes a few minutes, and it is ergonomic, clear and well designed.
It is hard not to find what you need in the catalogue of actions.
It is also possible to use any Docker image, from the official registry or your own.
An API
Developers are not left out: a CLI and an HTTP API are available to drive Buddy easily.
Instead of going through the interface, pipelines and projects can be triggered and managed from an external application.
Cherry on the cake: you can install Buddy on your own server (we did) and, for example, have your own IP. But then scaling and updates are up to you.
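As an illustration, triggering a pipeline run over the HTTP API might look like the call below. The endpoint shape and JSON body are written from memory of Buddy's REST API and should be checked against the official API reference; the workspace, project, pipeline id and token are placeholders, and the sketch prints the command rather than executing it:

```shell
# Hypothetical example: trigger pipeline 42 of project "my-site" in the
# "acme" workspace. The curl command is printed, not executed, here.
TOKEN="${BUDDY_TOKEN:-personal-access-token}"
URL="https://api.buddy.works/workspaces/acme/projects/my-site/pipelines/42/executions"
echo curl -X POST "$URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"to_revision": {"revision": "HEAD"}}'
```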
Unify deployments
Deployments are now broadly unified, even if some customers remain special cases (VPN, bastion, etc.)
- The development branch is linked to the development environment, each commit deploys a new version on this environment
- The preproduction environment is linked to the main branch
- The production environment is linked to tags
Each pipeline carries out a minimum set of actions:
- PHP compilation
- CSS/JS compilation
- Version update for Sentry
- Code deployment (rsync, atomic deployment, Docker image, etc.)
- Ping on Slack
If something goes wrong, then a Slack notification is sent with the execution log.
This gave us ready-made pipelines to duplicate from one deployment to the next.
With this way of working, teams are assured that their work is deployed to the server, including in production.
Every deployment looks like every other, and we are sure that the code on an environment is the code that was pushed.
Still adapting to customers
Although the deployment method has been unified, we sometimes cannot access the environments, or we have to produce a package for the client.
No problem: we then use the « built branch » method, borrowed directly from WordPress VIP and which we use on VIP projects.
The principle is simple: instead of sending the package as-is to the environments, the normally ignored files are versioned in a dedicated branch; for example, main becomes main-built.
But it is also possible to create the package and then drop it on an FTP server or an S3 bucket, send it by email or Slack message, etc.
Any type of scenario can be considered, with manual actions, allowed hours and days of deployment etc.
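The « built branch » principle can be sketched in a few commands. Branch names and the directories being force-added are illustrative, and with `DRY_RUN=1` (the default) the commands are only printed:

```shell
#!/usr/bin/env bash
# Sketch of a built-branch step: build the project, then commit the
# normally ignored artifacts to main-built. Set DRY_RUN=0 to execute.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi; }

run git checkout -B main-built main    # (re)create the built branch from main
run composer install --no-dev
run npx webpack --mode production
run git add -f vendor/ dist/           # force-add files normally in .gitignore
run git commit -m "Built version of main"
run git push --force origin main-built
```

The client environment then deploys main-built like any ordinary branch, with no build tooling required on its side.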
What has changed now
Having these pipelines is now second nature; we no longer ask the question « Can I deploy? ».
We can deploy at any time and roll back if necessary.
Buddy is not the only product offering this kind of functionality; you will find the same in GitHub Actions, GitLab CI or various cloud providers.
With deployment and build processes becoming ever more complex, such a tool is essential nowadays.
Whether you use simple actions like pushing your site over FTP, or go through a cloud provider, Docker images or something else, this kind of solution adapts to your needs.
If you do not use a deployment tool yet, I strongly encourage you to adopt one; it is the first step in automating the rest of your software stack.
You don't need to implement everything at once, but starting there lets you get to the finish line gradually.
Both site creation and maintenance activities benefit.
