Deployments, Continuous Delivery

Currently, for deploying the new ‘modules’, we have inherited the old way of doing things from our legacy project.
It is something like this:

  • the code is merged into the git master branch
  • we deploy the master branch on the test server and check it
  • every developer merges their changes/work during the sprint
  • weekly, every Tuesday, we deploy master

We deploy on Tuesday so that we have the rest of the week to intervene if something goes wrong (-> fear and uncertainty around your new code).

Unfortunately, the same mentality/process has crept into the new modules as well.

But for the new modules/microservices, this deployment process doesn’t make sense (it’s an anti-pattern).

Another issue is that whoever is on Ops and Support that week has to deploy all the changes that are due, even the ones in the new modules.
So you end up deploying 4 or 5 systems at the same time…
Talk about tight coupling! …

In Design Microservice Architectures the Right Way, they say they deploy as soon as all the tests pass, unlike our process, where we only deploy on Tuesdays.

… and because we’ve spent so much time on our testing, the policy is that once it’s green, we deploy, that’s it!

ContinuousDelivery

__

Microservices require solutions for different challenges.
Let’s examine the operations concept:
  • deployments
  • monitoring
  • log analysis
  • tracing
And let’s drill down into deployments.
In this lesson, we’ll focus on continuous delivery as an advantage of using microservices.
Microservices represent independently deployable modules. Therefore each microservice has its own continuous delivery pipeline.

MICROSERVICES FACILITATE CONTINUOUS DELIVERY

  • The continuous delivery pipeline is significantly faster because the deployment units are smaller. Consequently, deployment is faster.
  • The tests are also faster because they need to cover fewer functionalities. Only the features in the individual microservice have to be tested, whereas in the case of a deployment monolith, the entire functionality has to be tested due to possible regressions.
  • Building up a continuous delivery pipeline is easier for microservices. Setting up an environment for a deployment monolith is complicated. Most of the time, powerful servers are required. In addition, third-party systems are frequently necessary for tests. A microservice requires less powerful hardware. Besides, not many third-party systems are needed in the test environments.
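
To make the “once it’s green, we deploy” policy concrete, here is a minimal sketch of what the deploy stage of a per-microservice pipeline could look like. The build and deploy commands (`./gradlew test`, `./deploy.sh`) are placeholder assumptions, not our actual tooling; the point is simply that each microservice runs this on its own, as soon as its own tests pass.

```python
#!/usr/bin/env python3
"""Minimal sketch of a per-microservice 'deploy on green' pipeline step.

The shell commands below (./gradlew test, ./deploy.sh) are placeholders;
substitute whatever build and deploy tooling the service actually uses.
"""
import subprocess
import sys


def run(cmd: list[str]) -> bool:
    """Run one command and report whether it succeeded."""
    print(f"+ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0


def main() -> int:
    # 1. Run only this microservice's tests -- small scope, fast feedback.
    if not run(["./gradlew", "test"]):
        print("Tests are red: stopping, nothing is deployed.")
        return 1

    # 2. Tests are green: build and deploy immediately, no waiting for Tuesday.
    if not run(["./gradlew", "build"]):
        return 1
    if not run(["./deploy.sh"]):
        return 1

    print("Deployed: a green build went straight out.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Because the script only touches one service, a red build blocks only that service’s deployment, not everyone else’s Tuesday release.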

DEPLOYMENT MUST BE AUTOMATED

However, note that microservice architectures can only work when the deployment is automated! Microservices substantially increase the number of deployable units compared to a deployment monolith. This is only feasible when the deployment processes are automated.
Independent deployment means that the continuous delivery pipelines have to be completely independent. Integration tests conflict with this independence. They introduce dependencies between the continuous delivery pipelines of the individual microservices. Therefore, integration tests must be reduced to the minimum.
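
As an illustration of what “automated” can mean once we are on AWS (we describe our infrastructure in CloudFormation, see the next section), here is a sketch in which a single microservice’s pipeline rolls out a new version by updating that service’s own CloudFormation stack with boto3. The stack name, template path and parameter key are made-up examples, not our real names.

```python
"""Sketch: fully automated deployment of ONE microservice via its own
CloudFormation stack, using boto3. Stack name, template path and the
parameter key are illustrative assumptions."""
import boto3


def deploy_service(stack_name: str, template_path: str, artifact_version: str) -> None:
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()

    # Each microservice owns its own stack, so updating it cannot
    # interfere with the pipelines of the other services.
    cfn.update_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "ArtifactVersion", "ParameterValue": artifact_version},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )

    # Block until AWS reports the rollout finished (or failed).
    cfn.get_waiter("stack_update_complete").wait(StackName=stack_name)
    print(f"{stack_name} updated to {artifact_version}")


if __name__ == "__main__":
    # Hypothetical invocation from a single service's pipeline.
    deploy_service("social-notifications", "templates/social-notifications.yaml", "1.4.2")
```

Because every service owns its own stack, this call cannot touch the other services, which is exactly the independence the pipelines need.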

Migrating from ISP to IaaS/Cloud/AWS

At my company, about 2 years ago, we began the process of re-architecting our 15-year-old ‘web site’ by

  • breaking up and rewriting the monolith
  • moving our web apps from the Contegix ISP to AWS EC2s
    that is: moving our infrastructure from the ISP to AWS EC2s

We are moving from about 9 or 10 physical servers in a data center, which run our applications:

  • Website {MySQL database server, 2 nodes/servers running Tomcat}
  • Analytics tool {ES + Kibana}
  • Issue tracker {YouTrack}
  • Test server + CI tool {Jenkins}
  • CMS web application
  • REST APIs
  • Log index {ELK}
  • Social notifications module

to running the new microservices/modules on AWS EC2s.

We took just one (small) step/improvement from physical servers to IaaS (AWS EC2s)

The PROs of this decision

  • AWS IaaS (EC2s) is within our area of competence
  • the generic cloud advantages over physical servers in a data center:
    • bring up/down EC2s faster than commissioning/decommissioning physical servers
    • better availability/reliability/uptime?
      reliable networking, reliable servers (machines)
      About 2 months ago, one of our server nodes running the website had a critical hardware failure and had to be replaced by a new machine.
      This shouldn’t happen in AWS (!?)
    • flexibility: you can spawn and kill servers as you wish (see the sketch after this list)
    • independence, ownership and self-reliance: in AWS, you build it, you run it.
      It forces you to handle aspects like:

      • incident recovery – when a node is down, you have to have monitoring and incident recovery in place. There are no on-call personnel to restart your servers
      • monitoring – you will not receive urgent emails from your ISP telling you that your site/service is down
  • You own the infrastructure
    You describe the servers, networking and storage in CloudFormation and AWS materializes it
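
A small sketch of the “spawn and kill servers as you wish” point above: with boto3, the AWS SDK for Python, launching or terminating an instance is a single API call, compared to the lead time of commissioning a physical box at the ISP. The AMI id, instance type and tag values are placeholders.

```python
"""Sketch: spawning and killing EC2 instances with boto3.
The AMI id, instance type and tag values are placeholders."""
import boto3

ec2 = boto3.client("ec2")


def spawn_instance(ami_id: str, name: str) -> str:
    """Launch a single instance and return its id."""
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.small",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": name}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]


def kill_instance(instance_id: str) -> None:
    """Terminate the instance -- 'decommissioning' takes seconds, not weeks."""
    ec2.terminate_instances(InstanceIds=[instance_id])


if __name__ == "__main__":
    instance_id = spawn_instance("ami-0123456789abcdef0", "rest-api-test")
    print(f"launched {instance_id}")
    # ... run whatever tests or workload you need, then:
    kill_instance(instance_id)
```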

The CONs of this decision (compared to Docker/containers)

  • an increase in the number of required EC2s

Because we’re breaking up the monolith into smaller microservices/modules/web applications, and each is deployed on a separate, dedicated EC2, it means we will double the number of servers that we need. Whereas, if we had gone for Docker/Kubernetes, we could have optimized this and run multiple services on a container engine.

But we didn’t have the know-how/technical skills.

  • Monitoring is harder and more resource-consuming

Our infrastructure engineer built monitoring from scratch, and it was time-consuming.
He installed Grafana, InfluxDB, Nagios alerts, agents on the servers, etc.
And you have to integrate each new module and build Grafana dashboards for it (a minimal health-check sketch follows at the end of this section).

But if you use Kubernetes, you have some monitoring built in

  • You have to take care of your EC2s, keeping them up to date with the latest patches, security updates, etc. – and you have to do this for every EC2
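
Since we had to build the monitoring ourselves (see the monitoring item above), here is a minimal sketch of the kind of health-check probe that sits underneath such a setup: poll each module’s health endpoint and raise an alert when it stops answering. The service URLs are made up, and the alert is just a print; in reality it would be wired into Nagios/e-mail.

```python
"""Sketch: a minimal health-check probe for self-built monitoring.
The service URLs are placeholders; in practice the 'alert' would go to
Nagios/e-mail instead of stdout."""
import time
import urllib.error
import urllib.request

# Hypothetical health endpoints, one per module/EC2.
SERVICES = {
    "website": "http://10.0.1.10:8080/health",
    "rest-api": "http://10.0.1.11:8080/health",
    "social-notifications": "http://10.0.1.12:8080/health",
}


def alert(name: str, reason: str) -> None:
    # Placeholder: this is where the Nagios / e-mail integration would go.
    print(f"ALERT: {name} is down ({reason})")


def check(name: str, url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                alert(name, f"HTTP {resp.status}")
    except urllib.error.URLError as exc:
        alert(name, str(exc))


if __name__ == "__main__":
    while True:
        for name, url in SERVICES.items():
            check(name, url)
        time.sleep(60)
```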
