Microservices — Deployment Patterns

Denny Lesmana
7 min read · Jan 2, 2023

--

What are deployment patterns?

A microservices deployment pattern is a strategy for releasing and updating the individual services that make up an application. Choosing the right pattern makes deployments safer and lets you modify microservices with minimal disruption to users.

The following sections cover deployment patterns that help improve microservice availability:

1. Canary Deployment

source: https://www.gocd.org/assets/images/blog/continuous-deployment-strategies/canary-release-5c74ac79.jpeg

A “canary deployment” is a method of spotting potential issues before they affect all users. Instead of making a new feature available to everyone at once, the idea is to expose it to only a small group of users first. During a canary release, we monitor what happens after the feature goes live, and if issues appear, we fix them. Once the canary’s stability has been established, we roll the release out to the full production environment.
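As an illustration, the traffic split behind a canary release can be sketched as a weighted router. This is a minimal, hypothetical sketch (the backend names and the 5% weight are assumptions); real systems usually do this at the load balancer or service mesh layer:

```python
import random

# Hypothetical sketch: route a small fraction of requests to the canary.
CANARY_WEIGHT = 0.05  # 5% of traffic goes to the new version

def choose_backend(canary_weight: float = CANARY_WEIGHT) -> str:
    """Pick a backend for an incoming request by weighted random choice."""
    return "canary" if random.random() < canary_weight else "stable"

# As confidence grows, the weight is raised step by step (5% -> 25% -> 100%).
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[choose_backend()] += 1
```

Monitoring the canary's error rate and latency against the stable pool at each weight step is what turns this simple split into a safe release process.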

Pros:

  • Allows us to test in production with real users and real use cases, and to compare different service versions side by side.
  • It’s cheaper than a blue-green deployment because it does not require two full production environments.
  • Rolling back to a previous version of the application is fast and safe.

Cons:

  • The drawbacks of canary deployments stem from testing in production and the implementation work it requires.
  • Scripting a canary release can be complex, and manual verification or testing can take time.
  • The monitoring and instrumentation needed to test in production may require additional research.

2. Blue/green deployments

source: https://www.gocd.org/assets/images/blog/continuous-deployment-strategies/blue-green-deployments-d73adc69.jpeg

Blue/green is a deployment technique where the existing deployment is left running in place. A new version of the application is installed in parallel with the existing version, and when the new version is ready, you cut over to it by changing the load balancer configuration.

This makes rollback very simple and gives you time to verify that the new version works as expected before putting it live.
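The cutover can be sketched as a single routing pointer that the load balancer flips. This is a hypothetical illustration (the environment names and version labels are invented), not a real load balancer configuration:

```python
# Hypothetical sketch of a blue/green cutover: the "router" holds a single
# pointer that decides which environment receives live traffic.
environments = {
    "blue": "app-v1.0 (current version)",
    "green": "app-v1.1 (staged in parallel)",
}

live = "blue"  # all traffic initially goes to blue

def route_request() -> str:
    """Send the request to whichever environment is currently live."""
    return environments[live]

# Once the green environment passes its checks, flip the pointer.
live = "green"  # cutover: one atomic switch
# Rollback is just as simple: live = "blue"
```

The key property is that the switch is a single, reversible change, which is why both cutover and rollback are near-instant.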

Pros:

  • No downtime
  • It is simple, fast, well-understood, and easy to implement.
  • Rollback is also straightforward, because you can simply flip traffic back to the old environment in case of any issues.
  • Not as risky as other deployment strategies.

Cons:

  • Cost is a drawback to blue-green deployments. Replicating a production environment can be complex and expensive, especially when working with microservices.
  • Quality assurance and user acceptance testing may not identify all of the anomalies or regressions either, and so shifting all user traffic at once can present risks.
  • An outage or issue could also have a wide-scale business impact before a rollback is triggered, and depending on the implementation, in-flight user transactions may be lost when the shift in traffic is made.

3. Dark Launching

source: https://dz2cdn1.dzone.com/storage/temp/1951111-featuretoggleflag.jpg

A dark launch is a technique that deploys updates to a small percentage of the user base without affecting the rest of the system. When you dark launch a new feature, you initially hide it from most end users.

For example, a new feature is being added to a social media platform. Instead of announcing the feature to all users, the developers may do a dark launch, where the feature is made available to a small group of users without any notification. This allows the developers to gather feedback and test the effectiveness of the feature without any potential bias from users knowing that they are participating in a test.

Feature toggles — also known as feature flags — allow you to further decouple the deployment of different software versions from the release of features to users. You can deploy new versions of an application as often as needed, with certain features disabled: releasing a feature to users is simply a matter of toggling it “on.”
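A feature toggle check can be sketched as follows. This is a minimal, hypothetical in-memory flag store (the flag name and allowlist are invented for illustration); production systems typically use a dedicated flag service:

```python
# Hypothetical in-memory flag store: a feature ships disabled, dark-launched
# to an allowlist of users, and is later released by flipping "enabled".
FLAGS = {
    "new_feed_ranking": {"enabled": False, "allowlist": {"user-42"}},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """A feature is visible if it is globally on, or the user is allowlisted."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False
    return cfg["enabled"] or user_id in cfg["allowlist"]

def render_feed(user_id: str) -> str:
    """Serve the new code path only to users who can see the flag."""
    if is_enabled("new_feed_ranking", user_id):
        return "new ranking"
    return "old ranking"
```

Because the new code path is already deployed, releasing the feature to everyone is just setting `enabled` to `True` — no redeploy needed.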

Pros:

  • Perform more tests before releasing your product. This lets you catch bugs early, thus saving time and cost associated with fixing bugs in production.
  • Allows your development team to test the new system architecture before end-users can see it.
  • Rollback is also straightforward, because you can simply flip traffic back to the old environment in case of any issues.
  • Not as risky compared to other deployment strategies.

Cons:

  • Keeping microservices behind feature toggles can increase the cost and time of debugging them.
  • Additionally, to enable continuous delivery, teams must be able to place microservices behind feature toggles during development itself.

4. A/B testing

source: https://diggintravel.com/wp-content/uploads/2018/10/Airline-AB-testing-in-action-2048x1236.jpg

A/B testing compares two versions of an app to see which one performs better. It is essentially a controlled experiment: we present users with two or more versions of a page at random, then use statistical analysis to determine which variant is more effective at achieving our objectives.
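The random-but-sticky assignment of users to variants can be sketched by hashing user IDs, so each user sees the same variant on every visit. The experiment name and conversion numbers below are purely illustrative:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-test") -> str:
    """Deterministically bucket a user into variant A or B by hashing their ID."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

# Illustrative numbers only: compare observed conversion rates per variant.
rate_a = conversion_rate(120, 2400)  # 5.0%
rate_b = conversion_rate(156, 2400)  # 6.5%
winner = "B" if rate_b > rate_a else "A"
```

In practice you would also run a significance test (e.g. a two-proportion z-test) on the observed rates before declaring a winner, rather than comparing raw percentages.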

Pros:

  • Get clear evidence
    It’s easy to see how many users complete a transaction with site A over site B. The evidence is based on real behaviour, so is hard data of the type that money men love (and can be presented in a simple-looking, hard hitting chart).
  • Test new ideas
    If you have an innovative idea for an existing site, A/B testing provides hard proof as to whether it works or not. However, you will need to implement that big idea in hard code before you can test it this way.
  • Optimise one step at a time
    If you run a large site, or many sites, then A/B testing is a fantastic opportunity to “patch” test, by starting out in a small corner and then working up to the main pages of the site. However, can smaller site users with less traffic afford to gamble with real users by giving half of them a site experience that might not be optimal?
  • Answer specific design questions
    Are green buttons better than red ones for your site design? This and many other questions can be answered by A/B testing as they allow the designer to test different colours, placement of buttons, page layouts, different images which are all good areas to slowly improve a website.

Cons:

  • Can take lots of time and resources
    A/B testing can take a lot longer to set up than other forms of testing. Setting up the A/B system can be a resource and time hog, although third-party services can help. Depending on the company size, there may be endless meetings about which variables to include in the tests. Once a set of variables have been agreed, designers and coders will need to effectively work on double the amount of information. In addition, in order to get conclusive results, tests can take weeks and months for low-traffic sites.
  • Only works for specific goals
    This kind of testing is ideal if you want to solve one specific dilemma, such as which product page gives the best results. But if your goals are less easy to measure, pure A/B testing won’t provide those answers.
  • Doesn’t improve a dud
    If your site had usability problems to begin with and the variations are just iterations of it, they are likely to share the same fundamental flaws. A/B testing won’t reveal these types of flaws or surface user frustration, and you won’t be able to pick up on the reasons behind the site’s problems. Just because A resulted in more sales, that is only in relation to B; identifying and removing the original usability issue could be much quicker and yield far better results.
  • Could end up with constant testing
    Once the test is over, that is it. The data is useless for anything else. Further A/B tests will have to start from a new baseline and other types of testing will only likely be applied to the more successful site, when they could have found equally useful information from the rejected version.
