Nearshore Americas

How to Make Microservices Sing – Part 2

In the lead-up to Nearshore Americas' special webinar presentation on April 3rd at 2pm EDT, Mike Hahn, one of the webinar panelists, is exploring what you need to know about microservices and how to get it right for your organization in this special series for Nearshore Americas. You can register for the webinar here.

In the first part of this two-part discussion on how to make microservices work for you, we looked at decoupling and identifying pain points in the process. This part examines best practices in deploying microservices and how to develop a plan for moving to microservices.

Once you have identified any potential pain points, you can set up the application containers in which you will build the microservices. Using a container orchestration tool to automate container creation can be helpful, as it will facilitate breaking the monolithic application up into small services and managing the microservices post-production. 

Begin the actual decoupling by taking a macro, then a micro, approach. First, break the monolithic application into modules based on how the different functions of the applications are grouped. Then divide the modules into subsystems that will consist of multiple microservices.

Best Practices

As you proceed through the decoupling process, here are some best practices to apply:

  • Determine where to start by using dependency trackers;
  • Identify hidden coupling with code forensics;
  • Uncover application failures and development practices that do not adhere to common standards by using static analysis;
  • Model subsystems and define the boundaries in the monolith for bounded context by leveraging domain-driven design;
  • Map each service into separate domains to see where information flows and the dependencies among services;
  • Strip the tables from the main database that are needed for each service (one-by-one), and turn each table into a separate database for its designated service; this improves query performance and makes it easier to diagnose query failures;
  • Expose each of the new database tables as a service so that other services can access it via HTTP, messaging or queues; and
  • Decouple services vertically and release data early so each service can be managed in a decentralized manner.
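The database-extraction steps above can be sketched in miniature. The following Python example is illustrative only (the `orders` table and its fields are hypothetical, and the in-memory SQLite store stands in for a real per-service database): a service owns the table that was stripped out of the monolith and exposes it to other services over HTTP.

```python
# Minimal sketch of exposing an extracted table as an HTTP service.
# Table name, fields, and data are hypothetical stand-ins.
import json
import sqlite3
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service now owns its own store (here, in-memory SQLite),
# rather than sharing a table in the monolith's main database.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
db.execute("INSERT INTO orders (item) VALUES ('widget'), ('gadget')")
db.commit()

class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Other services read this data over HTTP instead of
        # joining against a shared database table.
        rows = db.execute("SELECT id, item FROM orders").fetchall()
        body = json.dumps([{"id": r[0], "item": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OrdersHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consuming service fetches the data through the HTTP interface.
url = f"http://127.0.0.1:{server.server_address[1]}/orders"
with urllib.request.urlopen(url) as resp:
    orders = json.loads(resp.read())
server.shutdown()
print(orders)
```

In production the per-service store would be a real database and the endpoint would sit behind whatever HTTP framework, messaging layer, or queue your stack uses; the point is only that other services reach the data through the service's interface, never through its tables.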

Each microservice will need to ask for information from other microservices because they don’t share the same database and resources. You will need to design a communication plan among the multiple microservices to get information from one service to another, which you can do by using a message queueing platform or event-driven design.
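As a rough illustration of the event-driven option, this Python sketch wires two hypothetical services together through a toy in-process message bus; a real deployment would use a broker such as RabbitMQ or Kafka, and all names here are illustrative.

```python
# Toy event-driven communication between two "services".
# Service and topic names are hypothetical.
from collections import defaultdict, deque

class MessageBus:
    """Tiny in-process stand-in for a message-queueing platform."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.queue = deque()

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        self.queue.append((topic, event))

    def drain(self):
        # Deliver queued events; a real broker does this asynchronously.
        while self.queue:
            topic, event = self.queue.popleft()
            for handler in self.subscribers[topic]:
                handler(event)

bus = MessageBus()
shipped = []

# The "shipping" service reacts to events from the "orders" service
# instead of reading the orders database directly.
bus.subscribe("order.created", lambda e: shipped.append(e["order_id"]))

bus.publish("order.created", {"order_id": 42})
bus.drain()
print(shipped)  # [42]
```

Because the shipping service only subscribes to events, it carries no dependency on the orders service's database or internals, which is exactly the decoupling the communication plan should preserve.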

Minimizing the dependency that each microservice has on the part of the application that is still within the monolithic architecture is crucial. The lower the dependency, the faster you can roll out future updates to the service.

After testing the data flows and fault responses for each microservice, fix any bugs and check the logs to validate that every resource/endpoint is exposed by each API. Now, you're ready to deploy!

Planning for Microservices

Transitioning an application to microservices is a big step for most organizations, and the path to success is paved with risk and uncertainty. But by carefully evaluating the key considerations, including whether microservices are the right fit at all, you can ensure your organization takes the right path at the right time. A good starting point is to look at the infrastructure and processes you will need.

To begin the transition away from monolithic legacy software, you have the flexibility to apply microservices to just a portion of an application to gain the functionality you need, then evolve the application further later on. That means conceptually and technically decoupling large application suites into distinct services, which can be created, tested, deployed, and provisioned to users one at a time.

Once you identify your business needs and ensure the microservices model matches your culture and processes, then carefully plan your approach. There are a number of key factors to consider and decisions to make:

  • Decide which microservices patterns to use and which choices fit best. There is a range to choose from: the saga pattern, the composite pattern, the API gateway pattern, and so on.
  • If you decide on the pattern of one database per microservice—where changes in the database don’t impact other services—you also need to determine whether it is to your advantage to use a composer pattern, which could be per domain, per product, or even per team.
  • You don’t want to end up with too many composers and needless complications, but you also should not have fewer than you need, which would saddle you with a large code set that is difficult to maintain.
  • To maintain transactions across microservices, another tool decision needs to be made. There is also the question of whether to use a discovery service, and if so, what kind: for the client side, the server side, or the registry service.
  • Determine how the clients of the microservices application will access the individual services. This can often be solved by using an API gateway, like Netflix did, for example, to provide a single entry point to the microservices for the front-end. You can choose among a variety of ways to set up such a gateway.
  • The microservices architecture needs a solid recovery strategy, or else failure at any of its likely multiple points could be ruinous to your effort, especially if the failure cascades across services.
  • Consider the possible consequences of a microservice failing or taking too long to respond before you apply a remedy. Hystrix from Netflix is one possible solution that can help you manage latency and fault tolerance, and it can make your distributed systems more resilient when failure is highly probable.
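The last point can be made concrete with a small sketch of the circuit-breaker idea that Hystrix popularized. The class below is an illustration of the pattern, not the Hystrix API, and the threshold and timeout values are arbitrary.

```python
# Sketch of a circuit breaker: after repeated failures, stop calling
# the downstream service and return a fallback until a cooldown elapses.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()          # circuit open: fail fast
            self.opened_at = None          # cooldown over: retry the call
            self.failures = 0
        try:
            result = func()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()

def flaky_service():
    # Stand-in for a remote call that keeps failing.
    raise ConnectionError("service unreachable")

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)
results = [breaker.call(flaky_service, lambda: "fallback") for _ in range(4)]
print(results)  # all fallbacks; the circuit opens after the second failure
```

After the breaker opens, callers get the fallback immediately instead of waiting on timeouts, which is what keeps a slow or dead service from dragging down everything that depends on it.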

Another key factor to consider is whether to rely on multiple instances of the same microservice to enable quick failure recovery, which you can do with a client-side load-balancing tool. If the database crashes, a good remedy might be a cache from which you can retrieve data until the database is up once again. Of course, you can also combine the two approaches, running multiple instances of one microservice together with a recovery tool, and benefit from better scalability and availability.
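Both remedies from the paragraph above can be sketched together. In this illustrative Python example (every name is hypothetical), a client rotates through service instances and falls back to a local cache only when all of them fail:

```python
# Sketch of client-side load balancing with a cache fallback.
# Instance functions, keys, and values are hypothetical stand-ins.
import itertools

class LoadBalancedClient:
    def __init__(self, instances, cache):
        self.instances = itertools.cycle(instances)
        self.n = len(instances)
        self.cache = cache

    def get(self, key):
        for _ in range(self.n):
            instance = next(self.instances)   # round-robin over instances
            try:
                value = instance(key)
                self.cache[key] = value       # refresh the cache on success
                return value
            except Exception:
                continue                      # this instance is down: try the next
        return self.cache[key]                # all instances down: serve stale data

def healthy(key):
    return {"user": key, "name": "Ada"}

def crashed(key):
    raise ConnectionError("instance down")

# One instance is down: the request is served by the healthy one.
client = LoadBalancedClient([crashed, healthy], cache={})
live = client.get(7)

# Every instance is down: the request is served from the cache.
stale_client = LoadBalancedClient([crashed, crashed], cache={7: "cached"})
stale = stale_client.get(7)
print(live, stale)
```

A production client library (Netflix's Ribbon is one example of the category) layers health checks and smarter balancing policies on top, but the shape is the same: redundancy first, cached data as the last resort.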


The decisions you make in relation to the factors discussed above will have a profound impact on the results you experience. Having a solid understanding of everything involved in a microservices migration and a strategic plan to guide you will be your best bet to achieving success.

Mike Hahn
