Microservices – Big Changes

The shift to a microservices architecture, typically running on containers, streamlines work processes for many organizations and accelerates time to market (TTM) – when done right, of course. Here are five tips every organization needs to know before venturing into this new world.

In collaboration with VMware & TeraSky | 13:07, 21.07.21
Until a decade ago, software development processes were based on the Waterfall methodology. The Waterfall model answered the needs of the time, before cloud services made their appearance and before the number of app service consumers grew exponentially. In the “old world,” when an update was needed or a new feature was developed for a product, each team had to adapt the code in its own layer, whether that was the user interface (what the end user sees), the business logic, or the database.

 

This meant that every new development had to pass through three separate teams, each implementing its own part of the requirements. It also meant development and TTM cycles that could stretch from weeks to months, plus prolonged downtime to ensure seamless transitions. More often than not, when bugs were detected in the end-user version, the product had to be rolled back – a complex process in and of itself – and started over.

 

The market’s changing needs in recent years ushered in the microservices architecture, in which each application’s features are broken down into micro components, each developed separately. This makes it possible to add new components to a given application without affecting its existing ones.

 

Like adding Lego bricks

 

Let’s say, for example, that Facebook wants to add another reaction emoji to make its users happy, or Netflix wants to add personalized thumbnails to make us watch more TV; perhaps the military wants to improve target acquisition in one of its missile systems to save lives. The microservices model cuts development times for these objectives to a minimum. Once the required microservice is developed, it is tested and then added to the existing app – just like adding a block to a complete and fully functioning Lego structure.
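To make the Lego analogy concrete: a new reaction feature really can live in its own tiny service. Here is a minimal sketch in Go, purely for illustration – the service name, endpoint, and reaction list are hypothetical and not taken from any of the companies mentioned:

    // main.go – a minimal, hypothetical "reactions" microservice.
    // The whole feature sits behind one endpoint, so it can be built,
    // tested, and shipped without touching any other service.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/reactions", func(w http.ResponseWriter, r *http.Request) {
            // Return the list of reactions this service owns.
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode([]string{"like", "love", "care"})
        })
        log.Println("reactions service listening on :8080")
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Because the service owns nothing but this endpoint, it can be developed, tested, and released on its own schedule, independently of the rest of the application.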

 

Imagination is the only limit. Need to adapt your organization to 5G or to any other future technology? No problem – you adapt ad hoc. Need to develop several components at once, each with the code best suited to its new feature? Easy as pie.

 

But shorter timeframes are not the sole benefit that microservices offer. When a bug occurs, only that specific microservice is rolled back, rather than the entire application. To take our previous example, it’s akin to gently removing the newly added Lego block without anyone noticing that something has changed in the main structure. It allows the application to continue running smoothly.
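In Kubernetes terms – assuming the new feature runs as its own Deployment, and with a hypothetical service name – pulling that one Lego block back out can be a single command:

    # Roll back only the misbehaving microservice; everything else keeps running.
    kubectl rollout undo deployment/reactions

    # Verify that the rollback completed.
    kubectl rollout status deployment/reactions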

 

 

5 tips for using microservices more effectively

 

Any important organizational process should be implemented with care, so the organization can reap its benefits without incurring damage. There are more than a few possible pitfalls along the way, so here are the top five tips to help you make the leap from Waterfall and other monolithic development models to microservices:

 

1. Pack the app: What happens when you want to move a microservice from its development environment to testing and then to production? It might not work as smoothly as you anticipated, due to compatibility issues. “It’s kind of like taking a cake recipe into a kitchen you’re unfamiliar with – you don’t know whether the ingredients in the refrigerator are the same as the ones you usually use,” explains Lev Endelman, Cloud and DevOps CTO at TeraSky. “So the cake can turn out the same, not as good, or even come out of the oven burnt.” This is where containers come in – following Endelman’s appetizing analogy, the container holds the already cooked food, and all you have to do is heat and serve. Back in microservices terms, the container holds the various components: the code itself, settings files, and operating-system dependencies (working directories, APIs, and so on). This makes it possible to move containers across environments and to retrieve them whenever an update or a fix is required.
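Sticking with the kitchen analogy, here is roughly what packing the hypothetical Go service from earlier into a container might look like – a minimal two-stage Dockerfile sketch, with illustrative file and image names rather than anything prescribed by the interviewees:

    # Build stage: compile the (hypothetical) reactions service.
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/reactions .

    # Runtime stage: the image carries the binary plus its settings file,
    # so it behaves the same in development, testing, and production.
    FROM gcr.io/distroless/static
    COPY --from=build /bin/reactions /reactions
    COPY config.yaml /etc/reactions/config.yaml
    EXPOSE 8080
    ENTRYPOINT ["/reactions"]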

 

2. Automate the code: “Creating a code package for a container is a relatively simple process, but getting it done efficiently and securely is not as simple – particularly as this part is usually dumped on the development team, who have neither the knowledge nor the interest to handle its maintenance. Developers want to develop, not take care of containers,” explains Kobi Shamama, Account Executive at VMware Tanzu, who advises companies on this process. “You have to understand how to maintain the base image – a critical component of a container – how to create effective layers, and how to deal with updates that might be problematic. This calls for an automated solution that can take the developer’s source code and use it to build an efficient and secure container that updates itself with every change from the developer. This can be done with technologies such as Tanzu Build Service.”
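Tanzu Build Service builds on the open-source Cloud Native Buildpacks project, so purely as an illustration of the idea – not the exact Tanzu workflow – an automated, Dockerfile-free build looks roughly like this with the buildpacks pack CLI (the image name, builder, and path are placeholders; check the current Paketo/Tanzu documentation for exact names):

    # Turn source code straight into a container image, no Dockerfile required;
    # the buildpack looks after the base image and layers for you.
    pack build registry.example.com/reactions:1.0 \
      --builder paketobuildpacks/builder-jammy-base \
      --path ./reactions

Wired into a pipeline, or run by kpack / Tanzu Build Service inside the cluster, the same step can rebuild the image automatically on every change from the developer.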

 

3. Secure and back up: New technologies can introduce many benefits and capabilities, but when they are adopted, tried-and-tested practices such as backups and information security often fall by the wayside. Enterprises and veteran organizations – such as medical and financial companies – are obligated by regulation to maintain these practices and therefore seek technologies that support them. Startups, by contrast, are characteristically early adopters of new technologies and are not necessarily subject to the same regulatory constraints, which means they often rush to implement new technologies while putting the information they already have at risk. “It’s crucial to remember that no one has proclaimed that there is no longer a need for backups, information security, and monitoring of environment components just because our application runs on a new platform called Kubernetes. It’s important to understand the differences between legacy systems and Kubernetes’ container-based systems, and to act accordingly, with the proper tools and approach.”
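As one small illustration of what the proper tools and approach can mean on Kubernetes – our example, not something prescribed by the interviewees – a default-deny NetworkPolicy gives every new microservice a locked-down starting point, with traffic then allowed explicitly per service (the namespace name is hypothetical); backups of cluster resources and volumes deserve the same deliberate treatment with a dedicated tool:

    # Deny all incoming traffic to pods in the (hypothetical) "reactions" namespace;
    # each microservice must then be explicitly allowed to receive traffic.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: reactions
    spec:
      podSelector: {}
      policyTypes:
        - Ingress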

 

4. Structure fits scale: Make sure the structure corresponds with the scale and demands of the organization. “The story doesn’t end with one container and one Kubernetes cluster,” explains Endelman. Enterprises create multiple development, operations, testing, and regulatory environments. “It’s important to equip the organization with the right tools to build, maintain, and manage all of these environments conveniently, with management capabilities and authorizations that match the organization’s complexity. Furthermore, developers must have easy access to these environments, in a way that doesn’t hold back the development process, no matter which cloud infrastructure they run on – whether a private vSphere environment or an external cloud.”
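A minimal sketch of what the right tools with proper authorizations can look like in practice – again an illustration rather than a Tanzu-specific configuration: one namespace per environment, plus a role binding that gives a development team self-service access to its own environment and nothing else (all names are hypothetical):

    # A dedicated namespace for one team's development environment.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-payments-dev
    ---
    # Grant the team's developer group the built-in "edit" role,
    # scoped to this namespace only.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: developers-edit
      namespace: team-payments-dev
    subjects:
      - kind: Group
        name: payments-developers
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit
      apiGroup: rbac.authorization.k8s.io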

 

5. Adopt a proper development methodology: A container infrastructure and Kubernetes are just part of the story. Organizations need to adopt modern development methods such as extreme programming, and revisit how they define the critical features that deliver the most value in the shortest time for the company, thereby putting their backlog in order.