With the sun having set on 2019, closing out one of the most interesting decades in software history, we thought we would provide an overview of some of the exciting trends we saw over the last few years, where software has continued eating the world, along with some predictions on where we are going. We are living in very interesting times, aren’t we?
//side note from the editor: today’s post was contributed by David Melamed, a Sr. Technical Leader in the Cloud Security CTO Office at Cisco. David has nearly two decades of experience in research and software development. We are grateful that he has taken the time to share his thoughts on the last decade!
The Cloud Computing Revolution
Agile, DevOps, DevSecOps, GitOps, NoOps, Containers, FaaS, Serverless… The list of buzzwords for ‘hot’ technology trends has grown exponentially over the last few years. It is getting harder to keep track of them all while you focus on delivering business value to your customers ever faster and simultaneously cope with scaling challenges, both in terms of infrastructure and workforce.
Advances in and adoption of cloud computing were among the major themes of the last decade.
Digital transformation is mandatory for businesses that want to thrive in a highly competitive and dynamic market. Continuously changing requirements are driving massive adoption of cloud services, which let companies delegate parts of their ops work, embrace infrastructure elasticity, and avoid upfront provisioning costs, freeing them to focus their energy on their core business. With serverless computing, we have reached the next stage: not only is the server running the code operated by a third-party vendor ‘somewhere in a data center’ (the Cloud), but even the code execution environment is no longer known or accessible.
Containers & Microservices: Velocity Boost with Challenges
Better, faster, stronger
In parallel to the massive adoption of cloud services that touched all industries, the software engineering and IT world was also turned upside down by new development patterns led by the emergence of containers. Teams moved away from the now-deprecated monolithic stack, breaking it into smaller pieces of code called microservices, each defined by its functional intent and by sane, clear boundaries, and usually developed by a single, highly focused team.
A Service is Born
Each service has its own lifecycle, test suites, packaging, deployment, and monitoring. This tends to drive higher velocity among agile teams, since their focus is scoped to a specific service and they leverage the “contract” (interface) set up with other teams to ensure proper inter-service communication. While some may be tempted to write their own mocks of those internal dependencies, a more sustainable model consists of asking each team to provide a mock for its own service, so that everyone is using the same, latest version.

Once packaged into a container, the piece of code can be tested locally. This step is too often overlooked, especially when migrating from legacy software, but it is highly critical for quick feedback loops during development. Then, and only then, once the unit tests have run successfully in isolation – no external dependency or cloud service allowed here – the code is deployed to a production-like staging server, where integration tests run against both mocked internal services and real cloud services. Finally, the same container, stored in a repository, can be promoted and deployed to production, where it is expected to work the same way it did during the tests.
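To make the “each team publishes a mock of its own service” idea concrete, here is a minimal Python sketch. The `BillingClient` contract, `MockBillingClient`, and `checkout` function are hypothetical names invented for illustration; the point is only that consumers test their business logic against the contract, in isolation, without calling the real service.

```python
# Hypothetical sketch: the "billing" team ships a contract plus a mock that
# implements it, so consuming teams always test against the latest interface
# without any network call or cloud dependency.
from abc import ABC, abstractmethod
import unittest


class BillingClient(ABC):
    """Contract ("interface") the billing team publishes for other teams."""

    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> bool:
        ...


class MockBillingClient(BillingClient):
    """Mock maintained by the billing team itself, kept in sync with the real API."""

    def __init__(self):
        self.charges = []

    def charge(self, customer_id: str, amount_cents: int) -> bool:
        self.charges.append((customer_id, amount_cents))
        return True


def checkout(billing: BillingClient, customer_id: str, cart_total_cents: int) -> str:
    """Consumer's business logic: depends only on the contract, not the real service."""
    return "paid" if billing.charge(customer_id, cart_total_cents) else "failed"


class CheckoutTest(unittest.TestCase):
    def test_checkout_charges_customer(self):
        billing = MockBillingClient()
        self.assertEqual(checkout(billing, "cust-42", 1999), "paid")
        self.assertEqual(billing.charges, [("cust-42", 1999)])


if __name__ == "__main__":
    unittest.main()  # runs in isolation: no external dependency or cloud service
```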
With Freedom Comes Responsibility
The autonomy given to teams to control the entire lifecycle of their services, including deployments, freed engineering organizations from the traditional monthly or yearly global release cycles and helped them deploy more, and faster. This reduced time to market, provided shorter feedback cycles through continuous A/B tests, and enabled faster bug fixes. At the same time, however, it required more thinking ahead when planning deployments (e.g. ensuring proper backward compatibility so teams do not have to maintain a matrix of compatible service versions) and teams with more well-rounded skill sets (DevOps).
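As a small illustration of what planning for backward compatibility can mean in practice, here is a hedged Python sketch in which a consumer accepts both the old and the new payload shape during a rolling deployment; the event and field names are invented for the example.

```python
# Hypothetical sketch of a backward-compatible payload change: a new optional
# "currency" field is introduced, but the consumer still accepts old-format
# events so old and new service versions can coexist during a rolling deployment.
from typing import Any, Dict


def parse_order_event(event: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "order_id": event["order_id"],              # required in both versions
        "amount_cents": event["amount_cents"],      # required in both versions
        "currency": event.get("currency", "USD"),   # new field, defaulted for old producers
    }


if __name__ == "__main__":
    old_event = {"order_id": "o-1", "amount_cents": 500}
    new_event = {"order_id": "o-2", "amount_cents": 700, "currency": "EUR"}
    print(parse_order_event(old_event))   # currency falls back to "USD"
    print(parse_order_event(new_event))   # currency is "EUR"
```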
While velocity may have increased locally, this shift also raised new challenges. The number of services tended to rise quickly, soon becoming a major pain to manage and creating the need for orchestration tools. Debugging and testing a single service locally became easier, but observability across multiple services was quite challenging, and introducing tracing became vital to debug and follow a request’s lifecycle in a production environment. Centralized log management to monitor and debug distributed systems became critical as well. Another risk that arose from the autonomy given to teams to work independently was the lack of collective knowledge and consistency. Teams that were empowered to pick their own frameworks and deployment tools without limitations may have moved faster, but there was some negative fallout, since a level of overall consistency has benefits of its own, e.g. rotation between teams, developing subject matter expertise, and knowledge exchange.
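As a minimal illustration of following a request across services, here is a simplified Python sketch that propagates a correlation ID through logs and downstream calls. Real systems typically rely on a dedicated tracing standard and a central logging stack; the `X-Correlation-Id` header and service names here are assumptions made purely for the example.

```python
# Minimal sketch (not a full tracing system): every incoming request carries a
# correlation ID that is attached to each log line and forwarded downstream,
# so a single request can be followed across service boundaries.
import logging
import uuid

logging.basicConfig(format="%(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("orders")


def handle_request(headers: dict) -> dict:
    # Reuse the caller's ID if present, otherwise start a new trace.
    correlation_id = headers.get("X-Correlation-Id", str(uuid.uuid4()))
    log.info("correlation_id=%s msg=%s", correlation_id, "order received")

    # Forward the same ID on downstream calls so their logs can be joined with ours.
    downstream_headers = {"X-Correlation-Id": correlation_id}
    log.info("correlation_id=%s msg=%s", correlation_id, "calling billing service")
    return downstream_headers


if __name__ == "__main__":
    print(handle_request({}))                                  # starts a new trace
    print(handle_request({"X-Correlation-Id": "abc-123"}))     # continues an existing trace
```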
FaaS and Serverless Computing: The Next Step
Function-as-a-Service (FaaS) is often called serverless, but it is actually only one type of serverless service. A serverless service can be defined by three main criteria: pay-per-use (the customer does not pay for idle computation time), auto-scaling (on-demand automatic provisioning), and being a managed service. The major FaaS promise of letting development teams focus only on the code and the business logic (down to the level of a single function), while the cloud provider takes care of all the rest, led developers to replace their microservices with nanoservices, often event-driven, or to adopt them on top of their existing services. There is a substantial gain in cost (think of handling a spike of tens of thousands of requests for 5 minutes and being billed only for those 5 minutes, thanks to transient resources) and flexibility (e.g. AWS Lambda can easily be used to glue multiple services together into complex workflows, ETL jobs, webhook receivers, and more), but it does not come without its own challenges.
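To give a feel for what such an event-driven nanoservice looks like, here is a minimal AWS Lambda-style handler sketch in Python, using the common `handler(event, context)` signature; the event shape and the transformation are invented for the example and would differ for a real trigger.

```python
# Minimal sketch of an event-driven "nanoservice": a Lambda-style handler that
# reacts to an event and glues two steps of a workflow together. The "records"
# payload shape is an assumption for illustration, not a real trigger format.
import json


def handler(event, context):
    # Triggered by an event (e.g. an upload or a webhook), transform the payload
    # and hand the result to the next step of the workflow.
    records = event.get("records", [])
    transformed = [
        {"id": r["id"], "amount_cents": round(float(r["amount"]) * 100)}
        for r in records
    ]
    # In a real deployment this would write to a queue, a table, or another service.
    return {"statusCode": 200, "body": json.dumps({"processed": len(transformed)})}


if __name__ == "__main__":
    fake_event = {"records": [{"id": "r-1", "amount": "19.99"}]}
    print(handler(fake_event, context=None))
```

Functions like this are billed only while they run, which is where the cost advantage for short, spiky workloads comes from.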
In terms of operations, teams need to develop expertise in latency, CPU, and memory requirements, and to manage timeouts, retries, traceability, and potentially hundreds of functions running in the cloud. Development teams also need to choose between two models for source control: a FaaS monorepo (a single repository holding all of the functions’ code) on one end, and a repository per function on the other. The right model needs to be considered carefully, each one having its own merits – or, alternatively, one repository per “service”, where a service is composed of multiple related functions. And trust us, we’re just scratching the surface here.
Conclusion: To Infinity…
It is undeniable that the last few years have seen a major shift in how software is developed and shipped and in how development teams work. Only a few years back, developing and deploying even a static website could take several weeks or months. It is now a matter of seconds to deploy a function in the cloud that automatically scales up and handles high-traffic spikes while costing only a few cents. Technology is enabling teams to deliver business value faster. We predict that the two trends below will continue to evolve and balance each other:
- Highly complex legacy systems will continue to be broken down into autonomous subsystems, and traditional development practices will continue to modernize in order to support the rapid-release mentality.
- Teams that use these fast-paced trends carelessly, without the proper checks and balances in place, will eventually slow down dramatically due to quality issues when trying to scale their systems and workforces.
Development teams that find the right balance between these approaches will be able to leverage them, together with promising future technologies, and enable their businesses to innovate in new ways while remaining laser-focused on their core business value.
About the Author
David Melamed is a Sr. Technical Leader working in the Cloud Security CTO Office at Cisco on various strategic projects for the company. With over 18 years of experience in leading research, advanced software development in high-scale environments and cloud architecture, he is a regular speaker at local meetups and international conferences. He also holds a PhD in Bioinformatics from Paris XI University, France.