Introduction
For those of us who work with technologies every day, it’s important to remember one key thing: every topic is new to someone somewhere every day.
With that in mind, we are starting a series of posts here that begins with the basics to help you build your knowledge of modern application delivery. Think of it as Containers 101.
To understand what containers are and how they benefit application developers, devops, and operations teams, let’s look at an essential change in the architecture of applications: the use of microservices.
What are Microservices?
Microservices are an evolution of a software architecture concept that developed in the 1990s and became widespread in the 2000s – service-oriented architecture (SOA). SOA defines an application as a collection of services. A service is an independent and self-contained function that is well-defined and stateless. Services act together as an application by taking input from each other (or, at one end of the application pipeline, from a user or other input source), performing some processing on the data, and passing it on to another service (or, at the other end of the pipeline, to some data store or to a user).
Services are reusable – that is, the same service can be connected to many different services, often from different applications with the same needs. Here’s a very simple example: whether it is a person, a command shell, or another program that needs to convert a domain name to an IP address, there can be a single Domain Name Service in the environment that resolves those requests.
Many of today’s developers were exposed to SOA in the form of web services: functions exposed over web protocols such as HTTP, with their inputs and outputs exchanged as structured requests and responses via REST APIs. These services communicate with each other over networks. Services can also use other communication mechanisms, for example, shared memory.
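To make that concrete, here is a minimal sketch in Go of a stateless web service of the kind described above: it accepts input over HTTP, does some processing, and returns a structured JSON response. The endpoint name, the port, and the uppercasing task are illustrative assumptions for this sketch, not anything from a specific product.

```go
// A minimal sketch of a stateless web service: every request carries
// all the input the service needs, and nothing is kept between requests.
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

func main() {
	// Register a single endpoint that takes input, processes it, and
	// returns a structured JSON response.
	http.HandleFunc("/uppercase", func(w http.ResponseWriter, r *http.Request) {
		input := r.URL.Query().Get("text")
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{
			"input":  input,
			"output": strings.ToUpper(input),
		})
	})
	// The port number is an arbitrary choice for this example.
	http.ListenAndServe(":8080", nil)
}
```

Any client that speaks HTTP – a browser, a curl command, or another service – can call this endpoint, which is exactly what makes such a service reusable.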
Microservices are the next step: monolithic applications that traditionally run on a single server (or redundantly in a cluster) are decomposed – or new applications are built – into a collection of small, well-defined units of processing. Microservices may run on the same system or across the nodes of a cluster.
The benefits of using microservice-based architecture include:
- functions can be shared with other applications
- functions can be updated without rebuilding and redeploying the entire application (continuous delivery)
- functions can be scaled up and down independently, making it easy to deploy resources where they are needed
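To illustrate how such small units compose, here is a hedged sketch of a second piece – acting as a client – that calls the hypothetical uppercase service from the earlier example over the network. The URL and port are assumptions carried over from that sketch; the point is that each piece can be built, deployed, updated, and scaled on its own, as long as its interface stays stable.

```go
// A sketch of one service (or client) consuming another over HTTP.
// It only needs the other service's address, not any knowledge of how,
// where, or in how many instances that service is running.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Call the hypothetical uppercase service from the previous sketch.
	resp, err := http.Get("http://localhost:8080/uppercase?text=containers+101")
	if err != nil {
		fmt.Println("could not reach the uppercase service:", err)
		return
	}
	defer resp.Body.Close()

	// Print the structured JSON response as-is.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```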
Using microservices has become much simpler with the development of a relatively new architectural construct: containers.
What are Containers?
Virtual machines became widely adopted in the 1990s and 2000s on industry-standard system architectures because they made two very important things possible: isolating an application from the behavior of other applications on the same system or cluster, and packaging up all of the resources an application or set of applications requires into an easily deployable, easily portable format. But for many applications, and especially for microservices, virtual machines are too resource-intensive a solution. Each virtual machine must carry not only the application or service and all of its dependencies, but also an entire operating system environment and the software emulation of a standalone computer.
Containers are a “best of both worlds” architectural idea: they attain many of the isolation and packaging benefits of virtualization, but do so with lighter-weight mechanisms inside a shared operating system. Because containers don’t need to boot a new operating system environment, they can start and stop rapidly, often in less than a second – especially useful when scaling them up and down to accommodate changing demand. Because each container is far smaller than a virtual machine, many more of them can run on the same hardware simultaneously. For the same reason, they are especially well suited to microservices, of which a well-decomposed application may have a large number. Yet containers still carry with them the libraries and commands that each application or service needs – making it possible for apps and services built on different OS releases to coexist on the same hardware.
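As a small, hedged illustration of that last point, the sketch below – which assumes the Docker CLI and the public alpine and ubuntu images are available on the host – runs the same command in containers built from two different OS userlands on one machine. Each container brings its own libraries and commands, while both share the host’s kernel.

```go
// A sketch that runs `cat /etc/os-release` inside containers built from
// two different base images on the same host (assumes the Docker CLI
// is installed and can pull the public alpine and ubuntu images).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, image := range []string{"alpine", "ubuntu"} {
		// `docker run --rm` starts a throwaway container from the image
		// and removes it when the command exits.
		out, err := exec.Command("docker", "run", "--rm", image,
			"cat", "/etc/os-release").Output()
		if err != nil {
			fmt.Println("could not run a container from", image+":", err)
			continue
		}
		fmt.Printf("--- %s ---\n%s\n", image, out)
	}
}
```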
What aren’t Containers?
Containers are not virtual machines. They do not provide the heavyweight security and performance isolation that virtual machines can offer. (Though there are new container implementations in development that come close; we will discuss those in a future educational blog post.)
Containers are not installation packages – they take the place of software installation. Containers can be deployed on demand to specific servers, and their deployment replaces the complex tasks of installing software on each of them.
Containers are not whole applications. Well, to be honest, some of them can be: there are certainly gains in flexibility of deployment and management to be had by simply putting a monolithic application in a container. But the real payoff comes from rearchitecting legacy applications into microservices, and designing new ones that way. Note that the journey to microservices and application modernization need not be all-or-nothing: many organizations start with their existing applications, gradually chipping away at them to break off reusable and scalable capabilities into microservices.
Where Do I Go From Here?
If you’re new to containers and microservices, I hope this has given you a good introduction. The next post building on this knowledge will be available in about two weeks. If you want to read ahead, SUSE Linux Enterprise Server includes a containers module; you can find more information about it on our website and in this blog. And SUSE CaaS Platform delivers containers and the capabilities to manage them in a purpose-built product. If the reading gets deep for you, though, stop back at the SUSE Blog for more of Containers 101 soon.