Every application spawns its own processes, consumes CPU, RAM and disk space, and depends on other system components.

Migrating applications, running additional instances, and testing or separating them for security and stability all jeopardize the environment’s balance. Do you really have to reinstall the application and restore its configuration every time? What if you need a thousand, or ten thousand, copies, or each instance requires a different IP address? And what if the OS doesn’t include the required dependencies?

Problems like these are solved with a virtual machine, which can be migrated easily. At the same time, you can adjust its resources, assign network addresses and isolate the new system from the others. But why virtualize a whole system if you only need the application?

This is where containerization comes into play.

Containerization vs virtualization  

Most existing virtualization methods assume a stack consisting of the hardware, the operating system running on it, and a hypervisor, i.e. the controller of the next layer: the virtual machines. Each of these machines runs its own guest operating system as the bottom layer of its stack. Above it sit the libraries and binaries, and the application runs at the very top.

Virtual machines are isolated from each other, each forming a complete and independent environment. The system administrator can adjust the assigned hardware resources quite freely. However, applications running on virtual machines perform worse than those running directly on physical hardware, i.e. without virtualization, because the virtual machine’s operating system and the hypervisor consume hardware resources themselves. Just as a buyer who purchases goods through a chain of brokers must accept higher costs, each abstraction layer separating the physical hardware from the application erodes its performance. Fortunately, there is a way to slim down the abstraction stack.

Apps all packed and ready to go

The basic idea of containerization is to place the application, its processes, configuration and dependencies in a virtual unit called a container. From the application’s point of view, each container is a separate, independent instance of the runtime environment. Each has its own RAM and disk allocation, as well as a private IP address. Despite this isolation, containers can communicate with each other through well-defined information exchange channels.
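As an illustration, this packaging of an application with its configuration and dependencies is typically described in a Dockerfile. The sketch below assumes a hypothetical Python application with a `requirements.txt` dependency list:

```dockerfile
# Base layer: a minimal userland with the runtime the application needs
FROM python:3.12-slim

# Copy the dependency list and install the dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and its configuration
COPY . .

# The process the container runs in its isolated environment
CMD ["python", "app.py"]
```

Every container started from such an image gets its own filesystem, process tree and IP address, while all instances share the host’s kernel.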

Containerization removes the duplicated guest operating systems and the hypervisor, which dramatically narrows the performance gap between a containerized application and one running directly on physical hardware. At the same time, it offers the same scalability, portability and instance isolation.

However, keep in mind that containerization is by no means a successor to virtualization, or some “virtualization 2.0”. Rather, it is an improved and cheaper way to make the IT ecosystem more flexible, to match individual business needs, and to make it both on-prem- and cloud-ready.

Docker, Kubernetes and Docker Swarm

Implementing containerization, and then managing such a stack, requires an appropriate platform. The most well-known and widely used containerization tool is undoubtedly Docker: an open-source solution and a container image format. Thanks to its popularity, the technology has practically become synonymous with the word “containerization”.

When the containers multiply and are deployed across many servers, Kubernetes comes in handy. This most popular container orchestration platform was created by Google for its own needs and later released as open-source software. Kubernetes is used to manage, automate and scale containerized applications: it can handle a very large number of application instances and offers its own built-in form of load balancing.
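A sketch of what such management looks like in practice: a Kubernetes Deployment declares how many identical container instances should run, and a Service load-balances traffic across them. The image name `example/webapp:1.0` and the port numbers are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3            # Kubernetes keeps three identical instances running
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.0   # hypothetical application image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp          # the Service load-balances across all matching pods
  ports:
  - port: 80
    targetPort: 8080
```

Scaling to a thousand or ten thousand instances then becomes a one-line change to `replicas` (or a single `kubectl scale` command), rather than ten thousand installations.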

Kubernetes enables not only automation but also declarative configuration: you describe the desired state, and the platform works to maintain it. It is often compared with Docker Swarm, the orchestration software for Docker. The latter, although less flexible, requires much less configuration, offers a faster publishing process and is easier to use. The choice between the two often comes down to the preferences of the DevOps team and the needs of the project itself.
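Docker Swarm’s smaller configuration surface is visible in how little is needed to get a replicated, load-balanced service running. The command sketch below assumes a Docker host and the same hypothetical `example/webapp:1.0` image:

```
# Initialize a swarm on the current node (it becomes a manager)
docker swarm init

# Create a replicated service; Swarm schedules the replicas across
# the nodes and load-balances incoming traffic among them
docker service create --name webapp --replicas 3 -p 80:8080 example/webapp:1.0

# Scaling up or down is a single command
docker service scale webapp=10
```

There is no manifest to write: the desired state lives in the service definition, at the cost of the finer-grained control Kubernetes offers.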

SoDA in the cloud

Thanks to this favourable balance of pros and cons, containerization and orchestration are among the most dynamically developing fields of computer science, especially in the face of the growing demands of business IT.

CCA Europe.pl cooperates with 5 partner companies from the Polish Software Development Association (SoDA) in containerizing applications and migrating them to the cloud. Each of these companies has its own character and specialization, which lets us match a partner’s skill set precisely to a specific project:

  1. Specialization: applications built on a Java technology stack, with a well-thought-out application development process in line with Continuous Integration/Continuous Delivery. The partner offers a solid testing strategy based on the standard test pyramid, a Nexus artifact repository, and the building of Docker Swarm container farms. 
  2. Specialization: the AWS cloud, including Jenkins CI/CD and GitLab Pipelines for running OS scripts. 
  3. Specialization: automated, performance and stress tests for CI processes. The partner offers consulting services on testing strategies for CI processes.
  4. Specialization: Google Cloud, with experience in building a public cloud. We move Java-based applications to the cloud. The partner holds Kubernetes certifications for administration and development. 
  5. Specialization: the MS Azure cloud. The partner offers full coverage of the CI/CD process with Azure DevOps and Azure Pipelines, with Jenkins, GitLab, Bamboo or Concourse where applicable. Kubernetes or Docker Swarm can be freely selected, and Azure Artifacts or Artifactory serves as the artifact repository. We specialize in migrating .NET-stack applications to the cloud, but we also have experience with application environments based on other technology stacks.

Containerization makes microservice farms on cloud platforms possible. They enable not only easy migration of applications but also virtually unlimited adjustment of resources to the current load. 

With partners demonstrating such a wide range of competences, we can implement even very advanced CI/CD processes or migrate applications to the cloud.