Microservices and containerization are two major, tightly coupled trends that are substantially shaping software evolution. This is where Docker comes in. Docker is an excellent containerization tool for managing and deploying microservices: it encapsulates a microservice into what is called a Docker container, which can then be maintained and deployed independently. With Docker, you can make your application independent of the host environment. If you follow a microservices architecture, you can encapsulate each microservice in its own Docker container. Docker containers are lightweight, resource-isolated environments through which you can build, maintain, ship and deploy your application. Among its great advantages we could summarize the following:
- Docker has excellent community support and is built with microservices in mind
- It is lightweight compared to VMs, making it cost- and resource-efficient
- It provides uniformity across development and production environments making it a suitable fit for building cloud-native applications
- It provides facilities for continuous integration and deployment
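To make this concrete, a minimal sketch of containerizing a single microservice might look like the following Dockerfile (the service, its files and its port are hypothetical examples, not part of the program material):

```dockerfile
# Hypothetical Dockerfile for a small Python microservice
FROM python:3.12-slim                              # lightweight base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt # install dependencies into the image
COPY . .
EXPOSE 8000                                        # port the service listens on
CMD ["python", "app.py"]                           # process the container runs
```

Building and running it is then independent of the host environment: `docker build -t demo-service .` followed by `docker run -p 8000:8000 demo-service`.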
Kubernetes fills this gap and provides plenty of benefits to both developers and ops teams. Developers can now deploy their applications themselves quite easily, while ops engineers can monitor these applications and reschedule them in the event of a hardware failure. The focus for system administrators (sysadmins) thus shifts from supervising individual apps to supervising and managing Kubernetes and the rest of the infrastructure, while Kubernetes itself takes care of the applications. More precisely, Kubernetes is an open source container orchestration platform, originally developed by Google, for managing microservices or containerized applications across a distributed cluster of nodes. Kubernetes is highly resilient and supports zero-downtime deployments, rollbacks, scaling, and self-healing of containers. As such, its main objective is to hide the complexity of managing a fleet of containers. It can run on bare-metal machines or on public or private cloud platforms such as AWS, Azure and OpenStack. In either case, it abstracts away the hardware infrastructure and exposes the whole datacenter as a single enormous computational resource, easing the deployment and bootstrapping of software components without the need to know about the actual servers underneath. Where Kubernetes really starts to shine is in alleviating the burden of manually managing many containers in a large-scale production environment. Set up properly, it can save developers time and money by automating infrastructure resource management: when an instance fails, for example, Kubernetes automatically re-creates it. The end result is a smoother user experience and less downtime for applications in the cloud.
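As an illustration of this self-healing behavior, a minimal Deployment manifest might look like the following sketch (the names and image are hypothetical). Kubernetes continuously reconciles the cluster so that three replicas of the container are always running, re-creating any that fail:

```yaml
# Hypothetical Deployment for a containerized microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service
spec:
  replicas: 3                   # desired state: three running copies
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: demo/service:1.0   # hypothetical image name
          ports:
            - containerPort: 8000
```

Applied with `kubectl apply -f deployment.yaml`, deleting one of the pods simply causes the Deployment controller to start a replacement, without operator intervention.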
With more and more big companies adopting the Kubernetes model as the best way to run apps, it is becoming the standard way of running distributed applications both in the cloud and on local on-premises infrastructure. Using Kubernetes comes with a learning curve, but the rewards are well worth the effort.
- This specific Code.Learn program lasts 2 weeks (2 Fridays & 2 Saturdays) and consists of 24 hours of lectures and hands-on exercises on real case studies and projects.
Key Objectives – Curriculum (High level)
The core aim of this program is to present, explore and adequately cover, through extended real-life business case studies and industry scenarios, the following aspects:
- Introduction to Docker
- Installation of Docker and other tools
- Command line structure – Basic Information
- Container lifecycle (run, stop, rm)
- Docker Networking Basics
- Container Images – Docker Hub Registry
- Build Images – The Dockerfile Basics
- Persistent Data and Volumes
- Docker Compose: The Multi-Container Tool
- Architecture of Kubernetes in detail
- Kubernetes Installation
- Running containers in Kubernetes
- Attaching storage to containers
- Managing computational resources
- Running jobs
- Automatic scaling
- Container security
- Networking and load balancers
- Updates, gradual rollouts and autoscaling
- Monitoring and logging
- Best practices for developing applications
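By way of illustration, the scaling and gradual-rollout topics listed above map onto standard kubectl operations such as the following (the deployment and image names are hypothetical):

```shell
# Scale a deployment out to five replicas
kubectl scale deployment demo-service --replicas=5

# Roll out a new image version gradually (rolling update)
kubectl set image deployment/demo-service demo-service=demo/service:1.1

# Watch the rollout progress, and undo it if something goes wrong
kubectl rollout status deployment/demo-service
kubectl rollout undo deployment/demo-service
```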
The lessons can be carried out:
- Inside a physical classroom with an instructor,
- In an online environment as a virtual classroom, with live connection with the instructor through video conferencing; or lastly,
- In a combination of both physical and online settings.
The method of teaching will depend on the current conditions, and also on the participants’ preferences.
In the online setting, the instructor delivers the taught material through screen sharing, live broadcast, or by working in the cloud, where attendees can see and interact with everything in real time. Attendees can seamlessly and actively participate and ask questions, as they would in a physical classroom. Additionally, they can collaborate on team projects and deliver assignments and hands-on projects that the instructor can review and give feedback on easily and without delay.
Education & Experience
Computer scientists, software engineers, developers and system administrators are welcome to participate in this code.learn program, unlock the full potential of the topics taught, and upskill for their future careers.