Course Audience and Requirements

Audience

This course is aimed at Linux administrators and software developers who are starting to work with containers and wondering how to manage them in production. In this course, you will learn the key principles that will put you on the journey to managing containerized applications in production.

Knowledge/Skills

To make the most of this course, you will need the following:

  • A good understanding of Linux

  • Familiarity with the command line

  • Familiarity with package managers

  • Familiarity with Git and GitHub

  • Access to a Linux server or Linux desktop/laptop

  • VirtualBox installed on your machine, or access to a public cloud

Software Environment

The material produced by The Linux Foundation is distribution-flexible. This means that technical explanations, labs and procedures should work on most modern Linux distributions, and we do not promote products sold by any specific vendor (although we may mention them for specific scenarios).

In practice, most of our material is written with the three main Linux distribution families in mind:

  • Debian/Ubuntu

  • Red Hat/Fedora

  • openSUSE/SUSE

The distributions our students use tend to be one of these three families, or a product derived from them.

Lab Environment

The lab exercises were written and tested using Ubuntu instances running on Google Cloud Platform. They have been written to be vendor-agnostic, so they can also run on AWS, on local hardware, or inside virtual machines, giving you the most flexibility and options.

Note that each platform will have different access methods and considerations.

Each node has 3 vCPUs and 7.5 GB of memory, and runs Ubuntu 18.04. Smaller nodes should work, but you should expect slower responses. Other operating system images are also possible, but some command output may differ slightly.
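
Once a node is up, you can confirm it has the expected resources with standard Linux utilities; a quick check along these lines works on any of the distribution families mentioned above:

  # Confirm vCPU count, memory, and OS release on a lab node
  nproc                   # number of available vCPUs
  free -h                 # total and available memory
  cat /etc/os-release     # distribution name and version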

Using GCP requires setting up an account, and you will incur expenses if you use nodes of the suggested size. For more information, review the Quickstart Using a Linux VM guide.
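
As an illustration only, a node of roughly the suggested size can be created with the gcloud CLI. The instance name, zone, and disk size below are examples; adjust them to your own project, region, and billing limits:

  # Sketch: create one Ubuntu 18.04 lab node; n1-standard-2 (2 vCPUs, 7.5 GB)
  # is close to the suggested size, and n1-standard-4 meets or exceeds it
  gcloud compute instances create lab-node-1 \
      --zone=us-central1-a \
      --machine-type=n1-standard-2 \
      --image-family=ubuntu-1804-lts \
      --image-project=ubuntu-os-cloud \
      --boot-disk-size=20GB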

Amazon Web Services (AWS) is another provider of cloud-based nodes and also requires an account; you will incur expenses for nodes of the suggested size. You can find videos and information on how to launch a Linux virtual machine on the AWS website.
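
A comparable sketch using the AWS CLI is shown below; the AMI ID, key pair, and security group are placeholders you would replace with values from your own account and region, and t2.xlarge is just one instance type that meets or exceeds the suggested size:

  # Sketch: launch one lab node on AWS; ami-EXAMPLE, my-lab-key, and sg-EXAMPLE
  # stand in for an Ubuntu 18.04 AMI in your region, an existing key pair,
  # and a security group that allows SSH (port 22)
  aws ec2 run-instances \
      --image-id ami-EXAMPLE \
      --instance-type t2.xlarge \
      --count 1 \
      --key-name my-lab-key \
      --security-group-ids sg-EXAMPLE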

Virtual machines created with hypervisors such as KVM, VirtualBox, or VMware can also be used for the lab systems. Putting the VMs on a private network can make troubleshooting easier. As of Kubernetes v1.16.1, the minimum (as in barely works) size for VirtualBox is 3 vCPU/4 GB memory/5 GB minimal OS for the master node, and 1 vCPU/2 GB memory/5 GB minimal OS for the worker node.
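
For reference, the VirtualBox sizing above can be applied from the command line with VBoxManage; the VM names are examples only, and attaching a disk and installing a minimal OS are not shown:

  # Sketch: allocate the minimum resources for a master and a worker node
  VBoxManage createvm --name lab-master --ostype Ubuntu_64 --register
  VBoxManage modifyvm lab-master --cpus 3 --memory 4096
  VBoxManage createvm --name lab-worker --ostype Ubuntu_64 --register
  VBoxManage modifyvm lab-worker --cpus 1 --memory 2048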

Finally, bare-metal nodes with access to the Internet will also work for the lab exercises.

If using a cloud provider like GCP or AWS, you should be able to complete the lab exercises using the free tier or credits provided to you. However, you may incur charges if you exceed the credits initially allocated by the cloud provider, or if the cloud provider’s terms and conditions change.
