Betterment in the Container World with Red Hat OpenShift

Kushank Patel
11 min read · Mar 13, 2021

What is OpenShift?

OpenShift enables efficient container orchestration, allowing rapid container provisioning, deployment, scaling, and management. It enhances the DevOps process by streamlining and automating container management. Built on Red Hat® Enterprise Linux® and Kubernetes, OpenShift Container Platform provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, along with integrated application runtimes and libraries.

OpenShift is a layered system in which each layer is tightly bound to the next through Kubernetes and Docker. The architecture is designed to support and manage Docker containers, which are hosted on top of these layers using Kubernetes. Unlike the earlier OpenShift V2, OpenShift V3 is built on containerized infrastructure: Docker creates lightweight Linux-based containers, and Kubernetes orchestrates and manages those containers across multiple hosts.

Components of OpenShift

A key part of the OpenShift architecture is managing containerized infrastructure with Kubernetes, which handles the deployment and management of that infrastructure. A Kubernetes cluster can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

Kubernetes Master Machine Components

Etcd

Etcd stores the configuration information used by every node in the cluster. It is a highly available, distributed key-value store that can be spread across multiple nodes. Because it may hold sensitive information, it should only be accessible to the Kubernetes API server.
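
To illustrate that access restriction, here is a hypothetical fragment of a kube-apiserver static pod manifest (loosely following the kubeadm-style layout; the image, address, and certificate paths are placeholders). Only the API server holds the client certificates needed to talk to etcd.

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.21.0         # example image
        command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379          # where the cluster state is stored
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt  # client certificates that restrict
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt  # etcd access to
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key   # the API server only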

API Server

The Kubernetes API server provides all operations on the cluster through its API. It implements an interface that different tools and libraries can readily communicate with. A kubeconfig file packages the server details and credentials that clients use for this communication. In short, the API server exposes the Kubernetes API.
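
As a rough sketch, a minimal kubeconfig file looks like the following; the cluster name, server URL, user, and project are placeholders, and real credentials are omitted.

    apiVersion: v1
    kind: Config
    clusters:
    - name: my-openshift-cluster                  # placeholder cluster name
      cluster:
        server: https://api.example.com:6443      # placeholder API server URL
    users:
    - name: developer                             # placeholder user
      user:
        token: REDACTED                           # credentials go here
    contexts:
    - name: dev-context
      context:
        cluster: my-openshift-cluster
        user: developer
        namespace: my-project                     # placeholder project/namespace
    current-context: dev-context

Tools such as kubectl and oc read this file to know which API server to call and which credentials to present.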

Controller Manager

This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. It can be thought of as a daemon running in a non-terminating loop, collecting information and sending it to the API server. It works toward bringing the current state of the cluster to the desired shared state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs these different controllers to handle nodes, endpoints, and so on.
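
The "desired state" that the controller manager reconciles is declared in object specs. As a minimal sketch (the names and image are hypothetical), a replication controller that keeps three copies of a pod running looks like this; if a pod dies, the controller creates a replacement to bring the cluster back to the declared state.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                 # hypothetical name
    spec:
      replicas: 3                  # desired state: three running copies of the pod
      selector:
        app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21      # example image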

Scheduler

The scheduler is a key component of the Kubernetes master. It is a service in the master responsible for distributing the workload: it tracks resource utilization on the cluster nodes and places workloads on nodes that have the resources available to accept them. In other words, it is the mechanism that allocates pods to suitable nodes.
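
Resource requests in the pod spec are what the scheduler evaluates when filtering nodes. A minimal sketch, with hypothetical names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: resource-demo          # hypothetical pod name
    spec:
      containers:
      - name: app
        image: nginx:1.21          # example image
        resources:
          requests:
            cpu: 500m              # the scheduler only places this pod on a node with
            memory: 256Mi          # at least this much unreserved CPU and memory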

Kubernetes Node Components

Docker

The first requirement of each node is Docker, which runs the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service

This is a small service on each node that relays information to and from the control plane. It interacts with the etcd store to read configuration details and write values, and it communicates with the master components to receive commands and work. The kubelet process then takes responsibility for maintaining the state of that work and of the node server. It manages the pods on its node, their volumes and secrets, new container creation, health checks, and so on.
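
One concrete way the kubelet maintains the state of its workloads is by running the health probes declared in the pod spec and restarting containers that fail them. A minimal sketch, with hypothetical names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo             # hypothetical pod name
    spec:
      containers:
      - name: web
        image: nginx:1.21          # example image
        livenessProbe:             # the kubelet on the node runs this check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10        # the container is restarted if the check keeps failing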

Kubernetes Proxy Service

This is a proxy service that runs on each node and helps make services available to external hosts. It forwards requests to the correct containers and can perform primitive load balancing. It makes sure that the networking environment is predictable and accessible while remaining isolated. On each node it manages the network rules, port forwarding, and so on.
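
The forwarding rules that the proxy programs are driven by Service objects. As a sketch (the names and ports are hypothetical), a NodePort service that exposes a set of pods outside the node could look like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service            # hypothetical service name
    spec:
      type: NodePort               # expose the service on a port of every node
      selector:
        app: web                   # traffic is forwarded to pods carrying this label
      ports:
      - port: 80                   # service port inside the cluster
        targetPort: 8080           # container port the proxy forwards to
        nodePort: 30080            # external port opened on each node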

Integrated OpenShift Container Registry

The OpenShift container registry is Red Hat’s built-in storage unit for Docker images. The latest integrated versions of OpenShift also provide a user interface to view the images held in this internal storage. The registry holds images with specified tags, which are later used to build containers from them.
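
In practice, tagged images in the integrated registry are usually tracked through image streams. A minimal sketch, with hypothetical names, of an image stream tag that points at an external image:

    apiVersion: image.openshift.io/v1
    kind: ImageStream
    metadata:
      name: my-app                 # hypothetical image stream name
      namespace: my-project        # hypothetical project
    spec:
      tags:
      - name: latest               # tag tracked by the image stream
        from:
          kind: DockerImage
          name: docker.io/library/nginx:1.21   # external image the tag points at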

Frequently Used Terms

Image

Kubernetes (Docker) images are the key building blocks of containerized infrastructure. At present, Kubernetes only supports Docker images. Each container in a pod runs from its own Docker image. When configuring a pod, the image property in the configuration file uses the same syntax as the Docker command.
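
For example, a minimal pod definition (the pod name is hypothetical) uses the same repository:tag syntax in the image field that a docker pull command would use:

    apiVersion: v1
    kind: Pod
    metadata:
      name: image-demo             # hypothetical pod name
    spec:
      containers:
      - name: web
        image: nginx:1.21          # same <repository>:<tag> syntax as "docker pull nginx:1.21"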

Project

A project can be seen as the renamed version of the domain concept from the earlier OpenShift V2.
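
As a rough sketch, a project object (the name and annotations here are hypothetical) wraps a Kubernetes namespace with some extra metadata:

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: my-project                                # hypothetical project name
      annotations:
        openshift.io/display-name: My Project         # human-friendly name shown in the console
        openshift.io/description: Example project used for illustration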

Container

Containers are created once an image is deployed on a Kubernetes cluster node.

Node

A node is a working machine in the Kubernetes cluster, also known as a minion of the master. Nodes are working units that can be physical machines, VMs, or cloud instances.

Pod

A pod is a collection of containers and their storage running inside a node of a Kubernetes cluster. A pod can contain multiple containers; for example, a database container and a web server container can run inside the same pod.
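
A minimal sketch of that example, with hypothetical names and images; both containers share the pod’s network and can share its volumes:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-and-db             # hypothetical pod name
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}               # pod-local storage available to both containers
      containers:
      - name: web-server
        image: nginx:1.21          # example web server image
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      - name: database
        image: mysql:8.0           # example database image
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme          # placeholder only; use a Secret in practice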

Benefits of OpenShift

OpenShift enables your development team to focus on doing what they do best: designing and testing applications. When they are freed from spending excessive time managing and deploying containers, they can speed up the development process and get products to market more rapidly.

Consider the case of a company specializing in the design and sale of integrated circuits. The cycle of innovation in this industry is relentless; as new technologies arise, chipmakers who can most effectively design chips for these new uses will be the ones who gain market share. For example, the rapid rise of the smartphone has been a boon to companies that have designed chips to power it.

Orchestrating container usage via the OpenShift platform provides a marked efficiency advantage to chipmakers who utilize containers for the next-generation virtualization benefits they offer. Deploying an increased number of apps on existing system resources enables a chipmaker to provide its developers with an expanded toolset to increase their ability to innovate. In an industry where a chipmaker’s main products can become outdated if not obsolete within a year or less, the ability to innovate and bring a product to market rapidly is a significant competitive advantage.

Accelerate application development

Deploying and managing containers at scale is a complicated process. By streamlining and automating container provisioning, deployment, scaling, and management, OpenShift cuts down on the time that would otherwise be spent managing containers, improving your company’s productivity and speeding up application development.

Accelerated application development is especially valuable in enterprises where a company’s IT system must accommodate rapidly evolving functions. An example of this is the cybersecurity industry. Companies in this industry face an arms race against hackers, who are continually looking for software flaws to exploit. When an exploit is found, cybersecurity firms are expected to respond with fixes as rapidly as possible, often in days, if not hours.

Self-service provisioning

Assembling the proper tools to create applications on your system architecture can be a challenge, especially at the enterprise level. OpenShift makes the process easy by allowing for the integration of the tools you use most across your entire operating environment.

This self-service provisioning helps improve developer productivity by allowing your development team to work with the tools they are most comfortable using, speeding up the development process by enabling faster creation and deployment of applications. At the same time, OpenShift allows your operations staff to retain control over the environment as a whole.

Enable DevOps and department-wide collaboration

The DevOps process relies upon transparent communication between all involved parties. Containerization provides a convenient means of enabling your IT operations staff to test instances of a new app. OpenShift assists this process by making it easy to test apps throughout your IT architecture without being impeded by framework conflicts, deployment issues, or language discrepancies.

One industry that can benefit from OpenShift’s enablement of enhanced DevOps processes is the web hosting and development field. Companies competing in this industry are constantly racing to offer their customers enhanced functionality. For instance, as web commerce increases by leaps and bounds, companies and individuals progressively look to sell their products over the web. They can do this by adding web sales functionality to their own sites via widgets designed for this purpose, or by purchasing sites with built-in sales functionality.

Why Use OpenShift?

OpenShift provides a common platform for enterprise units to host their applications on the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of its key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design their required environment to their own specifications.

OpenShift provides different kinds of service level agreements when it comes to service plans.

Free − This plan is limited to three gears, with 1GB of space for each.

Bronze − This plan starts with 3 gears and can scale up to 16 gears, with 1GB of space per gear.

Silver − This is the 16-gear Bronze plan, but with 6GB of storage per gear at no additional cost.

Other than the above features, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the freedom to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.

Features

There are multiple features supported by OpenShift. A few of them are:

  • Multiple Language Support
  • Multiple Database Support
  • Extensible Cartridge System
  • Source Code Version Management
  • One-Click Deployment
  • Multi Environment Support
  • Standardized Developers’ workflow
  • Dependency and Build Management
  • Automatic Application Scaling
  • Responsive Web Console
  • Rich Command-line Tools
  • Remote SSH Login to Applications
  • REST API Support
  • Self-service On-Demand Application Stack
  • Built-in Database Services
  • Continuous Integration and Release Management
  • IDE Integration
  • Remote Debugging of Applications

CASE STUDY

Ford Motor Company

About Ford Motor Company

Ford Motor Company is a global company based in Dearborn, Michigan. The company designs, manufactures, markets, and services a full line of Ford cars, trucks, SUVs, electrified vehicles, and Lincoln luxury vehicles, provides financial services through Ford Motor Credit Company, and is pursuing leadership positions in electrification; mobility solutions, including self-driving services; and connected services. Ford employs approximately 190,000 people worldwide. More information about Ford, its products, and Ford Motor Credit Company is available on the company’s corporate website.

Challenge

Ford’s business units host a robust, engaged developer community. But collaboration between hundreds of thousands of employees and across thousands of internal applications and sites created complexity that Ford’s traditional IT environment and development approaches could not accommodate. Even with hypervisors and virtual machines, the company struggled with inefficient resource use and high staffing costs. Ford wanted a new environment to help it use its resources more efficiently.

Ford Motor Company adopts Kubernetes and Red Hat OpenShift

Ford Motor Company seeks to provide mobility solutions at accessible prices to its customers, including dealerships and parts distributors who sell to a variety of retail and commercial consumers. To speed delivery and simplify maintenance, the company sought to create a container-based application platform to modernize its legacy stateful applications and optimize its hardware use. With this platform, based on Red Hat OpenShift and supported by Red Hat and Sysdig technology, Ford has improved developer productivity, enhanced its security and compliance approach, and optimized its hardware use to improve operating costs. Now, the company can focus on exploring new ways to innovate, from big data to machine learning and artificial intelligence.

Benefits

  • Improved productivity with the standardized development environment and self-service provisioning
  • Enhanced security with enterprise technology from Red Hat and continuous monitoring provided by Sysdig
  • Significantly reduced hardware costs by running OpenShift on bare metal
  • Performance and security improvements help Ford deliver services and work with partners more efficiently
  • Significantly increased developer productivity
  • Successful adoption of OpenShift and DevOps creates the foundation for new opportunities to innovate

Ford is already experiencing significant growth in demand for its OpenShift-based applications and services. It aims to migrate most of its legacy, on-premises deployments within the next few years.

The company is also looking for ways to use its container platform environment to address opportunities like big data, mobility, machine learning, and AI to continue delivering high-quality, timely services to its customers worldwide.

“Kubernetes and OpenShift have really forced us to think differently about our problems because we can’t solve new business challenges with traditional approaches. Innovation and constantly exploring and questioning are the only way we can move forward,” said Puranam. “It’s a journey, but one that we have a good start on. Thanks to having the right set of partners, with both Red Hat and Sysdig, we’re well-situated for future success.”

BUSINESS OUTCOME

Increase productivity while reducing costs

With the new multitenant Red Hat OpenShift environment, dealers and plant operators gain access to new features, fixes, and updates faster. Many processes for stateful workloads now take less time, and the company has seen a productivity improvement in its Containers-as-a-Service support. Shifting to a container-based approach requires less initial hardware investment and delivers ongoing savings as Ford continues to modernize and migrate its legacy applications.

CONCLUSION

Red Hat solutions built around the Red Hat OpenShift Container Platform provide an excellent foundation for a production-ready environment: they simplify the deployment process, provide the latest best practices, and ensure stability by running applications in a highly available environment.


Kushank Patel

I am pursuing my master’s degree at the University of Windsor and am interested in the DevOps field.