Kubernetes

Abhishek Biswas
5 min read · Dec 31, 2020


What is Kubernetes?

When the going gets good, the good get boring — GeekWire

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.
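"Declarative configuration" simply means you describe the state you want and Kubernetes works to keep the cluster in that state. As a minimal, purely illustrative sketch (the names and the nginx image are placeholders, not taken from any particular setup), the manifest below declares that three replicas of a web container should always be running; Kubernetes starts them and restarts or reschedules any that fail:

```yaml
# Illustrative manifest (placeholder names and image): declare the desired
# state (three replicas of this container) and Kubernetes continuously
# reconciles the cluster toward it, replacing Pods that crash or are evicted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25   # placeholder image; any container image works
          ports:
            - containerPort: 80
```

You would hand such a file to the cluster (for example with kubectl apply -f), and Kubernetes takes care of making reality match the declaration.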

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Why do we need Kubernetes?

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s where Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
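As a rough sketch of one simple way to do a canary rollout (again, every name, label, and image below is illustrative), you can run a small "canary" Deployment of the new version alongside the stable one and let a shared Service spread traffic across both, roughly in proportion to their replica counts:

```yaml
# Illustrative canary setup: both Deployments carry the app: web label but
# differ in track, and the Service selects only app: web, so roughly 1 in 10
# requests reaches the canary (by replica count).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
        - name: web
          image: example/web:1.0   # current production version (placeholder)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
        - name: web
          image: example/web:2.0   # new version being canaried (placeholder)
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web               # matches both stable and canary Pods
  ports:
    - port: 80
      targetPort: 80
```

If the canary looks healthy, you scale it up and retire the stable Deployment; if not, you delete it and traffic falls back entirely to the stable Pods. Real-world canary setups often use an ingress controller or service mesh for finer-grained traffic splitting, but the label-and-replica approach above shows the core idea.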

What are the Kubernetes components and how do they work?

For this, you can refer to the link below, which explains the concepts in more detail.

Case Studies of a few MNCs that benefited from Kubernetes

NetEase

“A better life doesn’t have to be costly.”

Challenge

Its gaming business is one of the largest in the world, but that’s not all that NetEase provides to Chinese consumers. The company also operates e-commerce, advertising, music streaming, online education, and email platforms; the last of which serves almost a billion users with free email services through sites like 163.com. In 2015, the NetEase Cloud team providing the infrastructure for all of these systems realized that their R&D process was slowing down developers. “Our users needed to prepare all of the infrastructure by themselves,” says Feng Changjian, Architect for NetEase Cloud and Container Service. “We were eager to provide the infrastructure and tools for our users automatically via serverless container service.”

Solution

After considering building its own orchestration solution, NetEase decided to base its private cloud platform on Kubernetes. The fact that the technology came out of Google gave the team confidence that it could keep up with NetEase’s scale. “After our 2-to-3-month evaluation, we believed it could satisfy our needs,” says Feng. The team started working with Kubernetes in 2015, before it was even 1.0. Today, the NetEase internal cloud platform — which also leverages the CNCF projects Prometheus, Envoy, Harbor, gRPC, and Helm — runs 10,000 nodes in a production cluster and can support up to 30,000 nodes in a cluster. Based on its learnings from its internal platform, the company introduced a Kubernetes-based cloud and microservices-oriented PaaS product, NetEase Qingzhou Microservice, to outside customers.

Impact

The NetEase team reports that Kubernetes has increased R&D efficiency by more than 100%. Deployment efficiency has improved by 280%. “In the past, if we wanted to do upgrades, we needed to work with other teams, even in other departments,” says Feng. “We needed special staff to prepare everything, so it took about half an hour. Now we can do it in only 5 minutes.” The new platform also allows for mixed deployments using GPU and CPU resources. “Before, if we put all the resources toward the GPU, we won’t have spare resources for the CPU. But now we have improvements thanks to the mixed deployments,” he says. Those improvements have also brought an increase in resource utilization.

“The system can support 30,000 nodes in a single cluster. In production, we have gotten the data of 10,000 nodes in a single cluster. The whole internal system is using this system for development, test, and production.”

— ZENG YUXING, ARCHITECT, NETEASE

PingCAP

Challenge

PingCAP is the company leading the development of the popular open source NewSQL database TiDB, which is MySQL-compatible, can handle hybrid transactional and analytical processing (HTAP) workloads, and has a cloud native architectural design. “Having a hybrid multi-cloud product is an important part of our global go-to-market strategy,” says Kevin Xu, General Manager of Global Strategy and Operations. In order to achieve that, the team had to address two challenges: “how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world,” Xu says, and “how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that’s one cloud provider or a combination of different cloud environments.” Knowing that using a distributed system isn’t easy, they began looking for the right orchestration layer to help reduce some of that complexity for end users.

Solution

The team started looking at Kubernetes for orchestration early on. “We knew Kubernetes had the promise of helping us solve our problems,” says Xu. “We were just waiting for it to mature.” In early 2018, PingCAP began integrating Kubernetes into its internal development as well as into its TiDB product. At that point, the team had already had experience with other cloud native technologies, having integrated both Prometheus and gRPC as parts of the TiDB platform earlier on.

Impact

Xu says that PingCAP customers have had a “very positive” response so far to Kubernetes being the tool to deploy and manage TiDB. Prometheus, with Grafana as the dashboard, is installed by default when customers deploy TiDB, so that they can monitor performance and make any adjustments needed to reach their target before and while deploying TiDB in production. That monitoring layer “makes the evaluation process and communication much smoother,” says Xu.

With the company’s Kubernetes-based Operator implementation, which is open sourced, customers are now able to deploy, run, manage, upgrade, and maintain their TiDB clusters in the cloud with no downtime and with reduced workload and overhead. And internally, says Xu, “we’ve completely switched to Kubernetes for our own development and testing, including our data center infrastructure and Schrodinger, an automated testing platform for TiDB. With Kubernetes, our resource usage is greatly improved. Our developers can allocate and deploy clusters themselves, and the deploying process has gone from hours to minutes, so we can devote fewer people to manage IDC resources. The productivity improvement is about 15%, and as we gain more Kubernetes knowledge on the debugging and diagnosis front, the productivity should improve to more than 20%.”

“We knew Kubernetes had the promise of helping us solve our problems. We were just waiting for it to mature, so we can fold it into our own development and product roadmap.”

— KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP

If you would like to read more Kubernetes case studies, refer to the link below:

Thanks for reading…
