Linux Today – 7 pieces of contrarian Kubernetes advice

You can find many great resources for getting smarter about Kubernetes out there. (Ahem, we’ve written a few ourselves.)

That’s good news for IT teams and professionals looking to boost their knowledge and consider how Kubernetes might solve problems in their organization. The excited chatter about Kubernetes has gotten so loud, however, that it can become difficult to make sense of it all. Moreover, it can be challenging to sort the actual business and technical benefits from the sales pitches.

[ Need to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]

So we asked several IT leaders and Kubernetes users to share some advice that goes against the grain.

If you always take conventional wisdom – or straight-up hype – at face value, you’re bound to be disappointed at some point. So consider these bits of contrarian thinking as another important dimension of your Kubernetes learning.

1. Don’t treat Kubernetes as a silver bullet

Interest in Kubernetes is astronomical for good reason: It’s a powerful tool when properly used. But if you treat Kubernetes as a cure-all for anything and everything that ails your applications and infrastructure, expect new challenges ahead.

“Kubernetes is not a silver bullet for all solutions,” says Raghu Kishore Vempati, principal systems engineer at Aricent. “Understand and use it carefully.”

Indeed, the spotlight on Kubernetes has grown so bright as to suggest that it’s some kind of IT sorcery: Just put everything in containers, deploy ’em to production, and let Kubernetes handle the rest while you plan your next vacation.

Even if you’re more realistic about it, it may be tempting to assume Kubernetes will automatically solve existing issues with, say, your application design. It won’t. (Even Kubernetes’ original co-creators agree with this.) Focus on what it’s good at rather than trying to use it as a blanket solution.

“Containers and Kubernetes provide an opportunity to create solutions that previously would have required a lot of effort and code plumbing with higher costs,” Vempati says. “While Kubernetes can provide orchestration, it doesn’t solve any of the inherent design problems or limitations of the applications hosted on it. In short, application overheads cannot be addressed using Kubernetes.”

[ Related read: Getting started with Kubernetes: 5 misunderstandings, explained. ]

2. You don’t have to immediately refactor everything for microservices

Microservices and containers pair well together, so it’s reasonable to assume that Kubernetes and containerized microservices are a good match, too.

“Kubernetes is ideal for new and refactored applications that don’t carry the baggage – and requirements – of traditional and monolithic applications,” says Ranga Rajagopalan, CTO and cofounder at Avi Networks.

Just don’t mistake the ideal scenario as the only scenario.

“The conventional wisdom is to refactor or rewrite your monoliths before deploying them within a Kubernetes environment,” Rajagopalan says. “However, this can be a massive undertaking that risks putting your team in analysis paralysis.”

Rajagopalan notes that you can indeed run a monolith in a container and then begin to incrementally break off pieces of the application as microservices, rather than trying to do everything at once.

“This can jumpstart your modernization efforts and deliver value well before the application has been completely refactored,” Rajagopalan says. “You don’t have to be a purist about microservices.”

Vempati concurs, noting that there might be some legacy applications that you never refactor because the costs outweigh the benefits.

3. Account for feature differences on public clouds

One of the overarching appeals of containers is greater portability among environments, especially as multi-cloud and hybrid cloud environments proliferate. Indeed, Vempati notes that it’s a common scenario for a team to deploy Kubernetes clusters on public cloud platforms. But you can’t always assume “vendor-neutral” as a default.

“The native Kubernetes capabilities on public clouds differ. It is important to understand the features and workflows carefully,” Vempati advises. “All the key public cloud service providers provide native Kubernetes cluster services. While these are [broadly] similar, there will be certain features that are different. When designing the solution with an aim to keep it vendor-neutral, while still choosing one of the vendors, such differences must be taken into account.”

Vempati shares as an example that one public cloud provider will assign public (or external) IPs to nodes when a cluster is created with its Kubernetes service, while another does not. “So if there is any dynamic behavior of the apps to infer/use external IP, it may work with [one platform] but not on [another],” Vempati says.
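One way to hedge against that kind of provider difference is to avoid assuming an ExternalIP exists at all. Here is a minimal Python sketch; the node dicts below mirror the shape of `kubectl get node -o json` output, and the sample addresses are hypothetical:

```python
def node_address(node):
    """Pick a usable address from a node object's status.addresses,
    preferring ExternalIP but falling back to InternalIP, since some
    public cloud Kubernetes services assign no external IP to nodes."""
    addresses = node.get("status", {}).get("addresses", [])
    by_type = {a["type"]: a["address"] for a in addresses}
    return by_type.get("ExternalIP") or by_type.get("InternalIP")

# Hypothetical node objects from two different providers
node_with_external = {"status": {"addresses": [
    {"type": "InternalIP", "address": "10.0.0.4"},
    {"type": "ExternalIP", "address": "203.0.113.7"},
]}}
node_internal_only = {"status": {"addresses": [
    {"type": "InternalIP", "address": "10.0.0.5"},
]}}

print(node_address(node_with_external))  # 203.0.113.7
print(node_address(node_internal_only))  # 10.0.0.5
```

An app written this way degrades gracefully on a platform that never populates ExternalIP, rather than failing in a provider-specific manner.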

4. It will take time to get automation right

A basic selling point of Kubernetes and orchestration in general is that it makes the operational burden of running production containers at scale manageable, largely through automation. So this is one of those times when it’s best to be reminded that “automation” is not a synonym for “easy,” especially as you’re setting it up. Expect some real effort to get it right, and you’ll get a return on that investment over time.

“Kubernetes is a wonderful platform for building highly scalable and elastic solutions,” Vempati says. “One of the key [selling points] of this platform is that it very effectively supports continuous delivery of microservices hosted in containers for cloud scale.”

This sounds great to any IT team working in multi-cloud or hybrid cloud environments, especially as their cloud-native development increases. Just be ready to do some legwork to reap the benefits.

“To support automated continuous delivery for any Kubernetes-based solution is not [as] simple as it may [first] appear,” Vempati says. “It involves a lot of preparation, simulation of multiple scenarios based on the solution, and several iterations to achieve the [desired results.]”

Red Hat VP and CTO Chris Wright recently wrote about four emerging tools that play into this need for simplification. Read also What’s next for Kubernetes and hybrid cloud.

5. Be judicious in your use of persistent volumes

The original conventional wisdom that containers should be stateless has changed, especially as it has become easier to manage stateful applications such as databases.

[ Read also How to explain Kubernetes Operators in plain English. ]

Persistent volumes are the Kubernetes abstraction that enables storage for stateful applications running in containers. In short, that’s a good thing. But Vempati urges careful attention to avoid longer-term issues, especially if you’re in the earlier phases of adoption.

“The use of persistent volumes in Kubernetes for data-centric apps is a common scenario,” Vempati says. “Understand the storage primitives available so that using persistent volumes doesn’t spike costs.”

Factors such as the type of storage you’re using can lead to cost surprises, especially when PVs are created dynamically in Kubernetes. Vempati offers an example scenario:

“Persistent volumes and their claims can be dynamically created as part of a deployment for a pod. Verify the storage class to make sure that the right kind of storage is used,” Vempati advises. “SSDs on public cloud will have a higher cost associated with them, compared to a standard storage.”
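One way to guard against that surprise is to name the storage class explicitly in the claim rather than relying on the cluster default. A sketch in Python, building the PersistentVolumeClaim manifest as a plain dict; note that the class name `standard` is an assumption for illustration, since available classes vary by provider:

```python
import json

def pvc_manifest(name, size_gi, storage_class):
    """Build a PersistentVolumeClaim that names its storage class
    explicitly, so a dynamically provisioned volume can't silently
    land on a pricier default (e.g. SSD-backed) class."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# "standard" is a hypothetical class name; list what your cluster
# actually offers with `kubectl get storageclass`
claim = pvc_manifest("app-data", 10, "standard")
print(json.dumps(claim, indent=2))
```

Pinning the class also makes the cost decision visible in code review, instead of buried in a cluster-wide default.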

6. Don’t evangelize Kubernetes by shouting “Kubernetes!”

If you’re trying to make the case for Kubernetes in your organization, it may be tempting to simply ride the surging wave of excitement around it.

“The common wisdom out there is that simply mentioning Kubernetes is enough to gain someone’s interest in how you are using it to solve a particular problem,” says Glenn Sullivan, co-founder at SnapRoute, which uses Kubernetes as part of its cloud-native networking software’s underlying infrastructure. “My advice would be to spend less time pointing to Kubernetes as the answer and piggybacking on the buzz that surrounds the platform and focus more on the results of using Kubernetes. When you present Kubernetes as the solution for solving a problem, you will immediately elate some [people] and alienate others.”

One reason they might resist it or tune out is that they simply don’t understand what Kubernetes is. You can explain it to them in plain terms, but Sullivan says the lightbulb moment – and subsequent buy-in – is more likely to occur when you show them the results.

“We find it more advantageous to promote the [benefits] gained from using Kubernetes instead of presenting the integration into Kubernetes itself as the value-add,” Sullivan says.

7. Kubernetes is not an island

Kubernetes is an important piece of the cloud-native puzzle, but it’s only one piece. As Red Hat technology evangelist Gordon Haff notes, “The power of the open source cloud-native ecosystem comes only in part from individual projects such as Kubernetes. It derives, perhaps even more, from the breadth of complementary projects that come together to create a true cloud-native platform.”

This includes service meshes like Istio, monitoring tools like Prometheus, command-line tools like Podman, distributed tracing from the likes of Jaeger and Kiali, enterprise registries like Quay, and inspection utilities like Skopeo, says Haff. And, of course, Linux, which is the foundation for the containers orchestrated by Kubernetes.

[ Want to learn more about automation and building cloud-native apps? Get the guide: Principles of container-based application design. ]
