Native Kubernetes Monitoring: Monitoring and Metrics for Users

Introduction

Kubernetes is an open-source orchestration platform for working with containers. At its core, it gives us the means to deploy applications, scale them easily, and monitor them. In this article, we will talk about the built-in monitoring capabilities of Kubernetes and include some demos for better understanding.

Brief Overview of Kubernetes Architecture

At the infrastructure level, a Kubernetes cluster is a set of physical or virtual machines, each acting in a specific role. The machines acting in the role of master function as the brain of the operations and are charged with orchestrating the management of all containers that run on all of the nodes.

  • Master components manage the life cycle of a pod:
    • apiserver: main component exposing APIs for all the other master components
    • scheduler: uses information in the pod spec to decide on which node to run a pod
    • controller-manager: responsible for node management (detecting if a node fails), pod replication, and endpoint creation
    • etcd: key/value store used for storing all internal cluster data
  • Node components are worker machines in Kubernetes, managed by the master. Each node contains the necessary components to run pods:
    • kubelet: handles all communication between the master and the node on which it is running. It interfaces with the container runtime to deploy and monitor containers
    • kube-proxy: is in charge of maintaining network rules for the node. It also handles communication between pods, nodes, and the outside world.
    • container runtime: runs containers on the node.

From a logical perspective, a Kubernetes deployment is comprised of various components, each serving a specific purpose within the cluster:

  • Pods: are the basic unit of deployment within Kubernetes. A pod consists of one or more containers that share the same network namespace and IP address.
  • Services: act like a load balancer. They provide an IP address in front of a pool (set of pods) and also a policy that controls access to them.
  • ReplicaSets: are controlled by deployments and ensure that the desired number of pods for that deployment are running.
  • Namespaces: define a logical segregation for different kinds of resources, like pods and/or services.
  • Metadata: marks containers based on their deployment characteristics.
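
To make these pieces more concrete, here is a minimal, hypothetical manifest (names and image are illustrative) that creates a Deployment, which in turn manages a ReplicaSet and its pods, plus a Service placed in front of them:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web          # illustrative name
  namespace: default       # the namespace the pods will live in
spec:
  replicas: 2              # the ReplicaSet created by this deployment keeps 2 pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web     # metadata used by the service selector below
    spec:
      containers:
      - name: web
        image: nginx:1.15  # illustrative image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web         # the pool of pods the service load balances across
  ports:
  - port: 80
    targetPort: 80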

Monitoring Kubernetes

Monitoring an application is absolutely required if we want to anticipate problems and have visibility of potential bottlenecks in a dev or production deployment.

To help monitor the cluster and the many moving parts that form a deployment, Kubernetes ships with some built-in monitoring capabilities:

  • Kubernetes dashboard: gives an overview of the resources running on your cluster. It also gives a very basic means of deploying and interacting with those resources.
  • cAdvisor: is an open source agent that monitors resource usage and analyzes the performance of containers.
  • Liveness and Readiness Probes: actively monitor the health of a container.
  • Horizontal Pod Autoscaler: increases the number of pods if needed based on information gathered by analyzing different metrics.

In this article, we will be covering the first two built-in tools. A follow-up article focusing on the remaining tools can be found here.

There are many Kubernetes metrics to monitor. As we’ve described the architecture in two separate ways (infrastructure and logical), we can do the same with monitoring and separate this into two main components: monitoring the cluster itself and monitoring the workloads running on it.

Cluster Monitoring

All clusters should monitor the underlying server components since problems at the server level will show up in the workloads. Some metrics to look for while monitoring node resources are CPU, disk, and network bandwidth. Having an overview of these metrics will let you know if it’s time to scale the cluster up or down (this is especially useful when using cloud providers where running cost is important).
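
Assuming a metrics pipeline is available in the cluster (Heapster on older clusters, metrics-server on newer ones), a quick way to eyeball node-level resource usage is the sketch below; <node-name> is a placeholder:

kubectl top nodes                   # CPU and memory usage per node
kubectl describe node <node-name>   # capacity, allocated resources, conditions, and events for one node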

Workload Monitoring

Metrics related to deployments and their pods should be taken into consideration here. Checking the number of pods a deployment has at a moment compared to its desired state can be relevant. Also, we can look for health checks, container metrics, and finally application metrics.
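
A few kubectl commands give a quick view of this desired-versus-actual state; the deployment name below is a placeholder:

kubectl get deployments                         # desired vs current/available replica counts
kubectl describe deployment <deployment-name>   # conditions and recent events for one deployment
kubectl get pods                                # pod phase and restart counts as quick health indicators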

Prerequisites for the Demo

In the following sections, we will take each of the listed built-in monitoring features one-by-one to see how they can help us. The prerequisites needed for this exercise include:

  • a Google Cloud Platform account: the free tier is more than enough. Most other clouds should also work the same way.
  • a host where Rancher will be running: This can be a personal PC/Mac or a VM in a public cloud.
  • Google Cloud SDK: should be installed along with kubectl on the host running Rancher. Make sure that gcloud has access to your Google Cloud account by authenticating with your credentials (gcloud init and gcloud auth login).
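
For reference, the authentication steps mentioned above typically look like this (project, account, and zone values are chosen interactively):

gcloud init                          # pick the account, project, and default zone
gcloud auth login                    # authenticate the SDK with your Google credentials
gcloud components install kubectl    # optional: one way to install kubectl if it is not already present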

Starting a Rancher Instance

To begin, start your Rancher instance. There is a very intuitive getting started guide for Rancher that you can follow for this purpose.

Using Rancher to Deploy a GKE cluster

Use Rancher to set up and configure a Kubernetes cluster by following the how-to guide.

As mentioned previously, in this guide we will be covering the first two built-in tools: the Kubernetes dashboard and cAdvisor. A follow-up article that discusses probes and horizontal pod autoscalers can be found here.

Kubernetes Dashboard

The Kubernetes dashboard is a web-based Kubernetes user interface that we can use to troubleshoot applications and manage cluster resources.

Rancher, as seen above, helps us install the dashboard by just checking a radio button. Let’s take a look now at how the dashboard can help us by listing some of its uses:

  • Provides an overview of cluster resources (overall and per individual node), shows us all of the namespaces, lists all of the storage classes defined
  • Shows all applications running on the cluster
  • Provides information about the state of Kubernetes resources in your cluster and on any errors that may have occurred

To access the dashboard, we need to proxy the request between our machine and the Kubernetes API server. Start a proxy server with kubectl by typing the following:

kubectl proxy &

The proxy server will start in the background, providing output that looks similar to this:

[1] 3190
$ Starting to serve on 127.0.0.1:8001

Now, to view the dashboard, navigate to the following address in the browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

We will then be prompted with the login page to enter the credentials:

Fig. 4: Dashboard login

Let’s take a look at how to create a user with admin permissions using the Service Account mechanism. We will use two YAML files.

One will create the Service Account:

cat ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

The other will create the ClusterRoleBinding for our user:

cat ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the two YAML files to create the objects they define:

kubectl apply -f ServiceAccount.yaml 
kubectl apply -f ClusterRoleBinding.yaml 
serviceaccount "admin-user" created
clusterrolebinding.rbac.authorization.k8s.io "admin-user" created

Once our user is created and the correct permissions have been set, we will need to find the token in order to log in:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-lnnsn
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=e34a9438-4e12-11e9-a57b-42010aa4009e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1119 bytes
namespace:  11 bytes
token:      COPY_THIS_STRING

Select “Token” at the Kubernetes dashboard credentials prompt and enter the value you retrieved above in the token field to authenticate.

The Kubernetes Dashboard consists of a few main views:

  • Admin view: lists nodes, namespaces, and persistent volumes along with other details. We can get an aggregated view for our nodes (CPU and memory usage metrics) and an individual details view for each node showing its metrics, specification, status, allocated resources, events, and pods.
  • Workload view: shows all applications running in a selected namespace. It summarizes important information about workloads, like the number of pods ready in a StatefulSet or Deployment or the current memory usage for a pod.
  • Discover and Load Balancing view: shows Kubernetes resources that expose services to the external world and enable discovery within the cluster.
  • Config and storage view: shows persistent volume claim resources used by applications. The Config view shows all of the Kubernetes resources used for the live configuration of applications running in the cluster.

Without any workloads running, the dashboard’s views will be mainly empty since there will be nothing deployed on top of Kubernetes. If you want to explore all of the views the dashboard has to offer, the best option is to deploy apps that use different workload types (stateful set, deployments, replica sets, etc.). You can check out this article on deploying a Redis cluster for an example that deploys a Redis cluster (a stateful set with volume claims and configMaps) and a testing app (a Kubernetes deployment) so the dashboard tabs will have some relevant info.

After provisioning some workloads, we can take down one node and then check the different tabs to see some updates:

kubectl delete pod redis-cluster-2
kubectl get pods
pod "redis-cluster-2" deleted
NAME                              READY     STATUS        RESTARTS   AGE
hit-counter-app-9c5d54b99-xv5hj   1/1       Running       0          1h
redis-cluster-0                   1/1       Running       0          1h
redis-cluster-1                   1/1       Running       0          1h
redis-cluster-2                   0/1       Terminating   0          1h
redis-cluster-3                   1/1       Running       0          44s
redis-cluster-4                   1/1       Running       0          1h
redis-cluster-5                   1/1       Running       0          1h

cAdvisor

cAdvisor is an open-source agent integrated into the kubelet binary that monitors resource usage and analyzes the performance of containers. It collects statistics about the CPU, memory, file, and network usage for all containers running on a given node (it does not operate at the pod level). In addition to core metrics, it also monitors events. Metrics can be accessed directly, using commands like kubectl top, or can be used by the scheduler to perform orchestration (for example, with autoscaling).
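
For example, assuming the cluster's metrics pipeline (Heapster on older clusters, metrics-server on newer ones) is running, the cAdvisor-sourced numbers surface through commands like:

kubectl top pods                # per-pod CPU and memory usage
kubectl top pods --containers   # per-container breakdown within each pod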

Note that cAdvisor doesn’t store metrics for long-term use, so if you want that functionality, you’ll need to look for a dedicated monitoring tool.

cAdvisor’s UI has been marked deprecated as of Kubernetes version 1.10 and the interface is scheduled to be completely removed in version 1.12. Rancher gives you the option to choose what version of Kubernetes to use for your clusters. When setting up the infrastructure for this demo, we configured the cluster to use version 1.10, so we should still have access to the cAdvisor UI.

To access the cAdvisor UI, we need to proxy between our machine and the Kubernetes API server. Start a local instance of the proxy server by typing:

kubectl proxy &
[1] 3190
$ Starting to serve on 127.0.0.1:8001

Next, find the name of your nodes:

kubectl get nodes

You can view the UI in your browser by navigating to the following address, replacing the node name with the identifier you found on the command line:

http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5:4194/proxy/containers/

To confirm that kubelet is listening on port 4194, you can log into the node to get more information:

gcloud compute ssh admin@gke-c-plnf4-default-pool-5eb56043-23p5 --zone europe-west4-c
Welcome to Kubernetes v1.10.12-gke.7!

You can find documentation for Kubernetes at:
  http://docs.kubernetes.io/

The source for this release can be found at:
  /home/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
  https://storage.googleapis.com/kubernetes-release-gke/release/v1.10.12-gke.7/kubernetes-src.tar.gz

It is based on the Kubernetes source at:
  https://github.com/kubernetes/kubernetes/tree/v1.10.12-gke.7

For Kubernetes copyright and licensing information, see:
  /home/kubernetes/LICENSES

We can confirm that in our version of Kubernetes, the kubelet process is serving the cAdvisor web UI over that port:

sudo su -
netstat -anp | grep LISTEN | grep 4194
tcp6       0      0 :::4194                 :::*                    LISTEN      1060/kubelet 

If you run Kubernetes version 1.12 or later, the UI has been removed, so the kubelet no longer listens on port 4194. You can confirm this with the commands above. However, the metrics are still there, since cAdvisor is part of the kubelet binary.

The kubelet binary exposes all of its runtime metrics and all of the cAdvisor metrics at the /metrics endpoint using the Prometheus exposition format:

http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5/proxy/metrics/cadvisor

Among the output, metrics you can look for include:

  • CPU:
    • container_cpu_user_seconds_total: Cumulative “user” CPU time consumed in seconds
    • container_cpu_system_seconds_total: Cumulative “system” CPU time consumed in seconds
    • container_cpu_usage_seconds_total: Cumulative CPU time consumed in seconds (sum of the above)
  • Memory:
    • container_memory_cache: Number of bytes of page cache memory
    • container_memory_swap: Container swap usage in bytes
    • container_memory_usage_bytes: Current memory usage in bytes, including all memory regardless of when it was accessed
    • container_memory_max_usage_bytes: Maximum memory usage in bytes
  • Disk:
    • container_fs_io_time_seconds_total: Count of seconds spent doing I/Os
    • container_fs_io_time_weighted_seconds_total: Cumulative weighted I/O time in seconds
    • container_fs_writes_bytes_total: Cumulative count of bytes written
    • container_fs_reads_bytes_total: Cumulative count of bytes read
  • Network:
    • container_network_receive_bytes_total: Cumulative count of bytes received
    • container_network_receive_errors_total: Cumulative count of errors encountered while receiving
    • container_network_transmit_bytes_total: Cumulative count of bytes transmitted
    • container_network_transmit_errors_total: Cumulative count of errors encountered while transmitting
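
With the kubectl proxy from earlier still running, a few of these series can be pulled straight from the endpoint above and filtered on the command line (the node name is the one from this demo):

curl -s http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5/proxy/metrics/cadvisor \
  | grep -E 'container_cpu_usage_seconds_total|container_memory_usage_bytes' \
  | head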

Some additional useful metrics can be found here:

  • /healthz: Endpoint for determining whether cAdvisor is healthy
  • /healthz/ping: To check connectivity to etcd
  • /spec: Endpoint returns the cAdvisor MachineInfo()

For example, to see the cAdvisor MachineInfo(), we could visit:

http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5:10255/proxy/spec/

The pods endpoint provides the same output as kubectl get pods -o json for the pods running on the node:

http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5:10255/proxy/pods/

Similarly, logs can also be retrieved by visiting:

http://localhost:8001/logs/kube-apiserver.log

Conclusion

Monitoring is vital in order to understand what is happening with our applications. Kubernetes helps us with a number of built-in tools and provides some great insights for both the infrastructure layer (nodes) and the logical one (pods).

This article concentrated on the tools that focus on providing monitoring and metrics for users. Continue on to the second part of this series to learn about the included monitoring tools focused on workload scaling and life cycle management.

Source

HTTP Alternative RSocket Gets a Home at The Linux Foundation

The list of foundations hosted by the Linux Foundation continues growing this week with the launch of the Reactive Foundation, “a community of leaders established to accelerate technologies for building the next generation of networked applications,” according to a statement. Upon founding, those leaders will include Alibaba, Facebook, Netifi and Pivotal, with the open source RSocket specification and programming language implementations also joining the new foundation’s “formal open governance model and neutral ecosystem.”

RSocket provides a basis for reactive programming at the protocol level and lays the groundwork for reactive programming in cloud native environments, Ryland Degnan, co-founder and CTO at Netifi, said in an interview with The New Stack. It’s considered a replacement for HTTP in cloud native communications.

Lest you confuse reactive programming with the React UI Javascript framework of a similar name, reactive programming is defined by its message-driven approach that aims to achieve resiliency and scalability independent of infrastructure, network issues or end-user device. More specifically, the reactive manifesto lists four characteristics of reactive systems: responsive, resilient, elastic and message-driven.

“With the whole cloud native movement, a lot more people are building software for the cloud that operates on a scale that’s an order of magnitude higher than anything we’ve seen before. It’s got a lot of moving parts and rationalizing how all those parts communicate is an incredibly important problem,” Degnan said. “The present problem that RSocket is seeking to solve is defining a standardized way for services to communicate that was built from the ground up and not using last year’s technology.”

“The de facto standard right now when people go out and build stuff is to use HTTP. That comes with loads of problems because it was really designed to be a request-response protocol,” Degnan said. “It has very little that was built for microservices. It was really built for this previous generation of requesting a resource from a monolithic server and returning a response. RSocket has been designed from the ground up for cloud native communication.”

HTTP vs. RSocket

Originally created by Netflix, RSocket is an application protocol for Reactive Streams that provides application flow control over the network to prevent outages and increase resiliency. As opposed to HTTP, RSocket does not await a response or request from the client. Degnan explains this core difference between HTTP and RSocket.

“A lot of effort spent in building these distributed systems ends up being workarounds for problems with HTTP. Circuit breakers are a great example of that. It’s working around the problem that HTTP doesn’t have flow control built into it, which means that you have to guess whether the downstream service is available or not, to not cause it to receive too many requests,” said Degnan. “You need to cut off traffic to it if you think that it’s now unavailable, but that involves a lot of configurations. RSocket could remove that problem entirely.”

By contrast, RSocket employs the idea of asynchronous stream processing with non-blocking back-pressure, in which a failing component will, rather than simply dropping traffic, communicate its stress to upstream components, getting them to reduce the load and allowing the system to “gracefully respond to load rather than collapse under it,” according to the Reactive Manifesto glossary. Degnan further explained how this applied to the modern world of microservices.

“Reactive streams are designed to have senders and receivers of information that are decoupled from each other. Rather than having the receiver control the flow of information, it’s allowed the sender to asynchronously send data that’s really important,” said Degnan. “In microservices for example, where you have a lot of independent components that need to communicate, what often happens is some services are able to operate at a higher speed than others, or traffic spikes overwhelm parts of the system. Reactive Streams allows you to have receivers say, ‘all right, now I’m ready to receive five more requests’ and then have that message be passed around the system and the flow of information to be regulated.”

While the donation of RSocket to the Reactive Foundation lays the groundwork, a spokesperson for the foundation explained that it will also “work to expand the existing set of language implementations/integrations, create a test platform to ensure interoperability between different protocol implementations, host a repository of documentation about reactive systems/programming in general as well as specific projects like RSocket.”

The Linux Foundation is a sponsor of The New Stack.

Source

Kubernetes 1.14: Local Persistent Volumes GA

The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. The GA milestone indicates that Kubernetes users may depend on the feature and its API for production use. GA features are protected by the Kubernetes deprecation policy.

What is a Local Persistent Volume?

A local persistent volume represents a local disk directly-attached to a single Kubernetes Node.

Kubernetes provides a powerful volume plugin system that enables Kubernetes workloads to use a wide variety of block and file storage to persist data. Most of these plugins enable remote storage – these remote storage systems persist data independent of the Kubernetes node where the data originated. Remote storage usually can not offer the consistent high performance guarantees of local directly-attached storage. With the Local Persistent Volume plugin, Kubernetes workloads can now consume high performance local storage using the same volume APIs that app developers have become accustomed to.

How is it different from a HostPath Volume?

To better understand the benefits of a Local Persistent Volume, it is useful to compare it to a HostPath volume. HostPath volumes mount a file or directory from the host node’s filesystem into a Pod. Similarly a Local Persistent Volume mounts a local disk or partition into a Pod.

The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.

While HostPath volumes may be referenced via a Persistent Volume Claim (PVC) or directly inline in a pod definition, Local Persistent Volumes can only be referenced via a PVC. This provides additional security benefits since Persistent Volume objects are managed by the administrator, preventing Pods from being able to access any path on the host.

Additional benefits include support for formatting of block devices during mount, and volume ownership using fsGroup.
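
For illustration, a manually defined local PersistentVolume looks roughly like the sketch below; the path, capacity, and node name are placeholders, and in practice the external static provisioner described later creates these objects for you. The nodeAffinity section is what lets the scheduler know which node the volume lives on.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 368Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1              # the local disk or partition on the node
  nodeAffinity:                        # ties the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node-1                  # placeholder node name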

What’s New With GA?

Since 1.10, we have mainly focused on improving stability and scalability of the feature so that it is production ready.

The only major feature addition is the ability to specify a raw block device and have Kubernetes automatically format and mount the filesystem. This removes the previous burden of having to format and mount devices before handing them over to Kubernetes.

Limitations of GA

At GA, Local Persistent Volumes do not support dynamic volume provisioning. However there is an external controller available to help manage the local PersistentVolume lifecycle for individual disks on your nodes. This includes creating the PersistentVolume objects, cleaning up and reusing disks once they have been released by the application.

How to Use a Local Persistent Volume?

Workloads can request a local persistent volume using the same PersistentVolumeClaim interface as remote storage backends. This makes it easy to swap out the storage backend across clusters, clouds, and on-prem environments.

First, a StorageClass should be created that sets volumeBindingMode: WaitForFirstConsumer to enable volume topology-aware scheduling. This mode instructs Kubernetes to wait to bind a PVC until a Pod using it is scheduled.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Then, the external static provisioner can be configured and run to create PVs for all the local disks on your nodes.

$ kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM  STORAGECLASS   REASON      AGE
local-pv-27c0f084   368Gi      RWO            Delete           Available          local-storage              8s
local-pv-3796b049   368Gi      RWO            Delete           Available          local-storage              7s
local-pv-3ddecaea   368Gi      RWO            Delete           Available          local-storage              7s

Afterwards, workloads can start using the PVs by creating a PVC and Pod or a StatefulSet with volumeClaimTemplates.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /usr/test-pod
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 368Gi

Once the StatefulSet is up and running, the PVCs are all bound:

$ kubectl get pvc
NAME                     STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS      AGE
local-vol-local-test-0   Bound    local-pv-27c0f084   368Gi      RWO            local-storage     3m45s
local-vol-local-test-1   Bound    local-pv-3ddecaea   368Gi      RWO            local-storage     3m40s
local-vol-local-test-2   Bound    local-pv-3796b049   368Gi      RWO            local-storage     3m36s

When the disk is no longer needed, the PVC can be deleted. The external static provisioner will clean up the disk and make the PV available for use again.

$ kubectl patch sts local-test -p '{"spec":{"replicas":2}}'
statefulset.apps/local-test patched

$ kubectl delete pvc local-vol-local-test-2
persistentvolumeclaim "local-vol-local-test-2" deleted

$ kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON      AGE
local-pv-27c0f084   368Gi      RWO            Delete           Bound       default/local-vol-local-test-0   local-storage              11m
local-pv-3796b049   368Gi      RWO            Delete           Available                                    local-storage              7s
local-pv-3ddecaea   368Gi      RWO            Delete           Bound       default/local-vol-local-test-1   local-storage              19m

You can find full documentation for the feature on the Kubernetes website.

What Are Suitable Use Cases?

The primary benefit of Local Persistent Volumes over remote persistent storage is performance: local disks usually offer higher IOPS and throughput and lower latency compared to remote storage systems.

However, there are important limitations and caveats to consider when using Local Persistent Volumes:

  • Using local storage ties your application to a specific node, making your application harder to schedule. Applications which use local storage should specify a high priority so that lower-priority pods that don’t require local storage can be preempted if necessary.
  • If that node or local volume encounters a failure and becomes inaccessible, then that pod also becomes inaccessible. Manual intervention, external controllers, or operators may be needed to recover from these situations.
  • While most remote storage systems implement synchronous replication, most local disk offerings do not provide data durability guarantees. This means that the loss of a disk or node may result in the loss of all the data on that disk.

For these reasons, local persistent storage should only be considered for workloads that handle data replication and backup at the application layer, thus making the applications resilient to node or data failures and unavailability despite the lack of such guarantees at the individual disk level.

Examples of good workloads include software defined storage systems and replicated databases. Other types of applications should continue to use highly available, remotely accessible, durable storage.

How Uber Uses Local Storage

M3, Uber’s in-house metrics platform, piloted Local Persistent Volumes at scale in an effort to evaluate M3DB — an open-source, distributed timeseries database created by Uber. One of M3DB’s notable features is its ability to shard its metrics into partitions, replicate them by a factor of three, and then evenly disperse the replicas across separate failure domains.

Prior to the pilot with local persistent volumes, M3DB ran exclusively in Uber-managed environments. Over time, internal use cases arose that required the ability to run M3DB in environments with fewer dependencies. So the team began to explore options. As an open-source project, we wanted to provide the community with a way to run M3DB as easily as possible, with an open-source stack, while meeting M3DB’s requirements for high throughput, low-latency storage, and the ability to scale itself out.

The Kubernetes Local Persistent Volume interface, with its high-performance, low-latency guarantees, quickly emerged as the perfect abstraction to build on top of. With Local Persistent Volumes, individual M3DB instances can comfortably handle up to 600k writes per-second. This leaves plenty of headroom for spikes on clusters that typically process a few million metrics per-second.

Because M3DB also gracefully handles losing a single node or volume, the limited data durability guarantees of Local Persistent Volumes are not an issue. If a node fails, M3DB finds a suitable replacement and the new node begins streaming data from its two peers.

Thanks to the Kubernetes scheduler’s intelligent handling of volume topology, M3DB is able to programmatically evenly disperse its replicas across multiple local persistent volumes in all available cloud zones, or, in the case of on-prem clusters, across all available server racks.

Uber’s Operational Experience

As mentioned above, while Local Persistent Volumes provide many benefits, they also require careful planning and careful consideration of constraints before committing to them in production. When thinking about our local volume strategy for M3DB, there were a few things Uber had to consider.

For one, we had to take into account the hardware profiles of the nodes in our Kubernetes cluster. For example, how many local disks would each node in the cluster have? How would they be partitioned?

The local static provisioner README provides guidance to help answer these questions. It’s best to be able to dedicate a full disk to each local volume (for IO isolation) and a full partition per-volume (for capacity isolation). This was easier in our cloud environments where we could mix and match local disks. However, if using local volumes on-prem, hardware constraints may be a limiting factor depending on the number of disks available and their characteristics.

When first testing local volumes, we wanted to have a thorough understanding of the effect disruptions (voluntary and involuntary) would have on pods using local storage, and so we began testing some failure scenarios. We found that when a local volume becomes unavailable while the node remains available (such as when performing maintenance on the disk), a pod using the local volume will be stuck in a ContainerCreating state until it can mount the volume. If a node becomes unavailable, for example if it is removed from the cluster or is drained, then pods using local volumes on that node are stuck in an Unknown or Pending state depending on whether or not the node was removed gracefully.

Recovering pods from these interim states means having to delete the PVC binding the pod to its local volume and then delete the pod in order for it to be rescheduled (or wait until the node and disk are available again). We took this into account when building our operator for M3DB, which makes changes to the cluster topology when a pod is rescheduled such that the new one gracefully streams data from the remaining two peers. Eventually we plan to automate the deletion and rescheduling process entirely.
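
Using the demo StatefulSet from earlier as an illustration, that manual recovery sequence looks roughly like this (the names are illustrative, and an operator or controller would normally automate these steps):

kubectl delete pvc local-vol-local-test-1   # release the claim binding the pod to the unavailable local volume
kubectl delete pod local-test-1             # the StatefulSet controller recreates the pod, which can then bind a new PV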

Alerts on pod states can help call attention to stuck local volumes, and workload-specific controllers or operators can remediate them automatically. Because of these constraints, it’s best to exclude nodes with local volumes from automatic upgrades or repairs, and in fact some cloud providers explicitly mention this as a best practice.

Portability Between On-Prem and Cloud

Local Volumes played a big role in Uber’s decision to build orchestration for M3DB using Kubernetes, in part because it is a storage abstraction that works the same across on-prem and cloud environments. Remote storage solutions have different characteristics across cloud providers, and some users may prefer not to use networked storage at all in their own data centers. On the other hand, local disks are relatively ubiquitous and provide more predictable performance characteristics.

By orchestrating M3DB using local disks in the cloud, where it was easier to get up and running with Kubernetes, we gained confidence that we could still use our operator to run M3DB in our on-prem environment without any modifications. As we continue to work on how we’d run Kubernetes on-prem, having solved such an important pending question is a big relief.

What’s Next for Local Persistent Volumes?

As we’ve seen with Uber’s M3DB, local persistent volumes have successfully been used in production environments. As adoption of local persistent volumes continues to increase, SIG Storage continues to seek feedback for ways to improve the feature.

One of the most frequent asks has been for a controller that can help with recovery from failed nodes or disks, which is currently a manual process (or something that has to be built into an operator). SIG Storage is investigating creating a common controller that can be used by workloads with simple and similar recovery processes.

Another popular ask has been to support dynamic provisioning using lvm. This can simplify disk management, and improve disk utilization. SIG Storage is evaluating the performance tradeoffs for the viability of this feature.

Source

Kubernetes v1.14 delivers production-level support for Windows nodes and Windows containers

The first release of Kubernetes in 2019 brings a highly anticipated feature – production-level support for Windows workloads. Up until now Windows node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. While in beta, developers in the Kubernetes community and Windows Server team worked together to improve the container runtime, build a continuous testing process, and complete features needed for a good user experience. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform.

As Windows developers and devops engineers have been adopting containers over the last few years, they’ve been looking for a way to manage all their workloads with a common interface. Kubernetes has taken the lead for container orchestration, and this gives users a consistent way to manage their container workloads whether they need to run on Linux or Windows.

The journey to a stable release of Windows in Kubernetes was not a walk in the park. The community has been working on Windows support for 3 years, delivering an alpha release with v1.5, a beta with v1.9, and now a stable release with v1.14. We would not be here today without rallying broad support and getting significant contributions from companies including Microsoft, Docker, VMware, Pivotal, Cloudbase Solutions, Google and Apprenda. During this journey, there were 3 critical points in time that significantly advanced our progress.

  1. Advancements in Windows Server container networking that provided the infrastructure to create CNI (Container Network Interface) plugins
  2. Enhancements shipped in Windows Server semi-annual channel releases enabled Kubernetes development to move forward – culminating with Windows Server 2019 on the Long-Term Servicing Channel. This is the best release of Windows Server for running containers.
  3. The adoption of the KEP (Kubernetes Enhancement Proposals) process. The Windows KEP outlined a clear and agreed upon set of goals, expectations, and deliverables based on review and feedback from stakeholders across multiple SIGs. This created a clear plan that SIG-Windows could follow, paving the path towards this stable release.

With v1.14, we’re declaring that Windows node support is stable, well-tested, and ready for adoption in production scenarios. This is a huge milestone for many reasons. For Kubernetes, it strengthens its position in the industry, enabling a vast ecosystem of Windows-based applications to be deployed on the platform. For Windows operators and developers, this means they can use the same tools and processes to manage their Windows and Linux workloads, taking full advantage of the efficiencies of the cloud-native ecosystem powered by Kubernetes. Let’s dig in a little bit into these.

Operator Advantages

  • Gain operational efficiencies by leveraging existing investments in solutions, tools, and technologies to manage Windows containers the same way as Linux containers
  • Knowledge, training and expertise on container orchestration transfers to Windows container support
  • IT can deliver a scalable self-service container platform to Linux and Windows developers

Developer Advantages

  • Containers simplify packaging and deploying applications during development and test. Now you also get to take advantage of Kubernetes’ benefits in creating reliable, secure, and scalable distributed applications.
  • Windows developers can now take advantage of the growing ecosystem of cloud and container-native tools to build and deploy faster, resulting in a faster time to market for their applications
  • Taking advantage of Kubernetes as the leader in container orchestration, developers only need to learn how to use Kubernetes and that skillset will transfer across development environments and across clouds

CIO Advantages

  • Leverage the operational and cost efficiencies that are introduced with Kubernetes
  • Containerize existing .NET applications or Windows-based workloads to eliminate old hardware or underutilized virtual machines, and streamline migration from end-of-support OS versions. You retain the benefit your application brings to the business, but decrease the cost of keeping it running

“Using Kubernetes on Windows allows us to run our internal web applications as microservices. This provides quick scaling in response to load, smoother upgrades, and allows for different development groups to build without worry of other group’s version dependencies. We save money because development times are shorter and operation’s time is not spent maintaining multiple virtual machine environments,” said Jeremy, a lead devops engineer working for a top multinational legal firm, one of the early adopters of Windows on Kubernetes.

There are many features that are surfaced with this release. We want to turn your attention to a few key features and enablers of Windows support in Kubernetes. For a detailed list of supported functionality, you can read our documentation.

  • You can now add Windows Server 2019 worker nodes
  • You can now schedule Windows containers utilizing deployments, pods, services, and workload controllers
  • Out of tree CNI plugins are provided for Azure, OVN-Kubernetes, and Flannel
  • Containers can utilize a variety of in and out-of-tree storage plugins
  • Improved support for metrics/quotas closely matches the capabilities offered for Linux containers
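
As a sketch of what scheduling a Windows container looks like, the deployment below targets Windows worker nodes with a node selector; the name and image are illustrative, and depending on the cluster setup the OS label may be beta.kubernetes.io/os instead:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-sample
  template:
    metadata:
      labels:
        app: iis-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows    # schedule only onto Windows Server 2019 worker nodes
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        ports:
        - containerPort: 80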

When looking at Windows support in Kubernetes, many start drawing comparisons to Linux containers. Although some of the comparisons that highlight limitations are fair, it is important to distinguish between operational limitations and differences between the Windows and Linux operating systems. From a container management standpoint, we must strike a balance between preserving OS-specific behaviors required for application compatibility, and reaching operational consistency in Kubernetes across multiple operating systems. For example, some Linux-specific file system features, user IDs and permissions exposed through Kubernetes will not work on Windows today, and users are familiar with these fundamental differences. We will also be adding support for Windows-specific configurations to meet the needs of Windows customers that may not exist on Linux. The alpha support for Windows Group Managed Service Accounts is one example. Other areas such as memory reservations for Windows pods and the Windows kubelet are a work in progress and highlight an operational limitation. We will continue working on operational limitations based on what’s important to our community in future releases.

Today, Kubernetes master components will continue to run on Linux. That way users can add Windows nodes without having to create a separate Kubernetes cluster. As always, our future direction is set by the community, so more components, features and deployment methods will come over time. Users should understand the differences between Windows and Linux and utilize the advantages of each platform. Our goal with this release is not to make Windows interchangeable with Linux or to answer the question of Windows vs Linux. We offer consistency in management. Managing workloads without automation is tedious and expensive. Rewriting or re-architecting workloads is even more expensive. Containers provide a clear path forward whether your app runs on Linux or Windows, and Kubernetes brings an IT organization operational consistency.

As a community, our work is not complete. As already mentioned, we still have a fair number of limitations and a healthy roadmap. We will continue making progress and enhancing Windows container support in Kubernetes, with some notable upcoming features including:

  • Support for CRI-ContainerD and Hyper-V isolation, bringing hypervisor-level isolation between pods for additional security and extending our container-to-node compatibility matrix
  • Additional network plugins, including the stable release of Flannel overlay support
  • Simple heterogeneous cluster creation using kubeadm on Windows

We welcome you to get involved and join our community to share feedback and deployment stories, and contribute to code, docs, and improvements of any kind.

Thank you and feel free to reach us individually if you have any questions.

Michael Michael
SIG-Windows Chair
Director of Product Management, VMware
@michmike77 on Twitter
@m2 on Slack

Patrick Lang
SIG-Windows Chair
Senior Software Engineer, Microsoft
@PatrickLang on Slack

Source

kube-proxy Subtleties: Debugging an Intermittent Connection Reset

I recently came across a bug that causes intermittent connection resets. After some digging, I found it was caused by a subtle combination of several different network subsystems. It helped me understand Kubernetes networking better, and I think it’s worthwhile to share with a wider audience who are interested in the same topic.

The symptom

We received a user report claiming they were getting connection resets while using a Kubernetes service of type ClusterIP to serve large files to pods running in the same cluster. Initial debugging of the cluster did not yield anything interesting: network connectivity was fine and downloading the files did not hit any issues. However, when we ran the workload in parallel across many clients, we were able to reproduce the problem. Adding to the mystery was the fact that the problem could not be reproduced when the workload was run using VMs without Kubernetes. The problem, which could be easily reproduced by a simple app, clearly has something to do with Kubernetes networking, but what?

Kubernetes networking basics

Before digging into this problem, let’s talk a little bit about some basics of Kubernetes networking, as Kubernetes handles network traffic from a pod very differently depending on different destinations.

Pod-to-Pod

In Kubernetes, every pod has its own IP address. The benefit is that the applications running inside pods could use their canonical port, instead of remapping to a different random port. Pods have L3 connectivity between each other. They can ping each other, and send TCP or UDP packets to each other. CNI is the standard that solves this problem for containers running on different hosts. There are tons of different plugins that support CNI.

Pod-to-external

For the traffic that goes from pod to external addresses, Kubernetes simply uses SNAT. What it does is replace the pod’s internal source IP:port with the host’s IP:port. When the return packet comes back to the host, it rewrites the pod’s IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, who doesn’t know the address translation at all.

Pod-to-Service

Pods are mortal. Most likely, people want reliable service. Otherwise, it’s pretty much useless. So Kubernetes has this concept called “service” which is simply a L4 load balancer in front of pods. There are several different types of services. The most basic type is called ClusterIP. For this type of service, it has a unique VIP address that is only routable inside the cluster.

The component in Kubernetes that implements this feature is called kube-proxy. It sits on every node, and programs complicated iptables rules to do all kinds of filtering and NAT between pods and services. If you go to a Kubernetes node and type iptables-save, you’ll see the rules that are inserted by Kubernetes or other programs. The most important chains are KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-*.

  • KUBE-SERVICES is the entry point for service packets. What it does is to match the destination IP:port and dispatch the packet to the corresponding KUBE-SVC-* chain.
  • KUBE-SVC-* chain acts as a load balancer, and distributes the packet to the KUBE-SEP-* chains equally. Every KUBE-SVC-* has the same number of KUBE-SEP-* chains as the number of endpoints behind it.
  • KUBE-SEP-* chain represents a Service EndPoint. It simply does DNAT, replacing service IP:port with pod’s endpoint IP:Port.
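
To see these chains on a node, you can dump the NAT table that kube-proxy programs (a sketch; run on a node with the appropriate privileges):

sudo iptables-save -t nat | grep -E 'KUBE-SERVICES|KUBE-SVC-|KUBE-SEP-' | head -20
sudo iptables -t nat -L KUBE-SERVICES -n -v | head     # one chain, with packet/byte counters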

For DNAT, conntrack kicks in and tracks the connection state using a state machine. The state is needed because it needs to remember the destination address it changed to, and changed it back when the returning packet came back. Iptables could also rely on the conntrack state (ctstate) to decide the destiny of a packet. Those 4 conntrack states are especially important:

  • NEW: conntrack knows nothing about this packet, which happens when the SYN packet is received.
  • ESTABLISHED: conntrack knows the packet belongs to an established connection, which happens after handshake is complete.
  • RELATED: The packet doesn’t belong to any connection, but it is affiliated to another connection, which is especially useful for protocols like FTP.
  • INVALID: Something is wrong with the packet, and conntrack doesn’t know how to deal with it. This state plays a central role in this Kubernetes issue.
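
If the conntrack-tools package is installed on the node, the tracked connections and the per-CPU counters (including invalid) can be inspected directly; the pod IP below is a placeholder:

sudo conntrack -L | grep 10.0.1.2 | head    # tracked entries involving a given pod IP
sudo conntrack -S                           # per-CPU statistics, including the "invalid" counter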

Here is a diagram of how a TCP connection works between pod and service. The sequence of events are:

  • Client pod from left hand side sends a packet to a service: 192.168.0.2:80
  • The packet is going through iptables rules in client node and the destination is changed to pod IP, 10.0.1.2:80
  • Server pod handles the packet and sends back a packet with destination 10.0.0.2
  • The packet goes back to the client node; conntrack recognizes the packet and rewrites the source address back to 192.168.0.2:80
  • Client pod receives the response packet
Good packet flow

What caused the connection reset?

Enough of the background, so what really went wrong and caused the unexpected connection reset?

As the diagram below shows, the problem is packet 3. When conntrack cannot recognize a returning packet, it marks it as INVALID. The most common reasons include: conntrack cannot keep track of a connection because it is out of capacity, the packet itself is outside of the TCP window, etc. For packets that have been marked INVALID by conntrack, we don’t have an iptables rule to drop them, so they are forwarded to the client pod with the source IP address not rewritten (as shown in packet 4)! The client pod doesn’t recognize this packet because it has a different source IP, which is the pod IP, not the service IP. As a result, the client pod says, “Wait a second, I don’t recall this connection to this IP ever existing, why does this dude keep sending this packet to me?” Basically, what the client does is simply send a RST packet to the server pod IP, which is packet 5. Unfortunately, this is a totally legit pod-to-pod packet, which can be delivered to the server pod. The server pod doesn’t know about all the address translations that happened on the client side. From its view, packet 5 is a totally legit packet, like packets 2 and 3. All the server pod knows is, “Well, the client pod doesn’t want to talk to me, so let’s close the connection!” Boom! Of course, in order for all of this to happen, the RST packet has to be legit too, with the right TCP sequence number, etc. But when it happens, both parties agree to close the connection.

Connection reset packet flow

How to address it?

Once we understand the root cause, the fix is not hard. There are at least 2 ways to address it.

  • Make conntrack more liberal on packets, and don’t mark the packets as INVALID. In Linux, you can do this by echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal.
  • Specifically add an iptables rule to drop the packets that are marked as INVALID, so it won’t reach to client pod and cause harm.
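
For the second option, the rule is along these lines (a sketch; the upstream fix places an equivalent rule in kube-proxy's own forwarding chain):

iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP   # filter table: drop packets conntrack marked INVALID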

The fix is drafted (https://github.com/kubernetes/kubernetes/pull/74840), but unfortunately it didn’t catch the v1.14 release window. However, for the users that are affected by this bug, there is a way to mitigate the problem by applying the following rule in your cluster.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
      - name: startup-script
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
            echo done

Summary

Obviously, the bug has existed almost forever. I am surprised that it hasn’t been noticed until recently. I believe the reasons could be: (1) this happens more often on a congested server serving large payloads, which might not be a common use case; (2) the application layer handles retries and is tolerant of this kind of reset. Anyway, regardless of how fast Kubernetes has been growing, it’s still a young project. There is no secret other than listening closely to customers’ feedback, not taking anything for granted, and digging deep; this is how we can make it the best platform to run applications.

Source

Istio monitoring explained

Nobody would be surprised if I say “Service Mesh” is a trending topic in the tech community these days. One of the most active projects in this area is Istio. It was jointly created by IBM, Google, and Lyft as a response to known problems with microservice architectures. Containers and Kubernetes greatly help with adopting a microservices architecture. However, at the same time, they bring a new set of new problems we didn’t have before.

Nowadays, all our services use HTTP/gRPC APIs to communicate between themselves. In the old monolithic times, these were just function calls flowing through a single application. This means, in a microservice system, that there are a large number of interactions between services which makes observability, security, and monitoring harder.

There are already a lot of resources that explain what Istio looks like and how it works. I don’t want to repeat those here, so I am going to focus on one area: monitoring. The official documentation covers this, but understanding it took me some time. So in this tutorial, I will guide you through it so you can gain a deeper understanding of using Istio for monitoring tasks.

State of the art

One of the main reasons a service mesh is chosen is to improve observability. Up to now, developers had to instrument their applications to expose a series of metrics, often using a common library or a vendor’s agent like New Relic or Datadog. Operators were then able to scrape the application’s metric endpoints with a monitoring solution to get a picture of how the system was behaving. But having to modify the code is a pain, especially when there are many changes or additions, and scaling this approach across multiple teams can make it hard to maintain.

The Istio approach is to expose and track application behaviour without touching a single line of code. This is achieved thanks to the ‘sidecar’ concept, which is a container that runs alongside our applications and supplies data to a central telemetry component. The sidecars can sniff a lot of information about the requests, thanks to being able to recognise the protocol being used (redis, mongo, http, grpc, etc.).

Mixer, the Swiss Army Knife

Let’s start by explaining the Mixer component: what it does and what benefits it brings to monitoring. In my opinion, the best way to define Mixer is to visualize it as an attribute processor. Every proxy in the mesh sends a different set of attributes, like request data or environment information, and Mixer processes all this data and routes it to the right adapters.

An adapter is a handler which is attached to Mixer and is in charge of adapting the attribute data for a backend. A backend can be any external service that is interested in this data: for example, a monitoring tool (like Prometheus or Stackdriver), an authorization backend, or a logging stack.

A diagram of how Mixer works

Concepts

One of the hardest things when entering the Istio world is getting familiar with the new terminology. Just when you think you’ve understood the entire Kubernetes glossary, you realize Istio adds more than fifty new terms to the arena!

Focusing on monitoring, let’s describe the most interesting concepts that will help us benefit from the Mixer design:

  • Attribute: A piece of data that is processed by the mixer. Most of the time this comes from a sidecar but it can be produced by an adapter too. Attributes are used in the Instance to map the desired data to the backend.
  • Adapter: Logic embedded in the mixer component which manages the forwarding of data to a specific backend.
  • Handler: Configuration of an adapter. As an adapter can serve multiple use cases, the configuration is decoupled making it possible to run the same adapter with multiple settings.
  • Instance: The entity which binds the data coming from Istio to the adapter model. Istio has a unified set of attributes collected by its sidecar containers. This data has to be translated into the backend language.
  • Template: A common interface to define the instance templates. https://istio.io/docs/reference/config/policy-and-telemetry/templates/

Creating a new monitoring case

After defining all the concepts around Istio observability, the best way to embed it in our minds is with a real-world scenario.

For this exercise, I thought it would be great to take advantage of Kubernetes label metadata and use it to track the versioning of our services. When you’re moving to a microservice architecture, it is common to end up with multiple versions of your services (A/B testing, API versioning, etc.). The Istio sidecar sends all kinds of metadata from your cluster to Mixer, so in our example we will leverage the deployment labels to identify the service version and observe the usage stats for each version.

For the sake of simplicity let’s take an existing project, the Google microservices demo project, and make some modifications to match our plan. This project simulates a microservice architecture composed of multiple components to build an e-commerce website.

First things first, let’s ensure the project runs correctly in our cluster with Istio. Let’s use the auto-injection feature to deploy all the components in a namespace and have the sidecar injected automatically by Istio.

$ kubectl label namespace mesh istio-injection=enabled

Warning: Ensure the mesh namespace is created beforehand and that your kubectl context points to it.

If you have a pod security policy enabled, you will need to configure some permissions for the init container in order to let it configure the iptables magic correctly. For testing purposes you can use:

$ kubectl create clusterrolebinding mesh --clusterrole cluster-admin --serviceaccount=mesh:default

This binds the default service account to the cluster admin role. Now we can deploy all the components using the all-in resources YAML document.

$ kubectl apply -f release/kubernetes-manifests.yaml

Now you should be able to see pods starting in the mesh namespace. Some of them will fail because the Istio resources are not yet added. For example, egress traffic will not be allowed and the currency component will fail. Apply these resources to fix the problem and expose the frontend component through the Istio ingress.

$ kubectl apply -f release/istio-manifests.yaml

Now, we can browse to see the frontend using the IP or domain supplied by your cloud provider (the frontend-external service is exposed via the cloud provider load balancer).

Now that we have our microservices application running, let’s go a step further and configure one of the components to have multiple versions. As you can see in the microservices YAML, the deployment has a single label with the application name. If we want to manage canary deployments or run multiple versions of our app, we can add another label with the version.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: currencyservice
spec:
  template:
    metadata:
      labels:
        app: currencyservice
        version: v1

After applying the changes to our cluster, we can duplicate the deployment with a different name and change the version.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: currencyservice2
spec:
  template:
    metadata:
      labels:
        app: currencyservice
        version: v2

And now submit it to the API again.

$ kubectl apply -f release/kubernetes-manifests.yaml

Note: Although we apply all the manifests again, only the ones that have changed will be updated by the API.

An avid reader will have noticed that we did a trick here: the service selector only points to the app label, so traffic will be split evenly between the two versions.
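
For reference, the Service that makes this trick work looks roughly like the sketch below; this is only an illustration (the port number is a placeholder, check the demo manifests for the real values), the point being that the selector matches only the app label:

apiVersion: v1
kind: Service
metadata:
  name: currencyservice
spec:
  selector:
    app: currencyservice   # no "version" key, so pods from both v1 and v2 are selected
  ports:
  - name: grpc
    port: 7000             # placeholder port; see the demo project for the real value
    targetPort: 7000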

From the ground to the sky

Now let’s add the magic. We will need to create three resources to expose the version as a new metric in Prometheus.

First, we’ll create the instance. Here we use the metric instance template to map the values provided by the sidecars to the adapter inputs. We are only interested in the workload name (source) and the version.

apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: versioncount
  namespace: mesh
spec:
  value: "1"
  dimensions:
    source: source.workload.name | "unknown"
    version: destination.labels["version"] | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'

Now let’s configure the adapter. In our case, we want to connect the metric to a Prometheus backend, so in the handler configuration we’ll define the metric name, the kind of value the metric will serve to the backend (in the Prometheus DSL), and the label names it will use for the dimensions.

apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: versionhandler
  namespace: mesh
spec:
  metrics:
  - name: version_count # Prometheus metric name
    instance_name: versioncount.metric.mesh # Mixer instance name (fully-qualified)
    kind: COUNTER
    label_names:
    - source
    - version

Finally, we’ll need to link this particular handler with a specific instance (metric).

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: versionprom
  namespace: mesh
spec:
  match: destination.service == "currencyservice.mesh.svc.cluster.local"
  actions:
  - handler: versionhandler.prometheus
    instances:
    - versioncount.metric.mesh

Once those definitions are applied, Istio will instruct the Prometheus adapter to start collecting and serving the new metric. If we now search for the new metric in the Prometheus UI, we should see something like:

Prometheus graph of the new version metric
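
As a side note (this is an assumption about the adapter’s naming convention rather than something configured above): the Mixer Prometheus adapter typically exposes configured metrics with an istio_ prefix, so a Prometheus query along these lines should break the traffic down by version:

sum(rate(istio_version_count[5m])) by (source, version)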

Conclusion

Good observability in a microservice architecture is not easy. Istio can help to remove the complexity from developers and leave the work to the operator.

At the beginning it may be hard to deal with all the complexity added by a service mesh. But once you’ve tamed it, you’ll be able to standardize and automate your monitoring configuration and build a great observability system in record time.

Further information

Source

An Introduction to Big Data Concepts https://www.appservgrid.com/paw93/index.php/2019/03/29/an-introduction-to-big-data-concepts/ https://www.appservgrid.com/paw93/index.php/2019/03/29/an-introduction-to-big-data-concepts/#respond Fri, 29 Mar 2019 02:43:36 +0000 https://www.appservgrid.com/paw93/?p=1527 Continue reading "An Introduction to Big Data Concepts"

An Introduction to Big Data Concepts

Gigantic amounts of data are being generated at high speed by a variety of sources such as mobile devices, social media, machine logs, and the many sensors surrounding us. All around the world, we produce vast amounts of data, and the volume of generated data is growing at an unprecedented rate. The pace of data generation is being accelerated further by the growth of new technologies and paradigms such as the Internet of Things (IoT).

What is Big Data and How Is It Changing?

The definition of big data is hidden in the dimensions of the data. Data sets are considered “big data” if they have a high degree of the following three distinct dimensions: volume, velocity, and variety. Value and veracity are two other “V” dimensions that have been added to the big data literature in recent years. Additional Vs are frequently proposed, but these five Vs are widely accepted by the community and can be described as follows:

  • Velocity: the speed at which the data is being generated
  • Volume: the amount of data that is being generated
  • Variety: the diversity or different types of the data
  • Value: the worth of the data or the value it has
  • Veracity: the quality, accuracy, or trustworthiness of the data

Large volumes of data are generally available in either structured or unstructured formats. Structured data can be generated by machines or humans, has a specific schema or model, and is usually stored in databases. Structured data is organized around schemas with clearly defined data types. Numbers, date time, and strings are a few examples of structured data that may be stored in database columns. Alternatively, unstructured data does not have a predefined schema or model. Text files, log files, social media posts, mobile data, and media are all examples of unstructured data.

Based on a report provided by Gartner, an international research and consulting organization, the application of advanced big data analytics is part of the Gartner Top 10 Strategic Technology Trends for 2019, and is expected to drive new business opportunities. The same report also predicts that more than 40% of data science tasks will be automated by 2020, which will likely require new big data tools and paradigms.

By 2017, global internet usage reached 47% of the world’s population based on an infographic provided by DOMO. This indicates that an increasing number of people are starting to use mobile phones and that more and more devices are being connected to each other via smart cities, wearable devices, Internet of Things (IoT), fog computing, and edge computing paradigms. As internet usage spikes and other technologies such as social media, IoT devices, mobile phones, autonomous devices (e.g. robotics, drones, vehicles, appliances, etc) continue to grow, our lives will become more connected than ever and generate unprecedented amounts of data, all of which will require new technologies for processing.

The Scale of Data Generated by Everyday Interactions

At a large scale, the data generated by everyday interactions is staggering. Based on research conducted by DOMO, for every minute in 2018, Google conducted 3,877,140 searches, YouTube users watched 4,333,560 videos, Twitter users sent 473,400 tweets, Instagram users posted 49,380 photos, Netflix users streamed 97,222 hours of video, and Amazon shipped 1,111 packages. This is just a small glimpse of a much larger picture involving other sources of big data. It seems like the internet is pretty busy, doesn’t it? Moreover, mobile traffic is expected to grow tremendously past its present numbers, and the world’s internet population is growing significantly year-over-year. By 2020, the report anticipates that 1.7MB of data will be created per person per second. Big data is getting even bigger.

At a small scale, the data generated on a daily basis by a small business, a startup, or a single sensor such as a surveillance camera is also huge. For example, a typical IP camera in a surveillance system at a shopping mall or a university campus generates 15 frames per second and requires roughly 100 GB of storage per day. Consider the storage and computing requirements if those camera numbers are scaled to tens or hundreds.

Big Data in the Scientific Community

Scientific projects such as CERN, which conducts research on what the universe is made of, also generate massive amounts of data. The Large Hadron Collider (LHC) at CERN is the world’s largest and most powerful particle accelerator. It consists of a 27-kilometer ring of superconducting magnets along with some additional structures to accelerate and boost the energy of particles along the way.

While the accelerator is running, particles collide inside the LHC detectors roughly 1 billion times per second, which generates around 1 petabyte of raw digital “collision event” data per second. This unprecedented volume of data is a great challenge that cannot be resolved with CERN’s current infrastructure. To work around this, the generated raw data is filtered and only the “important” events are processed to reduce the volume of data. Consider the challenging processing requirements for this task.

The four big LHC experiments, named ALICE, ATLAS, CMS, and LHCb, are among the biggest generators of data at CERN, and the rate of the data processed and stored on servers by these experiments is expected to reach about 25 GB/s (gigabytes per second). As of June 29, 2017, the CERN Data Center announced that it had passed the milestone of 200 petabytes of data archived permanently in its storage units.

Why Big Data Tools are Required

The scale of the data generated by well-known corporations, small organizations, and scientific projects is growing at an unprecedented level. This can be clearly seen in the scenarios above, and it is worth remembering that the scale of this data is only getting bigger.

On the one hand, the mountain of data generated presents tremendous processing, storage, and analytics challenges that need to be carefully considered and handled. On the other hand, traditional Relational Database Management Systems (RDBMS) and data processing tools are not sufficient to manage this massive amount of data efficiently when the scale of data reaches terabytes or petabytes. These tools lack the ability to handle large volumes of data efficiently at scale. Fortunately, big data tools and paradigms such as Hadoop and MapReduce are available to resolve these big data challenges.

Analyzing big data and gaining insights from it can help organizations make smart business decisions and improve their operations. This can be done by uncovering hidden patterns in the data and using them to reduce operational costs and increase profits. Because of this, big data analytics plays a crucial role for many domains such as healthcare, manufacturing, and banking by resolving data challenges and enabling them to move faster.

Big Data Analytics Tools

Since the compute, storage, and network requirements for working with large data sets are beyond the limits of a single computer, there is a need for paradigms and tools to crunch and process data through clusters of computers in a distributed fashion. More and more computing power and massive storage infrastructure are required for processing this massive data either on-premise or, more typically, at the data centers of cloud service providers.

In addition to the required infrastructure, various tools and components must be brought together to solve big data problems. The Hadoop ecosystem is just one of the platforms helping us work with massive amounts of data and discover useful patterns for businesses.

Below is a list of some of the tools available and a description of their roles in processing big data:

  • MapReduce: MapReduce is a distributed computing paradigm developed to process vast amounts of data in parallel by splitting a big task into smaller map and reduce tasks.
  • HDFS: The Hadoop Distributed File System is a distributed storage and file system used by Hadoop applications.
  • YARN: The resource management and job scheduling component in the Hadoop ecosystem.
  • Spark: A real-time in-memory data processing framework.
  • PIG/HIVE: SQL-like scripting and querying tools for data processing and simplifying the complexity of MapReduce programs.
  • HBase, MongoDB, Elasticsearch: Examples of a few NoSQL databases.
  • Mahout, Spark ML: Tools for running scalable machine learning algorithms in a distributed fashion.
  • Flume, Sqoop, Logstash: Data integration and ingestion of structured and unstructured data.
  • Kibana: A tool to visualize Elasticsearch data.

Conclusion

To summarize, we are generating a massive amount of data in our everyday life, and that number is continuing to rise. Having the data alone does not help an organization unless it is analyzed and its value for business intelligence is discovered. It is not possible to mine and process this mountain of data with traditional tools, so we use big data pipelines to help us ingest, process, analyze, and visualize these tremendous amounts of data.

Learn to deploy databases in production on Kubernetes

For more training in big data and database management, watch our free online training on successfully running a database in production on Kubernetes.

Running Kubernetes locally on Linux with Minikube – now with Kubernetes 1.14 support https://www.appservgrid.com/paw93/index.php/2019/03/29/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1-14-support/ https://www.appservgrid.com/paw93/index.php/2019/03/29/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1-14-support/#respond Fri, 29 Mar 2019 02:43:17 +0000 https://www.appservgrid.com/paw93/?p=1526 Continue reading "Running Kubernetes locally on Linux with Minikube – now with Kubernetes 1.14 support"


  • A few days ago, the Kubernetes community announced Kubernetes 1.14, the most recent version of Kubernetes. Alongside it, Minikube, a part of the Kubernetes project, recently hit the 1.0 milestone, which supports Kubernetes 1.14 by default.

    Kubernetes is a real winner (and a de facto standard) in the world of distributed Cloud Native computing. While it can handle up to 5000 nodes in a single cluster, local deployment on a single machine (e.g. a laptop, a developer workstation, etc.) is an increasingly common scenario for using Kubernetes.

    This is post #1 in a series about the local deployment options on Linux, and it will cover Minikube, the most popular community-built solution for running Kubernetes on a local machine.

    Minikube is a cross-platform, community-driven Kubernetes distribution, which is targeted to be used primarily in local environments. It deploys a single-node cluster, which is an excellent option for having a simple Kubernetes cluster up and running on localhost.

    Minikube is designed to run as a virtual machine (VM), and the default VM runtime is VirtualBox. At the same time, extensibility is one of the critical benefits of Minikube, so it’s possible to use it with drivers other than VirtualBox.

    By default, Minikube uses Virtualbox as a runtime for running the virtual machine. Virtualbox is a cross-platform solution, which can be used on a variety of operating systems, including GNU/Linux, Windows, and macOS.

    At the same time, QEMU/KVM is a Linux-native virtualization solution, which may offer benefits compared to Virtualbox. For example, it’s much easier to use KVM on a GNU/Linux server, so you can run a single-node Minikube cluster not only on a Linux workstation or laptop with GUI, but also on a remote headless server.

    Unfortunately, Virtualbox and KVM can’t be used simultaneously, so if you are already running KVM workloads on a machine and want to run Minikube there as well, using the KVM minikube driver is the preferred way to go.

    In this guide, we’ll focus on running Minikube with the KVM driver on Ubuntu 18.04 (I am using a bare metal machine running on packet.com.)

    Minikube architecture (source: kubernetes.io)
    Minikube architecture (source: kubernetes.io)

    Disclaimer

    This is not an official guide to Minikube. You may find detailed information on running and using Minikube on its official webpage, where different use cases, operating systems, environments, etc. are covered. Instead, the purpose of this guide is to provide clear and easy guidelines for running Minikube with KVM on Linux.

    Prerequisites

    • Any Linux you like (in this tutorial we’ll use Ubuntu 18.04 LTS, and all the instructions below are applicable to it. If you prefer using a different Linux distribution, please check out the relevant documentation)
    • libvirt and QEMU-KVM installed and properly configured
    • The Kubernetes CLI (kubectl) for operating the Kubernetes cluster

    QEMU/KVM and libvirt installation

    NOTE: skip if already installed

    Before we proceed, we have to verify if our host can run KVM-based virtual machines. This can be easily checked using the kvm-ok tool, available on Ubuntu.

    sudo apt install cpu-checker && sudo kvm-ok

    If you receive the following output after running kvm-ok, you can use KVM on your machine (otherwise, please check out your configuration):

    $ sudo kvm-ok
    INFO: /dev/kvm exists
    KVM acceleration can be used

    Now let’s install KVM and libvirt and add our current user to the libvirt group to grant sufficient permissions:

    sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm \
        && sudo usermod -a -G libvirt $(whoami) \
        && newgrp libvirt

    After installing libvirt, you can verify that the host is able to run virtual machines with the virt-host-validate tool, which is part of libvirt.

    sudo virt-host-validate

    kubectl (Kubernetes CLI) installation

    NOTE: skip if already installed

    In order to manage the Kubernetes cluster, we need to install kubectl, the Kubernetes CLI tool.

    The recommended way to install it on Linux is to download the pre-built binary and move it to a directory under the $PATH.

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
        && sudo install kubectl /usr/local/bin && rm kubectl

    Alternatively, kubectl can be installed in a variety of other ways (e.g. as a .deb or snap package); check out the kubectl documentation to find the best one for you.

    Minikube installation

    Minikube KVM driver installation

    A VM driver is an essential requirement for local deployment of Minikube. As we’ve chosen to use KVM as the Minikube driver in this tutorial, let’s install the KVM driver with the following command:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
        && sudo install docker-machine-driver-kvm2 /usr/local/bin/ && rm docker-machine-driver-kvm2

    Minikube installation

    Now let’s install Minikube itself:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
        && sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

    Verify the Minikube installation

    Before we proceed, we need to verify that Minikube is correctly installed. The simplest way to do this is to check the Minikube version.

    minikube version

    Now let’s run the local Kubernetes cluster with Minikube using the KVM2 driver:

    minikube start --vm-driver kvm2

    Set KVM2 as a default VM driver for Minikube

    If KVM is used as the single driver for Minikube on our machine, it’s more convenient to set it as a default driver and run Minikube with fewer command-line arguments. The following command sets the KVM driver as the default:

    minikube config set vm-driver kvm2

    So now let’s run Minikube as usual:

    minikube start

    Verify the Kubernetes installation

    Let’s check if the Kubernetes cluster is up and running:

    kubectl get nodes

    Now let’s run a simple sample app (nginx in our case):

    kubectl create deployment nginx --image=nginx

    Let’s also check that the Kubernetes pods are correctly provisioned:

    kubectl get pods
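
    To reach the sample app from your host, you can, as a quick hedged example, expose the deployment as a NodePort service and ask Minikube for its URL:

    kubectl expose deployment nginx --port=80 --type=NodePort
    minikube service nginx --url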

    Screencast

    (An asciinema screencast of the steps above is embedded in the original post.)

    Next steps

    At this point, a Kubernetes cluster with Minikube and KVM is adequately set up and configured on your local machine.

    To proceed, you may check out the Kubernetes tutorials on the project website:

    It’s also worth checking out the “Introduction to Kubernetes” course by The Linux Foundation/Cloud Native Computing Foundation, available for free on EDX:

  • Source

    Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA https://www.appservgrid.com/paw93/index.php/2019/03/26/kubernetes-1-14-production-level-support-for-windows-nodes-kubectl-updates-persistent-local-volumes-ga/ https://www.appservgrid.com/paw93/index.php/2019/03/26/kubernetes-1-14-production-level-support-for-windows-nodes-kubectl-updates-persistent-local-volumes-ga/#respond Tue, 26 Mar 2019 13:54:59 +0000 https://www.appservgrid.com/paw93/?p=1523 Continue reading "Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA"

  • We’re pleased to announce the delivery of Kubernetes 1.14, our first release of 2019!

    Kubernetes 1.14 consists of 31 enhancements: 10 moving to stable, 12 in beta, and 7 net new. The main themes of this release are extensibility and supporting more workloads on Kubernetes with three major features moving to general availability, and an important security feature moving to beta.

    More enhancements graduated to stable in this release than any prior Kubernetes release. This represents an important milestone for users and operators in terms of setting support expectations. In addition, there are notable Pod and RBAC enhancements in this release, which are discussed in the “additional notable features” section below.

    Let’s dive into the key features of this release:

    Production-level Support for Windows Nodes

    Up until now Windows Node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform. Enterprises with investments in Windows-based applications and Linux-based applications don’t have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system.

    Some of the key features of enabling Windows containers in Kubernetes include:

    • Support for Windows Server 2019 for worker nodes and containers
    • Support for out of tree networking with Azure-CNI, OVN-Kubernetes, and Flannel
    • Improved support for pods, service types, workload controllers, and metrics/quotas to closely match the capabilities offered for Linux containers

    Notable Kubectl Updates

    New Kubectl Docs and Logo

    The documentation for kubectl has been rewritten from the ground up with a focus on managing Resources using declarative Resource Config. The documentation has been published as a standalone site with the format of a book, and it is linked from the main k8s.io documentation (available at https://kubectl.docs.kubernetes.io).

    The new kubectl logo and mascot (pronounced kubee-cuddle) are shown on the new docs site logo.

    Kustomize Integration

    The declarative Resource Config authoring capabilities of kustomize are now available in kubectl through the -k flag (e.g. for commands like apply, get) and the kustomize subcommand. Kustomize helps users author and reuse Resource Config using Kubernetes native concepts. Users can now apply directories with kustomization.yaml to a cluster using kubectl apply -k dir/. Users can also emit customized Resource Config to stdout without applying them via kubectl kustomize dir/. The new capabilities are documented in the new docs at https://kubectl.docs.kubernetes.io
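
    For example, a minimal kustomization.yaml might look like the sketch below (the file names and label are hypothetical placeholders, not part of any official example):

    # dir/kustomization.yaml -- hypothetical sketch
    resources:
    - deployment.yaml
    - service.yaml
    commonLabels:
      app: my-app

    It could then be applied with kubectl apply -k dir/ or rendered to stdout with kubectl kustomize dir/.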

    The kustomize subcommand will continue to be developed in the Kubernetes-owned kustomize repo. The latest kustomize features will be available from a standalone kustomize binary (published to the kustomize repo) at a frequent release cadence, and will be updated in kubectl prior to each Kubernetes release.

    kubectl Plugin Mechanism Graduating to Stable

    The kubectl plugin mechanism allows developers to publish their own custom kubectl subcommands in the form of standalone binaries. This may be used to extend kubectl with new higher-level functionality and with additional porcelain (e.g. adding a set-ns command).

    Plugins must have the kubectl- name prefix and exist on the user’s $PATH. The plugin mechanics have been simplified significantly for GA, and are similar to the git plugin system.
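
    As a hedged illustration, a plugin can be as simple as an executable shell script named with the kubectl- prefix and placed on the $PATH (the plugin name here is made up):

    #!/bin/bash
    # Saved as /usr/local/bin/kubectl-hello and made executable -- hypothetical plugin
    echo "Hello from a kubectl plugin"

    With that file in place, kubectl hello runs the script.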

    Persistent Local Volumes are Now GA

    This feature, graduating to stable, makes locally attached storage available as a persistent volume source. Distributed file systems and databases are the primary use cases for persistent local storage due to performance and cost. On cloud providers, local SSDs give better performance than remote disks. On bare metal, in addition to performance, local storage is typically cheaper, and using it is a necessity for provisioning distributed file systems.
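
    For context, a local PersistentVolume manifest (sketched here with made-up names and paths) ties a disk path on a specific node to a PV through required node affinity:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv          # hypothetical name
    spec:
      capacity:
        storage: 100Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/ssd1         # hypothetical local disk path
      nodeAffinity:                   # required for local volumes: pins the PV to one node
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - worker-node-1         # hypothetical node name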

    PID Limiting is Moving to Beta

    Process IDs (PIDs) are a fundamental resource on Linux hosts. It is trivial to hit the task limit without hitting any other resource limits and cause instability to a host machine. Administrators require mechanisms to ensure that user pods cannot induce PID exhaustion that prevents host daemons (runtime, kubelet, etc) from running. In addition, it is important to ensure that PIDs are limited among pods in order to ensure they have limited impact to other workloads on the node.

    Administrators are able to provide pod-to-pod PID isolation by defaulting the number of PIDs per pod as a beta feature. In addition, administrators can enable node-to-pod PID isolation as an alpha feature by reserving a number of allocatable PIDs to user pods via node allocatable. The community hopes to graduate this feature to beta in the next release.
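
    As a sketch (field and feature-gate names should be checked against your kubelet version), the per-pod PID limit is set through the kubelet configuration file, for example:

    # Kubelet configuration snippet -- hedged sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      SupportPodPidsLimit: true       # beta feature gate assumed here
    podPidsLimit: 1024                # maximum number of PIDs per pod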

    Additional Notable Feature Updates

    Pod priority and preemption enables the Kubernetes scheduler to schedule more important Pods first and, when the cluster is out of resources, to remove less important pods to create room for more important ones. The importance is specified by priority.
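
    For context, priority is assigned through a PriorityClass and referenced from the pod spec; a minimal sketch with illustrative names and values:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority             # illustrative name
    value: 100000                     # higher value means more important
    globalDefault: false
    description: "For business-critical workloads"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: important-app             # illustrative name
    spec:
      priorityClassName: high-priority
      containers:
      - name: app
        image: nginx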

    Pod Readiness Gates introduce an extension point for external feedback on pod readiness.
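
    A readiness gate is declared in the pod spec and satisfied by an external controller setting the matching pod condition; a sketch with a made-up condition type:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gated-pod                                       # illustrative name
    spec:
      readinessGates:
      - conditionType: "example.com/load-balancer-ready"    # hypothetical condition set by an external controller
      containers:
      - name: app
        image: nginx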

    Harden the default RBAC discovery clusterrolebindings removes discovery from the set of APIs which allow for unauthenticated access by default, improving privacy for CRDs and the default security posture of default clusters in general.

    Availability

    Kubernetes 1.14 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials. You can also easily install 1.14 using kubeadm.

    Features Blog Series

    If you’re interested in exploring these features more in depth, check back next week for our 5 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

    • Day 1 – Windows Server Containers
    • Day 2 – Harden the default RBAC discovery clusterrolebindings
    • Day 3 – Pod Priority and Preemption in Kubernetes
    • Day 4 – PID Limiting
    • Day 5 – Persistent Local Volumes

    Release Team

    This release is made possible through the efforts of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Aaron Crickenberger, Senior Test Engineer at Google. The 43 individuals on the release team coordinated many aspects of the release, from documentation to testing, validation, and feature completeness.

    As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem. Kubernetes has had over 28,000 individual contributors to date and an active community of more than 57,000 people.

    Project Velocity

    The CNCF has continued refining DevStats, an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. On average over the past year, 381 different companies and over 2,458 individuals contribute to Kubernetes each month. Check out DevStats to learn more about the overall velocity of the Kubernetes project and community.

    User Highlights

    Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

    Is Kubernetes helping your team? Share your story with the community.

    Ecosystem Updates

  • KubeCon

    The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Barcelona from May 20-23, 2019 and Shanghai (co-located with Open Source Summit) from June 24-26, 2019. These conferences will feature technical sessions, case studies, developer deep dives, salons, and more! Register today!

    Webinar

    Join members of the Kubernetes 1.14 release team on April 23rd at 10am PDT to learn about the major features in this release. Register here.

    Get Involved

    The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

    Thank you for your continued feedback and support.

    Source

    Grafana Logging using Loki https://www.appservgrid.com/paw93/index.php/2019/03/22/grafana-logging-using-loki/ https://www.appservgrid.com/paw93/index.php/2019/03/22/grafana-logging-using-loki/#respond Fri, 22 Mar 2019 23:40:06 +0000 https://www.appservgrid.com/paw93/?p=1521 Continue reading "Grafana Logging using Loki"

    Grafana Logging using Loki

    Loki is a Prometheus-inspired logging service for cloud native infrastructure.

    What is Loki?

    Open sourced by Grafana Labs during KubeCon Seattle 2018, Loki is a logging backend optimized for users running Prometheus and Kubernetes with great logs search and visualization in Grafana 6.0.

    Loki was built for efficiency alongside the following goals:

    • Logs should be cheap. Nobody should be asked to log less.
    • Easy to operate and scale.
    • Metrics, logs (and traces later) need to work together.

    Loki vs other logging solutions

    As said, Loki is designed for efficiency to work well in the Kubernetes context in combination with Prometheus metrics.

    The idea is to switch easily between metrics and logs based on Kubernetes labels you already use with Prometheus.

    Unlike most logging solutions, Loki does not parse incoming logs or do full-text indexing.

    Instead, it indexes and groups log streams using the same labels you’re already using with Prometheus. This makes it significantly more efficient to scale and operate.

    Loki components

    Loki is a TSDB (time-series database) that stores logs as split and gzipped chunks of data.

    Logs are ingested via the API, and an agent called Promtail (tailing logs in Prometheus format) scrapes Kubernetes logs and adds label metadata before sending them to Loki.

    This metadata addition is exactly the same as Prometheus, so you will end up with the exact same labels for your resources.

    https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/

    How to deploy Loki on your Kubernetes cluster

    1. Deploy Loki on your cluster

    The easiest way to deploy Loki on your Kubernetes cluster is by using the Helm chart available in the official repository.

    You can follow the setup guide from the official repo.

    This will deploy Loki and Promtail.
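
    As a rough sketch of what that looks like (the chart repository URL and chart name are from memory and may have changed, so treat the official setup guide as canonical):

    helm repo add loki https://grafana.github.io/loki/charts
    helm repo update
    helm upgrade --install loki loki/loki-stack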

    2. Add Loki datasource in Grafana (built-in support for Loki is in 6.0 and newer releases)
      1. Log into your Grafana.
      2. Go to Configuration > Data Sources via the cog icon in the left sidebar.
      3. Click the big + Add data source button.
      4. Choose Loki from the list.
      5. The http URL field should be the address of your Loki server: http://loki:3100
    3. See your logs in the “Explore” view
      1. Select the “Explore” view on the sidebar.
      2. Select the Loki data source.
      3. Choose a log stream using the “Log labels” button.

    Promtail configuration

    Promtail is the metadata appender and log-sending agent.

    The Promtail configuration you get from the Helm chart is already configured to get all the logs from your Kubernetes cluster and append labels on it as Prometheus does for metrics.

    However, you can tune the configuration for your needs.

    Here are two examples:

    1. Get logs only for specific namespace

    You can use the action: keep for your namespace and add a new relabel_configs for each scrape_config in promtail/configmap.yaml

    For example, if you want to get logs only for the kube-system namespace:

    scrape_configs:
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: keep
        regex: kube-system

    # [...]

    - job_name: kubernetes-pods-app
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: keep
        regex: kube-system

    2. Exclude logs from specific namespace

    You can use the action: drop for your namespace and add a new relabel_configs for each scrape_config in promtail/configmap.yaml.

    For example, if you want to exclude logs from the kube-system namespace:

    scrape_configs:
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: drop
        regex: kube-system

    # [...]

    - job_name: kubernetes-pods-app
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: drop
        regex: kube-system

    For more info on the configuration, you can refer to the official Prometheus configuration documentation.

    Use fluentd output plugin

    Fluentd is a well-known and solid log forwarder that is also a [CNCF project](https://www.cncf.io/projects/). It has a lot of input plugins and good filtering built in. For example, forwarding journald logs to Loki is not possible via Promtail, so you can use the fluentd syslog input plugin together with the fluentd Loki output plugin to get those logs into Loki.

    You can refer to the installation guide on how to use the fluentd Loki plugin.

    There’s also an example of how to forward API server audit logs to Loki with fluentd.

    Here is the fluentd configuration:

    <match fluent.**>
      type null
    </match>

    <source>
      @type tail
      path /var/log/apiserver/audit.log
      pos_file /var/log/fluentd-audit.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag audit.*
      format json
      read_from_head true
    </source>

    <filter kubernetes.**>
      type kubernetes_metadata
    </filter>

    <match audit.**>
      @type loki
      url "#"
      username "#"
      password "#"
      extra_labels {"env":"dev"}
      flush_interval 10s
      flush_at_shutdown true
      buffer_chunk_limit 1m
    </match>

    Promtail as a sidecar

    By default, Promtail is configured to automatically scrape logs from containers and send them to Loki. Those logs come from stdout.

    But sometimes, you may like to be able to send logs from an external file to Loki.

    In this case, you can set up Promtail as a sidecar, i.e. a second container in your pod, share the log file with it through a shared volume, and scrape the data to send it to Loki.

    Assuming you have an application simple-logger. The application logs into /home/slog/creator.log

    Your Kubernetes deployment will look like this:

    1. Add Promtail as a sidecar

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
      spec:
        selector:                  # apps/v1 Deployments require a selector matching the pod template labels
          matchLabels:
            name: my-app
        template:
          metadata:
            name: my-app
            labels:
              name: my-app
          spec:
            containers:
            - name: simple-logger
              image: giantswarm/simple-logger:latest

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
      spec:
        selector:
          matchLabels:
            name: my-app
        template:
          metadata:
            name: my-app
            labels:
              name: my-app
          spec:
            containers:
            - name: simple-logger
              image: giantswarm/simple-logger:latest
            - name: promtail
              image: grafana/promtail:master
              args:
              - "-config.file=/etc/promtail/promtail.yaml"
              - "-client.url=http://loki:3100/api/prom/push"

    2. Use a shared data volume containing the log file

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
      spec:
        selector:
          matchLabels:
            name: my-app
        template:
          metadata:
            name: my-app
            labels:
              name: my-app
          spec:
            containers:
            - name: simple-logger
              image: giantswarm/simple-logger:latest
              volumeMounts:
              - name: shared-data
                mountPath: /home/slog
            - name: promtail
              image: grafana/promtail:master
              args:
              - "-config.file=/etc/promtail/promtail.yaml"
              - "-client.url=http://loki:3100/api/prom/push"
              volumeMounts:
              - name: shared-data
                mountPath: /home/slog
            volumes:
            - name: shared-data
              emptyDir: {}

    3. Configure Promtail to read your log file

    As Promtail uses the same config as Prometheus, you can use the scrape_config type static_configs to read the file you want.

    scrape_configs:
    - job_name: system
      entry_parser: raw
      static_configs:
      - targets:
        - localhost
        labels:
          job: my-app
          my-label: awesome
          __path__: /home/slog/creator.log

    And you’re done.

    A running example can be found here

    Conclusion

    So Loki looks very promising. The footprint is very low. It integrates nicely with Grafana and Prometheus. Having the same labels as in Prometheus is very helpful to map incidents together and quickly find logs related to metrics. Another big point is the simple scalability: Loki is horizontally scalable by design.

    As Loki is currently alpha software, install it and play with it. Then, join us on grafana.slack.com and add your feedback to make it better.

    Interested in finding out how Giant Swarm handles the entire cloud native stack including Loki? Request your free trial of the Giant Swarm Infrastructure here.

    Source
