Using New Relic, Splunk, AppDynamics and Netuitive for Container Monitoring


If you use containers as part of your day-to-day operations, you need to
monitor them — ideally, by using a monitoring solution that you
already have in place, rather than implementing an entirely new tool.
Containers are often deployed quickly and at a high volume, and they
frequently consume and release system resources at a rapid rate. You
need to have some way of measuring container performance, and the impact
that container deployment has on your system. In this article, we’ll
take a look at four widely used monitoring
platforms that support containers—Netuitive, New Relic, Splunk, and AppDynamics—and compare how they measure up.
First, though, a question: When you monitor containers, what kind of
metrics do you expect to see? The answer, as we’ll see below, varies
with the monitoring platform. But in general, container metrics fall
into two categories—those that measure overall container impact on the
system, and those that focus on the performance of individual containers.

Setting up the Monitoring System

The first step in any kind of software monitoring, of course, is to
install the monitoring service. For all of the platforms covered in this
article, you can expect additional steps for setting up standard
monitoring features. Here we cover only those directly related to
container monitoring.

Setup: New Relic

With New Relic, you start by
installing New Relic Servers for Linux,
which includes integrated Docker monitoring. It should be installed on
the Docker server, rather than the Docker container. The Servers for
Linux package is available for most common Linux distributions; Docker
monitoring, however, requires a 64-bit system. After you install New
Relic Servers for Linux, you will need to create a docker group (if it
doesn’t exist), then add the newrelic user to that group. You may need
to do some basic setup after that, including (depending on the Linux
distribution) setting the location of the container data files and
enabling memory metrics.

Setup: Netuitive

Netuitive also requires
you to install its Linux monitoring agent on the Docker server. You then
need to enable Docker metrics collection in the Linux Agent config file,
and optionally limit the metrics and/or containers by creating a
regex-based blacklist or whitelist. As with New Relic, you may wind up
setting a few additional options. Netuitive, however, offers an
additional installation method. If you are unable to install the Linux
Agent on the Docker server, you can install a Linux Agent Docker
container, which will then do the job of monitoring the host and
containers.

Setup: Splunk

Splunk takes a very different approach to
container monitoring. It uses a Docker API logging driver to send
container log data directly to Splunk Enterprise and Splunk Cloud via
its HTTP Event Collector. You specify the splunk driver (and its
associated options) on the Docker command line. Splunk’s monitoring, in
other words, is integrated directly with Docker, rather than with the
host system. The Splunk log driver requires an HTTP Event Collector
token, and the path (plus port) to the user’s Splunk Cloud/Splunk
Enterprise instance. It also takes several optional arguments.

Setup: AppDynamics

AppDynamics uses a Docker Monitoring extension
to send Docker Remote API metrics to its Machine Agent. In some ways,
this places it in a middle ground between New Relic and Netuitive’s
agent-based monitoring and Splunk’s close interaction with Docker.
AppDynamics’ extension installation, however, is much more hands-on.
The instructions suggest that you can expect to come out of it with
engine grease up to your elbows, and perhaps a few scraped knuckles. You
must first manually bind Docker to the TCP Socket or the Unix Socket.
After that, you need to install the Machine Agent, followed by the
extension. You then need to manually edit several sections of the config
file, including the custom dashboard. You must also set executable
permissions, and AppDynamics asks you to review both the Docker Remote
API and the Docker Monitoring extension’s socket command file. The
AppDynamics instructions also include troubleshooting instructions for
most of the installation steps.
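To make the Splunk approach concrete, here is a hedged sketch of a docker run invocation using the splunk log driver; the token and URL are placeholders that you would replace with your own HTTP Event Collector settings, and the image name is just an example:

```shell
# Placeholder values: substitute your own HEC token and Splunk instance URL
docker run --log-driver=splunk \
    --log-opt splunk-token=<YOUR_HEC_TOKEN> \
    --log-opt splunk-url=https://your-instance.splunkcloud.com:8088 \
    --log-opt splunk-source=docker \
    --log-opt tag="{{.Name}}" \
    nginx
```

Because the options are passed per container on the Docker command line, each container can be tagged and routed independently, with no agent installed on the host.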

What You Get

As you might expect, there are some significant differences in the
metrics which each of these platforms monitors and displays.

Output: New Relic

New Relic displays Docker container metrics as part of its
Application Performance Monitoring (APM) Overview page; containers are
grouped by host server when Servers for Linux is installed, and are
indicated by a container symbol. The overview page includes drill-down
for detailed performance features. The New Relic Servers monitor
includes a Docker page, which shows the impact of Docker containers on
the server’s performance. It displays server-related metrics for
individual Docker images, with drill-down to image details. It does not,
however, display data for individual containers.

Output: Netuitive
Netuitive’s Docker monitor collects a variety of metrics, including
several related to CPU and network performance, and almost two dozen
involving memory. It also collects computed CPU, memory, and throttling
percentages. With Netuitive, you build a dashboard by assembling
widgets, so the actual data shown (and the manner in which it is
displayed) will depend on your choice of widgets and their
configuration.

Output: Splunk

Splunk is designed to use data from a
wide range of logs and related sources; for containers, it pulls data
from individual container logs, from Docker and cloud APIs, and from
application logs. Since Splunk integrates such a large amount of data at
the cloud and enterprise level, it is up to the user to configure
Splunk’s analysis and monitoring tools to display the required data.
For containers, Splunk recommends looking at CPU and memory use,
downtime/outage-related errors, and specific container and application
logs to identify problem containers.

Output: AppDynamics

AppDynamics
reports basic container and system statistics (individual container
size, container and image count, total memory, memory limit, and swap
limit), along with various ongoing network, CPU, and memory statistics.
It sends these to the dashboard, where they are displayed in a series of
graphs.

Which Service Should You Use?

When it comes to the question of which monitoring service is right for
your container deployment, there’s no single answer. For most
container-based operations, including Rancher-managed operations on a
typical Linux distribution, either New Relic or Netuitive should do
quite well. With reasonably similar setup and monitoring features, the
tradeoff is between New Relic’s preconfigured dashboard pages and the
do-it-yourself customizability of Netuitive’s dashboard system. For
enterprise-level operations concerned with integrated monitoring of
performance at all scales, from system-level down to individual
container and application logs, Splunk is the obvious choice. Since
Splunk works directly with the Docker API, it is also likely to be the
best option for use with minimal-feature RancherOS deployments. If, on
the other hand, you simply want to monitor container performance via the
Docker API in a no-frills, basic way, the AppDynamics approach might
work best for you. So there it is: Look at what kind of container
monitoring you need, and take your pick.


Get started with OpenFaaS and KinD


In this post I want to show you how I’ve started deploying OpenFaaS with the new tool from the Kubernetes community named Kubernetes in Docker or KinD. You can read my introductory blog post Be KinD to Yourself here.

The mission of OpenFaaS is to Make Serverless Functions Simple. It is open-source and built by developers, for developers, in the open, with a growing and welcoming community. With OpenFaaS you can run stateless microservices and functions with a single control-plane that focuses on ease of use, on top of Kubernetes. The widely accepted OCI/Docker image format is used to package and deploy your code, which can be run on any cloud.

Over the past two years more than 160 developers have contributed to code, documentation and packaging. A large number of them have also written blog posts and held events all over the world.

Find out more about OpenFaaS on the blog or GitHub openfaas/faas


Unlike prior development environments for Kubernetes such as Docker for Mac or minikube, the only requirement for your system is Docker, which means you can run KinD almost anywhere you can get Docker installed.

This is also a nice experience for developers because it’s the same on MacOS, Linux and Windows.

On a Linux host or Linux VM type in $ curl -sLS | sudo sh.

Download Docker Desktop for Windows or Mac.

Create your cluster

Install kubectl

The kubectl command is the main CLI needed to operate Kubernetes.

I like to install it via the binary release here.

Get kind

You can get the latest and greatest by running the following command (if you have Go installed locally)

$ go get

Or if you don’t want to install Golang on your system you can grab a binary from the release page.

Create one or more clusters

Another neat feature of kind is the ability to create one or more named clusters. I find this useful because OpenFaaS ships plain YAML files and a helm chart and I need to test both independently on a clean and fresh cluster. Why try to remove and delete all the objects you created between tests when you can just spin up an entirely fresh cluster in about the same time?

$ kind create cluster --name openfaas

Creating cluster 'kind-openfaas' ...
 ✓ Ensuring node image (kindest/node:v1.12.2) 🖼 
 ✓ [kind-openfaas-control-plane] Creating node container 📦 
 ✓ [kind-openfaas-control-plane] Fixing mounts 🗻 
 ✓ [kind-openfaas-control-plane] Starting systemd 🖥
 ✓ [kind-openfaas-control-plane] Waiting for docker to be ready 🐋 
⠈⡱ [kind-openfaas-control-plane] Starting Kubernetes (this may take a minute) ☸ 

Now there is something you must not forget if you work with other remote clusters. Always, always switch context into your new cluster before making changes.

$ export KUBECONFIG="$(kind get kubeconfig-path --name="openfaas")"

Install OpenFaaS with helm

Install helm and tiller

The easiest way to install OpenFaaS is to use the helm client and its server-side equivalent tiller.

  • Create a ServiceAccount for Tiller:
$ kubectl -n kube-system create sa tiller \
 && kubectl create clusterrolebinding tiller \
      --clusterrole cluster-admin \
      --serviceaccount=kube-system:tiller
  • Install the helm CLI
$ curl -sLSf | sudo bash
  • Install the Tiller server component
helm init --skip-refresh --upgrade --service-account tiller

Note: it may take a minute or two to download tiller into your cluster.

Install the OpenFaaS CLI

$ curl -sLSf | sudo sh

Or on Mac use brew install faas-cli.

Install OpenFaaS

You can install OpenFaaS with authentication on or off; it’s up to you. Since your cluster is running locally you may want it turned off. If you decide otherwise, then check out the documentation.

  • Create the openfaas and openfaas-fn namespaces:
$ kubectl apply -f
  • Install using the helm chart:
$ helm repo add openfaas && \
    helm repo update && \
    helm upgrade openfaas --install openfaas/openfaas \
      --namespace openfaas  \
      --set basic_auth=false \
      --set functionNamespace=openfaas-fn \
      --set operator.create=true

The command above adds the OpenFaaS helm repository, then updates the local helm library and then installs OpenFaaS locally without authentication.

Note: if you see Error: could not find a ready tiller pod then wait a few moments then try again.

You can fine-tune settings such as timeouts, how many replicas of each service run, and what version you are using by following the Helm chart readme.

Check OpenFaaS is ready

The helm CLI should print a message such as: To verify that openfaas has started, run:

$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"

The KinD cluster will now download all the core services that make up OpenFaaS and this could take a few minutes if you’re on WiFi, so run the command above and look out for “AVAILABLE” turning to 1 for everything listed.
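If you prefer a blocking check over polling, kubectl can wait on a deployment directly; this sketch assumes the default deployment name created by the chart, such as gateway:

```shell
# Blocks until the gateway deployment reports all replicas available
kubectl rollout status -n openfaas deploy/gateway
```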

Access OpenFaaS

Now that you’ve set up a cluster and OpenFaaS, it’s time to access the UI and API.

First forward the port of the gateway to your local machine using kubectl.

$ kubectl port-forward svc/gateway -n openfaas 8080:8080

Note: If you already have a service running on port 8080, then change the port binding to 8888:8080 for instance. You should also run export OPENFAAS_URL= so that the CLI knows where to point to.
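For instance, assuming you chose the alternate 8888:8080 binding, the pair of commands might look like this (the URL here is simply the local forwarded address):

```shell
# Forward the gateway to local port 8888 in the background
kubectl port-forward svc/gateway -n openfaas 8888:8080 &
export OPENFAAS_URL=http://127.0.0.1:8888
```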

You can now use the OpenFaaS CLI and UI.

Open the UI at and deploy a function from the Function store – a good example is “CertInfo” which can check when a TLS certificate will expire.

Your chosen image may take a few seconds or minutes to download and deploy, depending on your connection.

  • Invoke the function then see its statistics and other information via the CLI:
$ faas-cli list -v
  • Deploy figlet which can generate ASCII text messages, try it out.
$ faas-cli store deploy figlet
$ echo Hi! | faas-cli invoke figlet

You can use the describe verb for more information and to find your URL for use with curl or other tools and services.

$ faas-cli describe figlet

Use the OpenFaaS CRD

You can also use the OpenFaaS Custom Resource Definition or CRD by typing in:

$ kubectl get functions -n openfaas-fn

When you create a new function for OpenFaaS you can use the CLI which calls the RESTful API of the OpenFaaS API Gateway, or generate a CRD YAML file instead.

  • Here’s an example with Node.js:

Change the --prefix to your own Docker Hub account or private Docker registry.

$ mkdir -p ~/dev/kind-blog/ && \
  cd ~/dev/kind-blog/ && \
  faas-cli template store pull node10-express && \
  faas-cli new --lang node10-express --prefix=alexellis2 openfaas-loves-crds

Our function looks like this:

$ cat openfaas-loves-crds/handler.js

"use strict"

module.exports = (event, context) => {
    let err;
    const result = {
        status: "You said: " + JSON.stringify(event.body)
    };

    context.status(200).succeed(result);
}

Now let’s build and push the Docker image for our function

$ faas-cli up --skip-deploy -f openfaas-loves-crds.yml 

Followed by generating a CRD file to apply via kubectl instead of through the OpenFaaS CLI.

$ faas-cli generate crd -f openfaas-loves-crds.yml
---
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
  name: openfaas-loves-crds
  namespace: openfaas-fn
spec:
  name: openfaas-loves-crds
  image: alexellis2/openfaas-loves-crds:latest

You can then pipe this output into a file to store in Git or pipe it directly to kubectl like this:

$ faas-cli generate crd -f openfaas-loves-crds.yml | kubectl apply -f -
function "openfaas-loves-crds" created

$ faas-cli list -v
Function                      	Image                                   	Invocations    	Replicas
openfaas-loves-crds           	alexellis2/openfaas-loves-crds:latest   	0              	1    

Wrapping up

KinD is not the only way to deploy Kubernetes locally, or the only way to deploy OpenFaaS, but it’s quick and easy and you could even create a bash script to do everything in one shot.

  • If you’d like to keep learning then check out the official workshop which has been followed by hundreds of developers around the world already.
  • Join Slack if you’d like to chat more or contribute to the project.

You can also read the docs to find out how to deploy to GKE, AKS, DigitalOcean Kubernetes, minikube, Docker Swarm and more.


Hidden Dependencies with Microservices | Rancher Labs


One of the great things about microservices is that they allow engineering to decouple software development from application lifecycle. Every microservice:

  • can be written in its own language, be it Go, Java, or Python
  • can be contained and isolated from others
  • can be scaled horizontally across additional nodes and instances
  • is owned by a single team, rather than being a shared responsibility among many teams
  • communicates with other microservices through an API or a message bus
  • must support a common service level agreement to be consumed by other microservices, and conversely, to consume other microservices

These are all very cool features, and most of them help to decouple various software dependencies from each other. But what is the operations point of view? While the cool aspects of microservices bulleted above are great for development teams, they pose some new challenges for DevOps teams. Namely:

Scalability: Traditionally, monolithic applications and systems scaled vertically, with low dynamism in mind. Now, we need horizontal architectures to support massive dynamism – we need infrastructure as code (IaC). If our application is not a monolith, then our infrastructure cannot be, either.

Orchestration: Containers are incredibly dynamic, but they need resources – CPU, memory, storage (SDS), and networking (SDN) – when executed. Operations and DevOps teams need a new piece of software that knows which resources are available, so that it can run tasks fairly (if sharing resources with other teams) and efficiently.

System Discovery: To merge dynamism and orchestration, you need a system discovery service. With microservices and containers, one can still use a configuration management database (CMDB), but with massive dynamism in mind. A good system has to be aware of every container deployment change, able to get or receive information from every container (metadata, labels), and provide a method for making this info available.

There are many tools in the ecosystem one can choose. However, the scope of this article is not to do a deep dive into these tools, but to provide an overview of how to reduce dependencies between microservices and your tooling.

Scenario 1: Rancher + Cattle

Consider the following scenario, where a team is using Rancher. Rancher is facilitating infrastructure as code, and using Cattle for orchestration, and Rancher discovery (metadata, DNS, and API) for managing system discovery. Assume that the DevOps team is familiar with this stack, but must begin building functionality for the application to run. Let’s look at the dependency points they’ll need to consider:

  1. The IaC tool shouldn’t affect the development or deployment of microservices. This layer is responsible for providing, booting, and enabling communication between servers (VMs or bare metal). Microservices need servers to run, but it doesn’t matter how those servers were provided. We should be able to change our IaC method without affecting microservices development or deployment paths.
  2. Microservice deployments are dependent on orchestration. The development path for microservices could be the same, but the deployment path is tightly coupled to the orchestration service, due to deployment syntax and format. There’s no easy way to avoid this dependency, but it can be minimized by using different orchestration templates for specific microservice deployments.
  3. Microservice development could be dependent on system discovery, depending on the development path.
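The coupling in point (2) can be made concrete: the deployment artifact is written in the orchestrator's own syntax, so switching orchestrators means rewriting it. As a sketch, a minimal Kubernetes Deployment for a hypothetical microservice (all names here are illustrative) might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: example/my-service:1.0
```

Moving to Cattle or Swarm means rewriting this artifact in another format, which is why keeping it in its own template, outside the microservice's code, minimizes the coupling.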

Points (1) and (2) are relatively clear, but let’s take a closer look at (3). Due to the massive dynamism in microservices architectures, when one microservice is deployed, it must be able to retrieve its configuration easily. It also needs to know where its peers and collaborating microservices are, and how they communicate (which IP, and via which port, etc). The natural conclusion is that for each microservice, you also define logic coupling it with service discovery. But what happens if you decide to use another orchestrator or tool for system discovery? Consider a second system scenario:

Scenario 2: Rancher + Kubernetes + etcd

In the second scenario, the team is still using Rancher to facilitate Infrastructure as Code. However, this team has instead decided to use Kubernetes for orchestration and system discovery (using etcd). The team would have to create Kubernetes deployment files for microservices (Point 2, above), and refactor all the microservices to talk with Kubernetes instead of Rancher metadata (Point 3). The solution is to decouple services from configuration. This is easier said than done, but here is one possible way to do it:

  • Define a container hierarchy
  • Separate containers into two categories: executors and configurators
  • Create generic images based on functionality
  • Create application images from the generic executor images; similarly, create config-apps images from the generic configurator images
  • Define logic for running multiple images in a collaborative/accumulative mode

Here’s an example with specific products to clarify these steps. In the image above, we’ve used:

  • base: alpine. Built from Alpine Linux, with some extra packages: OpenSSL, curl, and bash. This has the smallest OS footprint with a package manager, is based on uClibc, and is ideal for containerized microservices that don’t need glibc.
  • executor: monit. Built from the base above, with monit installed under /opt/monit. It’s written in C with static dependencies, has a small 2 MB footprint, and some cool features.
  • configurator: confd. Built from the base above, with confd and useful system tools under /opt/tools. It’s written in Golang with static dependencies, and provides an indirect path for system discovery, due to supporting different backends like Rancher, etcd, and Consul.
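To sketch how the configurator decouples a service from any one discovery backend, here is a minimal, hypothetical confd setup (the file contents, watched keys, and app name are illustrative, not taken from the repo):

```shell
# Template resource: tells confd which keys to watch and where to render
cat > /etc/confd/conf.d/app.toml <<'EOF'
[template]
src = "app.conf.tmpl"
dest = "/opt/app/app.conf"
keys = ["/services/kafka/hosts"]
EOF

# The template itself: rendered with values pulled from the chosen backend
cat > /etc/confd/templates/app.conf.tmpl <<'EOF'
kafka.hosts = {{getv "/services/kafka/hosts"}}
EOF

# Render once against etcd; pointing at Consul or Rancher metadata instead
# is a flag change here, not a change to the microservice itself
confd -onetime -backend etcd -node http://127.0.0.1:2379
```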

The main idea is to keep microservices development decoupled from system discovery; this way, they can run on their own, or complemented by another tool that provides dynamic configuration. If you’d like to test out another tool for system discovery (such as etcd, ZooKeeper, or Consul), then all you’ll have to do is develop a new branch for the configurator tool. You won’t need to develop another version of the microservice. By avoiding reconfiguring the microservice itself, you’ll be able to reuse more code, collaborate more easily, and have a more dynamic system. You’ll also get more control, quality, and speed during both development and deployment. To learn more about the hierarchy, images, and builds I’ve used here, you can access this repo on GitHub. Within are additional examples using Kafka and Zookeeper packages; these service containers can run alone (for dev/test use cases), or with Rancher and Kubernetes for dynamic configuration with Cattle or Kubernetes/etcd.

Zookeeper (repo here, 67 MB)

  • With Cattle (here)
  • With Kubernetes (here)

Kafka (repo here, 115 MB)


Orchestration engines and system discovery services can result in “hidden” microservice dependencies if not carefully decoupled from microservices themselves. This decoupling makes it easier to develop, test and deploy microservices across infrastructure stacks without refactoring, and in turn allows users to build wider, more open systems with better service agreements. Raul Sanchez Liebana (@rawmindNet) is a DevOps engineer at Rancher.