GlusterFS Docker | Building an HTML5 Game

GlusterFS
is a scalable, highly available, and distributed network file system
widely used for applications that need shared storage including cloud
computing, media streaming, content delivery networks, and web cluster
solutions. High availability is ensured by the fact that storage data is
redundant, so in case one node fails another will cover it without
service interruption. In this post I’ll show you how to create a
GlusterFS cluster for Docker that you can use to store your containers
data. The storage volume where data resides is replicated twice, so data
will be accessible if at least one Gluster container is working. We’ll
use Rancher for Docker management and orchestration. In order to test
storage availability and reliability I’ll be deploying an Asteroids
game.
GlusterFS-Asteroids-Architecture

Prerequisites

Preparing AWS environment

Before deploying the GlusterFS cluster you
need to satisfy the following requirements in AWS:

  • Create an Access Key to use Rancher AWS provisioning feature. You
    can get an Access Key from the AWS IAM console.
  • Create a Security Group named Gluster with the following rules:
    • Allow 22/tcp, 2376/tcp and 8080/tcp ports from any source,
      needed for Docker machine to provision hosts
    • Allow 500/udp and 4500/udp ports from any source, needed for
      Rancher network
    • Allow 9345/tcp and 9346/tcp ports from any source, needed for UI
      features like graphs, view logs, and execute shell
    • Allow 80/tcp and 443/tcp ports from any source, needed to
      publish the Asteroids game
  • Create a RancherOS instance (look for the RancherOS AMI in
    Community AMIs). Configure it to run Rancher Server by
    defining the following user data, and associate the instance with the
    Gluster Security Group. Once the instance is running you can browse
    to the Rancher UI: http://RANCHER_INSTANCE_PUBLIC_IP:8080/

#!/bin/bash
docker run -d -p 8080:8080 rancher/server:v0.17.1

Preparing Docker images

I have prepared two Docker images that we will use later. This is how I
built them.

The GlusterFS server image

This is the Dockerfile:

FROM ubuntu:14.04

MAINTAINER Manel Martinez <manel@nixelsolutions.com>

RUN apt-get update && \
    apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:gluster/glusterfs-3.5 && \
    apt-get update && \
    apt-get install -y glusterfs-server supervisor

RUN mkdir -p /var/log/supervisor

ENV GLUSTER_VOL ranchervol
ENV GLUSTER_REPLICA 2
ENV GLUSTER_BRICK_PATH /gluster_volume
ENV GLUSTER_PEER **ChangeMe**
ENV DEBUG 0

VOLUME ["/gluster_volume"]

RUN mkdir -p /usr/local/bin
ADD ./bin /usr/local/bin
RUN chmod +x /usr/local/bin/*.sh
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

CMD ["/usr/local/bin/run.sh"]

As you can see, we use 2 replicas for the Gluster volume ranchervol.
All its data will be persisted in the Docker volume /gluster_volume.
Note that we are not exposing any port, because GlusterFS containers
connect through the Rancher network. The run.sh script is as follows:

#!/bin/bash

[ "$DEBUG" == "1" ] && set -x

prepare-gluster.sh &
/usr/bin/supervisord

It invokes another script to prepare the GlusterFS cluster in the
background. This is required because Gluster commands need to be
executed while the Gluster daemon is running. This is the content of the
prepare-gluster.sh script:

#!/bin/bash

set -e

[ "$DEBUG" == "1" ] && set -x

if [ "${GLUSTER_PEER}" == "**ChangeMe**" ]; then
  # This node is not connecting to the cluster yet
  exit 0
fi

echo "=> Waiting for glusterd to start..."
sleep 10

if gluster peer status | grep ${GLUSTER_PEER} >/dev/null; then
  echo "=> This peer is already part of the Gluster cluster, nothing to do..."
  exit 0
fi

echo "=> Probing peer ${GLUSTER_PEER}..."
gluster peer probe ${GLUSTER_PEER}

echo "=> Creating GlusterFS volume ${GLUSTER_VOL}..."
# RANCHER_IP is assumed to hold this container's Rancher-managed IP (in IP/CIDR form)
my_rancher_ip=`echo ${RANCHER_IP} | awk -F/ '{print $1}'`
gluster volume create ${GLUSTER_VOL} replica ${GLUSTER_REPLICA} ${GLUSTER_PEER}:${GLUSTER_BRICK_PATH} ${my_rancher_ip}:${GLUSTER_BRICK_PATH} force

echo "=> Starting GlusterFS volume ${GLUSTER_VOL}..."
gluster volume start ${GLUSTER_VOL}

As we can see, if we don't provide the GLUSTER_PEER environment variable,
the container will only start the GlusterFS daemon and wait for a second
peer container to join the cluster. The second container needs to know
the GLUSTER_PEER address in order to contact it (peer probe) and
create the shared storage volume. This is the supervisor configuration
file, needed to start the GlusterFS daemon:

[supervisord]
nodaemon=true

[program:glusterd]
command=/usr/sbin/glusterd -p /var/run/glusterd.pid

The following commands are required to publish the Docker image:

docker build -t nixel/rancher-glusterfs-server .
docker push nixel/rancher-glusterfs-server

The Asteroids game image

This is the image we will use to publish the Asteroids HTML5 game for
testing Gluster HA capabilities. This container acts as a GlusterFS
client that mounts the shared volume where the following game content is
stored:

  • Static files (HTML, JS, CSS) needed to open the client-side game in
    your browser. An Nginx server will publish them to the Internet.
  • A WebSocket server application used to handle user connections and
    control game logic. A Node.js service will publish this application
    to the Internet.

This is the Dockerfile which defines the image:

FROM ubuntu:14.04

MAINTAINER Manel Martinez <manel@nixelsolutions.com>

RUN apt-get update && \
    apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:gluster/glusterfs-3.5 && \
    apt-get update && \
    apt-get install -y git nodejs nginx supervisor glusterfs-client dnsutils

ENV GLUSTER_VOL ranchervol
ENV GLUSTER_VOL_PATH /mnt/${GLUSTER_VOL}
ENV GLUSTER_PEER **ChangeMe**
ENV DEBUG 0

ENV HTTP_CLIENT_PORT 80
ENV GAME_SERVER_PORT 443
ENV HTTP_DOCUMENTROOT ${GLUSTER_VOL_PATH}/asteroids/documentroot

EXPOSE ${HTTP_CLIENT_PORT}
EXPOSE ${GAME_SERVER_PORT}

RUN mkdir -p /var/log/supervisor ${GLUSTER_VOL_PATH}
WORKDIR ${GLUSTER_VOL_PATH}

RUN mkdir -p /usr/local/bin
ADD ./bin /usr/local/bin
RUN chmod +x /usr/local/bin/*.sh
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ADD ./etc/nginx/sites-available/asteroids /etc/nginx/sites-available/asteroids

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm -f /etc/nginx/sites-enabled/default
RUN ln -fs /etc/nginx/sites-available/asteroids /etc/nginx/sites-enabled/asteroids
RUN perl -p -i -e "s/HTTP_CLIENT_PORT/${HTTP_CLIENT_PORT}/g" /etc/nginx/sites-enabled/asteroids
RUN HTTP_ESCAPED_DOCROOT=`echo ${HTTP_DOCUMENTROOT} | sed "s/\//\\\\\//g"` && perl -p -i -e "s/HTTP_DOCUMENTROOT/${HTTP_ESCAPED_DOCROOT}/g" /etc/nginx/sites-enabled/asteroids

RUN perl -p -i -e "s/GAME_SERVER_PORT/${GAME_SERVER_PORT}/g" /etc/supervisor/conf.d/supervisord.conf
RUN HTTP_ESCAPED_DOCROOT=`echo ${HTTP_DOCUMENTROOT} | sed "s/\//\\\\\//g"` && perl -p -i -e "s/HTTP_DOCUMENTROOT/${HTTP_ESCAPED_DOCROOT}/g" /etc/supervisor/conf.d/supervisord.conf

CMD ["/usr/local/bin/run.sh"]

And this is the run.sh script:

#!/bin/bash

set -e

[ "$DEBUG" == "1" ] && set -x && set +e

if [ "${GLUSTER_PEER}" == "**ChangeMe**" ]; then
  echo "ERROR: You did not specify the GLUSTER_PEER environment variable - Exiting..."
  exit 0
fi

ALIVE=0
for PEER in `echo "${GLUSTER_PEER}" | sed "s/,/ /g"`; do
  echo "=> Checking if I can reach GlusterFS node ${PEER} ..."
  if ping -c 10 ${PEER} >/dev/null 2>&1; then
    echo "=> GlusterFS node ${PEER} is alive"
    ALIVE=1
    break
  else
    echo "*** Could not reach server ${PEER} ..."
  fi
done

if [ "$ALIVE" == 0 ]; then
  echo "ERROR: could not contact any GlusterFS node from this list: ${GLUSTER_PEER} - Exiting..."
  exit 1
fi

echo "=> Mounting GlusterFS volume ${GLUSTER_VOL} from GlusterFS node ${PEER} ..."
mount -t glusterfs ${PEER}:/${GLUSTER_VOL} ${GLUSTER_VOL_PATH}

echo "=> Setting up asteroids game..."
if [ ! -d ${HTTP_DOCUMENTROOT} ]; then
  git clone https://github.com/BonsaiDen/NodeGame-Shooter.git ${HTTP_DOCUMENTROOT}
fi

my_public_ip=`dig -4 @ns1.google.com -t txt o-o.myaddr.l.google.com +short | sed "s/\"//g"`
perl -p -i -e "s/HOST = '.*'/HOST = '${my_public_ip}'/g" ${HTTP_DOCUMENTROOT}/client/config.js
perl -p -i -e "s/PORT = .*;/PORT = ${GAME_SERVER_PORT};/g" ${HTTP_DOCUMENTROOT}/client/config.js

/usr/bin/supervisord

As you can see, the GLUSTER_PEER environment variable tells this
container which GlusterFS nodes serve the ranchervol storage. Although a
GlusterFS client does not need to know about all cluster nodes, this is
useful so the Asteroids container can mount the volume as long as at
least one GlusterFS container is alive. We will test this HA feature
later. In this case we are exposing ports 80 (Nginx) and 443 (Node.js
WebSocket server) so we can open the game in our browser. This is the
Nginx configuration file:

server {
listen HTTP_CLIENT_PORT;
location / {
root HTTP_DOCUMENTROOT/client/;
}
}

And the following supervisord configuration is required to run Nginx and
Node.js:

[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx

[program:nodejs]
command=/usr/bin/nodejs HTTP_DOCUMENTROOT/server/server.js GAME_SERVER_PORT

Finally, the run.sh script will download the Asteroids source code and
save it on GlusterFS shared volume. The last step is to replace the
required parameters on configuration files to run Nginx and Node.js
server application. The following commands are needed to publish the
Docker image:

docker build -t nixel/rancher-glusterfs-client .
docker push nixel/rancher-glusterfs-client

Creating Docker hosts

Now we need to create three Docker hosts, two of them used to run
GlusterFS server containers, and the third to publish the Asteroids
game.
Create_Amazon_Instance
In the Rancher UI, click the + Add Host button and choose the Amazon EC2
provider. You need to specify, at least, the following information:

  • Container names
  • Amazon Access Key and Secret Key that you got before.
  • EC2 Region, Zone and VPC/Subnet ID. Be sure to choose the same
    region, zone and VPC/subnet ID where Rancher Server is deployed.
  • Type the Security Group name that we created before: Gluster.

Repeat this step three times to create gluster01, gluster02, and
asteroids hosts. Gluster hosts

Adding GlusterFS server containers

Now you are ready to deploy your GlusterFS cluster. First, click +
Add Container
button on gluster01 host and enter the following
information:

  • Name: gluster01
  • Image: nixel/rancher-glusterfs-server:latest

Expand Advanced Options and follow these steps:

  • Volumes section – Add this volume:
    /gluster_volume:/gluster_volume
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox

Create_gluster_server_container_1
Now wait for the gluster01 container to be created and copy its Rancher
IP address; you will need it shortly. Then click the + Add Container button
on the gluster02 host to create the second GlusterFS server container
with the following configuration:

  • Name: gluster02
  • Image: nixel/rancher-glusterfs-server:latest

Expand Advanced Options and follow these steps:

  • Command section – Add an Environment Variable named GLUSTER_PEER
    whose value is the gluster01 container IP. In my case it is
    10.42.46.31
  • Volumes section – Add this volume:
    /gluster_volume:/gluster_volume
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox

Gluster02_container_env_vars
Now wait for the gluster02 container to be created and open its menu,
then click the View Logs option.
Gluster02_container_logs_menu
You will see the following messages at the bottom of the log screen,
confirming that the shared volume was successfully created.
Gluster02_container_logs
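If you want to double-check the cluster state from the command line, a quick way (assuming the container names used above) is to run the Gluster CLI inside one of the server containers:

docker exec -it gluster02 gluster peer status
docker exec -it gluster02 gluster volume info ranchervol
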

Adding Asteroids container

Now it is time to create our GlusterFS client container, which will
publish the Asteroids game to the Internet. Click + Add Container on the
asteroids host and enter the following container information:

  • Name: asteroids
  • Image: nixel/rancher-glusterfs-client:latest
  • Port Map: map 80 (public) port to 80 (container) TCP
    port
  • Port Map: map 443 (public) port to 443 (container) TCP
    port

Expand Advanced Options and follow these steps:

  • Command section – Add an Environment Variable named
    GLUSTER_PEER whose value is a comma-separated list of the gluster01
    and gluster02 container IPs. In my case I’m typing this:
    10.42.46.31,10.42.235.105
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox

Note that we are not configuring any container volume, because all data
is stored in GlusterFS cluster.
asteroids_container_port_mapping
Wait for the asteroids container to be created and view its logs. You will
find something like this at the top:
asteroids_container_top_logs
You will also see how Nginx server and Node.js application are started
at the bottom:
asteroids_container_bottom_logs
At this point your Rancher environment is up and running.
all_containers_gluster

Testing GlusterFS HA capabilities

Asteroids_game
It is time to play and test GlusterFS HA capabilities. What we will do
now is stop one GlusterFS container and check that the game does not
suffer any downtime. Browse to http://ASTEROIDS_HOST_PUBLIC_IP to access
the Asteroids game, enter your name, and try to explode some asteroids.
Go to the Rancher UI and stop the gluster02 container, then open a new
browser tab and navigate to the game again. The game is still
accessible. You can start the gluster02 container, then stop the
gluster01 container, and try again. You are still able to play.
Finally, keep gluster01 stopped, restart the asteroids container, and
wait for it to start. As you can see, as long as at least one GlusterFS
server container is running you are able to play. Finally, you may want
to stop both the gluster01 and gluster02 containers to check how the
game becomes unavailable, because its public content is no longer
reachable. To recover the service, start the gluster01 and/or gluster02
containers again.
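If you also want to verify the replication at the file level, one possible check (run each command on the host where that container lives, with both Gluster containers up; paths come from the Dockerfiles above) is to write a file through the client's mount and confirm it shows up on both bricks:

docker exec asteroids sh -c 'echo ok > /mnt/ranchervol/ha-test'
docker exec gluster01 ls /gluster_volume
docker exec gluster02 ls /gluster_volume
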

Conclusion

Shared storage is a required feature when you have to deploy software
that needs to share information across all nodes. In this post you have
seen how to easily deploy a highly available shared storage solution for
Rancher based on GlusterFS Docker images. By using an Asteroids game, you
have checked that storage remains available when at least one GlusterFS
container is running. In future posts we will combine this shared
storage solution with the Rancher Load Balancing feature, added in the
0.16 release, so you will see how to build scalable, distributable, and
highly available web server solutions ready for production use. To learn
more about Rancher, please join us for our next online meetup, where
we’ll be demonstrating some of these features and answering your
questions. Manel Martinez is a Linux systems
engineer with experience in the design and management of scalable,
distributable and highly available open source web infrastructures based
on products like KVM, Docker, Apache, Nginx, Tomcat, JBoss, RabbitMQ,
HAProxy, MySQL and XtraDB. He lives in Spain, and you can find him on
Twitter @manel_martinezg.

Source

Deploy Python Application | Open Source Load Balancer

Recently, Rancher provided a disk image to be used to deploy RancherOS v0.3 on
Google Compute Engine (GCE). The image supports RancherOS cloud config
functionality. Additionally, it merges the SSH keys from the project,
instance and cloud-config and adds them to the rancher user.

Building The Setup

In this post, I will cover how to use the RancherOS image on GCE to set
up a MongoDB Replica Set. Additionally, I will cover how to use one of
the recent features of the Rancher platform: the Load Balancer. In
order to make the setup more realistic, I created a simple Python
application that counts the number of hits on the website and saves
this number in a MongoDB database. The setup will include multiple
servers on different cloud hosting providers:

  1. three (g1-small) servers on GCE to deploy the MongoDB replicaset.
  2. one (n1-standard-1) server on GCE to install the Rancher platform.
  3. one server on Digital Ocean to hold the application containers.
  4. one server on Digital Ocean which will be used as a Load Balancer on
    Rancher platform.

RancherOS On GCE

We will import the RancherOS disk image into GCE to be used later in the
setup. To import the image you need to create a Google Cloud Storage
bucket to which we will upload the RancherOS disk image. To create the
storage bucket, you can use the web UI:
1
Or you can use the gsutil tool, which lets you access the Google Cloud
Storage service from the command line, but first you need to
authenticate gsutil to be able to create a new storage bucket. Now you
need to upload the image to the newly created bucket:
2
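For reference, a minimal sketch of these two steps with gsutil (the bucket and file names below are placeholders, not the ones from this setup):

gsutil mb gs://rancheros-images
gsutil cp rancheros-v0.3.0.tar.gz gs://rancheros-images/
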
The only thing left to do is to create the RancherOS image which will be
used later. Click on create new image under the Images section:
3

Start RancherOS On GCE

After creating the RancherOS image, create three machines that will be
used to set up a MongoDB replica set and select RancherOS as their image:
5
For the sake of this setup I created a Networking zone with one rule
that opens every TCP/UDP port on the server. Obviously, you shouldn’t do
that on a production server.

Docker Images

Now that the RancherOS image is ready, we need to get the Docker images
ready too. We will be using a Docker image for MongoDB and one for the
Python application. I will be using the official
MongoDB
image, which will
simply run the MongoDB server. However, to run the container as part of
the replica set you need to add the --replSet option to the run
command. For the Python application I will be using a Docker image
which will pull the Flask application from GitHub and run it using
Gunicorn. Of course, if you want the setup to feel more like “production”
you will need to add more tweaks to the Docker image, but this
example is good enough to give you a good idea about the setup. The
Dockerfile of the Python application:

FROM ubuntu:latest
MAINTAINER Hussein Galal

RUN apt-get -q update
RUN apt-get install -yqq python python-dev python-distribute python-pip python-virtualenv
RUN apt-get install -yqq build-essential git

RUN mkdir -p /var/www/app
ADD run.sh /tmp/run.sh
ADD gunicorn.py /var/www/app/
RUN chmod u+x /tmp/run.sh
EXPOSE 80
WORKDIR /var/www/app/
ENTRYPOINT /tmp/run.sh

The run.sh script will pull the Flask application from GitHub and run
Gunicorn:

git clone https://github.com/galal-hussein/Flask-example-app.git ../app

virtualenv venv
./venv/bin/pip install gunicorn
./venv/bin/pip install -r requirements.txt
./venv/bin/gunicorn -c gunicorn.py app:app

Let’s now build and push this image to Docker hub to be used later when
we deploy the applications:

~# docker build -t husseingalal/flask_app .
~# docker push husseingalal/flask_app

The Flask application is very simple: it displays the number of
pageviews and the OS hostname, just to make sure that the load balancer
is working fine. Here is a small snippet from the application:

@app.route('/')
def cntr():
    mongo.db.rancher.update({"Project" : "Rancher"}, {"$inc" : {"pageviews" : 1}}, True)
    posts = mongo.db.rancher.find({"Project":"Rancher"})[0]
    return render_template('index.html', posts=posts, hostname=socket.gethostname())

Rancher Platform

Rancher is a container management platform that can connect containers
across different hosts and provides a set of features including load
balancing, monitoring, logging, and integration with existing user
directories (e.g., GitHub) for identity management. To deploy the Rancher
platform on a machine, log in to the machine and run this command:

~# docker run -d -p 8080:8080 rancher/server

This command will create a Docker container running the Rancher server, which
listens on port 8080 and proxies that port to port 8080 on the host. After
running that command, wait a few minutes until the server is ready, and
then log in to the server:
6
The next step is to register the machines with the Rancher platform:
click on “Add Host” to register each machine.
On Rancher platform you have the option to use the Docker Machine
integration to directly create Digital Ocean, or Amazon EC2 machines, or
you can just copy the registering command to any server that has Docker
installed:
7
After running the command on the 3 MongoDB servers, you will see
something like this:
8

MongoDB Replica Set

Replication ensures that your data will exist on different servers to
increase availability. In MongoDB you set up replication by creating a
replica set. A replica set is a group of MongoDB servers: a primary
server and multiple secondary servers that keep identical copies of the
primary server’s data. MongoDB achieves this by keeping a log of
operations, called the oplog, that contains the write operations. The
secondary servers also maintain their own oplog; they fetch the
operations from the member they are syncing from. I’ll create 3 MongoDB
containers, each on a different host. Each container will run with the
--replSet option, which specifies a name for the replica set:
9
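For illustration, the equivalent docker run command for one of these containers could look like this (the container name is arbitrary; the replica set name matches the rs.initiate() config shown below):

docker run -d --name mongo1 mongo --replSet rancher
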
Create the rest of the MongoDB containers with the same option to be
part of the replication group. After creating the 3 containers you
should initiate the replication by connecting to a MongoDB instance
and running rs.initiate(config) in the MongoDB JavaScript shell:

> config = {
"_id" : "rancher",
"members" : [
{"_id" : 0, "host" : "<ip-of-the-1st-container>:27017"},
{"_id" : 1, "host" : "<ip-of-the-2nd-container>:27017"},
{"_id" : 2, "host" : "<ip-of-the-3rd-container>:27017"}
]
}
> rs.initiate(config)

rancher:PRIMARY>

That means that this container is the primary server for the MongoDB
replica set.

Deploy The App Containers

Now let’s deploy the application container that we created earlier.
We’ll create two app containers, which we will load balance between in
the next section. To differentiate between the app containers, I will
specify the hostname of each container as an option when we create
the container:
10
The second container will be created with the same options as the first
one, but we will map port 8001 on the host to port 80 in the container
so we can test the container separately.
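As a sketch, the equivalent docker run commands for the two app containers might look like this (the hostnames are illustrative, and MongoDB connectivity is assumed to be handled by the Rancher managed network):

docker run -d --hostname app1 -p 80:80 husseingalal/flask_app
docker run -d --hostname app2 -p 8001:80 husseingalal/flask_app
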

Rancher’s Load Balancer

The final step is to create a load balancer to distribute requests
between the two app containers. Rancher’s Load Balancer distributes
network traffic across a number of containers. For each host that is
selected to be a Load Balancer, a Load Balancer Agent system container
is started and HAProxy is installed in it. For more information, see the
documentation on Rancher’s Load
Balancer.

To create a Load Balancer using the Rancher platform, make sure to select
the application containers, and select the ports you will be receiving
and sending requests on. Also note that we used the round robin algorithm
to distribute the requests between the two app containers; the other
algorithms available are leastconn and source.
11
12
You can also configure Health Checks to monitor the availability of
the application containers; the health checks can use
GET, HEAD, POST, etc. In this example, I created an endpoint called
/healthcheck that will be used to check whether the application server is
up and running:
13
Now let’s test the setup by accessing the URL of the Load Balancer:
14
15
You can also check the /healthcheck endpoint to verify that the app is up
and running:

$ curl http://45.55.210.170/healthcheck
200 OK From app1

$ curl http://45.55.210.170/healthcheck
200 OK From app2

Conclusion

RancherOS v0.3.0 can now be deployed on Google Compute Engine (GCE).
Using RancherOS and the Rancher platform you can put together a
production environment that uses the most recent features of the Rancher
platform, like load balancing, which allows devs and ops to deploy
large-scale applications. Both Rancher and RancherOS are open source
tools, and can be downloaded from GitHub. If you’re
interested in learning more, join our next Online Meetup to hear from
some of our developers, and see the latest features going into the
Rancher and RancherOS projects. You can register below:

Source

Self Driving Clusters – Managed Autoscaling Kubernetes on AWS

Giant Swarm provides managed Kubernetes clusters for our customers, which are operated 24/7 by our operations team. Each customer has their own private control plane which they can use to create as many tenant clusters as they require.

This results in a large number of tenant clusters and control planes that we need to manage and keep up to date. So automation is essential and for that, we leverage Kubernetes itself. Our microservices and operators (custom controllers) run in a Kubernetes cluster.

Managed Components

From the beginning, our tenant clusters have come with managed components like Nginx Ingress Controller and Calico and DNS. Our customers have been asking us to manage more components for them. This allows them to focus on their applications, which is what they really care about. We’re calling this the Managed Cloud Native Stack and we’re hard at work adding more components to our app catalog.

We’re delighted to announce the latest managed component for AWS clusters is the upstream cluster-autoscaler.

Scaling – how it worked before

Our control plane gives customers an API with which they can create clusters and scale them. This makes it easy to automate provisioning clusters. Many customers were also using our API for scaling. One customer has even written an operator to do this.

The first step when adding components from the community is our Solutions Engineers work with our customers to get it installed in their clusters. This is documented and can then be used as a tutorial by other customers. Once we see overall demand and there is a stable solution we add the component to our official app catalog and provide 24/7 support for it.

We have already included metrics-server as an essential component in our clusters for a while now. This is a requirement for the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler. Many of our customers already use these to autoscale their pods. So this is something else they don’t have to worry about in their clusters. Now both HPA and VPA can be used in conjunction with the cluster-autoscaler to autoscale both pods and nodes.
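For example, once metrics-server is available, a Deployment can be autoscaled with a standard Horizontal Pod Autoscaler; the deployment name and thresholds below are illustrative:

kubectl autoscale deployment my-app --cpu-percent=80 --min=2 --max=10
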

Autoscaling

When adding the autoscaler to our app catalog we realized that it would be nice to install it as a default app. Why not make all clusters ship with autoscaling enabled by default? So we decided to install it as an essential into all AWS clusters. Users can define the cluster size with a minimum and maximum number of nodes. If both are identical the cluster will not be autoscaled.

Dynamic clusters

With node autoscaling your clusters become more dynamic and are better able to cope with changes in load. If pods cannot be scheduled due to insufficient resources, your clusters will be scaled up without requiring manual intervention. If nodes are underutilized, your cluster will be scaled down and your costs will be reduced.

App Catalogs

Soon we will be adding more optional apps and an incubation catalog. This lets our customers easily try out the latest new components from the community. Once these components are ready they will graduate to our main app catalog, and be supported 24/7 by our operations team.

With Giant Swarm’s immutable infrastructure the orchestration of your clusters becomes super simple. You can easily create clusters, scale them up and down and upgrade to the latest Kubernetes version. The immutability of the whole infrastructure makes this fast and resilient and always brings the cluster into a known and tested state. Request your free trial of the Giant Swarm Infrastructure here.

Source

Giant Swarm’s Top Cloud Native Posts of 2018

Our team is hard at work perfecting the ideal Kubernetes stack for you. Still, we find time to share knowledge about our experiences in the containerized world. With 2018 a near-distant memory, we’d like to take a moment to reflect on the cloud native topics that you all loved so much.

Skim through our top 10 list below and catch up on an article or two that you may have missed, or would like to revisit.

Source

Running our own ELK stack with Docker and Rancher

 


At Rancher Labs we generate a lot of logs in our internal environments. As
we conduct more and more testing on these environments we have found the
need to centrally aggregate the logs from each environment. We decided
to use Rancher to build and run a scalable
ELK stack to manage all of these logs. For those that are unfamiliar
with the ELK stack, it is made up of Elasticsearch, Logstash and Kibana.
Logstash provides a pipeline for shipping logs from various sources and
input types, combining, massaging and moving them into Elasticsearch, or
several other stores. It is a really powerful tool in the logging
arsenal. Elasticsearch is a document database that is really good at
search. It can take our processed output from Logstash, analyze it, and
provide an interface to query all of our logging data. Together with
Kibana, a powerful visualization tool that consumes Elasticsearch data,
you have amazing ability to gain insights from your logging. Previously,
we have been using Elastic’s Found product and have been very impressed.
One of the interesting things we realized while using Found for
Elasticsearch is that the ELK stack really is made up of discrete parts.
Each part of the stack has its own needs and considerations. Found
provided us Elasticsearch and Kibana. There was no Logstash end point
provided, though it was sufficiently documented how to use Found with
Logstash. So, we have always had to run our own Logstash pipeline.
Logstash

Our Logstash implementation includes three tiers, one each for
collection, queueing and processing:

  • Collection tier – responsible for providing remote endpoints for
    logging inputs, like Syslog, Gelf, and Logstash. Once it receives
    these logs it places them quickly onto a Redis queue.
  • Queuing tier – provided by Redis, a very fast in-memory database.
    It acts as a buffer between the collection and processing tiers.
  • Processing tier – removes messages from the queue, and applies
    filter plugins to the logs that manipulate the data to a desired
    format. This tier does the heavy lifting and is often a bottleneck
    in a log pipeline. Once it processes the data it forwards it along
    to the final destination, which is Elasticsearch.

Logstash Pipeline
Each Logstash container has a configuration sidekick that provides
configuration through a shared volume.
By breaking the stack into these
tiers, you can scale and adapt each part without major impact to the
other parts of the stack. As a user, you can also scale and adjust each
tier to suit your needs. A good read on how to scale Logstash can be
found on Elastic’s web page here: Deploying and Scaling Logstash.
To build the Logstash stack we started as we usually do. In general, we
try to reuse as much as possible from the community. Looking at the
DockerHub registry, we found there is already an official Logstash image
maintained by Docker. The real magic is in configuration of Logstash at
each of the tiers. To achieve maximum flexibility with configuration, we
built a confd container that consumes KV, or Key Value, data for its
configuration values. The logstash configurations are the most volatile,
and unique to an organization as they provide the interfaces for the
collection, indexing, and shipping of the logs. Each organization is
going to have different processing needs, formatting, tagging etc. To
achieve maximum flexibility we leveraged the confd tool and Rancher
sidekick containers. The sidekick creates an atomic scheduling unit
within Rancher. In this case, our configuration container exposes the
configuration files to our Logstash container through volume sharing. In
doing this, there is no modification needed to the default Docker
Logstash image. How is that for reuse!

Elasticsearch

Elasticsearch
is built out in three tiers as well. When reading the production
deployment recommendations, it discusses having nodes that are dedicated
masters, data nodes and client nodes. We followed the same deployment
paradigm with this application as the logstash implementation. We deploy
each role as a service. Each service is composed of an official image
and paired with a Confd sidekick container to provide configuration. It
ends up looking like this: Elastic Search
Tier
Each tier in the Elasticsearch stack has a confd container providing
configurations through a shared volume. These containers are scheduled
together inside of Rancher.
In the current configuration, we use the
master service to provide node discovery. When using the Rancher private
network, we disable multicast and enable unicast. Since every node in
the cluster points to the master they can talk to one another. The
Rancher network also allows the nodes to talk to one another. As a part
of our stack, we also use the Kopf tool to quickly visualize our
clusters health and perform other maintenance tasks. Once you bring up
the stack you will see that you can use Kopf to see that all the nodes
came up in the cluster.

Kibana 4

Finally, in order to view all of
these logs and make sense of the data, we bring up Kibana to complete
our ELK stack. We have chosen to go with Kibana 4 in this stack. Kibana
4 is launched with an Nginx container to provide basic auth behind a
Rancher load balancer. The Kibana 4 instance is the Official image which
is hosted on DockerHub. The Kibana 4 image talks to the Elasticsearch
client nodes. So now we have a full ELK stack for taking logs and
shipping them to Elasticsearch for visualization in Kibana. The next
step is getting the logs from the hosts running your application.
Bringing up the Stack on Rancher

So now you have the backstory on
how we came up with our ELK stack configuration. Here are instructions
to run the ELK stack on Rancher. This assumes that you already have a
Rancher environment running with at least one compute node. We will also
be using the Rancher compose CLI tool. Rancher-compose can be found on
GitHub here
rancher/rancher-compose.
You will need API keys from your Rancher deployment. In the instructions
below, we will bring up each component of the ELK stack, as its own
stack in Rancher. A stack in Rancher is a collection of services that
make up an application, and are defined by a Docker Compose file. In
this example, we will build the stacks in the same environment and use
cross stack linking to connect services. Cross stack linking allows
services in different stacks to discover each other through a DNS name.

  1. Clone our compose template repository: git clone
    https://github.com/rancher/compose-templates.git
  2. First lets bring up the Elasticsearch cluster.
    a. cd compose-templates/elasticsearch
    b. rancher-compose -p es up (Other services assume es as the
    elasticsearch stack name) This will bring up four services.
    – elasticsearch-masters
    – elasticsearch-datanodes
    – elasticsearch-clients
    – kopf
    c. Once Kopf is up, click on the container in the Rancher UI, and
    get the IP of the node it is running on.
    d. Open a new tab in your browser and go to the IP. You should see
    one datanode on the page.
  3. Now lets bring up our Logstash tier.
    a. cd ../logstash
    b. rancher-compose -p logstash up
    c. This will bring up the following services
    – Redis
    – logstash-collector
    – logstash-indexer
    d. At this point, you can point your applications at
    logstash://host:5000.
  4. (Optional) Install logspout on your nodes
    a. cd ../logspout
    b. rancher-compose -p logspout up
    c. This will bring up a logspout container on every node in your
    Rancher environment. Logs will start moving through the pipeline
    into Elasticsearch.
  5. Finally, lets bring up Kibana 4
    a. cd ../kibana
    b. rancher-compose -p kibana up
    c. This will bring up the following services
    – kibana-vip
    – nginx-proxy
    – kibana4
    d. Click the container in the kibana-vip service in the Rancher UI.
    Visit the host ip in a separate browser tab. You will be
    directed to the Kibana 4 landing page to select your index.

Now that you have a fully functioning ELK stack on Rancher, you can
start sending your logs through the Logstash collector. By default the
collector is listening for Logstash inputs on UDP port 5000. If you are
running applications outside of Rancher, you can simply point them to
your Logstash endpoint. If your application runs on Rancher you can use
the optional Logspout-logstash service above. If your services run
outside of Rancher, you can configure your Logstash to use Gelf, and use
the Docker log driver. Alternatively, you could setup a Syslog listener,
or any number of supported Logstash input plugins.
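For example, a Docker host can ship container logs straight to the collection tier with the GELF log driver, assuming a matching Gelf input has been configured on the collector (the address and port below are placeholders):

docker run -d --log-driver=gelf --log-opt gelf-address=udp://<collector-host>:12201 nginx
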
Conclusion

Running the ELK stack on Rancher in this way provides a lot of
flexibility to build and scale to meet any organization’s needs. It
also creates a simple way to introduce Rancher into your environment
piece by piece. As an operations team, you could quickly spin up
pipelines from existing applications to existing Elasticsearch clusters.
Using Rancher you can deploy applications following container best
practices by using sidekick containers to customize standard containers.
By scheduling these containers as a single unit, you can separate your
application out into separate concerns. On Wednesday, September 16th,
we hosted an online meetup focused on container logging, where I
demonstrated how to build and deploy your own ELK stack. If you’d like
to view a recording of this you can view it
here.
If you’d like to learn more about using Rancher, please join us for an
upcoming online meetup, or join our beta
program
or request a discussion with one
of our engineers.

Source

Adding Linux Dash As A System Service

Ivan Mikushin discussed adding system services to RancherOS using Docker Compose. Today I want to show you an example of how to deploy Linux Dash as a system service. Linux Dash is a simple, low-overhead, web-based monitoring tool for Linux; you can read more about Linux Dash here. In this post I will add Linux Dash as a system service to RancherOS version 0.3.0, which allows users to add system services using the rancherctl command. The Ubuntu console is the only service that is currently available in RancherOS.

Creating Linux Dash Docker Image

I built a 32MB Node.js busybox image on top of the hwestphal/nodebox image, with linux-dash installed, which will run on port 80 by default. The Dockerfile of this image:

FROM hwestphal/nodebox
MAINTAINER Hussein Galal

RUN opkg-install unzip
RUN curl -k -L -o master.zip https://github.com/afaqurk/linux-dash/archive/master.zip
RUN unzip master.zip
WORKDIR linux-dash-master
RUN npm install

ENTRYPOINT ["node","server"]

The image needs to be available on Docker Hub to be pulled later by RancherOS, so we should build and push the image:

# docker build -t husseingalal/busydash busydash/
# docker push husseingalal/busydash

Starting Linux Dash As A System Service

Linux Dash can be started as a system service in RancherOS using rancherctl service enable <system-service>, where <system-service> is the location of the YAML file that contains the options for starting the system service in RancherOS. linux-dash.yml

dash:
  image: husseingalal/busydash
  privileged: true
  links:
  - network
  labels:
  - io.rancher.os.scope=system
  restart: always
  pid: host
  ipc: host
  net: host

To start the previous configuration as a system service, run the following command on RancherOS:

~# rancherctl service enable /home/rancher/linux-dash/linux-dash.yml

By using this command, the service will also be added to the rancher.yml file and set to enabled, but a reboot needs to occur in order for it to take effect. After rebooting, you can see that the dash service has been started using the rancherctl command:

rancher@xxx:~$ sudo rancherctl service list
enabled  ubuntu-console
enabled  /home/rancher/linux-dash/linux-dash.yml

And you can see that the Dash container has been started as a system Docker container:

rancher@xxx:~$ sudo system-docker ps
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS               NAMES

447ada85ca78        rancher/ubuntuconsole:v0.3.0   "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        console

fb7ce6f074e6        husseingalal/busydash:latest   "node server"          About an hour ago   Up About an hour                        dash

b7b1c734776b        userdocker:latest              "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        userdocker

2990a5db9042        udev:latest                    "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        udev

935486c2bf83        syslog:latest                  "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        syslog

And to test the web UI, just enter the following URL in your browser: http://<server-ip>

Conclusion

In version 0.3.0 of RancherOS, you have the ability to create and manage your own RancherOS system services. A system service in RancherOS is a Docker container that starts at OS startup and can be defined in Docker Compose format. For more information about system services, see the RancherOS documentation. You can find instructions on how to download RancherOS on GitHub.

Source

Using Compose to go from Docker to Kubernetes

Feb 6, 2019

For anyone using containers, Docker is a wonderful development platform, and Kubernetes is an equally wonderful production platform. But how do we go from one to the other? Specifically, if we use Compose to describe our development environment, how do we transform our Compose files into Kubernetes resources?

This is a translation of an article initially published in French. So feel free to read the French version if you prefer!

Before we dive in, I’d like to offer a bit of advertising space to the primary sponsor of this blog, i.e. myself: ☺

In February, I will deliver container training in Canada! There will be getting started with containers and getting started with orchestration with Kubernetes. Both sessions will be offered in Montréal in English, and in Québec in French. If you know someone who might be interested … I’d love if you could let them know! Thanks ♥

What are we trying to solve?

When getting started with containers, I usually suggest following this plan:

  • write a Dockerfile for one service, i.e. one component of your application, so that this service can run in a container;
  • run the other services of that app in containers as well, by writing more Dockerfiles or using pre-built images;
  • write a Compose file for the entire app;
  • … stop.

When you reach this stage, you’re already leveraging containers and benefiting from the work you’ve done so far, because at this point, anyone (with Docker installed on their machine) can build and run the app with just three commands:
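The exact commands depend on the project; a typical sequence (with a hypothetical repository URL and directory) looks like this:

git clone https://github.com/myorg/myapp.git
cd myapp
docker-compose up
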

Then, we can add a bunch of extra stuff: continuous integration (CI), continuous deployment (CD) to pre-production …

And then, one day, we want to go to production with these containers. And, within many organizations, “production with containers” means Kubernetes. Sure, we could debate about the respective merits of Mesos, Nomad, Swarm, etc., but here, I want to pretend that we chose Kubernetes (or that someone chose it for us), for better or for worse.

So here we are! How do we get from our Compose files to Kubernetes resources?

At first, it looks like this should be easy: Compose is using YAML files, and so is Kubernetes.

I see lots of YAML

Original image by Jake Likes Onions, remixed by @bibryam.

There is just one thing: the YAML files used by Compose and the ones used by Kubernetes have nothing in common (except being both YAML). Even worse: some concepts have totally different meanings! For instance, when using Docker Compose, a service is a set of identical containers (sometimes placed behind a load balancer), whereas with Kubernetes, a service is a way to access a bunch of resources (for instance, containers) that don’t have a stable network address. When there are multiple resources behind a single service, that service then acts as a load balancer. Yes, these different definitions are confusing; yes, I wish the authors of Compose and Kubernetes had been able to agree on a common lingo; but meanwhile, we have to deal with it.

Since we can’t wave a magic wand to translate our YAML files, what should we do?

I’m going to describe three methods, each with its own pros and cons.

100% Docker

If we’re using a recent version of Docker Desktop (Docker Windows or Docker Mac), we can deploy a Compose file on Kubernetes with the following method:

  1. In Docker Desktop’s preferences panel, select “Kubernetes” as our orchestrator. (If it was set to “Swarm” before, this might take a minute or two so that the Kubernetes components can start.)
  2. Deploy our app with the following command:
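The stack name below is a placeholder, and the Compose file is assumed to be in the current directory:

docker stack deploy --compose-file docker-compose.yml myapp
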

That’s all, folks!

In simple scenarios, this will work out of the box: Docker translates the Compose file into Kubernetes resources (Deployment, Service, etc.) and we won’t have to maintain extra files.

But there is a catch: this will run the app on the Kubernetes cluster running within Docker Desktop on our machine. How can we change that, so that the app runs on a production Kubernetes cluster?

If we’re using Docker Enterprise Edition, there is an easy solution: UCP (Universal Control Plane) can do the same thing, but while targeting a Docker EE cluster. As a reminder, Docker EE can run on the same cluster, side-by-side, applications managed by Kubernetes, and applications managed by Swarm. When we deploy an app by providing a Compose file, we pick which orchestrator we want to use, and that’s it.

(The UCP documentation explains this more in depth. We can also read this article on the Docker blog.)

This method is fantastic if we’re already using Docker Enterprise Edition (or plan to), because in addition to being the simplest option, it’s also the most robust, since we’ll benefit from Docker Inc’s support if needed.

Alright, but for the rest of us who do not use Docker EE, what do we do?

Use some tools

There are a few tools out there to translate a Compose file into Kubernetes resources. Let’s spend some time on Kompose, because it’s (in my humble opinion) the most complete at the moment, and the one with the best documentation.

We can use Kompose in two different ways: by working directly with our Compose files, or by translating them into Kubernetes YAML files. In the latter case, we deploy these files with kubectl, the Kubernetes CLI. (Technically, we don’t have to use the CLI; we could use these YAML files with other tools like WeaveWorks Flux or Gitkube, but let’s keep this simple!)

If we opt to work directly with our Compose files, all we have to do is use kompose instead of docker-compose for most commands. In practice, we’ll start our app with kompose up (instead of docker-compose up), for instance.

This method is particularly suitable if we’re working with a large number of apps, for which we already have a bunch of Compose files, and we don’t want to maintain a second set of files. It’s also suitable if our Compose files evolve quickly, and we want to maintain parity between our Compose files and our Kubernetes files.

However, sometimes, the translation produced by Kompose will be imperfect, or even outright broken. For instance, if we are using local volumes (docker run -v /path/to/data:/data ...), we need to find another way to bring these files into our containers once they run on Kubernetes. (By using Persistent Volumes, for instance.) Sometimes, we might want to adapt the application architecture: for instance, to ensure that the web server and the app server are running together, within the same pod, instead of being two distinct entities.

In that case, we can use kompose convert, which will generate the YAML files corresponding to the resources that would have been created with kompose up. Then, we can edit these files and touch them up at will before loading them into our cluster.
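In practice, that workflow might look like this (the file and directory names are illustrative):

kompose convert -f docker-compose.yml -o k8s/
# edit the generated manifests in k8s/ as needed, then:
kubectl apply -f k8s/
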

This method gives us a lot of flexibility (since we can edit and transform the YAML files as much as necessary before using them), but this means any change or edit might have to be done again when we update the original Compose file.

If we maintain many applications, but with similar architectures (perhaps they use the same languages, frameworks, and patterns), then we can use kompose convert, followed by an automated post-processing step on the generated YAML files. However, if we maintain a small number of apps (and/or they are very different from each other), writing custom post-processing scripts suited to every scenario may be a lot of work. And even then, it’s a good idea to double-check the output of these scripts a number of times, before letting them output YAML that would go straight to production. This might warrant even more work; more than you might want to invest.

Is it worth the time to automate?

This table (courtesy of XKCD) tells us how much time we can spend on automation before it gets less efficient than doing things by hand.

I’m a huge fan of automation. Automation is great. But before I automate something, I need to be able to do it …

… Manually

The best way to understand how these tools work, is to do their job ourselves, by hand.

Just to make it clear: I’m not suggesting that you do this on all your apps (especially if you have many apps!), but I would like to show my own technique for converting a Compose app into Kubernetes resources.

The basic idea is simple: each line in our Compose file must be mapped to something in Kubernetes. If I were to print the YAML for both my Compose file and my Kubernetes resources, and put them side by side, for each line in the Compose file, I should be able to draw an arrow pointing to a line (or multiple lines) on the Kubernetes side.

This helps me to make sure that I haven’t skipped anything.

Now, I need to know how to express every section, parameter, and option in the Compose file. Let’s see how it works on a small example!

This is an actual Compose file written (and used) by one of my customers. I replaced image and host names to respect their privacy, but other than that, it’s verbatim. This Compose file is used to run a LAMP stack in a preproduction environment on a single server. The next step is to “Kubernetize” this app (so that it can scale horizontally if necessary).

Next to each line of the Compose file, I indicated how I translated it into a Kubernetes resource. In another post (to be published next week), I will explain step by step the details of this translation from Compose to Kubernetes.

This is a lot of work. Furthermore, that work is specific to this app, and has to be re-done for every other app! This doesn’t sound like an efficient technique, does it? In this specific case, my customer has a whole bunch of apps that are very similar to the first one that we converted together. Our goal is to build an app template (for instance, by writing a Helm Chart) that we can reuse, or at least use as a base, for many applications.

If the apps differ significantly, there’s no way around it: we need to convert them one by one.

In that case, my technique is to tackle the problem from both ends. In concrete terms, that means converting an app manually, and then thinking about what we can adapt and tweak so that the original app (running under Compose) can be easier to deploy with Kubernetes. Some tiny changes can help a lot. For instance, if we connect to another service through an FQDN (e.g. sql-57.whatever.com), replace it with a short name (e.g. sql) and use a Service (with an ExternalName or static endpoints). Or use an environment variable to switch the code behavior. If we normalize our applications, it is very likely that we will be able to deal with them automatically with Kompose or Docker Enterprise Edition.
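For instance, the ExternalName trick mentioned above can be done with a one-liner (using the example names from this paragraph):

kubectl create service externalname sql --external-name sql-57.whatever.com
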

(This, by the way, is the whole point of platforms like OpenShift or CloudFoundry: they restrict what you can do to a smaller set of options, making that set of options easier to manage from an automation standpoint. But I digress!)

Conclusions

Moving an app from Compose to Kubernetes requires transforming the application’s Compose file into multiple Kubernetes resources. There are tools (like Kompose) to do this automatically, but these tools are no silver bullet (at least, not yet).

And even if we use a tool, we need to understand how it works and what it’s producing. We need to be familiar with Kubernetes, its concepts, and various resource types.

This is the perfect opportunity to bring up the training sessions that we’re organizing in February 2019 in Canada!

There will be:

These sessions are designed to complement each other, so you can follow both of them if you want to ramp up your skills in containers and orchestration.

If you wonder what these training sessions look like, our slides and other materials are publicly available on http://container.training/. You will also find a few videos taken during previous sessions and workshops. This will help you to figure out if this content is what you need!

Source

Poseidon-Firmament Scheduler – Flow Network Graph Based Scheduler

Wednesday, February 06, 2019

Poseidon-Firmament Scheduler – Flow Network Graph Based Scheduler

Authors: Deepak Vij (Huawei), Shivram Shrivastava (Huawei)

Introduction

Cluster Management systems such as Mesos, Google Borg, Kubernetes etc. in a cloud scale datacenter environment (also termed as Datacenter-as-a-Computer or Warehouse-Scale Computing – WSC) typically manage application workloads by performing tasks such as tracking machine live-ness, starting, monitoring, terminating workloads and more importantly using a Cluster Scheduler to decide on workload placements.

Cluster Scheduler essentially performs the scheduling of workloads to compute resources – combining the global placement of work across the WSC environment makes the “warehouse-scale computer” more efficient, increases utilization, and saves energy. Cluster Scheduler examples are Google Borg, Kubernetes, Firmament, Mesos, Tarcil, Quasar, Quincy, Swarm, YARN, Nomad, Sparrow, Apollo etc.

In this blog post, we briefly describe the novel Firmament flow network graph based scheduling approach (OSDI paper) in Kubernetes. We specifically describe the Firmament Scheduler and how it integrates with the Kubernetes cluster manager using Poseidon as the integration glue. We have seen extremely impressive scheduling throughput performance benchmarking numbers with this novel scheduling approach. Originally, Firmament Scheduler was conceptualized, designed and implemented by University of Cambridge researchers, Malte Schwarzkopf & Ionel Gog.

Poseidon-Firmament Scheduler – How It Works

At a very high level, Poseidon-Firmament scheduler augments the current Kubernetes scheduling capabilities by incorporating novel flow network graph based scheduling capabilities alongside the default Kubernetes Scheduler. It models the scheduling problem as a constraint-based optimization over a flow network graph – by reducing scheduling to a min-cost max-flow optimization problem. Due to the inherent rescheduling capabilities, the new scheduler enables a globally optimal scheduling environment that constantly keeps refining the workloads placements dynamically.

Key Advantages

Flow graph scheduling based Poseidon-Firmament scheduler provides the following key advantages:

  • Workloads (pods) are bulk scheduled to enable scheduling decisions at massive scale.
  • Based on the extensive performance test results, Poseidon-Firmament scales much better than Kubernetes default scheduler as the number of nodes increase in a cluster. This is due to the fact that Poseidon-Firmament is able to amortize more and more work across workloads.
  • Poseidon-Firmament Scheduler outperforms the Kubernetes default scheduler by a wide margin when it comes to throughput performance numbers for scenarios where compute resource requirements are somewhat uniform across jobs (Replicasets/Deployments/Jobs). Poseidon-Firmament scheduler end-to-end throughput performance numbers, including bind time, consistently get better as the number of nodes in a cluster increase. For example, for a 2,700 node cluster (shown in the graphs here), Poseidon-Firmament scheduler achieves a 7X or greater end-to-end throughput than the Kubernetes default scheduler, which includes bind time.
  • Availability of complex rule constraints.
  • Scheduling in Poseidon-Firmament is very dynamic; it keeps cluster resources in a global optimal state during every scheduling run.
  • Highly efficient resource utilizations.

Firmament Flow Network Graph – An Overview

Firmament scheduler runs a min-cost flow algorithm over the flow network to find an optimal flow, from which it extracts the implied workload (pod placements). A flow network is a directed graph whose arcs carry flow from source nodes (i.e. pod nodes) to a sink node. A cost and capacity associated with each arc constrain the flow, and specify preferential routes for it.
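For readers who want the underlying optimization spelled out, this is the standard min-cost flow problem (a generic formulation; Firmament's actual cost model and graph construction are richer than this). Here $c_{uv}$ and $\mathrm{cap}_{uv}$ are the cost and capacity of arc $(u,v)$, $f_{uv}$ is the flow on that arc, and $b_u$ is the flow supply at node $u$ (one unit at each task node, negative at the sink):

\min \sum_{(u,v) \in E} c_{uv}\, f_{uv}
\quad \text{subject to} \quad
0 \le f_{uv} \le \mathrm{cap}_{uv}, \qquad
\sum_{v:(u,v) \in E} f_{uv} - \sum_{v:(v,u) \in E} f_{vu} = b_u \;\; \forall u
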

Figure 1 below shows an example of a flow network for a cluster with two tasks (workloads or pods) and four machines (nodes). Each workload on the left-hand side is a source of one unit of flow. All such flow must be drained into the sink node (S) for a feasible solution to the optimization problem.

Figure 1. Example of a Flow Network

Poseidon Mediation Layer – An Overview

Poseidon is a service that acts as the integration glue between the Firmament scheduler and Kubernetes. It augments the current Kubernetes scheduling capabilities by incorporating the new flow-network-graph-based Firmament scheduling capabilities alongside the default Kubernetes scheduler, with multiple schedulers running simultaneously. Figure 2 below describes the high-level design of how the Poseidon integration glue works in conjunction with the underlying Firmament flow-network-graph-based scheduler.

Figure 2. Firmament Kubernetes Integration Overview

As part of Kubernetes’ multiple-scheduler support, each new pod is typically scheduled by the default scheduler, but Kubernetes can be instructed to use another scheduler by specifying the name of a custom scheduler (in our case, Poseidon-Firmament) at the time of pod deployment. In this case, the default scheduler will ignore that pod and allow the Poseidon scheduler to schedule it onto a relevant node.
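To illustrate, the pod spec below hands a pod to an alternate scheduler via the schedulerName field; the scheduler name “poseidon” is a placeholder here, so use whatever name your Poseidon deployment actually registers:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-poseidon-demo
spec:
  # Placeholder scheduler name; match it to the name your Poseidon deployment registers.
  schedulerName: poseidon
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 128Mi

Pods that omit schedulerName continue to be handled by the default Kubernetes scheduler, so both schedulers can coexist in one cluster.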

Note: For details about the design of this project see the design document.

Possible Use Case Scenarios – When To Use It

The Poseidon-Firmament scheduler enables an extremely high-throughput scheduling environment at scale because its bulk scheduling approach outperforms the Kubernetes pod-at-a-time approach. In our extensive tests, we have observed substantial throughput benefits as long as resource requirements (CPU/memory) for incoming pods are uniform across jobs (ReplicaSets/Deployments/Jobs), mainly due to efficient amortization of work across jobs.

Although the Poseidon-Firmament scheduler is capable of scheduling various types of workloads (service, batch, etc.), the following are a few use cases where it excels the most:

  1. For “Big Data/AI” jobs consisting of a large number of tasks, throughput benefits are tremendous.
  2. Substantial throughput benefits also apply to service or batch job scenarios where workload resource requirements are uniform across jobs (ReplicaSets/Deployments/Jobs).

Current Project Stage

Currently, Poseidon-Firmament is an incubation project. An alpha release is available at https://github.com/kubernetes-sigs/poseidon.

Source

Ansible Docker | Application Automation

Ansible-Docker-Rancher

Over the last year I’ve been using Rancher with Ansible, and have found that using the two together can be incredibly useful. If you aren’t familiar with Ansible, it is a powerful configuration management tool which can be used to manage servers remotely without a daemon or agent running on the host. Instead, it uses SSH to connect to hosts and applies tasks directly on the machines. Because of this, as long as you have SSH access to the host (and Python running on the host), you will be able to use Ansible to manage hosts remotely. You can find detailed documentation for Ansible on the company’s website. In this post, I will be using Ansible with Docker to automate the build-out of a simple WordPress environment on a Rancher deployment. Specifically, I will include the following steps:

  • Installing Docker on my hosts using Ansible.
  • Setting up a fresh Rancher installation using Ansible.
  • Registering hosts with Rancher using Ansible.
  • Deploying the application containers on the hosts.

Preparing the Playbook

Ansible uses “playbooks”, which are Ansible’s configuration and orchestration language. Playbooks are expressed in YAML format and describe a set of tasks that will run on remote hosts; see this introduction for more information on how to use Ansible playbooks. In our case the playbook will run on 3 servers: one server for the Rancher platform, a second server for the MySQL database, and the last one for the WordPress application. The addresses and information about these servers are listed in the following Ansible inventory file; the inventory is the file that contains the names, addresses, and ports of the remote hosts where the Ansible playbook is going to execute:

[Rancher]
rancher ansible_ssh_port=22 ansible_ssh_host=x.x.x.x

[nodes:children]
application
database

[application]
node1 ansible_ssh_port=22 ansible_ssh_host=y.y.y.y

[database]
node2 ansible_ssh_port=22 ansible_ssh_host=z.z.z.z

Note that I used grouping in the inventory to better describe the list of machines used in this deployment. The playbook itself consists of five plays, which together result in deploying the WordPress application:

  • Play #1: Installing and configuring Docker

The first play will install and configure Docker on all machines; it uses the “docker” role, which we will see in the next section.

  • Play #2: Setting up Rancher server

This play will install Rancher server and make sure it is up and running; it will only run on one server, which is considered to be the Rancher server.

  • Play #3: Registering Rancher hosts

This play will run on two machines to register each of them with the Rancher server, which should be up and running from the previous play.

  • Play #4: Deploy MySQL Container

This is a simple play to deploy the MySQL container on the database server.

  • Play #5: Deploy WordPress App

This play will install the WordPress application on the second machine and link it to the MySQL container. Here is rancher.yml, the playbook file:

---
# play 1
- name: Installing and configuring Docker
  hosts: all
  sudo: yes
  roles:
    - { role: docker, tags: ["docker"] }

# play 2
- name: Setting up Rancher Server
  hosts: "rancher"
  sudo: yes
  roles:
    - { role: rancher, tags: ["rancher"] }

# play 3
- name: Register Rancher Hosts
  hosts: "nodes"
  sudo: yes
  roles:
    - { role: rancher_reg, tags: ["rancher_reg"] }

# play 4
- name: Deploy MySQL Container
  hosts: 'database'
  sudo: yes
  roles:
      - { role: mysql_docker, tags: ["mysql_docker"] }

# play 5
- name: Deploy WordPress App
  hosts: "application"
  sudo: yes
  roles:
    - { role: wordpress_docker, tags: ["wordpress_docker"] }

Docker role

This role will install the latest version of Docker on all the servers. The role assumes you are using Ubuntu 14.04, because other Ubuntu releases require additional dependencies to run Docker that are not discussed here; see the Docker documentation for more information on installing Docker on different platforms.

- name: Fail if OS distro is not Ubuntu 14.04
  fail:
      msg="The role is designed only for Ubuntu 14.04"
  when: "{{ ansible_distribution_version | version_compare('14.04', '!=') }}"

The Docker module in Ansible requires the docker-py library to be installed on the remote server, so we first install pip and use it to install the docker-py library on all servers before installing Docker:

- name: Install dependencies
  apt:
      name={{ item }}
      update_cache=yes
  with_items:
      - python-dev
      - python-setuptools

- name: Install pip
  easy_install:
      name=pip

- name: Install docker-py
  pip:
      name=docker-py
      state=present
      version=1.1.0

The next tasks will import the Docker apt repo and install Docker:

- name: Add docker apt repo
  apt_repository:
      repo='deb https://apt.dockerproject.org/repo ubuntu-{{ ansible_distribution_release }} main'
      state=present

- name: Import the Docker repository key
  apt_key:
      url=https://apt.dockerproject.org/gpg
      state=present
      id=2C52609D

- name: Install Docker package
  apt:
      name=docker-engine
      update_cache=yes

Finally, the next three tasks will create a system group for Docker, add any users defined in the “docker_users” variable to this group, copy a template for the Docker configuration, and then restart Docker.

- name: Create a docker group
  group:
      name=docker
      state=present

- name: Add user(s) to docker group
  user:
      name={{ item }}
      group=docker
      state=present
  with_items: docker_users
  when: docker_users is defined

- name: Configure Docker
  template:
      src=default_docker.j2
      dest=/etc/default/docker
      mode=0644
      owner=root
      group=root
  notify: restart docker

The “default_docker.j2” template checks for the variable “docker_opts”, which is not defined by default; if it is defined, the options it contains are added to the file:

# Docker Upstart and SysVinit configuration file

# Use DOCKER_OPTS to modify the daemon startup options.
{% if docker_opts is defined %}
DOCKER_OPTS="{{ docker_opts | join(' ')}}"
{% endif %}
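For example, if you wanted the daemon to listen on a TCP socket in addition to the default Unix socket, you could define the variable in your group variables; this is just an illustrative sketch, and the values below are not part of the original playbook:

# group_vars/all.yml (illustrative)
docker_opts:
  - "-H unix:///var/run/docker.sock"
  - "-H tcp://0.0.0.0:2375"

With this in place, the template above renders DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375" into /etc/default/docker, and the notify handler restarts Docker.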

Rancher role

The rancher role is really simple: its goal is to pull and run Rancher’s server image from Docker Hub, and then wait for the Rancher server to start and listen for incoming connections:

---
- name: Pull and run the Rancher/server container
  docker:
      name: "{{ rancher_name }}"
      image: rancher/server
      restart_policy: always
      ports:
        - "{{ rancher_port }}:8080"

- name: Wait for the Rancher server to start
  action: command docker logs {{ rancher_name }}
  register: rancher_logs
  until: rancher_logs.stdout.find("Listening on") != -1
  retries: 30
  delay: 10

- name: Print Rancher's URL
  debug: msg="You can connect to rancher server http://{{ ansible_default_ipv4.address }}:{{ rancher_port }}"

Rancher Registration Role

The rancher_reg role will pull and run the rancher/agent Docker image. First, it uses Rancher’s API to retrieve the registration token so that each agent can be run with the right registration URL; this token is needed to register hosts in the Rancher environment:

---
- name: Install httplib2
  apt:
      name=python-httplib2
      update_cache=yes

- name: Get the default project id
  action: uri
      method=GET
      status_code=200
      url="http://{{ rancher_server }}:{{ rancher_port }}/v1/projects" return_content=yes
  register: project_id

- name: Return the registration token URL of Rancher server
  action: uri
      method=POST
      status_code=201
      url="http://{{ rancher_server }}:{{ rancher_port }}/v1/registrationtokens?projectId={{ project_id.json['data'][0]['id'] }}" return_content=yes
  register: rancher_token_url

- name: Return the registration URL of Rancher server
  action: uri
      method=GET
      url={{ rancher_token_url.json['links']['self'] }} return_content=yes
  register: rancher_token

Then it makes sure that no other agent is running on the server, and runs the Rancher agent:

- name: Check if the rancher-agent is running
  command: docker ps -a
  register: containers

- name: Register the Host machine with the Rancher server
  docker:
      image: rancher/agent:v{{ rancher_agent_version }}
      privileged: yes
      detach: True
      volumes: /var/run/docker.sock:/var/run/docker.sock
      command: "{{ rancher_token.json['registrationUrl'] }}"
      state: started
  when: "{{ 'rancher-agent' not in containers.stdout }}"

MySQL and WordPress Roles

The two roles use Ansible’s Docker module to run Docker images on the servers. You will notice that each Docker container starts with the RANCHER_NETWORK=true environment variable, which causes the container to use Rancher’s managed network so that containers on different hosts can communicate over the same private network. I will use the official MySQL and WordPress images; the MySQL image requires the MYSQL_ROOT_PASSWORD environment variable to start, and you can also start it with a default database and user, which will be granted superuser permissions on that database.

- name: Create a mysql docker container
  docker:
      name: mysql
      image: mysql:{{ mysql_version }}
      detach: True
      env: RANCHER_NETWORK=true,
           MYSQL_ROOT_PASSWORD={{ mysql_root_password }}

- name: Wait a few minutes for the IPs to be set to the container
  wait_for: timeout=120

# The following tasks help with the connection of the containers in different hosts in Rancher
- name: Fetch the MySQL Container IP
  shell: docker exec mysql ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1 |  sed -n 2p
  register: mysql_sec_ip

- name: print the mysql rancher's ip
  debug: msg={{ mysql_sec_ip.stdout }}

Note that the role waits for 2 minutes to make sure that the container is configured with the right IPs, then fetches the container’s secondary IP, which is the IP used in Rancher’s network, and saves it to the mysql_sec_ip variable, which survives through the rest of the playbook. The WordPress image, on the other hand, starts with WORDPRESS_DB_HOST set to the IP of the MySQL container we just started.

- name: Create a wordpress docker container
  docker:
      name: wordpress
      image: wordpress:{{ wordpress_version }}
      detach: True
      ports:
      - 80:80
      env: RANCHER_NETWORK=true,
         WORDPRESS_DB_HOST={{ mysql_host }}:3306,
         WORDPRESS_DB_PASSWORD={{ mysql_root_password }},
         WORDPRESS_AUTH_KEY={{ wordpress_auth_key }},
         WORDPRESS_SECURE_AUTH_KEY={{ wordpress_secure_auth_key }},
         WORDPRESS_LOGGED_IN_KEY={{ wordpress_logged_in_key }},
         WORDPRESS_NONCE_KEY={{ wordpress_nonce_key }},
         WORDPRESS_AUTH_SALT={{ wordpress_auth_salt }},
         WORDPRESS_SECURE_AUTH_SALT={{ wordpress_secure_auth_salt }},
         WORDPRESS_NONCE_SALT={{ wordpress_nonce_salt }},
         WORDPRESS_LOGGED_IN_SALT={{ wordpress_loggedin_salt }}

Managing Variables

Ansible defines variables in different layers, and some layers override others. For our case, I added a default set of variables to each role so they can be reused in other playbooks later, and added the currently used variables in the group_vars directory to override them (a minimal sketch of this layering follows the directory layout below).

├── group_vars
│   ├── all.yml
│   ├── nodes.yml
│   └── Rancher.yml
├── hosts
├── rancher.yml
├── README.md
└── roles
    ├── docker
    ├── mysql_docker
    ├── rancher
    ├── rancher_reg
    └── wordpress_docker
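As a minimal sketch of that layering (file contents here are illustrative, not taken from the repository), a role can ship a conservative default that group_vars then overrides for this particular deployment:

# roles/docker/defaults/main.yml -- lowest-precedence role default (illustrative)
docker_users: []

# group_vars/all.yml -- overrides the role default for every host in this playbook (illustrative)
docker_users:
  - ubuntu
rancher_port: 8080

Role defaults sit at the bottom of Ansible’s precedence order, so anything defined in group_vars wins without touching the role itself.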

The nodes.yml variables apply to the nodes group defined in the inventory file, which contains the database and application servers; this file holds the information used by the MySQL and WordPress containers:

---
rancher_server: "{{ hostvars['rancher']['ansible_ssh_host'] }}"

# MySQL variables
mysql_root_password: "{{ lookup('password', mysql_passwd_tmpfile + ' length=20 chars=ascii_letters,digits') }}"
mysql_passwd_tmpfile: /tmp/mysqlpasswd.file
mysql_host: "{{ hostvars.node2.mysql_sec_ip.stdout }}"
mysql_port: 3306
mysql_version: 5.5

# WordPress variables
wordpress_version: latest

You may note that I used a password lookup to generate a random password for the MySQL root user; a good alternative to this method would be Ansible Vault, which can encrypt sensitive data like passwords or keys.

Running the Playbook

To run the playbook, I fired up 3 machines with Ubuntu 14.04 installed, added their IPs to the inventory we saw earlier, and then used the following command to start the playbook:

$ ansible-playbook -u root -i hosts rancher.yml

After the playbook finishes its work, you can access the Rancher server and you will see the following:

rancher_nodes

And when accessing the IP of node1 on port 80, you will reach WordPress:

wordpress_rancher

Conclusion

Ansible is a very powerful and simple automation tool that can be used to manage and configure a fleet of servers, and using Ansible with Rancher can be a very efficient way to start your environment and manage your Docker containers. This month we are hosting an online meetup in which we’ll demonstrate how to run microservices in Docker containers and orchestrate application upgrades using Rancher. Please join us for this meetup to learn more.

Source

Deploying a scalable Jenkins cluster with Docker and Rancher

Containerization brings several benefits to traditional CI platforms where builds share hosts: build dependencies can be isolated, applications can be tested against multiple environments (for example, testing a Java app against multiple versions of the JVM), on-demand build environments can be created with minimal stickiness to ensure test fidelity, and Docker Compose can be used to quickly bring up environments which mirror development environments. Lastly, the inherent isolation offered by Docker Compose-based stacks allows for concurrent builds, a sticking point for traditional build environments with shared components.

One of the immediate benefits of containerization for CI is that we can leverage tools such as Rancher to manage distributed build environments across multiple hosts. In this article, we’re going to launch a distributed Jenkins cluster with Rancher Compose. This work builds upon earlier work by one of the authors, and further streamlines the process of spinning up and scaling a Jenkins stack.

Our Jenkins Stack

jenkins_master_slave

For our stack, we’re using Docker-in-Docker (DIND) images for the Jenkins master and slave, running on top of Rancher compute nodes launched in Amazon EC2. With DIND, each Jenkins container runs a Docker daemon within itself. This allows us to create build pipelines for dockerized applications with Jenkins.

Prerequisites

  • AWS EC2 account
  • IAM credentials for Docker Machine
  • Rancher Server v0.32.0+
  • Docker 1.7.1+
  • Rancher Compose
  • Docker Compose

Setting up Rancher

Step 1: Setup an EC2 host for Rancher server

First things first: we need an EC2 instance to run the Rancher server. We recommend going with the Ubuntu 14.04 AMI for its up-to-date kernel. Make sure to configure the security group for the EC2 instance with access to port 22 (SSH) and port 8080 (Rancher web interface):

launch_ec2_instance_for_rancher_step_2

Once the instance starts, the first order of business is to install the latest version of Docker by following the steps below (for Ubuntu 14.04):

  1. sudo apt-get update
  2. curl -sSL https://get.docker.com/ | sh (requires sudo password)
  3. sudo usermod -aG docker ubuntu
  4. Log out and log back in to the instance

At this point you should be able to run docker without sudo.

Step 2: Run and configure Rancher

To install and run the latest version of Rancher (v0.32.0 at the time of writing), follow the instructions in the docs. In a few minutes your Rancher server should be up and ready to serve requests on port 8080. If you browse to http://YOUR_EC2_PUBLIC_IP:8080/ you will be greeted with a welcome page and a notice asking you to configure access. This is an important step to prevent unauthorized access to your Rancher server. Head over to the settings section and follow the instructions here to configure access control.

rancher_setup_step_1

We typically create a separate environment for hosting all developer-facing tools, e.g., Jenkins, Seyren, Graphite, etc., to isolate them from the public-facing live services. To this end, we’re going to create an environment called Tools. From the environments menu (top left), select “manage environments” and create a new environment. Since we’re going to be working in this environment exclusively, let’s go ahead and make it our default environment by selecting “set as default login environment” from the environments menu.

rancher_setup_step_2_add_tools_env

The next step is to tell Rancher about our hosts. For this tutorial, we’ll launch all hosts with Ubuntu 14.04. Alternatively, you can add an existing host using the custom host option in Rancher. Just make sure that your hosts are running Docker 1.7.1+.

rancher_setup_step_3_add_ec2_host

One of the hosts (JENKINS_MASTER_HOST) is going to run the Jenkins master and needs some additional configuration. First, we need to open up access to port 8080 (the default Jenkins port). You can do that by updating the security group used by that instance from the AWS console. In our case, we updated the security group (“rancher-machine”) which was created by Rancher. Second, we need to attach an additional EBS-backed volume to host the Jenkins configuration. Make sure that you allocate enough space for the volume, based on how large your build workspaces tend to get. In addition, make sure the flag “delete on termination” is unchecked, so that the volume can be re-attached to another instance and backed up easily:

launch_ec2_ebs_volume_for_jenkins

Lastly, let’s add a couple of labels to the JENKINS_MASTER_HOST: 1) add a label called “profile” with the value “jenkins”, and 2) add a label called “jenkins-master” with the value “true”. We’re going to use these labels later to schedule master and slave containers on our hosts.

Step 3: Download and install rancher-compose CLI

As a last step, we need to install the rancher-compose CLI on our development machine. To do that, head over to the applications tab in Rancher and download the rancher-compose CLI for your system. All you need to do is add the path to your rancher-compose CLI to your PATH environment variable.

rancher_setup_step_5_install_rancher_compose

With that, our Rancher server is ready and we can now launch and manage containers with it.

Launching Jenkins stack with Rancher

Step 1: Stack configuration

Before we launch the Jenkins stack, we need to create a new Rancher API key from the API & Keys section under settings. Save the API key pair some place safe, as we’re going to need it with rancher-compose. For the rest of the article, we refer to the API key pair as RANCHER_API_KEY and RANCHER_API_KEY_SECRET. Next, open up a terminal and fetch the latest version of the Docker Compose and Rancher Compose templates from GitHub:

git clone https://github.com/rancher/jenkins-rancher.git
cd jenkins-rancher

Before we can use these templates, let’s quickly update the configuration. First, open up the Docker Compose file and update the Jenkins username and password to a username and password of your choice. Let’s call these credentials JENKINS_USER and JENKINS_PASSWORD. These credentials will be used by the Jenkins slave to talk to the master. Second, update the host labels for slave and master to match the labels you specified for your Rancher compute hosts. Make sure that io.rancher.scheduler.affinity:host_label has a value of “profile=jenkins” for jenkins-slave. Similarly, for jenkins-master, make sure that the value for io.rancher.scheduler.affinity:host_label is “jenkins-master=true”. This will ensure that Rancher containers are only launched on the hosts that you want to limit them to. For example, we are limiting our Jenkins master to only run on a host with an attached EBS volume and access to port 8080.

jenkins-slave:
  environment:
    JENKINS_USERNAME: jenkins
    JENKINS_PASSWORD: jenkins
    JENKINS_MASTER: http://jenkins-master:8080
  labels:
    io.rancher.scheduler.affinity:host_label: profile=jenkins
  tty: true
  image: techtraits/jenkins-slave
  links:
  - jenkins-master:jenkins-master
  privileged: true
  volumes:
  - /var/jenkins
  stdin_open: true
jenkins-master:
  restart: 'no'
  labels:
    io.rancher.scheduler.affinity:host_label: jenkins-master=true
  tty: true
  image: techtraits/jenkins-master
  privileged: true
  stdin_open: true
  volumes:
  - /var/jenkins_home
jenkins-lb:
  ports:
  - '8080'
  tty: true
  image: rancher/load-balancer-service
  links:
  - jenkins-master:jenkins-master
  stdin_open: true

Step 2: Create the Jenkins stack with Rancher compose

Now we’re all set to launch the Jenkins stack. Open up a terminal, navigate to the “jenkins-rancher” directory, and type:

rancher-compose --url http://RANCHER_HOST:RANCHER_PORT/v1/ --access-key RANCHER_API_KEY --secret-key RANCHER_API_KEY_SECRET --project-name jenkins --verbose create

The output of the rancher-compose command should look something like this:

DEBU[0000] Opening compose file: docker-compose.yml
DEBU[0000] Opening rancher-compose file: /home/mbsheikh/jenkins-rancher/rancher-compose.yml
DEBU[0000] [0/3] [jenkins-slave]: Adding
DEBU[0000] Found environment: jenkins(1e9)
DEBU[0000] Launching action for jenkins-master
DEBU[0000] Launching action for jenkins-slave
DEBU[0000] Launching action for jenkins-lb
DEBU[0000] Project [jenkins]: Creating project
DEBU[0000] Finding service jenkins-master
DEBU[0000] [0/3] [jenkins-master]: Creating
DEBU[0000] Found service jenkins-master
DEBU[0000] [0/3] [jenkins-master]: Created
DEBU[0000] Finding service jenkins-slave
DEBU[0000] Finding service jenkins-lb
DEBU[0000] [0/3] [jenkins-slave]: Creating
DEBU[0000] Found service jenkins-slave
DEBU[0000] [0/3] [jenkins-slave]: Created
DEBU[0000] Found service jenkins-lb
DEBU[0000] [0/3] [jenkins-lb]: Created

Next, verify that we have a new stack with three services:

rancher_compose_2_jenkins_stack_created

Before we start the stack, let’s make sure that the services are properly linked. Go to your stack’s settings and select “View Graph”, which should display the links between various services:

rancher_compose_3_jenkins_stack_graph

Step 3: Start the Jenkins stack with Rancher compose

To start the stack and all of the Jenkins services, we have a couple of options: 1) select the “Start Services” option from the Rancher UI, or 2) invoke the rancher-compose CLI with the following command:

rancher-compose --url http://RANCHER_HOST:RANCHER_PORT/v1/ --access-key RANCHER_API_KEY --secret-key RANCHER_API_KEY_SECRET --project-name jenkins --verbose start

Once everything is running, find out the public IP of the host running “jenkins-lb” from the Rancher UI and browse to http://HOST_IP_OF_JENKINS_LB:8080/. If everything is configured correctly, you should see the Jenkins landing page. At this point, both your Jenkins master and slave(s) should be running; however, if you check the logs for your Jenkins slave, you will see 404 errors because the Jenkins slave is unable to connect to the Jenkins master. We need to configure Jenkins to allow slave connections.

Configuring and Testing Jenkins

In this section, we’ll go through the steps needed to configure and secure our Jenkins stack. First, let’s create a Jenkins user with the same credentials (JENKINS_USER and JENKINS_PASSWORD) that you specified in your Docker Compose configuration file. Next, to enable security for Jenkins, navigate to “Manage Jenkins” and select “enable security” from the security configuration. Make sure to specify 5000 as a fixed port for “TCP port for JNLP slave agents”. Jenkins slaves communicate with the master node on this port.

setup_jenkins_1_security

For the Jenkins slave to be able to connect to the master, we first need to install the Swarm plugin. The plugin can be installed from the “manage plugins” section in Jenkins. Once you have the Swarm plugin installed, your Jenkins slave should show up in the “Build Executor Status” tab:

setup_jenkins_2_slave_shows_up

Finally, to complete the master-slave configuration, head over to “manage Jenkins”. You should now see a notice about enabling the master security subsystem. Go ahead and enable the subsystem; it can be used to control access between master and slaves:

setup_jenkins_3_master_slave_security_subsystem

Before moving on, let’s configure Jenkins to work with Git and Java-based projects. To configure Git, simply install the Git plugin. Then, select “Configure” from the “Manage Jenkins” settings and set up the JDK and Maven installers you want to use for your projects:

setup_jenkins_4_jdk_7

setup_jenkins_5_maven_3

The steps above should be sufficient for building Docker- or Maven-based Java projects. To test our new Jenkins stack, let’s create a Docker-based job. Create a new “Freestyle Project” job named “docker-test”, add a build step of type “Execute shell”, and enter the following commands:

docker -v
docker run ubuntu /bin/echo hello world
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)

Save the job and run it. In the console output, you should see the version of Docker running inside your Jenkins container and the output of the other Docker commands in our job.

Note: The stop, rm and rmi commands used in the above shell script stop and clean up all containers and images. Each Jenkins job should only touch its own containers, and therefore we recommend deleting this job after a successful test.

Scaling Jenkins with Rancher

This is an area where Rancher really shines; it makes managing and scaling Docker containers trivially easy. In this section we’ll show you how to scale up and scale down the number of Jenkins slaves based on your needs.

In our initial setup, we only had one EC2 host registered with Rancher, with all three services (Jenkins load balancer, Jenkins master and Jenkins slave) running on the same host. It looks like this:

rancher_one_host

We’re now going to register another host by following the instructions here:

rancher_setup_step_4_hosts

jenkins_scale_up

To launch more Jenkins slaves, simply click “Scale up” from your “Jenkins” stack in Rancher. That’s it! Rancher will immediately launch a new Jenkins slave container. As soon as the slave container starts, it will connect with the Jenkins master and show up in the list of build hosts:

jenkins_scale_up_2

To scale down, select “edit” from the jenkins-slave settings and adjust the number of slaves to your liking:

jenkins_scale_down

In a few seconds you’ll see the change reflected in Jenkins’ list of available build hosts. Behind the scenes, Rancher uses labels to schedule containers on hosts. For more details on Rancher’s container scheduling, we encourage you to check out the documentation.
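If you prefer to keep the desired scale in version control rather than clicking through the UI, the stack’s rancher-compose.yml can declare a scale per service; a minimal sketch (the scale values here are illustrative):

# rancher-compose.yml (illustrative scale values)
jenkins-master:
  scale: 1
jenkins-slave:
  scale: 3
jenkins-lb:
  scale: 1

Running rancher-compose up with this file alongside the docker-compose.yml brings the jenkins-slave service to three containers, subject to the same host-label affinity rules described earlier.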

Conclusion

In this article, we built a Jenkins stack with Docker and Rancher. We deployed a multi-node Jenkins platform with Rancher Compose which can be launched with a couple of commands and scaled as needed. Rancher’s cross-node networking allows us to seamlessly scale the Jenkins cluster across multiple nodes, and potentially across multiple clouds, with just a few clicks. Another significant aspect of our Jenkins stack is the DIND containers for the Jenkins master and slave, which allow the Jenkins setup to be readily used for dockerized and non-dockerized applications.

In future articles, we’re going to use this Jenkins stack to create build pipelines and highlight CI best practices for dockerized applications. To learn more about managing applications through the upgrade process, please join our next online meetup where we’ll dive into the details of how to manage deployments and upgrades of microservices with Docker and Rancher.

Source