Docker Load Balancing Now Available in Rancher 0.16

Hello, my name is Alena Prokharchyk and I am a part of the software
development team at Rancher Labs. In this article I’m going to give an
overview of a new feature I’ve been working on, which was released this
week with Rancher 0.16 – a Docker Load Balancing service. One of the
most frequently requested Rancher features, load balancers are used to
distribute traffic between docker containers. Now Rancher users
can configure, update and scale up an integrated load balancing service
to meet their application needs, using either Rancher’s UI or API. To
implement our load balancing functionality we decided to use HAProxy,
which is deployed as a container and managed by the Rancher
orchestration functionality. With Rancher’s Load Balancing capability,
users are now able to use a consistent, portable load balancing service
on any infrastructure where they can run Docker. Whether it is running
in a public cloud, private cloud, lab, cluster, or even on a laptop, any
container can be a target for the load balancer.

Creating a Load Balancer

Once you have an environment running in Rancher, it is simple to create
a Load Balancer. You’ll see a new top level tab in the Rancher UI
called “Balancing” from which you can create and access your load
balancers.
To create a new load balancer click on + Add Load Balancer. You’ll
be given a configuration screen to provide details on how you want the
load balancer to function.
There are a number of different options for configuration, and I’ve
created a video demonstration to walk through the process.

Updating an active Load Balancer

In some cases after your Load Balancer has been created, you might want
to change its settings – for example to add or remove listener ports,
configure a health check, or simply add more target containers. Rancher
performs all the updates without any downtime for your application. To
update the Load Balancer, bring up the Load Balancer “Details” view by
clicking on its name in the UI. Then navigate to the toolbar of the
setting you want to change and make the update.


Understanding Health Checks

Health checks can be incredibly helpful when running a production
application. Health checks monitor the availability of target
containers, so that if one of the load balanced containers in your app
becomes unresponsive, it can be excluded from the list of balanced
hosts until it’s functioning again. You can delegate this task to the
Rancher Load Balancer by configuring the health check on it from the UI.
Just provide a monitoring URL for the target container, as well as
check intervals, and healthy and unhealthy response thresholds.
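
Conceptually, the check the balancer performs is just a periodic HTTP
probe; something like this hypothetical loop (the URL, port, and
interval here are examples only):

# probe the monitoring URL every 2 seconds; a failing HTTP status marks the target unhealthy
while true; do
  curl -fsS -o /dev/null --max-time 2 http://10.42.0.5:8080/healthcheck \
    && echo healthy || echo unhealthy
  sleep 2
done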

Stickiness Policies

Some applications require that a user continues to connect to the same
backend server within the same login session. This persistence is
achieved by configuring a stickiness policy on the Load Balancer. With
stickiness, you can control whether the session cookie is provided by
the application, or directly from the load balancer.

Scaling your application

The Load Balancer service is primarily used to help scale up
applications as you add additional targets to the load balancer.
However, to provide an additional layer of scaling, the load balancer
itself can also scale across multiple hosts, creating a clustered load
balancing service. With the Load Balancer deployed on multiple hosts,
you can use a global load balancing service, such as Amazon Web
Services’ Route 53, to distribute incoming traffic across load
balancers. This can be especially useful when running load balancers in
different physical locations.

Load Balancing and Service Discovery

This new load balancing support has plenty of independent value, but it
will also be an important part of the work we’re doing on service
discovery, and support for Docker Compose. We’re still working on this
and testing it, but you should start to see this functionality in
Rancher over the next four to six weeks. If you’d like to learn about
load balancing, Docker Compose, service discovery and running
microservices with Rancher, please join our next online meetup where
we’ll be covering all of these topics.

Alena Prokharchyk @lemonjet https://github.com/alena1108


Building a Continuous Integration Environment using Docker, Jenkins and OpenVPN


Since
I started playing with Docker I have been thinking that its network
implementation is something that will need to be improved before I could
really use it in production. It is based on container links and service
discovery but it only works for host-local containers. This creates
issues for a few use cases, for example when you are setting up services
that need advanced network features like broadcasting/multicasting for
clustering. In this case you must deploy your application stack
containers in the same Docker host, but it makes no sense to deploy a
whole cluster in the same physical or virtual host. I would also like
container networking to function without performing any actions like
managing port mappings or exposing new ports. This is why networking is
one of my favorite features of Rancher, because it overcomes Docker
network limitations using a software defined network that connects all
docker containers under the same network as if all of them were
physically connected. This feature makes it much easier to interconnect
your deployed services because you don’t have to configure anything. It
just works. However, I was still missing the ability to
easily reach my containers and services from my PC as if I also was on
the same network again without configuring new firewall rules or mapping
ports. That is why I created a Docker image that extends Rancher network
using OpenVPN. This allows any device that can run an OpenVPN client,
including PCs, gateways, and even mobile devices or embedded systems, to
access your Rancher network in an easy and secure way, because all its
traffic is encrypted. There are many use cases for this approach; here
are some examples:

  • Allow all users in your office to access your containers
  • Enabling on-call sysadmins to access your containers from anywhere at
    any time
  • Or the example that we are carrying out: allowing a user who works
    at home to access your containers

And all this without reconfiguring your Rancher environment every time
that you grant access to someone. In this post we will install a
minimalistic Continuous Integration (CI) environment on AWS using
Rancher and RancherOS. The main idea is to create a scenario where a
developer who works from home can easily access our CI environment,
without adding IPs to a whitelist, exposing services to the Internet,
or performing special configuration. To do so we will install and
configure these Docker images:

  • jenkins – a Jenkins instance to compile a sample WAR hosted on
    GitHub. Jenkins will automatically deploy this application in Tomcat
    after compiling it.
  • tutum/tomcat:7.0 – a Tomcat instance for deploying the sample WAR
  • nixel/rancher-vpn-server: a custom OpenVPN image I have created
    specially to extend Rancher network

And we are using a total of 4 Amazon EC2 instances:

  • 1 for running Rancher Server
  • 1 for running VPN server
  • 1 for running Tomcat server
  • 1 for running Jenkins

At the end, the
developer will be able to browse to Jenkins and Tomcat webapp using his
VPN connection. As you will see, this is easy to achieve: you do not
configure anything to access Tomcat or Jenkins from your PC, you just
launch a container and you are able to connect to it.

Preparing AWS cloud

You need to perform these actions on AWS before setting up the CI
environment.

Creating a Key Pair

Go to the EC2 Console and enter the Key Pairs section. When you create
the Key Pair your browser will download a private key that you will
need later if you want to connect to your Rancher Server instance using
SSH. Save this file because you won’t be able to download it from AWS
again.

Creating a Security Group

Before creating a Security Group, go to the VPC Console and choose the
VPC and Subnet where you will deploy your EC2 instances. Copy the VPC
ID, Subnet ID, and CIDR. Then go to the EC2 Console and create a
Security Group named Rancher which will allow this inbound traffic:

  • Allow 22/tcp, 2376/tcp and 8080/tcp ports from any source, needed
    for Docker machine to provision hosts
  • Allow 500/udp and 4500/udp ports from any source, needed for Rancher
    network
  • Allow 9345/tcp and 9346/tcp ports from any source, needed for UI
    features like graphs, view logs, and execute shell
  • Allow 1194/tcp and 2222/tcp ports from any source, needed to publish
    our VPN server container

Be sure to select the appropriate VPC in the Security Group dialog.

Creating an Access Key

In the EC2 Console click your name in the top menu bar and go to
Security Credentials. Expand the Access Keys (Access Key ID and Secret
Access Key) option and create a new Access Key. Finally, click Download
Key File because, again, you won’t be able to do it later. You will
need this key for Rancher Server to create Docker hosts for you.

Installing Rancher Server

Create a new instance on the EC2 console that uses the rancheros-0.2.1
AMI; search for it in the Community AMIs section. For this tutorial I
am using a basic t1.micro instance with an 8GB disk; you may change
this to better fit your environment needs. Now enter the Configure
Instance Details screen and select the appropriate Network and Subnet.
Then expand the Advanced Details section and enter this user data:

#!/bin/bash
docker run -d -p 8080:8080 rancher/server:v0.14.2

This will install and run Rancher Server 0.14.2 when the instance boots.
Before launching the new instance, be sure to choose the Security Group
and Key Pair we just created. Finally, go to the Instances menu and get
your Rancher Server instance’s public IP. After some minutes navigate to
http://RANCHER_SERVER_PUBLIC_IP:8080 and you will enter the Rancher UI.

Provisioning Docker hosts

In this section we are creating our Docker hosts. Go to Rancher UI and
click Add Host button, confirm your Rancher Server public IP and then
click Amazon EC2 provider. In this form you need to enter the
following data: host name, Access Key, Secret Key, Region, Zone, VPC
ID, Subnet ID,
and Security Group. Be sure to enter the appropriate
values for Region, Zone, VPC ID and Subnet ID because they must
match those used by the Rancher Server instance. You must specify the
Security Group name instead of its ID; in our case it is named Rancher.
Repeat this step three times so Rancher will provision our three Docker
hosts. After some minutes you will see your hosts running in Rancher UI.


Installing VPN container

Now it’s time to deploy our VPN server container that will extend the
Rancher network. Go to your first host, click Add Container button and
follow these steps:

  1. Enter a name for this container like rancher-vpn-server
  2. Enter docker image: nixel/rancher-vpn-server:latest
  3. Add this TCP port map: 1194 (on Host) to 1194 (in Container)
  4. Add this TCP port map: 2222 (on Host) to 2222 (in Container)

Now expand Advanced Options section and follow these steps:

  1. In Volume section add this new volume to persist VPN
    configuration: /etc/openvpn:/etc/openvpn
  2. In Networking section be sure to select Managed Network on
    docker0
  3. In Security/Host section be sure to enable the Give the container
    full access to the host
    checkbox
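
Taken together, these UI steps correspond roughly to the docker run
below. This is only a sketch: in practice Rancher schedules the
container and attaches it to its managed network for you.

# ports, volume, and privileged mode as configured in the UI steps above
docker run -d --name rancher-vpn-server --privileged \
  -p 1194:1194 -p 2222:2222 \
  -v /etc/openvpn:/etc/openvpn \
  nixel/rancher-vpn-server:latest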

After a while you will see your rancher-vpn-server container running on
your first host.
Now you are about to use another nice Rancher feature. Expand your
rancher-vpn-server container menu and click the View Logs button.
Now scroll to the top and you will find the information you need in
order to connect with your VPN client. We will use this data later.

Installing Tomcat container

To install Tomcat container you have to click Add Container button on
your second host and follow these steps:

  1. Enter a name for this container like tomcat
  2. Enter docker image: tutum/tomcat:7.0
  3. No port map is required
  4. Expand Advanced Options and in Networking section be sure to
    select Managed Network on docker0

After a while you will see your Tomcat container running on your second
host.
Now open the Tomcat container logs in order to get its admin password;
you will need it later when configuring Jenkins.

Installing Jenkins container

Click Add Container button on your third host and execute the following
steps:

  1. Enter a name for this container like jenkins
  2. Enter docker image: jenkins
  3. No port map is required

Now expand Advanced Options section and follow these steps:

  1. In Volume section add this new volume to persist Jenkins
    configuration: /var/jenkins_home
  2. In Networking section be sure to select Managed Network on
    docker0

After a while you will see your Jenkins container running on your third
host.

Putting it all together

In this final step you are going to install and run the VPN client.
There are two ways to get the client working: using a Docker image I
have prepared that does not require any configuration, or using any
OpenVPN client that you will need to configure. Once the VPN client is
working you will browse to Jenkins in order to create an example CI job
that deploys the sample WAR application on Tomcat. You will finally
browse to the sample application so you can see how all this works
together.

Installing the Dockerized VPN client

On a PC with Docker installed, execute the command that we saw before
in the rancher-vpn-server container logs. Following my example, I will
execute this command:

sudo docker run -ti -d --privileged --name rancher-vpn-client -e VPN_SERVERS=54.149.62.184:1194 -e VPN_PASSWORD=mmAG840NGfKEXw73PP5m nixel/rancher-vpn-client:latest

Adapt it to your environment. Then show rancher-vpn-client container
logs:

sudo docker logs rancher-vpn-client

You will see a message printing the route you need to add in your system
in order to be able to reach Rancher network.
In my case I’m executing this command:

sudo route add -net 10.42.0.0/16 gw 172.17.0.8

At this point you are able to ping all your containers, no matter in
which host they run. Now your PC is actually connected to Rancher
network and you can reach any container and service running on your
Rancher infrastructure. If you repeat this step in a Linux Gateway at
your office you will, in fact, expose Rancher network to all the
computers connected in your LAN, which is really interesting.

Installing a custom OpenVPN client

If you prefer to use an existing or custom OpenVPN client, you can do
that too. You will need the OpenVPN configuration file, which you can
get by executing the SSH command that we got before in the
rancher-vpn-server container log. In my case I can get the
RancherVPNClient.ovpn file by executing this command:

sshpass -p mmAG840NGfKEXw73PP5m ssh -p 2222 -o ConnectTimeout=4 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@54.149.62.184 "get_vpn_client_conf.sh 54.149.62.184:1194" > RancherVPNClient.ovpn

Now you can, for example, run OpenVPN with this command:

/usr/sbin/openvpn --config RancherVPNClient.ovpn

You can also use OpenVPN iOS/Android application with this
RancherVPNClient.ovpn file and you will also be able to access your
Rancher network from your mobile or tablet. Again, you can extend your
VPN for all users in your LAN if you repeat this step in a Linux Gateway
in your office.

Configuring Jenkins

Now it’s time to configure Jenkins to compile and deploy our sample WAR
in Tomcat. Browse to http://JENKINS_CONTAINER_IP:8080 (in my case
http://10.42.13.224:8080) and you will see the Jenkins Dashboard.
Before starting you must install the GitHub plugin and Maven following
these steps:

  1. Click Manage Jenkins menu option and then Manage Plugins
  2. Go to Available tab and search for Github plugin, named “Github
    Plugin”. Activate its checkbox
  3. Click Download now and install after restart button
  4. When the plugin is installed enable checkbox Restart Jenkins when
    installation is complete and no jobs are running,
    and then wait for
    Jenkins to be restarted
  5. When Jenkins is running again, go to Manage Jenkins and click
    Configure System
  6. In the Maven section click the Add Maven button, enter a name for
    the installation and choose the latest Maven version.
  7. Click Save button to finish

When you are back in Dashboard click create new jobs link and follow
these instructions:


  • In Build section enter the following maven goals and options.
    Replace TOMCAT_CONTAINER_IP with the IP assigned to your
    Tomcat container (10.42.236.18 in my case) and
    TOMCAT_ADMIN_PASSWORD with the password we saw before for
    admin user (6xc3gzOi4pMG in my case).

clean package tomcat7:redeploy -DTOMCAT_HOST=TOMCAT_CONTAINER_IP -DTOMCAT_PORT=8080 -DTOMCAT_USER=admin -DTOMCAT_PASS=TOMCAT_ADMIN_PASSWORD

I am setting this maven configuration:

clean package tomcat7:redeploy -DTOMCAT_HOST=10.42.236.18 -DTOMCAT_PORT=8080 -DTOMCAT_USER=admin -DTOMCAT_PASS=6xc3gzOi4pMG

  • Save your job

Now you can click the Build Now button to run your job. Open your
execution (listed in the Build History table) and then click the
Console Output option. At the bottom you will see the result of the
build and deployment.
Testing the sample application

Now browse to http://TOMCAT_CONTAINER_IP:8080/sample/ and you will see
a page showing information about the Tomcat server and your browser
client.

Conclusion

In this post we have installed a basic Continuous Integration
environment as an example to make your Docker containers reachable from
your PC, your LAN, and even a mobile device or any system that can
execute an OpenVPN client. This is possible thanks to Rancher Network, a
great functionality that improves Docker networking by connecting your
containers under the same network. What we actually did was to extend
Rancher network using an OpenVPN link that is really easy to configure
with Docker, and secure to use because all your traffic is being
encrypted. This functionality can help many companies to better manage
the way they give access to their containers from any unknown or
uncontrolled network. Now you no longer need to think about exposing
or mapping ports, changing firewall rules, or worrying about which
services you publish to the Internet. For more information on managing
docker with Rancher, please join our next online meetup, where we’ll be
demonstrating Rancher, Docker Compose, service discovery and many other
capabilities.

Manel Martinez is a Linux
systems engineer with experience in the design and management of
scalable, distributable and highly available open source web
infrastructures based on products like KVM, Docker, Apache, Nginx,
Tomcat, Jboss, RabbitMQ, HAProxy, MySQL and XtraDB. He lives in Spain,
and you can find him on Twitter
@manel_martinezg.


Remembering Paul Hudak

Renowned computer
scientist Paul Hudak, one of the designers of the Haskell programming
language, died of leukemia this week. There’s been an outpouring of
reactions from people Paul’s life and work have touched. Paul was my
Ph.D. adviser at Yale in the 1990s. He supervised my work, paid for my
education, and created an environment that enabled me to learn from some
of the brightest minds in the world. Paul was an influential figure in
the advancement of functional programming. Functional programming
advocates a declarative style, as opposed to procedural or imperative
style, of programming. For example, instead of writing
result = 0; for (i=0; i<n; i++) result += a[i]; you write
result = sum(a[0:n]). In many cases, the declarative style is easier
to understand and more elegant. Because the declarative style focuses on
what, rather than how to perform the computation, it enables
programmers to worry less about implementation details and gives
compilers more freedom to produce optimized code. One of the strongest
influences of functional programming came from Lambda Calculus, a
mathematical construct formulated by Alonzo Church in the 1930s. Lambda
Calculus has had a huge impact on programming languages even though it
was created before computers were invented. Lambda Calculus introduced
modern programming constructs such as variable bindings, function
definitions, function calls, and recursion. Alan Turing, who studied as
a Ph.D. student under Alonzo Church, proved that Lambda Calculus and
Turing Machine were equivalent in computability. It is therefore
comforting to know that, in theory, whatever a computer can do, we can
write a program for it. In 1977, around the time Paul was starting his
own Ph.D. research, functional programming got a tremendous boost when
John Backus presented his Turing Award lecture titled “Can Programming
be Liberated from the von Neumann Style?” Backus argued conventional
languages designed for sequential “word-at-a-time” processing were too
complex and could no longer keep up with advances in computers. Backus
favored functional style programming which possessed stronger
mathematical properties. The Backus lecture had a strong impact because
it represented a radical departure from his early work in leading the
development of FORTRAN and in participating in the design of ALGOL 60,
the major “von Neumann style” languages of their day. Functional
programming research took off in the 1980s. Researchers from all over
the world created numerous functional programming languages. The
proliferation of languages became a problem. Many of these languages
were similar enough to be understandable by humans. But researchers
could not collaborate on the implementation or run each other’s
programs. In 1987, Paul Hudak and a group of prominent researchers came
together and created Haskell as a common research and education language
for functional programming. As far as I can remember, Paul always
emphasized other people’s contributions to the Haskell language. There’s
no doubt, however, Paul was a major driving force behind Haskell. This
is just the type of leader Paul was. He painted the vision and gathered
the resources. He would create an environment for others to thrive. He
attracted a remarkable group of world-class researchers at Yale Haskell
Group. I made great friends like Rajiv Mirani. I was fortunate to get to
know researchers like John Peterson, Charles Consel, Martin Odersky, and
Mark Jones. Mark Jones, in particular, developed a variant of Haskell
called Gofer. Gofer’s rich type system enabled me to complete my Ph.D.
thesis work on monad transformers. I decided to pursue a Ph.D. in Yale
Haskell Group largely motivated by Paul’s vision that we could make
programmers more efficient by designing better programming languages.
Paul believed programming languages should be expressive enough to make
programmers productive, yet still retain the simplicity so programs
would be easy to understand. He had a favorite saying “the most
important things in programming are abstraction, abstraction,
abstraction,” which meant a well-written program should be clean and
simple to understand with details abstracted away in modules and
libraries. Paul believed compilers should help programmers write correct
code by catching as many mistakes as possible before a program ever
runs. By the time I completed my Ph.D. program, however, we found it
difficult to get the larger world to share the same view. The computing
industry in the late 1990s and early 2000s turned out to be very
different from what functional programming researchers had anticipated.
There were several reasons for this. First, the von Neumann style
computers kept getting better. When I worked on the Haskell compiler,
computers ran at 25 MHz. CPU speed would grow to over 3 GHz in less than
10 years. The miraculous growth of the conventional computing model made
the benefits compilers could get from functional programming irrelevant.
Second, the tremendous profit derived from Y2K and Internet build-out
enabled companies to employ industry-scale programming, where armies of
coders built complex systems. One of the last pieces of advice Paul gave
me was to accept a job in Silicon Valley working on the then-nascent
non-functional language Java, instead of pursuing a research career on
the East Coast. Like many others, I have witnessed with surprise the
rising interest in programming language design and functional
programming in recent years. No doubt this has to do with the slowing
growth of CPU clock-rate and the growth in multi-core and multi-node
computing. Functional programming frees developers from worrying about
low-level optimization and scheduling, and enables developers to focus
on solving problems at large scale. A more fundamental reason for the
resurgence of functional programming, I believe, lies in the fact
programming has become less of an industrial-scale effort and more of a
small-scale art form. The simplicity and elegance of functional
programming strike a chord with developers. Building on the rich
foundational capabilities nicely abstracted away in web services, open
source modules, and third-party libraries, developers can create
application or infrastructure software quickly and disrupt incumbent
vendors working with outdated practices. I have not kept in touch with
Paul in recent years. But I can imagine it must be incredibly rewarding
for Paul to see the impact of his work and see how the programming model
he worked so hard to advance is finally becoming accepted.


Magento and Docker | Magento Deployment

A little over a month ago I wrote about setting up a Magento cluster
on Docker using Rancher. At the time I identified some shortcomings of
Rancher, such as its lack of support for load balancing. Rancher
released support for load balancing and Docker Machine with
0.16, and I would like to revisit our Magento deployment to cover the
use of load balancers for scalability as well as availability.
Furthermore, I would also like to cover how the Docker Machine
integration makes it easier to launch Rancher compute nodes directly
from the Rancher UI.

Amazon Setup

As before, we will be running our cluster on top of AWS, so if you have
not already done so, follow the steps outlined in the Amazon Environment
Setup section of the earlier tutorial to set up an SSH key pair and a
security group. However, unlike earlier, we will be using the Rancher UI
to launch compute nodes and will require an Access Key ID and Secret
Access Key. To create your key and secret, click through to the IAM
service and select Users from the menu on the left. Click the Create
User
button and specify rancher as the user name in the subsequent
screen and click Create. You will be given the Access Key ID and
Secret Access Key in the dialogue shown below, keep the information safe
as there is no way to recover the secret and you will need this later.

Once you have created the IAM user you will also need to give it
permissions to create Amazon EC2 instances. To do so, select rancher
from the user list and click Attach Policy in the Managed Policies
section. Add the AmazonEC2FullAccess policy to the rancher user so that
we are able to create the required resources from the Rancher UI when
creating compute nodes. Full access is a little more permissive than
required; however, for the sake of brevity we are not creating a custom
policy.


Rancher Setup

After setting up the AWS environment, follow the steps outlined in the
Rancher Server Launch section of the earlier Magento
tutorial to bring up your Rancher server and browse to
http://RANCHER_SERVER_IP:8080/. *Be sure you are using a version of
Rancher after 0.16.* Load the Hosts tab using the respective option
in the left-side menu and click + Add Host to add Rancher compute
nodes. The first time you launch a compute node you will be prompted to
confirm the IP address at which Rancher server is available to your
compute nodes. Specify the Private IP address of the Amazon node on
which Rancher server is running and hit save.


In the Add Host screen select the Amazon EC2 icon and specify the
required information in order to launch a compute node. The required
information is shown below. Enter the access key and secret key that you
created earlier for the rancher IAM user. We are using a t2.micro
instance for our tutorial however you would probably use a larger
instance for your nodes. Select the same VPC as your Rancher server
instance and specify Rancher as the security group to match the
security group that you created earlier in the Environment Setup
section. The compute nodes must be launched in a different availability
zone from the Rancher server, hence we select Zone c (our Rancher Server
was in Zone a). This requirement is due to the fact that Docker Machine
uses the public IP of compute agents to SSH into them from the server;
however, a node’s public IP is not addressable from within its own
subnet.


Repeat the steps above to launch five compute nodes; one for the MySQL
database, two for the load-balanced Magento nodes and two for the load
balancers themselves. I have labeled the nodes as DataNode, Magento1,
Magento2, LB1 and LB2. When all nodes come up you should be able to see
them in the Rancher Server UI as shown below.


Magento Container Setup

Now that we have our Rancher deployment launched we can setup our
Magento containers. However before we launch our Magento containers we
must first launch a MySQL container to serve as our database and
Memcached containers for caching. Let’s launch our MySQL container first
on one of the compute nodes. We do this by clicking + Add Container
on the DataNode host. In the pop-up menu we need to specify a
name for our container and mysql as the source image. Select Advanced
Options > Command > Environment Vars + to add the four required
variables: MYSQL_ROOT_PASSWORD, MYSQL_USER, MYSQL_PASSWORD, and
MYSQL_DATABASE. You may choose any values for the root password and
user password; however, the MySQL user and database must be magento.
After adding all of these environment variables, hit create to create
the container. Note that mysql is the official Docker MySQL image, and
details of what is inside this container can be found on its Docker Hub
page.

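For reference, the equivalent plain docker run would look roughly like
this sketch (placeholder passwords; in Rancher the variables are entered
in the UI instead):

# the magento user and database names are required by this tutorial
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=<root-password> \
  -e MYSQL_USER=magento \
  -e MYSQL_PASSWORD=<user-password> \
  -e MYSQL_DATABASE=magento \
  mysql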

Next we will create the Memcached containers on the two magento compute
nodes, one on each of the Magento nodes. We again give the containers a
name (memcached1 and memcached2) and specify their source images
as memcached. The Memcached containers do not require any further
configuration and therefore we can just click create to setup the
containers. Details of the official memcached image we use can be found
on its Docker Hub page.

Now we are ready to create the Magento containers. On the Magento1 host
create a container named magento1 using the image
usman/magento:multinode.
You need to specify the MYSQL_HOST and MEMCACHED_HOST environment
variables using the container IPs that are listed in the Rancher UI.
Note that for Magento1 you should specify the IP of Memcached1.
Similarly launch a second container called magento2 on the Magento2 host
and specify the MYSQL_HOST and MEMCACHED_HOST environment variables (for
Magento2, use the IP of Memcached2). In a few moments both your Magento
containers should be up and ready. Note that
unlike before we did not have to link the mysql and memcached containers
to our magento containers. This is because Rancher now gives all
containers access to each other over a Virtual Private Network (VPN)
without the need for exposing ports or linking containers. Furthermore
we will not need to expose ports on the Magento containers as we will
use the same VPN to allow the load balancers to communicate with our
nodes.
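
For reference again, each Magento container boils down to something like
the following sketch (IP placeholders; the actual values come from the
Rancher UI):

docker run -d --name magento1 \
  -e MYSQL_HOST=<mysql-container-ip> \
  -e MEMCACHED_HOST=<memcached1-container-ip> \
  usman/magento:multinode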

Load balancer Setup

Now that your containers are up we can setup load balancers to split
traffic onto the Magento containers. Select the Balancing tab in the
left side menu then click Balancers and + Add Load Balancer. In the
subsequent screen you can specify a name and description for your load
balancer. Next you can select the hosts on which the balancer
containers will run; in our case we select both LB1 and LB2. We then need
to select the two Magento containers as targets. In the Listening
Ports
section we need to specify that our Magento containers are
listening for HTTP traffic on port 80 and that we want the load
balancers to also listen for HTTP traffic on port 80.


Lastly, click on the Health Check tab and specify that the load
balancers should send a GET request to the root URI every 2000 ms to
check that the container is still healthy. If three consecutive health
checks fail then the container will be marked as unhealthy and no
further traffic will be routed to it until it can respond successfully
to two consecutive health checks. In a few moments your load balancers
will be ready and you can load Magento on the public IP of either load
balancer host. You will need to look for the IP in the Amazon EC2
console as the Rancher UI only shows the private IP of the nodes. Once
you load the Magento UI, follow the steps outlined in the previous
tutorial to set up your connection to MySQL and to set up a Magento
account.



DNS Round-robin Setup using Amazon Route 53

Now that we have our load balancers up and running we can split traffic
onto our two Magento containers, but we still must send our requests to
one balancer or the other. To enable routing to both load balancers
transparently we need to setup DNS round-robin. For this you may use any
DNS provider of your choice but since we are using Amazon EC2 we will
use Amazon’s Route 53 service. Use the Top menu to select the Route
53
service and select Hosted Zones from the left menu. If you don’t
already have a registered domain and hosted zone you may have to create
one. We are using the rancher-magento.com domain and hosted zone. In
your hosted zone click the Create Record Set button and specify a
subdomain such as lb.rancher-magento.com in the form which loads to
the right of the screen*. S*elect type A – IPv4 address and specify
the public IP address of one of your load balancer hosts. In the
Routing Policy section select Weighted, and enter 10 as the weight.
Enter 1 as the Set ID and click Save Record Set. Repeat exactly the
same process once more but use the public IP of the second load-balancer
host. This pair of DNS entries is specifying that we want to route
clients who ask for lb.rancher-magento.com to the two specified IPs.
Since the records have the same weight the traffic will be split
evenly between the two load balancers. We can now load up our Magento UI
using http://lb.rancher-magento.com instead of having to specify the
IP.
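
If you prefer to script this instead of using the console, an equivalent
weighted record can be created with the AWS CLI along these lines (zone
ID, TTL, and IP are placeholders; run it once per load balancer host
with a different SetIdentifier):

aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "lb.rancher-magento.com",
      "Type": "A",
      "SetIdentifier": "1",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{"Value": "<public-ip-of-lb1-host>"}]
    }
  }]
}'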


Wrapping up

Putting it all together, we get the complete cluster setup. Using the
DNS entries our web browsers are directed to one of the load balancers
LB1, or LB2. By having two load balancers we have split traffic and
hence reduced the load on each of our load balancer instances. The load
balancers will then proxy traffic to either Magento1 or Magento2. This
again allows us to spread the load to the separate containers running on
their own hosts. We have set up only two Magento containers, but you
could set up as many as you need. Furthermore, the health check setup
ensures that if one of the Magento containers fails the traffic will
quickly be diverted to the remaining container without human
intervention. Each of the Magento containers has a Memcached server
running on its own host to provide fast access to frequently used data.
However, both Magento containers use the same MySQL container to ensure
consistency between the two containers. By using Rancher’s Docker
Machine support we were able to launch all hosts (other than Rancher
Server) directly from the Rancher UI. In addition, due to Rancher’s VPN
we did not have to expose ports on any of our containers nor did we have
to link containers. This greatly simplifies the Magento container setup
logic. With support for load balancers and Docker Machine (as well as
Docker Compose coming soon), Rancher is becoming a much more viable
option for running large-scale, user-facing deployments.

To learn more about Rancher, please join us for one of our monthly
online meetups. You can register for an upcoming meetup by following the
link below.


GlusterFS Docker | Building an HTML5 Game

GlusterFS
is a scalable, highly available, and distributed network file system
widely used for applications that need shared storage including cloud
computing, media streaming, content delivery networks, and web cluster
solutions. High availability is ensured by the fact that storage data is
redundant, so if one node fails another will take over without
service interruption. In this post I’ll show you how to create a
GlusterFS cluster for Docker that you can use to store your containers
data. The storage volume where data resides is replicated twice, so data
will be accessible if at least one Gluster container is working. We’ll
use Rancher for Docker management and orchestration. In order to test
storage availability and reliability I’ll be deploying an Asteroids
game.

Prerequisites

Preparing the AWS environment

Before deploying the GlusterFS cluster you need to satisfy the following
requirements in AWS:

  • Create an Access Key to use Rancher’s AWS provisioning feature. You
    can get an Access Key from the Security Credentials section of the
    EC2 Console.
  • Create a Security Group named Gluster which allows this inbound
    traffic:
    • Allow 22/tcp, 2376/tcp and 8080/tcp ports from any source,
      needed for Docker machine to provision hosts
    • Allow 500/udp and 4500/udp ports from any source, needed for
      Rancher network
    • Allow 9345/tcp and 9346/tcp ports from any source, needed for UI
      features like graphs, view logs, and execute shell
    • Allow 80/tcp and 443/tcp ports from any source, needed to
      publish the Asteroids game
  • Create a RancherOS instance (look for RancherOS AMI in
    Community AMIs). Configure it to run Rancher Server by
    defining the following user data and associate it to the Gluster
    Security Group. Once the instance is running you can browse to
    Rancher UI: http://RANCHER_INSTANCE_PUBLIC_IP:8080/

#!/bin/bash
docker run -d -p 8080:8080 rancher/server:v0.17.1

Preparing Docker images

I have prepared two Docker images that we will use later. This is how I
built them.

The GlusterFS server image

This is the Dockerfile:

FROM ubuntu:14.04

MAINTAINER Manel Martinez <manel@nixelsolutions.com>

RUN apt-get update && \
    apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:gluster/glusterfs-3.5 && \
    apt-get update && \
    apt-get install -y glusterfs-server supervisor

RUN mkdir -p /var/log/supervisor

ENV GLUSTER_VOL ranchervol
ENV GLUSTER_REPLICA 2
ENV GLUSTER_BRICK_PATH /gluster_volume
ENV GLUSTER_PEER **ChangeMe**
ENV DEBUG 0

VOLUME ["/gluster_volume"]

RUN mkdir -p /usr/local/bin
ADD ./bin /usr/local/bin
RUN chmod +x /usr/local/bin/*.sh
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

CMD ["/usr/local/bin/run.sh"]

As you can see, we are using 2 replicas for distributing the Gluster
volume ranchervol. All its data will be persisted in Docker volume
/gluster_volume. Note that we are not exposing any port because
GlusterFS containers are connecting through Rancher network. The
run.sh script is as follows:

#!/bin/bash

[ "$DEBUG" == "1" ] && set -x

prepare-gluster.sh &
/usr/bin/supervisord

It will invoke another script to prepare the GlusterFS cluster in the
background. This is required because Gluster commands need to be
executed while the daemon is running. This is the content of the
prepare-gluster.sh script:

#!/bin/bash

set -e

[ "$DEBUG" == "1" ] && set -x

if [ "$GLUSTER_PEER" == "**ChangeMe**" ]; then
# This node is not connecting to the cluster yet
exit 0
fi

echo "=> Waiting for glusterd to start..."
sleep 10

if gluster peer status | grep $GLUSTER_PEER >/dev/null; then
echo "=> This peer is already part of Gluster Cluster, nothing to do..."
exit 0
fi

echo "=> Probing peer $GLUSTER_PEER..."
gluster peer probe $GLUSTER_PEER

echo "=> Creating GlusterFS volume $GLUSTER_VOL..."
# RANCHER_IP is expected to hold this container's Rancher network address in CIDR form (e.g. 10.42.0.5/16)
my_rancher_ip=`echo $RANCHER_IP | awk -F/ '{print $1}'`
gluster volume create $GLUSTER_VOL replica $GLUSTER_REPLICA $GLUSTER_PEER:$GLUSTER_BRICK_PATH $my_rancher_ip:$GLUSTER_BRICK_PATH force

echo "=> Starting GlusterFS volume $GLUSTER_VOL..."
gluster volume start $GLUSTER_VOL

As we can see, if we don’t provide the GLUSTER_PEER environment variable
the container will only start the GlusterFS daemon and wait for a second
peer container to join the cluster. The second container needs to know
about GLUSTER_PEER address in order to contact it (peer probe) and
create the shared storage volume. This is the supervisor configuration
file, needed to start GlusterFS daemon:

[supervisord]
nodaemon=true

[program:glusterd]
command=/usr/sbin/glusterd -p /var/run/glusterd.pid

The following commands are required to publish the Docker image:

docker build -t nixel/rancher-glusterfs-server .
docker push nixel/rancher-glusterfs-server

The Asteroids game image

This is the image we will use to publish the Asteroids HTML5 game for
testing Gluster HA capabilities. This
container acts as a GlusterFS client that will mount the shared volume
where the following game content is being stored:

  • Static files (HTML, JS, CSS) needed to open the client-side game in
    your browser. An Nginx server will publish these to the Internet.
  • A WebSocket server application used to handle user connections and
    control game logic. A Node.js service will publish this application
    to the Internet.

This is the Dockerfile which defines the image:

FROM ubuntu:14.04

MAINTAINER Manel Martinez <manel@nixelsolutions.com>

RUN apt-get update && \
    apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:gluster/glusterfs-3.5 && \
    apt-get update && \
    apt-get install -y git nodejs nginx supervisor glusterfs-client dnsutils

ENV GLUSTER_VOL ranchervol
ENV GLUSTER_VOL_PATH /mnt/${GLUSTER_VOL}
ENV GLUSTER_PEER **ChangeMe**
ENV DEBUG 0

ENV HTTP_CLIENT_PORT 80
ENV GAME_SERVER_PORT 443
ENV HTTP_DOCUMENTROOT ${GLUSTER_VOL_PATH}/asteroids/documentroot

EXPOSE ${HTTP_CLIENT_PORT}
EXPOSE ${GAME_SERVER_PORT}

RUN mkdir -p /var/log/supervisor ${GLUSTER_VOL_PATH}
WORKDIR ${GLUSTER_VOL_PATH}

RUN mkdir -p /usr/local/bin
ADD ./bin /usr/local/bin
RUN chmod +x /usr/local/bin/*.sh
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ADD ./etc/nginx/sites-available/asteroids /etc/nginx/sites-available/asteroids

RUN echo “daemon off;” >> /etc/nginx/nginx.conf
RUN rm -f /etc/nginx/sites-enabled/default
RUN ln -fs /etc/nginx/sites-available/asteroids /etc/nginx/sites-enabled/asteroids
RUN perl -p -i -e "s/HTTP_CLIENT_PORT/${HTTP_CLIENT_PORT}/g" /etc/nginx/sites-enabled/asteroids
RUN HTTP_ESCAPED_DOCROOT=`echo ${HTTP_DOCUMENTROOT} | sed "s/\//\\\\\//g"` && perl -p -i -e "s/HTTP_DOCUMENTROOT/${HTTP_ESCAPED_DOCROOT}/g" /etc/nginx/sites-enabled/asteroids

RUN perl -p -i -e "s/GAME_SERVER_PORT/${GAME_SERVER_PORT}/g" /etc/supervisor/conf.d/supervisord.conf
RUN HTTP_ESCAPED_DOCROOT=`echo ${HTTP_DOCUMENTROOT} | sed "s/\//\\\\\//g"` && perl -p -i -e "s/HTTP_DOCUMENTROOT/${HTTP_ESCAPED_DOCROOT}/g" /etc/supervisor/conf.d/supervisord.conf

CMD [“/usr/local/bin/run.sh”]

And this is the run.sh script:

#!/bin/bash

set -e

[ "$DEBUG" == "1" ] && set -x && set +e

if [ "$GLUSTER_PEER" == "**ChangeMe**" ]; then
echo "ERROR: You did not specify the GLUSTER_PEER environment variable - Exiting..."
exit 0
fi

ALIVE=0
for PEER in `echo "$GLUSTER_PEER" | sed "s/,/ /g"`; do
echo "=> Checking if I can reach GlusterFS node $PEER ..."
if ping -c 10 $PEER >/dev/null 2>&1; then
echo "=> GlusterFS node $PEER is alive"
ALIVE=1
break
else
echo "*** Could not reach server $PEER ..."
fi
done

if [ "$ALIVE" == 0 ]; then
echo "ERROR: could not contact any GlusterFS node from this list: $GLUSTER_PEER - Exiting..."
exit 1
fi

echo "=> Mounting GlusterFS volume $GLUSTER_VOL from GlusterFS node $PEER ..."
mount -t glusterfs $PEER:/$GLUSTER_VOL $GLUSTER_VOL_PATH

echo "=> Setting up asteroids game..."
if [ ! -d $HTTP_DOCUMENTROOT ]; then
git clone https://github.com/BonsaiDen/NodeGame-Shooter.git $HTTP_DOCUMENTROOT
fi

my_public_ip=`dig -4 @ns1.google.com -t txt o-o.myaddr.l.google.com +short | sed 's/"//g'`
perl -p -i -e "s/HOST = '.*'/HOST = '$my_public_ip'/g" $HTTP_DOCUMENTROOT/client/config.js
perl -p -i -e "s/PORT = .*;/PORT = $GAME_SERVER_PORT;/g" $HTTP_DOCUMENTROOT/client/config.js

/usr/bin/supervisord

As you can see, we need to tell the container which GlusterFS nodes
serve the ranchervol storage by using the GLUSTER_PEER environment
variable. Although a GlusterFS client does not need to know about all
cluster nodes, this is useful so the Asteroids container is able to
mount the volume if at least one GlusterFS container is alive. We will
prove this HA feature later. In this case we are exposing 80 (Nginx)
and 443 (Node.js Websocket server) ports so we can open the game in our
browser. This is the Nginx configuration file:

server {
listen HTTP_CLIENT_PORT;
location / {
root HTTP_DOCUMENTROOT/client/;
}
}

And the following supervisord configuration is required to run Nginx and
Node.js:

[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx

[program:nodejs]
command=/usr/bin/nodejs HTTP_DOCUMENTROOT/server/server.js GAME_SERVER_PORT

Finally, the run.sh script will download the Asteroids source code and
save it on the GlusterFS shared volume. The last step is to replace the
required parameters on configuration files to run Nginx and Node.js
server application. The following commands are needed to publish the
Docker image:

docker build -t nixel/rancher-glusterfs-client .
docker push nixel/rancher-glusterfs-client

Creating Docker hosts

Now we need to create three Docker hosts, two of them used to run
GlusterFS server containers, and the third to publish the Asteroids
game.
In Rancher UI, click the + Add Host button and choose the Amazon EC2
provider. You need to specify, at least, the following information:

  • Container names
  • Amazon Access Key and Secret Key that you got before.
  • EC2 Region, Zone and VPC/Subnet ID. Be sure to choose the same
    region, zone and VPC/subnet ID where Rancher Server is deployed.
  • Type the Security Group name that we created before: Gluster.

Repeat this step three times to create gluster01, gluster02, and
asteroids hosts.

Adding GlusterFS server containers

Now you are ready to deploy your GlusterFS cluster. First, click the +
Add Container button on the gluster01 host and enter the following
information:

  • Name: gluster01
  • Image: nixel/rancher-glusterfs-server:latest

Expand Advanced Options and follow these steps:

  • Volumes section – Add this volume:
    /gluster_volume:/gluster_volume
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox

Now wait for the gluster01 container to be created and copy its Rancher
IP address; you will need it next. Then click the + Add Container button
on gluster02 host to create the second GlusterFS server container
with the following configuration:

  • Name: gluster02
  • Image: nixel/rancher-glusterfs-server:latest

Expand Advanced Options and follow these steps:

  • Command section – Add an Environment Variable named GLUSTER_PEER
    which value is the gluster01 container IP. In my case it is
    10.42.46.31
  • Volumes section – Add this volume:
    /gluster_volume:/gluster_volume
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox
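
For reference, gluster02’s configuration corresponds roughly to the
docker run below (a sketch only; the peer IP comes from your own
gluster01 container, and Rancher attaches the managed network for you):

# privileged mode, persistent brick volume, and the peer address from gluster01
docker run -d --name gluster02 --privileged \
  -e GLUSTER_PEER=10.42.46.31 \
  -v /gluster_volume:/gluster_volume \
  nixel/rancher-glusterfs-server:latest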

Now wait for the gluster02 container to be created and open its menu,
then click the View Logs option.
You will see messages at the bottom of the log screen confirming that
the shared volume was successfully created.
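
If you want to double-check from the command line, you can also inspect
the cluster from inside either server container; a quick sketch using
docker exec (use the container IDs or names shown by docker ps):

docker exec -it <gluster02-container-id> gluster peer status
docker exec -it <gluster02-container-id> gluster volume info ranchervol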

Adding Asteroids container

Now it is time to create our GlusterFS client container, which will
publish the Asteroids game to the Internet. Click + Add Container on the
asteroids host and enter the following container information:

  • Name: asteroids
  • Image: nixel/rancher-glusterfs-client:latest
  • Port Map: map 80 (public) port to 80 (container) TCP
    port
  • Port Map: map 443 (public) port to 443 (container) TCP
    port

Expand Advanced Options and follow these steps:

  • Command section – Add an Environment Variable named
    GLUSTER_PEER which value is a comma separated list of gluster01
    and gluster02 containers IPs. In my case I’m typing this:
    10.42.46.31,10.42.235.105
  • Networking section – Choose Managed Network on Docker0
  • Security/Host section – Enable Give the container full access to
    the host
    checkbox

Note that we are not configuring any container volume, because all data
is stored in GlusterFS cluster.
Wait for the asteroids container to be created and show its logs. At the
top you will see the volume being mounted and the game being set up, and
at the bottom you will see how the Nginx server and Node.js application
are started.
At this point your Rancher environment is up and running.

Testing GlusterFS HA capabilities

It is time to play and test GlusterFS HA capabilities. What we will do
now is stop one GlusterFS container and check that the game does not
suffer any downtime. Browse to http://ASTEROIDS_HOST_PUBLIC_IP and
you will access Asteroids game, enter your name and try to explode some
asteroids. Go to Rancher UI and stop gluster02 container, then open
a new browser tab and navigate to the game again. The game is
accessible. You can start gluster02 container, then stop
gluster01 container, and try again. You are still able to play.
Finally, keep gluster01 container stopped, restart asteroids
container and wait for it to start. As you can see, if at least one
GlusterFS server container is running you are able to play. Finally, you
may want to stop both gluster01 and gluster02 containers to check how the
game becomes unavailable because its public content is no longer
reachable. To recover the service, start gluster01 and/or gluster02
containers again.

Conclusion

Shared storage is a required feature when you have to deploy software
that needs to share information across all nodes. In this post you have
seen how to easily deploy a Highly Available shared storage solution for
Rancher based on GlusterFS Docker images. By using an Asteroids game you
have checked that storage is available when at least one GlusterFS
container is running. In future posts we will combine the shared storage
solution with the Rancher Load Balancing feature, added in version 0.16,
so you will see how to build scalable, distributed, and highly available
web server solutions ready for production use. To learn
more about Rancher, please join us for our next online meetup, where
we’ll be demonstrating some of these features and answering your
questions. Manel Martinez is a Linux systems
engineer with experience in the design and management of scalable,
distributable and highly available open source web infrastructures based
on products like KVM, Docker, Apache, Nginx, Tomcat, Jboss, RabbitMQ,
HAProxy, MySQL and XtraDB. He lives in Spain, and you can find him on
Twitter @manel_martinezg.


Deploy Python Application | Open Source Load Balancer

Recently
Rancher provided a disk image to be used to deploy RancherOS v0.3 on
Google Compute Engine (GCE). The image supports RancherOS cloud config
functionality. Additionally, it merges the SSH keys from the project,
instance and cloud-config and adds them to the rancher user.

Building The Setup

In this post, I will cover how to use the RancherOS image on GCE to set
up a MongoDB replica set. Additionally, I will cover how to use one of
the recent features of the Rancher platform, the Load Balancer. In order
to make the setup more realistic, I created a simple Python application
that counts the number of hits on the website and saves this number in a
MongoDB database. The setup will include multiple servers on different
cloud hosting providers:

  1. three (g1-small) servers on GCE to deploy the MongoDB replicaset.
  2. one (n1-standard-1) server on GCE to install the Rancher platform.
  3. one server on Digital Ocean to hold the application containers.
  4. one server on Digital Ocean which will be used as a Load Balancer on
    Rancher platform.

RancherOS On GCE

We will import the RancherOS disk image on GCE to be used later in the
setup. To import the image you need to create a Google Cloud Storage
object to which we will upload the RancherOS disk image. To create the
cloud storage object you can use the web UI, or you can use the gsutil
tool, which lets you access the Google Cloud Storage service from the
command line; note that you first need to authenticate gsutil before it
can create a new storage object. Next, upload the image to the newly
created storage object. The only thing left to do is create the
RancherOS image, which will be used later: click create new image under
the Images section.
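
As a rough sketch, the same flow with the gsutil and gcloud command-line
tools would look something like this (bucket name and image filename are
hypothetical):

# create the storage bucket
gsutil mb gs://rancheros-images
# upload the RancherOS disk image archive
gsutil cp rancheros-v0.3.tar.gz gs://rancheros-images/
# register the uploaded archive as a bootable GCE image
gcloud compute images create rancheros-v0-3 \
    --source-uri gs://rancheros-images/rancheros-v0.3.tar.gz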

Start RancherOS On GCE

After creating the RancherOS image, create three machines that will be
used to set up a MongoDB replica set and select RancherOS as their
image.
For the sake of this setup I created a Networking zone with one rule
that opens every TCP/UDP port on the server. Obviously, you shouldn’t do
that on a production server.
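
If you prefer the command line, a single permissive rule like the one I
described could be created roughly as follows (the rule name is
hypothetical, and again, do not do this in production):

# open all TCP/UDP ports to the world -- test environments only
gcloud compute firewall-rules create allow-all-testing \
    --allow tcp:1-65535,udp:1-65535 \
    --source-ranges 0.0.0.0/0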

Docker Images

Now since the RancherOS image is ready, we need to get the Docker images
ready too. We will be using a Docker image for MongoDB and for the
Python application. I will be using the official
MongoDB
image, which will
simply run MongoDB server. However, to run the container as a part of
the replica set you need to add –replicaSet option to the running
command. And for the python application I will be using a Docker image
which will pull the Flask application from github and run it using
Gunicorn, of course if you need to add more sense of “production” to the
setup you will need to add more tweaks to the Docker image, but this
example is good enough to give you a good idea about the setup. The
Dockerfile of the python application:

FROM ubuntu:latest
MAINTAINER Hussein Galal

RUN apt-get -q update
RUN apt-get install -yqq python python-dev python-distribute python-pip python-virtualenv
RUN apt-get install -yqq build-essential git

RUN mkdir -p /var/www/app
ADD run.sh /tmp/run.sh
ADD gunicorn.py /var/www/app/
RUN chmod u+x /tmp/run.sh
EXPOSE 80
WORKDIR /var/www/app/
ENTRYPOINT /tmp/run.sh

The run.sh script will pull the Flask application from GitHub and run
Gunicorn:

git clone https://github.com/galal-hussein/Flask-example-app.git ../app

virtualenv venv
./venv/bin/pip install gunicorn
./venv/bin/pip install -r requirements.txt
./venv/bin/gunicorn -c gunicorn.py app:app

Let’s now build and push this image to Docker hub to be used later when
we deploy the applications:

~# docker build -t husseingalal/flask_app .
~# docker push husseingalal/flask_app

The Flask application is very simple: it displays the number of
pageviews and the OS hostname, just to make sure that the load balancer
is working fine. Here is a small snippet from the application:

@app.route('/')
def cntr():
    mongo.db.rancher.update({"Project": "Rancher"}, {"$inc": {"pageviews": 1}}, True)
    posts = mongo.db.rancher.find({"Project": "Rancher"})[0]
    return render_template('index.html', posts=posts, hostname=socket.gethostname())

Rancher Platform

Rancher is a container management platform that can connect containers
across different hosts, and it provides a set of features including load
balancing, monitoring, logging, and integration with existing user
directories (e.g., GitHub) for identity management. To deploy the
Rancher platform on a machine, log in to the machine and run this
command:

~# docker run -d -p 8080:8080 rancher/server

This command will create a Docker container running the Rancher server,
which listens on port 8080 and is mapped to port 8080 on the host. After
running the command, wait a few minutes until the server is ready, and
then log in to the server.
The next step is to register the machines with the Rancher platform. On
the Rancher platform, click on “Add Host” to register each machine. You
have the option to use the Docker Machine integration to directly create
Digital Ocean or Amazon EC2 machines, or you can just copy the
registration command to any server that has Docker installed:
7
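The registration command copied from the UI looks roughly like the
following; the exact URL and token are generated by your own Rancher
server:

~# docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock \
    rancher/agent http://<rancher-server-ip>:8080/v1/scripts/<registration-token>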
After running the command on the 3 MongoDB servers, you will see
something like this:
8

MongoDB Replica Set

Replication ensures that your data exists on different servers to
increase availability. In MongoDB you set up replication by creating a
replica set: a group of MongoDB servers made up of a primary server and
multiple secondary servers that keep identical copies of the primary’s
data. MongoDB achieves this by keeping a log of operations, called the
oplog, that contains the write operations. The secondary servers also
maintain their own oplogs; they fetch the operations from the member they
are syncing from. I’ll create 3 MongoDB containers, each on a different
host. Each container will run with the --replSet option, which specifies
a name for the replica set:
9
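The equivalent docker run command for the first container would look
something like this (the container name is an example; the replica set
name matches the config used below):

# arguments placed after the image name are passed to mongod
~# docker run -d --name mongo1 mongo --replSet rancher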
Create the rest of the MongoDB containers with the same option so that
they become part of the replication group. After creating the 3
containers, initiate the replication by connecting to one of the MongoDB
instances and running rs.initiate(config) in the MongoDB JavaScript
shell:

> config = {
    "_id" : "rancher",
    "members" : [
        {"_id" : 0, "host" : "<ip-of-the-1st-container>:27017"},
        {"_id" : 1, "host" : "<ip-of-the-2nd-container>:27017"},
        {"_id" : 2, "host" : "<ip-of-the-3rd-container>:27017"}
    ]
}
> rs.initiate(config)

rancher:PRIMARY>

The prompt changing to rancher:PRIMARY> means that this container is now
the primary server of the MongoDB replica set.

Deploy The App Containers

Now let’s deploy the application containers that we created earlier.
We’ll create two app containers, which we will load balance between in
the next section. To differentiate between the app containers, I will
specify the hostname of each container as an option when we create it:
10
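As a sketch, the first app container might be started like this (the
hostname and host port mapping are examples; any MongoDB connection
settings the application expects are omitted here):

~# docker run -d -h app1 -p 8000:80 husseingalal/flask_app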
The second container will be created with the same options as the first
one, but we will map port 80 to port 8001 on the host to be able to test
the container separately.

Rancher’s Load Balancer

The final step is to create a load balancer to distribute the requests
between the two app containers. Rancher’s Load Balancer distributes
network traffic across a number of containers: for each host selected to
act as a Load Balancer, a Load Balancer Agent system container is started
with HAProxy installed in it. For more information, see the documentation
on Rancher’s Load Balancer.

To create a Load Balancer using the Rancher platform, make sure to select
the application containers, and select the ports on which you will be
receiving and sending requests. Also note that we used the round robin
algorithm to distribute the requests between the two app containers; the
other algorithms available are leastconn and source.
11
12
You can also configure Health Checks to monitor the availability of
the application containers, using GET, HEAD, POST, or other HTTP methods.
In this example, I created an endpoint called /healthcheck that will be
used to check whether the application server is up and running:
13
Now let’s test the setup by accessing the URL of the Load Balancer:
14
15
You can also hit the /healthcheck endpoint to verify that the app is up
and running:

$ curl http://45.55.210.170/healthcheck
200 OK From app1

$ curl http://45.55.210.170/healthcheck
200 OK From app2

Conclusion

RancherOS v0.3.0 can now be deployed on Google Compute Engine (GCE).
Using RancherOS and the Rancher platform, you can put together a
production environment that uses the most recent features of the Rancher
platform, like load balancing, which allows devs and ops to deploy
large-scale applications. Both Rancher and RancherOS are open source
tools, and can be downloaded from GitHub. If you’re interested in
learning more, join our next Online Meetup to hear from some of our
developers, and see the latest features going into the Rancher and
RancherOS projects. You can register below:

Source

Self Driving Clusters – Managed Autoscaling Kubernetes on AWS

Giant Swarm provides managed Kubernetes clusters for our customers, which are operated 24/7 by our operations team. Each customer has their own private control plane which they can use to create as many tenant clusters as they require.

This results in a large number of tenant clusters and control planes that we need to manage and keep up to date. So automation is essential and for that, we leverage Kubernetes itself. Our microservices and operators (custom controllers) run in a Kubernetes cluster.

Managed Components

From the beginning, our tenant clusters have come with managed components like the Nginx Ingress Controller, Calico, and DNS. Our customers have been asking us to manage more components for them. This allows them to focus on their applications, which is what they really care about. We’re calling this the Managed Cloud Native Stack, and we’re hard at work adding more components to our app catalog.

We’re delighted to announce the latest managed component for AWS clusters is the upstream cluster-autoscaler.

Scaling – how it worked before

Our control plane gives customers an API with which they can create clusters and scale them. This makes it easy to automate provisioning clusters. Many customers were also using our API for scaling. One customer has even written an operator to do this.

The first step when adding components from the community is that our Solutions Engineers work with our customers to get them installed in their clusters. This is documented and can then be used as a tutorial by other customers. Once we see overall demand and there is a stable solution, we add the component to our official app catalog and provide 24/7 support for it.

We have already included metrics-server as an essential component in our clusters for a while now. It is a requirement for the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler, and many of our customers already use these to autoscale their pods, so this is something else they don’t have to worry about in their clusters. Now both HPA and VPA can be used in conjunction with the cluster-autoscaler to autoscale both pods and nodes.
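For example, a standard HPA can be created with kubectl; the deployment
name and thresholds here are purely illustrative. When the HPA scales
pods beyond the capacity of the current nodes, the cluster-autoscaler
adds nodes so the pending pods can be scheduled:

$ kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10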

Autoscaling

When adding the autoscaler to our app catalog we realized that it would be nice to install it as a default app. Why not make all clusters ship with autoscaling enabled by default? So we decided to install it as an essential into all AWS clusters. Users can define the cluster size with a minimum and maximum number of nodes. If both are identical the cluster will not be autoscaled.

Dynamic clusters

With node autoscaling your clusters become more dynamic and are better able to cope with changes in load. If pods cannot be scheduled due to insufficient resources, your cluster will be scaled up without requiring manual intervention. If nodes are underutilized, your cluster will be scaled down and your costs will be reduced.

App Catalogs

Soon we will be adding more optional apps and an incubation catalog. This lets our customers easily try out the latest new components from the community. Once these components are ready they will graduate to our main app catalog, and be supported 24/7 by our operations team.

With Giant Swarm’s immutable infrastructure the orchestration of your clusters becomes super simple. You can easily create clusters, scale them up and down, and upgrade to the latest Kubernetes version. The immutability of the whole infrastructure makes this fast and resilient, and always brings the cluster into a known and tested state. Request your free trial of the Giant Swarm Infrastructure here.

Source

Giant Swarm’s Top Cloud Native Posts of 2018

Our team is hard at work perfecting the ideal Kubernetes stack for you. Still, we find time to share knowledge about our experiences in the containerized world. With 2018 now fading into memory, we’d like to take a moment to reflect on the cloud native topics that you all loved so much.

Skim through our top 10 list below and catch up on an article or two that you may have missed, or would like to revisit.

Source

Running our own ELK stack with Docker and Rancher

 


At Rancher Labs we generate a lot of logs in our internal environments. As
we conduct more and more testing on these environments, we have found the
need to centrally aggregate the logs from each environment. We decided
to use Rancher to build and run a scalable ELK stack to manage all of
these logs. For those who are unfamiliar with the ELK stack, it is made
up of Elasticsearch, Logstash and Kibana. Logstash provides a pipeline
for shipping logs from various sources and input types, combining,
massaging and moving them into Elasticsearch or several other stores. It
is a really powerful tool in the logging arsenal. Elasticsearch is a
document database that is really good at search. It can take our
processed output from Logstash, analyze it, and provide an interface to
query all of our logging data. Together with Kibana, a powerful
visualization tool that consumes Elasticsearch data, you have an amazing
ability to gain insights from your logging.

Previously, we had been using Elastic’s Found product and were very
impressed. One of the interesting things we realized while using Found
for Elasticsearch is that the ELK stack really is made up of discrete
parts. Each part of the stack has its own needs and considerations. Found
provided us Elasticsearch and Kibana. There was no Logstash endpoint
provided, though it was sufficiently documented how to use Found with
Logstash. So, we have always had to run our own Logstash pipeline.

Logstash

Our Logstash implementation includes three tiers, one each for
collection, queueing and processing:

– Collection tier – provides remote endpoints for logging inputs, such as
Syslog, Gelf, or Logstash. Once it receives logs, it quickly places them
onto a Redis queue.
– Queueing tier – provided by Redis, a very fast in-memory database. It
acts as a buffer between the collection and processing tiers.
– Processing tier – removes messages from the queue and applies filter
plugins that manipulate the data into the desired format. This tier does
the heavy lifting and is often a bottleneck in a log pipeline. Once it
processes the data, it forwards it along to the final destination, which
is Elasticsearch.

Logstash Pipeline
Each Logstash container has a configuration sidekick that provides
configuration through a shared volume.

By breaking the stack into these tiers, you can scale and adapt each part
without major impact to the other parts of the stack. As a user, you can
also scale and adjust each tier to suit your needs. A good read on how to
scale Logstash can be found on Elastic’s web page: Deploying and Scaling
Logstash.

To build the Logstash stack we started as we usually do. In general, we
try to reuse as much as possible from the community. Looking at the
DockerHub registry, we found there is already an official Logstash image
maintained by Docker. The real magic is in the configuration of Logstash
at each of the tiers. To achieve maximum flexibility with configuration,
we built a confd container that consumes KV (Key Value) data for its
configuration values. The Logstash configurations are the most volatile
and unique to an organization, as they provide the interfaces for the
collection, indexing, and shipping of the logs. Each organization is
going to have different processing needs, formatting, tagging, etc. To
achieve maximum flexibility we leveraged the confd tool and Rancher
sidekick containers. The sidekick creates an atomic scheduling unit
within Rancher. In this case, our configuration container exposes the
configuration files to our Logstash container through volume sharing. In
doing this, there is no modification needed to the default Docker
Logstash image. How is that for reuse!

Elasticsearch

Elasticsearch is built out in three tiers as well. The production
deployment recommendations discuss having nodes that are dedicated
masters, data nodes and client nodes. We followed the same deployment
paradigm with this application as with the Logstash implementation: we
deploy each role as a service, and each service is composed of an
official image paired with a confd sidekick container to provide
configuration. It ends up looking like this:

Elastic Search Tier
Each tier in the Elasticsearch stack has a confd container providing
configurations through a shared volume. These containers are scheduled
together inside of Rancher.

In the current configuration, we use the master service to provide node
discovery. When using the Rancher private network, we disable multicast
and enable unicast. Since every node in the cluster points to the master,
they can talk to one another, and the Rancher network also allows the
nodes to talk to one another. As part of our stack, we also use the Kopf
tool to quickly visualize our cluster’s health and perform other
maintenance tasks. Once you bring up the stack, you can use Kopf to see
that all the nodes came up in the cluster.

Kibana 4

Finally, in order to view all of these logs and make sense of the data,
we bring up Kibana to complete our ELK stack. We have chosen to go with
Kibana 4 in this stack. Kibana 4 is launched with an Nginx container to
provide basic auth behind a Rancher load balancer. The Kibana 4 instance
is the official image hosted on DockerHub, and it talks to the
Elasticsearch client nodes. So now we have a full ELK stack for taking
logs and shipping them to Elasticsearch for visualization in Kibana. The
next step is getting the logs from the hosts running your application.

Bringing up the Stack on Rancher

So now you have the backstory on how we came up with our ELK stack
configuration. Here are instructions to run the ELK stack on Rancher.
This assumes that you already have a Rancher environment running with at
least one compute node. We will also be using the rancher-compose CLI
tool, which can be found on GitHub at rancher/rancher-compose. You will
need API keys from your Rancher deployment. In the instructions below, we
will bring up each component of the ELK stack as its own stack in
Rancher. A stack in Rancher is a collection of services that make up an
application, defined by a Docker Compose file. In this example, we will
build the stacks in the same environment and use cross stack linking to
connect services. Cross stack linking allows services in different stacks
to discover each other through a DNS name.

  1. Clone our compose template repository: git clone
    https://github.com/rancher/compose-templates.git
  2. First let’s bring up the Elasticsearch cluster.
    a. cd compose-templates/elasticsearch
    b. rancher-compose -p es up (Other services assume es as the
    elasticsearch stack name) This will bring up four services.
    – elasticsearch-masters
    – elasticsearch-datanodes
    – elasticsearch-clients
    – kopf
    c. Once Kopf is up, click on the container in the Rancher UI, and
    get the IP of the node it is running on.
    d. Open a new tab in your browser and go to the IP. You should see
    one datanode on the page.
  3. Now let’s bring up our Logstash tier.
    a. cd ../logstash
    b. rancher-compose -p logstash up
    c. This will bring up the following services
    – Redis
    – logstash-collector
    – logstash-indexer
    d. At this point, you can point your applications at
    logstash://host:5000.
  4. (Optional) Install logspout on your nodes
    a. cd ../logspout
    b. rancher-compose -p logspout up
    c. This will bring up a logspout container on every node in your
    Rancher environment. Logs will start moving through the pipeline
    into Elasticsearch.
  5. Finally, let’s bring up Kibana 4.
    a. cd ../kibana
    b. rancher-compose -p kibana up
    c. This will bring up the following services
    – kibana-vip
    – nginx-proxy
    – kibana4
    d. Click the container in the kibana-vip service in the Rancher UI.
    Visit the host IP in a separate browser tab. You will be
    directed to the Kibana 4 landing page to select your index.

Now that you have a fully functioning ELK stack on Rancher, you can
start sending your logs through the Logstash collector. By default the
collector is listening for Logstash inputs on UDP port 5000. If you are
running applications outside of Rancher, you can simply point them at
your Logstash endpoint. If your application runs on Rancher, you can use
the optional logspout-logstash service above. If your services run
outside of Rancher, you can configure Logstash with a Gelf input and use
the Docker log driver, as sketched below. Alternatively, you could set up
a Syslog listener, or use any number of supported Logstash input plugins.
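As a rough sketch, assuming a Logstash Gelf input listening on its
default UDP port 12201, a container’s logs can be shipped with the Docker
gelf log driver (the image name and Logstash host are placeholders):

~# docker run -d \
    --log-driver=gelf \
    --log-opt gelf-address=udp://<logstash-host>:12201 \
    <your-app-image>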

Conclusion

Running the ELK stack on Rancher in this way provides a lot of
flexibility to build and scale to meet any organization’s needs. It
also creates a simple way to introduce Rancher into your environment
piece by piece. As an operations team, you could quickly spin up
pipelines from existing applications to existing Elasticsearch clusters.
Using Rancher you can deploy applications following container best
practices by using sidekick containers to customize standard containers.
By scheduling these containers as a single unit, you can separate your
application out into separate concerns. On Wednesday, September 16th,
we hosted an online meetup focused on container logging, where I
demonstrated how to build and deploy your own ELK stack. A recording of
that session is available here.
If you’d like to learn more about using Rancher, please join us for an
upcoming online meetup, or join our beta
program
or request a discussion with one
of our engineers.

Source

Adding Linux Dash As A System Service

Ivan Mikushin discussed adding system services to RancherOS using Docker Compose. Today I want to show you an example of how to deploy Linux Dash as a system service. Linux Dash is a simple, low-overhead, web-based monitoring tool for Linux; you can read more about Linux Dash here. In this post I will add Linux Dash as a system service to RancherOS version 0.3.0, which allows users to add system services using the rancherctl command. The Ubuntu console is the only system service that is currently available in RancherOS.

Creating Linux Dash Docker Image

I built a 32MB node.js busybox image on top of the hwestphal/nodebox image, with linux-dash installed, which runs on port 80 by default. The Dockerfile of this image:

FROM hwestphal/nodebox
MAINTAINER Hussein Galal

RUN opkg-install unzip
RUN curl -k -L -o master.zip https://github.com/afaqurk/linux-dash/archive/master.zip
RUN unzip master.zip
WORKDIR linux-dash-master
RUN npm install

ENTRYPOINT ["node","server"]

The image needs to be available on Docker Hub to be pulled later by RancherOS, so we should build and push the image:

# docker build -t husseingalal/busydash busydash/
# docker push husseingalal/busydash

Starting Linux Dash As A System Service

Linux Dash can be started as a system service in RancherOS using rancherctl service enable <system-service>, where <system-service> is the location of the YAML file that contains the options for starting the system service. linux-dash.yml:

dash:
  image: husseingalal/busydash
  privileged: true
  links:
  - network
  labels:
  - io.rancher.os.scope=system
  restart: always
  pid: host
  ipc: host
  net: host

To start the previous configuration as a system service, run the following command on RancherOS:

~# rancherctl service enable /home/rancher/linux-dash/linux-dash.yml

By using this command, the service will also be added to the rancher.yml file and set to enabled, but a reboot needs to occur in order for it to take effect. After rebooting, you can see that the dash service has been started using the rancherctl command:

rancher@xxx:~$ sudo rancherctl service list
enabled  ubuntu-console
enabled  /home/rancher/linux-dash/linux-dash.yml

And you can see that the Dash container has been started as a system Docker container:

rancher@xxx:~$ sudo system-docker ps
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS               NAMES
447ada85ca78        rancher/ubuntuconsole:v0.3.0   "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        console
fb7ce6f074e6        husseingalal/busydash:latest   "node server"          About an hour ago   Up About an hour                        dash
b7b1c734776b        userdocker:latest              "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        userdocker
2990a5db9042        udev:latest                    "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        udev
935486c2bf83        syslog:latest                  "/usr/sbin/entry.sh    About an hour ago   Up About an hour                        syslog

To test the web UI, just enter the server’s IP in your browser: http://<server-ip>

Conclusion

In version 0.3.0 of RancherOS, you have the ability to create and manage your own RancherOS system services. A system service in RancherOS is a Docker container that starts at OS startup and is defined in Docker Compose format. For more information about system services, see the RancherOS documentation. You can find instructions on how to download RancherOS on GitHub.

Source