Announcing RancherOS: A minimalist distro designed explicitly to run Docker

Today I would like to announce a new open source project called
RancherOS – the smallest, easiest way to run Docker in production and
at scale. RancherOS is the first operating system to fully embrace
Docker, and to run all system services as Docker containers. At Rancher
Labs we focus on building tools that help customers run Docker in
production, and we think RancherOS will be an excellent choice for
anyone who wants a lightweight version of Linux ideal for running
containers.

How RancherOS began

The first question that arises when you think about putting Docker in
production is which OS to use. The simplest answer is to run Docker on
your favorite Linux distribution. However, it turns out the real answer
is a bit more nuanced. After running Docker on just about every distro
over the last year, I eventually decided to create a minimalist Linux
distribution that focuses explicitly on running Docker from the very
beginning.

Docker is a fast-moving target. With a constant drumbeat of releases, it
is sometimes difficult for Linux distributions to keep up. In October
2013, I started working very actively with Docker, eventually leading to
an open source project called Stampede.io. At that time I decided to
target the one Linux distribution that I thought was best for Docker,
since Docker was included by default. With Stampede.io, I was pushing
the boundaries of what was possible with Docker and was able to do some
fun things, like running libvirt and KVM in Docker containers.
Consequently, I always needed the latest version of Docker, which was
problematic. At the time, Docker was releasing new versions each month
(currently Docker is on a two-month release cycle). It would often take
over a month for new Docker versions to make it to the stable version of
the Linux distro. Initially, this didn’t seem like a bad proposition:
a new Docker release couldn’t be considered “stable” on day one anyway,
and I could always use alpha releases of my distribution of choice.
However, alpha distribution releases include other recently released
software, not just Docker, but also alpha kernels and alpha versions of
other packages.

With RancherOS we addressed this by limiting the OS to just the things
we need to run Docker: the Linux kernel, Docker itself, and the bare
minimum amount of code needed to join the two together. Picking which
version of RancherOS to run is as easy as saying which version of Docker
you wish to run. The sole purpose of RancherOS is to run Docker, so our
release schedules are closely aligned. All other software included in
the distribution is considered stable, even if you just picked up the
latest and greatest Docker version.

An OS where everything is a container

When most people think of Docker, they think about running applications.
While Docker is excellent at that, it can also be used to run system
services, thanks to recently added capabilities. Since first starting
Rancher (our Docker orchestration product), we’ve wanted the entire
orchestration stack to be packaged by and run in Docker, not just the
applications we were managing. This was initially quite difficult, since
the orchestration stack needed to interact with lower-level subsystems.
Nevertheless, we carried on with many “hacks” to make it possible. Once
we determined the features we needed within Docker, we helped and
encouraged the development of those items. Finally, with Docker 1.5, we
were able to remove the hacks, paving the way for RancherOS. Docker now
allows sufficient control of the PID, IPC, and network namespaces, as
well as capabilities. This means it is now possible to run
system-oriented processes within Docker containers. In RancherOS we run
absolutely everything in a container, including system services such as
udev, DHCP, ntp, syslog, and cloud-init.
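
To make this concrete, here is a rough sketch of what launching a system
service as a container can look like with the namespace controls
described above; the image name is hypothetical, and the exact flags
RancherOS uses may differ:

# A sketch: run a syslog service in a container that shares the host's
# PID, IPC, and network namespaces (the image name is hypothetical)
docker run -d --name syslog \
    --pid=host --ipc=host --net=host \
    --privileged \
    example/syslog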

A Linux distro without systemd

I have been running systemd and Docker together for a long time. When I
was developing Stampede.io, I initially architected the system to run on
a distribution that heavily leveraged systemd, and I started to notice a
number of strange errors when testing real-world failure scenarios.
Having previously run production cloud infrastructure, I cared very much
about reliably managing things at scale, so when I saw odd errors with
systemd and Docker I started digging into the issue. You can see most of
my comments in this Docker issue and this mailing list thread.

As it turns out, systemd cannot effectively monitor Docker containers,
due to an incompatibility between the two architectures. systemd
monitors the Docker client used to launch the container, not the
container itself, which is not really helpful. I worked hard with both
the Docker and systemd communities to fix the issue, and even went so
far as to create an open source project called systemd-docker: a wrapper
for Docker that attempted to make these two systems work well together.
While it fixed many of the issues, there were still some corner cases I
just couldn’t address. Realizing things must change in either Docker or
systemd, I shifted focus to talking to both projects.

With the announcement of Rocket, more effort is being put into making
systemd a better container runtime. Rocket, as it stands today, is
largely a wrapper around systemd. Additionally, systemd itself has since
added native support for pulling and running Docker images, which seems
to indicate that its developers are more interested in subsuming
container functionality into systemd than in improving interoperability
with Docker. Ultimately, all signs continue to point to no quick
resolution between these two projects.

When looking at our use case for RancherOS, we realized we did not need
systemd to run Docker. In fact, we didn’t need any other supervisor to
sit at PID 1; Docker was sufficient in itself. What we have done with
RancherOS is run what we call “System Docker” as PID 1. All containers
providing core system services are run from System Docker, which also
launches another Docker daemon, called “User Docker”, under which we run
user containers. This separation is quite practical: imagine a user ran
docker rm -f $(docker ps -qa). Without the separation, they would run
the risk of deleting the entire operating system.
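
In practice, the split looks roughly like this (a sketch, assuming the
system-docker client that RancherOS ships for talking to System Docker):

# System containers are managed by System Docker (PID 1)...
sudo system-docker ps       # udev, syslog, dhcp, console, ...
# ...while user workloads live under the separate User Docker daemon
docker ps                   # only your containers
# So the destructive command above is contained:
docker rm -f $(docker ps -qa)   # removes user containers, not the OS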

Minimalist Linux distributions

As users shift workloads to containers, dependencies on the host system
shrink dramatically. All current minimalist Linux distributions have
taken advantage of this fact, allowing them to drastically slim down
their footprint. I love the model distributions such as CoreOS have
pioneered, and we have been inspired by them. By constraining the use
case of RancherOS to running Docker, we decided only core system
services (logging, device management, alerting) and access (console,
ssh) were required. With the ability to run these services in
containers, all we needed was the container system itself and a bit of
bootstrap code (to get networking up, for example). If you take this one
step further and put the server under the management of a
clustering/orchestration system, you can even minimize the need to run a
full console.

First Meetup

On March 31st, I’ll be hosting an online meetup to demonstrate
RancherOS, discuss some of the features we are working on, and answer
any questions you might have. If you would like to learn more, please
register now.

Conclusion

When we looked at simplifying large scale deployments of Docker, there
were no solutions available that truly embraced Docker. We started the
RancherOS project because we love Docker and feel we can significantly
simplify the Linux distribution necessary to run it. Hopefully, this
will allow users to focus more on their container workloads and less on
managing the servers running them. If your primary requirement for
Linux is to run Docker, we’d love for you to give RancherOS a try and
let us know what you think. You can find everything
at https://github.com/rancherio/os.


Docker’s 6th Birthday: How do you #Docker?

Docker is turning 6 years old! Over the years, Docker community members have found some amazing and innovative ways of using Docker technology, and we’ve been blown away by all the use cases we’ve seen from the community at DockerCon. From Docker for Space, where NASA used Docker technology to build software to deflect asteroids, to using “gloo” to glue together traditional apps, microservices, and serverless, you all continue to amaze us year after year.

So this year, we want to celebrate you! From March 18th to the 31st, Docker User Groups all over the world will be hosting local birthday show-and-tell celebrations. Participants will each have 10-15 minutes of stage time to present how they’ve been using Docker. Think of these as lightning talks – your show-and-tell doesn’t need to be polished and it can absolutely be a fun hack and/or personal project. Everyone who presents their work will get a Docker Birthday #6 t-shirt and have the opportunity to submit their Docker Birthday Show-and-tell to present at DockerCon.

Are you new to Docker? Not sure you’d like to present? No worries! Join in the fun and come along to listen, learn, add to your sticker collection and eat cake. Everyone is welcome!

Find a Birthday meetup near you!

There are already Docker Birthday #6 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Don’t see an event in your city?

  • Contact your local Community Leaders via their user group page and see if you can help them organize a celebration!


Rancher now supports Docker logs

Hi, I’m James Harris (@sir_yogi_bear), one of the engineers here at
@Rancher_Labs, and I am excited to announce that this week we added
support for pulling and viewing Docker logs in Rancher. This feature
allows users to work with their containers from the web UI in a much
more involved way. Previously, there was no way to track the output of a
container through Rancher; now you can easily follow both the stdout and
stderr of a container. This is the first step in adding more advanced
log management capabilities to Rancher.

You can see the new feature when you look at any container in Rancher.
In the top left of the container screen you will see several icons for
taking actions on the container. Clicking the Logs icon opens the
container logs screen. We have two ways of displaying a container’s
logs: when a container is created with the tty flag selected, the logs
are combined into a single stream; when a container is created without
the tty flag, the streams are kept separate, and you can view the
combined output, just stdout, or just stderr.
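
Under the hood, this works because a container created without a TTY
keeps stdout and stderr as separate streams, and docker logs preserves
that separation. A quick sketch from the Docker CLI side:

# Started without -t, so the two streams stay separate
docker run -d --name web nginx
docker logs web                # both streams
docker logs web 2>/dev/null    # stdout only
docker logs web 1>/dev/null    # stderr only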
I hope this new functionality in Rancher enables better management of
your containers. Please stay tuned: we’re going to be adding a number of
other capabilities around log management in the future. If you’d like to
see this or any other Rancher functionality in action, please don’t
hesitate to schedule a one-on-one demo, where we can walk you through
the current features and some of the capabilities we’re still working
on.

James Harris
@sir_yogi_bear


AWS and Magento | Creating a Magento Cluster on AWS


[Usman is a server and infrastructure engineer with experience in
building large-scale distributed services on top of various cloud
platforms. You can read more of his work at techtraits.com, or follow
him on Twitter @usman_ismailor or on GitHub.]

Magento is an open-source content management system (CMS) that offers a
powerful toolset for managing eCommerce websites. Magento is used by
thousands of companies, including Nike and Office Max. Today we are
going to walk through the process of setting up a Magento cluster using
Docker and Rancher on the Amazon Elastic Compute Cloud (EC2). We use
Docker because it makes deployments simple, repeatable, and portable
across cloud service providers and in-house hardware. Using Rancher, we
can extend the Docker primitives across multiple servers, even if those
servers are on different cloud providers.

We will be using the official MySQL image, the official Memcached image,
and a Magento image I created. Despite its many benefits, one area where
Docker is still evolving is managing entire clusters. It is possible to
have multi-node deployments with Docker installed on each node, but you
have to manage the containers on each node independently by connecting
through ssh. Furthermore, we lose the ability to connect containers
using Docker linking. This is where Rancher comes in. Rancher creates a
Virtual Private Network (VPN) using IPsec between all Docker containers
and allows us to communicate between containers using standard Docker
primitives. Additionally, Rancher gives us a nice UI to launch and
manage containers without having to ssh into individual servers. We are
launching all nodes on EC2, but you could use any combination of
servers: some from EC2, some from in-house hardware, or some from
Digital Ocean if you choose.

The next sections walk through setting up the Amazon environment, the
Rancher server, and an array of Rancher compute nodes. We will then use
those nodes to launch our Magento deployment.

Amazon Environment Setup

We are going to bring up our cluster on top of Amazon EC2, and for this
we need an Amazon Web Services (AWS) account and some familiarity with
the AWS Console. If you need a refresher, you can peruse the AWS Console
documentation. Once you are signed into the console, click through to
the EC2 service and select the Key Pairs menu item in the Network and
Security section of the side menu. Click Create Key Pair and specify a
name in the pop-up screen to create a key. When you click Create, a pem
file will be downloaded to your computer. Keep the key file somewhere
safe: it is needed to log in to your servers, and it also allows anyone
who obtains it to access them. We will also need to set up a security
group by selecting the Security Groups option in the side menu and
clicking the Create Security Group button. Select the default Virtual
Private Cloud (VPC) to create the security group in, and open ports
8080, 9345, and 9346 to the internet. You will also need to expose port
22 so that you can ssh into your server if necessary. For this port, you
can select the My IP option instead of Anywhere if you would like to
keep login access to your nodes limited.
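
If you prefer the command line, the same setup can be sketched with the
AWS CLI; the key and group names below are our own examples:

# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name rancher-key \
    --query 'KeyMaterial' --output text > rancher-key.pem
chmod 400 rancher-key.pem

# Create a security group in the default VPC and open the Rancher ports
aws ec2 create-security-group --group-name rancher-sg \
    --description "Rancher server and agent ports"
for port in 8080 9345 9346 22; do
    # restrict port 22 to your own IP in practice, as noted above
    aws ec2 authorize-security-group-ingress --group-name rancher-sg \
        --protocol tcp --port $port --cidr 0.0.0.0/0
done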

[Screenshot: Rancher security group]

Rancher Server Launch

We are now ready to launch our Rancher server. Select the Instances
option from the side menu and then click Launch Instance. This is a
seven-step process beginning with selecting an Amazon Machine Image
(AMI); we will just use the default Amazon Linux AMI. Step two requires
us to choose an instance size for the Rancher server. We will use the
t2.micro, the least expensive instance type, but recommend you use a
larger instance type for a real deployment. In step three, we have to
specify instance details; you can leave most fields set to their default
values, but make sure the Auto-assign Public IP setting is set to
enabled, otherwise you will not be able to access your server.

[Screenshot: instance details]

Scroll to the bottom of the page, expand the Advanced Details section,
and add the following code into the User data text box. Amazon uses this
as an initializer for your instance and will make sure that Docker and
the Rancher server are installed and running. Note that we are using the
package manager yum to install Docker and then overwriting the binary
with the latest one from Docker.com. We do this because Rancher requires
Docker version 1.5 or higher, but the repositories have not yet been
updated past version 1.3.

#!/bin/bash
yum install docker -y
wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
chmod +x docker
mv -f ./docker $(which docker)
service docker restart
docker run -d -p 8080:8080 rancher/server

You may select the defaults for all subsequent options, other than the
Security Group, for which you should select the one we created earlier.
Similarly, when asked to select an ssh key, select the one we created
earlier. A few minutes after you create your instance, the Rancher
server should be up and running. You can find your server’s public IP by
selecting the server in the console; browse to
http://RANCHER_SERVER_IP:8080/ in a browser of your choice and you will
see the Rancher web console. Also note the private IP of the Rancher
server from the details section of the Amazon console; we will need it
later on.

[Screenshot: Rancher server instance]

Rancher Compute Node Setup

The first step in creating your Rancher compute nodes is to get the
agent launch command from the Rancher server. To do this, open up your
Rancher server in a browser and click Register a new host. You will be
presented with a pop-up window containing a Docker run command that you
can use to bring up a Rancher agent. Copy that command, and replace the
public IP of the Rancher server with its private IP. This is required
because Rancher uses the private network for inter-server communication,
which is not blocked by the security groups. You will also have to
remove the ‘-it’ switch and replace it with ‘-d’ to reflect the fact
that we are running this container in a non-interactive shell. Also note
that the IP address (52.1.151.186) and secret (6C97B49FE39413B…) shown
below are unique to each setup and will be different for you.

docker run --rm -it --privileged -v /var/run/docker.sock:/var/run/docker.sock \
    rancher/agent http://52.1.151.186:8080/v1/scripts/6C97B49FE39413B2B76B:1424538000000:6UL0o28EXZIkjZbmPOYMGxmM9RU

With this command we can now create our array launch configuration. We
do this by selecting the Launch Configurations item from the side menu
and clicking Create Launch Configuration. You will then be asked to
follow the same seven-step form that you followed for the Rancher server
instance launch. As before, select the Amazon Linux AMI, an instance
type of your choice, storage size, and the security group and ssh key we
created earlier. The only difference is on step three, Configure Launch
Configuration: in the Advanced Details section you must select “Assign a
public IP to every instance” if you wish for your Docker containers to
be publicly accessible. In addition, add the following lines into the
User data text box. This script is identical to the one we used to
launch the Rancher server, other than the last line, where we replaced
the Docker run command for the Rancher server with the modified Docker
run command for the Rancher agent, copied from the UI.

#!/bin/bash
yum install docker -y
wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
chmod +x docker
mv -f ./docker $(which docker)
service docker restart
docker run --rm -d --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent http://172.30.2.200:8080/

Now that we have created the launch configuration, we can create an
autoscaling array with it. Select the Auto Scaling Groups option from
the side menu and click Create Auto Scaling Group. In the first step,
you will be asked to specify the launch configuration; select the one we
just created. In the second step, specify a name for the auto scaling
group and set the group size to however many compute nodes you require.
Note that this value can be changed later if your needs increase or
decrease. We will be using two agents to illustrate the network
connectivity features of Rancher. For Network and Subnet, choose the
default VPC and the same availability zone you used for the Rancher
server. The rest of the steps can be left at default settings. Once you
have created your auto scaling group, wait a few minutes for the
instances to be launched, and then browse to the Rancher server URL to
see that the new agents have been registered. Note that you can always
add more agents to your cluster by editing the Desired Nodes setting in
the auto scaling group configuration. With our launch configuration in
place, all new nodes will register themselves with the Rancher server
automatically.

Magento Setup

Now that we have our two-node Rancher cluster launched, we can set up
our Magento containers. Before we launch our Magento cluster, however,
we must first launch a MySQL container to serve as our database and a
Memcached container for caching. Let’s launch our MySQL container first
on one of the compute nodes. We do this by hovering over the server in
the Rancher server UI and clicking the plus icon. In the pop-up menu, we
need to specify a name for our container and mysql as the source image.
Select the Command tab in the menu to the left and add four environment
variables: MYSQL_ROOT_PASSWORD, MYSQL_USER, MYSQL_PASSWORD, and
MYSQL_DATABASE. You may choose any values for these variables, except
MYSQL_DATABASE, which must be set to magento. After adding all of these
environment variables, hit create to create the container. Note that
mysql is the official Docker MySQL image; details of what is inside this
container can be found on its dockerhub page.
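
For reference, the container Rancher creates here is roughly equivalent
to the following docker run; the variable values are placeholders you
should replace with your own:

# A sketch of what Rancher launches for us (values are placeholders)
docker run -d --name mysql \
    -e MYSQL_ROOT_PASSWORD=rootsecret \
    -e MYSQL_USER=magento_user \
    -e MYSQL_PASSWORD=magento_pass \
    -e MYSQL_DATABASE=magento \
    mysql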

[Screenshot: MySQL environment variables]

Next we will create the Memcached container on the empty compute node by
hitting its plus icon. We again give the container a name and specify
its source image as memcached. The Memcached container does not require
any further configuration, so we can just click create to set up the
container. Details of the official memcached container we use can be
found on its dockerhub page. Lastly, we will create the Magento
container from an image I created called usman/magento. Create the
Magento container on the same compute node as the cache (so that the
cache can be accessed faster) and specify usman/magento as the source
image. In the ports section, add a mapping from 8080 public to 80 in
host ports. Make sure that you selected the Managed Network on docker0
option for both the mysql and memcached containers so that we can
connect to them from our Magento container.

[Screenshot: container links]

In the links section, add links to the mysql and memcached containers
that we just set up. It is important that the mysql container be named
db and the memcached container be named cache, as shown in the image. We
need to choose these names because Rancher creates environment variables
in the Magento container telling it where to connect to MySQL and
Memcached. These variables are based on the names of the linked
containers, so the names need to match. By linking containers through
Rancher, the containers are able to communicate even though we did not
expose their ports. Rancher extends Docker’s linking concept to support
multiple nodes by providing a virtual private network between the hosts
using IPsec tunneling. We will need to know the IP and port at which
MySQL is available to the Magento container so that we can use it later
when we configure Magento through the web interface. So go ahead and hit
create, and wait for the container to get into the Running state.
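
Again for reference, a rough docker run equivalent of this container,
assuming the MySQL and Memcached containers created above were named
mysql and memcached:

# db and cache are the link aliases the Magento image expects
docker run -d --name magento \
    --link mysql:db \
    --link memcached:cache \
    -p 8080:80 \
    usman/magento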

[Screenshot: Execute Shell]

Once the container is running, we can use another Rancher feature to
open a shell into containers directly from the UI to retrieve the
network information we require. Click the inverted chevron next to the
Magento container and select Execute Shell. You will be presented with a
pop-up window containing an embedded shell connected to the container.
In the shell, use the env command to list the environment variables and
grep for DB_PORT_3306_TCP_. This will list the IP, port, and protocol at
which the DB is available. As an aside, the ADDR will be the IP of the
Network Agent on the server where Magento is running, because the
Network Agent proxies traffic to the remote container.

env | grep DB_PORT_3306_TCP_
DB_PORT_3306_TCP_PORT=28428
DB_PORT_3306_TCP_PROTO=tcp
DB_PORT_3306_TCP_ADDR=10.42.81.155

The Magento container should be up by now, and you can browse to port
8080 on the public interface of the Amazon server running Magento. Note
that the public IP is not shown inside Rancher, as we used the private
interfaces to set up the agents; you will have to look up the IP in the
Amazon console. If everything is working correctly, you should see the
Magento installation wizard, which will guide you through the rest of
the process. The initial page will ask you to accept the Terms of
Service, and subsequent pages will ask for locale information and
database configuration. For database configuration, enter the value of
$DB_PORT_3306_TCP_ADDR that we looked up earlier, followed by a colon
and $DB_PORT_3306_TCP_PORT, into the Host field. Also specify the
database name as magento, and a username and password matching the
values you selected for MYSQL_USER and MYSQL_PASSWORD, respectively.
Next you will be asked to create an administrator user and password for
Magento, and with that you are done. You should have a fully functional,
albeit empty, Magento website.

[Screenshot: Magento configuration]

If you got this far, you have successfully launched a Rancher cluster on
the Amazon Elastic Compute Cloud and created a distributed Magento
deployment using Docker containers. We have also used two salient
features of Rancher: the VPN that allows private communication between
containers, and the ability to manage containers across the entire
cluster from a web UI. During this entire tutorial we never had to ssh
into any of our servers. Although the Magento cluster we set up was just
meant to be an example, with a few modifications, such as using a
replicated MySQL database and a larger array of Rancher compute nodes,
we could very easily use Rancher for a production Magento deployment.

A word of caution if you do plan on using Rancher for large-scale
deployments: Rancher is still in its infancy, and there are some
features that have not been released yet. For example, if we were to run
multiple Magento containers, we would need to set up our own
load-balancing solution to distribute traffic to them. Another issue is
that container configuration is stored on the agents where the
containers are running, so losing an agent means losing not only the
containers running on it but also the configuration used to launch them.
The containers we launched today are fairly simple, but this is not
always the case. Both ELBs and support for Docker Compose with some form
of templating are on the Rancher roadmap, so hopefully these items will
be addressed soon. I am not using Rancher on user-facing production
systems just yet, but it is going to be a larger and larger part of the
Docker-based systems I manage.

If you would like to learn more about Rancher, please request a
one-on-one demo to get a better understanding of how the platform works.



Docker Environments for Collaboration | Introducing Projects

In last week’s 0.9 release, we added support in Rancher for users to
create new deployment environments that can be shared with colleagues.
These Docker environments are called projects, and they are an extension
of the GitHub OAuth integration we added to Rancher last month. The
focus of projects is to allow teams to collaborate on Docker
environments, and since our user management is connected with GitHub
today, we leverage standard GitHub abstractions, such as users, teams,
and organizations, to support Rancher projects.

(If you haven’t read my earlier post on GitHub OAuth on Rancher, I
recommend reading it, as it provides an introduction to Rancher
authentication using GitHub.)

Projects demo

This demo will show you how to create projects on Rancher for various
levels of access control.

The project use case

One of the most obvious use cases for this new feature is controlling
access to environments and resources within an organization. For
example, a common request from users is to have development teams and
production teams own their own environments and resources. With
projects, access to production environments can be shared among an
approved group and restricted from unauthorized users. At the same time,
developers can have unfettered access to development environments and
can collaborate on testing, confident the environment will not be
accessed by anyone else. Every project is a fully isolated environment
for managing resources and deploying containers. Anyone who has access
to a project can register new computing resources (virtual machines or
physical servers), deploy containers, configure networking, and consume
all of the other capabilities of Rancher. Rancher supports three kinds
of projects:

  1. User projects
  2. Team projects
  3. Org projects

User projects

User projects allow resources to be orchestrated by an individual user.
They are meant to be used when a single user is the sole manager of the
resources. Users can create multiple projects for the different
environments they are working on. One caveat of this type of project is
that users can create “user-level” projects only for themselves.

Team projects

Team projects allow users to allocate resources and provide access to a
team of people. They are ideal for collaborating with a predefined
GitHub group. In the use case above, an organization could create
separate team projects for the dev and operations teams, giving both
teams access to their own resources.

Org projects

Organization-level projects allocate resources and provide access to all
members of the organization. For example, if you wanted to create a
resource called demo that everyone in your organization could
orchestrate, this type of project would be the ideal choice. I hope the
projects feature will be useful to you and your team. If you’d like more
information on using Rancher, or want to see it in action, please don’t
hesitate to schedule a demo.


A Major Step Towards Making Docker a Distributed Application Platform


Today Docker acquired SDN software maker SocketPlane. Congratulations to
both the Docker and SocketPlane teams. We have worked closely with the
SocketPlane team since the early Docker networking discussions and have
a great amount of respect for their technical abilities. We are also
happy to see Docker Inc. make a serious effort to bring SDN capabilities
to the Docker platform. Many customers have told us that the lack of
multi-host networking is one of the last remaining gaps impeding the
widespread production use of Docker containers. Today, Docker containers
on multiple hosts cannot easily communicate with each other. Without
SDN, developers and operations teams have to resort to complicated port
mapping to get containers running on different hosts to talk to each
other. This dramatically complicates application deployment, monitoring,
upgrades, and service discovery.

At Rancher Labs, we are developing two products: RancherOS and Rancher.
RancherOS is a minimalist Linux distribution designed specifically for
running Docker containers. It enables the type of networking services
developed by SocketPlane to be packaged and distributed as system
containers. Rancher is a container orchestration platform for managing
large production deployments of Docker containers. Rancher requires a
multi-host networking layer to be deployed underneath; in fact, the need
for multi-host networking is such that we developed a simple yet
functional SDN solution in Rancher itself. We look forward to working
with the SocketPlane team as they drive the Docker networking API
design, so that Rancher can work with multiple SDN implementations in
the future.

As a member of the Docker ecosystem, Rancher’s success depends on an
increasing number of organizations embracing containers. We believe
Docker can only succeed if it fulfills its promise of becoming a
distributed application platform. A standardized Docker networking layer
is an important step in this direction.

If you’re interested in learning more about RancherOS, please join us
for an online meetup on March 31st.


NodeJS Application Using MongoDB and Rancher

Last week I finally stepped out of my “tech” comfort zone and set up a
Node.js application that uses a MongoDB database; to add an extra layer
of fun, I used Rancher to set up the whole application stack with Docker
containers.

I designed a small application with Node whose only function is to
calculate the number of hits on the website; you can find the code on
GitHub.

The setup is an Nginx container acting as a load balancer at the front
end, serving two back-end Node application containers, with the two Node
servers connecting to a MongoDB database container. In this setup I will
use five machines from Digital Ocean: four to build the application
stack for high availability, and the fifth as a Rancher server.

[Diagram: Node.js application stack]

Set Up A Rancher Server

On a Digital Ocean machine with Docker 1.4 installed, we run the
following command to set up the Rancher platform on port 8080:

root@Rancher-Mngmt:~# docker run -d --name rancher-server -p 8080:8080 rancher/server

The previous command runs a Docker instance with the Rancher platform
and proxies port 8080 on the instance to the same port on the Digital
Ocean machine. To make sure that the server is running, type this
command:

root@Rancher-io-Mngmt:~# docker logs rancher-server

You should see something like the following output:

20:02:41.943 [main] INFO ConsoleStatus - [DONE ] [68461ms] Startup Succeeded, Listening on port 8080

To access Rancher, type the following URL into your browser:

http://DO-ip-address:8080/

You should see something like the following:

[Screenshot: Rancher management UI]

Register Digital Ocean’s Instances With Rancher

To register the Digital Ocean machines (each with Docker 1.4 installed)
with Rancher, type the following on each machine:

root@Rancher-Test-Instance-X# docker run -it --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent http://rancher-server-ip:8080

where rancher-server-ip is the IP address of the Rancher server we just
installed. Alternatively, you can click on “Register a New Host” in the
Rancher platform and copy the command shown.

[Screenshot: Register a New Host]

After applying the previous command on each machine you should see
something like the following when you access the Rancher management
server:

[Screenshot: registered instances in Rancher]

If you are familiar with Ansible as a configuration management
tool, you can use it to register the Digital Ocean machines with Rancher
in one command:

  • First, add the IPs of the Digital Ocean machines to
    /etc/ansible/hosts under one group name:

[DO]
178.62.101.243
178.62.27.24
178.62.98.242
178.62.11.154

  • Now, run the following command to register all machines at
    once:

$ ansible DO -u root -a "docker run -it --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent http://rancher-server-ip:8080"

MongoDB Docker Container

After Registering the 4 machines with Rancher, its time to start
building our application stack.

The node.js application will calculate the number of hits on a
website, so it needs to store this data somewhere. I will use MongoDB
container to store the number of hits.

The Dockerfile will be like the following:

FROM ubuntu:14.04
MAINTAINER hussein.galal.ahmed.11@gmail.com
ENV CACHED_FLAG 0
RUN apt-get -qq update && apt-get -yqq upgrade
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
RUN apt-get update && apt-get install -yqq mongodb-org
RUN mkdir -p /data/db
EXPOSE 27017
ADD run.sh /tmp/run.sh
ADD init.json /tmp/init.json
ENTRYPOINT ["/bin/bash", "/tmp/run.sh"]

The previous Dockerfile is really simple; let’s explain it line by
line:

  • First update the apt cache and install latest updates:

RUN apt-get -qq update && apt-get -yqq upgrade

  • Add the key and the mongodb repo to apt sources.list:

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list

  • Install the MongoDB package which installs the server and the
    client:

RUN apt-get update && apt-get install -yqq mongodb-org

  • Create the directory which will store the MongoDB files.
  • Expose port 27017, the default port for connecting to MongoDB.
  • Add two files to the container:
  • init.json: the initial database to start the application.
  • run.sh: imports the init.json database into the MongoDB server and
    runs the server.

ADD run.sh /tmp/run.sh
ADD init.json /tmp/init.json

  • Finally, set the container’s entrypoint so that it starts by
    executing the run.sh file:

ENTRYPOINT ["/bin/bash", "/tmp/run.sh"]

Let’s take a look at the run.sh file:

#!/bin/bash
/usr/bin/mongod &
sleep 3
mongoimport --db countdb --collection hits --type json --file /tmp/init.json
/usr/bin/mongod --shutdown
sleep 3
/usr/bin/mongod

The server starts first so that the init.json data can be imported into
the countdb database and hits collection; the server is then shut down
and started again, this time in the foreground.

The init.json database file:
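
(The original file isn’t reproduced here; since mongoimport reads one
JSON document per line, a minimal seed for the hits collection might
look like the following, though the real file may differ.)

{ "hits": 0 }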

Node.js Application Container

The Node.js container installs the node.js and git packages, and then
runs a simple script to update the /etc/hosts file with the IP of the
MongoDB container, provided by the environment variable $MONGO_IP.

FROM ubuntu:14.04
MAINTAINER hussein.galal.ahmed.11@gmail.com
ENV CACHED_FLAG 1

# Install node
RUN apt-get update -qq && apt-get -y upgrade
RUN apt-get install -yqq nodejs git git-core
VOLUME [ "/var/www/nodeapp" ]
ADD ./run.sh /tmp/run.sh

# Install Dependencies
WORKDIR /var/www/nodeapp

# Run The App
ENTRYPOINT ["/bin/bash", "/tmp/run.sh"]

The ENTRYPOINT of the Docker container executes the /tmp/run.sh script:

MONGO_DN=mongo
if [ -n "$MONGO_IP" ]
then
echo "$MONGO_IP $MONGO_DN" >> /etc/hosts
fi

# Fetch the application
git clone https://github.com/galal-hussein/hitcntr-nodejs.git
mv hitcntr-nodejs/* .
rm -rf hitcntr-nodejs

# Run the Application
nodejs index.js

The previous script checks for the MONGO_IP environment variable; if it
is set, the script adds its value to /etc/hosts, then pulls the code
from the GitHub repo and finally runs the Node application.

Nginx Container

The Dockerfile for the Nginx container installs the nginx web server,
adds the configuration files, runs a script to update the /etc/hosts
file like the Node.js container does, and finally runs the web server.

Nginx Dockerfile:

#dockerfile for nginx/nodejs
FROM ubuntu:14.04
MAINTAINER hussein.galal.ahmed.11@gmail.com
ENV CACHED_FLAG 0

# Install nginx
RUN apt-get update -qq && apt-get -y upgrade
RUN apt-get -y -qq install nginx

# Adding the configuration files
ADD conf/nginx.conf /etc/nginx/nginx.conf
ADD conf/default /etc/nginx/conf.d/default
ADD ./run.sh /tmp/run.sh

# Expose the port 80
EXPOSE 80

# Run nginx
ENTRYPOINT [ "/bin/bash", "/tmp/run.sh" ]

The Dockerfile is very simple and uses the same commands as the previous
images.

run.sh:

NODE_1_DN=node_app1
NODE_2_DN=node_app2
if [ -n "$NODE_APP1_IP" ]
then
echo "$NODE_APP1_IP $NODE_1_DN" >> /etc/hosts
fi
if [ -n "$NODE_APP2_IP" ]
then
echo "$NODE_APP2_IP $NODE_2_DN" >> /etc/hosts
fi
# Run Nginx
/usr/sbin/nginx

Since we are using two Node application servers, we need to proxy the
HTTP requests received by Nginx to those servers; to do that, we add the
IPs of the Node.js containers to the hosts file.

The IPs of the Node.js containers are defined by two environment
variables (NODE_APP1_IP and NODE_APP2_IP).

Build And Push The Images

Now for the final step: build and then push the images to Docker
Hub:

~/rancher_vm# docker build -t husseingalal/nodeapp_mongo mongo/
~/rancher_vm# docker build -t husseingalal/nodeapp_node node/
~/rancher_vm# docker build -t husseingalal/nodeapp_nginx nginx/
~/rancher_vm# docker push husseingalal/nodeapp_mongo
~/rancher_vm# docker push husseingalal/nodeapp_node
~/rancher_vm# docker push husseingalal/nodeapp_nginx

Docker will ask you for your account credentials, then the images will
be pushed to Docker Hub to be used later with Rancher.

Set Up The Application Stack

  1. In the Rancher platform, create on the first host a Docker
    container using the MongoDB image we just created:

[Screenshot: creating the MongoDB container]

Note that the option “Manage Network on docker0” was chosen to enable
one of the unique features of Rancher: cross-container networking, which
enables Docker containers on different hosts to communicate in a virtual
private network.

After clicking Create, you should see the machine start to download the
image and install it, along with another Docker instance called Network
Agent, which is used to create the virtual private network we just
talked about.

[Screenshot: MongoDB container and Network Agent]

  2. The second step is to add the two Node.js application servers,
    which are connected to the MongoDB database:

[Screenshot: creating the first Node.js container]

Note that we use the Node.js image we just created. Before creating the
container, make sure to add the MONGO_IP environment variable with the
IP of the MongoDB server; you can get the private IP of the MongoDB
server from the Rancher panel:

[Screenshot: setting the MONGO_IP environment variable]

After that, click Create to begin creating the Node.js container. On the
second host, create the second Node.js application container using the
same steps.

  3. The final step is to create the Nginx webserver container on the
    last host:

[Screenshot: creating the Nginx container]

Since the Nginx instance will be facing the internet, we proxy port 80
inside the container to port 80 on the Digital Ocean machine:

[Screenshot: Nginx port mapping]

We also need to add the IPs of the two Node.js application servers that
Nginx connects to; you can add them by creating two environment
variables (NODE_APP1_IP and NODE_APP2_IP):

[Screenshot: setting NODE_APP1_IP and NODE_APP2_IP]

Now we can access the application using the IP address of the host
machine, http://<the-ip-address>.

[Screenshot: the running application]

Conclusion

In part 1 of this series, I created a Node.js application stack using
Docker containers and the Rancher platform. The stack consists of an
Nginx container that balances load across two Node.js application
containers, with MongoDB as our database.

In part 2, I introduce one of the newest features of Rancher, GitHub
authentication, and use GitHub’s webhooks feature for automatic
deployment of the web application.

If you’d like to learn more about Rancher, please schedule a demo.

Hussein Galal is a Linux System Administrator, with experience in Linux,
Unix, networking, and open source technologies like Nginx, Apache,
PHP-FPM, Passenger, MySQL, LXC, and Docker. You can follow Hussein on
Twitter @galal_hussein.


Build NodeJS App Using MongoDB and Rancher

In the first part of this post, I created a full Node.js application
stack using MongoDB as the application’s database and Nginx as a load
balancer that distributed incoming requests to two Node.js application
servers. I built the environment on Rancher using Docker containers.

In this post I will go through setting up Rancher authentication with
GitHub, and creating a webhook with GitHub for automatic
deployments.

Rancher Access Control

Starting from version 0.5, Rancher can be configured to restrict access
to a set of GitHub users and organization members (you can read a blog
about it here). Using this feature ensures that no one other than
authorized users can access the Rancher server through the web UI.

After setting up the Rancher server, you should see a message that says
“Access Control is not configured”:

[Screenshot: “Access Control is not configured”]

Click on settings; on the Access Control panel you will be instructed on
how to set up and register a new application with GitHub. The
instructions provide a link to the GitHub application settings page.

On the GitHub Application Settings page, click on Register new
application:

[Screenshot: GitHub Register new application]

Now you will enter some information about the Rancher server:

Application name: any name you choose

Homepage URL: the Rancher server URL

Application description: any description

Authorization callback URL: also the Rancher server URL

[Screenshot: application registration form]

After clicking on Register Application, you will be provided with a
Client ID and Client Secret, both of which are used to register the
application with the Rancher server:

[Screenshot: Client ID and Client Secret]

Now add the Client ID and Client Secret to the Rancher management server
and click on Authenticate with GitHub:

[Screenshot: Authenticate with GitHub]

If everything went well, you should see something like the
following:

[Screenshot: access control configured]

Now you have authorized a GitHub user account on your Rancher management
server, and you can start adding users and organizations from GitHub to
Rancher projects.

Automatic Deployment Using Webhooks

Webhooks provide an efficient way to change an application’s content
using HTTP callbacks for specific events. In this configuration, I will
register a couple of webhooks with GitHub to send a POST request to a
custom URL on each push.

There are a number of ways to create an automatic deployment setup for
your app; I decided to use the following approach:

  • Create a webhook on GitHub for each push.
  • Modify the Node.js Docker instances with:
  • a webhook handler written in Node.js;
  • a script that pulls the newly pushed repo.
  • Start the application with Nodemon, supervisor, or PM2 so it restarts
    on each modification.
  • Start the handler on any port, and proxy this port to the
    corresponding port of the host machine.

[Diagram: webhooks deployment model]

Let’s go through our solution in more detail:

The new Node.js Application Container

First we need to modify the Node.js Docker image I created in the first
post. It now has to contain the hook handler program plus the redeploy
script, and we should start the main application using Nodemon. The new
Dockerfile:

# Dockerfile For Node.js App
FROM ubuntu:14.04
MAINTAINER hussein.galal.ahmed.11@gmail.com
ENV CACHED_FLAG 1

# Install node and npm
RUN apt-get update -qq && apt-get -y upgrade
RUN apt-get install -yqq nodejs npm git git-core

# Install nodemon
RUN npm install -g nodemon
VOLUME [ "/var/www/nodeapp" ]

# Add redeploy script and hook handler
ADD ./run.sh /tmp/run.sh
ADD ./redeploy.sh /tmp/redeploy.sh
ADD ./webhook.js /tmp/webhook.js
WORKDIR /var/www/nodeapp
# Expose both ports (app port and the hook handler port)
EXPOSE 8000
EXPOSE 9000

# Run The App
ENTRYPOINT ["/bin/bash", "/tmp/run.sh"]

You should notice that two new files were added to this Dockerfile:
webhook.js, which is the hook handler, and the redeploy.sh script, which
is basically a git pull from the GitHub repo.

The webhook.js handler

I wrote the webhook handler in Node.js:

var http = require('http')
var createHandler = require('github-webhook-handler')
var handler = createHandler({ path: '/', secret: 'secret' })
var execFile = require('child_process').execFile;

// Create a server that listens on port 9000
http.createServer(function (req, res) {
  handler(req, res, function (err) {
    res.statusCode = 404
    res.end('no such location')
  })
}).listen(9000)

// Hook handler: on error
handler.on('error', function (err) {
  console.error('Error:', err.message)
})

// Hook handler: on push
handler.on('push', function (event) {
  console.log('Received a push event for %s to %s',
    event.payload.repository.name,
    event.payload.ref)
  execFile('/tmp/redeploy.sh', function(error, stdout, stderr) {
    console.log('Error: ' + error)
    console.log('Redeploy Completed')
  })
})

I won’t go into the details of the code, but here are some notes you
should consider:

  • I used the github-webhook-handler library.
  • The handler uses a secret string that will be configured later on
    GitHub.
  • The handler listens on port 9000.
  • The handler executes redeploy.sh on each push.

The redeploy.sh script:

sleep 5
cd /var/www/nodeapp
git pull

The last script is the run script, which starts the handler and the
application:

MONGO_DN=mongo
if [ -n "$MONGO_IP" ]
then
echo "$MONGO_IP $MONGO_DN" >> /etc/hosts
fi
ln -s /usr/bin/nodejs /usr/bin/node
chmod a+x /tmp/redeploy.sh

# Fetch the app
git clone https://github.com/galal-hussein/hitcntr-nodejs.git .
cd /tmp
npm install github-webhook-handler
nodejs webhook.js &

# Run the Application
cd /var/www/nodeapp
nodemon index.js

Now build and push the image like I did in the previous post.

Add Webhook With Github

To create a webhook on Github, open the repository → settings →
Webhooks & Services then Add Webhook:

[Screenshot: Add Webhook]

Now add the custom URL that will be notified when the specified events
happen:

[Screenshot: webhook payload URL]

You should add the secret token we specified previously in the handler’s
code. Add a second webhook, this time with the URL of the second
application server. Then build the application stack like we did in the
previous post, but this time proxy port 9000 on the Node containers:

[Screenshot: proxying port 9000]

After building the stack, check the GitHub webhooks; you should see
something like this:

[Screenshot: delivered webhooks]

Now let’s test the webhooks. If you access the URL of the Nginx web
server, you will see something like this:

[Screenshot: application before the push]

Now commit any change to your code and push it to GitHub, and the change
will be applied immediately on the app servers; in our case, I changed
“hits” to “Webhooks Worked, Hits”:

[Screenshot: application after the push]

Conclusion

In this two-post series, I created a simple Node.js application with
MongoDB as a NoSQL database and used Rancher to build the whole stack
with Docker containers. In this second post, I used Rancher’s
authentication feature with GitHub accounts, then used webhooks to build
an automatic deployment solution.

I hope this helps you understand how to leverage Rancher, Docker and
GitHub to better manage application deployments.

If you’d like to learn more about using Rancher, please don’t hesitate
to schedule a demo and discussion with one of our
engineers.


Podman and Buildah for Docker users

I was asked recently on Twitter to better explain Podman and Buildah for someone familiar with Docker. Though there are many blogs and tutorials out there, which I will list later, we in the community have not centralized an explanation of how Docker users move from Docker to Podman and Buildah. Also, what role does Buildah play? Is Podman deficient in some way, such that we need both Podman and Buildah to replace Docker?

This article answers those questions and shows how to migrate to Podman.

How does Docker work?

First, let’s be clear about how Docker works; that will help us to understand the motivation for Podman and also for Buildah. If you are a Docker user, you understand that there is a daemon process that must be run to service all of your Docker commands. I can’t claim to understand the motivation behind this but I imagine it seemed like a great idea, at the time, to do all the cool things that Docker does in one place and also provide a useful API to that process for future evolution. In the diagram below, we can see that the Docker daemon provides all the functionality needed to:

  • Pull and push images from an image registry
  • Make copies of images in a local container storage and to add layers to those containers
  • Commit containers and remove local container images from the host repository
  • Ask the kernel to run a container with the right namespace and cgroup, etc.

Essentially the Docker daemon does all the work with registries, images, containers, and the kernel. The Docker command-line interface (CLI) asks the daemon to do this on your behalf.

[Diagram: Docker architecture overview]

This article does not get into the detailed pros and cons of the Docker daemon process. There is much to be said in favor of this approach and I can see why, in the early days of Docker, it made a lot of sense. Suffice it to say that there were several reasons why Docker users were concerned about this approach as usage went up. To list a few:

  • A single process could be a single point of failure.
  • This process owned all the child processes (the running containers).
  • If a failure occurred, then there were orphaned processes.
  • Building containers led to security vulnerabilities.
  • All Docker operations had to be conducted by a user (or users) with the same full root authority.

There are probably more. Whether these issues have been fixed or you disagree with this characterization is not something this article is going to debate. We in the community believe that Podman has addressed many of these problems. If you want to take advantage of Podman’s improvements, then this article is for you.

The Podman approach is simply to directly interact with the image registry, with the container and image storage, and with the Linux kernel through the runC container runtime process (not a daemon).

[Diagram: Podman architectural approach]

Now that we’ve discussed some of the motivation it’s time to discuss what that means for the user migrating to Podman. There are a few things to unpack here and we’ll get into each one separately:

  • You install Podman instead of Docker. You do not need to start or manage a daemon process like the Docker daemon.
  • The commands you are familiar with in Docker work the same for Podman.
  • Podman stores its containers and images in a different place than Docker.
  • Podman and Docker images are compatible.
  • Podman does more than Docker for Kubernetes environments.
  • What is Buildah and why might I need it?

Installing Podman

If you are using Docker today, you can remove it when you decide to make the switch. However, you may wish to keep Docker around while you try out Podman. There are some useful tutorials and an awesome demonstration that you may wish to run through first so you can understand the transition more. One example in the demonstration requires Docker in order to show compatibility.

To install Podman on Red Hat Enterprise Linux 7.6 or later, use the following; if you are using Fedora, then replace yum with dnf:

# yum -y install podman

Podman commands are the same as Docker’s

When building Podman, the goal was to make sure that Docker users could easily adapt. So all the commands you are familiar with also exist with Podman. In fact, the claim is made that if you have existing scripts that run Docker, you can create a docker alias for podman and all your scripts should work (alias docker=podman). Try it. Of course, you should stop Docker first (systemctl stop docker). There is a package you can install called podman-docker that does this conversion for you. It drops a script at /usr/bin/docker that executes Podman with the same arguments.
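
Putting those pieces together, trying Podman as a drop-in replacement looks like this:

# Stop the Docker daemon, then alias docker to podman for this shell
$ sudo systemctl stop docker
$ alias docker=podman
$ docker run --rm fedora echo "hello from podman"   # actually runs podman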

The commands you are familiar with—pull, push, build, run, commit, tag, etc.—all exist with Podman. See the manual pages for Podman for more information. One notable difference is that Podman has added some convenience flags to some commands. For example, Podman has added --all (-a) flags for podman rm and podman rmi. Many users will find that very helpful.

You can also run Podman as your normal non-root user with Podman 1.0 on Fedora. RHEL support is planned for versions 7.7 and 8.1 onwards. Enhancements in user-space security have made this possible. Running Podman as a normal user means that Podman will, by default, store images and containers in the user’s home directory. This is explained in the next section. For more information on how Podman runs as a non-root user, please check out Dan Walsh’s article: How does rootless Podman work?

Podman and container images

When you first type podman images, you might be surprised that you don’t see any of the Docker images you’ve already pulled down. This is because Podman’s local repository is in /var/lib/containers instead of /var/lib/docker. This isn’t an arbitrary change; this new storage structure is based on the Open Containers Initiative (OCI) standards.

In 2015, Docker, Red Hat, CoreOS, SUSE, Google, and other leaders in the Linux containers industry created the Open Container Initiative in order to provide an independent body to manage the standard specifications for defining container images and the runtime. In order to maintain that independence, the containers/image and containers/storage projects were created on GitHub.

Since you can run podman without being root, there needs to be a separate place where podman can write images. Podman uses a repository in the user’s home directory: ~/.local/share/containers. This avoids making /var/lib/containers world-writeable or other practices that might lead to potential security problems. This also ensures that every user has separate sets of containers and images and all can use Podman concurrently on the same host without stepping on each other. When users are finished with their work, they can push to a common registry to share their image with others.
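
You can see this separation on disk; these are the default locations (run the first as root and the second as a normal user):

# ls /var/lib/containers/storage
$ ls ~/.local/share/containers/storage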

Docker users coming to Podman find that knowing these locations is useful for debugging, and for the all-important rm -rf /var/lib/containers when you just want to start over. However, once you start using Podman, you’ll probably reach for the new --all option to podman rm and podman rmi instead.

Container images are compatible between Podman and other runtimes

Despite the new locations for the local repositories, the images created by Docker or Podman are compatible with the OCI standard. Podman can push to and pull from popular container registries like Quay.io and Docker Hub, as well as private registries. For example, you can pull the latest Fedora image from Docker Hub and run it using Podman. Not specifying a registry means Podman will search the registries listed in the registries.conf file, in the order in which they are listed. With an unmodified registries.conf file, it will look in Docker Hub first.

$ podman pull fedora:latest
$ podman run -it fedora bash
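
The search order comes from /etc/containers/registries.conf; an illustrative excerpt of that file (the exact registry list varies by distribution):

[registries.search]
registries = ['docker.io', 'quay.io', 'registry.fedoraproject.org']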

Images pushed to an image registry by Docker can be pulled down and run by Podman. For example, an image (myfedora) I created using Docker and pushed to my Quay.io repository (ipbabble) using Docker can be pulled and run with Podman as follows:

$ podman pull quay.io/ipbabble/myfedora:latest
$ podman run -it myfedora bash

Podman provides capabilities in its command-line push and pull commands to gracefully move images from /var/lib/docker to /var/lib/containers and vice versa. For example:

$ podman push myfedora docker-daemon:myfedora:latest

Obviously, leaving out the docker-daemon above will default to pushing to Docker Hub. Using quay.io/myquayid/myfedora will push the image to the Quay.io registry (where myquayid below is your personal Quay.io account):

$ podman push myfedora quay.io/myquayid/myfedora:latest

If you are ready to remove Docker, you should shut down the daemon and then remove the Docker package using your package manager. But first, if you have images you created with Docker that you wish to keep, you should make sure those images are pushed to a registry so that you can pull them down later. Or you can use Podman to pull each image (for example, fedora) from the host’s Docker repository into Podman’s OCI-based repository. With RHEL you can run the following:

# systemctl stop docker
# podman pull docker-daemon:fedora:latest
# yum -y remove docker # optional

Podman helps users move to Kubernetes

Podman provides some extra features that help developers and operators in Kubernetes environments. There are extra commands provided by Podman that are not available in Docker. If you are familiar with Docker and are considering using Kubernetes/OpenShift as your container platform, then Podman can help you.

Podman can generate a Kubernetes YAML file based on a running container using podman generate kube. The command podman pod can be used to help debug running Kubernetes pods along with the standard container commands. For more details on how Podman can help you transition to Kubernetes, see the following article by Brent Baude: Podman can now ease the transition to Kubernetes and CRI-O.
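
As a rough sketch of that workflow (the names web and web-pod.yaml are illustrative):

$ podman run -d --name web -p 8080:80 nginx
$ podman generate kube web > web-pod.yaml   # emit Kubernetes YAML describing the container
$ podman play kube web-pod.yaml             # recreate it (e.g., on another host) from the YAML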

What is Buildah and why would I use it?

Buildah actually came first. And maybe that’s why some Docker users get a bit confused. Why do these Podman evangelists also talk about Buildah? Does Podman not do builds?

Podman does do builds, and for those familiar with Docker, the build process is the same. You can either build from a Dockerfile using podman build, or you can run a container, make lots of changes, and then commit those changes to a new image tag. Buildah can be described as a superset of commands related to creating and managing container images and, therefore, it has much finer-grained control over images. Podman’s build command contains a subset of the Buildah functionality; it uses the same code as Buildah for building.
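
Both paths, sketched briefly (the image and container names here are illustrative):

$ podman build -t myapp .                        # build from a Dockerfile in the current directory
$ podman run --name scratchpad -it fedora bash   # or: modify a container interactively...
$ podman commit scratchpad myapp:v2              # ...then commit the changes to a new image tag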

The most powerful way to use Buildah is to write Bash scripts for creating your images, in much the same way you would write a Dockerfile, as in the sketch below.
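
A minimal sketch of such a script, assuming a Fedora base image and nginx as the payload (image and container names are illustrative):

#!/bin/bash
# Create a working container from a base image
ctr=$(buildah from fedora)
# Run a command inside it, much like a Dockerfile RUN instruction
buildah run "$ctr" -- dnf -y install nginx
# Set image metadata, much like EXPOSE and CMD would
buildah config --port 80 --cmd "nginx -g 'daemon off;'" "$ctr"
# Commit the working container to a named image
buildah commit "$ctr" mynginx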

I like to think of the evolution in the following way. When Kubernetes moved to CRI-O based on the OCI runtime specification, there was no need to run a Docker daemon and, therefore, no need to install Docker on any host in the Kubernetes cluster for running pods and containers. Kubernetes could call CRI-O, and it could call runC directly, which, in turn, starts the container processes. However, if we want to use the same Kubernetes cluster to do builds, as in the case of OpenShift clusters, then we need a new tool to perform builds that does not require the Docker daemon and, consequently, does not require that Docker be installed. Such a tool, based on the containers/storage and containers/image projects, would also eliminate the security risk of the open Docker daemon socket during builds, which concerned many users.

Buildah (named for fun because of Dan Walsh’s Boston accent when pronouncing “builder”) fit this bill. For more information on Buildah, see buildah.io and specifically see the blogs and tutorials sections.

There are a couple of extra things practitioners need to understand about Buildah:

  1. It allows for finer control when creating image layers. This is a feature that many container users have been requesting for a long time. Being able to commit many changes to a single layer is desirable.
  2. Buildah’s run command is not the same as Podman’s run command. Because Buildah is for building images, its run command is essentially the same as the Dockerfile RUN command. In fact, I remember the week this was made explicit. I was foolishly complaining that some port or mount I was trying wasn’t working as I expected. Dan (@rhatdan) weighed in and said that Buildah should not support running containers in that way. No port mapping. No volume mounting. Those flags were removed. Instead, buildah run is for running specific commands that help build a container image, for example, buildah run $ctr dnf -y install nginx (where $ctr is the working container returned by buildah from).
  3. Buildah can build images from scratch, that is, images with nothing in them at all. Nothing. In fact, looking at the container storage created as a result of a buildah from scratch command yields an empty directory. This is useful for creating very lightweight images that contain only the packages needed in order to run your application.

A good example use case for a scratch build is to consider the development images versus staging or production images of a Java application. During development, a Java application container image may require the Java compiler and Maven and other tools. But in production, you may only require the Java runtime and your packages. And, by the way, you also do not require a package manager such as DNF/YUM or even Bash. Buildah is a powerful CLI for this use case. See the diagram below. For more information, see Building a Buildah Container Image for Kubernetes and also this Buildah introduction demo.

[Diagram: Buildah is a powerful CLI]
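
A rough sketch of a scratch build (the package set and release version are illustrative; run as root, or wrap in buildah unshare for rootless use):

# ctr=$(buildah from scratch)
# mnt=$(buildah mount $ctr)
# dnf install -y --installroot $mnt --releasever 30 java-11-openjdk-headless
# dnf clean all --installroot $mnt
# buildah umount $ctr
# buildah commit $ctr my-java-runtime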

Getting back to the evolution story… Now that we had solved the Kubernetes runtime issue with CRI-O and runC, and we had solved the build problem with Buildah, one reason remained why Docker was still needed on a Kubernetes host: debugging. How can we debug container issues on a host if we don’t have the tools to do it? We would need to install Docker, and then we’d be back where we started, with the Docker daemon on the host. Podman solves this problem.

Podman becomes a tool that solves two problems. It allows operators to examine containers and images with commands they already know, and it provides developers with the same tools. So Docker users, whether developers or operators, can move to Podman, do all the familiar tasks they did with Docker, and do much more.

Conclusion

I hope this article has been useful and will help you migrate to using Podman (and Buildah) confidently and successfully.


Source

Rancher adds support for Docker Machine provisioning.

This week we released Rancher 0.12, which adds support for provisioning hosts
using Docker Machine. We’re really excited to get this feature out,
because it makes launching Rancher-enabled Docker hosts easier than
ever. If you’re not familiar with Docker Machine, it is a project that
allows cloud providers to develop standard “drivers” for provisioning
cloud infrastructure on the fly. You can learn more about it on the
Docker
website.
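
For context, provisioning a host with the standalone docker-machine CLI
looks roughly like this (the access token value is a placeholder):

$ docker-machine create --driver digitalocean \
    --digitalocean-access-token $DO_TOKEN mydockerhost
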
The first cloud we’re supporting with Docker Machine is Digital Ocean.
For our initial release, we chose Digital Ocean because its machine
driver is an excellent implementation. As always, the Digital Ocean
team has focused on simplicity and user experience, and they were
fantastic to work with during our testing. Docker Machine drivers are
already available for many public cloud providers, as well as vCenter,
CloudStack, OpenStack and other private cloud platforms. We will be
adding support for additional drivers over the next few weeks, and
documenting how you can use any driver you like. Please feel free to
let us know if there
are drivers you would like us to prioritize. Now, let me walk you
through using Docker Machine with Rancher. To get started, click on the
“Register a New Host” link in the Hosts tab within Rancher.
[Screenshot: Hosts tab]
If this is the first time you’ve added a host, you’ll be presented
with a Host Setup dialog that asks you to confirm the DNS host name or
IP address that hosts should use to connect to the Rancher API. Confirm
this setting and click Save.
[Screenshot: Host Setup dialog]
Once that is completed, you’ll be taken to the Add Host page,
where you’ll see a new tab for provisioning Digital Ocean hosts.
[Screenshot: Add Host page with the Digital Ocean tab]
To provision a Digital Ocean machine, fill out the relevant
information about the host you want to provision, including the OS
image, size and Digital Ocean region. You’ll need to have a Digital
Ocean access token, which you can get by creating an account on their
site. Once you hit create, you’ll be returned to the hosts page where
you will see your new host being created.
[Screenshot: new host being created]
Creating the host will take a few minutes, as the VM needs to be
provisioned, configured with Docker, and bootstrapped as a Rancher host.
But once it’s done, the UI will automatically update to show the new
host. At this point, you have a fully enabled Docker host. You can click
the Add Container link to start adding containers. We hope you find this
feature useful and welcome your feedback. As always, you can submit any
feature requests or other issues to the Rancher GitHub repo. In the
next few weeks,
we’ll be adding the ability to export the Docker machine configuration
so that you can deploy containers outside of Rancher, more verbose
status updates during machine creation, and (of course) more Machine
drivers. If you’d like to talk with one of our engineers and learn more
about Rancher, please feel free to request a demo, and we’ll walk you
through Rancher and answer all of your questions.

Source