Sharing Docker Containers across DevOps Environments


Docker provides a powerful tool for creating lightweight images and containerized processes, but did you know it can make your development environment part of the DevOps pipeline too? Whether you’re managing tens of thousands of servers in the cloud or are a software engineer looking to incorporate Docker containers into the software development life cycle, this article has a little something for everyone with a passion for Linux and Docker.

In this article, I describe how Docker containers flow through the DevOps pipeline. I also cover some advanced DevOps concepts (borrowed from object-oriented programming) on how to use dependency injection and encapsulation to improve the DevOps process. And finally, I show how containerization can be useful for the development and testing process itself, rather than just as a place to serve up an application after it’s written.

Introduction

Containers are hot in DevOps shops, and their benefits from an operations and service delivery point of view have been covered well elsewhere. If you want to build a Docker container or deploy a Docker host, container or swarm, a lot of information is available. However, very few articles talk about how to develop inside the Docker containers that will be reused later in the DevOps pipeline, so that's what I focus on here.

""

Figure 1. Stages a Docker Container Moves Through in a Typical DevOps Pipeline

Container-Based Development Workflows

Two common workflows exist for developing software for use inside Docker containers:

  1. Injecting development tools into an existing Docker container: this is the best option for sharing a consistent development environment with the same toolchain among multiple developers, and it can be used in conjunction with web-based development environments, such as Red Hat’s codenvy.com or dockerized IDEs like Eclipse Che.
  2. Bind-mounting a host directory onto the Docker container and using your existing development tools on the host: this is the simplest option, and it offers flexibility for developers to work with their own set of locally installed development tools.

Both workflows have advantages, but local mounting is inherently simpler. For that reason, I focus on the mounting solution as “the simplest thing that could possibly work” here.

How Docker Containers Move between Environments

A core tenet of DevOps is that the source code and runtimes that will be used in production are the same as those used in development. In other words, the most effective pipeline is one where the identical Docker image can be reused for each stage of the pipeline.

""

Figure 2. Idealized Docker-Based DevOps Pipeline

The notion here is that each environment uses the same Docker image and code base, regardless of where it’s running. Unlike systems such as Puppet, Chef or Ansible that converge systems to a defined state, an idealized Docker pipeline makes duplicate copies (containers) of a fixed image in each environment. Ideally, the only artifact that really moves between environmental stages in a Docker-centric pipeline is the ID of a Docker image; all other artifacts should be shared between environments to ensure consistency.

Handling Differences between Environments

In the real world, environmental stages can vary. As a case in point, your QA and staging environments may contain different DNS names, different firewall rules and almost certainly different data fixtures. Combat this per-environment drift by standardizing services across your different environments. For example, ensuring that DNS resolves “db1.example.com” and “db2.example.com” to the right IP addresses in each environment is much more Docker-friendly than relying on configuration file changes or injectable templates that point your application to differing IP addresses. However, when necessary, you can set environment variables for each container rather than making stateful changes to the fixed image. These variables then can be managed in a variety of ways, including the following (a short sketch follows the list):

  1. Environment variables set at container runtime from the command line.
  2. Environment variables set at container runtime from a file.
  3. Autodiscovery using etcd, Consul, Vault or similar.
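
For instance, the first two options might look like the following sketch (app.env is a hypothetical file name; STAGE and DB match the microservice example that follows):

# Option 1: pass variables on the command line at runtime.
docker run --rm -e STAGE=dev -e DB=db1 ruby:latest env

# Option 2: read the same variables from a file.
cat > app.env <<'EOF'
STAGE=dev
DB=db1
EOF
docker run --rm --env-file app.env ruby:latest env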

Consider a Ruby microservice that runs inside a Docker container. The service accesses a database somewhere. In order to run the same Ruby image in each different environment, but with environment-specific data passed in as variables, your deployment orchestration tool might use a shell script like this one, “Example Microservice Deployment”:


# Reuse the same image to create containers in each
# environment.
docker pull ruby:latest

# Bash function that exports key environment
# variables to the container, and then runs Ruby
# inside the container to display the relevant
# values.
microservice () {
    docker run -e STAGE -e DB --rm ruby \
        /usr/local/bin/ruby -e \
            'printf("STAGE: %s, DB: %s\n",
                    ENV["STAGE"],
                    ENV["DB"])'
}

Table 1 shows an example of how environment-specific information for Development, Quality Assurance and Production can be passed to otherwise-identical containers using exported environment variables.

Table 1. Same Image with Injected Environment Variables

Development:        export STAGE=dev DB=db1; microservice
Quality Assurance:  export STAGE=qa DB=db2; microservice
Production:         export STAGE=prod DB=db3; microservice

To see this in action, open a terminal with a Bash prompt and run the commands from the “Example Microservice Deployment” script above to pull the Ruby image onto your Docker host and create a reusable shell function. Next, run each of the commands from the table above in turn to set up the proper environment variables and execute the function. You should see the output shown in Table 2 for each simulated environment.

Table 2. Containers in Each Environment Producing Appropriate Results

Development:        STAGE: dev, DB: db1
Quality Assurance:  STAGE: qa, DB: db2
Production:         STAGE: prod, DB: db3

Despite being a rather simplistic example, what’s being accomplished is really quite extraordinary! This is DevOps tooling at its best: you’re re-using the same image and deployment script to ensure maximum consistency, but each deployed instance (a “container” in Docker parlance) is still being tuned to operate properly within its pipeline stage.

With this approach, you limit configuration drift and variance by ensuring that the exact same image is re-used for each stage of the pipeline. Furthermore, each container varies only by the environment-specific data or artifacts injected into it, reducing the burden of maintaining multiple versions or per-environment architectures.

But What about External Systems?

The previous simulation didn’t really connect to any services outside the Docker container. How well would this work if you needed to connect your containers to environment-specific things outside the container itself?

Next, I simulate a Docker container moving from development through other stages of the DevOps pipeline, using a different database with its own data in each environment. This requires a little prep work first.

First, create a workspace for the example files. You can do this by cloning the examples from GitHub or by making a directory. As an example:


# Clone the examples from GitHub.
git clone \
    https://github.com/CodeGnome/SDCAPS-Examples
cd SDCAPS-Examples/db

# Or, create a working directory yourself.
mkdir -p SDCAPS-Examples/db
cd SDCAPS-Examples/db

The following SQL files should be in the db directory if you cloned the example repository. Otherwise, go ahead and create them now.

db1.sql:


-- Development Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','developers','dev_password'),
       ('dev','developers','dev_password');
COMMIT;

db2.sql:


-- Quality Assurance (QA) Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','qa admins','admin_password'),
       ('test','qa testers','user_password');
COMMIT;

db3.sql:


-- Production Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','production',
        '$1$Ax6DIG/K$TDPdujixy5DDscpTWD5HU0'),
       ('deploy','devops deploy tools',
        '$1$hgTsycNO$FmJInHWROtkX6q7eWiJ1p/');
COMMIT;

Next, you need a small utility to create (or re-create) the various SQLite databases. This is really just a convenience script, so if you prefer to initialize or load the SQL by hand or with another tool, go right ahead:


#!/usr/bin/env bash

# The script assumes the database files are stored in
# an immediate subdirectory named "db", but you can
# override this using an environment variable.
: "${DATABASE_DIR:=db}"
cd "$DATABASE_DIR"

# Scan for the -f flag. If the flag is found, and if
# there are matching filenames, verbosely remove the
# existing database files.
pattern='(^|[[:space:]])-f([[:space:]]|$)'
if [[ "$*" =~ $pattern ]] &&
    compgen -o filenames -G 'db?' >&-
then
    echo "Removing existing database files ..."
    rm -v db? 2> /dev/null
    echo
fi

# Process each SQL dump in the current directory.
echo "Creating database files from SQL ..."
for sql_dump in *.sql; do
    db_filename="${sql_dump%%.sql}"
    if [[ ! -f "$db_filename" ]]; then
        sqlite3 "$db_filename" < "$sql_dump" &&
        echo "$db_filename created"
    else
        echo "$db_filename already exists"
    fi
done

When you run ./create_databases.sh, you should see:


Creating database files from SQL ...
db1 created
db2 created
db3 created

If the utility script reports that the database files already exist, or if you want to reset the database files to their initial state, you can call the script again with the -f flag to re-create them from the associated .sql files.

Creating a Linux Password

You probably noticed that some of the SQL files contain clear-text passwords while others contain valid Linux password hashes. For the purposes of this article, that’s largely a contrivance to ensure that you have different data in each database and to make it easy to tell which database you’re looking at from the data itself.

For security though, it’s usually best to ensure that you have a properly hashed password in any source files you may store. There are a number of ways to generate such passwords, but the OpenSSL library makes it easy to generate salted and hashed passwords from the command line.

Tip: for optimum security, don’t include your desired password or passphrase as an argument to OpenSSL on the command line, as it could then be seen in the process list. Instead, allow OpenSSL to prompt you with Password: and be sure to use a strong passphrase.

To generate a salted MD5 password with OpenSSL:


$ openssl passwd \
    -1 \
    -salt "$(openssl rand -base64 6)"
Password:

Then you can paste the salted hash into /etc/shadow, an SQL file, utility script or wherever else you may need it.

Simulating Deployment inside the Development Stage

Now that you have some external resources to experiment with, you’re ready to simulate a deployment. Let’s start by running a container in your development environment. I follow some DevOps best practices here and use fixed image IDs and defined gem versions.

DevOps Best Practices for Docker Image IDs

To ensure that you’re re-using the same image across pipeline stages, always use an image ID rather than a named tag or symbolic reference when pulling images. For example, while the “latest” tag might point to different versions of a Docker image over time, the SHA-256 identifier of an image version remains constant and also provides automatic validation as a checksum for downloaded images.

Furthermore, you always should use a fixed ID for assets you’re injecting into your containers. Note how you specify a specific version of the SQLite3 Ruby gem to inject into the container at each stage. This ensures that each pipeline stage has the same version, regardless of whether the most current version of the gem from a RubyGems repository changes between one container deployment and the next.

Getting a Docker Image ID

When you pull a Docker image, such as ruby:latest, Docker will report the digest of the image on standard output:


$ docker pull ruby:latest
latest: Pulling from library/ruby
Digest:
sha256:eed291437be80359321bf66a842d4d542a789e
↪687b38c31bd1659065b2906778
Status: Image is up to date for ruby:latest

If you want to find the ID for an image you’ve already pulled, you can use the inspect sub-command to extract the digest from Docker’s JSON output—for example:


$ docker inspect \
      --format='{{index .RepoDigests 0}}' \
      ruby:latest
      ruby@sha256:eed291437be80359321bf66a842d4d542a789
↪e687b38c31bd1659065b2906778
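
Once you know the digest, you can reference the image by it directly rather than by a tag. For example (splitting the digest across two lines only for line-length reasons, just as the deployment script below does):

# Pull the image by its immutable SHA-256 digest.
id="eed291437be80359321bf66a842d4d54"
id+="2a789e687b38c31bd1659065b2906778"
docker pull "ruby@sha256:$id"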

First, you export the appropriate environment variables for development. These values will override the defaults set by your deployment script and affect the behavior of your sample application:


# Export values we want accessible inside the Docker
# container.
export STAGE="dev" DB="db1"

Next, implement a script called container_deploy.sh that will simulate deployment across multiple environments. This is an example of the work that your deployment pipeline or orchestration engine should do when instantiating containers for each stage:


#!/usr/bin/env bash

set -e

####################################################
# Default shell and environment variables.
####################################################
# Quick hack to build the 64-character image ID
# (which is really a SHA-256 hash) within a
# magazine's line-length limitations.
hash_segments=(
    "eed291437be80359321bf66a842d4d54"
    "2a789e687b38c31bd1659065b2906778"
)
printf -v id "%s" "${hash_segments[@]}"

# Default Ruby image ID to use if not overridden
# from the script's environment.
: "${IMAGE_ID:=$id}"

# Fixed version of the SQLite3 gem.
: "${SQLITE3_VERSION:=1.3.13}"

# Default pipeline stage (e.g. dev, qa, prod).
: "${STAGE:=dev}"

# Default database to use (e.g. db1, db2, db3).
: "${DB:=db1}"

# Export values that should be visible inside the
# container.
export STAGE DB

####################################################
# Setup and run Docker container.
####################################################
# Remove the Ruby container when script exits,
# regardless of exit status unless DEBUG is set.
cleanup () {
    local id msg1 msg2 msg3
    id="$container_id"
    if [[ ! -v DEBUG ]]; then
        docker rm --force "$id" >&-
    else
        msg1="DEBUG was set."
        msg2="Debug the container with:"
        msg3="    docker exec -it $id bash"
        printf "\n%s\n%s\n%s\n" \
          "$msg1" \
          "$msg2" \
          "$msg3" \
          > /dev/stderr
    fi
}
trap "cleanup" EXIT

# Set up a container, including environment
# variables and volumes mounted from the local host.
docker run \
    -d \
    -e STAGE \
    -e DB \
    -v "${DATABASE_DIR:-${PWD}/db}":/srv/db \
    --init \
    "ruby@sha256:$IMAGE_ID" \
    tail -f /dev/null >&-

# Capture the container ID of the last container
# started.
container_id=$(docker ps -ql)

# Inject a fixed version of the database gem into
# the running container.
echo "Injecting gem into container..."
docker exec "$container_id" \
    gem install sqlite3 -v "$SQLITE3_VERSION" &&
    echo

# Define a Ruby script to run inside our container.
#
# The script will output the environment variables
# we've set, and then display contents of the
# database defined in the DB environment variable.
ruby_script='
    require "sqlite3"

    puts %Q(DevOps pipeline stage: #{ENV["STAGE"]})
    puts %Q(Database for this stage: #{ENV["DB"]})
    puts
    puts "Data stored in this database:"

    Dir.chdir "/srv/db"
    db    = SQLite3::Database.open ENV["DB"]
    query = "SELECT rowid, * FROM AppData"
    db.execute(query) do |row|
        print " " * 4
        puts row.join(", ")
    end
'

# Execute the Ruby script inside the running
# container.
docker exec "$container_id" ruby -e "$ruby_script"

There are a few things to note about this script. First and foremost, your real-world needs may be either simpler or more complex than this script provides for. Nevertheless, it provides a reasonable baseline on which you can build.

Second, you may have noticed the use of the tail command when creating the Docker container. This is a common trick used for building containers that don’t have a long-running application to keep the container in a running state. Because you are re-entering the container using multiple exec commands, and because your example Ruby application runs once and exits, tail sidesteps a lot of ugly hacks needed to restart the container continually or keep it running while debugging.

Go ahead and run the script now. You should see the same output as listed below:


$ ./container_deploy.sh
Building native extensions.  This could take a while...
Successfully installed sqlite3-1.3.13
1 gem installed

DevOps pipeline stage: dev
Database for this stage: db1

Data stored in this database:
    1, root, developers, dev_password
    2, dev, developers, dev_password

Simulating Deployment across Environments

Now you’re ready to move on to something more ambitious. In the preceding example, you deployed a container to the development environment. The Ruby application running inside the container used the development database. The power of this approach is that the exact same process can be re-used for each pipeline stage, and the only thing you need to change is the database to which the application points.

In actual usage, your DevOps configuration management or orchestration engine would handle setting up the correct environment variables for each stage of the pipeline. To simulate deployment to multiple environments, populate an associative array in Bash with the values each stage will need and then run the script in a for loop:


declare -A env_db
env_db=([dev]=db1 [qa]=db2 [prod]=db3)

for env in dev qa prod; do
    export STAGE="$env" DB="${env_db[$env]}"
    printf "%s\n" "Deploying to ${env^^} ..."
    ./container_deploy.sh
done

This stage-specific approach has a number of benefits from a DevOps point of view. That’s because:

  1. The image ID deployed is identical across all pipeline stages.
  2. A more complex application can “do the right thing” based on the value of STAGE and DB (or other values) injected into the container at runtime.
  3. The container is connected to the host filesystem the same way at each stage, so you can re-use source code or versioned artifacts pulled from Git, Nexus or other repositories without making changes to the image or container.
  4. The switcheroo magic for pointing to the right external resources is handled by your deployment script (in this case, container_deploy.sh) rather than by making changes to your image, application or infrastructure.

This solution is great if your goal is to trap most of the complexity in your deployment tools or pipeline orchestration engine. However, a small refinement would allow you to push the remaining complexity onto the pipeline infrastructure instead.

Imagine for a moment that you have a more complex application than the one you’ve been working with here. Maybe your QA or staging environments have large data sets that you don’t want to re-create on local hosts, or maybe you need to point at a network resource that may move around at runtime. You can handle this by using a well-known name that is resolved by an external resource instead.

You can show this at the filesystem level by using a symlink. The benefit of this approach is that the application and container no longer need to know anything about which database is present, because the database is always named “db”. Consider the following:


declare -A env_db
env_db=([dev]=db1 [qa]=db2 [prod]=db3)
for env in dev qa prod; do
    printf "%s\n" "Deploying to ${env^^} ..."
    (cd db; ln -fs "${env_db[$env]}" db)
    export STAGE="$env" DB="db"
    ./container_deploy.sh
done

Likewise, you can configure your Domain Name Service (DNS) or a Virtual IP (VIP) on your network to ensure that the right database host or cluster is used for each stage. As an example, you might ensure that db.example.com resolves to a different IP address at each pipeline stage.
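
To illustrate the idea (the IP addresses below are purely hypothetical), a deployment script could pass a per-stage address for the stable name into each container with Docker's --add-host flag:

# Hypothetical per-stage addresses for the database host.
declare -A db_ip
db_ip=([dev]=10.0.1.10 [qa]=10.0.2.10 [prod]=10.0.3.10)

# Every container sees the same stable name, "db.example.com",
# but it resolves to a different address in each stage.
docker run --rm \
    --add-host "db.example.com:${db_ip[$STAGE]}" \
    ruby:latest \
    getent hosts db.example.com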

Sadly, the complexity of managing multiple environments never truly goes away—it just hopefully gets abstracted to the right level for your organization. Think of your objective as similar to some object-oriented programming (OOP) best practices: you’re looking to create pipelines that minimize things that change and to allow applications and tools to rely on a stable interface. When changes are unavoidable, the goal is to keep the scope of what might change as small as possible and to hide the ugly details from your tools to the greatest extent that you can.

If you have thousands or tens of thousands of servers, it’s often better to change a couple of DNS entries without downtime rather than rebuild or redeploy 10,000 application containers. Of course, there are always counter-examples, so consider the trade-offs and make the best decisions you can to encapsulate any unavoidable complexity.

Developing inside Your Container

I’ve spent a lot of time explaining how to ensure that your development containers look like the containers in use in other stages of the pipeline. But have I really described how to develop inside these containers? It turns out I’ve actually covered the essentials, but you need to shift your perspective a little to put it all together.

The same processes used to deploy containers in the previous sections also allow you to work inside a container. In particular, the previous examples have touched on how to bind-mount code and artifacts from the host’s filesystem inside a container using the -v or --volume flags. That’s how the container_deploy.sh script mounts database files on /srv/db inside the container. The same mechanism can be used to mount source code, and the Docker exec command then can be used to start a shell, editor or other development process inside the container.

The develop.sh utility script is designed to showcase this ability. When you run it, the script creates a Docker container and drops you into a Ruby shell inside the container. Go ahead and run ./develop.sh now:


#!/usr/bin/env bash

id="eed291437be80359321bf66a842d4d54"
id+="2a789e687b38c31bd1659065b2906778"
: "${IMAGE_ID:=$id}"
: "${SQLITE3_VERSION:=1.3.13}"
: "${STAGE:=dev}"
: "${DB:=db1}"

export DB STAGE

echo "Launching '$STAGE' container..."
docker run \
    -d \
    -e DB \
    -e STAGE \
    -v "${SOURCE_CODE:-$PWD}":/usr/local/src \
    -v "${DATABASE_DIR:-${PWD}/db}":/srv/db \
    --init \
    "ruby@sha256:$IMAGE_ID" \
    tail -f /dev/null >&-

container_id=$(docker ps -ql)

show_cmd () {
    enter="docker exec -it $container_id bash"
    clean="docker rm --force $container_id"
    echo -ne \
        "\nRe-enter container with:\n\t${enter}"
    echo -ne \
        "\nClean up container with:\n\t${clean}\n"
}
trap 'show_cmd' EXIT

docker exec "$container_id" \
    gem install sqlite3 -v "$SQLITE3_VERSION" >&-

docker exec \
    -e DB \
    -e STAGE \
    -it "$container_id" \
    irb -I /usr/local/src -r sqlite3

Once inside the container’s Ruby read-evaluate-print loop (REPL), you can develop your source code as you normally would from outside the container. Any source code changes will be seen immediately from inside the container at the defined mountpoint of /usr/local/src. You then can test your code using the same runtime that will be available later in your pipeline.

Let’s try a few basic things just to get a feel for how this works. Ensure that you have the sample Ruby files installed in the same directory as develop.sh. You don’t actually have to know (or care) about Ruby programming for this exercise to have value. The point is to show how your containerized applications can interact with your host’s development environment.

example_query.rb:


# Ruby module to query the table name via SQL.
module ExampleQuery
  def self.table_name
    path = "/srv/db/#{ENV['DB']}"
    db   = SQLite3::Database.new path
    sql =<<-'SQL'
      SELECT name FROM sqlite_master
       WHERE type='table'
       LIMIT 1;
    SQL
    db.get_first_value sql
  end
end

source_list.rb:


# Ruby module to list files in the source directory
# that's mounted inside your container.
module SourceList
  def self.array
    Dir['/usr/local/src/*']
  end

  def self.print
    puts self.array
  end
end

At the IRB prompt (irb(main):001:0>), try the following code to make sure everything is working as expected:


# returns "AppData"
load 'example_query.rb'; ExampleQuery.table_name

# prints file list to standard output; returns nil
load 'source_list.rb'; SourceList.print

In both cases, Ruby source code is being read from /usr/local/src, which is bound to the current working directory of the develop.sh script. While working in development, you could edit those files in any fashion you chose and then load them again into IRB. It’s practically magic!

It works the other way too. From inside the container, you can use any tool or feature of the container to interact with your source directory on the host system. For example, you can download the familiar Docker whale logo and make it available to your development environment from the container’s Ruby REPL:


Dir.chdir '/usr/local/src'
cmd =
  "curl -sLO "             <<
  "https://www.docker.com" <<
  "/sites/default/files"   <<
  "/vertical_large.png"
system cmd

Both /usr/local/src and the matching host directory now contain the vertical_large.png graphic file. You’ve added a file to your source tree from inside the Docker container!

""

Figure 3. Docker Logo on the Host Filesystem and inside the Container

When you press Ctrl-D to exit the REPL, the develop.sh script informs you how to reconnect to the still-running container, as well as how to delete the container when you’re done with it. Output will look similar to the following:


Re-enter container with:
        docker exec -it 9a2c94ebdee8 bash
Clean up container with:
        docker rm --force 9a2c94ebdee8

As a practical matter, remember that the develop.sh script is setting Ruby’s LOAD_PATH and requiring the sqlite3 gem for you when launching the first instance of IRB. If you exit that process, launching another instance of IRB with docker exec or from a Bash shell inside the container may not do what you expect. Be sure to run irb -I /usr/local/src -r sqlite3 to re-create that first smooth experience!

Wrapping Up

I covered how Docker containers typically flow through the DevOps pipeline, from development all the way to production. I looked at some common practices for managing the differences between pipeline stages and how to use stage-specific data and artifacts in a reproducible and automated fashion. Along the way, you also may have learned a little more about Docker commands, Bash scripting and the Ruby REPL.

Install MongoDB Community Edition 4.0 on Linux


MongoDB is an open source, schema-free, high-performance, document-oriented NoSQL database system (NoSQL meaning it doesn’t use tables, rows and columns), much like Apache CouchDB. It stores data in JSON-like documents with dynamic schemas for better performance.

MongoDB Packages

The following supported MongoDB packages come from MongoDB’s own repository:

  1. mongodb-org – A metapackage that will install the following four component packages automatically.
  2. mongodb-org-server – Contains the mongod daemon and related configuration and init scripts.
  3. mongodb-org-mongos – Contains the mongos daemon.
  4. mongodb-org-shell – Contains the mongo shell.
  5. mongodb-org-tools – Contains the MongoDB tools: mongo, mongodump, mongorestore, mongoexport, mongoimport, mongostat, mongotop, bsondump, mongofiles, mongooplog and mongoperf.

In this article, we will walk you through the process of installing MongoDB 4.0 Community Edition on RHEL, CentOS, Fedora, Ubuntu and Debian servers with the help of the official MongoDB repository, using .rpm and .deb packages on 64-bit systems only.

Step 1: Adding MongoDB Repository

First, we need to add the official MongoDB repository to install MongoDB Community Edition on 64-bit platforms.

On Red Hat, CentOS and Fedora

Create a file /etc/yum.repos.d/mongodb-org-4.0.repo to install MongoDB directly, using the yum command.

# vi /etc/yum.repos.d/mongodb-org-4.0.repo

Now add the following repository configuration to the file.

[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

On Ubuntu Systems

The MongoDB repository only provides packages for the 18.04 LTS (Bionic), 16.04 LTS (Xenial) and 14.04 LTS (Trusty Tahr) long-term supported 64-bit Ubuntu releases.

To install MongoDB Community Edition on Ubuntu, you need to first import the public key used by the package management system.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4

Next, create a MongoDB repository file and update the repository as shown.

On Ubuntu 18.04
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Ubuntu 16.04
$ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Ubuntu 14.04
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update

On Debian Systems

The MongoDB repository only provides packages for 64-bit Debian 9 Stretch and Debian 8 Jessie. To install MongoDB on Debian, you need to run the following series of commands:

On Debian 9
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
$ echo "deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Debian 8
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
$ echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/4.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update

Step 2: Installing MongoDB Community Edition Packages

Once the repository is added, run the following command to install MongoDB 4.0.

# yum install -y mongodb-org               [On RPM based Systems]
$ sudo apt-get install -y mongodb-org      [On DEB based Systems]

To install a particular MongoDB release version, include each component package individually and add the version number to the package name, as shown in the following example:

-------------- On RPM based Systems --------------
# yum install -y mongodb-org-4.0.6 mongodb-org-server-4.0.6 mongodb-org-shell-4.0.6 mongodb-org-mongos-4.0.6 mongodb-org-tools-4.0.6

-------------- On DEB based Systems --------------
$ sudo apt-get install -y mongodb-org=4.0.6 mongodb-org-server=4.0.6 mongodb-org-shell=4.0.6 mongodb-org-mongos=4.0.6 mongodb-org-tools=4.0.6

Step 3: Configure MongoDB Community Edition

Open the /etc/mongod.conf file and verify the basic settings shown below. If any of them are commented out, un-comment them. (MongoDB 4.0 uses a YAML-format configuration file.)

# vi /etc/mongod.conf
systemLog:
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
storage:
  dbPath: /var/lib/mongo

Note: This step is only applicable for Red Hat based distributions, Debian and Ubuntu users can ignore it.

Now open port 27017 on the firewall.

-------------- On FirewallD based Systems --------------
# firewall-cmd --zone=public --add-port=27017/tcp --permanent
# firewall-cmd --reload

-------------- On IPtables based Systems --------------
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 27017 -j ACCEPT

Step 4: Run MongoDB Community Edition

Now it’s time to start the mongod process by issuing the following command:

# service mongod start
OR               
$ sudo service mongod start

You can make sure that the mongod process has started successfully by checking the /var/log/mongodb/mongod.log log file for a line reading:

2019-03-05T01:33:47.121-0500 I NETWORK  [initandlisten] waiting for connections on port 27017

Also you can start, stop or restart mongod process by issuing the following commands:

# service mongod start
# service mongod stop
# service mongod restart

Now enable mongod process at system boot.

# systemctl enable mongod.service     [On SystemD based Systems]
# chkconfig mongod on                 [On SysVinit based Systems]

Step 5: Begin using MongoDB

Connect to your MongoDB shell by using following command.

# mongo

Command Output:

MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("70ffe350-a41f-42b9-871a-17ccde28ba24") }
MongoDB server version: 4.0.6
Welcome to the MongoDB shell.

This command will connect to your MongoDB database. Run the following basic commands.

> show dbs
> show collections
> show users
> use <db name>
> exit
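
As a minimal sketch (the testdb database, people collection and sample document below are made-up examples), you can also run one-off commands non-interactively with the --eval option:

# mongo --quiet --eval 'db.getSiblingDB("testdb").people.insertOne({login: "dev", name: "developer"})'
# mongo --quiet --eval 'printjson(db.getSiblingDB("testdb").people.find().toArray())'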

Step 6: Uninstall MongoDB Community Edition

To completely uninstall MongoDB, you must delete the MongoDB applications, configuration files and any directories containing data and logs.

The following instructions will walk you through the process of removing MongoDB from your system.

On RHEL, CentOS and Fedora

# service mongod stop
# yum erase $(rpm -qa | grep mongodb-org)
# rm -r /var/log/mongodb
# rm -r /var/lib/mongo

On Debian and Ubuntu

$ sudo service mongod stop
$ sudo apt-get purge mongodb-org*
$ sudo rm -r /var/log/mongodb
$ sudo rm -r /var/lib/mongodb

For more information visit official page at http://docs.mongodb.org/manual/contents/.

Source

Bash Case Statement | Linuxize

Bash case statements are generally used to simplify complex conditionals when you have multiple different choices. Using the case statement instead of nested if statements will help you make your bash scripts more readable and easier to maintain.

The Bash case statement is similar in concept to the JavaScript or C switch statement. The main difference is that, unlike the C switch statement, the Bash case statement doesn’t continue to search for a pattern match once it has found one and executed the statements associated with that pattern.

In this tutorial, we will cover the basics of the Bash case statements and show you how to use them in your shell scripts.

The Bash case statement takes the following form:

case EXPRESSION in

  PATTERN_1)
    STATEMENTS
    ;;

  PATTERN_2)
    STATEMENTS
    ;;

  PATTERN_N)
    STATEMENTS
    ;;

  *)
    STATEMENTS
    ;;
esac


  • Each case statement starts with the case keyword followed by the case expression and the in keyword. The statement ends with the esac keyword.
  • You can use multiple patterns separated by the | operator. The ) operator terminates a pattern list.
  • A pattern can have special characters.
  • A pattern and its associated commands are known as a clause.
  • Each clause must be terminated with ;;.
  • The commands corresponding to the first pattern that matches the expression are executed.
  • It is a common practice to use the wildcard asterisk symbol (*) as a final pattern to define the default case. This pattern will always match.
  • If no pattern is matched the return status is zero. Otherwise, the return status is the exit status of the executed commands.

Here is an example using the case statement in a bash script that will print the official language of a given country:

languages.sh
#!/bin/bash

echo -n "Enter the name of a country: "
read COUNTRY

echo -n "The official language of $COUNTRY is "

case $COUNTRY in

  Lithuania)
    echo -n "Lithuanian"
    ;;

  Romania | Moldova)
    echo -n "Romanian"
    ;;

  Italy | "San Marino" | Switzerland | "Vatican City")
    echo -n "Italian"
    ;;

  *)
    echo -n "unknown"
    ;;
esac


Save the custom script as a file and run it from the command line.

bash languages.sh


The script will ask you to enter a country. For example, if you type “Lithuania” it will match the first pattern and the echo command in that clause will be executed.

The script will print the following output:

Enter the name of a country: Lithuania
The official language of Lithuania is Lithuanian


If you enter a country that doesn’t match any pattern except the default wildcard asterisk, let’s say Argentina, the script will execute the echo command inside the default clause.

Enter the name of a country: Argentina
The official language of Argentina is unknown


By now you should have a good understanding of how to write bash case statements. They are often used to pass parameters to a shell script from the command line. For example, init scripts use case statements for starting, stopping or restarting services, as in the sketch below.
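
As a minimal sketch (myservice and the echo commands are placeholders for a real daemon's start/stop logic), such a dispatcher might look like this:

#!/bin/bash

# Minimal init-style dispatcher on the first script argument.
case "$1" in
  start)
    echo "Starting myservice..."
    ;;
  stop)
    echo "Stopping myservice..."
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

Running the script with an argument such as restart matches the restart clause, which re-invokes the script itself with stop and then start.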

Source

How to Install FFmpeg in Linux

FFmpeg is one of the best multimedia frameworks and contains various tools for different tasks. For example, ffplay is a portable media player that can be used to play audio/video files, ffmpeg can convert between different file formats, ffserver can be used to stream live broadcasts and ffprobe can analyze multimedia streams.

This framework is really powerful due to the diversity of tools available in it, which provide the best technical solution for the user. According to the description of FFmpeg on the official website, the reason for having such a great multimedia framework is the combination of the best free software options available.

The FFmpeg framework offers high security, and the reason for this is the seriousness of the developers when they review the code; it is always done with security in mind.

I am very sure you will find this framework very useful when you want to do some digital audio and video streaming or recording. There are many other practical things that you can do with the help of the FFmpeg framework, such as converting your wav file to an mp3 one, encoding and decoding your videos or even scaling them.

According to the official website, FFmpeg is able to do the following:

  1. decode multimedia files
  2. encode multimedia files
  3. transcode multimedia files
  4. mux multimedia files
  5. demux multimedia files
  6. stream multimedia files
  7. filter multimedia files
  8. play multimedia files

Let me take an example, a very simple one. The following command will convert your mp4 file into an avi file, simple as that.

# ffmpeg -i Lone_Ranger.mp4 Lone_Ranger.avi

The above command is only useful for explanation; it is not recommended in practice because the codec, bitrate and other specifics are not declared.
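
As a rough illustration only (the codec names and bitrates below are arbitrary examples, not recommendations), a more explicit version of that conversion might look like this:

# ffmpeg -i Lone_Ranger.mp4 -c:v mpeg4 -b:v 1M -c:a libmp3lame -b:a 192k Lone_Ranger.avi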

In the next part we will practice with some of the FFmpeg multimedia framework tools, but before doing that we have to install it in our Linux box.

How to Install FFmpeg Multimedia Framework in Linux

FFmpeg packages are offered for the most widely used Linux distributions, so installation is relatively easy. Let’s start with the installation of the FFmpeg framework in Ubuntu-based distributions.

Install FFmpeg on Ubuntu and Linux Mint

I will install FFmpeg via the PPA recommended in the official blog. Open a new terminal (CTRL+ALT+T) and then run the following commands.

$ sudo add-apt-repository ppa:mc3man/trusty-media
$ sudo apt-get update
$ sudo apt-get install ffmpeg
$ ffmpeg -version

Install FFmpeg on Debian

To install FFmpeg, first you need to add the following lines to your /etc/apt/sources.list file. As per your distribution, replace ‘<mydist>’ with ‘stretch‘, ‘jessie‘, or ‘wheezy‘.

deb http://www.deb-multimedia.org <mydist> main non-free
deb-src http://www.deb-multimedia.org <mydist> main non-free

Then update system package sources and install FFmpeg with the following commands.

$ sudo apt-get update
$ sudo apt-get install deb-multimedia-keyring
$ sudo apt-get update
$ sudo apt-get install ffmpeg
$ ffmpeg -version

Install FFmpeg on CentOS and RHEL

To install FFmpeg on CentOS and RHEL distributions, you need to enable EPEL and RPM Fusion repository on the system using following commands.

To install and enable EPEL, use following command.

# yum install epel-release

To install and enable RPM Fusion, use following command on your distribution version.

-------------- On CentOS & RHEL 7.x -------------- 
# yum localinstall --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-7.noarch.rpm https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-7.noarch.rpm

-------------- On CentOS & RHEL 6.x --------------
# yum localinstall --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-6.noarch.rpm https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-6.noarch.rpm

After enabling repositories, run the following command to install FFmpeg:

# yum install ffmpeg ffmpeg-devel
# ffmpeg -version

Install FFmpeg on Fedora

On Fedora, you need to install and enable RPM Fusion to install FFmpeg as shown.

$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install ffmpeg ffmpeg-devel
$ ffmpeg -version

FFmpeg Compiling from Source

Compiling software from source is not the easiest thing in the world, but with the right instructions you will be able to do it. First make sure your system meets all the dependencies. These dependencies can be installed with the help of the following commands.

First, tell the system to pull down the latest packages.

$ sudo apt-get update

Install the dependencies with the following command.

-------------- On Debian & Ubuntu --------------
$ sudo apt-get -y install autoconf automake build-essential libass-dev libfreetype6-dev libgpac-dev \
libsdl1.2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev libx11-dev \
libxext-dev libxfixes-dev pkg-config texi2html zlib1g-dev
-------------- On CentOS and RHEL --------------
# yum install glibc gcc gcc-c++ autoconf automake libtool git make nasm pkgconfig SDL-devel \
a52dec a52dec-devel alsa-lib-devel faac faac-devel faad2 faad2-devel freetype-devel giflib gsm gsm-devel \
imlib2 imlib2-devel lame lame-devel libICE-devel libSM-devel libX11-devel libXau-devel libXdmcp-devel \
libXext-devel libXrandr-devel libXrender-devel libXt-devel libogg libvorbis vorbis-tools mesa-libGL-devel \
mesa-libGLU-devel xorg-x11-proto-devel zlib-devel libtheora theora-tools ncurses-devel libdc1394 libdc1394-devel \
amrnb-devel amrwb-devel opencore-amr-devel

Then use the following command to create a new directory for the FFmpeg sources. This is the directory where the source files will be downloaded.

$ mkdir ~/ffmpeg_sources

Now compile and install yasm assembler used by FFmpeg by running the following commands.

$ cd ~/ffmpeg_sources
$ wget http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
$ tar xzvf yasm-1.3.0.tar.gz
$ cd yasm-1.3.0
$ ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
$ make
$ make install
$ make distclean
$ export "PATH=$PATH:$HOME/bin"

After you have successfully installed the yasm assembler, it is time to install the various encoders that will be used with the specific FFmpeg tools. Use the following commands to install the H.264 video encoder.

$ cd ~/ffmpeg_sources
$ wget http://download.videolan.org/pub/x264/snapshots/last_x264.tar.bz2
$ tar xjvf last_x264.tar.bz2
$ cd x264-snapshot*
$ ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
$ make
$ make install
$ make distclean

Another useful encoder is the libfdk-aac AAC audio encoder.

$ cd ~/ffmpeg_sources
$ wget -O fdk-aac.zip https://github.com/mstorsjo/fdk-aac/zipball/master
$ unzip fdk-aac.zip
$ cd mstorsjo-fdk-aac*
$ autoreconf -fiv
$ ./configure --prefix="$HOME/ffmpeg_build" --disable-shared
$ make
$ make install
$ make distclean

Install libopus audio decoder and encoder.

$ cd ~/ffmpeg_sources
$ wget http://downloads.xiph.org/releases/opus/opus-1.1.tar.gz
$ tar xzvf opus-1.1.tar.gz
$ cd opus-1.1
$ ./configure --prefix="$HOME/ffmpeg_build" --disable-shared
$ make
$ make install
$ make distclean

Now, it’s time to install ffmpeg from source.

$ cd ~/ffmpeg_sources
$ wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
$ tar xjvf ffmpeg-snapshot.tar.bz2
$ cd ffmpeg
$ PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig"
$ export PKG_CONFIG_PATH
$ ./configure --prefix="$HOME/ffmpeg_build" --extra-cflags="-I$HOME/ffmpeg_build/include" \
   --extra-ldflags="-L$HOME/ffmpeg_build/lib" --bindir="$HOME/bin" --extra-libs="-ldl" --enable-gpl \
   --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus \
   --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
$ make
$ make install
$ make distclean
$ hash -r

Note: If you have not installed certain encoders, make sure to remove '--enable-encoder_name' from the above './configure' command so the installation is done without any problem.

There are many encoders that you can install, but for the purpose of this article I am not going to install all of them; you can install them using the following official guides.

  1. FFmpeg Compilation Guide for Ubuntu
  2. FFmpeg Compilation Guide for CentOS

Conclusion

In this first part, we updated our readers with the latest news about the FFmpeg multimedia framework and showed them how to install it on their Linux machines. The next part will be totally about learning how to use the amazing tools inside this leading multimedia framework.

Update: The Part 2 of this FFmpeg series is published, which shows some useful ffmpeg command-line usage to perform various audio, video and image conversion procedures: 15 Useful ‘FFmpeg’ Commands for Video, Audio and Image Conversion in Linux.

15 Useful ‘FFmpeg’ Commands for Video, Audio and Image Conversion in Linux – Part 2

In this article, we are going to look at some options and examples of how you can use the FFmpeg multimedia framework to perform various conversion procedures on audio and video files.


For more details about FFmpeg and steps to install it in different Linux distros, read the article from the link below:

 FFmpeg Multimedia Framework Installation Guide on Linux – Part 1

Useful FFmpeg Commands

The FFmpeg utility supports almost all major audio and video formats. If you want to check the supported formats, you can use the ffmpeg -formats command to list them all. If you are new to this tool, here are some handy commands that will give you a better idea of the capabilities of this powerful tool.
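
For example, to page through the lists of supported container formats and codecs:

$ ffmpeg -formats | less
$ ffmpeg -codecs | less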

1. Get Video File Information

To get information about a file (say video.flv), run the following command. Remember that ffmpeg normally requires you to specify an output file, but in this case we only want to get some information about the input file.

$ ffmpeg -i video.flv -hide_banner


Note: The -hide_banner option is used to hide the copyright notice shown by ffmpeg, such as build options and library versions; it suppresses printing this information.

For example, if you run the above command without adding the -hide_banner option, it will print all the FFmpeg copyright information as shown.

$ ffmpeg -i video.flv

2. Split a video into images

To turn a video into a number of images, run the command below. The command generates files named image1.jpg, image2.jpg and so on…

$ ffmpeg -i video.flv image%d.jpg

After successful execution of the above command, you can verify that the video was turned into multiple images using the following ls command.

$ ls -l

total 11648
-rw-r--r-- 1 tecmint tecmint   14592 Oct 19 13:19 image100.jpg
-rw-r--r-- 1 tecmint tecmint   14603 Oct 19 13:19 image101.jpg
-rw-r--r-- 1 tecmint tecmint   14584 Oct 19 13:19 image102.jpg
-rw-r--r-- 1 tecmint tecmint   14598 Oct 19 13:19 image103.jpg
-rw-r--r-- 1 tecmint tecmint   14634 Oct 19 13:19 image104.jpg
-rw-r--r-- 1 tecmint tecmint   14693 Oct 19 13:19 image105.jpg
-rw-r--r-- 1 tecmint tecmint   14641 Oct 19 13:19 image106.jpg
-rw-r--r-- 1 tecmint tecmint   14581 Oct 19 13:19 image107.jpg
-rw-r--r-- 1 tecmint tecmint   14508 Oct 19 13:19 image108.jpg
-rw-r--r-- 1 tecmint tecmint   14540 Oct 19 13:19 image109.jpg
-rw-r--r-- 1 tecmint tecmint   12219 Oct 19 13:18 image10.jpg
-rw-r--r-- 1 tecmint tecmint   14469 Oct 19 13:19 image110.jpg
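
If you do not need every frame, the fps video filter can limit how many images are produced; for example, the following variant (the value 1 is just an example) extracts roughly one image per second of video:

$ ffmpeg -i video.flv -vf fps=1 image%d.jpg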

3. Convert images into a video

To turn a number of images into a video sequence, use the following command. It will transform all the images in the current directory (named image1.jpg, image2.jpg, etc.) into a video file named imagestovideo.mpg.

There are many other image formats (such as jpeg, png, jpg, etc) you can use.

$ ffmpeg -f image2 -i image%d.jpg imagestovideo.mpg

4. Convert a video into mp3 format

To convert an .flv format video file to an MP3 file, run the following command.

$ ffmpeg -i video.flv -vn -ar 44100 -ac 2 -ab 192 -f mp3 audio.mp3

Description of the options used in the above command:

  1. -vn: disables video recording during the conversion.
  2. -ar: sets the audio sampling rate in Hz.
  3. -ab: sets the audio bitrate.
  4. -ac: sets the number of audio channels.
  5. -f: output format.

5. Convert flv video file to mpg format

To convert a .flv video file to .mpg, use the following command.

$ ffmpeg -i video.flv video.mpg

6. Convert video into animated gif

To convert a .flv video file to an animated, uncompressed gif file, use the command below.

$ ffmpeg -i video.flv animated.gif

7. Convert mpg video file to flv

To convert a .mpg file to .flv format, use the following command.

$ ffmpeg -i video.mpg -ab 26k -f flv video1.flv

8. Convert avi video file to mpeg

To convert a .avi file to mpeg for dvd players, run the command below:

$ ffmpeg -i video.avi -target pal-dvd -ps 2000000000 -aspect 16:9 video.mpeg

Explanation of the options used in the above command:

  1. -target pal-dvd: output format.
  2. -ps 2000000000: maximum size for the output file, in bits (here, 2 Gb).
  3. -aspect 16:9: widescreen.

9. Convert a video to CD or DVD format

To create a video CD or DVD, FFmpeg makes it simple by letting you specify a target type, and it sets the required format options automatically.

You can set a target type as follows: add -target type to the command line; type can be one of the following: vcd, svcd, dvd, dv, pal-vcd or ntsc-svcd.

To create a VCD, you can run the following command:

$ ffmpeg -i video.mpg -target vcd vcd_video.mpg

10. Extract audio from video file

To extract sound from a video file and save it as an Mp3 file, use the following command:

$ ffmpeg -i video1.avi -vn -ar 44100 -ac 2 -ab 192 -f mp3 audio3.mp3

Explanation of the options used in the above command:

  1. Source video: video1.avi
  2. Audio bitrate: 192kb/s
  3. Output format: mp3
  4. Generated sound: audio3.mp3

11. Mix a video and audio together

You can also mix a video with a sound file as follows:

$ ffmpeg -i audio.mp3 -i video.avi video_audio_mix.mpg

12. Increase/Reduce Video Playback Speed

To increase video playback speed, run this command. The -vf option sets the video filters that help adjust the speed.

$ ffmpeg -i video.mpg -vf "setpts=0.5*PTS" highspeed.mpg

You can also reduce video speed as follows:

$ ffmpeg -i video.mpg -vf "setpts=4.0*PTS" lowerspeed.mpg -hide_banner
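
Note that setpts only changes the video timestamps. If you also want the audio sped up or slowed down to match, the atempo audio filter (which accepts values between 0.5 and 2.0) can be applied separately; a rough example:

$ ffmpeg -i video.mpg -filter:a "atempo=2.0" -vn fastaudio.mp3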

13. Compare/Test Video and Audio Quality

To compare videos and audio after converting, you can use the commands below. This helps you test video and audio quality.

$ ffplay video1.mp4

To test audio quality simply use the name of the audio file as follows:

$ ffplay audio_filename1.mp3

You can listen to them while they play and compare the qualities from the sound.

14. Add Photo or Banner to Audio

You can add a cover poster or image to an audio file using the following command; this comes in very handy for uploading MP3s to YouTube.

$ ffmpeg -loop 1 -i image.jpg -i Bryan\ Adams\ -\ Heaven.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4

15. Add subtitles to a Movie

If you have a separate subtitle file called subtitles.srt, you can use the following command to add subtitles to a movie file:

$ ffmpeg -i video.mp4 -i subtitles.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast video-output.mkv

Summary

That is all for now, but these are just a few examples of using FFmpeg; you can find more options for whatever you wish to accomplish. Remember to post a comment to provide information about how you use FFmpeg or if you have encountered errors while using it.

Reference: https://ffmpeg.org/

Source

Infrastructure monitoring: Defense against surprise downtime

A strong monitoring and alert system based on open source tools prevents problems before they affect your infrastructure.


Infrastructure monitoring is an integral part of infrastructure management. It is an IT manager’s first line of defense against surprise downtime. Severe issues can inject considerable downtime into live infrastructure, sometimes causing heavy losses of money and material.

Monitoring collects time-series data from your infrastructure so it can be analyzed to predict upcoming issues with the infrastructure and its underlying components. This gives the IT manager or support staff time to prepare and apply a resolution before a problem occurs.

A good monitoring system provides:

  1. Measurement of the infrastructure’s performance over time
  2. Node-level analysis and alerts
  3. Network-level analysis and alerts
  4. Downtime analysis and alerts
  5. Answers to the 5 W’s of incident management and root cause analysis (RCA):
    • What was the actual issue?
    • When did it happen?
    • Why did it happen?
    • What was the downtime?
    • What needs to be done to avoid it in the future?

Building a strong monitoring system

There are a number of tools available that can build a viable and strong monitoring system. The only decision to make is which to use; your answer lies in what you want to achieve with monitoring as well as various financial and business factors you must consider.

While some monitoring tools are proprietary, many open source tools, either unmanaged or community-managed software, will do the job even better than the closed source options.

In this article, I will focus on open source tools and how to use them to create a strong monitoring architecture.

Log collection and analysis

To say “logs are helpful” would be an understatement. Logs not only help in debugging issues; they also provide a lot of information to help you predict an upcoming issue. Logs are the first door to open when you encounter issues with software components.

Both Fluentd and Logstash can be used for log collection; the only reason I would choose Fluentd over Logstash is its independence from the Java process; it is written in C and Ruby and is widely supported by container runtimes like Docker and orchestration tools like Kubernetes.

Log analytics is the process of analyzing the log data you collect over time and producing real-time logging metrics. Elasticsearch is a powerful tool that can do just that.

Finally, you need a tool that can collect logging metrics and enable you to visualize the log trends using charts and graphs that are easy to understand. Kibana is my favorite option for that purpose.
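
If you go the Fluentd route, configuration boils down to defining log sources and a match section that ships them to Elasticsearch. The following is only a minimal sketch (the file paths, tag, and Elasticsearch hostname are placeholders), and it assumes the td-agent packaging of Fluentd plus the fluent-plugin-elasticsearch plugin:

$ sudo tee /etc/td-agent/td-agent.conf > /dev/null << 'EOF'
<source>
  @type tail
  path /var/log/myapp/app.log
  pos_file /var/log/td-agent/myapp.log.pos
  tag myapp.logs
  <parse>
    @type none
  </parse>
</source>

<match myapp.**>
  @type elasticsearch
  host es.example.com
  port 9200
  logstash_format true
</match>
EOF
$ sudo systemctl restart td-agent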

Because logs can hold sensitive information, here are a few security pointers to remember:

  • Always transport logs over a secure connection.
  • The logging/monitoring infrastructure should be implemented inside the restricted subnet.
  • Access to monitoring user interfaces (e.g., Kibana and Grafana) should be restricted to stakeholders and protected by authentication.

Node-level metrics

Not everything is logged!

Yes, you read that right: logs cover a piece of software or a process, not every component in the infrastructure.

Operating system disks, externally mounted data disks, Elastic Block Store volumes, CPU, I/O, network packets, inbound and outbound connections, physical memory, virtual memory, buffer space, and queues are some of the major components that rarely appear in logs unless something about them fails.

So, how could you collect this data?

Prometheus is one answer. You just need to install software-specific exporters on the virtual machine nodes and configure Prometheus to collect time-series data from those otherwise unmonitored components. Grafana uses the data Prometheus collects to provide a live visual representation of your nodes’ current status.
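
As a rough sketch of what that looks like in practice, the commands below run the Node Exporter on a node and point a minimal Prometheus configuration at it (hostnames, ports, and file paths are examples; adjust them for your environment):

# On each node: run the Prometheus Node Exporter (it listens on port 9100 by default)
$ ./node_exporter &

# On the Prometheus server: a minimal configuration scraping those nodes
$ sudo tee /etc/prometheus/prometheus.yml > /dev/null << 'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['vm1.example.com:9100', 'vm2.example.com:9100']
EOF
$ ./prometheus --config.file=/etc/prometheus/prometheus.yml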

If you are looking for a simpler solution to collect time-series metrics, consider Metricbeat, Elastic’s in-house open source tool, which can be used with Kibana to replace Prometheus and Grafana.

Alerts and notifications

You can’t take advantage of monitoring without alerts and notifications. Unless stakeholders—no matter where they are in this big, big world—receive a notification about an issue, there’s no way they can analyze and fix the issue, prevent the customer from being impacted, and avoid it in the future.

Prometheus, with predefined alerting rules using its in-house Alertmanager and Grafana, can send alerts based on configured rules. Sensu and Nagios are other open source tools that offer alerting and monitoring services.
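
As an illustration, a minimal Prometheus alerting rule that fires when a scrape target disappears might look like the following. The file path and group name are examples, and you still need to reference the rules file from prometheus.yml and point Prometheus at an Alertmanager:

$ sudo tee /etc/prometheus/alert-rules.yml > /dev/null << 'EOF'
groups:
  - name: node-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} has been down for more than 5 minutes"
EOF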

The only complaint people have about open source alerting tools is that the initial configuration can take time and seem hard, but once they are set up, these tools function better than proprietary alternatives.

However, open source tools’ biggest advantage is that we have control over their behavior.

Monitoring workflow and architecture

A good monitoring architecture is the backbone of a strong and stable monitoring system. It might look something like this diagram.

In the end, you must choose a tool based on your needs and infrastructure. The open source tools discussed in this article are used by many organizations to monitor their infrastructure and keep uptime high.

Source

Importing a VDI in VirtualBox

If you used to be a VMware user and are trying to switch to the open-source side of the Force with VirtualBox, you may run into difficulties importing an existing VDI file into VirtualBox. Actually, it’s quite easy if you know how.

The main difference between VMware and VirtualBox is that VMware captures a whole virtual machine in an image, whereas VirtualBox only handles images of a hard disk. So in VirtualBox’s world, you first need to create a new virtual machine before you can use an existing disk image.

 

  1. First copy your VDI file into VirtualBox’s virtual hard disks repository. On Mac OS X it’s $HOME/Library/VirtualBox/HardDisks/.

  2. Start VirtualBox and create a new virtual machine (according to the OS you expect to live on the VirtualBox image):

    virtualbox1.jpg

  3. When you’re asked for a hard disk image, select Use existing hard disk and click on the small icon on the right:

    virtualbox2.jpg

  4. This will bring you to the Virtual Media Manager. Click Add and select the VDI file from step 1.

    virtualbox3.jpg

  5. After leaving the Virtual Media Manager, you’ll be back in your virtual machine wizard. Now you can select your new VDI as existing hard disk and finalize the creation process.

    virtualbox4.jpg

  6. Back in the main window, you’re now able to start your new virtual machine:

    virtualbox5.jpg

It’s quite easy, if you know how.
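
Alternatively, if you prefer the command line, VBoxManage can perform the same steps; the sketch below uses example names and paths, so adjust them to your setup:

# Create and register a new virtual machine (name and OS type are examples)
$ VBoxManage createvm --name "imported-vm" --ostype Ubuntu_64 --register

# Add a SATA controller and attach the existing VDI file to it
$ VBoxManage storagectl "imported-vm" --name "SATA" --add sata
$ VBoxManage storageattach "imported-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium $HOME/Library/VirtualBox/HardDisks/your-disk.vdi

# Boot the new virtual machine
$ VBoxManage startvm "imported-vm"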

RStudio Connect Deployments with GitHub Webhooks and Jenkins

New content management Connect server APIs are easy to integrate with programmatic deployment workflows.

Have you heard!? RStudio Connect 1.7.0 has support for programmatic deployment in the RStudio Connect Server API. These new APIs let your deployment engineers craft custom deployment workflows.

This article demonstrates programmatic deployment of a Shiny application with GitHub webhooks and a Jenkins Freestyle project.

What are we trying to build?

I have a data product (in this case a shiny application) deployed to my RStudio Connect server. I also have a GitHub repository for the application where I’ve version controlled the app code. I want to link and automate the application update process with my GitHub workflow, i.e. every time I push a code change to GitHub, I’d like the deployed application on Connect to automatically be updated with those changes.

Basic Build Plan

This workflow assumes that the content has already been deployed to my Connect Server at least once. The initial deployment could be achieved programmatically or through traditional IDE push-button / rsconnect deployment methods. The content management API for RStudio Connect can be leveraged to perform the initial bundle upload programmatically.

To read more about the content management APIs and view existing recipes, please see the following resources:

After reviewing the API documentation and example scripts, flesh out your plan to include actionable steps and tools. My updated diagram shows each process and the required resources for defining functional automation:

Actual Build Plan

Note: I started this project with a brand new, clean Jenkins Server. I use Ansible to create (and tear down) small Jenkins servers that live for the duration of my experiments. This article will not cover the basics of installing and configuring a Jenkins server.

Development and Git Branching

Application development occurs in the RStudio IDE. I plan to use a git branching strategy so that new changes can be kept separate from the master branch and reviewed before merging. The GitHub repository I created to keep the application code can be viewed here:

GitHub repository for the Shiny application
– deployment-bundle/: app.R, manifest.json
– README.md

This repository contains a README file (not required) and a single directory with all the application code (in this case only an app.R file) as well as the manifest file, which can be generated with the rsconnect package in the RStudio IDE: rsconnect::writeManifest()
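
If you prefer to generate the manifest from the command line (for example in a CI job) rather than from the IDE console, the same call can be run through Rscript; this assumes the rsconnect package is installed and the application lives in the deployment-bundle directory:

$ Rscript -e 'rsconnect::writeManifest(appDir = "deployment-bundle")'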

GitHub Webhooks

The next step of the GitHub setup is to create a webhook so that the Jenkins server can be notified of all new changes to master.

In the GitHub repository, navigate to the Settings page, then select Webhooks from the left sidebar. Add a new webhook to see the management form shown here:

Create a new webhook for Jenkins

For the Payload URL field, provide the URL of your Jenkins server with /github-webhook/ appended to it. These are the selections I set for the webhook:

Payload URL: http://[EXAMPLE-AWS-INSTANCE]/github-webhook/
Content type: application/json
Secret: [blank] — I did not use this
Event triggers: Just the push event
Active: Check

Jenkins GitHub Integration Plugin

Now that the webhook is in place, the next step is to configure the receiving end. Jenkins needs the GitHub Integration plugin to receive POST payloads coming from GitHub every time the push event triggers.

Add the GitHub plugin to Jenkins:

  • Manage Jenkins > Manage Plugins
Manage Jenkins Plugins

Check the Installed tab to see if the GitHub Integration Plugin already exists. If not, search for it in the Available tab, then download and install it.

Docker in Jenkins

In order to streamline the deployment build process for this project, I’ve chosen to use the Docker image provided in the programmatic deployment example repository: rstudio/connect-api-deploy-shiny.

There are many ways to incorporate Docker containers into Jenkins projects. Rather than leverage an external container registry and a Jenkins-Docker plugin, I’ll show a quick-and-dirty way, invoking Docker directly with shell commands.

Note: My Jenkins server is built with the Docker service installed, so this will work for my project, but it might not work for yours. Take the time to investigate what Docker integrations exist and are considered best practices if you are working on a shared or pre-existing Jenkins installation.
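
One practical check before relying on this approach: the user Jenkins runs as must be allowed to talk to the Docker daemon. On a typical package-based install where Jenkins runs as the jenkins user, something like the following does the trick:

# Add the jenkins user to the docker group and restart Jenkins to pick it up
$ sudo usermod -aG docker jenkins
$ sudo systemctl restart jenkins

# Sanity check (from a Jenkins build step): this should print daemon details, not a permission error
$ docker info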

In a second GitHub repository, I’ve version controlled all the pieces of the deployment environment as well as the upload-and-deploy.sh script that will be used to interact with the RStudio Connect content management API. This repository is separate from the Shiny application code repo so that I can have a single, centralized location for keeping just these pieces of the process.

GitHub repository for the dockerfile and deployment scripts:
– docker: Dockerfile
– upload-and-deploy.sh (modified from rstudio/connect-api-deploy-shiny)
– README.md

Create a Jenkins Project

All the parts are in place, so finally it’s time to put everything together in Jenkins.

Start a New Item > Freestyle Project

  • Give your project a name (e.g. “basic-app-deploy”)
  • I plan on linking this project to only one piece of content, so the name of the Jenkins project can reference my specific Shiny application.
Start a Freestyle Project

Sidebar: Why Jenkins Freestyle?

If there were a crawl-walk-run strategy for working with Jenkins, Freestyle projects might be the crawl step. If you’re already familiar with Jenkins, you might be more interested in setting up a pipeline project or using a Jenkinsfile to structure the workflow.

Pros and Cons of Jenkins Freestyle:

Pro: Low learning curve if you’re new to Jenkins
Pro: Nice way to learn the Jenkins web interface
Pro: Quick way to accomplish simple jobs (this is not a complex build)

Con: Way too much clicking through web forms
Con: Job is not defined as code

Navigate the Freestyle Project Webform

Once you have a new project set up, step through the freestyle webform to complete the configuration:

General

Source Code Management

Build Triggers

  • Check: GitHub hook trigger for GITScm polling

Build Environment > Bindings

Programmatic deployment requires an RStudio Connect API key. Generate an API key through the Connect user interface:

RStudio Connect API Keys

Add Credentials: Save the API key as a Secret Text in Jenkins Credentials Provider:

You can expose secret texts to the Build Environment through the Bindings option:

  • Check: Use secret texts or files
    Secret text:
    – Variable: PUBLISHER_KEY (choose a name)
  • Credentials: Add > Jenkins > Add Credentials
Save the API key as a Secret Text in Jenkins Credentials Provider

Build

The build pane allows for many different types of tasks. For simplicity, I chose the Execute shell option. I created three blocks of shell build tasks, but the separation is only for readability:

Execute Shell Block 1: Read in the Dockerfile and deployment shell script from GitHub

rm -rf prog-deploy-jenkins
git clone https://github.com/kellobri/prog-deploy-jenkins.git
stat prog-deploy-jenkins/docker/Dockerfile
chmod 755 prog-deploy-jenkins/upload-and-deploy.sh

Execute Shell Block 2: Build the Docker image

cd prog-deploy-jenkins/
docker build -t rstudio-connect-deployer:latest docker

Execute Shell Block 3: Run the Docker container and deployment script

docker run --rm \
--privileged=true \
-e CONNECT_SERVER="http://ec2-52-90-255-153.compute-1.amazonaws.com:3939/" \
-e CONNECT_API_KEY=$PUBLISHER_KEY \
-v $(pwd):/content \
-w /content \
rstudio-connect-deployer:latest \
/content/prog-deploy-jenkins/upload-and-deploy.sh 5c276b83-2eeb-427b-95a6-ac8915e22bfd /content/deployment-bundle

In this block, I reference the PUBLISHER_KEY credential created in the Build Environment step earlier.

Content GUID lookup

I have also hard-coded two additional important pieces of information: the CONNECT_SERVER address and the application GUID. You could easily create a secret text credential for the server address, like the one we made for the API key. The application GUID is an identifying piece of information that you’ll have to look up in the RStudio Connect user interface.

The app GUID is listed at the bottom of the Info settings tab of the deployed content on RStudio Connect.

Project Finishing Touches:

  • Save your Jenkins freestyle project
  • Test it by pushing a change to GitHub!

Demo of a Successful Test

Useful Jenkins Debugging Areas

From the Jenkins dashboard, click on your project. Here you can go back to the webform and change something by clicking the ‘Configure’ link. To see details about the last build, click on that build link; from here you can access the console output for the build — this is usually the first place I go when a build fails.

Console Output for Jenkins Debugging

Also great for iteration and debugging: You can always schedule and run build tests directly from Jenkins without pushing random code changes to GitHub.

Success — What’s Next?

Congrats! Here are some places to explore next:

What if I need to do this for five more shiny apps?

Use this working freestyle project as a template for a new project!

From the Jenkins Dashboard, Select: New Item > Name the project > Then scroll to the bottom of the project type selection options and use auto-complete to find the project you’d like to copy from:

Use your first project as a template for others

Great — But what if I need to do this for 100 more shiny apps?

Remember that crawl-walk-run strategy that I mentioned earlier? If you need to put CI/CD in place for 100 shiny applications, you’re probably going to want to consider some of the other methods for interacting with Jenkins.

Freestyle projects are a great learning tool — and can be helpful for getting small projects off the ground quickly. But I wouldn’t recommend using them long term unless clicking around in webforms is your favorite thing ever.

If you’re looking to do large-scale programmatic deployments with Jenkins, I recommend moving toward a workflow structured on pipeline projects and Jenkinsfiles.


Key Resources in this Article:

RStudio Community is a great place to start conversations and share your ideas about how to grow and adapt these workflows.

Running The RadeonSI NIR Back-End With Mesa 19.1 Git

It’s been a number of months since I last tried the RadeonSI NIR back-end, which is being developed as part of the OpenGL 4.6 SPIR-V work for this AMD OpenGL driver, and eventually RadeonSI may end up switching from TGSI to NIR by default. Given the time since that last look and the increasing popularity of NIR, this weekend I ran some fresh tests of the NIR back-end with a Radeon Vega graphics card.

The RadeonSI NIR support isn’t enabled by default but requires setting the R600_DEBUG=nir environment variable to activate it. The developers have been pursuing this support to re-use existing code as part of the long-awaited OpenGL 4.6 SPIR-V ingestion support, which is still ongoing.
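
In practice, that just means prefixing whatever you launch with the variable; glxgears below is only a stand-in for any OpenGL application or game:

# Run a single OpenGL application through the NIR back-end
$ R600_DEBUG=nir glxgears

# Or export it for the whole session before launching games and benchmarks
$ export R600_DEBUG=nir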

The last time I tried out RadeonSI NIR months ago, it was causing issues with a few OpenGL games, but fortunately that seems to be a thing of the past. When trying all of the frequently benchmarked OpenGL Linux games with RadeonSI NIR on Mesa 19.1-devel, I didn’t run into any game problems, corruption, or other nuisances to deal with... The experience was great.

This round of testing was with Mesa 19.1-devel via the Padoka PPA on Ubuntu 18.10 and using the Linux 5.0 Git kernel. The Radeon RX Vega 64 graphics card was what I used for this quick weekend comparison.

Besides being pleased with running into no visible issues when using the NIR intermediate representation by RadeonSI Gallium3D, I also ran some benchmarks comparing the stock behavior to the Linux OpenGL gaming performance when taking the NIR code-path. Benchmarks were done using the Phoronix Test Suite.

Source

Alacritty – A Fast GPU-Accelerated Terminal Emulator for Linux

Alacritty is a free, open-source, fast, cross-platform terminal emulator that uses the GPU (Graphics Processing Unit) for rendering, implementing certain optimizations that are not available in many other terminal emulators for Linux.

Alacritty is focused on two goals: simplicity and performance. The performance goal means it should be faster than any other terminal emulator available. The simplicity goal means it doesn’t support features such as tabs or splits, which can easily be provided by a terminal multiplexer such as tmux.
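
For example, once Alacritty is installed you can get tab- and split-like behavior simply by launching it with tmux (assuming tmux is installed):

$ alacritty -e tmux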

Prerequisites

Alacritty requires the most recent stable Rust compiler to build it.

Install Required Dependency Packages

1. First, install the Rust programming language using the rustup installer script and follow the on-screen instructions.

$ curl https://sh.rustup.rs -sSf | sh

2. Next, you need to install a few additional libraries to build Alacritty on your Linux distribution, as shown.

--------- On Ubuntu/Debian --------- 
# apt-get install cmake libfreetype6-dev libfontconfig1-dev xclip

--------- On CentOS/RHEL ---------
# yum install cmake freetype-devel fontconfig-devel xclip
# yum group install "Development Tools"

--------- On Fedora ---------
# dnf install cmake freetype-devel fontconfig-devel xclip

--------- On Arch Linux ---------
# pacman -S cmake freetype2 fontconfig pkg-config make xclip

--------- On openSUSE ---------
# zypper install cmake freetype-devel fontconfig-devel xclip 

Installing Alacritty Terminal Emulator in Linux

3. Once you have installed all the required packages, clone the Alacritty source code repository and compile it using the following commands.

$ cd Downloads
$ git clone https://github.com/jwilm/alacritty.git
$ cd alacritty
$ cargo build --release

4. Once the compilation process is complete, the binary will be available at ./target/release/alacritty. Copy the binary to a directory in your PATH, and on a desktop system you can also add the application to your system menus, as follows.

# cp target/release/alacritty /usr/local/bin
# cp Alacritty.desktop ~/.local/share/applications

5. Next, install the manual page using the following command.

# gzip -c alacritty.man | sudo tee /usr/local/share/man/man1/alacritty.1.gz > /dev/null

6. To add shell completion settings to your Linux shell, do the following.

--------- On Bash Shell ---------
# cp alacritty-completions.bash  ~/.alacritty
# echo "source ~/.alacritty" >> ~/.bashrc

--------- On ZSH Shell ---------
# cp alacritty-completions.zsh /usr/share/zsh/functions/Completion/X/_alacritty

--------- On FISH Shell ---------
# cp alacritty-completions.fish /usr/share/fish/vendor_completions.d/alacritty.fish

7. Finally, launch Alacritty from your system menu. When run for the first time, a configuration file will be created at $HOME/.config/alacritty/alacritty.yml, which you can edit to configure it.

Alacritty Terminal Emulator

For more information and configuration options, go to the Alacritty Github repository.

Alacritty is a cross-platform, fast, GPU-accelerated terminal emulator focused on speed and performance. Although it is ready for daily use, many features, such as scrollback, are yet to be added.

DomTerm – A Terminal Emulator and Console for Linux

DomTerm is a free, open-source, feature-rich, modern terminal emulator and screen multiplexer (like tmux or GNU Screen) based on web technologies, with a rich-text console written mostly in JavaScript.

It uses libwebsockets as a back end and a byte protocol to communicate with it, which means you can run it in a browser using WebSockets, embed it in a third-party application, or simply run it as a regular terminal emulator program.

DomTerm Terminal Emulator for Linux

DomTerm Features:

  • It is xterm-compatible and supports multiple sub-commands.
  • It comes with multiple applications, which include an xterm-compatible terminal emulator, a command console, a chat/talk window, and a read-eval-print loop for an interactive scripting language.
  • Supports multiplexing and sessions.
  • Its back-end allows for printing images and graphics as well as rich text.
  • Supports controlling user preferences via a CSS file.
  • Supports keyboard shortcuts with smart line-wrapping.
  • Optionally allows input editing and moving the cursor with the mouse.
  • Supports preserving TAB characters with automatic pagination.
  • Supports draggable tabs and panes.
  • Automatically turns URLs and email addresses in output into links, and much more.
  • An experimental atom-domterm package is available for the Atom editor.

How to Install DomTerm Terminal Emulator in Linux

There are no pre-built DomTerm packages available, so you need to build it from source. Before downloading the source code and compiling it, first install the following dependencies on your respective Linux distribution using your package manager, as shown.

On Debian/Ubuntu

$ sudo apt-get update
$ sudo apt-get install git gcc make cmake automake libjson-c-dev pkg-config asciidoctor libmagic-dev zlib1g-dev qt5-qmake qt5-default libqt5webengine5 libqt5webchannel5-dev qtwebengine5-dev

On RHEL/CentOS

$ sudo yum update
$ sudo yum install gcc make automake autoconf texinfo patch libwebsockets libwebsockets-devel json-c json-c-devel openssl-devel file-devel libcap-devel asciidoctor

On Fedora

$ sudo dnf update
$ sudo dnf install gcc make automake autoconf texinfo patch libwebsockets libwebsockets-devel json-c json-c-devel openssl-devel file-devel libcap-devel asciidoctor

DomTerm also requires libwebsockets version 2.2 or later. Therefore, you need to build and install the latest version from source as shown.

$ cd ~/Downloads
$ git clone https://github.com/warmcat/libwebsockets
$ cd libwebsockets
$ mkdir build
$ cd build
$ cmake -DLWS_WITH_SSL=0 -DLWS_WITH_ZIP_FOPS=1 ..
$ make

Next clone the DomTerm source repository, build and install it using the following commands.

$ cd ~/Downloads/
$ git clone https://github.com/PerBothner/DomTerm
$ cd DomTerm
$ autoreconf
$ ./configure --with-qtwebengine --with-libwebsockets=$HOME/Downloads/libwebsockets/build
$ make
$ sudo make install

Once you have successfully installed DomTerm on your Linux distribution, you can search for it from your system menu or run the following command to launch it.

$ domterm

DomTerm Homepage: https://domterm.org/

That’s all! DomTerm is a full-featured terminal emulator and rich-text console, and it also comes with several other useful applications.
