Setting Up Real-Time Monitoring with ‘Ganglia’ for Grids and Clusters of Linux Servers

Ever since system administrators have been in charge of managing servers and groups of machines, monitoring applications have been among their best friends. You will probably be familiar with tools like Nagios, Zabbix, Icinga, and Centreon. While those are the heavyweights of monitoring, setting them up and fully taking advantage of their features may be somewhat difficult for new users.

In this article we will introduce you to Ganglia, a monitoring system that is easily scalable and allows you to view a wide variety of system metrics for Linux servers and clusters (plus graphs) in real time.

Install Ganglia Monitoring in Linux

Ganglia lets you set up grids (locations) and clusters (groups of servers) for better organization.

Thus, you can create a grid composed of all the machines in a remote environment, and then group those machines into smaller sets based on other criteria.

In addition, Ganglia’s web interface is optimized for mobile devices, and also allows you to export data in .csv and .json formats.

Our test environment will consist of a central CentOS 7 server (IP address 192.168.0.29) where we will install Ganglia, and an Ubuntu 14.04 machine (192.168.0.32), the box that we want to monitor through Ganglia’s web interface.

Throughout this guide we will refer to the CentOS 7 system as the master node, and to the Ubuntu box as the monitored machine.

Installing and Configuring Ganglia

To install the monitoring utilities on the master node, follow these steps:

1. Enable the EPEL repository and then install Ganglia and related utilities from there:

# yum update && yum install epel-release
# yum install ganglia rrdtool ganglia-gmetad ganglia-gmond ganglia-web 

The packages installed in the step above, along with ganglia (the application itself), perform the following functions:

  1. rrdtool, the Round-Robin Database, is a tool that’s used to store and display the variation of data over time using graphs.
  2. ganglia-gmetad is the daemon that collects monitoring data from the hosts that you want to monitor. On those hosts, and on the master node, it is also necessary to install ganglia-gmond (the monitoring daemon itself).
  3. ganglia-web provides the web frontend where we will view the historical graphs and data about the monitored systems.
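
If you want to double-check that everything was pulled in, you can query the RPM database for the packages installed above (an optional verification step):

# rpm -q ganglia ganglia-gmetad ganglia-gmond ganglia-web rrdtool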

2. Set up authentication for the Ganglia web interface (/usr/share/ganglia). We will use basic authentication as provided by Apache.

If you want to explore more advanced security mechanisms, refer to the Authorization and Authentication section of the Apache docs.

To accomplish this goal, create a username and assign a password to access a resource protected by Apache. In this example, we will create a username called adminganglia and assign a password of our choosing, which will be stored in /etc/httpd/auth.basic (feel free to choose another directory and / or file name – as long as Apache has read permissions on those resources, you will be fine):

# htpasswd -c /etc/httpd/auth.basic adminganglia

Enter the password for adminganglia twice before proceeding.
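
If you later need to confirm that a password matches the stored hash, the htpasswd shipped with Apache 2.4 (as on CentOS 7) can verify it (an optional check):

# htpasswd -v /etc/httpd/auth.basic adminganglia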

3. Modify /etc/httpd/conf.d/ganglia.conf as follows:

Alias /ganglia /usr/share/ganglia
<Location /ganglia>
    AuthType basic
    AuthName "Ganglia web UI"
    AuthBasicProvider file
    AuthUserFile "/etc/httpd/auth.basic"
    Require user adminganglia
</Location>
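
Before moving on, you can optionally ask Apache to validate the syntax of the new configuration:

# apachectl configtest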

4. Edit /etc/ganglia/gmetad.conf:

First, use the gridname directive followed by a descriptive name for the grid you’re setting up:

gridname "Home office"

Then, use data_source followed by a descriptive name for the cluster (group of servers), a polling interval in seconds and the IP address of the master and monitored nodes:

data_source "Labs" 60 192.168.0.29:8649 # Master node
data_source "Labs" 60 192.168.0.32 # Monitored node

5. Edit /etc/ganglia/gmond.conf.

a) Make sure the cluster block looks as follows:

cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

b) In the udp_send_channel block, comment out the mcast_join directive:

udp_send_channel   {
  #mcast_join = 239.2.11.71
  host = localhost
  port = 8649
  ttl = 1
}

c) Finally, comment out the mcast_join and bind directives in the udp_recv_channel block:

udp_recv_channel {
  #mcast_join = 239.2.11.71 ## comment out
  port = 8649
  #bind = 239.2.11.71 ## comment out
}

Save the changes and exit.

6. Open port 8649/udp and allow PHP scripts (run via Apache) to connect to the network using the necessary SELinux boolean:

# firewall-cmd --add-port=8649/udp
# firewall-cmd --add-port=8649/udp --permanent
# setsebool -P httpd_can_network_connect 1
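
You can confirm that the port was opened and the boolean was set with the following optional checks:

# firewall-cmd --list-ports
# getsebool httpd_can_network_connect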

7. Restart Apache, gmetad, and gmond. Also, make sure they are enabled to start on boot:

# systemctl restart httpd gmetad gmond
# systemctl enable httpd gmetad gmond

At this point, you should be able to open the Ganglia web interface at http://192.168.0.29/ganglia and log in with the credentials from Step 2.

Ganglia Web Interface
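
If the page loads but shows no metrics, a quick sanity check is to dump the XML that gmond and gmetad serve locally. This optional check assumes the default tcp_accept_channel port (8649) for gmond and the default xml_port (8651) for gmetad:

# nc 127.0.0.1 8649 | head -n 5    # raw metrics from gmond
# nc 127.0.0.1 8651 | head -n 5    # aggregated output from gmetad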

8. In the Ubuntu host, we will only install ganglia-monitor, the equivalent of ganglia-gmond in CentOS:

$ sudo aptitude update && sudo aptitude install ganglia-monitor

9. Edit the /etc/ganglia/gmond.conf file in the monitored box. This should be identical to the same file in the master node, except that the commented-out lines in the cluster, udp_send_channel, and udp_recv_channel blocks should be enabled:

cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

udp_send_channel   {
  mcast_join = 239.2.11.71
  host = localhost
  port = 8649
  ttl = 1
}

udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  bind = 239.2.11.71
}

Then, restart the service:

$ sudo service ganglia-monitor restart
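
To confirm that the monitored box is publishing its metrics, you can dump the local gmond XML report (an optional check, assuming the default tcp_accept_channel on port 8649 is enabled in gmond.conf):

$ nc localhost 8649 | head -n 5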

10. Refresh the web interface and you should be able to view the statistics and graphs for both hosts inside the Home office grid / Labs cluster (use the dropdown menu next to the Home office grid to choose a cluster, Labs in our case):

Ganglia Home Office Grid Report

Using the menu tabs (highlighted above) you can access lots of interesting information about each server individually and in groups. You can even compare the stats of all the servers in a cluster side by side using the Compare Hosts tab.

Simply choose a group of servers using a regular expression and you will be able to see a quick comparison of how they are performing:

Ganglia Host Server Information

One of the features I personally find most appealing is the mobile-friendly summary, which you can access using the Mobile tab. Choose the cluster you’re interested in and then the individual host:

Ganglia Mobile Friendly Summary View

Summary

In this article we have introduced Ganglia, a powerful and scalable monitoring solution for grids and clusters of servers. Feel free to install, explore, and play around with Ganglia as much as you like (by the way, you can even try out Ganglia in the demo provided on the project’s official website).

While you’re at it, you will also discover that several well-known companies, both inside and outside the IT world, use Ganglia. There are plenty of good reasons for that besides the ones we have shared in this article, with ease of use and graphs along with stats (it’s nice to put a face to the name, isn’t it?) probably at the top of the list.

 
Source

How to Install ‘atop’ to Monitor Logging Activity of Linux System Processes

Atop is a full-screen performance monitor that can report the activity of all processes, even the ones that have already completed. Atop also allows you to keep a daily log of system activity, which can be used for different purposes, including analysis, debugging, and pinpointing the cause of a system overload.

Atop Features

  1. Check the overall resource consumption by all processes
  2. Check how much of the available resources have been utilized
  3. Logging of resource utilization
  4. Check resource consumption by individual threads
  5. Monitor process activity per user or per program
  6. Monitor network activity per process

The latest version of Atop is 2.1 and includes the following features:

  1. New logging mechanism
  2. New key flags
  3. New Fields (counters)
  4. Bug fixes
  5. Configurable colors

Installing Atop Monitoring Tool on Linux

1. In this article, I will show you how to install and configure atop on Linux systems like RHEL/CentOS/Fedora and Debian/Ubuntu based derivatives, so that you can easily monitor your system processes.

On RHEL/CentOS/Fedora

First you will need to enable the EPEL repository on RHEL/CentOS systems in order to install the atop monitoring tool.

After you’ve enabled the EPEL repository, you can simply use the yum package manager to install the atop package as shown below.

# yum install atop
Install Atop Using Epel Repo

Alternatively, you may download the atop RPM package directly using the following wget command and then install it with rpm, as shown below.

------------------ For 32-bit Systems ------------------
# wget http://www.atoptool.nl/download/atop-2.1-1.i586.rpm
# rpm -ivh atop-2.1-1.i586.rpm

------------------ For 64-bit Systems ------------------
# wget http://www.atoptool.nl/download/atop-2.1-1.x86_64.rpm
# rpm -ivh atop-2.1-1.x86_64.rpm 
Install Atop Using RPM Package

On Debian/Ubuntu

Under Debian based systems, atop can be installed from the default repositories using apt-get command.

$ sudo apt-get install atop
Install Atop Under Debian Systems

2. After installing atop, to make sure atop starts at system boot, run the following commands:

------------------ Under RedHat based systems ------------------
# chkconfig --add atop
# chkconfig --level 235 atop on
Enable Atop at System Boot

$ sudo update-rc.d atop defaults             [Under Debian based systems]
Add Atop at System Boot

3. By default atop logs all activity every 600 seconds. As this might not be that useful, I will change atop’s configuration so that all activity is logged at an interval of 60 seconds. For that purpose run the following command:

# sed 's/600/60/' /etc/atop/atop.daily -i                [Under RedHat based systems]
$ sudo sed 's/600/60/' /etc/default/atop -i              [Under Debian based systems]
Change Atop Log Interval Time

Now that you have atop installed and configured, the next logical question is “How do I use it?”. Actually, there are a few ways to do that:

4. If you just run atop in a terminal you will get a top-like interface, which updates every 10 seconds.

# atop

You should see a screen similar to this one:

Atop System Process Monitoring

You can use different keys within atop to sort the information by different criteria. Here are some examples:

5. Scheduling information – “s” key – shows scheduling information for the main thread of each process. Also indicates how many processes are in state “running”:

# atop -s
Shows Scheduling Information of Process

6. Memory consumption – “m” key – shows memory-related information about all running processes. The VSIZE column indicates the total virtual memory and the RSIZE column shows the resident size used per process.

The VGROW and RGROW indicate the growth during the last interval. The MEM column indicates the resident memory usage by the process.

# atop -m
Shows Process Memory Information

7. Show disk utilization – “d” key – shows the disks activity on a system level (LVM and DSK columns). Disk activity is shown as amount of data that is being transferred by reads/writes (RDDSK/WRDSK columns).

# atop -d
Shows Disk Utilization

8. Show variable information – “v” key – this option provides more specific data about the running processes like uid, pid, gid, cpu usage, etc:

# atop -v
Shows UID PID Information

9. Show command of processes – “c” key:

# atop -c
Shows Command Process

10. Cumulative per program – “p” key – the information shown in this window is accumulated per program. The rightmost column shows which programs are active (during the intervals) and the leftmost column shows how many processes they have spawned.

# atop -p
Shows Active and Spawned Programs

11. Cumulative per user – “u” key – this screen shows which users were/are active during the last interval and indicates how many processes each user runs/ran.

# atop -u
Shows User Processes

12. Network usage – “n” key (requires the netatop kernel module) – shows the network activity per process.

To install and activate the netatop kernel module, you need to have the following dependency packages installed on your system from the distributor’s repository.

# yum install kernel-devel zlib-devel                [Under RedHat based systems]
$ sudo apt-get install zlib1g-dev                    [Under Debian based systems] 
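
Because netatop is built as an out-of-tree kernel module, the kernel headers must match the running kernel. On RedHat based systems you can make that explicit (an optional precaution):

# uname -r
# yum install "kernel-devel-$(uname -r)"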

Next download the netatop tarball and build the module and daemon.

# wget http://www.atoptool.nl/download/netatop-0.3.tar.gz
# tar -xvf netatop-0.3.tar.gz
# cd netatop-0.3
Download Netatop Package

Extract Netatop Files

Inside the ‘netatop-0.3‘ directory, run the following commands to build and install the module and daemon.

# make
# make install
Install Netatop Module

After the netatop module has been installed successfully, load the module and start the daemon.

# service netatop start
OR
$ sudo service netatop start

If you want to load the module automatically after boot, run one of the following commands depending on the distribution.

# chkconfig --add netatop                [Under RedHat based systems]
$ sudo update-rc.d netatop defaults      [Under Debian based systems] 

Now check network usage using “n” key.

# atop -n
Shows Network Usage

13. Atop keeps its history files in the following directory:

# /var/log/atop/atop_YYYYMMDD

Where YYYY is the year, MM is the month, and DD is the day of the month. For example:

atop_20150423

All files created by atop are binary. They are not log or text files, and only atop can read them. Note, however, that those files can still be rotated with logrotate.

Let’s say you wish to see today’s logs beginning at 05:05 server time. Simply run the following command.

# atop -r -b 05:05 -l 1
Check Atop Logs
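
You can also read an older history file directly and limit the output to a time window with the -e (end time) flag. For example, using the sample file name shown above:

# atop -r /var/log/atop/atop_20150423 -b 05:05 -e 06:00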

Atop has quite a lot of options, and you may wish to see the help menu. For that purpose, simply press the “?” key in the atop window to see the list of arguments that atop can use. Here is a list of the most frequently used options:

Atop Options and Usage

I hope you find this article useful and that it helps you narrow down or prevent issues with your Linux system. In case you have any questions or would like to receive clarification about the usage of atop, please post a comment in the comment section below.

 
Source

CoreFreq – A Powerful CPU Monitoring Tool for Linux Systems

CoreFreq is a CPU monitoring program designed for 64-bit Intel processors and supports architectures such as Atom, Core2, Nehalem, SandyBridge and above, as well as AMD Family 0F.

It is built around a kernel module that retrieves internal performance counters from each CPU core, a daemon that gathers the data, and a small console client that connects to the daemon and displays the collected data.

CoreFreq CPU Monitoring

It provides a framework for retrieving CPU data with a high degree of accuracy:

  1. Core frequencies & ratios; SpeedStep (EIST), Turbo Boost, Hyper-Threading (HTT) as well as Base Clock.
  2. Performance counters in conjunction with Time Stamp Counter (TSC), Unhalted Core Cycles (UCC), Unhalted Reference Cycles (URC).
  3. Number of instructions per cycle or second, IPS, IPC, or CPI.
  4. CPU C-States C0 C1 C3 C6 C7 – C1E – Auto/UnDemotion of C1 C3.
  5. DTS Temperature along with Tjunction Max, Thermal Monitoring TM1 TM2 state.
  6. Topology map including caches for the bootstrap and application CPUs.
  7. Processor features, brand plus architecture strings.

Note: This tool is more useful and appropriate for expert Linux users and experienced system administrators, however, novice users can gradually learn how to purposefully use it.

How Does CoreFreq Work

It functions by invoking a Linux Kernel module which then uses:

  1. asm code to keep the readings of the performance counters as close as possible.
  2. per-CPU slab data memory plus a high-resolution timer.
  3. compliance with suspend / resume and CPU Hot-Plug.
  4. shared memory to protect the kernel from the user-space part of the program.
  5. atomic synchronization of threads to do away with mutexes and deadlocks.

How to Install CoreFreq in Linux

To install CoreFreq, first you need to install the prerequisites (Development Tools) to compile and build the program from source.

$ sudo yum group install 'Development Tools'           [On CentOS/RHEL]
$ sudo dnf  group install 'Development Tools'          [On Fedora 22+ Versions]
$ sudo apt-get install dkms git libpthread-stubs0-dev  [On Debian/Ubuntu]

Next clone the CoreFreq source code from the Github repository, move into the download folder and compile and build the program:

$ git clone https://github.com/cyring/CoreFreq.git
$ cd CoreFreq
$ make 

Build CoreFreq Program

Note: Arch Linux users can install corefreq-git from the AUR.

Now run the following commands to load the Linux kernel module from the local directory, followed by the daemon:

$ sudo insmod corefreqk.ko
$ sudo ./corefreqd

Then, start the client as a regular user.

$ ./corefreq-cli

CoreFreq Linux CPU Monitoring

From the interface above, you can use shortcut keys:

  1. F2 to display a usage menu as seen at the top section of the screen.
  2. Right and Left arrows to move over the menu tabs.
  3. Up and Down arrows to select a menu item, then click [Enter].
  4. F4 will close the program.
  5. h will open a quick reference.

To view all usage options, type the command below:

$ ./corefreq-cli -h
CoreFreq Options
CoreFreq.  Copyright (C) 2015-2017 CYRIL INGENIERIE

usage:	corefreq-cli [-option <arguments>]
	-t	Show Top (default)
	-d	Show Dashboard
		  arguments: <left> <top> <marginWidth> <marginHeight>
	-c	Monitor Counters
	-i	Monitor Instructions
	-s	Print System Information
	-M	Print Memory Controller
	-m	Print Topology
	-u	Print CPUID
	-k	Print Kernel
	-h	Print out this message

Exit status:
0	if OK,
1	if problems,
>1	if serious trouble.

Report bugs to labs[at]cyring.fr

To print info about the kernel, run:

$ ./corefreq-cli -k

Print CPU identification details:

$ ./corefreq-cli -u

You can as well monitor CPU instructions in real-time:

$ ./corefreq-cli -i

Enable tracing of counters as below:

$ ./corefreq-cli -c
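
When you are done experimenting, you can stop the daemon and unload the kernel module again (a cleanup sketch, assuming the module was loaded with insmod as shown above):

$ sudo pkill corefreqd
$ sudo rmmod corefreqk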

For more information and usage, visit the CoreFreq Github repository: https://github.com/cyring/CoreFreq

In this article, we reviewed a powerful CPU monitoring tool, which may be more useful to Linux experts or experienced system administrators as compared to novice users.

Source

6 Useful Tools to Monitor MongoDB Performance


We recently showed how to install MongoDB in Ubuntu 18.04. Once you have successfully deployed your database, you need to monitor its performance while it is running. This is one of the most important tasks of database administration.

Luckily enough, MongoDB provides various methods for retrieving its performance and activity. In this article, we will look at monitoring utilities and database commands for reporting statistics about the state of a running MongoDB instance.

1. Mongostat

Mongostat is similar in functionality to the vmstat monitoring tool, which is available on all major Unix-like operating systems such as Linux, FreeBSD, Solaris, and macOS. Mongostat is used to get a quick overview of the status of your database; it provides a dynamic real-time view of a running mongod or mongos instance. It retrieves the counts of database operations by type, such as insert, query, update, delete and more.

You can run mongostat as shown. Note that if you have authentication enabled, put the user password in single quotes to avoid getting an error, especially if you have special characters in it.

$ mongostat -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin"

Monitor MongoDB Performance

For more mongostat usage options, type the following command.

$ mongostat --help 
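
By default, mongostat prints a new line every second until you interrupt it. If you only want a fixed number of samples at a custom interval, you can combine the --rowcount option with a sleep time in seconds, for example:

$ mongostat -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin" --rowcount 5 2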

2. Mongotop

Mongotop also provides a dynamic real-time view of a running MongoDB instance. It tracks the amount of time a MongoDB instance spends reading and writing data. It returns values every second, by default.

$ mongotop -u "root" -p '=@!#@%$admin1'  --authenticationDatabase "admin"

Monitor MongoDB Activity

For more mongotop usage options, type the following command.

$ mongotop --help 

3. serverStatus Command

First, you need to run the following command to log in to the mongo shell.

$ mongo -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin"

Then run the serverStatus command, which provides an overview of the database’s state, by collecting statistics about the instance.

>db.runCommand( { serverStatus: 1 } )
OR
>db.serverStatus()
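
The full serverStatus document is quite large. If you only need a specific section, such as connection counts, you can also query it non-interactively from your regular shell with --eval (a sketch reusing the same credentials as above):

$ mongo -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin" --quiet --eval 'printjson(db.serverStatus().connections)'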

4. dbStats Command

The dbStats command returns storage statistics for a particular database, such as the amount of storage used, the quantity of data contained in the database, and object, collection, and index counters.

>db.runCommand({ dbStats: 1 } )
OR
>db.stats()

5. collStats

The collStats command is used to collect statistics similar to those provided by dbStats, but at the collection level; its output includes a count of the objects in the collection, the size of the collection, the amount of disk space consumed by the collection, and information concerning its indexes.

>db.runCommand( { collStats : "aurthors", scale: 1024 } )

6. replSetGetStatus Command

The replSetGetStatus command outputs the status of the replica set from the perspective of the server that processed the command. This command must be run against the admin database in the following form.

>db.adminCommand( { replSetGetStatus : 1 } )

In addition to the above utilities and database commands, you can also use supported third-party monitoring tools, either directly or via their own plugins. These include mtop, munin, and nagios.

For more information, consult: Monitoring for MongoDB Documentation.

That’s it for now! In this article, we have covered some useful monitoring utilities and database commands for reporting statistics about the state of a running MongoDB instance. Use the feedback form below to ask any questions or share your thoughts with us.

 

Source

Get started with Joplin, a note-taking app

Learn how open source tools can help you be more productive in 2019. First up, Joplin.


Joplin

In the realm of productivity tools, note-taking apps are VERY handy. Yes, you can use the open source NixNote to access Evernote notes, but it’s still linked to the Evernote servers and still relies on a third party for security. And while you CAN export your Evernote notes from NixNote, the only format options are NixNote XML or PDF files.

Joplin graphical application

Joplin’s GUI.

Enter Joplin. Joplin is a NodeJS application that runs and stores notes locally, allows you to encrypt your notes and supports multiple sync methods. Joplin can run as a console or graphical application on Windows, Mac, and Linux. Joplin also has mobile apps for Android and iOS, meaning you can take your notes with you without a major hassle. Joplin even allows you to format notes with Markdown, HTML, or plain text.

Joplin on Android

Joplin’s Android app.

One really nice thing about Joplin is it supports two kinds of notes: plain notes and to-do notes. Plain notes are what you expect—documents containing text. To-do notes, on the other hand, have a checkbox in the notes list that allows you to mark them “done.” And since the to-do note is still a note, you can include lists, documentation, and additional to-do items in a to-do note.

When using the GUI, you can toggle editor views between plain text, WYSIWYG, and a split screen showing both the source text and the rendered view. You can also specify an external editor in the GUI, making it easy to update notes with Vim, Emacs, or any other editor capable of handling text documents.

Joplin console version

Joplin in the console.

The console interface is absolutely fantastic. While it lacks a WYSIWYG editor, it defaults to the text editor for your login. It also has a powerful command mode that allows you to do almost everything you can do in the GUI version. And it renders Markdown correctly in the viewer.

You can group notes in notebooks and tag notes for easy grouping across your notebooks. And it even has built-in search, so you can find things if you forget where you put them.

Overall, Joplin is a first-class note-taking app (and a great alternative to Evernote) that will help you be organized and more productive over the next year.

How to join a Linux computer to an Active Directory domain

Organizations with an AD infrastructure in place that wish to provision Linux computers can bind those devices to their existing domain.


I’m not as strong with Linux distributions as I am with Windows and macOS. Yet when I was recently presented with a question on how to bind Linux hosts to an existing Windows AD domain, I accepted the challenge and along with it, the opportunity to pick up some more Linux experience and help a friend out.

Most IT professionals I meet are adamant about performing their tasks with the least amount of hands-on, physical presence as possible. This is not to say that they do not wish to get their hands dirty per se, but rather speaks more to the fact that IT generally has a lot on its plate so working smarter—not harder—is always greater than tying up all your resources on just one or two trouble tickets.

SEE: System update policy template download (Tech Pro Research)

Just about any administrative task you wish to perform is possible from the powerful, robust command-line interface (CLI). This is one of the areas in which Linux absolutely shines. Regardless of whether the commands are entered manually, remotely via SSH, or automatically piped in using scripts—the ability to manage Linux hosts natively is second to none. Armed with this new-found knowledge, we head directly to the CLI to resolve this problem.

Before diving into the crux of how to perform this domain bind, please note that I included two distinct (though quite similar) processes to accomplish this task. The process used will depend on which family your distribution of choice is based on: Debian or Red Hat (RHEL).

Joining Debian-based distros to Active Directory

Launch Terminal and enter the following command:

sudo apt-get install realmd

After ‘realmd’ installs successfully, enter the next command to join the domain:

realm join domain.tld --user username

Enter the password of the account with permissions to join devices to the domain, and press the enter key. If the dependencies are not currently loaded onto the Linux host, the binding process will trigger them to be installed automatically.

Joining RHEL-based distros to Active Directory

Launch Terminal and enter the following command:

yum install sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients policycoreutils-python -y

Once the dependencies install successfully, enter the next command to join the domain:

realm join domain.tld --user=username

After authentication occurs for the first time, Linux will automatically create the /etc/sssd/sssd.conf and /etc/krb5.conf files, as well as the /etc/krb5.keytab, which control how the system will connect to and communicate with Kerberos (the authentication protocol used by Microsoft’s Active Directory).

Note: The dependencies are installed with their default configurations. This may or may not work with your environment’s specific set up. Additional configuration may be necessary before domain accounts can be authenticated.

Confirm domain (realm) joined successfully

At Terminal, enter the following command for a list of the domain, along with configuration information set:

realm list
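
You can also verify that domain accounts are now resolvable through SSSD by looking one up with id or getent. Replace 'username' with a real domain account; depending on your sssd.conf settings, the short name may work without the domain suffix:

id username@domain.tld
getent passwd username@domain.tld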

Alternatively, you can always check the properties of the computer object in the Active Directory Users and Computers snap-in to verify that it was both created and has the proper trust relationship established between host and AD.

Source

How to Install Skype on CentOS 7

Skype is one of the most popular communication applications in the world that allows you to make free online audio and video calls, and affordable international calling to mobiles and landlines worldwide.

Skype is not an open source application and it is not included in the CentOS repositories.

This tutorial explains how to install the latest version of Skype on CentOS 7.

Prerequisites

The user you are logged in as must have sudo privileges to be able to install packages.

Installing Skype on CentOS

Perform the following steps to install Skype on CentOS.

1. Download Skype

Start by opening your terminal either by using the Ctrl+Alt+T keyboard shortcut or by clicking on the terminal icon.

Download the latest Skype .rpm package using the following wget command:

wget https://go.skype.com/skypeforlinux-64.rpm

2. Install Skype

Once the download is complete, install Skype by running the following command as a user with sudo privileges:

sudo yum localinstall ./skypeforlinux-64.rpm

That’s it. Skype has been installed on your CentOS desktop.
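
If you want to confirm which version was installed, you can query the RPM database (this assumes the package is registered under the name skypeforlinux, as the official RPM uses):

rpm -qi skypeforlinux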

3. Start Skype

Now that Skype has been installed, you can start it either from the command line by typing skypeforlinux or by clicking on the Skype icon (Applications -> Internet -> Skype).

When you start Skype for the first time, a window like the following will appear:

From here you can sign in to Skype with your Microsoft Account and start chatting and talking with your friends and family.

Updating Skype

During the installation process, the official Skype repository will be added to your system. Use the cat command to verify the file contents:

cat /etc/yum.repos.d/skype-stable.repo

[skype-stable]
name=skype (stable)
baseurl=https://repo.skype.com/rpm/stable/
enabled=1
gpgcheck=1
gpgkey=https://repo.skype.com/data/SKYPE-GPG-KEY

This ensures that your Skype installation will be updated automatically when a new version is released, through your desktop’s standard Software Update tool.

Conclusion

In this tutorial, you’ve learned how to install Skype on your CentOS 7 desktop.

Feel free to leave a comment below.

Source

Sharing Docker Containers across DevOps Environments


Docker provides a powerful tool for creating lightweight images and containerized processes, but did you know it can make your development environment part of the DevOps pipeline too? Whether you’re managing tens of thousands of servers in the cloud or are a software engineer looking to incorporate Docker containers into the software development life cycle, this article has a little something for everyone with a passion for Linux and Docker.

In this article, I describe how Docker containers flow through the DevOps pipeline. I also cover some advanced DevOps concepts (borrowed from object-oriented programming) on how to use dependency injection and encapsulation to improve the DevOps process. And finally, I show how containerization can be useful for the development and testing process itself, rather than just as a place to serve up an application after it’s written.

Introduction

Containers are hot in DevOps shops, and their benefits from an operations and service delivery point of view have been covered well elsewhere. If you want to build a Docker container or deploy a Docker host, container or swarm, a lot of information is available. However, very few articles talk about how to develop inside the Docker containers that will be reused later in the DevOps pipeline, so that’s what I focus on here.

""

Figure 1. Stages a Docker Container Moves Through in a Typical DevOps Pipeline

Container-Based Development Workflows

Two common workflows exist for developing software for use inside Docker containers:

  1. Injecting development tools into an existing Docker container: this is the best option for sharing a consistent development environment with the same toolchain among multiple developers, and it can be used in conjunction with web-based development environments, such as Red Hat’s codenvy.com or dockerized IDEs like Eclipse Che.
  2. Bind-mounting a host directory onto the Docker container and using your existing development tools on the host: this is the simplest option, and it offers flexibility for developers to work with their own set of locally installed development tools.

Both workflows have advantages, but local mounting is inherently simpler. For that reason, I focus on the mounting solution as “the simplest thing that could possibly work” here.

How Docker Containers Move between Environments

A core tenet of DevOps is that the source code and runtimes that will be used in production are the same as those used in development. In other words, the most effective pipeline is one where the identical Docker image can be reused for each stage of the pipeline.

""

Figure 2. Idealized Docker-Based DevOps Pipeline

The notion here is that each environment uses the same Docker image and code base, regardless of where it’s running. Unlike systems such as Puppet, Chef or Ansible that converge systems to a defined state, an idealized Docker pipeline makes duplicate copies (containers) of a fixed image in each environment. Ideally, the only artifact that really moves between environmental stages in a Docker-centric pipeline is the ID of a Docker image; all other artifacts should be shared between environments to ensure consistency.

Handling Differences between Environments

In the real world, environmental stages can vary. As a case in point, your QA and staging environments may contain different DNS names, different firewall rules and almost certainly different data fixtures. Combat this per-environment drift by standardizing services across your different environments. For example, ensuring that DNS resolves “db1.example.com” and “db2.example.com” to the right IP addresses in each environment is much more Docker-friendly than relying on configuration file changes or injectable templates that point your application to differing IP addresses. However, when necessary, you can set environment variables for each container rather than making stateful changes to the fixed image. These variables then can be managed in a variety of ways, including the following:

  1. Environment variables set at container runtime from the command line.
  2. Environment variables set at container runtime from a file (see the example after this list).
  3. Autodiscovery using etcd, Consul, Vault or similar.
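
As a concrete illustration of the second option above, Docker can read KEY=value pairs from a plain file at container start time via the --env-file flag. This is only a sketch; the file name env.qa is an arbitrary example:


# Example only: put the stage-specific values in a file ...
cat > env.qa <<'EOF'
STAGE=qa
DB=db2
EOF

# ... and let Docker inject them when the container starts.
docker run --rm --env-file env.qa ruby:latest \
    /usr/local/bin/ruby -e \
        'printf("STAGE: %s, DB: %s\n",
                ENV["STAGE"],
                ENV["DB"])'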

Consider a Ruby microservice that runs inside a Docker container. The service accesses a database somewhere. In order to run the same Ruby image in each different environment, but with environment-specific data passed in as variables, your deployment orchestration tool might use a shell script like this one, “Example Microservice Deployment”:


# Reuse the same image to create containers in each
# environment.
docker pull ruby:latest

# Bash function that exports key environment
# variables to the container, and then runs Ruby
# inside the container to display the relevant
# values.
microservice () {
    docker run -e STAGE -e DB --rm ruby \
        /usr/local/bin/ruby -e \
            'printf("STAGE: %s, DB: %s\n",
                    ENV["STAGE"],
                    ENV["DB"])'
}

Table 1 shows an example of how environment-specific information for Development, Quality Assurance and Production can be passed to otherwise-identical containers using exported environment variables.

Table 1. Same Image with Injected Environment Variables

Development:        export STAGE=dev DB=db1; microservice
Quality Assurance:  export STAGE=qa DB=db2; microservice
Production:         export STAGE=prod DB=db3; microservice

To see this in action, open a terminal with a Bash prompt and run the commands from the “Example Microservice Deployment” script above to pull the Ruby image onto your Docker host and create a reusable shell function. Next, run each of the commands from the table above in turn to set up the proper environment variables and execute the function. You should see the output shown in Table 2 for each simulated environment.

Table 2. Containers in Each Environment Producing Appropriate Results

Development:        STAGE: dev, DB: db1
Quality Assurance:  STAGE: qa, DB: db2
Production:         STAGE: prod, DB: db3

Despite being a rather simplistic example, what’s being accomplished is really quite extraordinary! This is DevOps tooling at its best: you’re re-using the same image and deployment script to ensure maximum consistency, but each deployed instance (a “container” in Docker parlance) is still being tuned to operate properly within its pipeline stage.

With this approach, you limit configuration drift and variance by ensuring that the exact same image is re-used for each stage of the pipeline. Furthermore, each container varies only by the environment-specific data or artifacts injected into them, reducing the burden of maintaining multiple versions or per-environment architectures.

But What about External Systems?

The previous simulation didn’t really connect to any services outside the Docker container. How well would this work if you needed to connect your containers to environment-specific things outside the container itself?

Next, I simulate a Docker container moving from development through other stages of the DevOps pipeline, using a different database with its own data in each environment. This requires a little prep work first.

First, create a workspace for the example files. You can do this by cloning the examples from GitHub or by making a directory. As an example:


# Clone the examples from GitHub.
git clone \
    https://github.com/CodeGnome/SDCAPS-Examples
cd SDCAPS-Examples/db

# Create a working directory yourself.
mkdir -p SDCAPS-Examples/db
cd SDCAPS-Examples/db

The following SQL files should be in the db directory if you cloned the example repository. Otherwise, go ahead and create them now.

db1.sql:


-- Development Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','developers','dev_password'),
       ('dev','developers','dev_password');
COMMIT;

db2.sql:


-- Quality Assurance (QA) Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','qa admins','admin_password'),
       ('test','qa testers','user_password');
COMMIT;

db3.sql:


-- Production Database
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE AppData (
    login TEXT UNIQUE NOT NULL,
    name TEXT,
    password TEXT
);
INSERT INTO AppData
VALUES ('root','production',
        '$1$Ax6DIG/K$TDPdujixy5DDscpTWD5HU0'),
       ('deploy','devops deploy tools',
        '$1$hgTsycNO$FmJInHWROtkX6q7eWiJ1p/');
COMMIT;

Next, you need a small utility to create (or re-create) the various SQLite databases. This is really just a convenience script, so if you prefer to initialize or load the SQL by hand or with another tool, go right ahead:


#!/usr/bin/env bash

# You assume the database files will be stored in an
# immediate subdirectory named "db" but you can
# override this using an environment variable.
: "${DATABASE_DIR:=db}"
cd "$DATABASE_DIR"

# Scan for the -f flag. If the flag is found, and if
# there are matching filenames, verbosely remove the
# existing database files.
pattern='(^|[[:space:]])-f([[:space:]]|$)'
if [[ "$*" =~ $pattern ]] &&
    compgen -o filenames -G 'db?' >&-
then
    echo "Removing existing database files ..."
    rm -v db? 2> /dev/null
    echo
fi

# Process each SQL dump in the current directory.
echo "Creating database files from SQL ..."
for sql_dump in *.sql; do
    db_filename="${sql_dump%%.sql}"
    if [[ ! -f "$db_filename" ]]; then
        sqlite3 "$db_filename" < "$sql_dump" &&
        echo "$db_filename created"
    else
        echo "$db_filename already exists"
    fi
done

When you run ./create_databases.sh, you should see:


Creating database files from SQL ...
db1 created
db2 created
db3 created

If the utility script reports that the database files already exist, or if you want to reset the database files to their initial state, you can call the script again with the -f flag to re-create them from the associated .sql files.

Creating a Linux Password

You probably noticed that some of the SQL files contained clear-text passwords while others have valid Linux password hashes. For the purposes of this article, that’s largely a contrivance to ensure that you have different data in each database and to make it easy to tell which database you’re looking at from the data itself.

For security though, it’s usually best to ensure that you have a properly hashed password in any source files you may store. There are a number of ways to generate such passwords, but the OpenSSL library makes it easy to generate salted and hashed passwords from the command line.

Tip: for optimum security, don’t include your desired password or passphrase as an argument to OpenSSL on the command line, as it could then be seen in the process list. Instead, allow OpenSSL to prompt you with Password: and be sure to use a strong passphrase.

To generate a salted MD5 password with OpenSSL:


$ openssl passwd \
    -1 \
    -salt "$(openssl rand -base64 6)"
Password:

Then you can paste the salted hash into /etc/shadow, an SQL file, utility script or wherever else you may need it.

Simulating Deployment inside the Development Stage

Now that you have some external resources to experiment with, you’re ready to simulate a deployment. Let’s start by running a container in your development environment. I follow some DevOps best practices here and use fixed image IDs and defined gem versions.

DevOps Best Practices for Docker Image IDs

To ensure that you’re re-using the same image across pipeline stages, always use an image ID rather than a named tag or symbolic reference when pulling images. For example, while the “latest” tag might point to different versions of a Docker image over time, the SHA-256 identifier of an image version remains constant and also provides automatic validation as a checksum for downloaded images.
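
For example, a deployment script could pull the pinned Ruby image by digest rather than by a floating tag (the digest below is the same one used in the rest of this article’s examples):


$ docker pull ruby@sha256:eed291437be80359321bf66a842d4d542a789e687b38c31bd1659065b2906778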

Furthermore, you always should use a fixed ID for assets you’re injecting into your containers. Note how you specify a specific version of the SQLite3 Ruby gem to inject into the container at each stage. This ensures that each pipeline stage has the same version, regardless of whether the most current version of the gem from a RubyGems repository changes between one container deployment and the next.

Getting a Docker Image ID

When you pull a Docker image, such as ruby:latest, Docker will report the digest of the image on standard output:


$ docker pull ruby:latest
latest: Pulling from library/ruby
Digest:
sha256:eed291437be80359321bf66a842d4d542a789e
↪687b38c31bd1659065b2906778
Status: Image is up to date for ruby:latest

If you want to find the ID for an image you’ve already pulled, you can use the inspect sub-command to extract the digest from Docker’s JSON output—for example:


$ docker inspect \
      --format='{{index .RepoDigests 0}}' \
      ruby:latest
      ruby@sha256:eed291437be80359321bf66a842d4d542a789
↪e687b38c31bd1659065b2906778

First, you export the appropriate environment variables for development. These values will override the defaults set by your deployment script and affect the behavior of your sample application:


# Export values we want accessible inside the Docker
# container.
export STAGE="dev" DB="db1"

Next, implement a script called container_deploy.sh that will simulate deployment across multiple environments. This is an example of the work that your deployment pipeline or orchestration engine should do when instantiating containers for each stage:


#!/usr/bin/env bash

set -e

####################################################
# Default shell and environment variables.
####################################################
# Quick hack to build the 64-character image ID
# (which is really a SHA-256 hash) within a
# magazine's line-length limitations.
hash_segments=(
    "eed291437be80359321bf66a842d4d54"
    "2a789e687b38c31bd1659065b2906778"
)
printf -v id "%s" "${hash_segments[@]}"

# Default Ruby image ID to use if not overridden
# from the script's environment.
: "${IMAGE_ID:=$id}"

# Fixed version of the SQLite3 gem.
: "${SQLITE3_VERSION:=1.3.13}"

# Default pipeline stage (e.g. dev, qa, prod).
: "${STAGE:=dev}"

# Default database to use (e.g. db1, db2, db3).
: "${DB:=db1}"

# Export values that should be visible inside the
# container.
export STAGE DB

####################################################
# Setup and run Docker container.
####################################################
# Remove the Ruby container when script exits,
# regardless of exit status unless DEBUG is set.
cleanup () {
    local id msg1 msg2 msg3
    id="$container_id"
    if [[ ! -v DEBUG ]]; then
        docker rm --force "$id" >&-
    else
        msg1="DEBUG was set."
        msg2="Debug the container with:"
        msg3="    docker exec -it $id bash"
        printf "\n%s\n%s\n%s\n" \
          "$msg1" \
          "$msg2" \
          "$msg3" \
          > /dev/stderr
  fi
}
trap "cleanup" EXIT

# Set up a container, including environment
# variables and volumes mounted from the local host.
docker run \
    -d \
    -e STAGE \
    -e DB \
    -v "${DATABASE_DIR:-${PWD}/db}":/srv/db \
    --init \
    "ruby@sha256:$IMAGE_ID" \
    tail -f /dev/null >&-

# Capture the container ID of the last container
# started.
container_id=$(docker ps -ql)

# Inject a fixed version of the database gem into
# the running container.
echo "Injecting gem into container..."
docker exec "$container_id" \
    gem install sqlite3 -v "$SQLITE3_VERSION" &&
    echo

# Define a Ruby script to run inside our container.
#
# The script will output the environment variables
# we've set, and then display contents of the
# database defined in the DB environment variable.
ruby_script='
    require "sqlite3"

    puts %Q(DevOps pipeline stage: #{ENV["STAGE"]})
    puts %Q(Database for this stage: #{ENV["DB"]})
    puts
    puts "Data stored in this database:"

    Dir.chdir "/srv/db"
    db    = SQLite3::Database.open ENV["DB"]
    query = "SELECT rowid, * FROM AppData"
    db.execute(query) do |row|
        print " " * 4
        puts row.join(", ")
    end
'

# Execute the Ruby script inside the running
# container.
docker exec "$container_id" ruby -e "$ruby_script"

There are a few things to note about this script. First and foremost, your real-world needs may be either simpler or more complex than this script provides for. Nevertheless, it provides a reasonable baseline on which you can build.

Second, you may have noticed the use of the tail command when creating the Docker container. This is a common trick used for building containers that don’t have a long-running application to keep the container in a running state. Because you are re-entering the container using multiple exec commands, and because your example Ruby application runs once and exits, tail sidesteps a lot of ugly hacks needed to restart the container continually or keep it running while debugging.

Go ahead and run the script now. You should see the same output as listed below:


$ ./container_deploy.sh
Building native extensions.  This could take a while...
Successfully installed sqlite3-1.3.13
1 gem installed

DevOps pipeline stage: dev
Database for this stage: db1

Data stored in this database:
    1, root, developers, dev_password
    2, dev, developers, dev_password

Simulating Deployment across Environments

Now you’re ready to move on to something more ambitious. In the preceding example, you deployed a container to the development environment. The Ruby application running inside the container used the development database. The power of this approach is that the exact same process can be re-used for each pipeline stage, and the only thing you need to change is the database to which the application points.

In actual usage, your DevOps configuration management or orchestration engine would handle setting up the correct environment variables for each stage of the pipeline. To simulate deployment to multiple environments, populate an associative array in Bash with the values each stage will need and then run the script in a for loop:


declare -A env_db
env_db=([dev]=db1 [qa]=db2 [prod]=db3)

for env in dev qa prod; do
    export STAGE="$env" DB="${env_db[$env]}"
    printf "%s\n" "Deploying to ${env^^} ..."
    ./container_deploy.sh
done

This stage-specific approach has a number of benefits from a DevOps point of view. That’s because:

  1. The image ID deployed is identical across all pipeline stages.
  2. A more complex application can “do the right thing” based on the value of STAGE and DB (or other values) injected into the container at runtime.
  3. The container is connected to the host filesystem the same way at each stage, so you can re-use source code or versioned artifacts pulled from Git, Nexus or other repositories without making changes to the image or container.
  4. The switcheroo magic for pointing to the right external resources is handled by your deployment script (in this case, container_deploy.sh) rather than by making changes to your image, application or infrastructure.

This solution is great if your goal is to trap most of the complexity in your deployment tools or pipeline orchestration engine. However, a small refinement would allow you to push the remaining complexity onto the pipeline infrastructure instead.

Imagine for a moment that you have a more complex application than the one you’ve been working with here. Maybe your QA or staging environments have large data sets that you don’t want to re-create on local hosts, or maybe you need to point at a network resource that may move around at runtime. You can handle this by using a well-known name that is resolved by an external resource instead.

You can show this at the filesystem level by using a symlink. The benefit of this approach is that the application and container no longer need to know anything about which database is present, because the database is always named “db”. Consider the following:


declare -A env_db
env_db=([dev]=db1 [qa]=db2 [prod]=db3)
for env in dev qa prod; do
    printf "%s\n" "Deploying to ${env^^} ..."
    (cd db; ln -fs "${env_db[$env]}" db)
    export STAGE="$env" DB="db"
    ./container_deploy.sh
done

Likewise, you can configure your Domain Name Service (DNS) or a Virtual IP (VIP) on your network to ensure that the right database host or cluster is used for each stage. As an example, you might ensure that db.example.com resolves to a different IP address at each pipeline stage.

Sadly, the complexity of managing multiple environments never truly goes away—it just hopefully gets abstracted to the right level for your organization. Think of your objective as similar to some object-oriented programming (OOP) best practices: you’re looking to create pipelines that minimize things that change and to allow applications and tools to rely on a stable interface. When changes are unavoidable, the goal is to keep the scope of what might change as small as possible and to hide the ugly details from your tools to the greatest extent that you can.

If you have thousands or tens of thousands of servers, it’s often better to change a couple DNS entries without downtime rather than rebuild or redeploy 10,000 application containers. Of course, there are always counter-examples, so consider the trade-offs and make the best decisions you can to encapsulate any unavoidable complexity.

Developing inside Your Container

I’ve spent a lot of time explaining how to ensure that your development containers look like the containers in use in other stages of the pipeline. But have I really described how to develop inside these containers? It turns out I’ve actually covered the essentials, but you need to shift your perspective a little to put it all together.

The same processes used to deploy containers in the previous sections also allow you to work inside a container. In particular, the previous examples have touched on how to bind-mount code and artifacts from the host’s filesystem inside a container using the -v or --volume flags. That’s how the container_deploy.sh script mounts database files on /srv/db inside the container. The same mechanism can be used to mount source code, and the Docker exec command then can be used to start a shell, editor or other development process inside the container.

The develop.sh utility script is designed to showcase this ability. When you run it, the script creates a Docker container and drops you into a Ruby shell inside the container. Go ahead and run ./develop.sh now:


#!/usr/bin/env bash

id="eed291437be80359321bf66a842d4d54"
id+="2a789e687b38c31bd1659065b2906778"
: "${IMAGE_ID:=$id}"
: "${SQLITE3_VERSION:=1.3.13}"
: "${STAGE:=dev}"
: "${DB:=db1}"

export DB STAGE

echo "Launching '$STAGE' container..."
docker run \
    -d \
    -e DB \
    -e STAGE \
    -v "${SOURCE_CODE:-$PWD}":/usr/local/src \
    -v "${DATABASE_DIR:-${PWD}/db}":/srv/db \
    --init \
    "ruby@sha256:$IMAGE_ID" \
    tail -f /dev/null >&-

container_id=$(docker ps -ql)

show_cmd () {
    enter="docker exec -it $container_id bash"
    clean="docker rm --force $container_id"
    echo -ne \
        "\nRe-enter container with:\n\t${enter}"
    echo -ne \
        "\nClean up container with:\n\t${clean}\n"
}
trap 'show_cmd' EXIT

docker exec "$container_id" \
    gem install sqlite3 -v "$SQLITE3_VERSION" >&-

docker exec \
    -e DB \
    -e STAGE \
    -it "$container_id" \
    irb -I /usr/local/src -r sqlite3

Once inside the container’s Ruby read-evaluate-print loop (REPL), you can develop your source code as you normally would from outside the container. Any source code changes will be seen immediately from inside the container at the defined mountpoint of /usr/local/src. You then can test your code using the same runtime that will be available later in your pipeline.

Let’s try a few basic things just to get a feel for how this works. Ensure that you have the sample Ruby files installed in the same directory as develop.sh. You don’t actually have to know (or care) about Ruby programming for this exercise to have value. The point is to show how your containerized applications can interact with your host’s development environment.

example_query.rb:


# Ruby module to query the table name via SQL.
module ExampleQuery
  def self.table_name
    path = "/srv/db/#{ENV['DB']}"
    db   = SQLite3::Database.new path
    sql =<<-'SQL'
      SELECT name FROM sqlite_master
       WHERE type='table'
       LIMIT 1;
    SQL
    db.get_first_value sql
  end
end

source_list.rb:


# Ruby module to list files in the source directory
# that's mounted inside your container.
module SourceList
  def self.array
    Dir['/usr/local/src/*']
  end

  def self.print
    puts self.array
  end
end

At the IRB prompt (irb(main):001:0>), try the following code to make sure everything is working as expected:


# returns "AppData"
load 'example_query.rb'; ExampleQuery.table_name

# prints file list to standard output; returns nil
load 'source_list.rb'; SourceList.print

In both cases, Ruby source code is being read from /usr/local/src, which is bound to the current working directory of the develop.sh script. While working in development, you could edit those files in any fashion you chose and then load them again into IRB. It’s practically magic!

It works the other way too. From inside the container, you can use any tool or feature of the container to interact with your source directory on the host system. For example, you can download the familiar Docker whale logo and make it available to your development environment from the container’s Ruby REPL:


Dir.chdir '/usr/local/src'
cmd =
  "curl -sLO "             <<
  "https://www.docker.com" <<
  "/sites/default/files"   <<
  "/vertical_large.png"
system cmd

Both /usr/local/src and the matching host directory now contain the vertical_large.png graphic file. You’ve added a file to your source tree from inside the Docker container!
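
If you want to double-check this from the host side, the following quick commands should show the same file on both sides of the bind mount. They assume you run them from the directory where you launched develop.sh, and that the develop.sh container is still the most recently created container (otherwise substitute its real ID for the docker ps -ql lookup):


ls -l vertical_large.png
docker exec "$(docker ps -ql)" ls -l /usr/local/src/vertical_large.png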

""

Figure 3. Docker Logo on the Host Filesystem and inside the Container

When you press Ctrl-D to exit the REPL, the develop.sh script informs you how to reconnect to the still-running container, as well as how to delete the container when you’re done with it. Output will look similar to the following:


Re-enter container with:
        docker exec -it 9a2c94ebdee8 bash
Clean up container with:
        docker rm --force 9a2c94ebdee8

As a practical matter, remember that the develop.sh script is setting Ruby’s LOAD_PATH and requiring the sqlite3 gem for you when launching the first instance of IRB. If you exit that process, launching another instance of IRB with docker exec or from a Bash shell inside the container may not do what you expect. Be sure to run irb -I /usr/local/src -r sqlite3 to re-create that first smooth experience!
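
For example, using the placeholder container ID from the sample output above (your real ID will differ), re-creating the original setup by hand would look something like this:


docker exec -it 9a2c94ebdee8 bash
# ...then, at the container's Bash prompt:
irb -I /usr/local/src -r sqlite3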

Wrapping Up

I covered how Docker containers typically flow through the DevOps pipeline, from development all the way to production. I looked at some common practices for managing the differences between pipeline stages and how to use stage-specific data and artifacts in a reproducible and automated fashion. Along the way, you also may have learned a little more about Docker commands, Bash scripting and the Ruby REPL.

Install MongoDB Community Edition 4.0 on Linux


MongoDB is an open source, schema-free, high-performance, document-oriented NoSQL database system ("NoSQL" meaning it does not store data in tables and rows like a relational database), much like Apache CouchDB. It stores data in JSON-like documents with dynamic schemas for flexibility and performance.

MongoDB Packages

The following MongoDB packages are supported; they come from MongoDB's own repository and consist of:

  1. mongodb-org – A metapackage that will automatically install the four component packages below (see the quick check after this list).
  2. mongodb-org-server – Contains the mongod daemon and related configuration and init scripts.
  3. mongodb-org-mongos – Contains the mongos daemon.
  4. mongodb-org-shell – Contains the mongo shell.
  5. mongodb-org-tools – Contains the MongoDB tools: mongo, mongodump, mongorestore, mongoexport, mongoimport, mongostat, mongotop, bsondump, mongofiles, mongooplog and mongoperf.
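
Once the repository from Step 1 below is in place, you can optionally verify which component packages the mongodb-org metapackage will pull in. These are just sanity checks, and the exact output varies by distribution:


# yum deplist mongodb-org                [On RPM based Systems]
$ apt-cache depends mongodb-org          [On DEB based Systems]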

In this article, we will walk you through the process of installing MongoDB 4.0 Community Edition on RHEL, CentOS, Fedora, Ubuntu and Debian servers with the help of the official MongoDB repository, using .rpm and .deb packages on 64-bit systems only.

Step 1: Adding MongoDB Repository

First, we need to add the official MongoDB repository in order to install MongoDB Community Edition on 64-bit platforms.

On Red Hat, CentOS and Fedora

Create a file named /etc/yum.repos.d/mongodb-org-4.0.repo so that MongoDB can be installed directly using the yum command.

# vi /etc/yum.repos.d/mongodb-org-4.0.repo

Now add the following repository configuration to the file.

[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

On Ubuntu Systems

The MongoDB repository only provides packages for the 18.04 LTS (Bionic), 16.04 LTS (Xenial) and 14.04 LTS (Trusty Tahr) long-term supported 64-bit Ubuntu releases.

To install MongoDB Community Edition on Ubuntu, you need to first import the public key used by the package management system.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4

Next, create a MongoDB repository file and update the repository as shown.

On Ubuntu 18.04
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Ubuntu 16.04
$ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Ubuntu 14.04
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update

On Debian Systems

The MongoDB repository only provides packages for 64-bit Debian 9 (Stretch) and Debian 8 (Jessie). To install MongoDB on Debian, run the following series of commands:

On Debian 9
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
$ echo "deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
On Debian 8
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
$ echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/4.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update

Step 2: Installing MongoDB Community Edition Packages

Once the repository is added, run the following command to install MongoDB 4.0.

# yum install -y mongodb-org               [On RPM based Systems]
$ sudo apt-get install -y mongodb-org      [On DEB based Systems]

To install a particular MongoDB release version, include each component package individually and add the version number to the package name, as shown in the following example:

-------------- On RPM based Systems --------------
# yum install -y mongodb-org-4.0.6 mongodb-org-server-4.0.6 mongodb-org-shell-4.0.6 mongodb-org-mongos-4.0.6 mongodb-org-tools-4.0.6

-------------- On DEB based Systems --------------
$ sudo apt-get install -y mongodb-org=4.0.6 mongodb-org-server=4.0.6 mongodb-org-shell=4.0.6 mongodb-org-mongos=4.0.6 mongodb-org-tools=4.0.6
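
If you do pin a specific release this way, you may also want to stop the package manager from upgrading it on the next update. This is a general, optional suggestion rather than something required by MongoDB: on RPM based systems one common approach is an exclude directive in /etc/yum.conf, and on DEB based systems you can place the packages on hold.

-------------- On RPM based Systems (add this line to /etc/yum.conf) --------------
exclude=mongodb-org*

-------------- On DEB based Systems --------------
$ sudo apt-mark hold mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools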

Step 3: Configure MongoDB Community Edition

Open the file /etc/mongod.conf and verify the basic settings shown below. The configuration file uses YAML syntax, so indentation matters; if any of these settings are commented out, un-comment them. (On Debian and Ubuntu the packaged default data directory is /var/lib/mongodb rather than /var/lib/mongo.)

# vi /etc/mongod.conf

systemLog:
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
storage:
  dbPath: /var/lib/mongo

Note: This step is only applicable to Red Hat-based distributions; Debian and Ubuntu users can ignore it.

Now open port 27017 on the firewall.

-------------- On FirewallD based Systems --------------
# firewall-cmd --zone=public --add-port=27017/tcp --permanent
# firewall-cmd --reload

-------------- On IPtables based Systems --------------
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 27017 -j ACCEPT
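
Note that rules added with iptables in this way do not survive a reboot. Assuming the iptables-services package is installed (a Red Hat/CentOS-specific assumption), you can persist the rule with:

# service iptables save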

Step 4: Run MongoDB Community Edition

Now it’s time to start the mongod process by issuing the following command:

# service mongod start
OR               
$ sudo service mongod start

You can make sure that the mongod process has started successfully by checking the /var/log/mongodb/mongod.log log file for a line similar to the following:

2019-03-05T01:33:47.121-0500 I NETWORK  [initandlisten] waiting for connections on port 27017
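
A quick way to look for that line, assuming the default log path shown above, is:

# grep "waiting for connections" /var/log/mongodb/mongod.log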

You can also start, stop or restart the mongod process by issuing the following commands:

# service mongod start
# service mongod stop
# service mongod restart

Now enable the mongod process at system boot.

# systemctl enable mongod.service     [On SystemD based Systems]
# chkconfig mongod on                 [On SysVinit based Systems]
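
On systemd-based systems you can then confirm that the service is enabled and running:

# systemctl is-enabled mongod
# systemctl status mongod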

Step 5: Begin using MongoDB

Connect to the MongoDB shell by using the following command.

# mongo

Command Output:

MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("70ffe350-a41f-42b9-871a-17ccde28ba24") }
MongoDB server version: 4.0.6
Welcome to the MongoDB shell.

This command will connect you to your MongoDB database. From the shell prompt, run the following basic commands.

> show dbs
> show collections
> show users
> use <db name>
> exit
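
As a small illustration of storing and reading back a document, you can also drive the shell non-interactively from the command line. The database name testdb and collection name items below are arbitrary examples, not anything created by the installation:

# mongo --quiet --eval 'db.getSiblingDB("testdb").items.insertOne({name: "first", value: 1})'
# mongo --quiet --eval 'db.getSiblingDB("testdb").items.find().forEach(printjson)'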

Step 6: Uninstall MongoDB Community Edition

To completely uninstall MongoDB, you must remove the MongoDB applications themselves, the configuration files, and any directories containing data and logs.

The following instructions will walk you through the process of removing MongoDB from your system.

On RHEL, CentOS and Fedora

# service mongod stop
# yum erase $(rpm -qa | grep mongodb-org)
# rm -r /var/log/mongodb
# rm -r /var/lib/mongo

On Debian and Ubuntu

$ sudo service mongod stop
$ sudo apt-get purge mongodb-org*
$ sudo rm -r /var/log/mongodb
$ sudo rm -r /var/lib/mongodb

For more information, visit the official documentation at http://docs.mongodb.org/manual/contents/.

Source

Bash Case Statement | Linuxize

Bash case statements are generally used to simplify complex conditionals when you have multiple different choices. Using the case statement instead of nested if statements will help you make your bash scripts more readable and easier to maintain.

The Bash case statement is similar in concept to the JavaScript or C switch statement. The main difference is that, unlike the C switch statement, the Bash case statement stops searching for a pattern match once it has found one and executed the statements associated with that pattern.

In this tutorial, we will cover the basics of the Bash case statements and show you how to use them in your shell scripts.

The Bash case statement takes the following form:

case EXPRESSION in

  PATTERN_1)
    STATEMENTS
    ;;

  PATTERN_2)
    STATEMENTS
    ;;

  PATTERN_N)
    STATEMENTS
    ;;

  *)
    STATEMENTS
    ;;
esac


  • Each case statement starts with the case keyword followed by the case expression and the in keyword. The statement ends with the esac keyword.
  • You can use multiple patterns separated by the | operator. The ) operator terminates a pattern list.
  • A pattern can have special characters, such as the *, ? and [...] wildcards used in filename globbing (see the short sketch after this list).
  • A pattern and its associated commands are known as a clause.
  • Each clause must be terminated with ;;.
  • The commands corresponding to the first pattern that matches the expression are executed.
  • It is a common practice to use the wildcard asterisk symbol (*) as a final pattern to define the default case. This pattern will always match.
  • If no pattern is matched, the return status is zero. Otherwise, the return status is the exit status of the executed commands.
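
To see special characters, multiple patterns and the default clause together in a compact form, here is a short, hypothetical sketch that classifies a file name passed as the first argument (the categories are made up purely for illustration):

#!/bin/bash

file=$1

case $file in

  *.jpg | *.png | *.gif)
    echo "$file is an image"
    ;;

  *.txt | *.md)
    echo "$file is a text document"
    ;;

  backup_[0-9]*)
    echo "$file looks like a numbered backup"
    ;;

  *)
    echo "$file is of an unknown type"
    ;;
esac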

Here is a more complete example using the case statement in a bash script that will print the official language of a given country:

languages.sh
#!/bin/bash

echo -n "Enter the name of a country: "
read COUNTRY

echo -n "The official language of $COUNTRY is "

case $COUNTRY in

  Lithuania)
    echo -n "Lithuanian"
    ;;

  Romania | Moldova)
    echo -n "Romanian"
    ;;

  Italy | "San Marino" | Switzerland | "Vatican City")
    echo -n "Italian"
    ;;

  *)
    echo -n "unknown"
    ;;
esac


Save the custom script as a file and run it from the command line.

bash languages.sh


The script will ask you to enter a country. For example, if you type “Lithuania” it will match the first pattern and the echo command in that clause will be executed.

The script will print the following output:

Enter the name of a country: Lithuania
The official language of Lithuania is Lithuanian


If you enter a country that doesn't match any pattern except the default wildcard asterisk, say Argentina, the script will execute the echo command inside the default clause.

Enter the name of a country: Argentina
The official language of Argentina is unknown


By now you should have a good understanding of how to write bash case statements. They are often used to handle parameters passed to a shell script from the command line. For example, init scripts commonly use case statements to start, stop or restart services, as sketched below.
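
A minimal sketch of that init-script pattern, assuming a hypothetical myservice unit managed through systemctl, might look like this:

#!/bin/bash

# Dispatch on the first command-line argument, init-script style.
case $1 in

  start)
    systemctl start myservice
    ;;

  stop)
    systemctl stop myservice
    ;;

  restart)
    systemctl restart myservice
    ;;

  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac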

Source
