Primer on Yum Package Management Tool

The Yum package management tool is crucial to the management of Linux systems, whether you are a Linux systems administrator or a power user. Different package management tools are available across different Linux distros, and the YUM package management tool is the one available on the RedHat and CentOS Linux distros. In the background, YUM (Yellowdog Updater, Modified) depends on RPM (the Red Hat Package Manager), and was created to enable the management of packages as parts of a larger system of software repositories instead of as individual packages.

The configuration file for Yum is stored in the /etc/ directory, in a file named yum.conf. This file can be configured and tweaked to suit certain needs of the system. Below is a sample of the contents of the yum.conf file:

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5

This configuration file could be different from the one you get on your machine, but the configuration syntax follows the same rules. The repositories of packages that can be installed with Yum are usually defined in the /etc/yum.repos.d/ directory, with each *.repo file in the directory describing a repository of packages that can be installed.

Below is roughly what the [base] section of a CentOS 7 base repository definition (/etc/yum.repos.d/CentOS-Base.repo) looks like; the exact contents vary between releases:
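[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7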

YUM works in a pattern similar to most Linux commands, using the structure below:
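yum [options] command [package-name ...]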

With the structure above, you can carry out all necessary tasks with YUM. You can get help on how to use YUM with the --help option:
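yum --help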

You should get a list of the commands and options that can be run with YUM.

For the rest of this article, we will be completing a couple of tasks with Yum: we will query, install, update and remove packages.

Querying packages with YUM

Let’s say you just got a job as a Linux system administrator at a company, and your first task is to install a couple of packages that will help make your work easier, such as nmap, top, etc.

To proceed with this, you need to know about the packages and how well they will fit the computer’s needs.

Task 1: Getting information on a package

To get information on a package, such as the package’s version, size, description, etc., you use the info command.

As an example, the command below would give information on the httpd package:
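yum info httpd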

Below is a snippet of the result from the command:

Name : httpd
Arch : x86_64
Version : 2.4.6
Release : 80.el7.centos.1

Task 2: Searching for existing packages

You will not always know the exact name of a package. Sometimes, all you will know is a keyword associated with the package. In these scenarios, you can easily search for packages with that keyword in the name or description using the search command.

The command below would give a list of packages that have the keyword “nginx” in them.
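yum search nginx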

Below is a snippet of the result from the command:

collectd-nginx.x86_64 : Nginx plugin for collectd
munin-nginx.noarch : NGINX support for Munin resource monitoring
nextcloud-nginx.noarch : Nginx integration for NextCloud
nginx-all-modules.noarch : A meta package that installs all available Nginx module

Task 3: Querying a list of packages

There are lots of packages that are installed or are available for installation on the computer. In some cases, you would like to see a list of those packages to know what is available for installation.

There are three options for listing packages, as stated below:

yum list installed: lists the packages that are installed on the machine.

yum list available: lists all packages available to be installed from enabled repositories.

yum list all: lists all of the packages both installed and available.

Task 4: Getting package dependencies

Packages are rarely installed as standalone tools; they have dependencies which are essential to their functionality. With Yum, you can get a list of a package’s dependencies with the deplist command.

As an example, the command below fetches a list of httpd’s dependencies:
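yum deplist httpd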

Below is a snippet of the result:

package: httpd.x86_64 2.4.6-80.el7.centos.1
dependency: /bin/sh
provider: bash.x86_64 4.2.46-30.el7
dependency: /etc/mime.types
provider: mailcap.noarch 2.1.41-2.el7
dependency: /usr/sbin/groupadd
provider: shadow-utils.x86_64 2:4.1.5.1-24.el7

Task 6: Getting information on package groups

Throughout this article, we have been looking at individual packages. At this point, package groups will be introduced.

Package groups are collections of packages serving a common purpose. If you want to set up your machine’s system tools, for example, you do not have to install the packages separately; you can install them all at once as a package group.

You can get information on a package group using the groupinfo command and putting the group name in quotes.

yum groupinfo "group-name"

The command below would fetch information on the “Emacs” package group.
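yum groupinfo "Emacs"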

Here is the information:

Group: Emacs
Group-Id: emacs
Description: The GNU Emacs extensible, customizable, text editor.
Mandatory Packages:
=emacs
Optional Packages:
ctags-etags
emacs-auctex
emacs-gnuplot
emacs-nox
emacs-php-mode

Task 7: Listing the available package groups

In the task above, we fetched information on the “Emacs” package group. With the grouplist command, you can get a list of the package groups available for installation:
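yum grouplist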

The command above would list the available package groups. However, some package groups would not be displayed due to their hidden status. To get a list of all package groups, including the hidden ones, you add the hidden argument as seen below:
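yum grouplist hidden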

Installing packages with YUM

We have looked at how packages can be queried with Yum. As a Linux system administrator, you will do more than query packages; you will install them.

Task 8: Installing packages

Once you have the name of the package you would like to install, you can install it with the install command.

Example:
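yum install nmap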

Task 9: Installing packages from .rpm files

While you will install most packages from the repository, in some cases you will be provided with *.rpm files to install. This can be done using the localinstall command, which can install *.rpm files whether they are available locally on the machine or in some external repository accessed by a link.

yum localinstall file-name.rpm

Task 10: Reinstalling packages

While working with configuration files, errors can occur, leaving packages and their config files messed up. The install command could do the job of correcting the mess; however, if there is a newer version of the package in the repository, that is the version that would be installed, which isn’t what we want.

With the reinstall command, we can reinstall the current version of a package regardless of the latest version available in the repository.

yum reinstall package-name

Task 11: Installing package groups

Earlier, we looked into package groups and how to query them. Now we will see how to install them. Package groups can be installed using the groupinstall command and the name of the package group in quotes.

yum groupinstall "group-name"

Updating packages with YUM

Keeping your packages updated is key. Newer versions of packages often contain security patches, new features, the removal of deprecated features and so on, so it is important to keep your computer as up to date as possible.

Task 12: Getting information on package updates

As a Linux system administrator, updates are crucial to maintaining the system. Therefore, there is a need to constantly check for package updates. You can check for updates with the updateinfo command.

There are lots of possible command combinations that can be used with updateinfo; however, we will use only the list installed option.

yum updateinfo list installed

A snippet of the result can be seen below:

FEDORA-EPEL-2017-6667e7ab29 bugfix epel-release-7-11.noarch

FEDORA-EPEL-2016-0cc27c9cac bugfix lz4-1.7.3-1.el7.x86_64

FEDORA-EPEL-2015-0977 None/Sec. novnc-0.5.1-2.el7.noarch

Task 13: Updating all packages

Updating packages is as easy as using the update command. Using the update command alone would update all packages, but adding the package name would update only the indicated package.

yum update : to update all packages in the operating system

yum update httpd : to update the httpd package alone.

While the update command will update a package to the latest version, it would leave behind obsolete packages which the new version doesn’t need anymore.

To remove the obsolete packages, we use the upgrade command.

yum upgrade : to update all packages in the operating system and delete obsolete packages.

The upgrade command is dangerous, though, as it would remove obsolete packages even if you use them for other purposes.

Task 14: Downgrading packages

While it is important to keep up with the latest package updates, updates can be buggy. In a case where an update is buggy, the package can be downgraded to the previous version, which was stable. Downgrades are done with the downgrade command.

yum downgrade package-name

Removing packages with YUM

As a Linux system administrator, resources have to be managed. So while packages are installed for certain purposes, they should be removed when they are not needed anymore.

Task 15: Removing packages

The remove command is used to remove packages. Simply add the name of the package to be removed, and it would be uninstalled.
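yum remove httpd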

While the command above would remove the package, it would leave its dependencies behind. To remove the dependencies too, the autoremove command is used. This removes the dependencies, configuration files, etc.

yum autoremove package-name

Task 16: Removing package groups

Earlier, we talked about installing package groups. It would be tedious to remove the packages individually when they are not needed anymore. Therefore, we remove the package group with the groupremove command.

yum groupremove "group-name"

Conclusion

The commands discussed in this article are just a small demonstration of the power of Yum. There are lots of other tasks that can be done with YUM, which you can check out on the official RHEL documentation pages. However, the commands this article has discussed should get anybody started with regular Linux system administration tasks.

Source

Spectre Patches Whack Intel Performance Hard With Linux 4.20 Kernel

Integrating fixes for Spectre and Meltdown has been a long, slow process throughout 2018. We’ve seen new vulnerabilities popping up on a fairly regular cadence, with Intel and other vendors rolling out solutions as quickly as they can be developed. To date, most of these fixes haven’t had a significant impact on performance for ordinary users, but there are signs that new patches in the Linux 4.20 kernel can drag Intel performance down. The impact varies from test to test, but the gaps in some benchmarks are above 30 percent.

Phoronix has the details and test results. The Core i9-7980XE takes 1.28x longer in the Rodinia 2.4 heterogeneous compute benchmark suite. Performance in the DaCapo benchmark (V9.12-MR1) is a massive 1.5x worse. Not every test was impacted nearly this much, as there were other tests that showed regressions in the 5-8 percent range.

Image by Phoronix

Michael Larabel spent some time trying to tease apart the problem and where it had come from, initially suspecting that it might be a P-state bug or an unintended scheduler change. Neither was evident. The culprit is STIBP, or Single Thread Indirect Branch Predictors. According to Intel, there are three ways of mitigating branch target injection attacks (Spectre v2): Indirect Branch Restricted Speculation (IBRS), Single Thread Indirect Branch Predictors (STIBP), and Indirect Branch Predictor Barrier (IBPB). IBRS restricts speculation of indirect branches and carries the most severe performance penalty. STIBP is described as “Prevents indirect branch predictions from being controlled by the sibling Hyperthread.”

IBRS flushes the branch prediction cache between privilege levels and disables branch prediction on the sibling CPU thread. The STIBP fix, in contrast, only disables branch prediction on the HT core. The performance impact is variable, but in some cases it seems as though it would be less of a performance hit to simply disable Hyper-Threading altogether.
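As an aside, if you want to see which Spectre v2 mitigations are active on your own Linux machine, recent kernels report this through sysfs (the exact wording of the output varies by kernel version and hardware):

$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2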

I would caution against reading these results as if they apply to Windows users. There are differences between the patches that have been deployed on Linux systems and their Windows counterparts. Microsoft recently announced, for example, that it will adopt the retpoline fix used in Linux for Spectre Variant 2 flaws, improving overall performance in certain workloads. There seem to be some significant performance impacts in the 4.20 kernel, but what I can’t find is a detailed breakdown of whether these fixes are already in Windows or will be added. In short, it’s not clear if these changes to Linux performance have any implications at all for non-Linux software.

Larabel has also written a follow-up article comparing the performance of all Spectre / Meltdown mitigation patches on Intel hardware through the present day. The impact ranges from 2-8 percent in some tests to 25 – 35 percent in others. There’s conclusive evidence that the Linux 4.20 kernel impacts performance in applications where previous patches did not, and several tests where the combined performance impact is enough to put AMD ahead of Intel in tests Intel previously won. How much this will matter to server vendors is unclear; analysts have generally predicted that these security issues would help Intel’s sales figures as companies replace systems. The idea that these ongoing problems could push companies to adopt AMD hardware instead is rarely discussed and AMD has not suggested this is a major source of new customer business.

Source

Schedule One-Time Commands with the UNIX at Tool

Cron is nice and all, but don’t forget about its cousin at.

When I first started using Linux, it was like being tossed into the deep end
of the UNIX pool. You were expected to use the command line heavily along
with all the standard utilities and services that came with your
distribution. A lot has changed since then, and nowadays, you can use a
standard Linux desktop without ever having to open a terminal or use old
UNIX services. Even as a sysadmin, these days, you often are a few layers of
abstraction above some of these core services.

I say all of this to point out that for us old-timers, it’s easy to take for
granted that everyone around us innately knows about all the command-line
tools we use. Yet, even though I’ve been using Linux for 20 years, I
still learn about new (to me) command-line tools all the time. In this “Back
to Basics” article series, I plan to cover some of the command-line tools
that those new to Linux may never have used before. For those of you who are
more advanced, I’ll spread out this series, so you can expect future
articles to be more technical. In this article, I describe how to use
the at utility to schedule jobs to run at a later date.

at vs. Cron

at is one of those commands that isn’t discussed very much. When
people talk about scheduling commands, typically cron gets the most
coverage. Cron allows you to schedule commands to be run on a periodic
basis. With cron, you can run a command as frequently as every minute or as
seldom as once a day, week, month or even year. You also can define more
sophisticated rules, so commands run, for example, every five minutes, every
weekday, every other hour and many other combinations. System administrators sometimes
will use cron to schedule a local script to collect metrics every minute or
to schedule backups.

On the other hand, although the at command also allows you to schedule
commands, it serves a completely different purpose from cron. While cron
lets you schedule commands to run periodically, at lets you schedule
commands that run only once at a particular time in the future. This
means that at fills a different and usually more immediate need
from cron.

Using at

At one point, the at command came standard on most Linux
distributions, but
these days, even on servers, you may find yourself having to
install the at package explicitly. Once installed, the easiest
way to use at is to type
it on the command line followed by the time you want the command to run:

$ at 18:00

The at command also can accept a number of different time formats. For
instance, it understands AM and PM as well as words like “tomorrow”, so you
could replace the above command with the identical:

$ at 6pm

And, if you want to run the same command at that time tomorrow instead:

$ at 6pm tomorrow

Once you press enter, you’ll drop into an interactive shell:

$ at 6pm tomorrow
warning: commands will be executed using /bin/sh
at>

From the interactive shell, you can enter the command you want to run
at that time. If you want to run multiple commands, press enter after each
command and type the command on the new at> prompt. Once you’re done
entering commands, press Ctrl-D on an empty at> prompt to exit the
interactive shell.

For instance, let’s say I’ve noticed that a particular server has had
problems the past two days at 5:10am for around five minutes, and so far, I’m
not seeing anything in the logs. Although I could just wake up early and log
in to the server, instead I could write a short script that collects data
from ps, netstat, tcpdump and other
command-line tools for a few minutes, so
when I wake up, I can go over the data it collected. Since this is a one-off,
I don’t want to schedule something with cron and risk forgetting about it
and having it run every day, so this is how I would set it up with
at:

$ at 5:09am tomorrow
warning: commands will be executed using /bin/sh
at> /usr/local/bin/my_monitoring_script

Then I would press Ctrl-D, and the shell would exit with this output:

at> <EOT>
job 1 at Wed Sep 26 05:09:00 2018

Managing at Jobs

Once you have scheduled at jobs, it’s useful to be able to pull up a list of
all the at jobs in the queue, so you know what’s running and
when. The atq
command lists the current at queue:

$ atq
1 Wed Sep 26 05:09:00 2018 a kyle

The first column lists the number at assigned to each job and then lists the
time the job will be run and the user it will run as. Let’s say that in
the above example I realize I’ve made a mistake, because my script won’t be able
to run as a regular user. In that case, I would want to use the
atrm command
to remove job number 1:

$ atrm 1

If I were to run atq again, I would see that the job no longer exists.
Then I could sudo up to root and use the at command to schedule the job
again.

at One-Liners

Although at supports an interactive mode, you also can pipe commands to it all
on one line instead. So, for instance, I could schedule the above job with:

$ echo /usr/local/bin/my_monitoring_script | at 5:09am tomorrow

Conclusion

If you didn’t know that at existed, you might find yourself coming up with
all sorts of complicated and convoluted ways to schedule a one-off job. Even
worse, you might need to set an alarm clock so you can wake up extra early
and log in to a problem server. Of course, if you don’t have an alarm clock,
you could use at:

$ echo "aplay /home/kyle/alarm.wav" | at 7am tomorrow

Source

Open Source 2018: It Was the Best of Times, It Was the Worst of Times | Linux.com

Recently, IBM announced that it would be acquiring Red Hat for $34 billion, a more-than-60-percent premium over Red Hat’s market cap, and a nearly 12x multiple on revenues. In many ways, this was a clear sign that 2018 was the year commercial open source arrived, if there was ever a question about it before.

Indeed, the Red Hat transaction is just the latest in a long line of multi-billion dollar outcomes this year. To date, more than $50 billion has been exchanged in open source IPOs and mergers and acquisitions (M&A), and all of the M&A deals are considered “mega deals” — those valued over $5 billion.

  • IBM acquired Red Hat for $34 billion
  • Hortonworks’ $5.2 billion merger with Cloudera
  • Elasticsearch IPO – $4+ billion
  • Pivotal IPO – $3.9 billion
  • Mulesoft acquired by Salesforce – $6.5 billion

If you’re a current open source software (OSS) shareholder, it may feel like the best of times. However, if you’re an OSS user or an emerging open source project or company, you might be feeling more ambivalent.

On the positive side, the fact that there have been such good financial outcomes should come as encouragement to the many still-private and outstanding open-source businesses (e.g., Confluent, Docker, HashiCorp, InfluxDB). And, we can certainly hope that this round of exits will encourage more investors to bet on OSS, enabling OSS to continue to be a prime driver of innovation.

However, not all of the news is rosy.

First, since many of these exits were in the form of M&A, we’ve actually lost some prime examples of independent OSS companies. For many years, there was a concern that Red Hat was the only example of a public open source company. Earlier this year, it seemed likely that the total would grow to 7 (Red Hat, Hortonworks, Cloudera, Elasticsearch, Pivotal, Mulesoft, and MongoDB). Assuming the announced M&As close as expected, the number of public open source companies is back down to four, and the combined market cap of public open source companies is much less than it was at the start of the year.

We Need to Go Deeper

I think it’s critical that we view these open source outcomes in the context of another unavoidable story — the growth in cloud computing.

Many of the open source companies involved share an overlooked common denominator: they’ve made most of their money through on-premise businesses. This probably comes as a surprise, as we regularly hear about cloud-related milestones, like the one that states that more than 80% of server workloads are in the cloud, that open source drives ⅔ or more of cloud revenues, and that the cloud computing market is expected to reach $300 billion by 2021.

By contrast, the total revenues of all of the open source companies listed above was less than $7B. And, almost all of the open source companies listed above have taken well over $200 million in investment each to build out direct sales and support to appropriately sell to the large, on premises enterprise market.


Open Source Driving Revenue, But for Whom?

The most common way that open source is used in the cloud is as a loss-leader to sell infrastructure. The largest cloud companies all offer free or near-free open source services that drive consumption of compute, networking, and storage.

To be clear, this is perfectly legal, and many of the cloud companies have contributed generously in both code and time to open source. However, the fact that it is difficult for OSS companies to monetize their own products with a hosted offering means that they are shut off from one of the most important and sustainable paths to scaling. Perhaps most importantly, OSS companies that are independent are largely closed off from the fastest growing segment of the computing market. Since there are only a handful of companies worldwide with the scale and capital to operate traditional public clouds (indeed, Amazon, Google, Microsoft, and Alibaba are among the largest companies on the planet), and those companies already control a disproportionate share of traffic, data, capital and talent, how can we ensure that investment, monetization, and innovation continue to flow in open source? And, how can open source companies sustainably grow.

For some OSS companies, the answer is M&A. For others, the cloud monetization/competition question has led them to adopt controversial and more restrictive licensing policies, such as Redis Lab’s adoption of the Commons Clause and MongoDB’s Server Side License.

But there may be a different answer to cloud monetization. Namely, create a different kind of cloud, one based on decentralized infrastructure.

Rather than spending billions to build out data centers, decentralized infrastructure approaches (like Storj, SONM, and others), provide incentives for people around the world to contribute spare computing, storage or network capacity. For example, by fairly and transparently allowing storage node operators to share in the revenue generated (i.e., by compensating supply), Storj was able to rapidly grow to a network of 150,000 nodes in 180 countries with over 150 PB of capacity–equivalent to several large data centers. Similarly, rather than spending hundreds of millions on traditional sales and marketing, we believe there is a way to fairly and transparently compensate those who bring demand to the network, so we have programmatically designed our network so that open source companies whose projects send users our way can get fairly and transparently compensated proportional to the storage and network usage they generate. We are actively working to encourage other decentralized networks to do the same, and believe this is the future of open cloud computing

This isn’t charity. Decentralized networks have strong economic incentives to compensate OSS as the primary driver of cloud demand. But, more importantly, we think that this can help drive a virtuous circle of investment, growth, monetization, and innovation. Done correctly, this will ensure that the best of times lie ahead!

Ben Golub is the former CEO of Docker and interim CEO at Storj Labs.

Watch the Open Source Summit keynote presentation from Ben Golub and Shawn Wilkinson to learn more about open source and the decentralized web.

Source

Cheat Sheet of Useful Commands Every Kali Linux User Needs To Know

This cheat sheet includes a list of basic and useful Linux commands that every Kali Linux user needs to know.

If you want to learn how to hack with Kali Linux, the most important thing you should do first is to master the command line interface.

Here’s why:

Tasks that take minutes or even hours to do on a desktop environment (GUI) can be done in a matter of seconds from the command line.

For example:

To download an entire HTML website, you only need to type:

wget -r domain.com

Now if you were to do the same on a GUI, you’d have to save each page one by one.

This is only one of many examples as to how powerful the command line is. There are many other tasks on Linux that can only be done from the command line.

In short:

Knowing your way around a command line will make you a more efficient and effective programmer. You’ll be able to get shit done faster by automating repetitive tasks. ​

​Plus, you’ll look like a complete bad ass in the process.

Use this cheat sheet as a reference in case you forget how to do certain tasks from the command-line. And trust me, it happens.

If you’re new to Unix/Linux operating systems, this cheat sheet also includes the fundamental linux commands such as jumping from one directory to another, as well as more technical stuff like managing processes.

NOTES
Everything inside “<>” should be replaced with a name of a file, directory or command.

Bash = A popular command-line used in Unix/Linux operating systems.

dir = directory/folder
file = file name & type (eg. notes.txt)
cmd = command (eg. mkdir, ls, curl, etc)
location = path/destination (eg. /home/Desktop)

pwd: Display path of current directory you’re in

​ls: List all files and folders in the current directory
ls -la: List detailed list of files and folders, including hidden ones

Change to a specific directory

cd: Change to home directory
cd /user/Desktop: Change to a specific directory called Desktop
cd .. : Move back a directory

Create a directory/folder

mkdir <dir>: Create a new directory
mkdir /home/Desktop/dir: Create a directory in a specific location

Create and edit files

touch <file>: Create an empty file
nano <file>: Edit an existing file or create it if it doesn’t exist.
Alternatives to nano text editor: vim, emacs

Copy, move and rename files and directories

cp <file1> <file2>: Create a copy of a file
cp -r <dir1> <dir2>: Create a copy of a directory and everything in it
cp <file> /home/Desktop/file2: Create a copy of a file in a different directory and name it file2.

mv <file> /home/Desktop: Move a file to a specific directory (overwrites any existing file with the same name)
mv <dir> /home/Desktop: Move a directory to another location
mv <dir1> <dir2>: Rename a file OR directory (dir1 -> dir2)

Delete files

rm <file>: Delete a file
rm -f <file>: Force delete a file
Careful now..

rm -r <dir>: Delete a directory and its contents
rm -rf <dir>: Force delete a directory and its contents
Careful when using this command as it will delete everything inside the directory

Output and analyze files

cat <file>: Display/output the contents of a file
less <file>: Display the contents of a file with scroll (paginate) ability (press q to quit)

head <file>: Display the first ten lines in a file
head -20 <file>: Display the first 20 lines in a file
tail <file>: Display the last ten lines in a file
tail -20 <file>: Display the last 20 lines in a file

diff <file1> <file2>: Check the difference between two files (file1 and file2)

cal: Display monthly calendar

date: Check date and time
uptime: Check system uptime and currently logged in users

uname -a: Display system information.
dmesg: Display kernel ring buffer

poweroff: Shutdown system
reboot: Reboot system

View disk and memory usage

df -h: Display disk space usage
fdisk -l: List disk partition tables
free: Display memory usage

cat /proc/meminfo: Display memory information
cat /proc/cpuinfo: Display cpu information

View user information

whoami: Output your username
w: Check who’s online

history: View a list of your previously executed commands

View last logged in users and information

last: Display last login info of users
last <user>: Display last login info of a specific user

finger <user>: Display user information

Installing & Upgrading Packages

Search for packages

apt-cache pkgnames: List all available packages
apt search <name>: Search for a package and its description
apt show <name>: Check detailed description of a package

Install packages

apt-get install <name>: Install a package
apt-get install <name1> <name2>: Install multiple packages

Update, upgrade & cleanup

apt-get update: Update list of available packages
apt-get upgrade: Install the newest version of available packages
apt-get dist-upgrade: Upgrade packages, adding or removing dependencies as needed
apt-get autoremove: Remove installed packages that are no longer needed
apt-get clean: Free up disk space by removing archived packages

Delete packages

apt-get remove <name>: Uninstall a package
apt-get remove --purge <name>: Uninstall a package and remove its configuration files

Processes & Job Management

top: Display running processes & system usage in real-time.

ps: Display currently running processes
ps -u <user>: Display currently running processes of a user

kill <PID>: Kill a process by its PID #
killall <process>: Kill all processes with the specified name

Start, stop, resume jobs

jobs: Display the status of current jobs
jobs -l: Display detailed info about each job
jobs -r: Display only running jobs

bg: View stopped background jobs or resume job in the background
fg: Resume recent job in the foreground
fg <job>: Bring specific job to the foreground.

ping <host>: Ping a host
whois <domain/IP>: Get whois information about a domain or IP.
dig <domain/IP>: Get DNS information
nslookup <domain>: Get nameserver information

ifconfig: Configure/display network interfaces
iwconfig: Configure/display wireless network interfaces

netstat -r: Display kernel IP routing tables
netstat -antp: Check for established and listening ports/connections​

arp -a: Display ARP cache tables for all interfaces​

Secure File Transfer (SCP)

Transfer files FROM the local system TO a remote host (Local > Remote)
scp /path/to/file user@remote-host:/path/to/dest

Transfer files FROM a remote host TO the local system (Remote > Local)
scp user@remote-host:/path/to/file /path/to/dest

Transfer directories and everything within it
scp -r /path/to/dir user@remote-host:/path/to/dest

Transfer all files that match a specific filetype
scp /path/to/*.txt user@remote-host:/path/to/dest

Transfer your local SSH public key to a remote host
cat ~/.ssh/id_rsa.pub | ssh user@remote-host 'cat >> .ssh/authorized_keys'

Am I forgetting something? Let me know in the comments below. I’ll continue to update this when I get a chance.

Source

Kubernetes Tutorial for Beginners | Kubernetes Beginner’s Guide


Have you been trying to learn Kubernetes for a while now but still miss some concepts? Learning Kubernetes can be tough, especially for users new to containers and their orchestration. This ebook is one of the best books for getting started with Kubernetes. It has all the pieces you need to become a Kubernetes master.

For introduction purposes, let’s define what Kubernetes is. Kubernetes is an open source tool initially designed by Google to aid in the automation and management of containers and the applications running on them.

If you’ve been playing with container engine tools like Docker, you must have experienced how difficult it is to manage more than one docker container across a number of hosts. This is where Kubernetes comes in. It makes it easy to deploy more than one container across a fleet of nodes and ensure they are highly available and redundant.

Free Ebook Kubernetes Essentials

What’s in “Kubernetes Essentials” eBook?

Everything in this “Kubernetes Essentials” ebook is perfectly arranged, starting from Kubernetes basics and moving on to advanced topics for experienced system administrators and developers. Below is an overview of the chapters available in this book.

Chapter 1: Introduction to Kubernetes

In this chapter, you’re introduced to the world of containers. You get to differentiate between virtualization and containerization, the difference between Docker and a VM, Docker vs. Kubernetes, why you need Kubernetes, Kubernetes use cases all over the world, etc.

Chapter 2: Key definitions and components

In chapter two of this ebook, you get to learn all the pieces that make up Kubernetes. You’re introduced to the concepts of Pods, Clusters, Labels, Services and Replication, and all the components of Kubernetes are covered in detail, with a clear definition of their functionality. This is where you get to understand Kubernetes well and how all its components fit together.

Chapter 3: Kubernetes Concepts

In this chapter, you get to learn the Kubernetes networking and storage subsystem layers in detail: how Pods in Kubernetes manage multiple containers (lifecycle, pod creation, replication) and multi-node networking like VXLAN. How rescheduling and rolling updates take place in Kubernetes is also covered in this section.

Chapter 4: Deploying Kubernetes Manually

Chapter 4 of this book concentrates on the manual deployment of Kubernetes on CentOS, Ubuntu, and other operating systems. The environment can be virtual, e.g. VirtualBox, AWS cloud, Azure, or built with the help of Vagrant for test environments. You’ll build Kubernetes clusters from scratch, starting from preparation of the base OS, the basics of managing a cluster with Vagrant and working with the kubeadm tool, to troubleshooting deployment issues, working with etcd, Kubernetes add-ons, the Kubernetes dashboard, Flannel networking, CoreDNS, etc.

Chapter 5: Orchestrating Containers with Kubernetes

Everything before this chapter introduced you to the basics of Kubernetes and its deployment; now it’s time to do the dirty work. Here you start to deploy real applications in containers orchestrated through Kubernetes. By the end of this chapter, you should be confident in deploying applications on Kubernetes and exposing them to the public via Services. Troubleshooting of Docker containers under the Kubernetes umbrella is covered in detail.

Chapter 6: Deploying Kubernetes with Ansible

You don’t want to deploy Kubernetes manually? Don’t worry, the remedy is here. With Ansible, you can automate the deployment of Kubernetes by having everything in an executable playbook. You’ll spend some time writing YAML files, which will save you a lot of hours later. With this, it becomes easy to scale out your Kubernetes infrastructure and tear it down when done.

Chapter 7: Provisioning Storage in Kubernetes

Storage is one of the crucial parts of Kubernetes. If poorly designed and deployed, it can cost you money to bring things back into service in case of a failure. This chapter will teach you the best storage guidelines to follow for Kubernetes. You’re introduced to the various storage plugins available and advised on which one to pick. The main goal of this chapter is to help you deploy persistent storage that’s easy to scale, and to show how to use this storage inside containers. NFS and iSCSI are the core storage protocols covered.

Chapter 8: Troubleshooting Kubernetes and Systemd Services

Troubleshooting is key in all systems management tasks. You’ll learn to inspect and debug issues in Kubernetes. The chapter covers troubleshooting of pods, cluster controllers, worker nodes, Docker containers, storage, networking and all other Kubernetes components. If you have been in the Linux world for some time, you must have witnessed the stress of managing services with upstart. Then came systemd, with its own challenges and benefits. In this chapter, you’ll learn all the bells and whistles of systemd on Kubernetes, and how to fix issues when they arise by utilizing systemd as a tool for troubleshooting.

Chapter 9: Kubernetes Maintenance

This chapter covers Kubernetes monitoring with InfluxDB as a data store, Grafana as a visualization tool and the Prometheus monitoring system/time series database. Using the Kubernetes Dashboard to visualize container infrastructure is also covered here, along with how to do logging for containers. Finally, regular checks and cleaning are essential.

Wrapping Up

Learning Kubernetes is inevitable, especially for system engineers, administrators, and DevOps roles. Kubernetes is a recent technology, but it has revolutionized how containerized applications are deployed in the cloud. Being an open source technology backed by a huge community and the support of big companies like Red Hat, SUSE and others, its future is definitely great. This ebook will help you get started early and grow your career in this interesting and growing containers space. The content of this book is concrete and covers everything you need to become a Kubernetes guru!

Download Ebook


Source

Red Hat Enterprise Linux 8 Hits Beta With Integrated Container Features

It has been three and a half years since Red Hat last issued a major new version number of its flagship Red Hat Enterprise Linux platform. A lot has happened since RHEL 7 was launched in June 2014, and Red Hat is now previewing its next-generation RHEL 8 platform in beta.

Among the biggest changes across the compute landscape in the last four years has been the emergence of containers and microservices as a primary paradigm for application deployment. In RHEL 8, Red Hat is including multiple container tools that it has been developing and proving out in the open-source community, including Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers).

Systems management is also getting a boost in RHEL 8 with the Composer features that enable organizations to build and deploy custom RHEL images. Management of RHEL is further enhanced via the new Red Hat Enterprise Linux Web Console, which enables administrators to manage bare metal, virtual, local and remote Linux servers.

Although RHEL 8 will be the first major version number update since RHEL 7 in 2014, Red Hat has not been sitting idle the past four years. The company has updated RHEL up to twice a year with new milestone versions. The most recent version is RHEL 7.6, which became generally available on Oct. 30 with new security capabilities.

The RHEL 7.6 release came the day after Red Hat announced it was being acquired by IBM in a $34 billion deal that is set to close in 2019.

Security

New security capabilities will also be a core element of RHEL 8, most notably the inclusion of support for the TLS 1.3 cryptographic standard. TLS 1.3 was announced as a formal standard by the IETF back on March 26, providing an updated version to the core protocol used to secure data in motion across the internet.

Additionally, Red Hat is making it easier for system administrators to manage cryptographic policies in RHEL 8 with a new feature.

“System-wide cryptographic policies, which configures the core cryptographic subsystems, covering the TLS, IPSec, SSH, DNSSec, and Kerberos protocols, are applied by default,” the RHEL 8 release notes state. “With the new update-crypto-policies command, the administrator can easily switch between modes: default, legacy, future, and fips.”
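As a quick illustration of how this is meant to work (based on the release notes quoted above; check the RHEL 8 documentation for the exact policy names), an administrator would view and switch the system-wide policy roughly like this:

$ update-crypto-policies --show
$ sudo update-crypto-policies --set FUTURE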

 

Application Streams

In the past, RHEL users were largely stuck with certain version branches of core application libraries in an effort to help maintain compatibility and stability.

Red Hat’s community-led Fedora Linux distribution introduced the concept of modularity earlier this year, with the release of Fedora 28. RHEL 8 is now following the Fedora Modularity lead with the concept of Application Streams.

“Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system,” Stefanie Chiras, vice president and general manager of Red Hat Enterprise Linux at Red Hat, wrote in a blog. “Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream.”

Memory

Perhaps the biggest single change coming to RHEL 8 is in terms of system performance, specifically due to a new upper limit on physical memory capacity.

RHEL 7 had a physical upper limit of 64TB of system memory per server. Thanks to new performance capabilities in next-generation Intel and AMD CPUs, RHEL 8 will have an upper limit of 4PB of physical memory capacity.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Download Disks Linux 3.31.2

Disks (formerly GNOME Disk Utility) is an open source software that lists mounted storage devices and virtual disk images, allowing users to manipulate them in any way possible.

The application looks exactly like the Disk Utility software of the Mac OS X operating system. It lets users view detailed information about a given storage device, such as model, size, partitioning, serial number, assessment, and device path.

In addition, for each drive, the software can display detailed volume information in both graphical and text modes, such as partition type, size, absolute path, filesystem type, and mount point.

Features at a glance

There are various options for each partition and drive, allowing users to deactivate, mount, unmount, format, delete or benchmark them. You can also perform all these actions in batch mode, on multiple selected drives at once.

Another interesting feature is the ability to view SMART attributes and run self-tests on a specific disk drive, which will tell you if the device is OK or not and if it contains errors. Also, you can apply advanced power management and write cache settings for each listed disk.

Besides standard storage devices like SSDs (Solid State Drives), HDDs (Hard Disk Drives) and USB flash drives, the program can also mount and list ISO and IMG disk images, which can be deployed (restored) to one of the aforementioned disk drives mounted on your machine. It can also list optical devices, such as CD-ROM, DVD-ROM or Blu-ray drives.

Designed for GNOME

It is distributed as part of the GNOME desktop environment, but it can also be installed on other open source window managers as a standalone application, through the default software repositories of your Linux distribution.

Bottom line

Overall, Disks is an essential application for the GNOME desktop environment, as well as for any Linux-based operating system. It allows you to format and partition disk drives, as well as to write ISO images to USB sticks.

Source

The Opportunity in OpenStack Cloud for Service Providers

Helping Your Clients Embrace the Cloud Can Reap Big Dividends

Digital transformation is affecting every industry, from manufacturing to hospitality and government to finance. As a service provider, you’ve probably seen how this period of rapid change is disrupting your customers—causing both stress and growth. Luckily, your customers’ digital transformation can be an opportunity for your organization too.

Digital transformation is driving increased cloud adoption. According to a new research report from 451 Research, multicloud scenarios are the norm, and that means organizations increasingly need Cloud Management Platforms (CMPs). This is where service providers can step in. One compelling option for CMPs is open source software, including the industry-leading OpenStack cloud.

Open source platforms such as OpenStack can help you to better support the digital transformation initiatives of your customers. By enabling customization, customer choice and support for a broader array of technologies and platforms, open source software such as OpenStack provides benefits proprietary offerings don’t. One of those benefits is the constant innovation and improvement that open source technologies experience due to the contributions of a large community of developers.

OpenStack isn’t a cure-all. It makes great sense for some scenarios and less so for others. The report details where service providers are likely to see the maximum potential opportunity:

Large Enterprises

The largest companies have been early adopters of open source technologies, and with their developer teams and in-house resources, they often have a better understanding of their CMP needs. 451 also expects that enterprise data center growth will occur mostly in hosted environments—private, public and dedicated—as enterprises move increasingly to the cloud.

Private Cloud Requirements

While not exclusively a private cloud opportunity, the majority of the open source CMP opportunity is with private cloud. OpenStack can’t compare or compete with hyperscale public cloud providers in terms of features and functionality, but it can provide the desired control in a private cloud scenario.

Regulated Industries

If you’re a service provider working with customers in a regulated industry such as finance or health care, you likely know the challenges better than anyone. There are often strict requirements that some applications and data run in-house or in a private cloud. This may rule out certain proprietary cloud offerings while creating the opportunity for open source cloud software.

Regional Requirements

Outside of the North American market, people are still wary of trusting the processing and storing of data to a U.S.-based vendor. In addition, legislation—such as the General Data Protection Requirements (GDPR) in Europe—is increasingly adding location and data-transit rules to customers’ burdens.

In these sectors and more, OpenStack presents service providers like you with a compelling opportunity. How to best take advantage of it is the next question. In the paper, you’ll learn:

  • Which of the open source alternatives and go-to-market variations is best for you
  • What you stand to gain from your investment
  • How to best avoid the challenges involved

Source

Practical Networking for Linux Admins: TCP/IP | Linux.com

Get to know networking basics with this tutorial from our archives.

Linux grew up with a networking stack as part of its core, and networking is one of its strongest features. Let’s take a practical look at some of the TCP/IP fundamentals we use every day.

It’s IP Address

I have a peeve. OK, more than one. But for this article just one, and that is using “IP” as a shortcut for “IP address”. They are not the same. IP = Internet Protocol. You’re not managing Internet Protocols, you’re managing Internet Protocol addresses. If you’re creating, managing, and deleting Internet Protocols, then you are an uber guru doing something entirely different.

Yes, OSI Model is Relevant

TCP is short for Transmission Control Protocol. TCP/IP is shorthand for describing the Internet Protocol Suite, which contains multiple networking protocols. You’re familiar with the Open Systems Interconnection (OSI) model, which categorizes networking into seven layers:

  • 7. Application layer
  • 6. Presentation layer
  • 5. Session layer
  • 4. Transport layer
  • 3. Network layer
  • 2. Data link layer
  • 1. Physical layer

The application layer includes the network protocols you use every day: SSH, TLS/SSL, HTTP, IMAP, SMTP, DNS, DHCP, streaming media protocols, and tons more.

TCP operates in the transport layer, along with its friend UDP, the User Datagram Protocol. TCP is more complex; it performs error-checking, and it tries very hard to deliver your packets. There is a lot of back-and-forth communication with TCP as it transmits and verifies transmission, and when packets get lost it resends them. UDP is simpler and has less overhead. It sends out datagrams once, and UDP neither knows nor cares if they reach their destination.

TCP is for ensuring that data is transferred completely and in order. If a file transfers with even one byte missing, it’s no good. UDP is good for lightweight stateless transfers such as NTP and DNS queries, and is efficient for streaming media. If your music or video has a blip or two, it doesn’t render the whole stream unusable.

The physical layer refers to your networking hardware: Ethernet and wi-fi interfaces, cabling, switches, whatever gadgets it takes to move your bits and the electricity to operate them.

Ports and Sockets

Linux admins and users have to know about ports and sockets. A network socket is the combination of an IP address and port number. Remember back in the early days of Ubuntu, when the default installation did not include a firewall? No ports were open in the default installation, so there were no entry points for an attacker. “Opening a port” means starting a service, such as an HTTP, IMAP, or SSH server. Then the service opens a listening port to wait for incoming connections. “Opening a port” isn’t quite accurate because it’s really referring to a socket. You can see these with the netstat command. This example displays only listening sockets and the names of their services:

$ sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1583/mysqld
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN 13951/qemu-system-x
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 2101/dnsmasq
tcp 0 0 192.168.122.1:80 0.0.0.0:* LISTEN 2001/apache2
tcp 0 0 192.168.122.1:443 0.0.0.0:* LISTEN 2013/apache2
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1200/sshd
tcp6 0 0 :::80 :::* LISTEN 2057/apache2
tcp6 0 0 :::22 :::* LISTEN 1200/sshd
tcp6 0 0 :::443 :::* LISTEN 2057/apache2

This shows that MariaDB (whose executable is mysqld) is listening only on localhost at port 3306, so it does not accept outside connections. Dnsmasq is listening on 192.168.122.1 at port 53, so it is accepting external requests. SSH is wide open for connections on any network interface. As you can see, you have control over exactly what network interfaces, ports, and addresses your services accept connections on.

Apache is listening on two IPv4 and two IPv6 ports, 80 and 443. Port 80 is the standard unencrypted HTTP port, and 443 is for encrypted TLS/SSL sessions. The foreign IPv6 address of :::* is the same as 0.0.0.0:* for IPv4. Those are wildcards accepting all requests from all ports and IP addresses. If there are certain addresses or address ranges you do not want to accept connections from, you can block them with firewall rules.
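For example (assuming iptables is managing your firewall; the 203.0.113.0/24 range here is just a placeholder documentation network), a single rule like the following drops all traffic from an unwanted address range before it ever reaches those listening sockets:

$ sudo iptables -A INPUT -s 203.0.113.0/24 -j DROP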

A network socket is a TCP/IP endpoint, and a TCP/IP connection needs two endpoints. A socket represents a single endpoint, and as our netstat example shows a single service can manage multiple endpoints at one time. A single IP address or network interface can manage multiple connections.

The example also shows the difference between a service and a process. apache2 is the service name, and it is running four processes. sshd is one service with one process listening on two different sockets.

Unix Sockets

Networking is so deeply embedded in Linux that its Unix domain sockets (also called inter-process communications, or IPC) behave like TCP/IP networking. Unix domain sockets are endpoints between processes in your Linux operating system, and they operate only inside the Linux kernel. You can see these with netstat:

$ netstat -lx
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 988 /var/run/dbus/system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 29730 /run/user/1000/systemd/private
unix 2 [ ACC ] SEQPACKET LISTENING 357 /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 27233 /run/user/1000/keyring/control

It’s rather fascinating how they operate. The SOCK_STREAM socket type behaves like TCP with reliable delivery, and SOCK_DGRAM is similar to UDP, unordered and unreliable, but fast and low-overhead. You’ve heard how everything in Unix is a file? Instead of networking protocols and IP addresses and ports, Unix domain sockets use special files, which you can see in the above example. They have inodes, metadata, and permissions just like the regular files we use every day.

If you want to dig more deeply there are a lot of excellent books. Or, you might start with man tcp and man 2 socket. Next week, we’ll look at network configurations, and whatever happened to IPv6?

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source
