Schedule One-Time Commands with the UNIX at Tool

Cron is nice and all, but don’t forget about its cousin at.

When I first started using Linux, it was like being tossed into the deep end
of the UNIX pool. You were expected to use the command line heavily along
with all the standard utilities and services that came with your
distribution. A lot has changed since then, and nowadays, you can use a
standard Linux desktop without ever having to open a terminal or use old
UNIX services. Even as a sysadmin, these days, you often are a few layers of
abstraction above some of these core services.

I say all of this to point out that for us old-timers, it’s easy to take for
granted that everyone around us innately knows about all the command-line
tools we use. Yet, even though I’ve been using Linux for 20 years, I
still learn about new (to me) command-line tools all the time. In this “Back
to Basics” article series, I plan to cover some of the command-line tools
that those new to Linux may never have used before. For those of you who are
more advanced, I’ll spread out this series, so you can expect future
articles to be more technical. In this article, I describe how to use
the at utility to schedule jobs to run at a later date.

at vs. Cron

at is one of those commands that isn’t discussed very much. When
people talk about scheduling commands, typically cron gets the most
coverage. Cron allows you to schedule commands to be run on a periodic
basis. With cron, you can run a command as frequently as every minute or as
seldom as once a day, week, month or even year. You also can define more
sophisticated rules, so commands run, for example, every five minutes, every
weekday, every other hour and many other combinations. System administrators sometimes
will use cron to schedule a local script to collect metrics every minute or
to schedule backups.
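
For example, a pair of crontab entries like these (the script paths are
hypothetical) would collect metrics every five minutes and run a backup at
11pm every weekday:

# m    h  dom mon dow  command
*/5 *  *   *   *       /usr/local/bin/collect_metrics   # every five minutes
0   23 *   *   1-5     /usr/local/bin/run_backup        # 11pm, Monday through Friday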

On the other hand, although the at command also allows you to schedule
commands, it serves a completely different purpose from cron. While cron
lets you schedule commands to run periodically, at lets you schedule
commands that run only once at a particular time in the future. This
means that at fills a different and usually more immediate need
from cron.

Using at

At one point, the at command came standard on most Linux
distributions, but
these days, even on servers, you may find yourself having to
install the at package explicitly. Once installed, the easiest
way to use at is to type
it on the command line followed by the time you want the command to run:

$ at 18:00

The at command also can accept a number of different time formats. For
instance, it understands AM and PM as well as words like “tomorrow”, so you
could replace the above command with this equivalent one:

$ at 6pm

And, if you want to run the same command at that time tomorrow instead:

$ at 6pm tomorrow

Once you press enter, you’ll drop into an interactive shell:

$ at 6pm tomorrow
warning: commands will be executed using /bin/sh
at>

From the interactive shell, you can enter the command you want to run
at that time. If you want to run multiple commands, press enter after each
command and type the command on the new at> prompt. Once you’re done
entering commands, press Ctrl-D on an empty at> prompt to exit the
interactive shell.

For instance, let’s say I’ve noticed that a particular server has had
problems the past two days at 5:10am for around five minutes, and so far, I’m
not seeing anything in the logs. Although I could just wake up early and log
in to the server, instead I could write a short script that collects data
from ps, netstat, tcpdump and other
command-line tools for a few minutes, so
when I wake up, I can go over the data it collected. Since this is a one-off,
I don’t want to schedule something with cron and risk forgetting about it
and having it run every day, so this is how I would set it up with
at:

$ at 5:09am tomorrow
warning: commands will be executed using /bin/sh
at> /usr/local/bin/my_monitoring_script

Then I would press Ctrl-D, and the shell would exit with this output:

at> <EOT>
job 1 at Wed Sep 26 05:09:00 2018

Managing at Jobs

Once you have scheduled at jobs, it’s useful to be able to pull up a list of
all the at jobs in the queue, so you know what’s running and
when. The atq
command lists the current at queue:

$ atq
1 Wed Sep 26 05:09:00 2018 a kyle

The first column shows the number at assigned to each job, followed by the
time the job will run and the user it will run as. Let’s say that in
the above example I realize I’ve made a mistake, because my script won’t be able
to run as a regular user. In that case, I would want to use the
atrm command
to remove job number 1:

$ atrm 1

If I were to run atq again, I would see that the job no longer exists.
Then I could sudo up to root and use the at command to schedule the job
again.
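
For example, re-creating the same job as root would look something like this
(the job number and date in the final output will differ):

$ sudo at 5:09am tomorrow
warning: commands will be executed using /bin/sh
at> /usr/local/bin/my_monitoring_script
at> <EOT>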

at One-Liners

Although at supports an interactive mode, you also can pipe commands to it all
on one line instead. So, for instance, I could schedule the above job with:

$ echo /usr/local/bin/my_monitoring_script | at 5:09am tomorrow

Conclusion

If you didn’t know that at existed, you might find yourself coming up with
all sorts of complicated and convoluted ways to schedule a one-off job. Even
worse, you might need to set an alarm clock so you can wake up extra early
and log in to a problem server. Of course, if you don’t have an alarm clock,
you could use at:

$ echo "aplay /home/kyle/alarm.wav" | at 7am tomorrow

Source

Open Source 2018: It Was the Best of Times, It Was the Worst of Times | Linux.com

Recently, IBM announced that it would be acquiring Red Hat for $34 billion, a more-than-60-percent premium over Red Hat’s market cap, and a nearly 12x multiple on revenues. In many ways, this was a clear sign that 2018 was the year commercial open source arrived, if there was ever any question about it.

Indeed, the Red Hat transaction is just the latest in a long line of multi-billion-dollar outcomes this year. To date, more than $50 billion has been exchanged in open source IPOs and mergers and acquisitions (M&A); and all of the M&A deals are considered “mega deals” — those valued over $5 billion.

  • IBM’s acquisition of Red Hat – $34 billion
  • Hortonworks’ merger with Cloudera – $5.2 billion
  • Elasticsearch IPO – $4+ billion
  • Pivotal IPO – $3.9 billion
  • Mulesoft’s acquisition by Salesforce – $6.5 billion

If you’re a current open source software (OSS) shareholder, it may feel like the best of times. However, if you’re an OSS user or an emerging open source project or company, you might be feeling more ambivalent.

On the positive side, the fact that there have been such good financial outcomes should come as encouragement to the many still-private and outstanding open-source businesses (e.g., Confluent, Docker, HashiCorp, InfluxDB). And, we can certainly hope that this round of exits will encourage more investors to bet on OSS, enabling OSS to continue to be a prime driver of innovation.

However, not all of the news is rosy.

First, since many of these exits were in the form of M&A, we’ve actually lost some prime examples of independent OSS companies. For many years, there was a concern that Red Hat was the only example of a public open source company. Earlier this year, it seemed likely that the total would grow to 7 (Red Hat, Hortonworks, Cloudera, Elasticsearch, Pivotal, Mulesoft, and MongoDB). Assuming the announced M&As close as expected, the number of public open source companies is back down to four, and the combined market cap of public open source companies is much less than it was at the start of the year.

We Need to Go Deeper

I think it’s critical that we view these open source outcomes in the context of another unavoidable story — the growth in cloud computing.

Many of the open source companies involved share an overlooked common denominator: they’ve made most of their money through on-premise businesses. This probably comes as a surprise, as we regularly hear about cloud-related milestones, like the one that states that more than 80% of server workloads are in the cloud, that open source drives ⅔ or more of cloud revenues, and that the cloud computing market is expected to reach $300 billion by 2021.

By contrast, the total revenues of all of the open source companies listed above was less than $7B. And, almost all of the open source companies listed above have taken well over $200 million in investment each to build out direct sales and support to appropriately sell to the large, on premises enterprise market.


Open Source Driving Revenue, But for Whom?

The most common way that open source is used in the cloud is as a loss-leader to sell infrastructure. The largest cloud companies all offer free or near-free open source services that drive consumption of compute, networking, and storage.

To be clear, this is perfectly legal, and many of the cloud companies have contributed generously in both code and time to open source. However, the fact that it is difficult for OSS companies to monetize their own products with a hosted offering means that they are shut off from one of the most important and sustainable paths to scaling. Perhaps most importantly, OSS companies that are independent are largely closed off from the fastest growing segment of the computing market. Since there are only a handful of companies worldwide with the scale and capital to operate traditional public clouds (indeed, Amazon, Google, Microsoft, and Alibaba are among the largest companies on the planet), and those companies already control a disproportionate share of traffic, data, capital and talent, how can we ensure that investment, monetization, and innovation continue to flow in open source? And, how can open source companies sustainably grow?

For some OSS companies, the answer is M&A. For others, the cloud monetization/competition question has led them to adopt controversial and more restrictive licensing policies, such as Redis Lab’s adoption of the Commons Clause and MongoDB’s Server Side License.

But there may be a different answer to cloud monetization. Namely, create a different kind of cloud, one based on decentralized infrastructure.

Rather than spending billions to build out data centers, decentralized infrastructure approaches (like Storj, SONM, and others) provide incentives for people around the world to contribute spare computing, storage or network capacity. For example, by fairly and transparently allowing storage node operators to share in the revenue generated (i.e., by compensating supply), Storj was able to rapidly grow to a network of 150,000 nodes in 180 countries with over 150 PB of capacity, equivalent to several large data centers. Similarly, rather than spending hundreds of millions on traditional sales and marketing, we believe there is a way to fairly and transparently compensate those who bring demand to the network, so we have programmatically designed our network so that open source companies whose projects send users our way can be fairly and transparently compensated in proportion to the storage and network usage they generate. We are actively working to encourage other decentralized networks to do the same, and believe this is the future of open cloud computing.

This isn’t charity. Decentralized networks have strong economic incentives to compensate OSS as the primary driver of cloud demand. But, more importantly, we think that this can help drive a virtuous circle of investment, growth, monetization, and innovation. Done correctly, this will ensure that the best of times lay ahead!

Ben Golub is the former CEO of Docker and interim CEO at Storj Labs.

Watch the Open Source Summit keynote presentation from Ben Golub and Shawn Wilkinson to learn more about open source and the decentralized web.

Source

Cheat Sheet of Useful Commands Every Kali Linux User Needs To Know

This cheat sheet includes a list of basic and useful Linux commands that every Kali Linux user needs to know.

If you want to learn how to hack with Kali Linux, the most important thing you should do first is to master the command line interface.

Here’s why:

Tasks that take minutes or even hours to do on a desktop environment (GUI) can be done in a matter of seconds from the command line.

For example:

To download an entire HTML website, you only need to type:

wget -r domain.com

Now if you were to do the same on a GUI, you’d have to save each page one by one.

This is only one of many examples as to how powerful the command line is. There are many other tasks on Linux that can only be done from the command line.

In short:

Knowing your way around a command line will make you a more efficient and effective programmer. You’ll be able to get shit done faster by automating repetitive tasks.

Plus, you’ll look like a complete bad ass in the process.

Use this cheat sheet as a reference in case you forget how to do certain tasks from the command-line. And trust me, it happens.

If you’re new to Unix/Linux operating systems, this cheat sheet also includes the fundamental linux commands such as jumping from one directory to another, as well as more technical stuff like managing processes.

NOTES
Everything inside “<>” should be replaced with a name of a file, directory or command.

Bash = A popular command-line used in Unix/Linux operating systems.

dir = directory/folder
file = file name & type (e.g. notes.txt)
cmd = command (e.g. mkdir, ls, curl, etc.)
location = path/destination (e.g. /home/Desktop)

pwd: Display path of current directory you’re in

ls: List all files and folders in the current directory
ls -la: Detailed list of files and folders, including hidden ones

Change to a specific directory

cd: Change to home directory
cd ~/Desktop: Change to a specific directory, e.g. the Desktop in your home folder
cd .. : Move back a directory

Create a directory/folder

mkdir <dir>: Create a new directory
mkdir /home/Desktop/dir: Create a directory in a specific location

Create and edit files

touch <file>: Create an empty file
nano <file>: Edit an existing file or create it if it doesn’t exist.
Alternatives to nano text editor: vim, emacs

Copy, move and rename files and directories

cp <file1> <file2>: Create a copy of a file
cp -r <dir1> <dir2>: Create a copy of a directory and everything in it
cp <file> /home/Desktop/file2: Create a copy of a file in a different directory and name it file2.

mv <file> /home/Desktop: Move a file to a specific directory (overwrites any existing file with the same name)
mv <dir> /home/Desktop: Move a directory to another location
mv <dir1> <dir2>: Rename a file OR directory (dir1 -> dir2)

Delete files

rm <file>: Delete a file
rm -f <file>: Force delete a file
Careful now..

rm -r <dir>: Delete a directory and its contents
rm -rf <dir>: Force delete a directory and its contents
Careful when using this command as it will delete everything inside the directory

Output and analyze files

cat <file>: Display/output the contents of a file
less <file>: Display the contents of a file with scroll (paginate) ability (press q to quit)

head <file>: Display the first ten lines in a file
head -20 <file>: Display the first 20 lines in a file
tail <file>: Display the last ten lines in a file
tail -20 <file>: Display the last 20 lines in a file

diff <file1> <file2>: Check the difference between two files (file1 and file2)

cal: Display monthly calendar

date: Check date and time
uptime: Check system uptime and currently logged in users

uname -a: Display system information.
dmesg: Display kernel ring buffer

poweroff: Shutdown system
reboot: Reboot system

View disk and memory usage

df -h: Display disk space usage
fdisk -l: List disk partition tables
free: Display memory usage

cat /proc/meminfo: Display memory information
cat /proc/cpuinfo: Display cpu information

View user information

whoami: Output your username
w: Check who’s online

history: View a list of your previously executed commands

View last logged in users and information

last: Display last login info of users
last <user>: Display last login info of a specific user

finger <user>: Display user information

Installing & Upgrading Packages

Search for packages

apt-cache pkgnames: List all available packages
apt search <name>: Search for a package and its description
apt show <name>: Check detailed description of a package

Install packages

apt-get install <name>: Install a package
apt-get install <name1> <name2>: Install multiple packages

Update, upgrade & cleanup

apt-get update: Update list of available packages
apt-get upgrade: Install the newest version of available packages
apt-get dist-upgrade: Upgrade packages, adding or removing dependencies as needed
apt-get autoremove: Remove installed packages that are no longer needed
apt-get clean: Free up disk space by removing archived packages

Delete packages

apt-get remove: Uninstall a package
apt-get remove --purge: Uninstall a package and remove its configuration files

Processes & Job Management

top: Display running processes & system usage in real-time.

ps: Display currently running processes
ps -u <user>: Display currently running processes of a user

kill <PID>: Kill a process by its PID #.
killall <process>: Kill all processes with the specified name.

Start, stop, resume jobs

jobs: Display the status of current jobs
jobs -l: Display detailed info about each job
jobs -r: Display only running jobs

bg: View stopped background jobs or resume job in the background
fg: Resume recent job in the foreground
fg <job>: Bring specific job to the foreground.

ping <host>: Ping a host
whois <domain/IP>: Get whois information about a domain or IP.
dig <domain/IP>: Get DNS information
nslookup <NS>: Get nameserver information

ifconfig: Configure/display network interfaces
iwconfig: Configure/display wireless network interfaces

netstat -r: Display kernel IP routing tables
netstat -antp: Check for established and listening ports/connections

arp -a: Display ARP cache tables for all interfaces

Secure File Transfer (SCP)

Transfer files FROM the local system TO a remote host (Local > Remote)
scp /path/to/file user@remote_host:/path/to/dest

Transfer files FROM a remote host TO the local system (Remote > Local)
scp user@remote_host:/path/to/file /path/to/dest

Transfer directories and everything within it
scp -r /path/to/dir user@remote_host:/path/to/dest

Transfer all files that match a specific filetype
scp /path/to/*.txt user@remote_host:/path/to/dest

Transfer local SSH public key to remote host
cat ~/.ssh/id_rsa.pub | ssh user@remote_host 'cat >> .ssh/authorized_keys'

Am I forgetting something? Let me know in the comments below. I’ll continue to update this when I get a chance.

Source

Kubernetes Tutorial for Beginners | Kubernetes Beginner’s Guide


Have you been trying to learn Kubernetes for a while now but still miss some concepts? Learning Kubernetes can be tough, especially for users new to containers and their orchestration. This ebook is one of the best books for getting started with Kubernetes. It has all the pieces you need to become a Kubernetes master.

By way of introduction, let’s define what Kubernetes is. Kubernetes is an open source tool initially designed by Google to aid in the automation and management of containers and the applications running on them.

If you’ve been playing with container engine tools like Docker, you must have experienced how difficult it is to manage more than one Docker container across a number of hosts. This is where Kubernetes comes in. It makes it easy to deploy more than one container across a fleet of nodes and ensure they are highly available and redundant.
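
To give a flavor of what that looks like in practice, here is a minimal
sketch using kubectl (the deployment name and image are examples, not taken
from the book): create a deployment, scale it to three replicas across the
cluster, and expose it as a service:

$ kubectl create deployment web --image=nginx
$ kubectl scale deployment web --replicas=3
$ kubectl expose deployment web --port=80 --type=NodePort
$ kubectl get pods -o wide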

Free Ebook Kubernetes Essentials

What’s in “Kubernetes Essentials” eBook?

Everything in this “Kubernetes Essentials” ebook is perfectly arranged, starting from Kubernetes basics to advanced topics for experienced system administrators and developers. Below is an overview of the chapters available in this book.

Chapter 1: Introduction to Kubernetes

In this chapter, you’re introduced to the world of containers. You get to differentiate between virtualization and containerization: the difference between Docker and a VM, Docker vs. Kubernetes, why you need Kubernetes, Kubernetes use cases around the world, and so on.

Chapter 2: Key definitions and components

In chapter two of this ebook, you get to learn all the pieces that make up Kubernetes. You’re introduced to the concepts of Pods, Clusters, Labels, Services and Replication, and all the components of Kubernetes are covered in detail, with a clear definition of their functionality. This is where you get to understand Kubernetes well and how all its components fit together.

Chapter 3: Kubernetes Concepts

In this chapter, you get to learn the Kubernetes networking and storage subsystem layers in detail: how Pods in Kubernetes manage multiple containers (lifecycle, Pod creation, replication) and multi-node networking such as VXLAN. How rescheduling and rolling updates take place in Kubernetes is also covered in this section.

Chapter 4: Deploying Kubernetes Manually

Chapter 4 of this book concentrates on the manual deployment of Kubernetes on CentOS, Ubuntu, and other operating systems. The environment can be virtual, e.g. VirtualBox, the AWS cloud, Azure, or a Vagrant-managed test environment. You’ll build Kubernetes clusters from scratch, starting from preparation of the base OS, the basics of managing a cluster with Vagrant and working with the kubeadm tool, to troubleshooting deployment issues, working with etcd, Kubernetes add-ons, the Kubernetes dashboard, Flannel networking, CoreDNS, etc.

Chapter 5: Orchestrating Containers with Kubernetes

Everything before this chapter introduced you to the basics of Kubernetes and its deployment; now it’s time to do the dirty work. Here you start to deploy real applications in containers orchestrated through Kubernetes. By the end of this chapter, you should be confident deploying applications on Kubernetes and exposing them to the public via Services. Troubleshooting Docker containers under the Kubernetes umbrella is covered in detail.

Chapter 6: Deploying Kubernetes with Ansible

You don’t want to deploy Kubernetes manually? Don’t worry, your remedy is here. With Ansible, you can automate the deployment of Kubernetes by having everything in an executable playbook. You’ll spend some time writing YAML files, which will save you many hours later. With this, it becomes easy to scale out your Kubernetes infrastructure and tear it down when done.

Chapter 7: Provisioning Storage in Kubernetes

Storage is one of the crucial parts of Kubernetes. If poorly designed and deployed, it can cost you time and money to bring things back into service after a failure. This chapter teaches you the best storage guidelines to follow for Kubernetes. You’re introduced to the various storage plugins available and given advice on which one to pick. The main goal of this chapter is to help you deploy persistent storage that’s easy to scale, and to show how to use this storage inside containers. NFS and iSCSI are the core storage protocols covered.

Chapter 8: Troubleshooting Kubernetes and Systemd Services

Troubleshooting is key in all systems management tasks. You’ll learn to inspect and debug issues in Kubernetes. This chapter covers troubleshooting of Pods, cluster controllers, worker nodes, Docker containers, storage, networking and all other Kubernetes components. If you have been in the Linux world for some time, you must have witnessed the stress of managing services with upstart. Then came systemd, with its own challenges and benefits. In this chapter, you’ll learn all the bells and whistles of systemd on Kubernetes, and how to fix issues when they arise by using systemd as a troubleshooting tool.

Chapter 9: Kubernetes Maintenance

This chapter covers Kubernetes monitoring with InfluxDB as a data store, Grafana as a visualization tool, and the Prometheus monitoring system/time-series database. Using the Kubernetes Dashboard to visualize container infrastructure is also covered here, as is logging for containers. Finally, regular checks and cleaning are essential.

Wrapping Up

Learning Kubernetes is inevitable, especially for system engineers, administrators, and DevOps roles. Kubernetes is a recent technology, but it has revolutionized how containerized applications are deployed in the cloud. Being an open source technology backed by a huge community and the support of big companies like Red Hat, SUSE and others, its future is definitely bright. This ebook will help you get started early and grow your career in this interesting and growing containers space. The content of this book is concrete and covers everything you need to become a Kubernetes guru!


Source

Red Hat Enterprise Linux 8 Hits Beta With Integrated Container Features

It has been three and a half years since Red Hat last issued a major new version number of its flagship Red Hat Enterprise Linux platform. A lot has happened since RHEL 7 was launched in June 2014, and Red Hat is now previewing its next-generation RHEL 8 platform in beta.

Among the biggest changes in the last four years across the compute landscape has been the emergence of containers and microservices as being a primary paradigm for application deployment. In RHEL 8, Red Hat is including multiple container tools that it has been developing and proving out in the open-source community, including Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers).
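
For a quick taste of that tooling, here is a rough sketch (the Fedora image
is only an example): Podman runs a container without a daemon, Skopeo
inspects a remote image without pulling it, and Buildah starts a new image
build from a base:

$ podman run --rm -it fedora bash
$ skopeo inspect docker://docker.io/library/fedora:latest
$ buildah from fedora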

Systems management is also getting a boost in RHEL 8 with the Composer features that enable organizations to build and deploy custom RHEL images. Management of RHEL is further enhanced via the new Red Hat Enterprise Linux Web Console, which enables administrators to manage bare metal, virtual, local and remote Linux servers.

Although RHEL 8 will be the first major version number update since RHEL 7 in 2014, Red Hat has not been sitting idle the past four years. The company has updated RHEL up to twice a year with new milestone versions. The most recent version is RHEL 7.6, which became generally available on Oct. 30 with new security capabilities.

The RHEL 7.6 release came the day after Red Hat announced it was being acquired by IBM in a $34 billion deal that is set to close in 2019.

Security

New security capabilities will also be a core element of RHEL 8, most notably the inclusion of support for the TLS 1.3 cryptographic standard. TLS 1.3 was announced as a formal standard by the IETF back on March 26, providing an updated version to the core protocol used to secure data in motion across the internet.

Additionally, Red Hat is making it easier for system administrators to manage cryptographic policies in RHEL 8 with a new feature.

“System-wide cryptographic policies, which configures the core cryptographic subsystems, covering the TLS, IPSec, SSH, DNSSec, and Kerberos protocols, are applied by default,” the RHEL 8 release notes state. “With the new update-crypto-policies command, the administrator can easily switch between modes: default, legacy, future, and fips.”
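
In practice, switching the system-wide policy is a short command sequence (a
sketch based on the release notes; services need to be restarted, or the
system rebooted, to pick up the change):

$ sudo update-crypto-policies --show
$ sudo update-crypto-policies --set FUTURE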

 

Application Streams

In the past, RHEL users were largely stuck with certain version branches of core application libraries in an effort to help maintain compatibility and stability.

Red Hat’s community-led Fedora Linux distribution introduced the concept of modularity earlier this year, with the release of Fedora 28. RHEL 8 is now following the Fedora Modularity lead with the concept of Application Streams.

“Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system,” Stefanie Chiras, vice president and general manager of Red Hat Enterprise Linux at Red Hat, wrote in a blog. “Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream.”
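
On a RHEL 8 system, Application Streams surface as module streams in dnf. A
hypothetical example (the module and stream names are only illustrations):

$ sudo dnf module list postgresql
$ sudo dnf module install postgresql:10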

Memory

Perhaps the biggest single change coming to RHEL 8 is in terms of system performance, specifically due to a new upper limit on physical memory capacity.

RHEL 7 had a physical upper limit of 64TB of system memory per server. Thanks to new performance capabilities in next-generation Intel and AMD CPUs, RHEL 8 will have an upper limit of 4PB of physical memory capacity.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Download Disks Linux 3.31.2

Disks (formerly GNOME Disk Utility) is an open source software that lists mounted storage devices and virtual disk images, allowing users to manipulate them in any way possible.

The application looks exactly like the Disk Utility software of the Mac OS X operating system. It lets users view detailed information about a certain storage device, such as model, size, partitioning, serial number, assessment, and device path.

In addition, for each drive, the software can display detailed volume information in both graphical and text modes, such as partition type, size, absolute path, filesystem type, and mounted point.

Features at a glance

There are various options for each partition and drive, allowing users to deactivate, mount, unmount, format, delete or benchmark them. You can also do all these actions in batch mode, using multiple selected drives at once.

Another interesting feature is the ability to view SMART attributes and run self-tests on a specific disk drive, which will tell you if the device is OK or not and if it contains errors. Also, you can apply advanced power management and write cache settings for each listed disk.

Besides standard storage devices like SSDs (Solid State Drives), HDDs (Hard Disk Drives) and USB flash drives, the program can also mount and list ISO and IMG disk images, which can be deployed (restored) to one of the aforementioned disk drives that are mounted on your machine. It can also list optical devices, such as CD-ROMs, DVD-ROMs or Blu-Ray drives.

Designed for GNOME

It is distributed as part of the GNOME desktop environment, but it can also be installed on other open source window managers as a standalone application, through the default software repositories of your Linux distribution.

Bottom line

Overall, Disks is an essential application for the GNOME desktop environment, as well as for any Linux-based operating system. It allows you to format and partition disk drives, as well as to write ISO images to USB sticks.

Source

The Opportunity in OpenStack Cloud for Service Providers

Helping Your Clients Embrace the Cloud Can Reap Big Dividends

Digital transformation is affecting every industry, from manufacturing to hospitality and government to finance. As a service provider, you’ve probably seen how this period of rapid change is disrupting your customers—causing both stress and growth. Luckily, your customers’ digital transformation can be an opportunity for your organization too.

Digital transformation is driving increased cloud adoption. According to a new research report from 451 Research, multicloud scenarios are the norm, and that means organizations increasingly need Cloud Management Platforms (CMPs). This is where service providers can step in. One compelling option for CMPs is open source software, including the industry-leading OpenStack cloud.

Open source platforms such as OpenStack can help you to better support the digital transformation initiatives of your customers. By enabling customization, customer choice and support for a broader array of technologies and platforms, open source software such as OpenStack provides benefits proprietary offerings don’t. One of those benefits is the constant innovation and improvement that open source technologies experience due to the contributions of a large community of developers.

OpenStack isn’t a cure-all. It makes great sense for some scenarios and less so for others. The report details where service providers are likely to see the maximum potential opportunity:

Large Enterprises

The largest companies have been early adopters of open source technologies, and with their developer teams and in-house resources, they often have a better understanding of their CMP needs. 451 also expects that enterprise data center growth will occur mostly in hosted environments—private, public and dedicated—as enterprises move increasingly to the cloud.

Private Cloud Requirements

While not exclusively a private cloud opportunity, the majority of the open source CMP opportunity is with private cloud. OpenStack can’t compare or compete with hyperscale public cloud providers in terms of features and functionality, but it can provide the desired control in a private cloud scenario.

Regulated Industries

If you’re a service provider working with customers in a regulated industry such as finance or health care, you likely know the challenges better than anyone. There are often strict requirements that some applications and data run in-house or in a private cloud. This may rule out certain proprietary cloud offerings while creating the opportunity for open source cloud software.

Regional Requirements

Outside of the North American market, people are still wary of trusting the processing and storing of data to a U.S.-based vendor. In addition, legislation—such as the General Data Protection Regulation (GDPR) in Europe—is increasingly adding location and data-transit rules to customers’ burdens.

In these sectors and more, OpenStack presents service providers like you with a compelling opportunity. How to best take advantage of it is the next question. In the paper, you’ll learn:

  • Which of the open source alternatives and go-to-market variations is best for you
  • What you stand to gain from your investment
  • How to best avoid the challenges involved

Source

Practical Networking for Linux Admins: TCP/IP | Linux.com

Get to know networking basics with this tutorial from our archives.

Linux grew up with a networking stack as part of its core, and networking is one of its strongest features. Let’s take a practical look at some of the TCP/IP fundamentals we use every day.

It’s IP Address

I have a peeve. OK, more than one. But for this article just one, and that is using “IP” as a shortcut for “IP address”. They are not the same. IP = Internet Protocol. You’re not managing Internet Protocols, you’re managing Internet Protocol addresses. If you’re creating, managing, and deleting Internet Protocols, then you are an uber guru doing something entirely different.

Yes, OSI Model is Relevant

TCP is short for Transmission Control Protocol. TCP/IP is shorthand for describing the Internet Protocol Suite, which contains multiple networking protocols. You’re familiar with the Open Systems Interconnection (OSI) model, which categorizes networking into seven layers:

  • 7. Application layer
  • 6. Presentation layer
  • 5. Session layer
  • 4. Transport layer
  • 3. Network layer
  • 2. Data link layer
  • 1. Physical layer

The application layer includes the network protocols you use every day: SSH, TLS/SSL, HTTP, IMAP, SMTP, DNS, DHCP, streaming media protocols, and tons more.

TCP operates in the transport layer, along with its friend UDP, the User Datagram Protocol. TCP is more complex; it performs error-checking, and it tries very hard to deliver your packets. There is a lot of back-and-forth communication with TCP as it transmits and verifies transmission, and when packets get lost it resends them. UDP is simpler and has less overhead. It sends out datagrams once, and UDP neither knows nor cares if they reach their destination.

TCP is for ensuring that data is transferred completely and in order. If a file transfers with even one byte missing it’s no good. UDP is good for lightweight stateless transfers such as NTP and DNS queries, and is efficient for streaming media. If your music or video has a blip or two it doesn’t render the whole stream unusable.

The physical layer refers to your networking hardware: Ethernet and wi-fi interfaces, cabling, switches, whatever gadgets it takes to move your bits and the electricity to operate them.

Ports and Sockets

Linux admins and users have to know about ports and sockets. A network socket is the combination of an IP address and port number. Remember back in the early days of Ubuntu, when the default installation did not include a firewall? No ports were open in the default installation, so there were no entry points for an attacker. “Opening a port” means starting a service, such as an HTTP, IMAP, or SSH server. Then the service opens a listening port to wait for incoming connections. “Opening a port” isn’t quite accurate because it’s really referring to a socket. You can see these with the netstat command. This example displays only listening sockets and the names of their services:

$ sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1583/mysqld
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN 13951/qemu-system-x
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 2101/dnsmasq
tcp 0 0 192.168.122.1:80 0.0.0.0:* LISTEN 2001/apache2
tcp 0 0 192.168.122.1:443 0.0.0.0:* LISTEN 2013/apache2
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1200/sshd
tcp6 0 0 :::80 :::* LISTEN 2057/apache2
tcp6 0 0 :::22 :::* LISTEN 1200/sshd
tcp6 0 0 :::443 :::* LISTEN 2057/apache2

This shows that MariaDB (whose executable is mysqld) is listening only on localhost at port 3306, so it does not accept outside connections. Dnsmasq is listening on 192.168.122.1 at port 53, so it is accepting external requests. SSH is wide open for connections on any network interface. As you can see, you have control over exactly what network interfaces, ports, and addresses your services accept connections on.

Apache is listening on two IPv4 and two IPv6 ports, 80 and 443. Port 80 is the standard unencrypted HTTP port, and 443 is for encrypted TLS/SSL sessions. The foreign IPv6 address of :::* is the same as 0.0.0.0:* for IPv4. Those are wildcards accepting all requests from all ports and IP addresses. If there are certain addresses or address ranges you do not want to accept connections from, you can block them with firewall rules.
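
For example, with iptables you could drop everything from a single subnet
while leaving other traffic alone (a sketch; the range shown is a reserved
documentation block, not a real network):

$ sudo iptables -A INPUT -s 203.0.113.0/24 -j DROP
$ sudo iptables -L INPUT -v -n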

A network socket is a TCP/IP endpoint, and a TCP/IP connection needs two endpoints. A socket represents a single endpoint, and as our netstat example shows a single service can manage multiple endpoints at one time. A single IP address or network interface can manage multiple connections.

The example also shows the difference between a service and a process. apache2 is the service name, and it is running four processes. sshd is one service with one process listening on two different sockets.

Unix Sockets

Networking is so deeply embedded in Linux that its Unix domain sockets (also called inter-process communications, or IPC) behave like TCP/IP networking. Unix domain sockets are endpoints between processes in your Linux operating system, and they operate only inside the Linux kernel. You can see these with netstat:

$ netstat -lx
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 988 /var/run/dbus/system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 29730 /run/user/1000/systemd/private
unix 2 [ ACC ] SEQPACKET LISTENING 357 /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 27233 /run/user/1000/keyring/control

It’s rather fascinating how they operate. The SOCK_STREAM socket type behaves like TCP with reliable delivery, and SOCK_DGRAM is similar to UDP, unordered and unreliable, but fast and low-overhead. You’ve heard how everything in Unix is a file? Instead of networking protocols and IP addresses and ports, Unix domain sockets use special files, which you can see in the above example. They have inodes, metadata, and permissions just like the regular files we use every day.
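
If you want to poke at a Unix domain socket yourself, socat makes it easy to
create a throwaway one (a sketch; the socket path is arbitrary):

$ socat UNIX-LISTEN:/tmp/demo.sock,fork - &
$ ls -l /tmp/demo.sock     # appears as a special file of type 's' with normal permissions
$ echo hello | socat - UNIX-CONNECT:/tmp/demo.sock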

If you want to dig more deeply there are a lot of excellent books. Or, you might start with man tcp and man 2 socket. Next week, we’ll look at network configurations, and whatever happened to IPv6?

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source

5 of the Best File Managers for Linux

One of the pieces of software you use daily is a file manager. A good file manager is essential to your work. If you are a Linux user and want to try out file managers other than the default one that comes with your system, below is a list of the best Linux file managers you will find.

What Is a File Manager?

Let’s start with a definition first to make sure we are on the same page. A file manager is a computer application with which you can access and manage the files and documents stored on your hard disk. In Windows this application is called Windows Explorer, and in macOS, Finder. In Linux there is no one standardized file manager application for all distributions. These are some of the best Linux file managers.

1. Nautilus


Nautilus, now renamed to GNOME Files, is the standard file manager of the GNOME desktop environment. Since GNOME is a very popular desktop environment, this automatically means Nautilus is also among the most used file managers. One of the key features of Nautilus is that it’s clean and simple to use, while still offering all the basic functionality of a file manager, as well as the ability to browse remote files. This is a file manager suitable for novices and everybody who values minimalism and simplicity. If the default functionality is too limiting for you, you can extend it with the help of plugins.

2. Dolphin


Dolphin File Manager is the KDE counterpart of Nautilus. Similarly to Nautilus, it is intended to be simple to use while also leaving room for customization. Split view and multitabs, as well as dockable panels, are among its core features. You can use Dolphin to browse both local and remote files across the network. For some operations Dolphin offers undo/redo functionality, which is pretty handy for those of us who have (too) quick fingers. If the default functionality of Dolphin is not enough, plugins come to the rescue.

3. Thunar


Thunar might not be as popular as Nautilus or Dolphin, but I personally like it more. It’s the file manager I use on a daily basis. Thunar is the default file manager for the Xfce Desktop Environment, but you can use it with other environments as well. Similarly to Nautilus and Dolphin, Thunar is lightweight, fast, and easy to use. For an old computer, Thunar is probably the best file manager. It is a relatively simple file manager without tons of fancy (and useless) features, but again, it has plugins to extend the default functionality, if this is needed.

4. Nemo


Nemo is a fork of Nautilus, and it’s the default file manager for the Cinnamon desktop environment. One of the special features of Nemo is that it has all the features of Nautilus 3.4 that have been removed in Nautilus 3.6, such as all desktop icons, compact view, etc., and tons of configuration options. Nemo also has useful features, such as open as root, open in terminal, show operation progress when copying/moving files, bookmark management, etc.

5. PCManFM


The last file manager for Linux on this list – PCManFM – has the very ambitious goal to replace Nautilus, Konqueror and Thunar. PCManFM is the standard file manager in LXDE (a desktop environment developed by the same team of developers), and it’s meant to be lightweight, yet fully functional. I don’t have much personal experience with this file manager, but from what I know, I can’t say it’s groundbreaking, breathtaking, etc. It does have the standard features a file manager offers, such as thumbnails, access to remote file systems, multitabs, drag and drop, etc., but I don’t think it has really outstanding features. Still, if you are curious, you can give it a try and see for yourself.

There are many more file managers for Linux I didn’t include because I don’t think they are as good as the ones listed. Some of these managers are Gentoo file manager, Konqueror, Krusader, GNOME Commander, Midnight Commander, etc. If the 5 file managers I reviewed are not what you like, you can give the rest a try, but don’t expect too much from them.


Source

Download GUPnP AV Linux 0.12.11

GUPnP AV is an open source and completely free library designed as part of the GUPnP framework, providing users with a collection of helpers for building audio and video applications using GUPnP.

What is GUPnP?

GUPnP is an object-oriented and open source framework designed especially for creating UPnP devices and control points, written in C using libsoup and GObject. The GUPnP API is intended to be easy to use, flexible and efficient.

The GUPnP framework was initially created because of the developers’ frustrations with the libupnp library and its mess of threads. Therefore, GUPnP is entirely single-threaded, it integrates with the GLib main loop, it’s asynchronous, and it offers the same set of features as libupnp.

Getting started with GUPnP AV

Installing the GUPnP AV project on a GNU/Linux computer is the easiest of tasks, as you will first have to download the latest version of the software from Softpedia or via its official website (see the homepage link at the end of the article), and save it on your PC, preferably somewhere in your Home folder.

Use an archive manager utility to extract the contents of the source package, open a terminal emulator application and navigate to the location of the extracted archive files (e.g. cd /home/softpedia/gupnp-av-0.12.7), where you will run the ‘./configure && make’ command to configure/optimize and compile the project.

Please note that you should first install the GUPnP program before attempting to install this tool. After a successful compilation, you can install GUPnP AV system wide and make it available to all users on your machine by running the ‘sudo make install’ command as a privileged user or the ‘make install’ command as root.
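
Put together, a typical build session looks something like this (the archive
name will vary with the version you download):

$ tar xf gupnp-av-0.12.11.tar.xz
$ cd gupnp-av-0.12.11
$ ./configure && make
$ sudo make install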

Under the hood

Taking a look under the hood of the GUPnP AV program, we can notice that it has been written in the Vala and C programming languages. It is currently supported on 32-bit and 64-bit computer platforms.


Source
