Amazon Cloud Directory now available in the AWS GovCloud (US-West) Region

Posted On: Nov 16, 2018

Amazon Cloud Directory is now available in the AWS GovCloud (US-West) Region, an isolated region designed to address specific regulatory and compliance requirements of US Government agencies, as well as contractors, educational institutions, and other US customers that run sensitive workloads in the cloud.

Cloud Directory is a high-performance, serverless, hierarchical datastore. Cloud Directory makes it easy for you to organize and manage your multi-dimensional data such as users, groups, locations, and devices and the rich relationships between them. With Cloud Directory, you can enable use cases such as human resources applications, identity applications including advanced authorization, course catalogs, device registry and network topology.

Please see all AWS Regions where Cloud Directory is available. To learn more about Cloud Directory, see Amazon Cloud Directory.

Source

Install Slack on Ubuntu | Linux Hint

In today’s world, good collaboration with your team members is quite important. Good team collaboration yields the best results in everything, especially in the professional sector. Slack is a very powerful platform for keeping track of all the tasks of your new startup or business. Slack is an all-in-one collaboration platform for teams and businesses of all sizes. It offers records of previous conversations, “channels” divvied up by teams, clients, and projects, and it puts a number of handy tools at your disposal. You can also connect services like Salesforce, JIRA, Zendesk, and even your proprietary software! Let’s check out how to enjoy Slack on Ubuntu.

Installing Slack

There are various ways of installing Slack on your system.

Method 1

Fire up a terminal window and run the following command –

sudo snap install slack --classic

Method 2

Get the latest DEB package of Slack.
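
For example, at the time of writing the package could be fetched with wget roughly like this (the version number and URL here are illustrative and will have changed; grab the current link from https://slack.com/downloads/linux):

wget https://downloads.slack-edge.com/linux_releases/slack-desktop-3.3.3-amd64.deb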

Now, run the following commands –

sudo dpkg -i slack-desktop-3.3.3-amd64.deb
sudo apt install -f

Using Slack

After the installation is complete, start Slack –

  • Creating a workspace

Let’s create a new workspace.

First, enter your email address.

Next, enter the confirmation code from your email.

Enter your full name.

Enter your password. Choose something strong.

Next, it’s time to choose your company name.

Now, choose your favorite Slack URL that will give you direct access to your Slack workspace.

Accept the “Terms and Conditions”.

You’re free to send invitations to whoever you like.

Voila! Your workspace is ready to enjoy!

This is the original window of your Slack desktop app.

  • Adding a channel

Channels are groups dedicated to one type of discussion, for example, “#programming” for coders only, “#testing” for discussion among program testers, and so on.

Click the “+” icon after the “Channels” title.

Fill in the information to create a new channel in your Slack workspace.

  • Integrating apps

On Slack, you are also free to add a number of additional online services from other providers, such as Google Drive, Dropbox, Asana, Bitbucket, GitHub, and Trello.

Let’s enjoy Google Drive on our Slack.

Click the “+” icon after the “Apps” title.

Clicking the “Install” button next to a listed app will redirect you to a browser.

From the browser, click “Install”.

Google Drive integration is complete! Now, you have to authenticate with your Google Drive account.

Enjoy!

Source

Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to blame, according to Phoronix

On November 4, Linux 4.20-rc1 was released with a host of notable changes, from AMD Vega 20 support getting squared away to AMD Picasso APU support, Intel 2.5G Ethernet support, the removal of Speck, and other new hardware support additions and software features. A release that was expected to improve the kernel’s performance did not succeed in doing so. On the contrary, the kernel is much slower compared to previous stable Linux kernel releases.

In a blog post released by Phoronix, Michael Larabel, the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org, discussed the results of some tests conducted on the kernel. He bisected the 4.20 kernel merge window to explore the reasons for the significant slowdowns in many real-world workloads.

The article attributes this degradation in performance to mitigations for the Spectre flaws in the processor. To mitigate the Spectre flaw, an intentional kernel change was made. The change is termed “STIBP”, for cross-hyperthread Spectre mitigation on Intel processors. Single Thread Indirect Branch Predictors (STIBP) prevents cross-hyperthread control of decisions that are made by indirect branch predictors. The STIBP addition in Linux 4.20 will affect systems that have up-to-date/available microcode with this support and where a user’s CPU has Hyper-Threading enabled/present.
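
If you want to check which Spectre V2 mitigations are active on your own machine, recent kernels expose this through sysfs; the exact output depends on your CPU, microcode, and kernel version:

cat /sys/devices/system/cpu/vulnerabilities/spectre_v2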

Performance issues in Linux 4.20

Michael has done a detailed analysis of the kernel performance and here are some of his findings.

  • Many synthetic and real-world tests showed that the Intel Core i9 performance was not up to the mark.
  • The Rodinia scientific OpenMP tests took 30% longer, the Java-based DaCapo tests took up to ~50% more time to complete, and the code compilation tests also took longer.
  • There was lower PostgreSQL database server performance and longer Blender3D rendering times. All this was noticed on Core i9 7960X and Core i9 7980XE test systems, while the AMD Threadripper 2990WX performance was unaffected by the Linux 4.20 upgrade.
  • The latest Linux kernel Git benchmarks also saw a significant pullback in performance from the early days of the Linux 4.20 merge window up through the very latest kernel code. The affected systems included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems.
  • The tests also found the Smallpt renderer to slow down significantly.
  • PHP performance took a major dive, and HMMer also faced a major setback compared to the current Linux 4.19 stable series.

What is surprising is that Linux 4.19 already contains mitigations against Spectre, Meltdown, Foreshadow, etc., yet 4.20 shows an additional performance drop on top of all the previously outlined performance hits this year. Throughout the testing, the AMD systems didn’t appear to be impacted. This means that if a user disables the Spectre V2 mitigations for better performance, the system’s security could be compromised.

You can head over to Phoronix for a complete analysis of the test outputs and more information on this news.

Source

Download Eye of GNOME Linux 3.31.1

Eye of GNOME is an open source application that allows users to view image files under open source, Linux-based operating systems. It is mostly used under the GNOME desktop environment, where it is called Image Viewer.

Features at a glance

Eye of GNOME can make use of EXIF information stored in digital camera images and display it on an optional sidebar that can be enabled from the View menu. It can read numerous image file formats, including ANI, BMP, GIF, ICO, JPEG, PCX, PNG, PNM, RAS, SVG, TGA, TIFF, WBMP, XBM, and XPM.

Basic image editing functions are displayed on the main toolbar, allowing users to rotate the current image 90 degrees to the left or right in incremental steps, as well as to flip the image horizontally or vertically. The changes can be saved.

Another interesting feature is the ability to import plugins, which add new functionality to the application, using the Preferences dialog. New plugins can be added by installing a binary package entitled Eye of GNOME Plugins.

Getting started with Eye of GNOME

If you use GNOME as your default desktop environment and you double-click an image file, it will (most probably) open in the Eye of GNOME application, whose name is usually shortened to EOG by the Linux community.

The program provides users with a very basic and uncluttered user interface, composed of the main toolbar and the status bar. Optionally, users can choose to view a sidebar or an image gallery that lets them access more photos from the current folder, as well as fullscreen and slideshow modes.

Availability and supported Linux OSes

The application is distributed as a standalone source package that can be configured, compiled, and installed in any desktop environment or operating system. While no distro-specific binary packages are provided here, users can install the program from the default software repositories of their Linux distro.
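
On most distributions the package is simply named eog, so it can usually be installed straight from the package manager; for example (package names can vary by distro):

sudo apt install eog    # Debian/Ubuntu
sudo dnf install eog    # Fedora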

Source

How to Install a Device Driver on Linux | Linux.com

…most default Linux drivers are open source and integrated into the system, which makes installing any drivers that are not included quite complicated, even though most hardware devices can be automatically detected.

To learn more about how Linux drivers work, I recommend reading An Introduction to Device Drivers in the book Linux Device Drivers.

Two approaches to finding drivers

1. User interfaces

If you are new to Linux and coming from the Windows or MacOS world, you’ll be glad to know that Linux offers ways to see whether a driver is available through wizard-like programs. Ubuntu offers the Additional Drivers option. Other Linux distributions provide helper programs, like Package Manager for GNOME, that you can use to check for available drivers.

2. Command line

What if you can’t find a driver through your nice user interface application? Or you only have access through the shell with no graphic interface whatsoever? Maybe you’ve even decided to expand your skills by using a console. You have two options:
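
The full article walks through those options; independently of it, a quick way to see what hardware is present and which kernel drivers are currently handling it is to use a few standard utilities (these are generic commands, not steps taken from the article):

lspci -k     # list PCI devices and the kernel driver/module in use for each
lsusb        # list USB devices
lsmod        # list currently loaded kernel modules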

Read more at OpenSource.com

Source

Primer on Yum Package Management Tool

The Yum package management tool is crucial to the management of Linux systems, whether you are a Linux systems admin or a power user. Different package management tools are available across different Linux distros, and the YUM package management tool is available on the RedHat and CentOS Linux distros. In the background, YUM (Yellowdog Updater, Modified) depends on RPM (the Red Hat Package Manager), and it was created to enable the management of packages as parts of a larger system of software repositories instead of as individual packages.

The configuration file for Yum is stored in the /etc/ directory, in a file named yum.conf. This file can be configured and tweaked to suit the needs of the system. Below is a sample of the contents of the yum.conf file:

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5

This configuration file could differ from what you get on your machine, but the configuration syntax follows the same rules. The repositories of packages that can be installed with Yum are usually defined in the /etc/yum.repos.d/ directory, with each *.repo file in the directory describing a repository of packages that can be installed.

The image below shows the structure of a CentOS base repository:
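
As a rough illustration (the exact contents on your machine will differ), a CentOS-Base.repo entry typically looks something like this:

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7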

YUM works in a pattern similar to all Linux commands, using the structure below:
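
yum [options] COMMAND [package ...]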

With the command structure above, you can carry out all necessary tasks with YUM. You can get help on how to use YUM with the --help option:
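
yum --help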

You should get a list of the commands and options that can be used with YUM.

For the rest of this article, we will complete a number of tasks with Yum: querying, installing, updating, and removing packages.

Querying packages with YUM

Let’s say you just got a job as a Linux system administrator at a company, and your first task is to install a couple of packages to help make your work easier, such as nmap, top, etc.

To proceed with this, you need to know about the packages and how well they will fit the computer’s needs.

Task 1: Getting information on a package

To get information on a package such as the package’s version, size, description etc, you need to use the info command.

As an example, the command below would give information on the httpd package:
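
yum info httpd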

Below is a snippet of the result from the command:

Name : httpd
Arch : x86_64
Version : 2.4.6
Release : 80.el7.centos.1

Task 2: Searching for existing packages

You will not always know the exact name of a package. Sometimes, all you will know is a keyword associated with the package. In these scenarios, you can easily search for packages with that keyword in the name or description using the search command.

The command below would give a list of packages that have the keyword “nginx” in them.
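
yum search nginx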

Below is a snippet of the result from the command:

collectd-nginx.x86_64 : Nginx plugin for collectd
munin-nginx.noarch : NGINX support for Munin resource monitoring
nextcloud-nginx.noarch : Nginx integration for NextCloud
nginx-all-modules.noarch : A meta package that installs all available Nginx module

Task 3: Querying a list of packages

There are lots of packages that are installed or are available for installation on the computer. In some cases, you would like to see a list of those packages to know what is available for installation.

There are three options for listing packages, as stated below:

yum list installed: lists the packages that are installed on the machine.

yum list available: lists all packages available to be installed from enabled repositories.

yum list all: lists all of the packages both installed and available.

Task 4: Getting package dependencies

Packages are rarely installed as standalone tools; they have dependencies which are essential to their functionality. With Yum, you can get a list of a package’s dependencies with the deplist command.

As an example, the command below fetches a list of httpd’s dependencies:
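
yum deplist httpd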

Below is a snippet of the result:

package: httpd.x86_64 2.4.6-80.el7.centos.1
dependency: /bin/sh
provider: bash.x86_64 4.2.46-30.el7
dependency: /etc/mime.types
provider: mailcap.noarch 2.1.41-2.el7
dependency: /usr/sbin/groupadd
provider: shadow-utils.x86_64 2:4.1.5.1-24.el7

Task 6: Getting information on package groups

So far in this article, we have been looking at individual packages. At this point, package groups will be introduced.

Package groups are collections of packages serving a common purpose. For example, if you want to set up your machine’s system tools, you do not have to install the packages separately; you can install them all at once as a package group.

You can get information on a package group using the groupinfo command and putting the group name in quotes.

yum groupinfo "group-name"

The command below would fetch information on the “Emacs” package group.
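
yum groupinfo "Emacs"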

Here is the information:

Group: Emacs
Group-Id: emacs
Description: The GNU Emacs extensible, customizable, text editor.
Mandatory Packages:
=emacs
Optional Packages:
ctags-etags
emacs-auctex
emacs-gnuplot
emacs-nox
emacs-php-mode

Task 7: Listing the available package groups

In the task above, we got information on the “Emacs” package group. With the grouplist command, you can get a list of the package groups that are available for installation.
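
yum grouplist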

The command above would list the available package groups. However, some package groups would not be displayed due to their hidden status. To get a list of all package groups, including the hidden ones, you add the hidden argument as seen below:
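
yum grouplist hidden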

Installing packages with YUM

We have looked at how packages can be queried with Yum. As a Linux system administrator, you will do more than query packages; you will also install them.

Task 8: Installing packages

Once you have the name of the package you would like to install, you can install it with the install command.

Example:
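
yum install nmap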

Task 9: Installing packages from .rpm files

While you would install most packages from a repository, in some cases you will be provided with *.rpm files to install. This can be done using the localinstall command. The localinstall command can be used to install *.rpm files whether they are available on the machine or in some external repository accessed by a link.

yum localinstall file-name.rpm

Task 10: Reinstalling packages

While working with configuration files, errors can occur, leaving packages and their config files messed up. The install command could do the job of correcting the mess; however, if there is a new version of the package in the repository, that would be the version installed, which isn’t what we want.

With the reinstall command, we can reinstall the current version of a package regardless of the latest version available in the repository.

yum reinstall package-name

Task 11: Installing package groups

Earlier, we looked into package groups and how to query them. Now we will see how to install them. Package groups can be installed using the groupinstall command and the name of the package group in quotes.

yum groupinstall "group-name"

Updating packages with YUM

Keeping your packages updated is key. Newer versions of packages often contain security patches, new features, the removal of discontinued features, and so on, so it is important to keep your computer as up to date as possible.

Task 12: Getting information on package updates

As a Linux system administrator, updates are crucial to maintaining the system. Therefore, there is a need to constantly check for package updates. You can check for updates with the updateinfo command.

There are lots of possible command combinations that can be used with updateinfo; however, we will use only the list installed combination.

yum updateinfo list installed

A snippet of the result can be seen below:

FEDORA-EPEL-2017-6667e7ab29 bugfix epel-release-7-11.noarch

FEDORA-EPEL-2016-0cc27c9cac bugfix lz4-1.7.3-1.el7.x86_64

FEDORA-EPEL-2015-0977 None/Sec. novnc-0.5.1-2.el7.noarch

Task 13: Updating all packages

Updating packages is as easy as using the update command. Using the update command alone would update all packages, but adding the package name would update only the indicated package.

yum update : to update all packages in the operating system

yum update httpd : to update the httpd package alone.

While the update command will update packages to their latest versions, it leaves behind obsolete packages that the new versions no longer need.

To remove the obsolete packages, we use the upgrade command.

yum upgrade : to update all packages in the operating system and delete obsolete packages.

The upgrade command is dangerous though, as it would remove obsolete packages even if you use them for other purposes.

Task 14: Downgrading packages

While it is important to keep up with the latest package updates, updates can be buggy. In a case where an update is buggy, the package can be downgraded to the previous, stable version. Downgrades are done with the downgrade command.

yum downgrade package-name

Removing packages with YUM

As a Linux system administrator, resources have to be managed. So while packages are installed for certain purposes, they should be removed when they are not needed anymore.

Task 15: Removing packages

The remove command is used to remove packages. Simply add the name of the package to be removed, and it would be uninstalled.
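
yum remove package-name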

While the command above would remove the package, it would leave behind its dependencies. To remove the dependencies too, the autoremove command is used. This removes the dependencies, configuration files, etc.

yum autoremove package-name

Task 16: Removing package groups

Earlier, we talked about installing package groups. It would be tiring to remove the packages individually when they are not needed anymore, so instead we remove the whole package group with the groupremove command.

yum groupremove "group-name"

Conclusion

The commands discussed in this article are just a small sample of the power of Yum. There are lots of other tasks that can be done with YUM, which you can check out on the official RHEL web page. However, the commands this article has discussed should be enough to get anybody started with regular Linux system administration tasks.

Source

Spectre Patches Whack Intel Performance Hard With Linux 4.20 Kernel

Integrating fixes for Spectre and Meltdown has been a long, slow process throughout 2018. We’ve seen new vulnerabilities popping up on a fairly regular cadence, with Intel and other vendors rolling out solutions as quickly as they can be developed. To date, most of these fixes haven’t had a significant impact on performance for ordinary users, but there are signs that new patches in the Linux 4.20 kernel can drag Intel performance down. The impact varies from test to test, but the gaps in some benchmarks are above 30 percent.

Phoronix has the details and test results. The Core i9-7980XE takes 1.28x longer in the Rodinia 2.4 heterogeneous compute benchmark suite. Performance in the DaCapo benchmark (V9.12-MR1) is a massive 1.5x worse. Not every test was impacted nearly this much, as there were other tests that showed regressions in the 5-8 percent range.

Image by Phoronix

Michael Larabel spent some time trying to tease apart the problem and where it had come from, initially suspecting that it might be a P-state bug or an unintended scheduler change. Neither was evident. The culprit is STIBP, or Single Thread Indirect Branch Predictors. According to Intel, there are three ways of mitigating branch target injection attacks (Spectre v2): Indirect Branch Restricted Speculation (IBRS), Single Thread Indirect Branch Predictors (STIBP), and Indirect Branch Predictor Barrier (IBPB). IBRS restricts speculation of indirect branches and carries the most severe performance penalty. STIBP is described as “Prevents indirect branch predictions from being controlled by the sibling Hyperthread.”

IBRS flushes the branch prediction cache between privilege levels and disables branch prediction on the sibling CPU thread. The STIBP fix, in contrast, only disables branch prediction on the HT core. The performance impact is variable, but in some cases it seems as though it would be less of a performance hit to simply disable Hyper-Threading altogether.

I would caution against reading these results as though they apply to Windows users. There are differences between the patches that have been deployed on Linux systems and their Windows counterparts. Microsoft recently announced, for example, that it will adopt the retpoline fix used in Linux for Spectre Variant 2 flaws, improving overall performance in certain workloads. There seem to be some significant performance impacts in the 4.20 kernel, but what I can’t find is a detailed breakdown of exactly whether these fixes are already in Windows or will be added. In short, it’s not clear if these changes to Linux performance have any implications at all for non-Linux software.

Larabel has also written a follow-up article comparing the performance of all Spectre / Meltdown mitigation patches on Intel hardware through the present day. The impact ranges from 2-8 percent in some tests to 25 – 35 percent in others. There’s conclusive evidence that the Linux 4.20 kernel impacts performance in applications where previous patches did not, and several tests where the combined performance impact is enough to put AMD ahead of Intel in tests Intel previously won. How much this will matter to server vendors is unclear; analysts have generally predicted that these security issues would help Intel’s sales figures as companies replace systems. The idea that these ongoing problems could push companies to adopt AMD hardware instead is rarely discussed and AMD has not suggested this is a major source of new customer business.

Source

Schedule One-Time Commands with the UNIX at Tool

Cron is nice and all, but don’t forget about its cousin at.

When I first started using Linux, it was like being tossed into the deep end
of the UNIX pool. You were expected to use the command line heavily along
with all the standard utilities and services that came with your
distribution. A lot has changed since then, and nowadays, you can use a
standard Linux desktop without ever having to open a terminal or use old
UNIX services. Even as a sysadmin, these days, you often are a few layers of
abstraction above some of these core services.

I say all of this to point out that for us old-timers, it’s easy to take for
granted that everyone around us innately knows about all the command-line
tools we use. Yet, even though I’ve been using Linux for 20 years, I
still learn about new (to me) command-line tools all the time. In this “Back
to Basics” article series, I plan to cover some of the command-line tools
that those new to Linux may never have used before. For those of you who are
more advanced, I’ll spread out this series, so you can expect future
articles to be more technical. In this article, I describe how to use
the at utility to schedule jobs to run at a later date.

at vs. Cron

at is one of those commands that isn’t discussed very much. When
people talk about scheduling commands, typically cron gets the most
coverage. Cron allows you to schedule commands to be run on a periodic
basis. With cron, you can run a command as frequently as every minute or as
seldom as once a day, week, month or even year. You also can define more
sophisticated rules, so commands run, for example, every five minutes, every
weekday, every other hour and many other combinations. System administrators sometimes
will use cron to schedule a local script to collect metrics every minute or
to schedule backups.
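
For comparison, a cron entry that runs a metrics-collection script every five minutes might look like the following line in a crontab (the script path here is just a placeholder):

*/5 * * * * /usr/local/bin/collect_metrics.sh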

On the other hand, although the at command also allows you to schedule
commands, it serves a completely different purpose from cron. While cron
lets you schedule commands to run periodically, at lets you schedule
commands that run only once at a particular time in the future. This
means that at fills a different and usually more immediate need than cron.

Using at

At one point, the at command came standard on most Linux
distributions, but
these days, even on servers, you may find yourself having to
install the at package explicitly. Once installed, the easiest
way to use at is to type
it on the command line followed by the time you want the command to run:

$ at 18:00

The at command also can accept a number of different time formats. For
instance, it understands AM and PM as well as words like “tomorrow”, so you
could replace the above command with the equivalent:

$ at 6pm

And, if you want to run the same command at that time tomorrow instead:

$ at 6pm tomorrow

Once you press enter, you’ll drop into an interactive shell:

$ at 6pm tomorrow
warning: commands will be executed using /bin/sh
at>

From the interactive shell, you can enter the command you want to run
at that time. If you want to run multiple commands, press enter after each
command and type the command on the new at> prompt. Once you’re done
entering commands, press Ctrl-D on an empty at> prompt to exit the
interactive shell.

For instance, let’s say I’ve noticed that a particular server has had
problems the past two days at 5:10am for around five minutes, and so far, I’m
not seeing anything in the logs. Although I could just wake up early and log
in to the server, instead I could write a short script that collects data
from ps, netstat, tcpdump and other
command-line tools for a few minutes, so
when I wake up, I can go over the data it collected. Since this is a one-off,
I don’t want to schedule something with cron and risk forgetting about it
and having it run every day, so this is how I would set it up with
at:

$ at 5:09am tomorrow
warning: commands will be executed using /bin/sh
at> /usr/local/bin/my_monitoring_script

Then I would press Ctrl-D, and the shell would exit with this output:

at> <EOT>
job 1 at Wed Sep 26 05:09:00 2018

Managing at Jobs

Once you have scheduled at jobs, it’s useful to be able to pull up a list of
all the at jobs in the queue, so you know what’s running and
when. The atq
command lists the current at queue:

$ atq
1 Wed Sep 26 05:09:00 2018 a kyle

The first column lists the number at assigned to each job and then lists the
time the job will be run and the user it will run as. Let’s say that in
the above example I realize I’ve made a mistake, because my script won’t be able
to run as a regular user. In that case, I would want to use the
atrm command
to remove job number 1:

$ atrm 1

If I were to run atq again, I would see that the job no longer exists.
Then I could sudo up to root and use the at command to schedule the job
again.

at One-Liners

Although at supports an interactive mode, you also can pipe commands to it all
on one line instead. So, for instance, I could schedule the above job with:

$ echo /usr/local/bin/my_monitoring_script | at 5:09am tomorrow

Conclusion

If you didn’t know that at existed, you might find yourself coming up with
all sorts of complicated and convoluted ways to schedule a one-off job. Even
worse, you might need to set an alarm clock so you can wake up extra early
and log in to a problem server. Of course, if you don’t have an alarm clock,
you could use at:

$ echo "aplay /home/kyle/alarm.wav" | at 7am tomorrow

Source

Open Source 2018: It Was the Best of Times, It Was the Worst of Times | Linux.com

Recently, IBM announced that it would be acquiring Red Hat for $34 billion, a more-than-60-percent premium over Red Hat’s market cap and a nearly 12x multiple on revenues. In many ways, this was a clear sign that 2018 was the year commercial open source arrived, if there was ever a question about it before.

Indeed, the Red Hat transaction is just the latest in a long line of multi-billion-dollar outcomes this year. To date, more than $50 billion has been exchanged in open source IPOs and mergers and acquisitions (M&A), and all of the M&A deals are considered “mega deals”, those valued over $5 billion.

  • IBM acquired Red Hat for $34 billion
  • Hortonworks’ $5.2 billion merger with Cloudera
  • Elasticsearch IPO – $4+ billion
  • Pivotal IPO – $3.9 billion
  • Mulesoft acquired by Salesforce – $6.5 billion

If you’re a current open source software (OSS) shareholder, it may feel like the best of times. However, if you’re an OSS user or an emerging open source project or company, you might be feeling more ambivalent.

On the positive side, the fact that there have been such good financial outcomes should come as encouragement to the many still-private and outstanding open-source businesses (e.g., Confluent, Docker, HashiCorp, InfluxDB). And, we can certainly hope that this round of exits will encourage more investors to bet on OSS, enabling OSS to continue to be a prime driver of innovation.

However, not all of the news is rosy.

First, since many of these exits were in the form of M&A, we’ve actually lost some prime examples of independent OSS companies. For many years, there was a concern that Red Hat was the only example of a public open source company. Earlier this year, it seemed likely that the total would grow to 7 (Red Hat, Hortonworks, Cloudera, Elasticsearch, Pivotal, Mulesoft, and MongoDB). Assuming the announced M&As close as expected, the number of public open source companies is back down to four, and the combined market cap of public open source companies is much less than it was at the start of the year.

We Need to Go Deeper

I think it’s critical that we view these open source outcomes in the context of another unavoidable story — the growth in cloud computing.

Many of the open source companies involved share an overlooked common denominator: they’ve made most of their money through on-premise businesses. This probably comes as a surprise, as we regularly hear about cloud-related milestones, like the one that states that more than 80% of server workloads are in the cloud, that open source drives ⅔ or more of cloud revenues, and that the cloud computing market is expected to reach $300 billion by 2021.

By contrast, the total revenues of all of the open source companies listed above were less than $7B. And almost all of the open source companies listed above have taken well over $200 million in investment each to build out direct sales and support in order to sell effectively to the large, on-premises enterprise market.


Open Source Driving Revenue, But for Whom?

The most common way that open source is used in the cloud is as a loss-leader to sell infrastructure. The largest cloud companies all offer free or near-free open source services that drive consumption of compute, networking, and storage.

To be clear, this is perfectly legal, and many of the cloud companies have contributed generously in both code and time to open source. However, the fact that it is difficult for OSS companies to monetize their own products with a hosted offering means that they are shut off from one of the most important and sustainable paths to scaling. Perhaps most importantly, OSS companies that are independent are largely closed off from the fastest growing segment of the computing market. Since there are only a handful of companies worldwide with the scale and capital to operate traditional public clouds (indeed, Amazon, Google, Microsoft, and Alibaba are among the largest companies on the planet), and those companies already control a disproportionate share of traffic, data, capital and talent, how can we ensure that investment, monetization, and innovation continue to flow in open source? And how can open source companies grow sustainably?

For some OSS companies, the answer is M&A. For others, the cloud monetization/competition question has led them to adopt controversial and more restrictive licensing policies, such as Redis Labs’ adoption of the Commons Clause and MongoDB’s Server Side Public License.

But there may be a different answer to cloud monetization. Namely, create a different kind of cloud, one based on decentralized infrastructure.

Rather than spending billions to build out data centers, decentralized infrastructure approaches (like Storj, SONM, and others) provide incentives for people around the world to contribute spare computing, storage or network capacity. For example, by fairly and transparently allowing storage node operators to share in the revenue generated (i.e., by compensating supply), Storj was able to rapidly grow to a network of 150,000 nodes in 180 countries with over 150 PB of capacity, equivalent to several large data centers. Similarly, rather than spending hundreds of millions on traditional sales and marketing, we believe there is a way to fairly and transparently compensate those who bring demand to the network, so we have programmatically designed our network so that open source companies whose projects send users our way can get fairly and transparently compensated proportional to the storage and network usage they generate. We are actively working to encourage other decentralized networks to do the same, and believe this is the future of open cloud computing.

This isn’t charity. Decentralized networks have strong economic incentives to compensate OSS as the primary driver of cloud demand. But, more importantly, we think that this can help drive a virtuous circle of investment, growth, monetization, and innovation. Done correctly, this will ensure that the best of times lie ahead!

Ben Golub is the former CEO of Docker and interim CEO at Storj Labs.

Watch the Open Source Summit keynote presentation from Ben Golub and Shawn Wilkinson to learn more about open source and the decentralized web.

Source

Cheat Sheet of Useful Commands Every Kali Linux User Needs To Know

This cheat sheet includes a list of basic and useful Linux commands that every Kali Linux user needs to know.

If you want to learn how to hack with Kali Linux, the most important thing you should do first is to master the command line interface.

Here’s why:

Tasks that take minutes or even hours to do on a desktop environment (GUI) can be done in a matter of seconds from the command line.

For example:

To download an entire HTML website, you only need to type:

wget -r domain.com

Now if you were to do the same on a GUI, you’d have to save each page one by one.

This is only one of many examples of how powerful the command line is. There are many other tasks on Linux that can only be done from the command line.

In short:

Knowing your way around a command line will make you a more efficient and effective programmer. You’ll be able to get shit done faster by automating repetitive tasks. ​

​Plus, you’ll look like a complete bad ass in the process.

Use this cheat sheet as a reference in case you forget how to do certain tasks from the command-line. And trust me, it happens.

If you’re new to Unix/Linux operating systems, this cheat sheet also includes the fundamental linux commands such as jumping from one directory to another, as well as more technical stuff like managing processes.

NOTES
Everything inside “<>” should be replaced with a name of a file, directory or command.

Bash = A popular command-line shell used in Unix/Linux operating systems.

dir = directory/folder
file = file name & type (eg. notes.txt)
cmd = command (eg. mkdir, ls, curl, etc)
location = path/destination (eg. /home/Desktop)

pwd: Display path of current directory you’re in

​ls: List all files and folders in the current directory
ls -la: List detailed list of files and folders, including hidden ones

Change to a specific directory

cd: Change to home directory
cd /user/Desktop: Change to a specific directory called Desktop
cd .. : Move back a directory

Create a directory/folder

mkdir <dir>: Create a new directory
mkdir /home/Desktop/dir: Create a directory in a specific location

Create and edit files

touch <file>: Create an empty file
nano <file>: Edit an existing file or create it if it doesn’t exist.
Alternatives to nano text editor: vim, emacs

Copy, move and rename files and directories

cp <file1> <file2>: Create a copy of a file
cp -r <dir1> <dir2>: Create a copy of a directory and everything in it
cp <file> /home/Desktop/file2: Create a copy of a file in a different directory and name it file2.

mv <file> /home/Desktop: Move a file to a specific directory (overwrites any existing file with the same name)
mv <dir> /home/Desktop: Move a directory to another location
mv <dir1> <dir2>: Rename a file OR directory (dir1 -> dir2)

Delete files

rm <file>: Delete a file
rm -f <file>: Force delete a file
Careful now..

rm -r <dir>: Delete a directory and its contents
rm -rf <dir>: Force delete a directory and its contents
Careful when using this command as it will delete everything inside the directory

Output and analyze files

cat <file>: Display/output the contents of a file
less <file>: Display the contents of a file with scroll (paginate) ability (press q to quit)

head <file>: Display the first ten lines in a file
head -20 <file>: Display the first 20 lines in a file
tail <file>: Display the last ten lines in a file
tail -20 <file>: Display the last 20 lines in a file

diff <file1> <file2>: Check the difference between two files (file1 and file2)

cal: Display monthly calendar

date: Check date and time
uptime: Check system uptime and currently logged in users

uname -a: Display system information.
dmesg: Display kernel ring buffer

poweroff: Shutdown system
reboot: Reboot system

View disk and memory usage

df -h: Display disk space usage
fdisk -l: List disk partition tables
free: Display memory usage

cat /proc/meminfo: Display memory information
cat /proc/cpuinfo: Display cpu information

View user information

whoami: Output your username
w: Check who’s online

history: View a list of your previously executed commands

View last logged in users and information

last: Display last login info of users
last <user>: Display last login info of a specific user

finger <user>: Display user information

Installing & Upgrading Packages

Search for packages

apt-cache pkgnames: List all available packages
apt search <name>: Search for a package and its description
apt show <name>: Check detailed description of a package

Install packages

apt-get install <name>: Install a package
apt-get install <name1> <name2>: Install multiple packages

Update, upgrade & cleanup

apt-get update: Update list of available packages
apt-get upgrade: Install the newest version of available packages
apt-get dist-upgrade: Upgrade packages, adding or removing dependencies as needed
apt-get autoremove: Remove installed packages that are no longer needed
apt-get clean: Free up disk space by removing archived packages

Delete packages

apt-get remove <name>: Uninstall a package
apt-get remove --purge <name>: Uninstall a package and remove its configuration files

Processes & Job Management

top: Display running processes & system usage in real-time.

ps: Display currently running processes
ps -u <user>: Display currently running processes of a user

kill <PID>: Kill a process by its PID #.
killall <process>: Kill all processes with the specified name.

Start, stop, resume jobs

jobs: Display the status of current jobs
jobs -l: Display detailed info about each job
jobs -r: Display only running jobs

bg: View stopped background jobs or resume job in the background
fg: Resume recent job in the foreground
fg <job>: Bring specific job to the foreground.

ping <host>: Ping a host
whois <domain/IP>: Get whois information about a domain or IP.
dig <domain/IP>: Get DNS information
nslookup <domain>: Get nameserver information

ifconfig: Configure/display network interfaces
iwconfig: Configure/display wireless network interfaces

netstat -r: Display kernel IP routing tables
netstat -antp: Check for established and listening ports/connections​

arp -a: Display ARP cache tables for all interfaces​

Secure File Transfer (SCP)

Transfer files FROM the local system TO a remote host (Local > Remote)
scp /path/to/file [email protected]:/path/to/dest

Transfer files FROM a remote host TO the local system (Remote > Local)
scp [email protected]:/path/to/file /path/to/dest

Transfer directories and everything within it
scp -r /path/to/dir [email protected]:/path/to/dest

Transfer all files that match a specific filetype
scp /path/to/*.txt [email protected]:/path/to/dest

Transfer local public SSH public key to remote host
cat ~/.ssh/id_rsa.pub | ssh [email protected] 'cat >> .ssh/authorized_keys'

Am I forgetting something? Let me know in the comments below. I’ll continue to update this when I get a chance.

Source
