An Introduction to Udev: The Linux Subsystem for Managing Device Events | Linux.com

Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it’s the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mice, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it’s well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in.

This article teaches you how to create a udev script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive.
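To give a flavor of what such a script hooks into: udev is driven by rule files, typically placed under /etc/udev/rules.d/. The rule below is a hypothetical sketch only; the vendor and serial values are invented, and you would look up your own drive's real attributes with udevadm info before writing anything like this:

```
# /etc/udev/rules.d/80-backup.rules -- illustrative sketch only.
# The idVendor/serial values are made up; discover your device's real
# attributes with: udevadm info -a /dev/sdX
ACTION=="add", SUBSYSTEM=="block", ATTRS{idVendor}=="abcd", \
  ATTRS{serial}=="EXAMPLE123", RUN+="/usr/local/bin/backup.sh"
```

Each `==` clause is a match condition, and `RUN+=` names a program to launch when all of them are satisfied, which is the mechanism the article builds on.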

A basic script

The best way to work with udev is in small chunks. Don’t write the entire script upfront…

Read more at OpenSource.com

Source

Red Hat Enterprise Linux 8 Beta [LWN.net]

[Posted November 15, 2018 by ris]

Red Hat Enterprise Linux 8 Beta

Red Hat has announced the release of RHEL 8 Beta. “Red Hat Enterprise Linux 8 Beta introduces the concept of Application Streams to deliver userspace packages more simply and with greater flexibility. Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system. Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream. This helps to deliver greater agility and user-customized versions of Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.”
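In practice, application streams are surfaced through the module subcommands of yum (now backed by dnf in RHEL 8). The sketch below is illustrative rather than definitive; the postgresql stream name and version are assumptions, and what is actually available varies by repository:

```
# List the streams available for a package (names/versions are illustrative)
yum module list postgresql

# Install a specific stream of that package
sudo yum module install postgresql:10

# Switch streams later: reset, then install the other version
sudo yum module reset postgresql
```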

Source

Use of the grep Command in Linux

What is grep?

The grep utility that we will be getting hold of today is a Unix tool that belongs to the same family as the egrep and fgrep utilities. These are all Unix tools designed for performing repetitive search tasks on your files and text. You can search files and their contents for useful information by specifying particular search criteria through the grep command.

GREP is said to stand for Global Regular Expression Print, but where does the command ‘grep’ originate from? grep basically derives from a specific command for the very simple and venerable Unix text editor named ed. The ed command goes like this:

g/re/p

The purpose of the command is pretty similar to what we mean by searching through grep. This command fetches all the lines in a file matching a certain text pattern.

Let us explore the grep command some more. In this article, we will explain the installation of the grep utility and present some examples through which you can learn exactly how and in which scenario you can use it.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.

Install grep

Although the grep utility comes by default with most Linux systems, if you do not have it installed on your system, here is the procedure:

Open your Ubuntu Terminal either through the Dash or the Ctrl+Alt+T shortcut. Then enter the following command in order to install grep through apt-get:

$ sudo apt-get install grep

Install grep command

Enter y when you are prompted with a y/n option during the installation procedure. After that, the grep utility will be installed on your system.

You can verify the installation by checking the grep version through the following command:

$ grep --version

Check grep command version

Use of the grep Command with Examples

The grep command can be best explained by presenting some scenarios where it can be made use of. Here are a few examples:

Search for Files

If you want to search for a filename that contains a specific keyword, you can filter your file list through the grep command as follows:

Syntax:

$ ls -l | grep -i "searchword"

Examples:

$ ls -l | grep -i sample

This command will list all the files in the current directory with the name of the file containing the word “sample”.

Search for files with grep

Search for a String in a File

You can fetch a sentence from a file that contains a specific string of text through the grep command.

Syntax:

$ grep "string" filename

Example:

$ grep "sample file" sampleFile.txt

Search for text in a file with grep

My sample file sampleFile.txt contains a sentence with the string “sample file”, as you can see in the above output. The keyword and string appear colored in the search results.

Search for a String in More Than One File

In case you want to search for sentences containing your text string in all the files of the same type, the grep command is at your service.

Syntax 1:

$ grep "string" filenameKeyword*

Syntax 2:

$ grep "string" *.extension

Example 1:

$ grep "sample file" sample*

Search for a String in More Than One File

This command will fetch all the sentences containing the string “sample file” from all the files with the filename containing the keyword “sample”.

Example 2:

$ grep "sample file" *.txt

Search for a String in More Than One File - Example 2

This command will fetch all the sentences containing the string “sample file” from all the files with .txt extension.
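Both forms can be tried end to end with a couple of throwaway files; the scratch directory and file contents below are invented for the demonstration:

```shell
# Stage throwaway files in a scratch directory (paths are arbitrary)
mkdir -p /tmp/grepdemo
printf 'this is a sample file\n'    > /tmp/grepdemo/sampleFile.txt
printf 'another sample file here\n' > /tmp/grepdemo/sampleNotes.txt
printf 'unrelated text\n'           > /tmp/grepdemo/other.txt

cd /tmp/grepdemo

# Syntax 1: search all files whose name contains the keyword "sample"
grep "sample file" sample*

# Syntax 2: search all files with the .txt extension
grep "sample file" *.txt
```

Because more than one file is searched, grep prefixes each matching line with the file it came from, which is how you tell the results apart.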

Search for a String in a File Without Taking the Case of the String into Account

In the above examples, my text string was luckily in the same case as that found in my sample text files. If I had entered the following command, my search result would be nil, because the text in my file does not contain the upper-case word “Sample”:

$ grep "Sample file" *.txt

Search with case sensitive string

Let us tell grep to ignore the case of the search string and print the search results based on the string through the -i option.

Syntax:

$ grep -i "string" filename

Example:

$ grep -i "Sample file" *.txt

Case insensitive search with grep command

This command will fetch all the sentences containing the string “sample file” from all the files with .txt extension. This will not take into account whether the search string was in upper or lower case.

Search on the Basis of a Regular Expression

Through the grep command, you can specify a regular expression with a start and end keyword. The output will be the sentence containing the entire expression between your specified starting and ending keyword. This feature is very powerful as you do not need to write an entire expression in the search command.

Syntax:

$ grep "startingKeyword.*endingKeyword" filename

Example:

$ grep "starting.*ending" sampleFile.txt

Use regular expressions in grep

This command will print the sentence containing the expression (starting from my startingKeyword and ending at my endingKeyword) from the file that I specified in the grep command.
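A quick, self-contained way to confirm the behavior; the file content below is invented for the demo:

```shell
# One line contains both keywords with arbitrary text between them
printf 'the starting point and the ending line\nno match on this line\n' > /tmp/regexdemo.txt

# .* matches everything between the two keywords on the same line
grep "starting.*ending" /tmp/regexdemo.txt
```

Only the first line is printed, since `.*` spans within a line and the second line contains neither keyword.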

Display a Specified Number of Lines After/Before the Search String

You can use the grep command to print N number of lines before/after a search string from a file. The search result also includes the line of text containing the search string.

The syntax for N number of lines after the key string:

$ grep -A <N> "string" filename

Example:

$ grep -A 3 -i "samplestring" sampleFile.txt

This is what my sample text file looks like:

sample text file

And this is what the output of the command looks like:

It displays the line containing the searched string, along with the 3 lines that follow it, from the file I specified in the grep command.

The syntax for N number of lines before the key string:

$ grep -B <N> "string" filename

You can also search for N number of lines ‘around’ a text string. That means N number of lines before and N after the text string.

The syntax for N number of lines around the key string:

$ grep -C <N> "string" filename
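All three context options can be compared side by side with a small numbered file; the content below is invented for the demonstration:

```shell
# A small numbered file makes the context lines easy to see
printf 'line 1\nline 2\nsamplestring here\nline 4\nline 5\n' > /tmp/ctxdemo.txt

grep -A 1 "samplestring" /tmp/ctxdemo.txt   # the match plus 1 line after it
grep -B 1 "samplestring" /tmp/ctxdemo.txt   # the match plus 1 line before it
grep -C 1 "samplestring" /tmp/ctxdemo.txt   # the match plus 1 line on each side
```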

Through the simple examples described in this article, you can get a grip on the grep command. You can then use it to filter search results, whether these are files or the contents of files. This saves a lot of the time that would otherwise be wasted skimming through entire search results.

Source

Adding Linux To A PDP-11

The UNIBUS architecture for DEC’s PDPs and Vaxxen was a stroke of genius. If you wanted more memory in your minicomputer, just add another card. Need a drive? Plug it into the backplane. Of course, with all those weird cards, these old UNIBUS PDPs are hard to keep running. The UniBone is the solution to this problem. It puts Linux on a UNIBUS bridge, allowing this card to serve as a memory emulator, a test console, a disk emulator, or any other hardware you can think of.

The key to this build is the BeagleBone, everyone’s second-favorite single board computer that has one feature the other one doesn’t: PRUs, or programmable real-time units, which allow you to blink a lot of pins very, very fast. We’ve seen the BeagleBone used as Linux in a terminal, as the rest of the computer for an old PDP-10 front panel and as the front end for a PDP-11/03.

In this build, the Beaglebone’s PRU takes care of interfacing to the UNIBUS backplane, sending everything to a device emulator running as an application. The UniBone can be configured as memory or something boring, but one of these can emulate four RL02 drives, giving a PDP-11 an amazing forty megabytes of storage. The real killer app of this implementation is giving these emulated drives a full complement of glowing buttons for load, ready, fault, and write protect, just like the front of a real RL02 drive. This panel is controlled over the I2C bus on the Beaglebone, and it’s a work of art. Of course, emulating the drive means you can’t use it as the world’s largest thumb drive, but that’s a small price to pay for saving these old computers.

Source

Wow! Ubuntu 18.04 LTS is Getting 10 Years of Support (Instead of 5)

Last updated November 16, 2018

The long-term support (LTS) releases of Ubuntu used to get support for five years. This is changing now. Ubuntu 18.04 will now be supported for ten years. Other LTS releases might also get extended support.

Ubuntu’s founder Mark Shuttleworth announced this news in a keynote at OpenStack Summit in Berlin.

I’m delighted to announce that Ubuntu 18.04 will be supported for a full 10 years.

Ubuntu 18.04 will get 10 years support

A Move to Lead the Internet of Things (IoT)

We are living in a ‘connected world’. Smart devices everywhere are connected to the internet, and they are not limited to just smartphones: toys, cameras, TVs, refrigerators, microwaves, weighing scales, electric bulbs and what not.

Collectively, these devices are called the Internet of Things (IoT), and Ubuntu is focusing heavily on it.

The 10-years support announcement for Ubuntu 18.04 is driven by the needs of the IoT market.

…in some of industries like financial services and telecommunications but also from IoT where manufacturing lines for example are being deployed that will be in production for at least a decade.

Ubuntu 16.04, scheduled to reach its end of life in April 2021, will also be given a longer support life span.

What is not clear to me at this moment is whether the extended support is free of cost and, if it is, whether it will be available to all users, including desktop ones.

Ubuntu has an Extended Security Maintenance (ESM) option for its corporate customers. With ESM, the customers get security fixes for the kernel and essential packages for a few more years even after the end of life of a certain LTS release.

Of course, ESM is a paid feature and it is one of the many ways Canonical, the company behind Ubuntu, generates revenue.

At the moment, it is not clear if the ten years support is for everyone or if it will be a paid service under Extended Security Maintenance. I have contacted Ubuntu for a clarification and I’ll update this article if I get an answer.

Ubuntu is not for sale…yet

After IBM bought Red Hat for $34 billion, people have started wondering if Ubuntu will be sold to a big player like Microsoft.

Shuttleworth has clarified that he has no plans of selling Ubuntu anytime soon. However, he also said, somewhat ambiguously, that he might consider it if it’s a gigantic offer and if he is left in charge of Canonical and Ubuntu to realize his vision.

Source

FOSS Project Spotlight: BlueK8s | Linux Journal

Deploying and managing complex stateful applications on Kubernetes.

Kubernetes (aka K8s) is now the de facto container orchestration
framework. Like other popular open-source technologies, Kubernetes has
amassed a considerable ecosystem of complementary tools to address
everything from storage to security. And although it was first created for
running stateless applications, more and more organizations are
interested in using Kubernetes for stateful applications.

However, while Kubernetes has advanced significantly in many areas during the past couple of years, there still are considerable gaps when it comes to
running complex stateful applications. It remains challenging to deploy
and manage distributed stateful applications consisting of a multitude of
co-operating services (such as for use cases with large-scale analytics and
machine learning) with Kubernetes.

I’ve been focused on this space for the past several years as a
co-founder of BlueData. During that time, I’ve worked with many teams
at Global 2000 enterprises in several industries to deploy
distributed stateful services successfully, such as Hadoop, Spark, Kafka, Cassandra, TensorFlow and other analytics, data science, machine learning (ML) and deep learning (DL) tools in containerized environments.

In that time, I’ve learned what it takes to deploy complex stateful
applications like these with containers while ensuring enterprise-grade
security, reliability and performance. Together with my colleagues at
BlueData, we’ve broken new ground in using Docker containers for big
data analytics, data science and ML/DL in highly distributed
environments. We’ve developed new innovations to address
requirements in areas like storage, security, networking, performance and
lifecycle management.

Now we want to bring those innovations to the Open Source community—to ensure that these stateful services are supported in the Kubernetes
ecosystem. BlueData’s engineering team has been busy working with
Kubernetes, developing prototypes with Kubernetes in our labs and
collaborating with multiple enterprise organizations to evaluate the
opportunities (and challenges) in using Kubernetes for complex stateful
applications.

To that end, we recently introduced a new Kubernetes open-source
initiative: BlueK8s. The BlueK8s initiative will be composed of several
open-source projects that each will bring enterprise-level capabilities for
stateful applications to Kubernetes.

Kubernetes Director (or KubeDirector for short) is the first open-source project in this initiative. KubeDirector is a custom controller
designed to simplify and streamline the packaging, deployment and
management of complex distributed stateful applications for big data
analytics and AI/ML/DL use cases.

Of course, other existing open-source projects address
various requirements for both stateful and stateless applications. The
Kubernetes Operator framework, for instance, manages the lifecycle of a
particular application, providing a useful resource for building and
deploying application-specific Operators. This is achieved through the
creation of a simple finite state machine, commonly known as a
reconciliation loop:

  • Observe: determine the current state of the application.
  • Analyze: compare the current state of the application with the expected
    state of the application.
  • Act: take the necessary steps to make the running state of the
    application match its expected state.
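The three steps above can be sketched in a few lines of shell; the two files below are toy stand-ins for the spec and status a real controller would read from the Kubernetes API server:

```shell
#!/bin/sh
# Toy reconciliation loop. /tmp/desired and /tmp/current stand in for the
# declared spec and observed status of an application.
echo 3 > /tmp/desired
echo 0 > /tmp/current

for tick in 1 2 3; do
  desired=$(cat /tmp/desired)              # Observe: the expected state
  current=$(cat /tmp/current)              # Observe: the running state
  if [ "$current" != "$desired" ]; then    # Analyze: do they differ?
    echo "$desired" > /tmp/current         # Act: drive current toward desired
  fi
done

cat /tmp/current
```

A real Operator's loop is event-driven rather than polling on a timer, but the observe/analyze/act shape is the same.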


Figure 1. Reconciliation Loop

It’s pretty straightforward to use a Kubernetes Operator to manage a
cloud native stateless application, but that’s not the case for all
applications. Most applications for big data analytics, data science and
AI/ML/DL are not implemented in a cloud native architecture. And, these
applications often are stateful. In addition, a distributed data pipeline
generally consists of a variety of different services that all have
different characteristics and configuration requirements.

As a result, you can’t easily decompose these applications into
self-sufficient and containerizable microservices. And, these applications
are often a mishmash of tightly integrated processes with complex
interdependencies, whose state is distributed across multiple configuration
files. So it’d be challenging to create, deploy and integrate an
application-specific Operator for each possible configuration.

The KubeDirector project is aimed at solving this very problem. Built upon
the Kubernetes custom resource definition (CRD) framework, KubeDirector
does the following:

  • It employs the native Kubernetes API extensions, design philosophy and
    authentication.
  • It requires a minimal learning curve for developers who have experience
    with Kubernetes.
  • It is not necessary to decompose an existing application to fit
    microservices patterns.
  • It provides native support for preserving application configuration and
    state.
  • It follows an application-agnostic deployment pattern, reducing the time to
    onboard stateful applications to Kubernetes.
  • It is application-neutral, supporting many applications simultaneously via
    application-specific instructions specified in YAML format configuration
    files.
  • It supports the management of distributed data pipelines consisting of
    multiple applications, such as Spark, Kafka, Hadoop, Cassandra, TensorFlow
    and so on, including a variety of related tools for data science,
    ML/DL, business intelligence, ETL, analytics and visualization.

KubeDirector makes it unnecessary to create and implement multiple
Kubernetes Operators in order to manage a cluster composed of multiple
complex stateful applications. You simply can use KubeDirector to manage
the entire cluster. All communication with KubeDirector is performed via
kubectl commands. The anticipated state of a cluster is submitted as a
request to the API server and stored in the Kubernetes etcd database.
KubeDirector will apply the necessary application-specific workflows to
change the current state of the cluster into the expected state of the
cluster. Different workflows can be specified for each application type, as
illustrated in Figure 2, which shows a simple
example (using KubeDirector to deploy and manage containerized Hadoop and
Spark application clusters).


Figure 2. Using KubeDirector to Deploy and Manage Containerized
Hadoop and Spark Application Clusters

If you’re interested, we’d love for you to join the growing
community of KubeDirector developers, contributors and adopters. The
initial pre-alpha version of KubeDirector was recently released
at https://github.com/bluek8s/kubedirector. For an architecture overview,
refer to the GitHub project wiki. You can also read more about how it
works in this technical blog post on the Kubernetes site.

Source

AWS Systems Manager Now Supports Multi-Account and Multi-Region Inventory View

Posted On: Nov 15, 2018

AWS Systems Manager, which provides information about your instances and the software installed on them, now supports a multi-account, multi-Region view. With this enhancement, you can simplify your workflow by centrally viewing, storing, and exporting inventory data across your accounts from a single console.

From the Systems Manager console, you can further customize the data displayed by using a pre-defined set of queries. From the same screen, you can easily download all of your data as a CSV file to generate reports on fleet patch compliance, installed applications, or network configuration. Additionally, integration with AWS Glue and Amazon Athena allows you to consolidate inventory data from multiple accounts and regions.

Source

Shotcut Video Editor Adds VA-API Encoding Support For Linux, Other Improvements

Shotcut video editor

Shotcut, a free and open source video editor, was updated to version 18.11.13 yesterday. The new release includes VA-API encoding support on Linux, as well as a new option to use hardware encoder in the export screen, among other improvements.

Shotcut is a free video editor for Linux, macOS and Windows. It includes a wide range of functions, from editing features like trimming, cutting, copying and pasting, to video effects or audio features like peak meter, loudness, waveform, volume control, audio filters, and so on.

There’s much more that Shotcut can do: it can edit 4K videos, capture audio, stream over the network, and so on. See its features page for an in-depth list.

The application, which uses Qt5 and makes use of the MLT Multimedia Framework, supports a wide range of formats thanks to FFmpeg, and it features an intuitive interface with multiple dockable panels.

The latest Shotcut 18.11.13 adds VA-API encoding support for Linux (H.264/AVC and H.265/HEVC codecs). To enable this, you can use a newly added Use hardware encoder checkbox from the Export Video panel, then click Configure and select h264_vaapi or hevc_vaapi:

Shotcut use hardware encoder
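Since Shotcut delegates encoding to FFmpeg, a roughly equivalent standalone command looks like the sketch below; the filenames are invented and the render-node path shown is the common default, which may differ on your system:

```
# Rough FFmpeg equivalent of a VA-API H.264 export (filenames invented)
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
  -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4
```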

Another change in this version of Shotcut is the addition of a New Project / Recent Projects screen, which is displayed when creating a new project (File > New):

Shotcut new project screen

The update also brings a simple / advanced export mode. When you export a video (File > Export Video), you’ll now see a simplified panel which lets you enable and configure the use of a hardware encoder, a message that explains the defaults (which are suitable for most users and purposes), as well as the presets. A new Advanced button was added at the bottom, which lets users specify video settings like resolution, frame rate, codecs, and so on.

Other changes worth mentioning in Shotcut 18.11.13 include:

  • Added 10 and 20 Pixel Grid options to the player grid button menu
  • Added View > Scopes > Video Waveform
  • Added Settings > Video Mode > Non-Broadcast > Square 1080p 30 fps and 60 fps
  • Added Ut Video export presets
  • Added Spot Remover video filter
  • Increased Scale maximum to 500% for Rotate and Scale filter
  • Made GPU Effects hidden and discouraged
  • macOS: added videotoolbox encoding, signed app bundle and fixed support for macOS 10.10 and 10.11
  • Fixed issues like hanging on exit, crash when undoing split and transition on Timeline, etc.

You can see a complete list of changes here.

Download Shotcut video editor

On the Shotcut download page you’ll find macOS, Linux and Windows binaries. For Linux there are official AppImage and portable tar binaries, as well as links to the Shotcut Flathub and Snapcraft pages (from where you can install the app as a Flatpak or Snap package). The Flatpak package has not yet been updated to the latest Shotcut 18.11.13 though.


Source

Getting Started with Scilab | Linux Journal

Introducing one of the larger scientific lab packages for Linux.

Scilab
is meant to be an overall package for numerical science, along the
lines of Maple, Matlab or Mathematica. Although a lot of built-in
functionality exists for all sorts of scientific computations, Scilab
also includes its own programming language, which allows you to use that functionality
to its utmost. If you prefer, you instead can use this language to extend
Scilab’s functionality into completely new areas of research. Some of
the functionality includes 2D and 3D visualization and optimization tools,
as well as statistical functions. Also included in Scilab is Xcos, an
editor for
designing dynamical systems models.

Several options exist for installing Scilab on your system. Most package
management systems should have one or more packages available for
Scilab, which also will install several support packages. Or, you
simply can download and install a tarball that contains
everything you need to be able to run Scilab on your system.

Once
it’s installed, start the GUI version of Scilab with
the scilab command. If you installed Scilab via tarball, this command will
be located in the bin subdirectory where you unpacked the tarball.

When
it first starts, you should see a full workspace created for your
project.


Figure 1. When you first start Scilab, you’ll see an empty
workspace ready for you to start a new project.

On the left-hand side is a file browser where you can see data
files and Scilab scripts. The right-hand side has several
panes. The top pane is a variable browser, where you can see what
currently exists within the workspace. The middle pane contains a
list of commands within that workspace, and the bottom pane has
a news feed of Scilab-related news. The center of the workspace is the
actual Scilab console where you can interact with the execution engine.

Let’s start with some basic mathematics—for example,
division:

--> 23/7
ans =

3.2857143

As you can see, the command prompt is -->, where you enter the
next command to the execution engine. In the variable browser, you
can see a new variable named ans that contains the results of the
calculation.

Along with basic arithmetic, there are also a number of built-in functions. One thing to be aware of is that these function names are
case-sensitive. For example, the statement sqrt(9) gives the answer
of 3, whereas the statement SQRT(9) returns an error.

There
also are built-in constants for numbers like e or pi. You can use them
in statements, like this command to find the sine of pi/2:

--> sin(%pi / 2)
ans =

1.

If you don’t remember exactly what a function name is, but you remember how
it starts, you can use the tab-completion functionality in the Scilab
console. For example, you can see what functions start with “fa” by
typing those two letters and then pressing the tab key.


Figure 2. Use tab-completion to avoid typos while typing
commands in the Scilab console.

You can assign variables with the “=” symbol. For example,
assign your age to the age variable with:

--> age = 47
age =

47.

You then can access this variable directly:

--> age
age =

47.

The variable also will be visible in the variable browser pane. Accessing
variables this way basically executes the variable, which is also why you
can
get extra output. If you want to see only the value, use
the disp() function, which provides output like the following:

--> disp(age)

47.

Before moving onto more complex ideas, you’ll need to move out of the
console. The advantage of the console is that statements are executed
immediately. But, that’s also its disadvantage. To write
larger pieces of code, you’ll want to use the included editor. Click
the Applications→SciNotes menu item to open a new window where
you can enter larger programs.


Figure 3. The SciNotes application lets you write larger programs
and then run them within Scilab as a single unit.

Once you’ve finished writing your code, you can run it either by clicking
the run icon on the toolbar or selecting one of the options under the
Execute menu item. When you do this, SciNotes will ask you to save
your code to a file, with the file ending “.sce”, before running. Then,
it gets the console to run this file with the following command:

exec('/home/jbernard/temp/scilab-6.0.1/bin/test1.sce', -1)

If you create or receive a Scilab file outside of Scilab, you can run it
yourself using a similar command.
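From a shell, a .sce file can also be run in batch mode. The scilab-cli launcher (or scilab -nw) ships with recent versions; the exact launcher name and flags can depend on how Scilab was installed, so treat this as a sketch:

```
# Run a script without the GUI and exit when it finishes
scilab-cli -f /home/jbernard/temp/scilab-6.0.1/bin/test1.sce -quit
```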

To build more complex calculations, you also need a way to
make comparisons and loop over several calculations. Comparisons
can be done with either:

if …. then
stmts
end

or:

if …. then
stmts
else
stmts
end

or:

if …. then
stmts
elseif …. then
stmts
else
stmts
end

As you can see, the if and elseif lines need to
end with then. You can
have as many elseif sections as you need for your particular case. Also,
note that the entire comparison block needs to end with the
end statement.

There
also are two types of looping commands: for loops and
while loops. As
an example, you could use the following to find the square roots
of the first 100 numbers:

for i=1:100
a = sqrt(i)
disp(a)
end

The for loop takes a sequence of numbers, defined by
start:end,
and each value is iteratively assigned to the dummy variable i. Then
you have your code block within the for loop and close it with the
statement end.

The while loop is similar, except it uses a comparison
statement to decide when to exit the loop.

The last quick item I want to cover is the graphing functionality
available within Scilab. You can create both 2D and 3D graphs,
and you can plot data files or the results of
functions. For example, the following plots the sine function
from 0 to pi*4:

t = linspace(0, 4 * %pi, 100)
plot(t, sin(t))

Figure 4. Calling the plot function opens a new viewing
window where you can see the generated graphs.

You can use the linspace command to generate the list of values over
which the function will be executed. The plot function opens a new
window to display the resultant graph. Use the commands under
the Edit menu item to change the plot’s details before saving the
results to an image file.

You can do 3D graphs just as simply. The
following plots a parametric curve over 0 to 4*pi:

t = linspace(0, 4 * %pi, 100);
param3d(cos(t), sin(t), t)

This also opens a new plotting window to display the results. If
the default view isn’t appropriate, click
Tools→2D/3D Rotation, and with this selected, right-click
on the graph and rotate it around for a better view of
the result.

Scilab is a very powerful tool for many types of
computational science. Since it’s available on Linux, macOS and
Windows, it’s a great option if you’re collaborating with other
people across multiple operating systems. It might also prove to be an
effective tool to use in teaching environments, giving students
access to a powerful computational platform for no cost, no matter
what type of computer they are using. I hope this short article has
provided some ideas of how it might be useful to you. I’ve
barely covered the many capabilities available
with Scilab, so be sure to visit the main
website
for a number of good tutorials.

Source
