Storing Bash command output in a variable

Different types of bash commands need to be run from the terminal based on the user’s requirements. When the user runs a command from the terminal, it shows the output if no error exists; otherwise it shows an error message. Sometimes, the output of a command needs to be stored in a variable for later use. The command substitution feature of bash can be used for this purpose. This tutorial shows how to store the output of different types of shell commands in a variable using this feature.

variable=$(command)
variable=$(command [option…] argument1 argument2 …)
variable=$(/path/to/command)

OR

variable=`command`
variable=`command [option…] argument1 argument2 …`
variable=`/path/to/command`

***Note: Don’t use any space before or after the equal sign in the above assignments.
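For instance, a minimal sketch of what happens when spaces are added around the equal sign, compared with the correct form:

$ current_date = $(date)    # wrong: bash treats current_date as a command name
$ current_date=$(date)      # correct: no spaces around =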

Single command output to a variable

Bash commands can be used without any option or argument when those parts are optional. The following two examples show simple uses of command substitution.

Example#1:

The bash `date` command shows the current date and time. The following commands store the output of the `date` command in the $current_date variable by using command substitution.

$ current_date=$(date)
$ echo "Today is $current_date"

Output:

Example#2:

The `pwd` command shows the path of the current working directory. The following commands store the output of the `pwd` command in the variable $current_dir, and the value of this variable is printed with the `echo` command.

$ current_dir=`pwd`
$ echo "The current directory is : $current_dir"

Output:

Command with option and argument

Options and arguments are mandatory for some bash commands. The following examples show how to store the output of a command with an option and an argument in a variable.

Example#3:

Bash `wc` command is used to count the total number of lines, words, and characters of any file. This command uses -c, -w, and -l as options and a filename as the argument to generate the output. Create a text file named fruits.txt with the following data to test the next script.

fruits.txt
Mango
Orange
Banana
Grape
Guava
Apple

Run the following commands to count the total number of words in the fruits.txt file, store the count in the variable $count_words, and print the value with the `echo` command.

$ count_words=`wc -w fruits.txt`
$ echo "Total words in fruits.txt is $count_words"

Output:
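Note that `wc -w fruits.txt` prints the file name along with the count, so $count_words contains both. A minimal sketch of one way to capture only the number is to feed the file through standard input instead:

$ count_words=`wc -w < fruits.txt`
$ echo "Total words in fruits.txt is $count_words"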

Example#4:

`cut` is another bash command that uses an option and an argument to generate the output. Create a text file named weekday.txt with the names of the seven weekdays to run the next script.

weekday.txt

Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday

Create a bash file named cmdsub1.sh with the following script. In this script, a while loop is used to read the content of the weekday.txt file line by line, and the first three characters of each line are extracted by using the `cut` command. The extracted string is stored in the variable $day. Next, an if statement checks whether the value of $day is ‘Sun’ or not. The output will print ‘Sunday is the holiday’ when the condition is true; otherwise it will print the value of $day.

cmdsub1.sh

#!/bin/bash
filename='weekday.txt'
while read line; do
    day=`echo $line | cut -c 1-3`
    if [ $day == "Sun" ]
    then
        echo "Sunday is the holiday"
    else
        echo $day
    fi
done < $filename

Run the script.

$ cat weekday.txt
$ bash cmdsub1.sh

Output:

Using command substitution in loop

You can store the output of command substitution into any loop variable which is shown in the next example.

Example#5:

Create a file named cmdsub2.sh with the following code. Here, the `ls -d */` command is used to retrieve the list of all directories in the current directory. A for loop is used to read each directory name from the output and store it in the variable $dirname, which is then printed.

cmdsub2.sh

#!/bin/bash
for dirname in $(ls -d */)
do
    echo "$dirname"
done

Run the script.

Output:
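Keep in mind that looping over the output of `ls` splits on whitespace, so directory names containing spaces will be broken apart. A minimal sketch of a more robust variant lets the shell expand the glob directly instead of substituting a command:

#!/bin/bash
for dirname in */
do
    echo "$dirname"
done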

Using nested commands

An earlier example showed how multiple commands can be combined with a pipe (|). You can also use nested commands in command substitution, where the output of the inner command is consumed by the outer command, which works in the opposite direction of a pipe (|).

Nested command syntax:

var=`command1 \`command2\``
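Backticks only nest if the inner pair is escaped with backslashes, which quickly becomes hard to read. The $() form nests without any escaping, as in this minimal sketch:

var=$(command1 $(command2))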

Example#6:

Two commands, `echo` and `who`, are used as nested commands in this example. The inner backticks are escaped with backslashes so that the `who` command executes first and prints the information of the currently logged-in user. The output of `who` is then passed to the `echo` command, and the output of `echo` is stored in the variable $var. So, the output of the `echo` command depends on the output of the `who` command.

$ var=`echo \`who\``
$ echo $var

Output:

Using Command path

If you know the path of a command, you can run it by specifying the full command path in the command substitution. The following example shows the use of a command path.

Example#7:

The `whoami` command shows the username of the currently logged-in user. By default, this command is stored in the /usr/bin/ folder. Run the following commands to run the `whoami` command using its path, store the output in the variable $output, and print the value of $output.

$ output=$(/usr/bin/whoami)
$ echo $output

Output:
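If you are unsure of a command’s path, bash can report it. A minimal sketch using the shell built-in `type -p` (the path shown may differ on your system):

$ type -p whoami
/usr/bin/whoami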

Using Command Line argument

You can also use a command-line argument of the script as the argument of the command inside command substitution.

Example#8:

Create a bash file named cmdsub3.sh with the following script. The `basename` command is used here to retrieve the file name from the first command-line argument, $1, and store it in the variable $filename. Remember that $0 denotes the name of the executing script, so the arguments you pass start at $1.

#!/bin/bash
filename=`basename $1`
echo "The name of the file is $filename."

Run the script with the following argument value.

$ bash cmdsub3.sh Desktop/temp/hello.txt

Here, the basename of the path Desktop/temp/hello.txt is ‘hello.txt’. So, the value of $filename will be hello.txt.

Output:
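If the supplied path may contain spaces, it is safer to quote the argument. A minimal sketch of the same script with quoting added:

#!/bin/bash
filename=`basename "$1"`
echo "The name of the file is $filename."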

Conclusion:

Various uses of command substitution are shown in this tutorial. If you need to work with multiple or dependent commands and store their results temporarily for later tasks, then you can use this feature in your scripts.

Source

Mark Shuttleworth is not selling Canonical or Ubuntu — yet

At OpenStack Summit in Berlin, Mark Shuttleworth, founder of Canonical and Ubuntu, said in his keynote the question he gets asked the most is “What does he make of IBM buying Red Hat?” His reply is that IBM had spent too much, but with the growth of the cloud it would probably work out for them.

Actually, the question most of us wanted him to answer is: “After IBM paid a cool $34-billion, would he consider selling Canonical?” After all, Canonical is also a top Linux company, with arguably a much stronger cloud and container presence than Red Hat. By The Cloud Market’s latest count of Amazon Web Services (AWS) instances, Ubuntu dominates with 307,217 instances to Red Hat’s 20,311. Even so, in a show floor conversation, Shuttleworth said, “No, I value my independence.”

That’s not to say he’s not willing to listen to proposals. But he has his own vision for Canonical and Ubuntu Linux. If someone were to make him an offer, which would leave him in charge of both and help him further his plans, then he might go for it. Maybe.

It would have to be a heck of an offer though, even by post-Red Hat acquisition terms. Shuttleworth doesn’t need the money. What he wants is to make his mark in technology history.

Of course, that requires money. But he told me that Canonical has been slowly but surely winning over former Red Hat customers. In his keynote, Shuttleworth said the company has been winning many telecom customers and that now five out of the top twenty-five banks are using Ubuntu. Specifically, he mentioned, AT&T, CenturyLink, Deutsche Telekom, NTT Docomo, SoftBank, and Walmart as Canonical customers.

Clearly, Canonical isn’t hurting for cash. In any case, Shuttleworth still plans on a Canonical Initial Public Offering (IPO) in 2019.

So, for now Canonical, under Shuttleworth’s firm hand, will continue to go its own way.

Source

Download PHP Linux 7.2.12 / 7.3.0 RC5

PHP is an open source software project, the most popular general-purpose scripting language crafted especially for web development. In theory, PHP is a hypertext preprocessor, but it’s actually a fast, pragmatic and flexible server-side programming language that helps you create powerful websites.

Can be embedded into HTML

While a skilled web developer can easily embed PHP into HTML, it can also be used as a standalone executable. Its syntax draws upon C, Java, and Perl, and it is easy to learn if you have previously worked with any of these programming languages.
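For example, a minimal sketch of using the standalone php command-line binary, assuming the CLI package is installed:

$ php -r 'echo "Hello from PHP ", PHP_VERSION, "\n";'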

Supports XML, IMAP, Java and LDAP

Being designed from the offset to be a universal web programming language, PHP offers support for XML, IMAP, Java, LDAP, several major databases, various Internet protocols, and general data manipulation.

Integrates into a web server

It’s called a server-side programming language because it integrates into a web server, such as Apache or Microsoft IIS. To add support for PHP to a web server, you can install the native web server module or a CGI executable.

It can access database and FTP servers

PHP is an Internet-aware system that can access database servers, including MySQL, PostgreSQL, SQLite, LDAP and Microsoft SQL Server, as well as FTP (File Transfer Protocol) servers.

It is highly extensible via its powerful APIs

PHP is actively developed in multiple stable and development branches, each one supporting various features and components. It is highly extensible via its powerful APIs (Application Programming Interfaces).

Supported operating systems and platforms

PHP is implemented in the C programming language, which means that it’s a cross-platform software supporting GNU/Linux, BSD, Solaris, Mac OS X or Microsoft Windows operating systems. It runs successfully on both 32-bit and 64-bit hardware platforms. It is freely available for download on any of the aforementioned OSes, distributed under the PHP license.

Source

How to Do Deep Machine Learning Tasks Inside KVM Guests with a Passed-through NVIDIA GPU

This article shows how to run deep machine learning tasks in a SUSE Linux Enterprise Server 15 KVM guest. In a first step, you will learn how to do the train/test tasks using CPU and GPU separately. After that, we can compare the performance differences.

Preparation

But first of all, we need to do some preparation work before building both the Caffe and the TensorFlow frameworks with GPU support.

1- Enable VT-d in the host BIOS and ensure the kernel parameter ‘intel_iommu=on’ is enabled (a minimal sketch of this step follows the preparation list).

2- Pass the nv970GTX on to the SUSE Linux Enterprise Server 15 KVM guest through libvirt.

Note:
* If there are multiple devices in the same iommu group, you need to pass all of them on to the guest.
* What is passed-through is the 970GTX physical function, not a vGPU instance, because 970GTX is not vGPU capable.

3- Disable the visibility of KVM to the guest by hiding the KVM signature. Otherwise, the newer public NVIDIA drivers and tools refuse to work (Please refer to qemu commit#f522d2a for the details).

4- Install the official NVIDIA display driver in the guest:


5- Install Cuda 10, cuDNN 7.3.1 and NCCL 2.3.5 in the guest:
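For step 1, here is a minimal sketch of how the kernel parameter is typically added on a GRUB-based host; file paths and the config-generation command vary between distributions, so treat the details below as assumptions to adapt:

$ vi /etc/default/grub                      # append intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
$ grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the bootloader config (grub-mkconfig on some distributions)
$ reboot
$ dmesg | grep -e DMAR -e IOMMU             # after the reboot, confirm the IOMMU is active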

Build the Frameworks

Now it’s time to build the TensorFlow framework with GPU support and the Caffe framework.

As the existing whl package of TensorFlow 1.11 doesn’t support Cuda 10 yet, I built TensorFlow 1.12 from the official Git source.

As a next step, build a whl package and install it.

Now let’s create a simple example to test the TensorFlow GPU in the guest:
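The code for this test is not reproduced here; as a stand-in, here is a minimal sketch of a quick check from the guest shell that TensorFlow 1.x can see the passed-through GPU (the function names are from the TensorFlow 1.x API and are assumptions about the installed version):

$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available()); print(tf.test.gpu_device_name())"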

Through the nvidia-smi command, you can see the process information on GPU0 while the example code is running.

Next, let’s build the Caffe framework from the source, and the Caffe python wrapper.

The setup is done!

Examples

Now let’s try to execute some deep learning tasks.

Example 1.1: This is a Caffe built-in example. Please refer to http://caffe.berkeleyvision.org/gathered/examples/mnist.html to learn more.

Let’s use GPU0 in a guest to train this LeNET model.

During the training process, we should see the loss trend steadily downward as the iterations continue. But as the output is too long, I will not show it here.

We get four files in the given folder after the training is done, because I set up the system to save the model and the training state every 5,000 iterations. This means we get 2 files after 5,000 iterations and 2 files after 10,000 iterations.

Now we have a trained model. Let’s test it with the 10,000 test images to see how good the accuracy is.

See? The accuracy is 0.9844. It is an acceptable result.

Example 1.2: Now let’s re-train a LeNET model using CPU instead of GPU – and let’s see what happens.

When we compare the GPU and the CPU while training/testing LeNET with the MNIST dataset, we can see huge performance differences.

We know that the traditional LeNET convolutional neural network (CNN) contains seven layers, not counting the input layer, and that the MNIST database contains 60,000 training images and 10,000 testing images. The performance difference between training on the CPU and training on the GPU grows as the neural network gets deeper.

Example 2.1: This example is a TensorFlow built-in example. Let’s do a very simple mnist classifier using the same mnist dataset.

Here we go: As no convolutional layers are involved, the time consumed is quite short. It is only 8.5 seconds. But the accuracy is 0.92, which is not good enough.

If you want, you can check all details through the TensorBoard.

Example 2.2: Now we create a five-layer CNN which is similar to LeNET. Let’s re-train the system through GPU0 based on the TensorFlow framework.

You can see now that the accuracy is 0.99 – it got much better, and the time consumed is only 2m 16s.

Example 2.3: Finally, let’s redo example 2.2 with CPU instead of GPU0, to check the performance differences.

With 0.99, the accuracy is really good now. But the time consumed is 19m 53s, which is way longer than the time consumed in example 2.2.

Summary

Finally, let’s summarize our test results:

  • The training/testing performance differences between CPU and GPU are huge. They can reach hundreds of times if the network model is complex.
  • SUSE Linux Enterprise Server 15 is a highly reliable platform for whatever machine learning tasks you want to run on it, whether for research or production purposes.

Source

AI in the Real World

Hilary Mason, general manager for machine learning at Cloudera, discussed AI in the real world in her keynote at the recent Open FinTech Forum.

We are living in the future – it is just unevenly distributed with “an outstanding amount of hype and this anthropomorphization of what [AI] technology can actually provide for us,” observed Hilary Mason, general manager for machine learning at Cloudera, who led a keynote on “AI in the Real World: Today and Tomorrow,” at the recent Open FinTech Forum.

AI has existed as an academic field of research since the mid-1950s, and if the forum had been held 10 years ago, we would have been talking about big data, she said. But, today, we have machine learning and feedback loops that allow systems to continue to improve with the introduction of more data.

Machine learning provides a set of techniques that fall under the broad umbrella of data science. AI has returned, from a terminology perspective, Mason said, because of the rise of deep learning, a subset of machine learning techniques based around neural networks that has provided not just more efficient capabilities but the ability to do things we couldn’t do at all five years ago.

Imagine the future

All of this “creates a technical foundation on which we can start to imagine the future,’’ she said. Her favorite machine learning application is Google Maps. Google is getting real-time data from people’s smartphones, then it is integrating that data with public data sets, so the app can make predictions based on historical data, she noted.

Getting this right, however, is really hard. Mason shared an anecdote about how her name is a “machine learning edge case.” She shares her name with a British actress who passed away around 2005 after a very successful career.

Late in her career, the actress played the role of an ugly witch, and a search engine from 2009 combined photos with text results. At the time, Mason was working as a professor, and her bio was paired with the actress’s picture in that role. “Here she is, the ugly hag… and the implication here is obvious,’’ Mason said. “This named entity disambiguation problem is still a problem for us in machine learning in every domain.”

This example illustrates that “this technology has a tremendous amount of potential to make our lives more efficient, to build new products. But it also has limitations, and when we have conferences like this, we tend to talk about the potential, but not about the limitations, and not about where things tend to go a bit wrong.”

Machine learning in FinTech

Large companies operating complex businesses have a huge amount of human and technical expertise on where the ROI in machine learning would be, she said. That’s because they also have huge amounts of data, generally created as a result of operating those businesses for some time. Mason’s rule of thumb when she works with companies, is to find some clear ROI on a cost savings or process improvement using machine learning.

“Lots of people, in FinTech especially, want to start in security, anti-money laundering, and fraud detection. These are really fruitful areas because a small percentage improvement is very high impact.”

Other areas where machine learning can be useful are understanding your customers, churn analysis, and marketing techniques, all of which are pretty easy to get started in, she said.

“But if you only think about the ROI in the terms of cost reduction, you put a boundary on the amount of potential your use of AI will have. Think also about new revenue opportunities, new growth opportunities that can come out of the same technologies. That’s where the real potential is.”

Getting started

The first thing to do, she said, is to “drink coffee, have ideas.” Mason said she visits lots of companies, and when she sees their list of projects, they’re always good ideas. “I get very worried, because you are missing out on a huge amount of opportunity that would likely look like bad ideas on the surface.”

It’s important to “validate against robust criteria” and create a broad sweep of ideas. Then, go through and validate capabilities. Some of the questions to ask include: is there research activity relevant to what you’re doing? Is there work in one domain you can transfer to another domain? Has somebody done something in another industry that you can use or in an academic context that you can use?

Organizations also need to figure out whether systems are becoming commoditized in open source, meaning “you have a robust software and infrastructure you can build on without having to own and create it yourself.” Then, the organization must figure out if data is available — either within the company or available to purchase.

Then it’s time to “progressively explore the risky capabilities. That means have a phased investment plan,’’ Mason explained. In machine learning, this is done in three phases, starting with validation and exploration: Does the data exist? Can you build a very simple model in a week?

“At each [phase], you have a cost gate to make sure you’re not investing in things that aren’t ready and to make sure that your people are happy, making progress, and not going down little rabbit holes that are technically interesting, but ultimately not tied to the application.”

That said, Mason said predicting the future is, of course, very hard, so people write reports on different technologies that are designed to be six months to two years ahead of what they would put in production.

Looking ahead

As progress is made in the development of AI, machine learning and deep learning, there are still things we need to keep in mind, Mason said. “One of the biggest topics in our field right now is how we incorporate ethics, how we comply with expectations of privacy in the practice of data science.”

She gave a plug to a short, free ebook called “Data Driven: Creating a Data Culture,” that she co-authored with DJ Patil, who worked as chief data scientist for President Barack Obama. Their goal, she said, is “to try and get folks who are practicing out in the world of machine learning and data science to think about their tools [and] for them to practice ethics in the context of their work.”

Mason ended her presentation on an optimistic note, observing that “AI will find its way into many fundamental processes of the businesses that we all run. So when I say, ‘Let’s make it boring,’ I actually think that’s what makes it more exciting.’”

Source

Linux Today – What’s New In Red Hat OpenStack Platform 14?

Nov 15, 2018

The Red Hat OpenStack Platform 14 release is based on the upstream OpenStack Rocky milestone, which first became publicly available on Aug. 30. Among the new features in OSP 14 are improved networking capabilities, including enhanced load balancing capabilities for container workloads. Red Hat is also continuing to push forward on the integration of its OpenShift Kubernetes container orchestration platform with OpenStack.

In a video interview with eWEEK, Mark McLoughlin, Senior Director of Engineering, OpenStack at Red Hat, outlined some of the new features in OSP 14 and the direction for the road ahead.

Source

Odd Realm is a sandbox settlement builder inspired by Dwarf Fortress and Rimworld with Linux support

Inspired by the likes of Dwarf Fortress and Rimworld, Odd Realm is a sandbox settlement builder currently in Early Access on itch.

A sweet find while browsing around for new games today, I came across it and was instantly pulled in by the style. Looks like it could be an interesting choice perhaps if you find Dwarf Fortress too complex or you fancy trying out something new.

You will face off against passing seasons, roaming bandits, underground horrors, and gods from legend, which makes it sound really quite fun.

Features:

  • 4 procedurally generated biomes (Desert, Taiga, Voidland, and Tropical)
  • 24+ Creatures
  • 100+ items, weapons, and gear
  • 100+ buildable blueprints for props, blocks, plants, trees, and platforms
  • 9+ Settler professions
  • Unique scenarios and encounters based on player decisions

Currently it is only available on itch.io for $10. The developer is planning to eventually release it on Steam, once it is developed enough and enough feedback has been gathered from the itch store.

Source

New Linux-Targeting Crypto-Mining Malware Combines Hiding and Upgrading Capabilities

Japanese multinational cybersecurity firm Trend Micro has detected a new strain of crypto-mining malware that targets PCs running Linux, according to a report published Nov. 8.

The new strain is reportedly able to hide the malicious process of unauthorized cryptocurrency-mining through users’ CPU by implementing a rootkit component. The malware itself, detected by Trend Micro as Coinminer.Linux.KORKERDS.AB, is also reportedly capable of updating itself.

According to the report, the combination of hiding and self-upgrading capabilities gives the malware a great advantage. While the rootkit fails to hide the increased CPU usage and the presence of the running crypto-mining process, the malware is also improved by updates, which can completely repurpose the existing code or tools by editing a few lines of code, the report notes.

The new crypto-mining malware strain infects Linux PCs via third-party or compromised plugins. Once installed, the plugin reportedly gets admin rights, with malware able to be run with privileges granted to an application. In this regard, Trend Micro mentioned another case of Linux-targeting crypto malware that used the same entry point, and took place in September this year.

Based on web server statistics, the estimated market share of Linux on personal computers amounted to around 1.8 percent in 2016. The share of Microsoft Windows systems in 2016 was around 89.7 percent, while Mac OS served around 8.5 percent of users.

Recently, Cointelegraph reported that a group of South-Korean hackers will face trial for a cryptojacking case that allegedly infected more than 6,000 computers with malicious crypto-mining malware.

In September, a report revealed that leaked code targeting Microsoft systems, which hackers allegedly stole from the U.S. National Security Agency (NSA), sparked a fivefold increase in cryptocurrency mining malware infections.

Source

6 Best Practices for High-Performance Serverless Engineering | Linux.com

When you write your first few lambdas, performance is the last thing on your mind. Permissions, security, identity and access management (IAM) roles, and triggers all conspire to make even the first couple of lambdas after a “hello world” trial a struggle just to get your first serverless deployments up and working. But once your users begin to rely on the services your lambdas provide, it’s time to focus on high-performance serverless.

Here are some key things to remember when you’re trying to produce high-performance serverless applications.

1. Observability
Serverless handles scaling really well. But as scale interacts with complexity, slowdowns and bugs are inevitable. I’ll be frank: these can be a bear if you don’t plan for observability from the start.

Read more at The New Stack

Source

How to use systemd-nspawn for Linux system recovery

For as long as GNU/Linux systems have existed, system administrators have needed to recover from root filesystem corruption, accidental configuration changes, or other situations that kept the system from booting into a “normal” state.

Linux distributions typically offer one or more menu options at boot time (for example, in the GRUB menu) that can be used for rescuing a broken system; typically they boot the system into a single-user mode with most system services disabled. In the worst case, the user could modify the kernel command line in the bootloader to use the standard shell as the init (PID 1) process. This method is the most complex and fraught with complications, which can lead to frustration and lost time when a system needs rescuing.

Most importantly, these methods all assume that the damaged system has a physical console of some sort, but this is no longer a given in the age of cloud computing. Without a physical console, there are few (if any) options to influence the boot process this way. Even physical machines may be small, embedded devices that don’t offer an easy-to-use console, and finding the proper serial port cables and adapters and setting up a serial terminal emulator, all to use a serial console port while dealing with an emergency, is often complicated.

When another system (of the same architecture and generally similar configuration) is available, a common technique to simplify the repair process is to extract the storage device(s) from the damaged system and connect them to the working system as secondary devices. With physical systems, this is usually straightforward, but most cloud computing platforms can also support this since they allow the root storage volume of the damaged instance to be mounted on another instance.

Once the root filesystem is attached to another system, addressing filesystem corruption is straightforward using fsck and other tools. Addressing configuration mistakes, broken packages, or other issues can be more complex since they require mounting the filesystem and locating and changing the correct configuration files or databases.

Using systemd

Before systemd, editing configuration files with a text editor was a practical way to correct a configuration. Locating the necessary files and understanding their contents may be a separate challenge, which is beyond the scope of this article.

When the GNU/Linux system uses systemd though, many configuration changes are best made using the tools it provides—enabling and disabling services, for example, requires the creation or removal of symbolic links in various locations. The systemctl tool is used to make these changes, but using it requires a systemd instance to be running and listening (on D-Bus) for requests. When the root filesystem is mounted as an additional filesystem on another machine, the running systemd instance can’t be used to make these changes.
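To make the symbolic-link mechanism concrete, here is a minimal sketch of what enabling a unit amounts to, using a hypothetical example.service that declares WantedBy=multi-user.target (both names are assumptions for illustration):

$ systemctl enable example.service
# roughly equivalent to creating the symlink by hand:
$ ln -s /usr/lib/systemd/system/example.service /etc/systemd/system/multi-user.target.wants/example.service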

Manually launching the target system’s systemd is not practical either, since it is designed to be the PID 1 process on a system and manage all other processes, which would conflict with the already-running instance on the system used for the repairs.

Thankfully, systemd has the ability to launch containers, fully encapsulated GNU/Linux systems with their own PID 1 and environment that utilize various namespace features offered by the Linux kernel. Unlike tools like Docker and Rocket, systemd doesn't require a container image to launch a container; it can launch one rooted at any point in the existing filesystem. This is done using the systemd-nspawn tool, which will create the necessary system namespaces and launch the initial process in the container, then provide a console in the container. In contrast to chroot, which only changes the apparent root of the filesystem, this type of container will have a separate filesystem namespace, suitable filesystems mounted on /dev, /run, and /proc, and separate process and IPC namespaces. Consult the systemd-nspawn man page to learn more about its capabilities.
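For comparison, a minimal sketch of what a manual chroot-based repair typically needs before the target's own tools work at all (the mount point matches the example used later in this article):

$ mount --bind /dev /tmp/target-rescue/dev
$ mount -t proc proc /tmp/target-rescue/proc
$ mount -t sysfs sysfs /tmp/target-rescue/sys
$ chroot /tmp/target-rescue /bin/bash

Even then, systemctl inside the chroot has no running systemd instance to talk to, which is exactly the limitation systemd-nspawn removes.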

An example to show how it works

In this example, the storage device containing the damaged system’s root filesystem has been attached to a running system, where it appears as /dev/vdc. The device name will vary based on the number of existing storage devices, the type of device, and the method used to connect it to the system. The root filesystem could use the entire storage device or be in a partition within the device; since the most common (simple) configuration places the root filesystem in the device’s first partition, this example will use /dev/vdc1. Make sure to replace the device name in the commands below with your system’s correct device name.

The damaged root filesystem may also be more complex than a single filesystem on a device; it may be a volume in an LVM volume set or on a set of devices combined into a software RAID device. In these cases, the necessary steps to compose and activate the logical device holding the filesystem must be performed before it will be available for mounting. Again, those steps are beyond the scope of this article.
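Purely for orientation, and only as an assumption since the exact commands depend on the layout, activating such devices often looks like this:

$ mdadm --assemble --scan    # assemble any software RAID arrays found on the attached devices
$ vgscan && vgchange -ay     # detect LVM volume groups and activate their logical volumes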

Prerequisites

First, ensure the systemd-nspawn tool is installed—most GNU/Linux distributions don’t install it by default. It’s provided by the systemd-container package on most distributions, so use your distribution’s package manager to install that package. The instructions in this example were tested using Debian 9 but should work similarly on any modern GNU/Linux distribution.

Using the commands below will almost certainly require root permissions, so you’ll either need to log in as root, use sudo to obtain a shell with root permissions, or prefix each of the commands with sudo.

Verify and mount the filesystem

First, use fsck to verify the target filesystem’s structures and content:

$ fsck /dev/vdc1

If it finds any problems with the filesystem, answer the questions appropriately to correct them. If the filesystem is sufficiently damaged, it may not be repairable, in which case you’ll have to find other ways to extract its contents.

Now, create a temporary directory and mount the target filesystem onto that directory:

$ mkdir /tmp/target-rescue
$ mount /dev/vdc1 /tmp/target-rescue

With the filesystem mounted, launch a container with that filesystem as its root filesystem:

$ systemd-nspawn --directory /tmp/target-rescue --boot -- --unit rescue.target

The command-line arguments for launching the container are:

  • --directory /tmp/target-rescue provides the path of the container’s root filesystem.
  • --boot searches for a suitable init program in the container’s root filesystem and launches it, passing parameters from the command line to it. In this example, the target system also uses systemd as its PID 1 process, so the remaining parameters are intended for it. If the target system you are repairing uses any other tool as its PID 1 process, you’ll need to adjust the parameters accordingly.
  • -- separates parameters for systemd-nspawn from those intended for the container’s PID 1 process.
  • --unit rescue.target tells systemd in the container the name of the target it should try to reach during the boot process. In order to simplify the rescue operations in the target system, boot it into “rescue” mode rather than into its normal multi-user mode.

If all goes well, you should see output that looks similar to this:

Spawning container target-rescue on /tmp/target-rescue.

Press ^] three times within 1s to kill container.

systemd 232 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)

Detected virtualization systemd-nspawn.

Detected architecture arm.

Welcome to Debian GNU/Linux 9 (Stretch)!

Set hostname to <test>.

Failed to install release agent, ignoring: No such file or directory

[ OK ] Reached target Swap.

[ OK ] Listening on Journal Socket (/dev/log).

[ OK ] Started Dispatch Password Requests to Console Directory Watch.

[ OK ] Reached target Encrypted Volumes.

[ OK ] Created slice System Slice.

Mounting POSIX Message Queue File System…

[ OK ] Listening on Journal Socket.

Starting Set the console keyboard layout…

Starting Restore / save the current clock…

Starting Journal Service…

Starting Remount Root and Kernel File Systems…

[ OK ] Mounted POSIX Message Queue File System.

[ OK ] Started Journal Service.

[ OK ] Started Remount Root and Kernel File Systems.

Starting Flush Journal to Persistent Storage…

[ OK ] Started Restore / save the current clock.

[ OK ] Started Flush Journal to Persistent Storage.

[ OK ] Started Set the console keyboard layout.

[ OK ] Reached target Local File Systems (Pre).

[ OK ] Reached target Local File Systems.

Starting Create Volatile Files and Directories…

[ OK ] Started Create Volatile Files and Directories.

[ OK ] Reached target System Time Synchronized.

Starting Update UTMP about System Boot/Shutdown…

[ OK ] Started Update UTMP about System Boot/Shutdown.

[ OK ] Reached target System Initialization.

[ OK ] Started Rescue Shell.

[ OK ] Reached target Rescue Mode.

Starting Update UTMP about System Runlevel Changes…

[ OK ] Started Update UTMP about System Runlevel Changes.

You are in rescue mode. After logging in, type “journalctl -xb” to view

system logs, “systemctl reboot” to reboot, “systemctl default” or ^D to

boot into default mode.

Give root password for maintenance

(or press Control-D to continue):

In this output, you can see systemd launching as the init process in the container and detecting that it is being run inside a container so it can adjust its behavior appropriately. Various unit files are started to bring the container to a usable state, then the target system’s root password is requested. You can enter the root password here if you want a shell prompt with root permissions, or you can press Ctrl+D to allow the startup process to continue, which will display a normal console login prompt.

When you have completed the necessary changes to the target system, press Ctrl+] three times in rapid succession; this will terminate the container and return you to your original shell. From there, you can clean up by unmounting the target system’s filesystem and removing the temporary directory:

$ umount /tmp/target-rescue
$ rmdir /tmp/target-rescue

That’s it! You can now remove the target system’s storage device(s) and return them to the target system.

The idea to use systemd-nspawn this way, especially the --boot parameter, came from a question posted on StackExchange. Thanks to Shibumi and kirbyfan64sos for providing useful answers to this question!

Source
