Download Manjaro Linux KDE 18

Manjaro Linux KDE is an open source Linux operating system that uses all the powerful features found on other Manjaro editions, but on top of a highly customized KDE desktop environment. It is based on the Arch Linux distribution, which means that it is a very stable, reliable and virus-free operating system.

Follows a rolling-release model

It follows a rolling-release model, keeping your installation up to date indefinitely (or at least until a complete reinstall is required because of unforeseen circumstances). It is available for download as an ISO image that you will need to burn to a blank DVD or deploy to a USB flash drive with a tool such as UNetbootin.
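On an existing Linux system, writing the image to a flash drive can also be done with plain dd (a sketch: the ISO filename and the /dev/sdX target are placeholders; double-check the device name with lsblk first, since dd overwrites whatever it points at):

```shell
# Identify the target USB device first (e.g. with lsblk), then write the image.
# WARNING: this destroys all data on /dev/sdX.
sudo dd if=manjaro-kde-18.0-x86_64.iso of=/dev/sdX bs=4M status=progress oflag=sync
```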

Live DVD boot menu options

It uses exactly the same boot menu found on all official Manjaro releases, allowing users to install the entire operating system on their computer, use the distribution directly from the live media, or boot the operating system that is already installed on the respective computer.

Uses the KDE Plasma desktop environment

Because it uses the KDE Plasma desktop environment, this edition is larger than any other Manjaro flavor. Besides all the amazing applications that are part of a default KDE installation, this edition includes the entire LibreOffice office suite, the GIMP image editor, and the VLC media player.

You can install the OS with or without proprietary drivers

An interesting feature of the Manjaro live media is the ability to use or install the operating system with or without proprietary drivers. This means that if you have an AMD Radeon or Nvidia graphics card, choosing the second option (Start or install Manjaro (non-free drivers)) is the best way to enjoy a complete Manjaro experience. On the other hand, if you have an Intel video card, we suggest using the first option when installing or running the live environment.

Bottom line

We highly recommend the Manjaro Linux KDE operating system if you own a high-end computer and you want to transform it into a modern, beautiful, clean and powerful workstation for office, multimedia and gaming tasks.

Source

GNU Linux-Libre 4.19 Kernel Is Now Available for Those Seeking 100% Freedom | Linux.com

With Linux kernel 4.19 hitting the streets, a new version of the GNU Linux-libre kernel is now available: version 4.19-gnu, based on the upstream kernel but without any proprietary drivers. The GNU Linux-libre 4.19-gnu kernel borrows all the new features of the upstream release, including the experimental EROFS (Enhanced Read-Only File System) file system, initial support for the Wi-Fi 6 (802.11ax) wireless protocol, and mitigations for the L1TF and SpectreRSB security flaws. While the GNU Linux-libre 4.19 kernel comes with all these goodies found in the upstream Linux 4.19 kernel, it doesn’t ship with proprietary code. Deblobbed drivers include Aspeed ColdFire FSI Master, MT76x0U and MT76x2U Wi-Fi, MTK Bluetooth UART, as well as Keystone and Qualcomm Hexagon Remoteproc.

Source

Linux IoT Landscape: Distributions – IoT For All

Linux has traditionally suffered from an embarrassment of riches when it comes to selecting the distribution used to deploy it.

What Is a Linux Distribution?

Linux is an operating system, the program at the heart of a computer that decides how to partition the available resources (CPU, memory, disk, network) between all of the other programs vying for them. The operating system, while very important, isn’t useful on its own: its purpose is to manage the compute resources for other programs, and without those other programs it doesn’t serve much of a purpose.

That’s where the distribution comes in. A distribution provides a large number of other programs that, together with Linux, can be assembled into working sets for a vast number of purposes. These programs can range from basic program writing tools such as compilers and linkers to communications libraries to spreadsheets and editors to pretty much everything in between. A distribution tends to have a superset of what’s actually used for each individual computer or solution. It also provides many choices for each category of software components that users or companies can assemble into what they consider a working set. A rough analogy can be made to a supermarket in which there are many options for many items on the shelves, and each user picks and chooses what makes sense to them in their cart.

Binary-Based or Source-Based Distribution?

Distributions can largely be split into two categories: binary-based and source-based.

Binary-based distributions provide all of the software components already pre-compiled and ready to be installed. These components are compiled with “good-enough” build options that work fine for the majority of users. They also provide sources for these components for the minority of users who need or want to compile their own. Following our supermarket analogy, this supermarket contains all of the food pre-packaged and pre-cooked, but with clear instructions on how to get the ingredients and repeat the process for those who want to tweak a recipe or two. This kind of distribution is exemplified by Debian, Fedora, openSUSE, Ubuntu, and many others. And while they provide the same type of system, they all do so using different—and unfortunately, incompatible—methods. They’re the primary kind of distribution used in general-purpose computers such as servers, desktops, and laptops.
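On a binary-based distribution such as Debian or Ubuntu, installing a pre-compiled component is a one-line operation, and the matching sources can be fetched separately by the minority who want to rebuild. A sketch using standard apt commands (curl is just an example package):

```shell
# Install the pre-built binary package
sudo apt-get install curl

# For users who want to tweak and recompile: fetch the build
# dependencies and the corresponding source package
sudo apt-get build-dep curl
apt-get source curl
```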

Source-based distributions, on the other hand, focus on providing a framework in which the end users can build all of the components themselves from source code. These distributions also provide tools for easily choosing a sensible starting collection of components and tweaking each component’s build as necessary. These tweaks can be as simple as adding a compile flag, or as involved as using a different version of the sources or modifying the sources in some way. A user will assemble a menu of what they want to build and then start the build. After minutes or hours, depending on the case, they will have a resulting image they can use for their computer. Examples of this kind of distribution are Gentoo, Android, and Yocto. In our supermarket analogy, this is closer to a bulk foods store, where you can get pre-measured foods with detailed machine-readable cooking instructions, and you’d have a fancy cooker that can read those instructions and cook the meals for you, handling tweaks to a range of recipes such as swapping brown rice for white rice. Sort of; the analogy gets a bit weak on this one.
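On a source-based distribution such as Gentoo, the same component is compiled locally, and per-package build tweaks are expressed declaratively (as USE flags) rather than by hand-editing the build. A sketch; the specific flag shown is illustrative:

```shell
# Record a per-package build tweak: enable the http2 USE flag for curl
echo "net-misc/curl http2" >> /etc/portage/package.use/curl

# Build and install curl from source with that flag applied
emerge --ask net-misc/curl
```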

These source-based distributions are generally preferred for embedded Linux-based devices in general and IoT devices in particular. While they are harder to set up and maintain, source-based distributions have the unique advantage of being able to tailor the installed image to the exact target hardware in order to maximize resource usage—or minimize resource wastage. For embedded devices, resources tend to be a strong constraint. In addition, source-based distributions are better suited for cross-building—where the machine on which you build your platform isn’t the same as the one on which you run it—while binary-based distributions are better for self-hosted building—where you build and run on the same machine (or same architecture).

Given today’s prevalence of Intel architecture build machines—and of ARM architectures in IoT products—cross-building support is important for IoT devices.
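A concrete example of cross-building: compiling on an Intel machine for an ARM target using a cross-toolchain. A sketch; the toolchain package name shown is the one Debian and Ubuntu use:

```shell
# On the x86 build machine, install an ARM hard-float cross-compiler
sudo apt-get install gcc-arm-linux-gnueabihf

# Cross-compile a program; the result runs on the ARM target,
# not on the build machine itself
arm-linux-gnueabihf-gcc -o hello hello.c

# 'file' reports an ARM executable, confirming the cross-build
file hello
```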

New Kid On The Block: Container-Centered Distributions

The traditional Linux method—shipping a single unified userspace that contains all of the platform outside of the kernel—is changing. The new model is about having a collection of “containers” that componentize the userspace. The containerized model transforms a portion of the userspace into a federated collection of components with a high degree of independence between each component.

Containerized distributions bring many benefits, from allowing teams to work more independently to making granular platform upgrades feasible. The downside is that they have a larger footprint than non-containerized solutions. If the evolution of technology has shown us anything, however, it’s that when the only downside of a new technology is its footprint, the resources available to it tend to expand, making that a smaller and smaller problem with every new generation.

Some of the early options are described below to compare to existing distributions.

The Contenders: Linux Distributions for IoT

Now we must delve into contentious territory. Many people have their favorite Linux distribution, and even if their requirements change wildly (for example, going from a server setup to an embedded IoT device), they cling to that distribution—sometimes to the point of fitting a square peg into a round hole.

I’ll preface the list below: this is a sampling of some well-established Linux distributions and some up-and-comers. Many others exist and might be more suitable for some use cases.

Now with that out of the way…

Yocto

Yocto is a source-based distribution that’s used in many embedded and IoT devices. It tries to unite the benefits of binary-based distributions, such as clear separation of packages and their dependencies, with the benefit of source-based distributions that small configuration changes can alter your target binaries in significant ways.

Yocto is composed of a series of recipes, each of which describes how to build one module of the system (e.g. a library, daemon, or application). These recipes are then collected into layers, which group a series of recipes and configure various aspects of how they are to be used together, from compile flags to recipe features to details on how they show up on the target. Each target build is composed of a few of these layers, each one adding or removing packages from the lower layers or modifying their default behavior. This allows multiple parties to tweak their own layer to affect the final image. So if the base layer uses a conservative set of compiler flags (which it usually does), a chip vendor can add compiler flags that are beneficial to their specific chip model, and a board vendor can remove chip functionality that their board might not support.

What this means in practice for your IoT product is that, for a board that already supports Yocto, your effort will go into adding or modifying the recipes that provide your value-add over the base functionality. You will also need a build and configuration management infrastructure set up that allows creating images for your target, though in today’s world of containers that is not too difficult to do.
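Concretely, a Yocto build boils down to fetching the reference distribution, layering in the board vendor's and your own layers, and building an image. A minimal sketch using the standard Poky workflow; the branch name is illustrative of the releases current at the time of writing:

```shell
# Fetch the Poky reference distribution ("thud" was the 2.6 release branch)
git clone -b thud git://git.yoctoproject.org/poky
cd poky

# Set up the build environment (creates and enters a build/ directory),
# then build a minimal bootable image from the configured layers
source oe-init-build-env
bitbake core-image-minimal
```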

For more information on Yocto, you can start here. It’s also worth checking how well supported Yocto is on any dev boards you’re considering for your IoT solution.

Debian

Debian is a venerable open source binary-based distribution. It’s both a distribution unto itself and the baseline for other well-known derived distributions, the most famous of which is Ubuntu.

Debian has a sizeable collection of packages already pre-built for ARM (the architecture of choice for IoT), but the level of support and maintenance for the ARM binaries of these packages tends to be significantly lower than for their Intel counterparts, given Debian’s strength in Intel ecosystems. So metrics such as “10,000+ packages built” aren’t all that meaningful. You’ll need to understand the packages that are important to you and how well supported they are.

A shortcoming of many distributions used in self-hosted setups (e.g. Debian) is that developers don’t understand or remember that package installation might not be done on the machine that will ultimately run the package, and thus they can’t rely on any functionality from the target being available. Given that this nuisance is also a headache for Docker environments, distributions have spent good effort cleaning up these dependencies, so it’s a smaller problem than it used to be.
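Debian's multiarch support illustrates the cross-install case: packages built for a foreign architecture can be added to an Intel host without executing anything from the target. A sketch:

```shell
# Allow armhf packages to coexist with the native architecture
sudo dpkg --add-architecture armhf
sudo apt-get update

# Install an ARM build of a library onto the x86 host;
# its maintainer scripts must not assume they run on ARM
sudo apt-get install libc6:armhf
```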

The effort to set up a build environment for a small set of packages is fairly trivial, but the infrastructure to build all the packages for a system can become significant.

Because of this, Debian is a good option for IoT as long as the board you are considering has already gone through the effort of supporting Debian, in which case you just need to add or create a few packages to complete your platform.

EdgeX Foundry

EdgeX Foundry is not exactly a distribution in the strict sense, in that it does not have any opinion on the Board Support Package (BSP) component of distributions. The BSP is the portion that contains the Linux kernel itself, device drivers, and the libraries that enable the hardware platform. EdgeX Foundry starts from a level above that, requiring a working Linux system with Docker support as the underlying substrate. From there it provides a wide variety of containers offering a rich set of middleware and verticals for IoT devices, in particular edge devices. (In Docker parlance, a container is a self-contained module that usually provides a vertical function such as a database or a web service, with little or no dependency on the host operating system, libraries, etc.)
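The substrate this model assumes can be seen with plain Docker commands: each vertical function runs as its own container on the host. A generic sketch, not the actual EdgeX Foundry service names:

```shell
# Run a database as one self-contained container...
docker run -d --name metrics-db redis

# ...and a web service as another; each container can be
# upgraded or replaced independently of the rest of the platform
docker run -d --name dashboard -p 8080:80 nginx
```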

The concepts behind EdgeX Foundry point the way forward for larger IoT devices, particularly edge devices, but work remains to be done to define a more constrained version that provides a good set of baseline services. Progress has been made in this regard with a move of some services from JVM-based to Go-based implementations, but the footprint will remain out of reach for low- and mid-end Linux-based IoT for the immediate future.

Foundries.io Microplatform

Foundries.io has created a Linux platform using a Yocto-based approach for the board support layer, and then layers a set of containerized microservices on top of it. Their container set is more modest than EdgeX Foundry’s, with a correspondingly smaller footprint.

While full access to the Foundries.io product with automated updates and management is available via subscription, the underlying platform is open source and available here.

Conclusion

Linux-based IoT is starting a migration from a traditional embedded model where the complete vertical solution is created from a single team/worldview/toolchain/model to a more flexible model with greater separation of firmware, board, middleware, and application components. This migration is not without cost, however: it places higher demands on CPU, memory, and disk. In order to choose a Linux baseline for your next IoT project, you’ll need to take into account what footprint you can afford and what lifespan you plan for your product. Smaller and more quickly replaced products are better off staying close to today’s tried-and-true solutions such as Yocto. Products that can afford more resources, and that require new-feature rollout into deployed units, should look into the more mainstream Linux distributions and the new container-focused solutions as a path forward.

Source

Steam Play thoughts: A Valve game streaming service

With the talk of some big players moving into cloud gaming, along with a number of people thinking Valve will also be doing it, here’s a few thoughts from me.

Firstly, for those that didn’t know already, Google are testing the waters with their own cloud gaming service called Project Stream. For this, they teamed up with Ubisoft to offer Assassin’s Creed Odyssey on the service. I actually had numerous emails about this from a bunch of Linux gamers who managed to try it out, and apparently it worked quite well on Linux.

EA are pushing pretty heavily into this too with what they’re calling Project Atlas; their Chief Technology Officer explained in a Medium post how they’ve got one thousand EA employees now working on it. That sounds incredibly serious to me!

There’s more cloud services offering hardware for a subscription all the time, although a lot of them are quite expensive and use Windows.

So this does raise the question: what is Valve going to do? Cloud gaming services that allow people with lower-end devices to play a bunch of AAA games relatively easily could end up cutting into Valve’s wallet.

Enter Valve’s Cloud Gaming Service

Pure speculation of course, but with the number of big players now moving into the market, I’m sure Valve will be researching it themselves. Perhaps this is what Steam Play is actually progressing towards? With Steam Play, Valve would be able to give users access to a large library of games running on Linux, without paying Microsoft any sort of Windows licensing fee, and being Linux it would obviously allow them to heavily customise it to their liking.

On top of that, what about the improvements this could further bring to native desktop Linux gaming? Stop and think about it for a moment: how can Valve tell developers they will get the best experience on this cloud gaming platform? By having a native Linux version they support with updates and fixes. Valve are already suggesting developers use Vulkan, so it’s not such a stretch I think.

Think about how many games, even single-player games are connected to the net now in some way with various features. Looking to the future, having it so your games can be accessed from any device with the content stored in the cloud somewhere does seem like the way things are heading. As much as some (including me) aren’t sold on the idea, clearly this is where a lot of major players are heading and Valve won’t want to be left behind.

For Valve, it might not even need to be a subscription service, since they already host the data for the developers. Perhaps, you buy a game and get access to both a desktop and cloud copy? That would be a very interesting and tempting idea. Might not be feasible of course, since the upkeep on the cloud machines might require a subscription if Valve wanted to keep healthy profits, but it’s another way they could possibly trump the already heavy competition.

Think the whole idea is incredibly farfetched? Fair enough, I do a little too. However, they might already have a good amount of the legwork done on this, thanks to their efforts with the Steam Link. Did anyone think a year or two ago you would be able to stream Steam games to your phone and tablet?

Valve also offer movies, TV series and more on Steam so they have quite a lot to offer.

It might not happen at all of course; these are just some basic thoughts of mine on what Valve’s moves might be in future. It’s likely not going to happen for VR titles, since they need so much power and any upset with latency could make people quite sick. Highly competitive games would also be difficult, but as always, once it gets going the technology behind it will constantly improve. There’s got to be some sort of end game for all their Linux gaming work beyond just helping us; they are a business, and they will keep moving along with all the other major players.

Source

Download Manjaro Linux Xfce 18

Manjaro Linux Xfce is an open source and completely free Linux operating system based on the powerful Arch Linux distribution. It uses a tool called BoxIt, which is designed to work like a Git repository. Besides the fact that the Manjaro distribution uses Arch Linux as its base, it aims to be user-friendly, especially because of the easy-to-configure installation. Additionally, it is fully compatible with the Arch User Repository (AUR) and uses the Pacman package manager.

Distributed in multiple editions

Manjaro is distributed with the Xfce, KDE and Openbox desktop environments, supporting both 32-bit and 64-bit architectures. A Netboot edition of Manjaro also exists for advanced users who want to install the distro over the Internet. In addition to the official editions listed above, the talented Manjaro community provides many other flavors, built around the LXDE, Cinnamon, MATE, Enlightenment and awesome desktop environments and window managers.

A rolling-release distribution

Keep in mind that, just like Arch Linux, Manjaro Linux is a rolling release. This means that users don’t need to download a new ISO image in order to upgrade the system to the latest stable version.
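On a rolling release like Manjaro, a single package-manager command brings the whole installation up to the latest stable state:

```shell
# Refresh the package databases and upgrade every installed package
sudo pacman -Syu
```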

Boot options

The boot menu is exactly the same as on the other Manjaro flavors, allowing users to boot the live environment with or without proprietary drivers, check if the hardware components are correctly recognized, test the system memory (RAM), and boot the operating system that is currently installed. “Start Manjaro Linux” is the recommended option for all new users, as it will start the graphical session powered by the lightweight Xfce desktop environment.

Default applications

Thunar is the default file manager, Mozilla Firefox can be used for all your web browsing needs, Mozilla Thunderbird is the default email client, and the Pidgin multi-protocol instant messenger application is there for any type of communication. The Steam client for Linux is also installed by default in this edition of Manjaro Linux, along with the HexChat IRC client, Viewnior image viewer, the powerful GIMP image editor, VLC Media Player, Xnoise music organizer and player, and the entire LibreOffice office suite.

Xfce is in charge of the graphical session

Besides the common utilities such as calculator, terminal emulator, text editor, clipboard manager, dictionary, document viewer and archive manager, the Manjaro Xfce edition also includes the GParted utility for disk partitioning tasks, and the Xfburn application for burning CD/DVD discs. We strongly recommend the Xfce edition of Manjaro Linux for new Linux users, old computers, and for all who want to discover the true power of the Arch Linux operating system.

Source

How to Manage Storage on Linux with LVM | Linux.com

Logical Volume Manager (LVM) is a software-based RAID-like system that lets you create “pools” of storage and add hard drive space to those pools as needed. There are lots of reasons to use it, especially in a data center or any place where storage requirements change over time. Many Linux distributions use it by default for desktop installations, though, because users find the flexibility convenient and there are some built-in encryption features that the LVM structure simplifies.

However, if you aren’t used to seeing an LVM volume when booting off of a Live CD for data rescue or migration purposes, LVM can be confusing because the mount command can’t mount LVM volumes. For that, you need LVM tools installed. The chances are great that your distribution has LVM utils available—if they aren’t already installed.

This tutorial explains how to create and deal with LVM volumes.

Source

Intel 6th and 7th Gen box PCs offer PCIe graphics expansion

Nov 6, 2018

Aaeon launched a rugged, Linux-friendly line of “Boxer-6841M” industrial computers based on 6th or 7th Gen Core CPUs with either a PCIe x16 slot for Nvidia GPU cards or 2x PCIe x8 slots for frame grabbers.

The Boxer-6841M line of six industrial box PCs is designed for edge AI and machine vision applications. Like last year’s Boxer-6839, the rugged, wall-mountable computers run Linux (Ubuntu 16.04) or Windows on Intel’s 6th Generation “Skylake” and 7th Generation “Kaby Lake” Core and Xeon processors with 35W to 73W TDPs. The systems use T and TE branded Core CPUs and Intel H110 PCH or C236 PCH chipsets.

Boxer-6841M-A2 (left) and smaller, fanless -A5

Aaeon refers to the Boxer-6841M as “compact,” but even the two smaller A5 and A6 models with dual PCIe x8 slots are considerably larger than the Boxer-6839, measuring 260 x 300 x 155mm. The PCIe x8 slots on the fanless A5 and A6 are intended primarily for loading video frame grabber cards. The only difference between the A5 and A6 is that the A6 provides 4x RS232 ports via a side-mounted addition.

The fan-powered A1-A4 models, meanwhile, measure 400 x 200 x 155mm. This gives them room to fit a single PCIe x16 slot for AI-enabled Nvidia Tesla graphics cards.

Boxer-6841M A1 through A4 models (left) and BOXER-6841M power specs

The A1 and A3 models can drive 180W of power to their PCIe x16 slot while the A2 and A4 offer dual 12V inputs to support up to a 250W PCIe x16 card. The dual input design “makes the system more stable by reducing the level of wasted heat that would be produced by a single 24V input,” says Aaeon. (See power specs above.)

Boxer-6841M A5 and A6

The difference between the A1 and A3 models and between the A2 and A4 models is that only the A1 and A2 support the 73W TDP Intel Xeon parts. All six systems also offer single PCIe x1 slots and dual mini-PCIe slots with USB support. There’s also a single SIM slot and 4G and WiFi options with antennas.

Boxer-6841M-A1 front view, which is the same as A2, A3, and A4 (left) and A4 rear view with extra 12V input, a feature it shares with the A2

The Boxer-6841M systems support up to 32GB of DDR4 (including ECC) RAM. They offer dual 2.5-inch SATA bays with removable drive support, with an option to expand to 4x bays. The system provides 5x GbE ports that support machine vision cameras.

A VGA port and 2x HDMI 1.4b ports handle video duty, backed up by audio in and out jacks. You also get 4x USB 3.0 ports and an RS-232/422/485 port.

Boxer-6841M-A5 (left) and A6 model with quad-serial port extension

All six models provide 12-24V inputs on the front, as well as a power switch, a remote power connector, and LEDs. The systems support -20 to 55°C temperatures with the 35W TDP processors and -20 to 45°C with the 73W Xeons, both with 0.5 m/s airflow. Anti-vibration support is listed as random, 1 Grm, 5~500Hz.

Further information

No pricing or availability information was provided for the Boxer-6841M computers. More information may be found at Aaeon’s Boxer-6841M product page.

Source

​A kinder, gentler Linus Torvalds and Linux 4.20

After apologizing for his behavior in the Linux developer community last September, Linus Torvalds came back to Linux in October. And now, in November, with the first release candidate of the 4.20 Linux kernel out, it’s time to look at what’s what with Torvalds and the controversial Linux Code of Conduct (CoC).

The answer is: We do have a kinder, gentler Torvalds.

Torvalds told me that, besides seeking professional help, he has “an email filter in place (that might be expanded upon or modified as needed or as I come up with more esoteric swearing — the current filter is really pretty basic).” In addition, Torvalds has asked the other senior Linux maintainers “to just send me email if they feel I’ve been unnecessarily abrupt.”

The results? I’ve been going through the Linux Kernel Mailing List (LKML) archives, and I’ve seen hardly a trace of the blue language that made Torvalds infamous.

Michael Larabel, founder and principal author of the Linux news site Phoronix, went further. He compared and contrasted how Torvalds reacted to major Linux coding no-nos now and last year.

In 2018, a developer enabled a gaming device driver by default in the kernel. Torvalds replied:

We do *not* enable new random drivers by default. And we most *definitely* don’t do it when they are odd-ball ones that most people have never heard of.

Yet the new “BigBen Interactive” driver that was added this merge window did exactly that.

Just don’t do it.

In 2017, another developer made the same kind of blunder with the Dell SMBIOS driver. Then, Torvalds fired back:

As a developer, you think _your_ driver or feature is the most important thing ever, and you have the hardware.

AND ALMOST NOBODY ELSE CARES.

Read it and weep. Unless your hardware is completely ubiquitous, it damn well should not default to being enabled in everybody else’s config.

Notice the change in tone? I did, and I’m sure the wet-behind-the-ears developers on the receiving end did as well.

The new Linux Code of Conduct has now been in effect for several weeks. Before the CoC took effect, people — largely outside the Linux kernel community — were having fits about it. Developers would be kicked out for not being politically correct. Programmers would leave and take their code with them. Dogs and cats would start living together.

Well, maybe not the last, but you get the idea.

The results in the real world? All’s quiet on the LKML. The last substantive talk about the CoC was two weeks ago, and that conversation was more about the process of editing the CoC than its substance.

In short, things are peaceful in the Linux community, and they’re working hard on the next release.

Speaking of which, 4.20 will be a large release, with about 300,000 added and changed lines of code. But there’s nothing Earth-shattering in it. Torvalds wrote that 70 percent of it is driver updates, with the bulk of the rest being architecture updates and tooling.

That said, Torvalds is considering a change in how he works with the patches:

One thing I _would_ like to point out as the merge window closes: I tend to delay some pull requests that I want to take a closer look at until the second week of the merge window when things are calming down, and that _really_ means that I’d like to get all the normal pull requests in the first week of the two-week merge window. And most people really followed that, but by Wednesday this week I had gotten a bit frustrated that I kept getting new pull requests when I wanted to really just spend most of the day looking through the ones that deserved a bit of extra attention. And yes, people generally kind of know about this and I really do get *most* pull requests early. But I’m considering trying to make that a more explicit rule that I will literally stop taking new pull requests some time during the second week unless you have a good reason for why it was delayed.

Perhaps the most significant news coming out of this release is that the Linux kernel is now free of variable-length arrays (VLAs). While part of standard C, VLAs, Torvalds noted, are “actively bad not just for security worries, but simply because VLA’s are a really horribly bad idea in general in the kernel.”

Finally, WireGuard, a proposed built-in Linux virtual private network (VPN), won’t be making it into the kernel this go-around. This is due to some unresolved questions about how it handles encryption.

Perhaps with a kinder, gentler Torvalds in charge, who really likes WireGuard, WireGuard will finally make it in for the 2019 Linux 5.0 kernel release.

Source

How to manage storage on Linux with LVM

Logical Volume Manager (LVM) is a software-based RAID-like system that lets you create “pools” of storage and add hard drive space to those pools as needed. There are lots of reasons to use it, especially in a data center or any place where storage requirements change over time. Many Linux distributions use it by default for desktop installations, though, because users find the flexibility convenient and there are some built-in encryption features that the LVM structure simplifies.

However, if you aren’t used to seeing an LVM volume when booting off of a Live CD for data rescue or migration purposes, LVM can be confusing because the mount command can’t mount LVM volumes. For that, you need LVM tools installed. The chances are great that your distribution has LVM utils available—if they aren’t already installed.
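From a live environment, the usual sequence is to scan for volume groups, activate them, and then mount the logical volumes by their device-mapper paths. A sketch; the volume group and volume names (such as the fedora-root shown here) vary per system:

```shell
# Scan the attached disks for LVM volume groups
vgscan

# Activate all volume groups that were found
vgchange --activate y

# List the logical volumes, then mount one by its mapper path
# (fedora-root is only an example name)
lvs
mount /dev/mapper/fedora-root /mnt
```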

This tutorial explains how to create and deal with LVM volumes.

Create an LVM pool

This article assumes you have a working knowledge of how to interact with hard drives on Linux. If you need more information on the basics before continuing, read my introduction to hard drives on Linux.

Usually, you don’t have to set up LVM at all. When you install Linux, it often defaults to creating a virtual “pool” of storage and adding your machine’s hard drive(s) to that pool. However, manually creating an LVM storage pool is a great way to learn what happens behind the scenes.

You can practice with two spare thumb drives of any size, or two hard drives, or a virtual machine with two imaginary drives defined.

First, format the imaginary drive /dev/sdx so that you have a fresh drive ready to use for this demo.

# echo "warning, this ERASES everything on this drive."
warning, this ERASES everything on this drive.
# dd if=/dev/zero of=/dev/sdx count=8196
# parted /dev/sdx print | grep Disk
Disk /dev/sdx: 100GB
# parted /dev/sdx mklabel gpt
# parted /dev/sdx mkpart primary 1s 100%

This LVM command creates a storage pool. A pool can consist of one or more drives, and right now it consists of one. This example storage pool is named billiards, but you can call it anything.

# vgcreate billiards /dev/sdx1
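If you want to confirm that the pool really exists before carving volumes out of it, LVM ships reporting commands for each layer. A quick sketch, assuming the billiards group created above:

```shell
# pvs lists the physical volumes LVM knows about,
# vgs summarizes volume groups; both need root privileges.
pvs
vgs billiards
```

`vgs` should report one physical volume (PV) and, at this point, zero logical volumes (LV) in the group.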

Now you have a big, nebulous pool of storage space. Time to hand it out. To create two logical volumes (you can think of them as virtual drives), one called vol0 and the other called vol1, enter the following:

# lvcreate -L 49G --name vol0 billiards
# lvcreate -L 49G --name vol1 billiards

Now you have two volumes carved out of one storage pool, but neither of them has a filesystem yet. To create a filesystem on each volume, you must bring the billiards volume group online.

# vgchange --activate y billiards

Now make the file systems. The -L option provides a label for the drive, which is displayed when the drive is mounted on your desktop. The path to the volume is a little different than the usual device paths you’re used to because these are virtual devices in an LVM storage pool.

# mkfs.ext4 -L finance /dev/billiards/vol0
# mkfs.ext4 -L production /dev/billiards/vol1

You can mount these new volumes on your desktop or from a terminal.

# mkdir -p /mnt/vol0 /mnt/vol1
# mount /dev/billiards/vol0 /mnt/vol0
# mount /dev/billiards/vol1 /mnt/vol1

Add space to your pool

So far, LVM has provided nothing more than partitioning a drive normally provides: two distinct sections of drive space on a single physical drive (in this example, 49GB and 49GB on a 100GB drive). Imagine now that the finance department needs more space. Traditionally, you’d have to restructure. Maybe you’d move the finance department data to a new, dedicated physical drive, or maybe you’d add a drive and then use an ugly symlink hack to provide users easy access to their additional storage space. With LVM, however, all you have to do is expand the storage pool.

You can add space to your pool by formatting another drive and using it to create more additional space.

First, create a partition on the new drive you’re adding to the pool.

# parted /dev/sdy mkpart primary 1s 100%

Then use the vgextend command to mark the new drive as part of the pool.

# vgextend billiards /dev/sdy1

Finally, dedicate some portion of the newly available storage pool to the appropriate logical volume.

# lvextend -L +49G /dev/billiards/vol0

Of course, the expansion doesn’t have to be so linear. Imagine that the production department suddenly needs 100TB of additional space. With LVM, you can add as many physical drives as needed, adding each one and using vgextend to create a 100TB storage pool, then using lvextend to “stretch” the production department’s storage space across 100TB of available space.
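One caveat the steps above gloss over: lvextend only grows the logical volume, not the filesystem inside it. A minimal sketch of both ways to finish the job, assuming the ext4 volumes created earlier:

```shell
# Grow the volume and resize the ext4 filesystem in one step;
# the -r (--resizefs) flag runs the appropriate resize tool for you.
lvextend -r -L +49G /dev/billiards/vol0

# Or do it in two explicit steps:
# lvextend -L +49G /dev/billiards/vol0
# resize2fs /dev/billiards/vol0
```

Until the filesystem is resized, df will keep reporting the old capacity even though lvdisplay shows the larger volume.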

Use utils to understand your storage structure

Once you start using LVM in earnest, the landscape of storage can get overwhelming. There are two commands to gather information about the structure of your storage infrastructure.

First, there is vgdisplay, which displays information about your volume groups (you can think of these as LVM’s big, high-level virtual drives).

# vgdisplay
  --- Volume group ---
  VG Name               billiards
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / <237.47 GiB
  Free PE / Size        0 / 0
  VG UUID               j5RlhN-Co4Q-7d99-eM3K-G77R-eDJO-nMR9Yg

The second is lvdisplay, which displays information about your logical volumes (you can think of these as user-facing drives).

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/billiards/finance
  LV Name                finance
  VG Name                billiards
  LV UUID                qPgRhr-s0rS-YJHK-0Cl3-5MME-87OJ-vjjYRT
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-12-16 07:31:01 +1300
  LV Status              available
  # open                 1
  LV Size                149.68 GiB
  Current LE             46511
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
[...]

Use LVM in a rescue environment

The “problem” with LVM is that it wraps partitions in a way that is unfamiliar to many administrative users who are used to traditional drive partitioning. Under normal circumstances, LVM drives are activated and mounted fairly invisibly during the boot process or desktop LVM integration. It’s not something you typically have to think about. It only becomes problematic when you find yourself in recovery mode after something goes wrong with your system.

If you need to mount a volume that’s “hidden” within the structure of LVM, you must make sure that the LVM toolchain is installed. If you have access to your /usr/sbin directory, you probably have access to all of your usual LVM commands. But if you’ve booted into a minimal shell or a rescue environment, you may not have those tools. A good rescue environment has LVM installed, so if you’re in a minimal shell, find a rescue system that does. If you’re using a rescue disc and it doesn’t have LVM installed, either install it manually or find a rescue disc that already has it.
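In a rescue shell with a network connection, fetching the toolchain and rescanning for volume groups might look like this (package names are an assumption; adjust for your distribution's package manager):

```shell
# Debian/Ubuntu-based rescue environment:
apt install lvm2
# Fedora/RHEL-based rescue environment:
# dnf install lvm2

# Rescan all block devices for volume groups the rescue
# kernel hasn't activated yet:
vgscan
```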

For the sake of repetition and clarity, here’s how to mount an LVM volume.

# vgchange --activate y
2 logical volume(s) in volume group “billiards” now active
# mkdir /mnt/finance
# mount /dev/billiards/finance /mnt/finance

Integrate LVM with LUKS encryption

Many Linux distributions use LVM by default when installing the operating system. This permits storage extension later, but it also integrates nicely with disk encryption provided by the Linux Unified Key Setup (LUKS) encryption toolchain.

Encryption is pretty important, and there are two ways to encrypt things: you can encrypt on a per-file basis with a tool like GnuPG, or you can encrypt an entire partition. On Linux, encrypting a partition is easy with LUKS, which, being completely integrated into Linux by way of kernel modules, permits drives to be mounted for seamless reading and writing.
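For comparison, per-file encryption with GnuPG is a one-liner. This sketch uses symmetric mode, which needs only a passphrase (the filename is a placeholder):

```shell
# Encrypt a single file; gpg prompts for a passphrase and writes
# secrets.txt.gpg alongside the original.
gpg -c secrets.txt

# Decrypt it back later:
gpg -o secrets.txt -d secrets.txt.gpg
```

The trade-off is granularity: GnuPG protects individual files you remember to encrypt, while LUKS protects everything written to the partition, automatically.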

Encrypting your entire main drive usually happens as an option during installation. You select to encrypt your entire drive or just your home partition when prompted, and from that point on you’re using LUKS. It’s mostly invisible to you, aside from a password prompt during boot.

If your distribution doesn’t offer this option during installation, or if you just want to encrypt a drive or partition manually, you can do that.

You can follow this example by using a spare drive; I used a small 4GB thumb drive.

First, plug the drive into your computer. Make sure it’s safe to erase the drive and use lsblk to locate the drive on your system.
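lsblk prints a tree of every block device; the -f flag adds filesystem details, which helps confirm you have found the right (empty) drive before erasing anything:

```shell
# Show all block devices with filesystem info; an unformatted
# thumb drive appears with no FSTYPE and no MOUNTPOINT.
lsblk -f
```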

If the drive isn’t already partitioned, partition it now. If you don’t know how to partition a drive, check out the link above for instructions.

Now you can set up the encryption. First, format the partition with the cryptsetup command.

# cryptsetup luksFormat /dev/sdx1

Note that you’re encrypting the partition, not the physical drive itself. You’ll see a warning that LUKS is going to erase your drive; you must accept it to continue. You’ll be prompted to create a passphrase, so do that. Don’t forget that passphrase. Without it, you will never be able to get into that drive again!

You’ve encrypted the thumb drive’s partition, but there’s no filesystem on the drive yet. Of course, you can’t write a filesystem to the drive while you’re locked out of it, so open the drive with LUKS first. You can provide a human-friendly name for your drive; for this example, I used mySafeDrive.

# cryptsetup luksOpen /dev/sdx1 mySafeDrive

Enter your passphrase to open the drive.

Look in /dev/mapper and you’ll see that you’ve mounted the volume along with any other LVM volumes you might have, meaning you now have access to that drive. The custom name (e.g., mySafeDrive) is a symlink to an auto-generated designator in /dev/mapper. You can use either path when operating on this drive.

# ls -l /dev/mapper/mySafeDrive
lrwxrwxrwx. 1 root root 7 Oct 24 03:58 /dev/mapper/mySafeDrive -> ../dm-4

Create your filesystem.

# mkfs.ext4 -o Linux -L mySafeExt4Drive /dev/mapper/mySafeDrive

Now do an ls -lh on /dev/mapper and you’ll see that mySafeDrive is actually a symlink to some other dev, such as /dev/dm-4. That’s the filesystem you can mount:

# mount /dev/mapper/mySafeDrive /mnt/hd

Now the filesystem on the encrypted drive is mounted. You can read and write files as you’d expect with any drive.
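When you are finished, reverse the steps: unmount the filesystem, then close the LUKS mapping so the decrypted view of the drive disappears from /dev/mapper:

```shell
# Unmount the filesystem, then remove the /dev/mapper entry;
# after this, the data is only reachable with the passphrase again.
umount /mnt/hd
cryptsetup luksClose mySafeDrive
```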

Use encrypted drives with the desktop

LUKS is built into the kernel, so your Linux system is fully aware of how to handle it. Detach the drive, plug it back in, and mount it from your desktop. In KDE’s Dolphin file manager, you’ll be prompted for a password before the drive is decrypted and mounted.

Using LVM and LUKS is easy, and it provides flexibility for you as a user and an admin. Being tightly integrated into Linux itself, it’s well-supported and a great way to add a layer of security to your data. Try it today!
