Bash Shell Utility Reaches 5.0 Milestone | Linux.com

As we look forward to the release of Linux Kernel 5.0 in the coming weeks, we can enjoy another venerable open source technology reaching the 5.0 milestone: the Bash shell utility. The GNU Project has launched the public version 5.0 of GNU/Linux’s default command language interpreter. Bash 5.0 adds new shell variables and other features and also repairs several major bugs.

New shell variables in Bash 5.0 include BASH_ARGV0, which “expands to $0 and sets $0 on assignment,” says the project. The EPOCHSECONDS variable expands to the time in seconds since the Unix epoch, and EPOCHREALTIME does the same, but with microsecond granularity.
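As a quick sketch (requires Bash 5.0 or newer; on older shells these variables simply expand empty, hence the fallbacks), the new time variables can be used like this:

```shell
# EPOCHSECONDS and EPOCHREALTIME are new in Bash 5.0; the fallbacks
# below kick in on older shells where the variables are unset.
bash -c '
  echo "EPOCHSECONDS:  ${EPOCHSECONDS:-unavailable (bash < 5.0)}"
  echo "EPOCHREALTIME: ${EPOCHREALTIME:-unavailable (bash < 5.0)}"
'
```

On Bash 5.0, EPOCHSECONDS matches the output of `date +%s`, while EPOCHREALTIME carries a fractional microsecond part after the decimal point.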

New features include support in the “history -d” builtin for removing ranges of history entries, with negative arguments understood as offsets from the end of the history list. There is also a new option called “localvar_inherit” that allows local variables to inherit the value of a variable with the same name in the nearest preceding scope.
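A minimal sketch of “localvar_inherit” (the function names are made up for illustration; it requires Bash 5.0 and degrades gracefully on older versions):

```shell
#!/bin/bash
# With localvar_inherit enabled, a bare "local" declaration inherits
# the value of the same-named variable from the nearest preceding scope.
shopt -s localvar_inherit 2>/dev/null  # fails quietly on bash < 5.0

outer() {
  local msg="from outer"
  inner
}

inner() {
  local msg   # inherits "from outer" when the option is supported
  echo "${msg:-empty (localvar_inherit unsupported)}"
}

outer
```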

A new shell option called “assoc_expand_once” causes the shell to attempt to expand associative array subscripts only once, which may be required when they are used in arithmetic expressions. Among many other new features, a new option is available that can disable sending history to syslog at runtime. In addition, the “globasciiranges” shell option is now enabled by default.

Bash 5.0 also fixes several major bugs. It overhauls how nameref variables resolve and fixes “a number of potential out-of-bounds memory errors discovered via fuzzing,” says the GNU Project’s readme. Changes have been made to the “expansion of $@ and $* in various contexts where word splitting is not performed to conform to a Posix standard interpretation.” Other fixes resolve corner cases for Posix conformance.

Finally, Bash 5.0 introduces a few incompatibilities compared to the most recent Bash 4.4.x. For example, changes to how nameref variables are resolved can cause different behaviors for some uses of namerefs.

Bash to basics

Bash (Bourne-Again Shell) may be 5.0 in development years, but it’s a lot older in Earth orbits. The utility will soon celebrate its 30th anniversary: Brian Fox released the first Bash beta in June 1989.

Over the years, Bash has expanded upon the POSIX shell spec with interactive command line editing, history substitution, brace expansion, and on some architectures, job control features. It has also borrowed features from the Korn shell (ksh) and the C shell (csh). Most sh scripts can be run by Bash without modification, says the GNU Project.

Bash and other Bourne-based shell utilities have largely survived the introduction of GUI alternatives to the command line such as Git GUI. Experienced Linux developers — and especially sysadmins — tend to prefer the greater speed and flexibility of working directly with the command line. There are also situations where the GUI will spit you back to the command line anyway.

It’s really a matter of whether you will be spending enough time doing Linux development or administration to make it worthwhile to learn the commands. Besides, in a movie, isn’t it more exciting to watch the hacker frantically clacking away at the command line to disable the nuclear weapon rather than clicking options off a menu? Clacking rules!

Bash 5.0 is available for download from the GNU Project’s Bash 5.0 readme page.

Source

Essential System Tools: Krusader – KDE file manager


This is the latest in our series of articles highlighting essential system tools. These are small, indispensable utilities, useful for system administrators as well as regular users of Linux based systems. The series examines both graphical and text based open source utilities. For this article, we’ll look at Krusader, a free and open source graphical file manager. For details of all tools in this series, please check the table at the summary page of this article.

Krusader is an advanced, twin-panel (commander-style) file manager designed for KDE Plasma. Krusader also runs on other popular Linux desktop environments such as GNOME.

Besides comprehensive file management features, Krusader is fast, almost completely customizable, handles archives seamlessly, and offers a huge feature set.

Krusader is implemented in C++.

Installation

Popular Linux distros provide convenient packages for Krusader. So you shouldn’t need to compile the source code.

If you do want to compile the source code, bear in mind that recent versions of Krusader use libraries like Qt5 and KF5, and don’t work on KDE Plasma 4 or older.

On one of our vanilla test machines, KDE Plasma is not installed and there are no KDE applications installed. If you don’t currently use any KDE applications, remember that installing Krusader will drag in many other packages. Krusader’s natural environment is KDE Plasma 5, because it depends on services provided by KDE Frameworks 5 base libraries.

The image below illustrates this point sweetly. Installing Krusader without Plasma requires 36 packages to be installed, consuming a whopping 148 MiB of hard disk space.

Krusader Installation

The image below offers a stark contrast. Here, a different test machine, ‘pluto’, is a vanilla Linux installation running KDE Plasma 5. Installing Krusader doesn’t pull in any other packages and only consumes 14.90 MiB of disk space.

Krusader-KDE-install

Some of Krusader’s functionality is sourced from external tools. On the first run, Krusader searches for available tools in your $PATH. Specifically, it checks for a diff utility (kdiff3, kompare or xxdiff), an email client (Thunderbird or KMail), a batch renamer (KRename), and a checksum utility (md5sum). It also searches for (de)compression tools (tar, gzip, bzip2, lzma, xz, lha, zip, unzip, arj, unarj, unace, rar, unrar, rpm, dpkg, and 7z). You’re then presented with a Konfigurator window which lets you configure the file manager.

Krusader’s internal editor requires Kate to be installed. Kate is a competent text editor developed by KDE.


In Operation

Here’s Krusader in operation.

Krusader

Let’s break down the user interface. At the top is a standard menu bar which allows access to the features and functions of the file manager. “Useractions” seems a quirky entry.

Then there’s the main tool bar which offers access to commonly used functions. There’s a location tool bar, information label, and panel tool bars. The majority of the window is taken up by the left and right panels. Having two panels makes dragging and dropping files easy.

At the bottom there are totals labels, tabs, tab controls, function key buttons, and a status bar. You can also show a command line, but that’s not enabled by default.

Krusader’s tabs let you switch between different directories in one panel, without affecting the directory displayed in the adjacent panel.

Places, favorites and volumes are available in each panel, not on a common side bar.

Krusader offers a wide range of features. We’ll have a look at some of the standout features for illustration purposes; there are too many to go into great detail on them all! We’re also not going to consider basic file management operations; just take them for granted.

KRename is integrated with Krusader. Another highlight is BookMan, Krusader’s bookmark tool for bookmarking folders and local and remote URLs, and later returning to them with a click of a button.

There are built-in tree views, file previews, file split and join, as well as compress/decompress functions.

Krusader can launch a program by clicking on a data file. For example, clicking on an R file launches that document in RStudio.

With profiles you can save and restore your favorite settings. Several features support profiles: you can have, for example, different panel profiles (work, home, remote connections), search profiles, and synchroniser profiles.

KruSearcher

One of the strengths of Krusader is its ability to quickly locate files both locally and on remote file systems. There’s a General Section which covers most searches you’ll want to perform, but if you need additional functionality there’s an Advanced section too.

Let’s take a very simple search. We’re looking to find files in /home/sde/R (and sub-directories) that match the suffix .rdx.
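The same basic query is easy to reproduce from the shell with find; the demo directory below stands in for /home/sde/R:

```shell
# Recursively match files by suffix, as KruSearcher's basic search does.
mkdir -p rdx-demo/sub
touch rdx-demo/a.rdx rdx-demo/sub/b.rdx rdx-demo/c.txt
find rdx-demo -type f -name '*.rdx'
```

This prints the two .rdx paths and skips the .txt file.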

Krusader-KruSearcher

There’s a separate tab that displays the results of the search.

Krusader-KruSearcher-Results

Of course this is an extremely basic search. You can append multiple searches in the “Search for” bar, with or without wildcards, and exclude searches with the | character. You also have the option to specify multiple directories to search or exclude, as well as the ability to search for patterns in files (like grep). There’s recursive searching, the option to search archives, and to follow soft links during a search.

But that’s not the extent of the search functionality. With the Advanced tab, you can restrict the search to files matching a specific size or size range, modification dates, and even ownership.

Krusader-KruSearcher-Advanced

In the bottom left of each dialog box, there’s a profiles button. This can be a time-saver if you often perform the same search operation, as it allows you to save your search settings.

Synchronise Folders

This function compares two directories with all subdirectories and shows the differences between them. It’s accessed from Tools | Synchronise Folders (or from the keyboard shortcut Ctrl+Y).

The tool lets you synchronize files and directories; one of the panels can even be a remote location.

Here’s a comparison of two directories stored on different partitions.

Krusader-Synchronise

The image below shows that to synchronise the two directories, 45 files will be copied.

Krusader-Synchronise-Action

The Synchronizer is not the only way to compare files. There are other compare functions available. Specifically, you can compare files by content, and compare directories. The compare-by-content functionality (accessed from the menu bar via “File | Compare by Content”) calls an external graphical difference utility: Kompare, KDiff3, or xxdiff.

Disk Usage

A disk usage analyzer is a utility which helps users visualize the disk space being used by each folder and file on a hard disk or other storage media. This type of application often generates a graphical chart to help the visualization process.

Disk usage analyzers are popular with system administrators as one of their essential tools to prevent important directories and partitions from running out of space. Having a hard disk with insufficient free space can often have a detrimental effect on the system’s performance. It can even stop users from logging on to the system, or, in extreme circumstances, cause the system to hang.

However, disk usage analyzers are not just useful tools for system administrators. While modern hard disks are terabytes in size, there are many folk who seem to forever run out of hard drive space. Often the culprit is a large video and/or audio collection, bloated software applications, or games. Sometimes the hard disk is also full of data that users have no particular interest in. For example, left unchecked, log files and package archives can consume large chunks of hard disk space.

Krusader offers built-in disk usage functionality. It’s accessed from Tools | Disk Usage (or with the keyboard shortcut Alt+Shift+S).
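The command-line counterpart to this tool is du; here is a sketch using made-up demo directories:

```shell
# Per-directory totals in KiB, largest first: roughly what the Disk
# Usage tool visualizes graphically.
mkdir -p usage-demo/big usage-demo/small
head -c 100000 /dev/zero > usage-demo/big/blob.bin
head -c 100 /dev/zero > usage-demo/small/note.txt
du -sk usage-demo/* | sort -rn
```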

Here’s an image of the tool running.

Krusader-Disk-Usage-Running

And the output image showing the disk space consumed by each directory.

Krusader-Disk-Usage-Results

We’re not convinced that Krusader’s implementation is one of its strong points. We also experienced segmentation faults running the software in GNOME, although no such issues were found with KDE as our desktop environment. There’s definitely room for improvement.

Checksum generation and checking

A checksum is the result of running a checksum algorithm, typically a cryptographic hash function, on an item of data, usually a single file. A hash function is an algorithm that transforms (hashes) an arbitrary set of data elements, such as a text file, into a single fixed-length value (the hash).

Comparing the checksum that you generate from your version of the file, with the one provided by the source of the file, helps ensure that your copy of the file is genuine and error free. By themselves, checksums are often used to verify data integrity but are not relied upon to verify data authenticity.
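Under the hood, this is the same workflow as the command-line checksum tools Krusader calls out to; a sketch with sha256sum from GNU coreutils:

```shell
# Generate a checksum file, then verify the data against it.
printf 'example data\n' > file.txt
sha256sum file.txt > file.txt.sha256
sha256sum -c file.txt.sha256   # reports "file.txt: OK" while intact
```

If the file is later modified or corrupted, the verify step reports FAILED instead.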

You can create a checksum from File | Create Checksum.

Krusader-Checksum

There’s the option to choose the checksum method from a dropdown list. The supported checksum methods are:

  • md5 – a widely used hash function producing a 128-bit hash value.
  • sha1 – Secure Hash Algorithm 1. This cryptographic hash function is no longer considered secure.
  • sha224 – part of the SHA-2 family of cryptographic hash functions. SHA224 produces a 224-bit (28-byte) hash value, typically rendered as a hexadecimal number, 56 digits long.
  • sha256 – produces a 256-bit (32-byte) hash value, typically rendered as a hexadecimal number, 64 digits long.
  • sha384 – produces a 384-bit (48-byte) hash value, typically rendered as a hexadecimal number, 96 digits long.
  • sha512 – produces a 512-bit (64-byte) hash value, typically rendered as a hexadecimal number, 128 digits long.

Krusader checks if you have a tool that supports the type of checksum you need (from your specified checksum file) and displays the files that failed the checksum (if any).

Custom commands

Krusader can be extended with custom add-ons called User Actions.

User Actions are a way to call external programs with variable parameters.

There are a few example User Actions provided which will help you get started. And the KDE Store offers, at the time of writing, 45 community-created add-ons, which help to illustrate the possibilities.

Krusader-ActionMan

MountMan

MountMan is a tool which helps you manage your mounted file systems. Mount or unmount file systems of all types with a single mouse click.

When started from the menu (Tools | MountMan), it displays a list of all mounted file systems.

For each file system, MountMan displays its name (the actual device name, e.g. /dev/sda1 for the first partition on the first hard disk), its file system type (ext4, ext3, ntfs, vfat, ReiserFS, etc.) and its mount point on your system (the directory on which the file system is mounted).

MountMan also displays usage information: total size, free space, and the percentage of space free. You can sort by clicking the title of any column (in ascending or descending order).
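MountMan’s table corresponds closely to what df reports from the command line (the -T flag for the file system type column is a GNU coreutils extension):

```shell
# Device, file system type, size, used/free space, and mount point
# for every mounted file system, in human-readable units.
df -hT
```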

Here’s an image from one of our test systems.

Krusader-MountMan

On this test system, sorting by the Free % column doesn’t list the partitions in the correct order.

Highly Configurable

Krusader offers a wealth of configuration options. Use the Menu bar and choose “Settings | Configure Krusader”. This opens a dialog box with many options for configuring the software.

In the Start up section, you can choose a startup profile. This can be a real time-saver. There’s a last session option.

Krusader-Konfigurator-Startup

The Panel section has a whole raft of configuration options with sections for General, View, Buttons, Selection Mode, Media Menu and Layout.

Krusader-Konfigurator-Panel

By default the software uses KDE colours, but you can configure the colours of every element to your heart’s content.

Krusader-Konfigurator-Colours

This panel covers basic operations, including the external terminal, the viewer/editor, and Atomic extensions.

Krusader-Konfigurator-General

With the Advanced section, you can automount filesystems, turn off specific user confirmations (not recommended), even fine-tune the icon cache size (which alters the memory footprint of Krusader).

Krusader-Konfigurator-Advanced

The Archives section lets you change the way the software deals with archives. We don’t recommend enabling write support for archives, as there’s the possibility of data loss in the event of a power failure.

Krusader-Konfigurator-Archives

The dependencies section is where you define the location of external applications including general tools, packers and checksum utilities.

Krusader-Konfigurator-Dependencies

The User Actions section lets you configure settings in relation to ‘useractions’. You can also change the font for the output-collection.

Krusader-Konfigurator-User-Actions

The final section links MIME types to protocols.

Krusader-Konfigurator-Protocols

Website: krusader.org
Support: Krusader Handbook
Developer: Krusader Krew
License: GNU General Public License v2

———————————————————————————————–

Other tools in this series:

Essential System Tools
ps_mem Accurate reporting of software’s memory consumption
gtop System monitoring dashboard
pet Simple command-line snippet manager
Alacritty Innovative, hardware-accelerated terminal emulator
inxi Command-line system information tool that’s a time-saver for everyone
BleachBit System cleaning software. Quick and easy way to service your computer
catfish Versatile file searching software
journalctl Query and display messages from the journal
Nmap Network security tool that builds a “map” of the network
ddrescue Data recovery tool, retrieving data from failing drives as safely as possible
Timeshift Similar to Windows’ System Restore and the Time Machine tool in macOS
GParted Resize, copy, and move partitions without data loss
Clonezilla Partition and disk cloning software
fdupes Find or delete duplicate files
Krusader Advanced, twin-panel (commander-style) file manager
nmon Systems administrator, tuner, and benchmark tool
f3 Detect and fix counterfeit flash storage
QJournalctl Graphical User Interface for systemd’s journalctl

Source

Turn a Raspberry Pi 3B+ into a PriTunl VPN

PriTunl is a VPN solution for small businesses and individuals who want private access to their network.

PriTunl is a fantastic VPN terminator solution that’s perfect for small businesses and individuals who want a quick and simple way to access their network privately. It’s open source, and the basic free version is more than enough to get you started and cover most simple use cases. There is also a paid enterprise version with advanced features like Active Directory integration.

Special considerations on Raspberry Pi 3B+

PriTunl is generally simple to install, but this project—turning a Raspberry Pi 3B+ into a PriTunl VPN appliance—adds some complexity. For one thing, PriTunl is supplied only as AMD64 and i386 binaries, but the 3B+ uses ARM architecture. This means you must compile your own binaries from source. That’s nothing to be afraid of; it can be as simple as copying and pasting a few commands and watching the terminal for a short while.

Another problem: PriTunl seems to require a 64-bit architecture. I found this out when I got errors trying to compile PriTunl on my Raspberry Pi’s 32-bit operating system. Fortunately, Ubuntu’s beta version of 18.04 for ARM64 boots on the Raspberry Pi 3B+.
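A quick way to confirm you are on a 64-bit ARM userland before attempting the build:

```shell
# "aarch64" means a 64-bit ARM OS; "armv7l" indicates the 32-bit
# userland that the PriTunl build chokes on.
uname -m
```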

Also, the Raspberry Pi 3B+ uses a different bootloader from other Raspberry Pi models. This required a complicated set of steps to install and update the necessary files to get a Raspberry Pi 3B+ to boot.

Installing PriTunl

You can overcome these problems by installing a 64-bit operating system on the Raspberry Pi 3B+ before installing PriTunl. I’ll assume you have basic knowledge of how to get around the Linux command line and a Raspberry Pi.

Start by opening a terminal and downloading the Ubuntu 18.04 ARM64 beta release by entering:

wget http://cdimage.ubuntu.com/releases/18.04/beta/ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img.xz

Unpack the download:

xz -d ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img.xz

Insert the SD card you’ll use with your Raspberry Pi into your desktop or laptop computer. Your computer will assign the SD card a device name, something like /dev/sda or /dev/sdb. Enter the dmesg command and examine the last lines of the output to find out the card’s drive assignment.

Be VERY CAREFUL with the next step! I can’t stress that enough; if you get the drive assignment wrong, you could destroy your system.

Write the image to your SD card with the following command, changing <DRIVE> to your SD card’s drive assignment (obtained in the previous step):

dd if=ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img of=<DRIVE> bs=8M

After it finishes, insert the SD card into your Pi and power it up. Make sure the Pi is connected to your network, then log in with username/password combination ubuntu/ubuntu.

Enter the following commands on your Pi to install a few things to prepare to compile PriTunl:

sudo apt-get -y install build-essential git bzr python python-dev python-pip net-tools openvpn bridge-utils psmisc golang-go libffi-dev mongodb

There are a few changes from the standard PriTunl source installation instructions on GitHub. Make sure you are logged into your Pi and sudo to root:

sudo su -

This should leave you in root’s home directory. To install PriTunl version 1.29.1914.98, enter (per GitHub):

export VERSION=1.29.1914.98
tee -a ~/.bashrc << EOF
export GOPATH=\$HOME/go
export PATH=/usr/local/go/bin:\$PATH
EOF

source ~/.bashrc
mkdir pritunl && cd pritunl
go get -u github.com/pritunl/pritunl-dns
go get -u github.com/pritunl/pritunl-web
sudo ln -s ~/go/bin/pritunl-dns /usr/bin/pritunl-dns
sudo ln -s ~/go/bin/pritunl-web /usr/bin/pritunl-web
wget https://github.com/pritunl/pritunl/archive/$VERSION.tar.gz
tar -xf $VERSION.tar.gz
cd pritunl-$VERSION
python2 setup.py build
pip install -r requirements.txt
python2 setup.py install --prefix=/usr/local

Now the MongoDB and PriTunl systemd units should be ready to start up. Assuming you’re still logged in as root, enter:

systemctl daemon-reload
systemctl start mongodb pritunl
systemctl enable mongodb pritunl

That’s it! You’re ready to hit PriTunl’s browser user interface and configure it by following PriTunl’s installation and configuration instructions on its website.

Related Stories:

Source

NVIDIA GeForce GTX 760/960/1060 / RTX 2060 Linux Gaming & Compute Performance Review

The NVIDIA GeForce RTX 2060 is shipping today as the most affordable Turing GPU option to date at $349 USD. Last week we posted our initial GeForce RTX 2060 Linux review and followed-up with more 1080p and 1440p Linux gaming benchmarks after having more time with the card. In this article is a side-by-side performance comparison of the GeForce RTX 2060 up against the GTX 1060 Pascal, GTX 960 Maxwell, and GTX 760 Kepler graphics cards. Not only are we looking at the raw OpenGL, Vulkan, and OpenCL/CUDA compute performance between these four generations, but also the power consumption and performance-per-Watt.

Following up on the earlier RTX 2060 Linux benchmarks, over the weekend I wrapped up some GTX 760 vs. GTX 960 vs. GTX 1060 vs. RTX 2060 benchmarks on the same Ubuntu 18.04 LTS system with the NVIDIA 415.25 driver on the Linux 4.20 kernel. Here are some of the key specifications as a reminder:

The GeForce RTX 2060 also has ray-tracing capabilities, tensor cores, USB Type-C VirtualLink, and other advantages over the previous generations.

Via the Phoronix Test Suite a range of graphics/gaming and compute benchmarks were carried out. The Phoronix Test Suite was also polling the AC system power consumption in real-time from a WattsUp Pro power meter in order to generate performance-per-Watt metrics for each game/application under test.

 

Source

Understanding the Boot process — BIOS vs UEFI – Linux Hint

The boot process is a universe unto its own. A lot of steps need to happen before your operating system takes over and you get a running system.

In some sense, there is a tiny embedded OS involved in this whole process. While the process differs from one hardware platform to another, and from one OS to another, let’s look at some of the commonalities that will help us gain a practical understanding of the boot process.

Let’s talk about the regular, non-UEFI, boot process first. What happens between the moment you press the power button and the point where your OS boots and presents you with a login prompt?

Step 1: The CPU is hardwired to run instructions from a physical component, called NVRAM or ROM, upon startup. These instructions constitute the system’s firmware. It is in this firmware that the distinction between BIOS and UEFI is drawn. For now, let’s focus on BIOS.

It is the responsibility of the firmware, the BIOS, to probe various components connected to the system like disk controllers, network interfaces, audio and video cards, etc. It then tries to find and load the next set of bootstrapping code.

The firmware goes through storage devices (and network interfaces) in a predefined order, and tries to find a bootloader stored within them. This process is not something a user typically involves herself with. However, there’s a rudimentary UI that you can use to tweak various parameters concerning the system firmware, including the boot order.

You typically enter this UI by holding the F12, F2, or DEL key as the system boots. To find the specific key for your system, refer to your motherboard’s manual.

Step 2: The BIOS then assumes that the boot device starts with an MBR (Master Boot Record), which contains a first-stage boot loader and a disk partition table. This first block, the boot block, is small, and the bootloader within it is very minimalist; it can’t do much else, such as read a file system or load a kernel image.
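The MBR layout is easy to poke at. As a sketch, here is a scratch 512-byte “sector” stamped with the two-byte boot signature (0x55 0xAA at offset 510) that the BIOS checks for; the first sector of a real bootable disk carries the same bytes:

```shell
# Create a zeroed 512-byte sector, write the boot signature at offset
# 510 (octal \125\252 = hex 55 aa), and read it back with od.
dd if=/dev/zero of=mbr-demo.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=mbr-demo.img bs=1 seek=510 conv=notrunc 2>/dev/null
od -An -tx1 -j 510 -N 2 mbr-demo.img   # prints: 55 aa
```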

So the second stage bootloader is called into being.

Step 3: The second-stage bootloader is responsible for locating and loading the proper operating system kernel into memory. The most common example, for Linux users, is the GRUB bootloader. In case you are dual-booting, it even provides you with a simple UI to select the appropriate OS to start.

Even when you have a single OS installed, the GRUB menu lets you boot into advanced mode, or rescue a corrupt system by logging into single-user mode. Other operating systems have different boot loaders; FreeBSD comes with one of its own, as do other Unices.

Step 4: Once the appropriate kernel is loaded, there’s still a whole list of userland processes waiting to be initialized. This includes your SSH server, your GUI, etc., if you are running in multiuser mode, or a set of utilities to troubleshoot your system if you are running in single-user mode.

Either way, an init system is required to handle the initial process creation and continued management of critical processes. Here again we have a range of options, from the traditional init shell scripts that primitive Unices used, to the immensely complex systemd implementation which has taken over the Linux world and has its own controversial status in the community. BSDs have their own variant of init, which differs from the two mentioned above.

This is a brief overview of the boot process. A lot of complexities have been omitted, in order to make the description friendly for the uninitiated.

UEFI specifics

The difference between UEFI and BIOS shows up in this very first step. If the firmware is of a more modern variant, called UEFI, or Unified Extensible Firmware Interface, it offers a lot more features and customizations. It is supposed to be much more standardized, so motherboard manufacturers don’t have to worry about every specific OS that might run on top of them, and vice versa.

One key difference between UEFI and BIOS is that UEFI supports the more modern GPT partitioning scheme, and UEFI firmware has the capability to read files from a small FAT file system.
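One practical consequence you can observe from a running Linux system: the kernel exposes whether it booted via UEFI through sysfs, so a simple shell check tells you which firmware path was used:

```shell
# /sys/firmware/efi exists only when the kernel was started by UEFI
# firmware; its absence implies a legacy BIOS (or non-UEFI) boot.
if [ -d /sys/firmware/efi ]; then
  echo "Booted via UEFI"
else
  echo "Booted via legacy BIOS (or a non-UEFI environment)"
fi
```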

Often, this means that your UEFI configuration and binaries sit on a GPT partition on your hard disk. This is known as the ESP (EFI System Partition), typically mounted at /boot/efi.

Having a mountable file system means that your running OS can read the same file system (and, dangerously enough, edit it as well!). Some malware exploits this capability to infect the very firmware of your system, persisting even after an OS reinstall.

UEFI, being more flexible, eliminates the necessity of having a second-stage boot loader like GRUB. Often, if you are installing a single (well-supported) operating system like Ubuntu desktop or Windows with UEFI enabled, you can get away with not using GRUB or any other intermediate bootloader.

However, most UEFI systems still support a legacy BIOS option; you can fall back to this if something goes wrong. Likewise, if the system is installed with both BIOS and UEFI support in mind, it will have an MBR-compatible block in the first few sectors of the hard disk. And if you need to dual boot your computer, or just want a second-stage bootloader for other reasons, you are free to use GRUB or any other bootloader that suits your use case.

Conclusion

UEFI was meant to unify the modern hardware platform so operating system vendors can freely develop on top of it. However, it has slowly turned into a bit of a controversial piece of technology, especially if you are trying to run an open source OS on top of it. That said, it does have its merits, and it is better not to ignore its existence.

On the flip side, legacy BIOS is also going to stick around for at least a few more years. Understanding it is equally important in case you need to fall back to BIOS mode to troubleshoot a system. I hope this article informed you well enough about both these technologies that the next time you encounter a new system in the wild, you can follow the instructions of obscure manuals and feel right at home.

Source

Basic Emacs Command Explained in Detail

Brief: This detailed guide will give you enough information to start using Emacs, and enough extra to make you want more.

There are many text-based editors in Linux. Some come with most distros; others you have to install after the fact. Text-based editors are an important tool for any Linux user or admin. Servers often don’t have a GUI, and while Linux in itself is very stable, I have seen GUIs crash many times. When you lose your GUI, having a set of text-based tools that you are comfortable with is a must.

Before I get started on the basics of operating GNU Emacs, I want to clarify something first. You are probably aware of the “Emacs vs Vim” war that is responsible for many overheated discussions. I love Vim; I used it for over a decade. But here is the thing about Emacs: it is not just a text editor.

At its core, Emacs could simply be described as a framework of buffers and frames. Frames are how you split your windows; you can have as many frames as you want. In GUI mode, you can have multiple Emacs windows, each containing one or more frames. Then you have buffers, which are filled with content fed from somewhere. When you feed a buffer with a file, Emacs plays the role of a text editor. You can even use Emacs as your window manager.

Get familiar with Emacs layout

Getting started with Emacs

First, let’s focus on the basics. Here you will learn the basics of operating Emacs.

Emacs can be installed directly through your distribution’s package manager. After installation, starting Emacs will start the GUI mode. This mode can be useful when you start out, as it provides menu bar access. It is important to remember that every menu entry simply executes an Emacs command, and this can all be done in text mode. To force text mode while in a GUI environment, run emacs -nw in a terminal.

When you first start Emacs, you will be presented with a welcome screen. This screen already displays more of Emacs’ features. The underlined texts are links; they can be activated with a mouse click or by pressing Enter with the cursor on one. Some are links to a webpage, which would more than likely open in eww (Emacs’ built-in web browser). There is also a built-in tutorial on that page.

Your initial layout will consist of a single frame, most likely containing the About Emacs content. Below the frame, you have the status bar and what appears to be an empty space underneath it. This empty space is actually a mini-buffer that Emacs uses to interact with you. In the image, the mini-buffer contains the text <Print> is undefined.

Understand the basic layout of Emacs

The essential concept of key bindings in Emacs

Key bindings (keyboard shortcuts) are how you command Emacs; almost everything starts with a key binding.

Emacs uses modifier keys as the key binding prefix. The most important modifiers are C (Ctrl), M (Meta, usually mapped to Alt), and S (Shift).

To summarize, the key convention is:

  • C = Ctrl
  • M = Meta = Alt
  • S = Shift

I honestly think that the key bindings are one of the main reasons people shy away from learning Emacs. There are over 1,300 key bindings in a default Emacs setup. But don’t forget, Emacs is only a text editor when editing files; those 1,300+ key bindings do a lot more than edit files.

So how does one start learning Emacs and its obsession with key bindings? It is fairly simple: the basic idea is to practice a few key bindings at a time, and they will become muscle memory very quickly. When I record video lessons about Emacs, I say the key bindings as I press them, or at least I try. The truth is that the keystrokes are long done and processed while I am still trying to figure out which ones they were. The hardest part was writing them all down for this document.

Let’s get started, and remember to practice the key bindings a lot!

The best way to explain how key bindings are written is with a few examples. Not all of these are functional key bindings, though:

  • C-x = CTRL+x
  • C-x 2 = Press CTRL and x, then press 2 (the CTRL key has to be released before pressing 2)
  • C-x C-2 = Press CTRL and x, then CTRL and 2. Or CTRL-x-2 (don’t release CTRL)
  • M-x command <RET> = Press META and x, type the command, then press Enter

You will also regularly see key bindings written in documentation with text in parentheses:

C-x C-f (find-file)

The text inside the parentheses represents the Emacs function that will be executed by this key binding. It will become clear soon why this is important.

Emacs’ favorite keys are CTRL and ALT. While the ALT key does not pose a problem, extensive use of the left CTRL key is well known to cause pinky finger problems. You can easily use xmodmap to swap the CTRL and CAPSLOCK keys, or any other keys you prefer.
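As a sketch of that idea, a minimal ~/.Xmodmap that swaps Caps Lock and the left Ctrl key could look like the following (the file is written to /tmp here purely for illustration; in practice you would save it as ~/.Xmodmap and load it with xmodmap):

```shell
# Sketch of an ~/.Xmodmap that swaps Caps Lock and the left Ctrl key.
# Load it with: xmodmap ~/.Xmodmap
cat > /tmp/Xmodmap-swap <<'EOF'
remove Lock    = Caps_Lock
remove Control = Control_L
keysym Control_L = Caps_Lock
keysym Caps_Lock = Control_L
add Lock    = Caps_Lock
add Control = Control_L
EOF
```

If your desktop uses an XKB-based layout tool, the one-liner `setxkbmap -option ctrl:swapcaps` achieves the same swap without a config file.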

Using Emacs with key bindings aka Emacs Commands

As previously mentioned, focus on learning a few key bindings at a time; they will become muscle memory. This first set is the hardest part but will be enough for you to start working with Emacs as a text editor.

Manipulating Frames
C-x 2 split-window-below
C-x 3 split-window-right
C-x o other-window
C-x 1 delete-other-windows
C-x 0 delete-window
Manipulating buffers
C-x b switch-to-buffer
C-x C-b list-buffers
C-x k kill-buffer
Open and save files
C-x C-f find-file
C-x C-s save-buffer
Search & replace
C-s isearch-forward
C-r isearch-backward
M-% query-replace
Select, cut, copy & paste
C-<SPACE> set-mark-command
C-w kill-region
M-w kill-ring-save
C-y yank
M-y yank-pop
Executing commands
M-x execute-extended-command

Let’s talk about these in detail.

Manipulating Frames in Emacs

As previously mentioned, frames are how Emacs splits its window. Before learning about buffers, let’s look at how to split the screen.

  • C-x 2 will split the current frame horizontally.
  • C-x 3 will split the current frame vertically.
  • C-x o will move focus to the next frame. If the mini-buffer is active, it will be part of the cycle.
  • C-x 1 will close all other frames, leaving only the current frame. It does not close any buffers.
  • C-x 0 will close the current frame.

The image below displays many frames, some showing the same buffer but with the cursor at different locations.

Multiple frames in Emacs

Manipulating Buffers in Emacs

  • C-x b (switch-to-buffer)
  • C-x C-b (list-buffers)
  • C-x k (kill-buffer)

Buffers are what you work with; they contain data, mostly text. When you open a file, its content is placed in a buffer named after the filename. You then work within the buffer, and changes are applied to the source file when you save. By default, Emacs has autosave. It does not save every x minutes, but after every x modifications made to the buffer; an open buffer with no modifications will not trigger the autosave.

To switch to a different buffer, press C-x b; the minibuffer will activate, ready to receive the buffer name. Autocompletion is bound to the <TAB> key. If multiple matches are possible, pressing <TAB> a second time will open a temporary buffer with all possible matches. Should you provide a nonexistent buffer name, a new buffer will be created. This does NOT create a new file. I often use this to create temporary buffers.

It takes no time to accumulate many buffers, and you may not remember all of them. To list all the currently open buffers, press C-x C-b. A new buffer will open in a split frame with the full list of buffers. You can move around within that buffer with the arrow keys and switch to the buffer at point with <RET>.

From within the buffer list, you can flag the entry at point for:

  • Saving, by pressing s
  • Deletion (kills the buffer; does not delete the file), by pressing d or k
  • Removing flags, by pressing u
  • Executing all marked flags, by pressing x

To kill any buffer, press C-x k; the minibuffer will activate and wait for a buffer name. You can enter any existing buffer name and press <RET>. Should you not provide a name and just press <RET>, Emacs will kill the current buffer.

The following image displays the buffer list with buffers flagged for saving or deletion.

Buffer list in Emacs

Open and save files in Emacs

  • C-x C-f (find-file)
  • C-x C-s (save-buffer)

Most of the time you need to work with files. Start by opening a file with C-x C-f. The minibuffer presents you with a path and is ready for you to enter the path of the file you want to open. Auto-completion works here too; <TAB> will complete directory and file names. Once you are satisfied with the file path, press <RET>. A new buffer will open with the content of the file, and the buffer will be named after it. Once a buffer has been modified, two asterisks appear on the left side of the status bar. To save your modifications, press C-x C-s.

If you provide a path to a nonexistent file after pressing C-x C-f, a new buffer will be created, but the file itself will only be created when you save the buffer.

Should you provide a directory path after C-x C-f, a new buffer will open with the content of the directory. This new buffer will be in DIRED mode, in which you can move around with the arrow keys. When on the desired file, press <RET> and it will open in a new buffer. Should you press <RET> on a directory, once again a new buffer will open in DIRED mode.

Many operations can be done to the filesystem while in DIRED mode, but as previously explained, let’s focus on the basics first.

When you work on a Linux system, you often have to work on files that require root access. Emacs handles this very well. Do not close Emacs and run it as root; that is not a good idea.

Press C-x C-f, then erase the given path in the minibuffer and replace it with /sudo::/path/to/file <RET>. Emacs will attempt to open the file with the sudo command, and if required, you will be prompted for your password.

Should you attempt to use auto-completion with <TAB>, your password will be requested if needed, and auto-completion will work. DIRED mode can be opened as root too. Note that a new buffer named *tramp/sudo …. will be created; this is a buffer Emacs needs to handle sudo.

Should you remove it, although that is not recommended, you will more than likely be asked for your password again when attempting to save, and the tramp buffer will come back. Opening multiple buffers as root will result in multiple tramp buffers.

It is just as easy to open a file on a remote computer using ssh from within Emacs:

C-x C-f /ssh:user@host:/path/to/file

Wondering about opening a file on a remote system as root? Yes, that is possible, but I will keep that one for another time.

This image shows DIRED mode of my home directory in the left frame and my .emacs file in the right one.

Opening a remote file in Emacs

Search & Replace in Emacs

  • C-s (isearch-forward)
  • C-r (isearch-backward)
  • M-% (query-replace)

To perform a forward search, press C-s; this will activate the minibuffer for you to enter the text to search for. The search happens as you type. To move to the next match, press C-s again. The search will wrap around, but only after warning you once. Note that pressing backspace will first move back to the previous match before deleting a character.

To search backwards, press C-r.

To replace, press M-%; the minibuffer will be ready to take the search string. Enter it and press <RET>, then provide the replacement string and press <RET> again. Emacs will then stop at each match: press y to replace it, n to skip it, or ! to replace all remaining matches.

The search & replace will execute from cursor to end of buffer.

Select, cut, copy and paste in Emacs

  • C-<SPACE> (set-mark-command)
  • C-w (kill-region)
  • M-w (kill-ring-save)
  • C-y (yank)
  • M-y (yank-pop)

Emacs has a whole different concept of select, cut, copy & paste. First, let’s look at selecting. While the mouse works very well in GUI mode, learning not to use it will pay off when the GUI is gone. What you call a selected area, Emacs calls an active region. To make a selection, i.e., to activate a region, place your cursor at the beginning of the desired area, press C-<SPACE>, then move your cursor one character past the end of the desired area. The region is activated automatically. While this active region concept may seem pointless now, it is important to understand it for when you start writing functions to automate repetitive tasks.

Emacs does not cut, it kills. When you kill an active region with C-w, Emacs will “cut” it out of the buffer and save it in a kill ring. The kill ring keeps multiple entries, either killed (C-w) or copied (M-w). You can then yank (paste) those entries out of the kill ring into the current buffer at point with C-y. If, right after pressing C-y, you press M-y, the entry that was pasted into the buffer from the kill ring will be replaced with the previous one. Yanking an entry from the kill ring does NOT remove it, so it can be yanked again later.

Execute extended command in Emacs

  • M-x (execute-extended-command)

Meta-x is a unique key binding. After pressing it, look at the mini buffer. Your cursor will automatically be placed there, and Emacs is ready to receive a command.

The commands for the key bindings are written in parentheses. Do not type the parentheses when typing the command. After writing your command, press <RET>.

It is common to see M-x command written in the following format:

M-x find-file <RET> ~/path/to/file <RET>

Note that auto-completion is supported with the <TAB> key.

Why learn about commands when you can use key bindings? First, even with over 1,300 key bindings, not every command has one. Commands also give you the power to write functions that perform the desired modifications on a selected area (active region) or an entire buffer. Those functions can then be called with M-x functionName and, if needed, bound to a key combination of your choice.

One command worth showing a new user is M-x shell <RET>. Remember, as I said, Emacs is not just a text editor. The above command will open a new buffer with a shell prompt. This is not a full terminal emulator; for example, it does not handle curses-based applications very well, as the display is not refreshed properly. But it has other advantages. This shell buffer is a read/write text buffer. When you place your cursor at the end, type a command and press <RET>, the command is sent to a subshell and the output is added to the buffer. This allows you to easily search back and forth, copy and paste, select an entire region, and even save the buffer content to a file.

The second most important command to learn is M-x rename-buffer <RET> NewBufferName <RET>. This command allows you to rename the current buffer. If the buffer contains the content of a file, this will not rename the file. Renaming is useful when you need more than one shell: if you type M-x shell <RET> while one is already open, Emacs will bring forth the existing one. Rename the *shell* buffer to open more shells. Emacs also has eshell and term, but those are beyond our current scope.

Conclusion

This article is a very short and quick introduction to Emacs. You learned the basic Emacs commands/key bindings and some basic concepts about editing with Emacs.

There is a lot more it can do for you, with over 4,000 packages available through Emacs’ integrated package management system. The flexibility it offers is astonishing. Emacs will grow and evolve with you.

Source

Linux Today – Get started with Wekan, an open source kanban board

In the second article in our series on open source tools that will make you more productive in 2019, check out Wekan.

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year’s resolutions, the itch to start the year off right, and of course, an “out with the old, in with the new” attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn’t have to be that way.

Here’s the second of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

Wekan

Kanban boards are a mainstay of today’s agile processes. And many of us (myself included) use them to organize not just our work but also our personal lives. I know several artists who use apps like Trello to keep track of their commission lists as well as what’s in progress and what’s complete.

But these apps are often linked to a work account or a commercial service. Enter Wekan, an open source kanban board you can run locally or on the service of your choice. Wekan offers much of the same functionality as other Kanban apps, such as creating boards, lists, swimlanes, and cards, dragging and dropping between lists, assigning to users, labeling cards, and doing pretty much everything else you’d expect in a modern kanban board.

The thing that distinguishes Wekan from most other kanban boards is the built-in rules. While most other boards support emailing updates, Wekan allows you to set up triggers when taking actions on cards, checklists, and labels.

Wekan can then take actions like moving cards, updating labels, adding checklists, and sending emails.

Setting up Wekan locally is a snap—literally. If your desktop supports Snapcraft applications, installing is as easy as:

sudo snap install wekan

It also supports Docker, which makes installing it reasonably straightforward on most servers and desktops.
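As a rough sketch of a Docker-based setup, a compose file along the following lines should work; the image name and environment variables are taken from the upstream Wekan docs, but treat the details (MongoDB version, port, ROOT_URL) as assumptions to adjust for your host. The file is written to /tmp here for illustration:

```shell
# Sketch of a minimal docker-compose file for Wekan plus its MongoDB backend.
# Image names and env vars follow the upstream README; adjust ROOT_URL to your host.
cat > /tmp/wekan-compose.yml <<'EOF'
version: "2"
services:
  wekandb:
    image: mongo:3.2
    volumes:
      - wekan-db:/data/db
  wekan:
    image: wekanteam/wekan:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=http://localhost:8080
    depends_on:
      - wekandb
volumes:
  wekan-db:
EOF
# Then: docker-compose -f /tmp/wekan-compose.yml up -d   and browse to port 8080
```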

Overall, if you want a nice kanban board that you can run yourself, Wekan has you covered.

Related Stories:

Source

Beta: mod_lsapi updated – CloudLinux OS Blog

New updated mod_lsapi packages for CloudLinux 6 and 7 as well as for Apache 2.4 (CloudLinux 6 and CloudLinux 7) and EasyApache 4 (CloudLinux 6 and 7) are now available for download from our updates-testing repository.

Changelog:

liblsapi-1.1-35

mod_lsapi-1.1-35

ea-apache24-mod_lsapi-1.1-35

httpd24-mod_lsapi-1.1-35

  • Fixed statistics exceptions flow;
  • MODLS-615: increased epoch for ea-apache24-mod_lsapi due to conflict with package from EasyApache 4 repository;
  • MODLS-615: added liblsapi conflict with ea-liblsapi.

To update:

cPanel & RPM Based:

$ yum update liblsapi liblsapi-devel --enablerepo=cloudlinux-updates-testing
$ yum update mod_lsapi --enablerepo=cloudlinux-updates-testing
$ service httpd restart

DirectAdmin:

$ yum update liblsapi liblsapi-devel --enablerepo=cloudlinux-updates-testing
$ cd /usr/local/directadmin/custombuild
$ ./build set cloudlinux_beta yes
$ ./build update
$ ./build mod_lsapi

To install, follow the instructions in the documentation.

EasyApache 4:

$ yum update liblsapi liblsapi-devel ea-apache24-mod_lsapi --enablerepo=cl-ea4-testing --enablerepo=cloudlinux-updates-testing
$ service httpd restart

To install:

$ yum-config-manager --enable cl-ea4-testing
$ yum update liblsapi liblsapi-devel --enablerepo=cloudlinux-updates-testing

Read the cPanel EasyApache 4 documentation.

$ yum-config-manager --disable cl-ea4-testing

Go to MultiPHP Manager and enable mod_lsapi on your domains through lsapi handler.

httpd24 for CloudLinux 6 and CloudLinux 7

$ yum update liblsapi liblsapi-devel --enablerepo=cloudlinux-updates-testing
$ yum update httpd24-mod_lsapi --enablerepo=cloudlinux-updates-testing

Source

Why Linux Binaries are not as Easy to Handle? – OSnews

Have you ever wondered why in other operating systems, such as Windows, MacOS or even BeOS, installing software is so easy compared to Linux? In those OSes you can simply download and decompress a file, or run an installer that easily walks you through the process.

This doesn’t happen in Linux, as there are only two standard ways to install software: compiling it or installing packages. Such methods can be inconsistent and complicated for new users, but I am not going to write about them, as that has been done in countless previous articles. Instead, I am going to focus on why it is difficult for developers to provide a simpler way.

So, why can’t we install and distribute programs in Linux with the same ease as we do in other operating systems? The answer lies in the Unix filesystem layout, which Linux distros follow so strictly for the sake of compatibility. This layout is and always was aimed at multi-user environments and at saving and distributing resources evenly across the system (or even sharing them across a LAN). But with today’s technology and the arrival of desktop computers, many of these ideas don’t make much sense in that context.

There are four fundamental aspects that, I think, make distributing binaries on Linux so hard. I am not a native English speaker, so I apologize for possible mistakes.

1-Distribution by physical place
2-“Global installs”, or “Dependency Hell vs Dll hell”
3-Current DIR is not in PATH.
4-No file metadata.

1-Distribution by physical place

Often, directories contain the following subdirectories:

lib/ – containing shared libraries
bin/ – containing binary/scripted executables
sbin/ – containing executables only meant for the superuser

If you search around the filesystem, you will find several places where this pattern repeats, for example:
/
/usr
/usr/local
/usr/X11R6

You might wonder why files are distributed like this. This is mainly for historical reasons, like “/” being on a startup disk or ROM, “/usr” being a mount point for the global extras, originally loaded from tape, shared disk or even the network, and /usr/local for locally installed software. I don’t know about X11R6, but it probably has its own directory because it’s too big.

It should be noted that until very recently, Unixes were deployed for very specific tasks and were never meant to be loaded with as many programs as a desktop computer is. This is why we don’t see directories organized by usage as we do in other Unix-like OSes (mainly BeOS and OSX), and instead see them organized by physical place (something desktop computers no longer care about, since nearly all of them are self-contained).

Many years ago, big Unix vendors such as SGI and Sun decided to address this problem by creating the /opt directory. The /opt directory was supposed to contain the actual programs with their data, while shared data (such as libs or binaries) was exported to the root filesystem (in /usr) by creating symlinks. This also made the task of removing a program easier, since you simply had to remove the program dir and then run a script to remove the invalid symlinks. This approach never became popular enough in Linux distributions, and it still doesn’t address the problem of bundled libraries.

Because of this, all installs need to be global, which takes us to the next issue.

2-“Global installs”, or “Dependency Hell vs Dll hell”

Because of the previous issue, all popular distribution methods (both binary packages and source) force users to install software globally in the system, available to all accounts. With this approach, all binaries go to common places (/usr/bin, /usr/lib, etc.). At first this may look reasonable and the right approach, with advantages such as maximized use of shared libraries and organizational simplicity. But then we realize its limits: this way, all programs are forced to use the same exact set of libraries.

Because of this, it also becomes impossible for developers to just bundle the needed libraries with a binary release, so we are forced to ask users to install the missing libraries themselves. This is called dependency hell: a user downloads a program (either source, package or shared binary) and is told that more libraries are needed for the program to run.

Although the shared library system in Linux is even more complete than the Windows one (with multiple library versions supported, pre-caching on load, and binaries not being locked while they run), the OS filesystem layout does not let us distribute binaries with the bundled libraries we used for development that the user probably won’t have.

A dirty trick is to bundle the libraries inside the executable — this is called “static linking” — but this approach has several drawbacks, such as increased memory usage per program instance, more complex error tracing, and even license limitations in many cases, so this method is usually not encouraged.

To conclude this item, it has to be said that it is hard for developers to ship binary bundles with specific versions of a library. Remember that not all libraries need to be bundled, only the rare ones that a user is not expected to have. Most widely used libraries such as libc, libz or even GTK or Qt can remain system-wide.

Many would point out that this approach leads to the so-called DLL hell that is so common on Windows. But DLL hell actually happened because programs that bundled core system-wide Windows libraries overwrote the installed ones with older versions. This happened in part because Windows not only doesn’t support multiple versions of a library the way Unix does, but also because at boot time the kernel can only load libraries in the 8.3 file format (you can’t really have one called libgtk-1.2.so.0.9.1). As a side note, and because of that, since Windows 2000 Microsoft keeps a directory with copies of the newest available versions of the libraries in case any program overwrites them. In short, DLL hell can simply be attributed to the lack of a proper library versioning system.

3-Current DIR is not in PATH

This is quite simple, but it has to be said. By default in Unixes, the current directory is not recognized as a library or binary path. Because of this, you can’t just unzip a program and run the binary inside. Most shared binaries that are distributed do a dirty trick and ship a shell script containing the following:

#!/bin/sh

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
./mybinary

This could simply be solved by adding “.” to the library and binary paths, but no distro does it, because it’s not standard in Unixes. Of course, from inside a program it is perfectly normal to access data from relative paths, so you can still have subdirs with data.
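To make the idea concrete, here is a small sketch of what appending “.” to the search paths buys you (the temp directory and the dummy binary are made up for illustration; note that distros avoid this for security reasons, since any directory you cd into could then shadow real commands):

```shell
# Demonstration: with "." appended to PATH, a binary runs straight from the
# unpacked program directory, with no wrapper script and no ./ prefix.
mkdir -p /tmp/bundle-demo && cd /tmp/bundle-demo
cat > mybinary <<'EOF'
#!/bin/sh
echo "hello from a bundled binary"
EOF
chmod +x mybinary
export PATH="$PATH:."
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}."
mybinary   # found via "." in PATH, as would any bundled .so via LD_LIBRARY_PATH
```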

4-No file metadata

Ever wondered why Windows binaries have their own icons while Linux binaries all look the same? This is because there is no standard way to define metadata on files. This means we can’t bundle a small pixmap inside the file, and because of that, we can’t easily hint the user at the proper binary, or even file, to run. I can’t say this is an ELF limitation, since the format lets you add your own sections to the binary; it’s more the lack of a standard defining how to do it.

Proposed solutions

In short, I think Linux needs to be less standard and more tolerant in the previous aspects if it aims to achieve the same level of user-friendliness as the ruling desktop operating systems. Otherwise, not only users but also developers become frustrated with this.

For the most important issue, which is libraries, I’d like to propose the following as a spinoff that is still compatible with Unix desktop distros.

Desktop distros should add “./” to PATH and LIBRARY_PATH by default. This will ease the task of bundling certain not-so-common, or simply modified, libraries with a program, and save us from writing scripts called “runme”. This way we would be closer to simple “in a directory” installs. I know alternatives exist, but this has been proven to be simple, and it works.

Linux’s library versioning system is already great, so why should installing binaries of a library be complicated? A “library installer’s” job would be to take some libraries, copy them to the library dir, and then update the lib symlink to point at the newest one.
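Such a hypothetical “library installer” could be sketched in a few lines of shell. The directory and library names below are made up for illustration (on a real system, ldconfig already maintains these SONAME symlinks for the standard library directories):

```shell
# Sketch: "install" a newer version of a library and repoint the SONAME symlink.
LIBDIR=/tmp/demo-libdir            # stand-in for a real library directory
mkdir -p "$LIBDIR"
touch "$LIBDIR/libfoo.so.1.0.0"    # previously installed version
touch "$LIBDIR/libfoo.so.1.2.0"    # freshly copied, newer version
# Pick the highest version number and update the symlink programs link against.
newest=$(ls "$LIBDIR"/libfoo.so.1.* | sort -V | tail -n 1)
ln -sfn "$(basename "$newest")" "$LIBDIR/libfoo.so.1"
readlink "$LIBDIR/libfoo.so.1"     # -> libfoo.so.1.2.0
```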

Agree on a standard way of adding file metadata to ELF binaries. This way, distributed binaries can be more descriptive to the user. I know I am leaving script-based programs out, but those could even add something like a “magic string”.

And the most important thing: understand that these changes are meant to make Linux not only more user-friendly, but also more popular. There are still a lot of Linux users and developers who think the OS is only meant as a server, many users who consider aiming at the desktop too dreamy or too “Microsoft”, and many who think that Linux should remain “true as a Unix”. Because of this, focus should be put on letting these ideas coexist, so everyone gets what they want.

Source

Best Audio Editors For Linux

You’ve got a lot of choices when it comes to audio editors for Linux. No matter whether you are a professional music producer or just learning to create awesome music, the audio editors will always come in handy.

Well, for professional-grade usage, a DAW (Digital Audio Workstation) is always recommended. However, not everyone needs all the functionalities, so you should know about some of the most simple audio editors as well.

In this article, we will talk about a couple of DAWs and basic audio editors which are available as free and open source solutions for Linux and (probably) for other operating systems.

Top Audio Editors for Linux

Best audio editors and DAW for Linux

We will not be focusing on all the functionalities that DAWs offer – but the basic audio editing capabilities. You may still consider this as the list of best DAW for Linux.

Installation instruction: You will find all the mentioned audio editors or DAWs in your AppCenter or Software Center. In case you do not find them listed, please head to their official website for more information.

1. Audacity

audacity audio editor

Audacity is one of the most basic yet capable audio editors available for Linux. It is a free and open-source, cross-platform tool. A lot of you probably already know about it.

It has improved a lot compared to the time when it started trending. I do recall that I used it to “try” making karaoke tracks by removing the voice from an audio file. Well, you can still do it, but it depends.

Features:

It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments.

  • Live audio recording through a microphone or a mixer
  • Export/Import capability supporting multiple formats and multiple files at the same time
  • Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins
  • Easy editing with cut, paste, delete and copy functions.
  • Spectrogram view mode for analyzing frequencies

2. LMMS

LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features.

You can mix sounds, arrange them, or create them using VST instruments, which it supports. It also comes with some bundled samples, presets, VST instruments, and effects to get you started. In addition, you get a spectrum analyzer for more advanced audio editing.

Features:

  • Note playback via MIDI
  • VST Instrument support
  • Native multi-sample support
  • Built-in compressor, limiter, delay, reverb, distortion and bass enhancer

3. Ardour

Ardour audio editor

Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks. The multichannel tracks can also be routed to different mixer tapes for the ease of editing and recording.

You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well.

Features:

  • Non-linear editing
  • Vertical window stacking for easy navigation
  • Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing

4. Cecilia

cecilia audio editor

Cecilia is not an ordinary audio editor application. It is meant for sound designers, or for you if you are in the process of becoming one. It is technically an audio signal processing environment that lets you create ear-bending sounds.

You get built-in modules and plugins for sound effects and synthesis. It is tailored for a specific use; if that is what you were looking for, look no further!

Features:

  • Modules to achieve more: UltimateGrainer (state-of-the-art granulation processing), RandomAccumulator (variable-speed recording accumulator), UpDistoRes (distortion with upsampling and a resonant lowpass filter)
  • Automatic saving of modulations

5. Mixxx

Mixxx audio DJ

If you want to mix and record something while also having a virtual DJ tool, Mixxx is a perfect fit. You get to know the BPM and key, and can use the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux!

It supports custom DJ equipment as well. So, if you have one or a MIDI – you can record your live mixes using this tool.

Features

  • Broadcast and record DJ Mixes of your song
  • Ability to connect your equipment and perform live
  • Key detection and BPM detection

6. Rosegarden

rosegarden audio editor

Rosegarden is yet another impressive audio editor for Linux that is free and open source. It is neither a fully featured DAW nor a basic audio editing tool, but a mixture of both with some scaled-down functionality.

I wouldn’t recommend this for professionals but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed.

Features:

  • Music notation editing
  • Recording, mixing, and samples

Wrapping Up

These are some of the best audio editors you could find out there for Linux. No matter whether you need a DAW, a cut-paste editing tool, or a basic mixing/recording audio editor, the above-mentioned tools should help you out.

Did we miss any of your favorite? Let us know about it in the comments below.

Source
