Why Linux Binaries are not as Easy to Handle? – OSnews

Have you ever wondered why installing software in other operating systems such as Windows, MacOS or even BeOS is so easy compared to Linux? In those OSes you can simply download and decompress a file, or run an installer that walks you through the process.

This doesn’t happen in Linux, where there are only two standard ways to install software: compiling from source and installing packages. Both methods can be inconsistent and complicated for new users, but I am not going to write about that, as it has been done in countless previous articles. Instead I am going to focus on why it is difficult for developers to provide a simpler way.

So, why can’t we install and distribute programs on Linux with the same ease as on other operating systems? The answer lies in the Unix filesystem layout, which Linux distros follow strictly for the sake of compatibility. This layout has always been aimed at multi-user environments and at distributing resources evenly across the system (or even sharing them across a LAN). But with today’s technology and the arrival of desktop computers, many of these ideas don’t make much sense in that context.

There are four fundamental aspects that, I think, make distributing binaries on Linux so hard. I am not a native English speaker, so I apologize for possible mistakes.

1-Distribution by physical place
2-“Global installs”, or “Dependency hell vs DLL hell”
3-Current DIR is not in PATH
4-No file metadata

1-Distribution by physical place

Often, directories contain the following subdirectories:

lib/ – contains shared libraries
bin/ – contains binary or script executables
sbin/ – contains executables meant only for the superuser

If you search around the filesystem, you will find several places where this pattern repeats, for example:
/
/usr
/usr/local
/usr/X11R6

You might wonder why files are distributed like this. The reasons are mainly historical: “/” lived on the startup disk or ROM, “/usr” was a mount point for global extras (originally loaded from tape, a shared disk, or even the network), and /usr/local held locally installed software. I don’t know the history of X11R6, but it probably has its own directory simply because it’s so big.

It should be noted that until very recently, Unix systems were deployed for very specific tasks and were never meant to be loaded with as many programs as a desktop computer is. This is why we don’t see directories organized by usage as in other Unix-like OSes (mainly BeOS and OSX); instead we see them organized by physical place, something desktop computers no longer care about, since nearly all of them are self-contained.

Many years ago, big Unix vendors such as SGI and Sun decided to address this problem by creating the /opt directory. The /opt directory was supposed to contain the actual programs with their data, while shared pieces (such as libs or binaries) were exported to the root filesystem (in /usr) by creating symlinks.
This also made removing a program easier, since you simply had to remove the program dir and then run a script to remove the invalid symlinks. This approach never became popular enough in Linux distributions, and it still doesn’t address the problem of bundled libraries.
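
The scheme can be sketched in a few shell commands. The paths and the program name “myapp” are invented, and a scratch directory stands in for the real filesystem root so the sketch needs no root privileges:

```shell
# Scratch directory standing in for the real filesystem root ("/").
ROOT=./demo-root
mkdir -p "$ROOT/opt/myapp/bin" "$ROOT/usr/bin"

# "Install" the program self-contained under /opt.
printf '#!/bin/sh\necho hello from myapp\n' > "$ROOT/opt/myapp/bin/myapp"
chmod +x "$ROOT/opt/myapp/bin/myapp"

# Export its binary into the shared hierarchy via a symlink.
ln -s ../../opt/myapp/bin/myapp "$ROOT/usr/bin/myapp"

# Removal: delete the program directory, then sweep dangling symlinks.
rm -rf "$ROOT/opt/myapp"
find "$ROOT/usr/bin" -xtype l -delete    # -xtype l matches broken links
```

The cleanup step is exactly the “script to remove the invalid symlinks” mentioned above: once the program directory is gone, its exported symlinks dangle and can be found mechanically.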

Because of this, all installs need to be global, which takes us to the next issue.

2-“Global installs”, or “Dependency hell vs DLL hell”

Because of the previous issue, all popular distribution methods (both binary packages and source) force users to install software globally on the system, available to all accounts. With this approach, all binaries go to common places (/usr/bin, /usr/lib, etc). At first this may look reasonable, with advantages such as maximized use of shared libraries and organizational simplicity. But then we hit its limits: all programs are forced to use the same exact set of libraries.

It also becomes impossible for developers to simply bundle needed libraries with a binary release, so we are forced to ask users to install the missing libraries themselves. This is called dependency hell: a user downloads a program (source, package, or shared binary) and is told that more libraries are needed before the program will run.
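
You can see which shared libraries a binary expects with ldd; any dependency the user lacks shows up as “not found”:

```shell
# List the shared libraries a binary depends on.
# A missing dependency is reported as "not found".
ldd /bin/ls
```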

Although the shared library system in Linux is even more complete than the Windows one (multiple library versions supported, pre-caching at load, and library files not locked while programs run), the filesystem layout does not let us distribute a binary together with the bundled libraries it was developed against, libraries the user probably won’t have.

A dirty trick is to bundle the libraries inside the executable — this is called “static linking” — but this approach has several drawbacks, such as increased memory usage per program instance, more complex error tracing, and even license limitations in many cases, so this method is usually not encouraged.

To conclude this item, it must be said that it is hard for developers to ship binary bundles with specific versions of a library. Remember that not all libraries need to be bundled, only the rare ones that a user is not expected to have. Widely used libraries such as libc, libz, or even GTK or Qt can remain system-wide.

Many would point out that this approach leads to the so-called DLL hell, very common on Windows. But DLL hell actually happened because programs that bundled core system-wide Windows libraries overwrote the installed ones with older versions. This happened in part because Windows not only doesn’t support multiple versions of a library the way Unix does, but also because at boot time the kernel could only load libraries with 8.3 file names (you can’t really have one called libgtk-1.2.so.0.9.1). As a side note, and because of that, since Windows 2000 Microsoft keeps a directory with copies of the newest available versions of the libraries in case a program overwrites them. In short, DLL hell can be attributed simply to the lack of a proper library versioning system.

3-Current DIR is not in PATH

This is quite simple, but it has to be said. By default in Unixes, the current directory is not recognized as a library or binary path. Because of this, you can’t just unzip a program and run the binary inside. Most shared binaries distributed this way resort to a dirty trick: a shell script containing the following.

#!/bin/sh
# Make sure "." means the program's own directory, then add it
# to the library search path before launching the real binary.
cd "$(dirname "$0")"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:."
exec ./mybinary

This could be solved simply by adding “.” to the library and binary paths, but no distro does it, because it’s not standard in Unixes. Of course, from inside a program it is perfectly normal to access data through relative paths, so you can still have subdirectories with data.

4-No file metadata

Ever wondered why Windows binaries have their own icons while Linux binaries all look the same? This is because there is no standard way to define metadata on files, which means we can’t bundle a small pixmap inside the file. Because of this we can’t easily hint the user about the proper binary, or even file, to run. I can’t say this is an ELF limitation, since the format lets you add your own sections to a binary; it’s more a lack of a standard defining how to do it.

Proposed solutions

In short, I think Linux needs to be less standard and more tolerant in the previous aspects if it aims to achieve the same level of user-friendliness as the ruling desktop operating systems. Otherwise, not only users but also developers become frustrated.

For the most important issue, which is libraries, I’d like to propose the following, as a spinoff that remains compatible with Unix, for desktop distros.

Desktop distros should add “./” to the PATH and library path by default. This would make it easier to bundle “not so common”, or simply modified, libraries with a program, and save us the task of writing scripts called “runme”. This way we could be closer to simple “in a directory” installs. I know alternatives exist, but this has been proven simple, and it works.

Linux’s library versioning system is already great, so why should installing binaries of a library be complicated? A “library installer”’s job would be to take some libraries, copy them to the library dir, and then update the lib symlink to the newer one.
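
Such an installer could be a few lines of shell. The sketch below uses a local demo directory and an invented libfoo; a real installer would target /usr/lib and finish with ldconfig (as root):

```shell
# Demo library directory; a real installer would use /usr/lib.
LIBDIR=./demo-lib
mkdir -p "$LIBDIR"
printf 'fake library contents' > libfoo.so.1.2.3   # stand-in for a real .so

# Copy the versioned library in, then update the symlinks.
install -m 644 libfoo.so.1.2.3 "$LIBDIR"/
ln -sf libfoo.so.1.2.3 "$LIBDIR/libfoo.so.1"   # runtime (soname) link
ln -sf libfoo.so.1     "$LIBDIR/libfoo.so"     # development link
# ldconfig                                     # root-only: refresh the cache
```

Because old versions stay on disk under their full names, programs linked against libfoo.so.1 keep working while new builds pick up the newest version through the symlinks.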

Agree on a standard way of adding file metadata to ELF binaries. This way, distributed binaries could be more descriptive to the user. I know I am leaving script-based programs out, but those could use something like a “magic string”.

And most important of all: understand that these changes are meant to make Linux not only more user-friendly but also more popular. There are still many Linux users and developers who think the OS is only meant to be a server, many who consider aiming at the desktop too dreamy or too “Microsoft”, and many who think Linux should remain “true as a Unix”. Because of this, the focus should be on letting these ideas coexist, so everyone gets what they want.

Source

Best Audio Editors For Linux

You’ve got a lot of choices when it comes to audio editors for Linux. Whether you are a professional music producer or just learning to create awesome music, an audio editor will always come in handy.

Well, for professional-grade usage, a DAW (Digital Audio Workstation) is always recommended. However, not everyone needs all that functionality, so you should know about some of the simplest audio editors as well.

In this article, we will talk about a couple of DAWs and basic audio editors which are available as free and open source solutions for Linux and (probably) for other operating systems.

Top Audio Editors for Linux


We will not be focusing on all the functionality that DAWs offer, just the basic audio editing capabilities. You may still consider this a list of the best DAWs for Linux.

Installation instructions: You will find all the mentioned audio editors or DAWs in your AppCenter or Software Center. In case you do not find them listed, please head to their official websites for more information.

1. Audacity

audacity audio editor

Audacity is one of the most basic yet capable audio editors available for Linux. It is a free and open-source cross-platform tool. A lot of you probably already know about it.

It has improved a lot compared to when it first started trending. I recall using it to “try” making karaoke tracks by removing the voice from an audio file. Well, you can still do that, but it depends.

Features:

It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments.

  • Live audio recording through a microphone or a mixer
  • Export/Import capability supporting multiple formats and multiple files at the same time
  • Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins
  • Easy editing with cut, paste, delete and copy functions
  • Spectrogram view mode for analyzing frequencies

2. LMMS

LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features.

You can mix sounds, arrange them, or create them using VST instruments, which it fully supports. It also comes with some samples, presets, VST instruments, and effects to get you started. In addition, you get a spectrum analyzer for more advanced audio editing.

Features:

  • Note playback via MIDI
  • VST Instrument support
  • Native multi-sample support
  • Built-in compressor, limiter, delay, reverb, distortion and bass enhancer

3. Ardour

Ardour audio editor

Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks, which can also be routed to different mixer strips for ease of editing and recording.

You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well.

Features:

  • Non-linear editing
  • Vertical window stacking for easy navigation
  • Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing

4. Cecilia

cecilia audio editor

Cecilia is not an ordinary audio editor. It is meant for sound designers, or for those in the process of becoming one. It is technically an audio signal processing environment that lets you create ear-bending sound.

You get built-in modules and plugins for sound effects and synthesis. It is tailored for a specific use, so if that is what you are looking for, look no further!

Features:

  • Modules to achieve more (UltimateGrainer – state-of-the-art granulation processing, RandomAccumulator – variable-speed recording accumulator,
    UpDistoRes – distortion with upsampling and a resonant lowpass filter)
  • Automatic Saving of modulations

5. Mixxx

Mixxx audio DJ

If you want to mix and record something while having a virtual DJ tool, Mixxx is a perfect fit. You get to know the BPM and key, and can use the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux!

It supports custom DJ equipment as well. So, if you have a controller or MIDI hardware, you can record your live mixes with this tool.

Features:

  • Broadcast and record DJ mixes of your songs
  • Ability to connect your equipment and perform live
  • Key detection and BPM detection

6. Rosegarden

rosegarden audio editor

Rosegarden is yet another impressive audio editor for Linux which is free and open source. It is neither a fully featured DAW nor a basic audio editing tool. It is a mixture of both with some scaled down functionalities.

I wouldn’t recommend this for professionals, but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed.

Features:

  • Music notation editing
  • Recording, Mixing, and samples

Wrapping Up

These are some of the best audio editors you can find for Linux. Whether you need a DAW, a cut-paste editing tool, or a basic mixing/recording audio editor, the tools mentioned above should help you out.

Did we miss any of your favorites? Let us know in the comments below.

Source

Community collaboration makes for some great OpenStack solutions


If you follow the evolution of OpenStack, you know how it’s finding its way into all sorts of workloads, from high-level research to car manufacturing to all-new 5G networks. Organizations are using it for everything from the mundane to the sublime and sharing what they’re learning with the OpenStack community.

Some of the examples offered up at the recent OpenStack Summit Berlin showed that OpenStack is a full-fledged part of the IT mainstream, which means there are a wealth of ideas out there for your own implementation.

In many cases, the advances of others – including Adobe, AT&T, NASA, Oerlikon, SBAB Bank, Volkswagen, Workday and many other companies and organizations, big and small – are being contributed back to the community for you and others to use. This is a critical part of OpenStack and SUSE OpenStack Cloud, which take the best the community has to offer to improve the platform and how organizations solve problems.

Take Workday, the human resources software-as-a-service vendor, which in 2019 expects to have half of all its production workloads living on the 45 OpenStack private-cloud clusters it’s running in its global data centers. That represents about 4,600 servers, up from just 600 in 2016.

To manage the growing demand for its products, Workday created and now manages about 4,000 immutable VM images that are updated on their own cycles, with new versions of Workday deployed every weekend. That means the company needs to regularly tear down and replace thousands of VMs in a very short time and do it without any downtime.

That scale required automation, and the growing complexity required a new effort to gather data about their clusters and OpenStack controllers. They used BigPanda for incident management and Wavefront for monitoring and analytics, looking for anomalies and problems.

As it turns out, they uncovered some real issues with how they deployed images, and solved those problems by extending the OpenStack Nova API to leverage its caching capability to pre-load big images – what they call image pre-fetching. This enabled them to speed up the image deployments so instead of big images slowing down the restart of thousands of VMs, they could pre-load them and relaunch new VM instances quickly.

They did some ingenious stuff, like enabling Glance to serve up images directly to remote OpenStack controllers, and got help from the community for figuring it out. With OpenStack’s complexity, that openness made their work doable, and in the end, they offered their Nova API work back to the community.

Workday is just one example of the companies taking advantage of the power of OpenStack and the open source community to solve real problems. Check out these and other OpenStack successes – including these 51 things you need to know – from the OpenStack Summit Berlin.


Source

Bash’s Built-in printf Function | Linux Journal

Even if you’re already familiar with the printf command, if you got your information via “man printf” you may be missing a couple of useful features that are provided by bash’s built-in version of the standard printf(1) command.

If you didn’t know bash had its own version of printf, then you didn’t heed the note in the man page for the printf(1) command:

NOTE: your shell may have its own version of printf, which usually supersedes the version described here. Please refer to your shell’s documentation for details about the options it supports.

You did read the man page, didn’t you? I must confess, I’d used printf for quite a while before I realized bash had its own.

To find the documentation for the built-in version of printf, just search for “printf” in the bash man page.

In case you’re completely unfamiliar with the printf command, and similar functions in other languages, a couple quick examples should get you up to speed:

$ printf "Hello world\n"
Hello world

$ printf "2 + 2 is %d\n" $((2+2))
2 + 2 is 4

$ printf "%s: %d\n" "a string" 12
a string: 12

You provide printf with a format string and a list of values. It then replaces the %… sequences in the string with the values from the list formatted according to the format specification (the part following the percent sign). There are a dozen or more format specifier characters, but 99% of the time, the only ones you’ll need are the following:

  • d – Format a value as a signed decimal number.
  • u – Format a value as an unsigned decimal number.
  • x – Format a value as a hexadecimal number with lower case a-f.
  • X – Format a value as a hexadecimal number with upper case A-F.
  • s – Format a value as a string.

Format specifiers can be preceded by a field width to specify the minimum number of characters to print. A positive width causes the value to be right-justified; a negative width causes the value to be left-justified. A width with a leading zero causes numeric fields to be zero-filled. Usually, you want negative widths for strings and positive widths for numbers.

Probably not what you want:

$ printf "%20s: %4d\n" "string 1" 12 "string 2" 122
            string 1:   12
            string 2:  122

Still probably not what you want:

$ printf "%-20s: %-4d\n" "string 1" 12 "string 2" 122
string 1            : 12
string 2            : 122

This is probably what you want:

$ printf "%-20s: %4d\n" "string 1" 12 "string 2" 122
string 1            :   12
string 2            :  122

Note that printf reuses the format if it runs out of format specifiers, which in the examples above allows you to print two lines (four values) with only two format specifiers.

If you specify the width as an asterisk, then the width is taken from the next value in the list:

$ printf "%*s: %*d\n" -20 "a string" 4 12
a string            :   12

Note that if you want to zero-fill a field and specify the width with an asterisk, put the zero before the asterisk:

$ printf "%*s: %0*d\n" -20 "a string" 4 12
a string            : 0012

So now to the features that bash’s built-in version of printf provides. The first is the -v option, which allows you to put the formatted result into a variable rather than print it out. So instead of:

$ hw=$(printf "Hello world")
$ echo $hw
Hello world

You can do this:

$ printf -v hw "Hello world"
$ echo $hw
Hello world

The second option is for formatting times (and dates):

$ printf "%(%m-%d-%Y %H:%M:%S)T\n" $(date +%s)
01-10-2019 09:11:44

The format specifier here is %(datefmt)T and the value is a system time in seconds from the epoch. The nested datefmt supports the same format options that are supported by strftime(3). You can get a system time value by specifying the +%s format option to the date command.

A couple special arguments are supported by the %(datefmt)T format. From the bash man page:

Two special argument values may be used: -1 represents the current time, and -2 represents the time the shell was invoked. If no argument is specified, conversion behaves as if -1 had been given.
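
For example (run in bash; the output naturally depends on when you run it):

```shell
# bash's printf: -1 is "now", -2 is "when this shell started",
# and omitting the argument behaves like -1.
bash -c '
  printf "today is %(%Y-%m-%d)T\n" -1
  printf "shell started at %(%H:%M:%S)T\n" -2
'
```

The -1 form saves a subshell and a fork compared to the $(date +%s) idiom shown earlier.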

There are a couple of additional features supported by bash’s built-in version of printf, but none that you are likely to need on a regular basis. See the man page for more information.

Source

Easy to Understand Man Pages for Every Linux User

One of the most commonly used and reliable ways of getting help under Unix-like systems is via man pages. Man pages are the standard documentation for every Unix-like system and they correspond to online manuals for programs, functions, libraries, system calls, formal standards and conventions, file formats and so on. However, man pages suffer from many failings, one of which is that they are too long, and some people just don’t like to read too much text on the screen.

The TLDR (short for “Too Long; Didn’t Read”) pages are summarized, practical usage examples of commands on different operating systems, including Linux. They simplify man pages by offering practical examples.

“TLDR” is Internet slang meaning that a post, article, comment, or anything such as a manual page was too long, and whoever used the phrase didn’t read it for that reason. The content of the TLDR pages is openly available under the permissive MIT License.

In this short article, we will show how to install and use TLDR pages in Linux.

Requirements

  1. Install Latest Nodejs and NPM Version in Linux Systems

Before installing, you can try the live demo of TLDR.

How to Install TLDR Pages in Linux Systems

To conveniently access TLDR pages, you need to install one of the supported clients. The Node.js client is the original client for the tldr-pages project; we can install it from NPM by running:

$ sudo npm install -g tldr

TLDR is also available as a Snap package; to install it, run:

$ sudo snap install tldr

After installing the TLDR client, you can view the summarized man page of any command, for example the tar command (you can use any other command):

$ tldr tar
View Tar Command Man Page


Here is another example of accessing the summarized man page for the ls command.

$ tldr ls
View ls Command Man Page


To list all commands for the chosen platform in the cache, use the -l flag.

$ tldr -l 
List All Linux Commands


To list all supported commands in the cache, use the -a flag.

$ tldr -a

You can update or clear the local cache by running.

$ tldr -u	#update local cache 
OR
$ tldr -c 	#clear local cache 

To search pages using keywords, use the -s option, for example:

$ tldr -s  "list of all files, sorted by modification date"
Search Linux Commands Using Keyword


To change the color theme (simple, base16, ocean), use the -t flag.

$ tldr -t ocean

You can also show a random command, with the -r flag.

$ tldr -r   
View Man Page for Random Linux Command


You can see a complete list of supported options by running.

$ tldr -h

Note: You can find a list of all supported and dedicated client applications for different platforms, in the TLDR clients wiki page.

TLDR Project Homepage: https://tldr.sh/

That’s all for now! The TLDR pages are summarized practical examples of commands, provided by the community. In this short article, we’ve shown how to install and use TLDR pages in Linux. Use the feedback form to share your thoughts about TLDR, or tell us about any similar programs out there.

Source

Top 5 Best Ubuntu Alternatives – Linux Hint

If you asked younger Linux users to tell you what their first Linux distribution was, we bet that Ubuntu would be the most common answer. First released in 2004, Ubuntu has helped establish Linux as a viable alternative to Windows and macOS and convinced millions that not all good things in life cost money.

But we’re now in 2019, and there are many excellent desktop Linux distributions that are not based on Ubuntu, and we’ve selected five of them for this article and sorted them by their popularity.

Manjaro

Manjaro is based on Arch Linux, a rolling-release distribution for computers based on x86-64 architectures that follows the KISS principle (“keep it simple, stupid”), emphasizing elegance, code correctness, minimalism, and simplicity. Manjaro sticks to the KISS principle as closely as possible, but it also focuses on user-friendliness and accessibility to make the distribution suitable for Linux newbies and veterans alike.

One of the most praiseworthy features of Manjaro is pacman, a versatile package manager borrowed from Arch Linux. To make pacman more user-friendly, Manjaro includes front-end GUI package managers called Pamac and Octopi. Three flagship editions of Manjaro are available (XFCE, KDE, and GNOME), but users can also choose from several community editions, including Openbox, Cinnamon, i3, Awesome, Budgie, MATE, and Deepin. All editions of Manjaro come with a GUI installer and embrace the rolling release model.

By combining the user-friendliness of Ubuntu with the customizability of Arch Linux, Manjaro developers have created a Linux distribution that allows beginners to learn and grow with it, and experienced users to get more done in less time. Because Manjaro boots into a live system, you can easily try it using a virtual machine or by running it from a DVD or USB flash drive.

Solus

Unlike most popular Linux distributions that you come across these days, Solus is a completely independent desktop operating system built from scratch. Its main goal is to offer a cohesive desktop computing experience, which is something many Linux distributions have been trying to do, with mixed results.

Solus is built around Budgie, a desktop environment developed by the Solus project using various GNOME technologies, but other desktop environments, including MATE and GNOME, are available as well. Budgie shares many design principles with Windows, but it’s far more customizable and flexible.

Solus ships with a whole host of useful software applications to take care of your computing needs right out of the box. Content creators can animate in Synfig Studio, produce music in MuseScore or Mixxx, design and illustrate in GIMP and Inkscape, and edit video in Avidemux or Shotcut. All applications and system components are continuously updated, so there are no large OS updates to worry about.

Fedora

Fedora would never be the Linux distribution of choice of Linus Torvalds, the creator of the Linux kernel, if it didn’t do something right. First released in 2003, Fedora is known for focusing on innovation and offering cutting-edge features that take months to appear in other Linux distributions. The development of this Linux distribution is sponsored by Red Hat, which uses it as the upstream source of the commercial Red Hat Enterprise Linux distribution.

Thanks to built-in Docker support, you can containerize your own apps or deploy containerized apps out of the box on Fedora. The default desktop environment in Fedora is GNOME 3, which was chosen for its user-friendliness and complete support for open source development tools. That said, several other desktop environments, including XFCE, KDE, MATE, and Cinnamon, are available as well.

Just like Ubuntu, Fedora is also great as a server operating system. It features an enterprise-class, scalable database server powered by the open-source PostgreSQL project, brings a new Modular repository that provides additional versions of software on independent lifecycles, and comes with powerful administration tools to help you monitor your system’s performance and status.

openSUSE

Once known as SUSE Linux and SuSE Linux Professional, openSUSE is a popular Linux distribution that offers two distinct release models: rolling release and fixed releases. openSUSE Tumbleweed provides the rolling release model, while openSUSE Leap provides the traditional fixed-release model.

Regardless of which release model you choose, you can always access all openSUSE tools, including the comprehensive Linux system configuration and installation tool YaST, the open and complete distribution development platform Open Build Service, or the powerful Linux software management engine ZYpp, which provides the backend for the default command line package management tool for openSUSE, zypper.

openSUSE has been around since 2005, and its corporate sponsor SUSE is now in the hands of Swedish private equity group EQT Partners, which purchased the company for $2.5 billion in July 2018. The acquisition didn’t affect the distribution’s development in any way, and SUSE developers expect the partnership with EQT to help them exploit the excellent market opportunity both in the Linux operating system area and in emerging product groups in the open source space, according to the official press release.

Debian

You probably know that Ubuntu is a Debian-based Linux distribution, but you may not know that Debian is actually a great alternative to Ubuntu. Not only is Debian one of the earliest Linux distributions in the world, but it’s also one of the most active, with over 51,000 packages and translations in 75 languages.

Since its beginning in 1993, Debian has been firmly committed to free software. The famous Debian Social Contract states that the distribution will always remain 100 percent free and will never require the use of a non-free component. It also states that Debian developers will always give back to the free software community by communicating things such as bug fixes to upstream authors.

Before you download and install Debian, you should familiarize yourself with its three main branches. The Stable branch targets stable and well-tested software to provide maximum stability. The Testing branch includes software that has received some testing but is not ready to be included in the Stable branch just yet. Finally, the Unstable branch includes bleeding-edge software that is likely to have some bugs.

Source

Linux Today – Linux 5.0 rc2

Jan 13, 2019, 22:00

(Other stories by Linus Torvalds)

So the merge window had somewhat unusual timing with the holidays, and
I was afraid that would affect stragglers in rc2, but honestly, that
doesn’t seem to have happened much. rc2 looks pretty normal.

Were there some missing commits that missed the merge window? Yes. But

no more than usual. Things look pretty normal.

What’s a bit abnormal is that I’m traveling again, and so for me it’s a Monday release, but it’s (intentionally) the usual “Sunday afternoon” release schedule back home. I’m trying to not surprise people too much.

As to actual changes: all looks fairly normal. Yes, there’s a fair number of perf tooling updates, so that certainly stands out in the diffstat, but if you ignore the tooling and just look at the kernel, it’s about two thirds drivers (networking, gpu, block, scsi..), with the rest being the usual mix of arch updates (ARM, RISC-V, x86, csky), with some filesystem (btrfs, cifs) and vm fixes.

Go test,

Linus


Understanding Load Average on Linux – Linux Hint

Load average is a measurement of the amount of work versus free CPU cycles available on a system processor. In this article I’ll define the term, demonstrate how Linux calculates this value, then provide insight into how to interpret system load.

Before we dive into Linux load averages, we must explore the different ways load is calculated and address the most common measurement of CPU load – a percentage.

Windows calculates load differently from Linux, and since Windows has been historically more popular on the desktop, the Windows definition of load is generally understood by most computer users. Most Windows users have seen the system load in the task manager displayed as a percentage ranging from 0% to 100%.

In Windows this is derived by examining how “busy” the System Idle Process is and using the inverse to represent the system load. For example, if the idle thread is executing 99% of the time, CPU load in Windows would be 1%. This value is easy to understand but provides less overall detail about the true status of the system.

In Linux, the load average is instead represented by a decimal number starting at 0.00. The value can be roughly defined as the number of processes that, over the past minute, had to wait their turn for execution. Unlike Windows, the Linux load average is not an instantaneous measurement. Load is given as three values: the one-minute average, the five-minute average, and the fifteen-minute average.

Understanding Load Average in Linux

At first, this extra layer of detail seems unnecessary if you simply want to know the current state of CPU load on your system. But because averages over three time periods are given, rather than an instantaneous measurement, a single glance at the three numbers gives you a more complete picture of how system load is changing over time.

Displaying the load average is simple. On the command line, you can use a variety of commands. I simply use the “w” command:

root@virgo [~]# w
21:08:43 up 38 days, 4:34, 4 users, load average: 3.11, 2.75, 2.70

The rest of the command’s output displays who’s logged on and what they’re executing, but for our purposes that information is irrelevant, so I’ve clipped it from the display above.

In an ideal system, no process should be held up by another process (or thread), but in a single processor system, this occurs when the load goes above 1.00.

The words “single processor system” are incredibly important here. Unless you’re running an ancient computer, your machine probably has multiple CPU cores. The machine I’m on, for example, has 16 cores.

In this case, a load average of 3.11 is not alarming at all. It simply means that a bit more than three processes were ready to execute and CPU cores were present to handle their execution. On this particular system, the load would have to reach 16 to be considered at “100%”.
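As a rough sketch of that rule of thumb (assuming a Linux system, so /proc/loadavg and coreutils’ nproc are available), you can compare the 1-minute average against the core count from the shell:

```shell
# Read the 1-minute load average and the number of CPU cores,
# then report whether the load exceeds the machine's capacity.
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
echo "load: $load, cores: $cores"
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
    echo "load exceeds core count"
else
    echo "load is within capacity"
fi
```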

To translate this into a percent-based system load, you could use this simple, if somewhat obtuse, command:

cat /proc/loadavg | cut -c 1-4 | echo "scale=2; ($(</dev/stdin)/$(nproc))*100" | bc -l

This command sequence isolates the 1-minute average via cut and echoes it, divided by the number of CPU cores, into bc, a command-line calculator, to derive the percentage.

This value is by no means scientific but does provide a rough approximation of CPU load in percent.
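The same figure can be produced with a single awk invocation, avoiding the stdin plumbing (again assuming /proc/loadavg and nproc exist):

```shell
# Divide the 1-minute load average (first field of /proc/loadavg)
# by the core count and print the result as a percentage.
awk -v cores="$(nproc)" '{ printf "%.2f%%\n", ($1 / cores) * 100 }' /proc/loadavg
```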

A Minute to Learn, a Lifetime to Master

In the previous section I put the “100%” example of a load of 16.0 on a 16-core system in quotes because the calculation of load in Linux is a bit more nebulous than in Windows. The system administrator must keep in mind that:

  • load is expressed in waiting processes and threads,
  • it is not an instantaneous value but an average,
  • its interpretation must include the number of CPU cores, and
  • it may be over-inflated by I/O waits such as disk reads.

Because of this, getting a handle on CPU load on a Linux system is not entirely an empirical matter. Even if it were, CPU load alone is not an adequate measurement of overall system resource utilization. As such, an experienced Linux administrator will consider CPU load in concert with other values such as I/O wait and the split between user and kernel time.

I/O Wait

I/O wait is most easily seen via the “top” command:

In the screenshot above I have highlighted the I/O wait value. This is a percentage of time that the CPU was waiting on input or output commands to complete. This is usually indicative of high disk activity. While a high wait percentage alone may not significantly degrade CPU-bound tasks, it will reduce I/O performance for other tasks and will make the system feel sluggish.

High I/O wait without any obvious cause might indicate a problem with a disk. Use the “dmesg” command to see if any errors have occurred.
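If you prefer raw numbers to top’s display, the cumulative I/O wait share since boot can be read straight from /proc/stat. This is a sketch, not a substitute for top: top shows an interval measurement, while the figure below is an average over the whole uptime.

```shell
# The "cpu" line of /proc/stat lists jiffies as: user nice system idle iowait irq softirq ...
# $1 is the "cpu" label itself, so iowait is field $6.
awk '/^cpu / { total = 0
               for (i = 2; i <= NF; i++) total += $i
               printf "iowait since boot: %.1f%%\n", ($6 / total) * 100 }' /proc/stat
```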

Kernel vs. System Time

The highlighted values above represent user and kernel (system) time. This is a breakdown of the overall consumption of CPU time by users (i.e. applications) and the kernel (i.e. interaction with system devices). Higher user time indicates more CPU usage by programs, whereas higher kernel time indicates more system-level processing.

A Fairly Average Load

Learning the relationship of load average to actual system performance takes time, but before long you’ll see a distinct correlation. Armed with the intricacies of system performance metrics, you’ll be able to make better decisions about hardware upgrades and program resource utilization.


Kaku – web technologies-based music player


My CD collection has taken over my spare room. With very little space to store more, I’m gradually spending more time using streaming services.

Linux is blessed with a mouthwatering array of excellent open source music players. But I’m always on the lookout for fresh and innovative music players.

Kaku bills itself as the next generation music client. Is that self-proclaimed hype? The software is written in JavaScript.

Installation

Users are well catered for irrespective of their operating system, as the project offers binaries for Linux, Mac OS X, and Windows.

For Linux, there are official packages for Debian/Ubuntu (32- and 64-bit). For other distros, there may be packages available from their respective repositories.

The developer also provides an AppImage (32- and 64-bit) which makes it easy to run the software. AppImage is a format for distributing portable software on Linux without needing superuser permissions to install the application. All that’s required is to download the AppImage, and make the file executable by typing:

$ chmod u+x ./Kaku-2.0.1-x86_64.AppImage

In operation

Here’s an image of Kaku in action.

Kaku

The first thing you might notice from the screenshot above is that Kaku replaces the maximize, minimize, and close buttons with Mac-style buttons. I prefer my applications to have a consistent look and feel, but it’s not a big issue.

At the top right of the window, there’s a search bar. By default, Kaku displays results from YouTube, but there’s also the option to search Vimeo, SoundCloud, MixCloud, or all of them. Videos can be displayed in a list view or an icon view. By default, you’re presented with the top-ranking YouTube music videos.

At the bottom left, there’s a small window that displays the video. There are the standard playback buttons and playback bar, together with the option to cast the output to a device and toggle TV mode. You can make videos appear full screen or occupy all of the Kaku window.

Latest News

This section is redundant, useless, and hopefully will be removed in a later release. It shows only release notes for early releases of Kaku. You might (incorrectly) conclude that Kaku hasn’t been updated in years. This isn’t the case; the software is under active development. But even if this section offered details of recent improvements to the software, that information is much better placed on the project’s GitHub page rather than cluttering up the application itself.

Search Results

You’re taken to this section whenever you use the search bar, or click Search Results (unless you’re in TV mode). Videos that match your search criteria are easily added to the play queue. The search bar displays even in TV mode, but search results are not displayed.

Play Queue

This section is populated by clicking the “Add to Play Queue” button, which is shown in Home and Search Results. The video currently being played is highlighted in light blue. Right-clicking on a video gives the option to add it to a playlist. You’ll need to have created a playlist first, though.

History

As you might have guessed, this section shows a history of videos that you’ve watched. You can click on any entry and replay that video. Right-clicking on a video gives the option to add it to a playlist.

Settings

This section lets you configure the application. I’ll cover the configuration options below.

Online DJ

Does the prospect of becoming a DJ entice you? This section lets you become your own DJ, offering your choice of music videos to listeners (known as guests). When you create a room, a room key is generated. You share this key with your guests which gives them access to your room. As the DJ, what you play will be offered to your guests. By joining the room, you can also text chat with everyone in the room. Neat!

Other Features

Before looking at some of the other functionality, let’s discuss memory usage.

I recently put Headset under the microscope. Like Kaku, Headset is a YouTube player although it’s implemented in a very different way.

One thing that cropped up with Headset was its excessive memory usage, sometimes topping 1GB of RAM. Kaku is much more frugal with memory, consuming less than 300MB in typical usage.

Kaku

The settings section offers the following functionality:

  • Enable desktop notifications.
  • Keep Kaku on top. Unlike vlc, this feature actually works!
  • Enable Chatroom. The chatroom is available with the Online DJ functionality.
  • Internationalization support – translations are available for Arabic, Chinese, Czech, Dutch, French, German, Italian, Portuguese, Portuguese (Brazilian), Russian, Spanish, and other languages.
  • Choose the Top Ranking for different countries. Bizarrely, the application defaults to United Arab Emirates.
  • Change the default searcher: YouTube, Vimeo, SoundCloud, Mixcloud, or all of them.
  • Default track format: Best Video, or Best Audio.
  • Import YouTube playlists.
  • Backup data locally or to Dropbox.
  • Sync data locally or from Dropbox.
  • Update Player.
  • Reset the database.

The software also offers playlists, with the ability to rename/remove them.

Summary

Kaku is a capable but not a great music/video player. There’s a good range of streaming services available to use. And it’s much more pleasurable to watch music videos using Kaku than on the streaming service website itself. While the software works well, it’s a tad rough round the edges and idiosyncratic.

Overall, I’m left with a feeling of meh! The Online DJ and text chat features are innovative, although clunky.

There’s lots of functionality I’d love added. I’d like shuffle playback, better keyboard shortcuts, an option to choose the playback resolution, and tons more. While the software purports to show the best video quality available, this clearly isn’t the case.

The player is under active development. I’ll be keeping my eagle eyes on new releases.

Website: kaku.rocks
Support: GitHub code repository
Developer: Chia-Lung Chen and contributors
License: MIT License


Compact i.MX6 UL gateway offers WiFi, 4G, LoRa, and ZigBee

Forlinx’s “FCU1101” is a compact embedded gateway with -35 to 70℃ support that runs Linux on an i.MX6 UL and offers 4x isolated RS485 ports, a LAN port, and WiFi, 4G, LoRa, and ZigBee.

A year ago, the wireless-studded, serial-connected FCU1101 might have been called an IoT gateway, but that name seems to be going out of fashion. A similar system with a more powerful processor than the FCU1101’s power-efficient, Cortex-A7-based NXP i.MX6 UltraLite (UL) might today be called an edge server. Forlinx calls its mini-PC-sized, 105 x 100 x 33mm device what we used to call them back in the day: an embedded computer.

FCU1101 without antennas

The FCU1101 is notable for being one of the few embedded systems we’ve seen without a USB port. Instead, the device turns its limited real estate over to 4x RS485 ports deployed with terminal connectors. The serial ports are 1.5kV-isolated and protected against electrostatic discharge to ESD Level 4. They also support Modbus protocols.

FCU1101 with antennas

The other major component is a set of four wireless radios and three external antennas. The 2.4GHz ZigBee and 433MHz LoRa modems share an antenna. The 4G module and the 802.11b/g/n radio with optional STA and AP mode support have their own antennas.

The antenna for the Netcom 4G module, which lists support only for Chinese carriers, is a tethered, standalone unit to avoid cross-interference. There’s also a SIM slot for 4G and a 10/100 Ethernet port. The spec list suggests there is a similar system available that adds GPS and an audio interface.

FCU1101 front detail view

The FCU1101’s 528MHz, single-core i.MX6 UL SoC is backed up with 256MB LVDDR3 RAM, 256MB to 1GB NAND flash, and a microSD slot. There are also reset and boot buttons, 2x LEDs, and an RTC.

FCU1101 rear detail view

The system supports -35 to 70℃ temperatures (0 to 70℃ when using WiFi) and appears to offer two different power inputs, neither of which is shown in the detail views above. There’s a 12V input that is said to support 9-36V and offer anti-reverse and over-current protection, as well as a 24V input with 12-24V support and reverse protection.

The Linux 3.14.38 stack ships with a Yaffs2 file system, MQTT, and a wide range of web server and network protocol support.

Further information

No pricing or availability information was provided for the FCU1101. More information may be found on the Forlinx FCU1101 product page.

