How to Mount Windows Partitions in Ubuntu

If you are running a dual-boot of Ubuntu and Windows, you might sometimes fail to access a Windows partition (formatted with the NTFS or FAT32 filesystem type) from Ubuntu after hibernating Windows (or when it is not fully shut down).

This is because Linux cannot safely mount a hibernated Windows partition (a full discussion of why is beyond the scope of this article).

In this article, we will show how to mount a Windows partition in Ubuntu and explain a few useful methods of solving the above issue.

Mount Windows Using the File Manager

The first and safest way is to boot into Windows and fully shut down the system. Once you have done that, power on the machine and select the Ubuntu entry from the GRUB menu to boot into Ubuntu.

After a successful logon, open your file manager, and from the left pane, find the partition you wish to mount (under Devices) and click on it. It should be automatically mounted and its contents will show up in the main pane.

Mounted Windows Partition

Mount Windows Partition in Read Only Mode From Terminal

The second method is to manually mount the filesystem in read-only mode. Usually, all user-mounted filesystems are located under the directory /media/$USERNAME/.

Ensure that you have a mount point in that directory for the Windows partition (in this example, $USERNAME=aaronkilik and the Windows partition is mounted to a directory called WIN_PART, a name which corresponds to the device label):

$ cd /media/aaronkilik/
$ ls -l
List Mounted Partitions

In case the mount point is missing, create it using the mkdir command as shown (if you get “permission denied” errors, use sudo command to gain root privileges):

$ sudo mkdir /media/aaronkilik/WIN_PART

To find the device name, list all block devices attached to the system using the lsblk utility.

$ lsblk
List Block Devices

Then mount the partition (/dev/sdb1 in this case) in read-only mode to the above directory as shown.

$ sudo mount -t vfat -o ro /dev/sdb1 /media/aaronkilik/WIN_PART		#fat32
OR
$ sudo mount -t ntfs-3g -o ro /dev/sdb1 /media/aaronkilik/WIN_PART	#ntfs

Now, to get the mount details (mount point, options, etc.) of the device, run the mount command without any options and pipe its output to the grep command.

$ mount | grep "sdb1" 
List Windows Partition

After successfully mounting the device, you can access files on your Windows partition using any application in Ubuntu. But remember that, because the device is mounted read-only, you will not be able to write to the partition or modify any files.
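
When you are done, unmount the partition as shown (the mount point directory can stay for future use):

$ sudo umount /media/aaronkilik/WIN_PART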

Also note that if Windows is in a hibernated state and you write to or modify files on the Windows partition from Ubuntu, all your changes may be lost after a reboot, which is another reason the read-only mount is the safe choice.
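
If you are certain the hibernated Windows session can be discarded, the ntfs-3g driver provides a remove_hiberfile mount option that deletes the hibernation file so an NTFS partition can be mounted read-write. Use it with care, as the saved Windows session will be lost:

$ sudo mount -t ntfs-3g -o remove_hiberfile /dev/sdb1 /media/aaronkilik/WIN_PART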

For more information, refer to the Ubuntu community help wiki: Mounting Windows Partitions.

That’s all! In this article, we have shown how to mount a Windows partition in Ubuntu. Use the feedback form below to reach us with any questions or comments, or if you face any unique challenges.

Source

An Introduction to the Machine Learning Platform as a Service | Linux.com

Machine-Learning-Platform-as-a-Service (ML PaaS) is one of the fastest growing services in the public cloud. It delivers efficient lifecycle management of machine learning models.

At a high level, there are three phases involved in training and deploying a machine learning model. These phases remain the same from classic ML models to advanced models built using sophisticated neural network architecture.

Provision and Configure Environment

Before the actual training takes place, developers and data scientists need a fully configured environment with the right hardware and software configuration.

Hardware configuration may include high-end CPUs, GPUs, or FPGAs that accelerate the training process. Configuring the software stack deals with installing a diverse set of frameworks and tools that are specific to the model.

These fully configured environments need to run as a cluster where training jobs may run in parallel. Large datasets need to be made locally available to each of the machines in the cluster to speed up access. Provisioning, configuring, orchestrating, and terminating the compute resources is a complex task.

The development and data science teams rely on internal DevOps teams to tackle this problem. DevOps teams automate the steps through traditional provisioning and configuration tools such as Chef, Puppet, and Ansible. ML training jobs cannot start until the DevOps teams hand off the environment to the data science team.

Training & Tuning an ML Model

Once the testbed is ready, data scientists perform the steps of data preparation, training, hyperparameter tuning, and evaluation of the model. This is an iterative process where each step may be repeated multiple times until the results are satisfactory.

During the training and tuning phase, data scientists record multiple metrics, such as the number of nodes in a layer, the number of layers in a deep learning neural network, the learning rate used by an optimizer, and the scoring technique along with the actual score. These metrics are useful in choosing the right combination of parameters that deliver the most accurate results.

The available frameworks and tools don’t include the mechanism for logging and recording the metrics critical to the collaborative and iterative training process. Data science teams build their own logging engine for recording and tracking critical metrics. Since this engine is external to the environment, they need to maintain the logging infrastructure and visualization tools.

Serving and Scaling an ML Model

Once the data science team arrives at a fully trained model, it is made available for developers to use in production. The model, which is typically a serialized object, needs to be wrapped in a REST web service that can be consumed through standard HTTP client libraries and SDKs.

Since models are continuously trained and tuned, there will be new versions published often by the data science teams. DevOps is expected to implement a CI/CD pipeline to deploy the ML artifacts in production. They may have to perform blue/green deployments to find the best model for production usage.

The web service exposing the ML model has to scale to meet the demand of the consumers. It also needs to be highly secure aligning with the rest of the policies defined by central IT.

To meet these requirements, DevOps teams are turning to containers and Kubernetes to manage the CI/CD pipelines, security, and scalability of ML models. They are using tools such as Jenkins or Spinnaker to integrate the data processing pipeline with software delivery pipelines.

The Challenge for Developers and Data Scientists

Of the above three phases, development and data science teams find the first and last extremely challenging to deal with. Their strength is training, tuning, and evolving the most accurate model, not dealing with infrastructure and software configuration. The heavy reliance on DevOps teams introduces an additional layer of dependency for these teams.

Developers are productive when they can use APIs for automating repetitive tasks. Unfortunately, there are no standard, portable, well-defined APIs for the first and the last phases of ML model development and deployment.

The rise of ML PaaS

ML PaaS delivers the best of both worlds — iterative software development and model management — to developers and data scientists. It removes the friction involved in configuring and provisioning environments for training and serving machine learning models.

The best thing about an ML PaaS is the availability of APIs that abstract the underlying hardware and software stack. Developers can call a couple of APIs to spin up a large cluster of GPU-based machines fully configured with data preparation tools, training frameworks, and monitoring tools to kick off a complex training job. They will also be able to take advantage of data processing pipelines for automating ETL jobs. When the model is ready, they will publish the latest version as a developer-facing web service without worrying about packaging and deploying the artifacts and dependencies.

Public cloud providers have all the required building blocks to deliver ML PaaS. They are now exposing an abstract service that connects the dots between compute, storage, networks, and databases to bring a unified service to developers. Even though the service can be accessed through the console, the real value of the platform is exploited through the CLI and SDK. DevOps teams can integrate the CLI into automation while developers consume the SDK from IDEs such as Jupyter Notebooks, VS Code, or PyCharm.

The SDK simplifies the creation of data processing and software delivery pipelines for developers. By changing a single parameter, they would be able to switch from a CPU-based training cluster to a powerful GPU cluster running the latest NVIDIA K80 or P100 accelerators.
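
As a purely hypothetical illustration (the mlpaas CLI below is invented for this article, not any real provider’s tool), the switch could look like this:

$ mlpaas train --cluster cpu-standard --nodes 8 --job train_model.py   # hypothetical CPU run
$ mlpaas train --cluster gpu-p100 --nodes 8 --job train_model.py       # same job on a GPU cluster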

Cloud providers such as Amazon, Google, IBM, and Microsoft have built robust ML PaaS offerings.

Source

Zipping files on Linux: the many variations and how to use them

There are quite a few interesting things that you can do with “zip” commands other than compress and uncompress files. Here are some other zip options and how they can help.

Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we’re going to look at standard zipping and unzipping as well as some other interesting zipping options.

The basic zip command

First, let’s look at the basic zip command. It uses essentially the same compression algorithm as gzip, but there are a couple of important differences. For one thing, gzip is used only for compressing a single file, whereas zip can both compress files and join them together into an archive. For another, gzip zips “in place”; that is, it replaces the original file with the compressed copy rather than leaving the original alongside it. Here’s an example of gzip at work:

$ gzip onefile
$ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz

And here’s zip. Notice how this command requires that a name be provided for the zipped archive, whereas gzip simply uses the original file name and adds the .gz extension.

$ zip twofiles.zip file*
  adding: file1 (deflated 82%)
  adding: file2 (deflated 82%)
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip

Notice also that the original files are still sitting there.

The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.

$ zip mybin.zip ~/bin/*
  adding: bin/1 (deflated 26%)
  adding: bin/append (deflated 64%)
  adding: bin/BoD_meeting (deflated 18%)
  adding: bin/cpuhog1 (deflated 14%)
  adding: bin/cpuhog2 (stored 0%)
  adding: bin/ff (deflated 32%)
  adding: bin/file.0 (deflated 1%)
  adding: bin/loop (deflated 14%)
  adding: bin/notes (deflated 23%)
  adding: bin/patterns (stored 0%)
  adding: bin/runme (stored 0%)
  adding: bin/tryme (deflated 13%)
  adding: bin/tt (deflated 6%)

The unzip command

The unzip command will recover the contents from a zip file and, as you’d likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.

$ unzip twofiles.zip
Archive:  twofiles.zip
  inflating: file1
  inflating: file2
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip

The zipcloak command

The zipcloak command encrypts a zip file, prompting you to enter a password twice (to help ensure you don’t “fat finger” it), and leaves the file in place. You can expect the file size to vary a little from the original.

$ zipcloak twofiles.zip
Enter password:
Verify password:
encrypting: file1
encrypting: file2
$ ls -l
total 204
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21313 Jan 15 13:46 twofiles.zip   <== slightly larger than
                                                           unencrypted version

Keep in mind that the original files are still sitting there unencrypted.
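
To extract files from the now-encrypted archive, run unzip as usual; it will prompt for the password:

$ unzip twofiles.zip
Archive:  twofiles.zip
[twofiles.zip] file1 password: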

The zipdetails command

The zipdetails command is going to show you details — a lot of details about a zipped file, likely a lot more than you care to absorb. Even though we’re looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all “metadata.” We don’t see the contents of the files.

$ zipdetails twofiles.zip

0000 LOCAL HEADER #1       04034B50
0004 Extract Zip Spec      14 '2.0'
0005 Extract OS            00 'MS-DOS'
0006 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
0008 Compression Method    0008 'Deflated'
000A Last Mod Time         4E2F6B24 'Tue Jan 15 13:25:08 2019'
000E CRC                   F1B115BD
0012 Compressed Length     00002904
0016 Uncompressed Length   0000E2A5
001A Filename Length       0005
001C Extra Length          001C
001E Filename              'file1'
0023 Extra ID #0001        5455 'UT: Extended Timestamp'
0025   Length              0009
0027   Flags               '03 mod access'
0028   Mod Time            5C3E2584 'Tue Jan 15 13:25:08 2019'
002C   Access Time         5C3E27BB 'Tue Jan 15 13:34:35 2019'
0030 Extra ID #0002        7875 'ux: Unix Extra Type 3'
0032   Length              000B
0034   Version             01
0035   UID Size            04
0036   UID                 000003E8
003A   GID Size            04
003B   GID                 000003E8
003F PAYLOAD

2943 LOCAL HEADER #2       04034B50
2947 Extract Zip Spec      14 '2.0'
2948 Extract OS            00 'MS-DOS'
2949 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
294B Compression Method    0008 'Deflated'
294D Last Mod Time         4E2F6C56 'Tue Jan 15 13:34:44 2019'
2951 CRC                   EC214569
2955 Compressed Length     00002913
2959 Uncompressed Length   0000E635
295D Filename Length       0005
295F Extra Length          001C
2961 Filename              'file2'
2966 Extra ID #0001        5455 'UT: Extended Timestamp'
2968   Length              0009
296A   Flags               '03 mod access'
296B   Mod Time            5C3E27C4 'Tue Jan 15 13:34:44 2019'
296F   Access Time         5C3E27BD 'Tue Jan 15 13:34:37 2019'
2973 Extra ID #0002        7875 'ux: Unix Extra Type 3'
2975   Length              000B
2977   Version             01
2978   UID Size            04
2979   UID                 000003E8
297D   GID Size            04
297E   GID                 000003E8
2982 PAYLOAD

5295 CENTRAL HEADER #1     02014B50
5299 Created Zip Spec      1E '3.0'
529A Created OS            03 'Unix'
529B Extract Zip Spec      14 '2.0'
529C Extract OS            00 'MS-DOS'
529D General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
529F Compression Method    0008 'Deflated'
52A1 Last Mod Time         4E2F6B24 'Tue Jan 15 13:25:08 2019'
52A5 CRC                   F1B115BD
52A9 Compressed Length     00002904
52AD Uncompressed Length   0000E2A5
52B1 Filename Length       0005
52B3 Extra Length          0018
52B5 Comment Length        0000
52B7 Disk Start            0000
52B9 Int File Attributes   0001
     [Bit 0]               1 Text Data
52BB Ext File Attributes   81B40000
52BF Local Header Offset   00000000
52C3 Filename              'file1'
52C8 Extra ID #0001        5455 'UT: Extended Timestamp'
52CA   Length              0005
52CC   Flags               '03 mod access'
52CD   Mod Time            5C3E2584 'Tue Jan 15 13:25:08 2019'
52D1 Extra ID #0002        7875 'ux: Unix Extra Type 3'
52D3   Length              000B
52D5   Version             01
52D6   UID Size            04
52D7   UID                 000003E8
52DB   GID Size            04
52DC   GID                 000003E8

52E0 CENTRAL HEADER #2     02014B50
52E4 Created Zip Spec      1E '3.0'
52E5 Created OS            03 'Unix'
52E6 Extract Zip Spec      14 '2.0'
52E7 Extract OS            00 'MS-DOS'
52E8 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
52EA Compression Method    0008 'Deflated'
52EC Last Mod Time         4E2F6C56 'Tue Jan 15 13:34:44 2019'
52F0 CRC                   EC214569
52F4 Compressed Length     00002913
52F8 Uncompressed Length   0000E635
52FC Filename Length       0005
52FE Extra Length          0018
5300 Comment Length        0000
5302 Disk Start            0000
5304 Int File Attributes   0001
     [Bit 0]               1 Text Data
5306 Ext File Attributes   81B40000
530A Local Header Offset   00002943
530E Filename              'file2'
5313 Extra ID #0001        5455 'UT: Extended Timestamp'
5315   Length              0005
5317   Flags               '03 mod access'
5318   Mod Time            5C3E27C4 'Tue Jan 15 13:34:44 2019'
531C Extra ID #0002        7875 'ux: Unix Extra Type 3'
531E   Length              000B
5320   Version             01
5321   UID Size            04
5322   UID                 000003E8
5326   GID Size            04
5327   GID                 000003E8

532B END CENTRAL HEADER    06054B50
532F Number of this disk   0000
5331 Central Dir Disk no   0000
5333 Entries in this disk  0002
5335 Total Entries         0002
5337 Size of Central Dir   00000096
533B Offset to Central Dir 00005295
533F Comment Length        0000
Done

The zipgrep command

The zipgrep command uses a grep-type feature to locate particular content in your zipped files. If the archive is encrypted, you will need to enter the encryption password for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.

$ zipgrep hazard twofiles.zip file1
[twofiles.zip] file1 password:
Certain pesticides should be banned since they are hazardous to the environment.

The zipinfo command

The zipinfo command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions.

$ zipinfo twofiles.zip
Archive:  twofiles.zip
Zip file size: 21313 bytes, number of entries: 2
-rw-rw-r--  3.0 unx    58021 Tx defN 19-Jan-15 13:25 file1
-rw-rw-r--  3.0 unx    58933 Tx defN 19-Jan-15 13:34 file2
2 files, 116954 bytes uncompressed, 20991 bytes compressed:  82.1%

The zipnote command

The zipnote command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this:

$ zipnote twofiles.zip
@ file1
@ (comment above this line)
@ file2
@ (comment above this line)
@ (zip file comment below this line)

If you want to add comments, write the output from the zipnote command to a file:

$ zipnote twofiles.zip > comments

Next, edit the file you’ve just created, inserting your comments above the (comment above this line) lines. Then add the comments using a zipnote command like this one:

$ zipnote -w twofiles.zip < comments

The zipsplit command

The zipsplit command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you’re trying to fit it onto a small thumb drive. The easiest way to do this seems to be to specify the maximum size for each of the zipped file portions. This size must be large enough to accommodate the largest included file.

$ zipsplit -n 12000 twofiles.zip
2 zip files will be made (100% efficiency)
creating: twofile1.zip
creating: twofile2.zip
$ ls -l twofile*.zip
-rw-rw-r-- 1 shs shs  10697 Jan 15 14:52 twofile1.zip
-rw-rw-r-- 1 shs shs  10702 Jan 15 14:52 twofile2.zip
-rw-rw-r-- 1 shs shs  21377 Jan 15 14:27 twofiles.zip

Notice how the resulting split archives are sequentially named “twofile1” and “twofile2”.

Wrap-up

The zip command, along with some of its zipping compatriots, provides a lot of control over how you generate and work with compressed file archives.

Source

Get started with Cypht, an open source email client

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year’s resolutions, the itch to start the year off right, and of course, an “out with the old, in with the new” attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn’t have to be that way.

Here’s the fourth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

Cypht

We spend a lot of time dealing with email, and effectively managing your email can make a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

It uses a simplified version of HTML messages to display mail, or you can set it to view a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. Most rich-text mail will come through as a plain-text view, meaning lots of raw links and hard-to-read content. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Users can use a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will disappear after that session. If you log out and back in without saving, all your changes will be lost and you’ll end up with the settings you started with. This does make it easy to experiment, and if you need to reset things, simply logging out without saving will bring back the previous setup when you log back in.

Installing Cypht locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.

Source

Professional Audio Closer to Linux – OSnews

Browsing Freshmeat tonight, the premier online Linux software repository, I came across these two great (and brand new) applications, ReBorn and ReZound. ReBorn, a ReBirth clone that will soon become open source according to the developer, provides a software emulation of three of Roland’s most famous electronic musical instruments. It got me thinking about how much more viable Linux is today as a professional (or semi-professional) audio platform than it was two years ago. Update: On a related multimedia note, WinAMP 3.0 for Windows was released yesterday. While ALSA, and especially OSS, still have some limitations, it seems that a number of great audio apps are emerging. Unfortunately, with only 4-5 exceptions, the same does not apply to professional 3D/rendering/video/vector-imaging applications. Linux still does not have anything similar to Apple’s iMovie or personalStudio for casual users, or Adobe Premiere, or Cinema4D/Bryce/etc., or a really professional DTP system, or something with the power of Illustrator/FireWorks/Freehand.

However, let’s browse together these great audio apps that are available today. Some of them might actually need a helping hand to get further developed.

    • ReBorn – A Linux version of the Windows/Mac program ReBirth, providing a software emulation of three of Roland Corporation’s most famous electronic musical instruments: the TB303 Bassline, the TR808 Rhythm Composer and the TR909 Rhythm Composer. Also thrown in are four audio effects, individual mixers and a programmable sequencer. ReBorn is fully compatible with the ReBirth .rbs song file format. (UPDATE: The project is now dead due to legal issues.)
    • ReZound – Aims to be a stable, open source, graphical audio file editor, primarily for, but not limited to, the Linux operating system.
    • Anthem – An advanced open source MIDI sequencer which allows you to record, edit, and playback music using a sophisticated and acclaimed object oriented song technology.
    • Ardour – A professional multitrack/multichannel audio recorder and DAW for Linux, using ALSA-supported audio interfaces. It supports up to 32 bit samples, 24+ channels at up to 96kHz, full MMC control, a non-destructive, non-linear editor, and LADSPA plugins.
    • DAP – A comprehensive audio sample editing and processing suite. It currently supports AIFF and AIFF-C audio files, 8 or 16 bit resolution, and 1, 2 or 4 channels of audio data. The package offers comprehensive editing, playback, and recording facilities including full time stretch resampling, manual data editing, and a reasonably complete DSP processing suite.
    • GNUsound – A sound editor for Linux/x86. It supports multiple tracks, multiple outputs, and 8, 16, or 24/32 bit samples. It can read a number of audio formats through libaudiofile, and saves them as WAV.
    • Bristol – A synthesizer emulation package. It includes a Moog Mini, Moog Voyager, Hammond B3, Prophet 5, Juno 6, DX 7, and others.
    • Audacity – A cross-platform multitrack audio editor. It allows you to record sounds directly or to import Ogg, WAV, AIFF, AU, IRCAM, or MP3 files. It features a few simple effects, all of the editing features you should need, and unlimited undo. The GUI was built with wxWindows and the audio I/O currently uses OSS under Linux. We recently reviewed its version 1.0.
    • TerminatorX – A realtime audio synthesizer that allows you to “scratch” on digitally sampled audio data (*.wav, *.au, *.mp3, etc.) the way hiphop-DJs scratch on vinyl records. It features multiple turntables, realtime effects (built-in as well as LADSPA plugin effects), a sequencer, and an easy-to-use GTK+ GUI.
    • LAoE – A graphical audiosample-editor, based on multi-layers, floating-point samples, volume-masks, variable selection-intensity, and many plugins suitable to manipulate sound, such as filtering, retouching, resampling, graphical spectrogram editing by brushes and rectangles, sample-curve editing by freehand-pen and spline and other interpolation curves, effects like reverb, echo, compress, expand, pitch-shift, time-stretch, and much more.
    • MidiMountain – A sequencer to edit standard MIDI files. Its easy-to-use interface should help beginners to edit and create MIDI songs (sequences), and it is designed to edit every definition known to standard MIDI files and the MIDI transfer protocol, from easy piano roll editing to changing binary system exclusive messages.
    • GNoise – A GTK+ based wave file editor. It uses a display cache and a double-buffered display for maximum speed with large files. It supports common editing functions such as cut, copy, paste, fade in/out, normalize, and more, with unlimited undo.
    • MusE – A Qt 2.1-based MIDI sequencer for Linux with editing and recording capabilities. While the sequencer is playing you can edit events in realtime with the pianoroll editor or the score editor. Recorded MIDI events can be grouped as parts and arranged in the arrange editor.
    • Rosegarden – An integrated MIDI sequencer and musical notation editor. The stable version (2.1) is a simple application for any Unix/X system. The development branch (Rosegarden-4) is an entirely new KDE application.
    • KGuitar – A guitarist suite for KDE. It’s based on MIDI concepts and includes tabulature editor, chord construction helpers, and importing and exporting song formats.
    • Swami – An instrument patch file editor using SoundFont files that allows you to create and distribute instruments from audio samples used for composing music. It uses iiwusynth, a software synthesizer, which has real time effect control, support for modulators, and routable audio via Jack.
    • SoundTracker – A pattern-oriented music editor (similar to the DOS program ‘FastTracker’). Samples are lined up on tracks and patterns which are then arranged to a song. Supported module formats are XM and MOD; the player code is the one from OpenCP. A basic sample recorder and editor is also included.
    • Tutka – A tracker style MIDI sequencer for Linux (and other systems; only Linux is supported at this time though). It is similar to programs like SoundTracker, ProTracker and FastTracker except that it does not support samples and is meant for MIDI use only.
    • amSynth – A realtime polyphonic analogue modeling synthesizer. It provides a virtual analogue synthesizer in the style of the classic Moog Minimoog/Roland Junos. It offers an easy-to-use interface and synth engine, while still creating varied sounds. It runs as a standalone application, using either the ALSA audio and MIDI sequencer system or the plain OSS devices.
    • Cheese Tracker – A program to create module music that aims to have an interface and feature set similar to that of Impulse Tracker. It also has some advantages such as oscilloscopes over each pattern track, more detailed sample info, a more detailed envelope editor, improved filters, and effect buffers (chorus/reverb) with individual send levels per channel.
    • SpiralSynth Modular – An object orientated modular softsynth / sequencer / sampler. Audio or control data can be freely passed between the plugins, and is all treated the same.
    • gAlan – An audio-processing tool for X windows and Win32. It allows you to build synthesizers, effects chains, mixers, sequencers, drum-machines, etc. in a modular fashion by linking together icons representing primitive audio-processing components.
    • Xsox – An X interface for sox. Record or play many types of sound files. Cut, copy, paste, add effects, convert file types etc.
    • Voodoo Tracker – A project that aims to harness and extend the power of conventional trackers. Imagine a self-contained digital studio, complete and ready for your modern music needs. Additionally, Voodoo will provide an interface that is designed for live performances.
    • SLab – Direct to Disk Audio Recording Studio is a free HDD audio recording system for Linux operating systems, written using Tcl/Tk. SLab can record up to 64 tracks.
    • BeatForce – A computer DJing system, with two players with independent playlists, a song database, mixer, sampler, etc. It was planned as a feature-enhanced Linux replacement for BPM-Studio from Alcatech.

Do you know any more professional or simply fully working audio applications for Linux? Share your knowledge with us (but do not mention plain audio players please). Dave Philips has a web page with many projects mentioned too.

Source

Bash Shell Utility Reaches 5.0 Milestone | Linux.com

As we look forward to the release of Linux Kernel 5.0 in the coming weeks, we can enjoy another venerable open source technology reaching the 5.0 milestone: the Bash shell utility. The GNU Project has launched the public version 5.0 of GNU/Linux’s default command language interpreter. Bash 5.0 adds new shell variables and other features and also repairs several major bugs.

New shell variables in Bash 5.0 include BASH_ARGV0, which “expands to $0 and sets $0 on assignment,” says the project. The EPOCHSECONDS variable expands to the time in seconds since the Unix epoch, and EPOCHREALTIME does the same, but with microsecond granularity.
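
A quick sketch of the new variables in action, assuming a Bash 5.0 shell:

$ echo $EPOCHSECONDS    # whole seconds since the Unix epoch
$ echo $EPOCHREALTIME   # the same clock, with microsecond granularity
$ BASH_ARGV0=newname    # assigning to BASH_ARGV0 also changes $0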

New features include a “history -d” built-in function that can remove ranges of history entries and understands negative arguments as offsets from the end of the history list. There is also a new option called “localvar_inherit” that allows local variables to inherit the value of a variable with the same name at the nearest preceding scope.
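
For example, both of these now work in Bash 5.0:

$ history -d -1        # delete the most recent history entry
$ history -d 100-120   # delete entries 100 through 120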

A new shell option called “assoc_expand_once” causes the shell to attempt to expand associative array subscripts only once, which may be required when they are used in arithmetic expressions. Among many other new features, a new option is available that can disable sending history to syslog at runtime. In addition, the “globasciiranges” shell option is now enabled by default.
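
All of these are ordinary shell options, toggled with the shopt built-in:

$ shopt -s assoc_expand_once   # expand associative array subscripts only once
$ shopt -s localvar_inherit    # local variables inherit values from the preceding scope
$ shopt -u globasciiranges     # opt back out of the new default, if needed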

Bash 5.0 also fixes several major bugs. It overhauls how nameref variables resolve and fixes “a number of potential out-of-bounds memory errors discovered via fuzzing,” says the GNU Project’s readme. Changes have been made to the “expansion of $@ and $* in various contexts where word splitting is not performed to conform to a Posix standard interpretation.” Other fixes resolve corner cases for Posix conformance.

Finally, Bash 5.0 introduces a few incompatibilities compared to the most recent Bash 4.4.x. For example, changes to how nameref variables are resolved can cause different behaviors for some uses of namerefs.

Bash to basics

Bash (Bourne-Again Shell) may be 5.0 in development years, but it’s a lot older in Earth orbits. The utility will soon celebrate its 30th anniversary: Brian Fox released the first Bash beta in June 1989.

Over the years, Bash has expanded upon the POSIX shell spec with interactive command line editing, history substitution, brace expansion, and on some architectures, job control features. It has also borrowed features from the Korn shell (ksh) and the C shell (csh). Most sh scripts can be run by Bash without modification, says the GNU Project.

Bash and other Bourne-based shell utilities have largely survived the introduction of GUI alternatives to the command line such as Git GUI. Experienced Linux developers — and especially sysadmins — tend to prefer the greater speed and flexibility of working directly with the command line. There are also situations where the GUI will spit you back to the command line anyway.

It’s really a matter of whether you will be spending enough time doing Linux development or administration to make it worthwhile to learn the commands. Besides, in a movie, isn’t it more exciting to watch the hacker frantically clacking away at the command line to disable the nuclear weapon rather than clicking options off a menu? Clacking rules!

Bash 5.0 is available for download from the GNU Project’s Bash 5.0 readme page.

Source

Essential System Tools: Krusader – KDE file manager

This is the latest in our series of articles highlighting essential system tools. These are small, indispensable utilities, useful for system administrators as well as regular users of Linux based systems. The series examines both graphical and text based open source utilities. For this article, we’ll look at Krusader, a free and open source graphical file manager. For details of all tools in this series, please check the table at the summary page of this article.

Krusader is an advanced, twin-panel (commander-style) file manager designed for KDE Plasma. Krusader also runs on other popular Linux desktop environments such as GNOME.

Besides comprehensive file management features, Krusader is almost completely customizable, fast, seamlessly handles archives, and offers a huge feature set.

Krusader is implemented in C++.

Installation

Popular Linux distros provide convenient packages for Krusader, so you shouldn’t need to compile the source code.
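
On Debian- or Ubuntu-based systems, for example, installation is a one-liner (the package name may differ on other distros):

$ sudo apt install krusader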

If you do want to compile the source code, bear in mind recent versions of Krusader use libraries like Qt5 and KF5, and don’t work on KDE Plasma 4 or older.

On one of our vanilla test machines, neither KDE Plasma nor any KDE applications are installed. If you don’t currently use any KDE applications, remember that installing Krusader will drag in many other packages. Krusader’s natural environment is KDE Plasma 5, because it depends on services provided by the KDE Frameworks 5 base libraries.

The image below illustrates this point sweetly. Installing Krusader without Plasma requires 36 packages to be installed, consuming a whopping 148 MiB of hard disk space.

Krusader Installation

The image below offers a stark contrast. Here, a different test machine, ‘pluto’, is a vanilla Linux installation running KDE Plasma 5. Installing Krusader pulls in no other packages and consumes only 14.90 MiB of disk space.

Krusader-KDE-install

Some of Krusader’s functionality is sourced from external tools. On the first run, Krusader searches for available tools in your $PATH. Specifically, it checks for a diff utility (kdiff3, kompare or xxdiff), an email client (Thunderbird or KMail), a batch renamer (KRename), and a checksum utility (md5sum). It also searches for (de)compression tools (tar, gzip, bzip2, lzma, xz, lha, zip, unzip, arj, unarj, unace, rar, unrar, rpm, dpkg, and 7z). You’re then presented with a Konfigurator window which lets you configure the file manager.

Krusader’s internal editor requires that Kate be installed. Kate is a competent text editor developed by KDE.

In Operation

Here’s Krusader in operation.

Krusader

Let’s break down the user interface. At the top is a standard menu bar which allows access to the features and functions of the file manager. “Useractions” seems a quirky entry.

Then there’s the main tool bar which offers access to commonly used functions. There’s a location tool bar, information label, and panel tool bars. The majority of the window is taken up by the left and right panels. Having two panels makes dragging and dropping files easy.

At the bottom there are totals labels, tabs, tab controls, function key buttons, and a status bar. You can also show a command line, but that’s not enabled by default.

Krusader’s tabs let you switch between different directories in one panel, without affecting the directory displayed in the adjacent panel.

Places, favorites and volumes are available in each panel, not on a common side bar.

Krusader offers a wide range of features. We’ll look at some of the standout features for illustration purposes — there are too many to go into great detail on them all! We’re also not going to cover the basic file management operations; just take them for granted.

KRename is integrated with Krusader. Another highlight is BookMan, Krusader’s bookmark tool for bookmarking folders and local and remote URLs, and later returning to them at the click of a button.

There are built-in tree views, file previews, file split and join, as well as compress/decompress functions.

Krusader can launch a program by clicking on a data file. For example, clicking on an R file launches that document in RStudio.

With profiles you can save and restore your favorite settings. Several features support profiles; you can have, e.g., different panel profiles (work, home, remote connections, etc.), search profiles, synchroniser profiles, and so on.

KruSearcher

One of the strengths of Krusader is its ability to quickly locate files both locally and on remote file systems. There’s a General Section which covers most searches you’ll want to perform, but if you need additional functionality there’s an Advanced section too.

Let’s take a very simple search. We’re looking to find files in /home/sde/R (and sub-directories) that match the suffix .rdx.

Krusader-KruSearcher

There’s a separate tab that displays the results of the search.

Krusader-KruSearcher-Results

Of course, this is an extremely basic search. You can append multiple searches in the “Search for” bar, with or without wildcards, and exclude terms with the | character. You also have the option to specify multiple directories to search or exclude, as well as the ability to search for patterns within files (like grep). There’s recursive searching, the option to search archives, and the option to follow soft-links during a search.

But that’s not the extent of the search functionality. With the Advanced tab, you can restrict the search to files matching a specific size or size range, date criteria, and even ownership.

Krusader-KruSearcher-Advanced

In the bottom left of each dialog box, there’s a profiles button. This can be a time-saver if you often perform the same search operation, as it allows you to save search settings.

Synchronise Folders

This function compares two directories with all subdirectories and shows the differences between them. It’s accessed from Tools | Synchronise Folders (or from the keyboard shortcut Ctrl+Y).

The tool lets you synchronize the files and directories; one of the panels can even be a remote location.

Here’s a comparison of two directories stored on different partitions.

Krusader-Synchronise

The image below shows that to synchronise the two directories, 45 files will be copied.

Krusader-Synchronise-Action

The Synchronizer is not the only way to compare files; there are other compare functions available. Specifically, you can compare files by content and compare directories. The compare-by-content functionality (accessed from the menu bar via “File | Compare by Content”) calls an external graphical difference utility: Kompare, KDiff3, or xxdiff.

Disk Usage

A disk usage analyzer is a utility that helps users visualize the disk space being used by each folder and file on a hard disk or other storage media. This type of application often generates a graphical chart to help the visualization process.

Disk usage analyzers are popular with system administrators as one of their essential tools to prevent important directories and partitions from running out of space. Having a hard disk with insufficient free space can often have a detrimental effect on the system’s performance. It can even stop users from logging on to the system, or, in extreme circumstances, cause the system to hang.

However, disk usage analyzers are not just useful tools for system administrators. While modern hard disks are terabytes in size, there are many folk who seem to forever run out of hard drive space. Often the culprit is a large video and/or audio collection, bloated software applications, or games. Sometimes the hard disk is also full of data that users have no particular interest in. For example, left unchecked, log files and package archives can consume large chunks of hard disk space.

Krusader offers built-in disk usage functionality. It’s accessed from Tools | Disk Usage (or with the keyboard shortcut Alt+Shift+S).

Here’s an image of the tool running.

Krusader-Disk-Usage-Running

And the output image showing the disk space consumed by each directory.

Krusader-Disk-Usage-Results

We’re not convinced that Krusader’s implementation is one of its strong points. We also experienced segmentation faults running the software in GNOME, although no such issues were found with KDE as our desktop environment. There’s definitely room for improvement.

Checksum generation and checking

A checksum is the result of running a checksum algorithm, typically a cryptographic hash function, on an item of data, usually a single file. A hash function is an algorithm that transforms (hashes) an arbitrary set of data elements, such as a text file, into a single fixed-length value (the hash).

Comparing the checksum that you generate from your version of the file, with the one provided by the source of the file, helps ensure that your copy of the file is genuine and error free. By themselves, checksums are often used to verify data integrity but are not relied upon to verify data authenticity.
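
Krusader delegates the checksum work to external command-line tools; doing the equivalent by hand looks something like this (the file name here is just an example):

$ sha256sum download.tar.xz > SHA256SUMS   # generate a checksum file
$ sha256sum -c SHA256SUMS                  # verify; prints OK on success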

You can create a checksum from File | Create Checksum.

Krusader-Checksum

There’s the option to choose the checksum method from a dropdown list. The supported checksum methods are:

  • md5 – a widely used hash function producing a 128-bit hash value.
  • sha1 – Secure Hash Algorithm 1. This cryptographic hash function is not considered secure.
  • sha224 – part of SHA-2 set of cryptographic hash functions. SHA224 produces a 224-bit (28-byte) hash value, typically rendered as a hexadecimal number, 56 digits long.
  • sha256 – produces a 256-bit (32-byte) hash value, typically rendered as a hexadecimal number, 64 digits long.
  • sha384 – produces a 384-bit (48-byte) hash value, typically rendered as a hexadecimal number, 96 digits long.
  • sha512 – produces a 512-bit (64-byte) hash value, typically rendered as a hexadecimal number, 128 digits long.

Krusader checks if you have a tool that supports the type of checksum you need (from your specified checksum file) and displays the files that failed the checksum (if any).

Custom commands

Krusader can be extended with custom add-ons called User Actions.

User Actions are a way to call external programs with variable parameters.

There are a few example User Actions provided to help you get started. And the KDE Store offers, at the time of writing, 45 community-created add-ons, which help to illustrate the possibilities.

Krusader-ActionMan

MountMan

MountMan is a tool which helps you manage your mounted file systems. Mount or unmount file systems of all types with a single mouse click.

When started from the menu (Tools | MountMan), it displays a list of all mounted file systems.

For each file system, MountMan displays its name (the actual device name, e.g. /dev/sda1 for the first partition on the first hard disk), its file system type (ext4, ext3, ntfs, vfat, ReiserFS, etc.) and its mount point on your system (the directory on which the file system is mounted).

MountMan also displays usage information: total size, free size, and the percentage of available space that is free. You can sort by clicking the title of any column (in ascending or descending order).

Here’s an image from one of our test systems.

Krusader-MountMan

On this test system, the Free % column doesn’t list the partitions in the correct order.

Highly Configurable

Krusader offers a wealth of configuration options. Use the Menu bar and choose “Settings | Configure Krusader”. This opens a dialog box with many options for configuring the software.

In the Startup section, you can choose a startup profile, which can be a real time-saver; there’s also a last-session option.

Krusader-Konfigurator-Startup

The Panel section has a whole raft of configuration options with sections for General, View, Buttons, Selection Mode, Media Menu and Layout.

Krusader-Konfigurator-Panel

By default the software uses KDE colours, but you can configure the colours of every element to your heart’s content.

Krusader-Konfigurator-Colours

The General section covers basic operations, including the external terminal, the viewer/editor, and atomic extensions.

Krusader-Konfigurator-General

With the Advanced section, you can automount filesystems, turn off specific user confirmations (not recommended), and even fine-tune the icon cache size (which alters the memory footprint of Krusader).

Krusader-Konfigurator-Advanced

The Archives section lets you change the way the software deals with archives. We don’t recommend enabling write support for archives, as there’s the possibility of data loss in the event of a power failure.

Krusader-Konfigurator-Archives

The Dependencies section is where you define the location of external applications, including general tools, packers, and checksum utilities.

Krusader-Konfigurator-Dependencies

The User Actions section lets you configure settings in relation to ‘useractions’. You can also change the font for the output collection.

Krusader-Konfigurator-User-Actions

The final section links MIME types to protocols.

Krusader-Konfigurator-Protocols

Website: krusader.org
Support: Krusader Handbook
Developer: Krusader Krew
License: GNU General Public License v2

———————————————————————————————–

Other tools in this series:

Essential System Tools
ps_mem Accurate reporting of software’s memory consumption
gtop System monitoring dashboard
pet Simple command-line snippet manager
Alacritty Innovative, hardware-accelerated terminal emulator
inxi Command-line system information tool that’s a time-saver for everyone
BleachBit System cleaning software. Quick and easy way to service your computer
catfish Versatile file searching software
journalctl Query and display messages from the journal
Nmap Network security tool that builds a “map” of the network
ddrescue Data recovery tool, retrieving data from failing drives as safely as possible
Timeshift Similar to Windows’ System Restore functionality and the Time Machine tool in macOS
GParted Resize, copy, and move partitions without data loss
Clonezilla Partition and disk cloning software
fdupes Find or delete duplicate files
Krusader Advanced, twin-panel (commander-style) file manager
nmon Systems administrator, tuner, and benchmark tool
f3 Detect and fix counterfeit flash storage
QJournalctl Graphical User Interface for systemd’s journalctl

Source

Turn a Raspberry Pi 3B+ into a PriTunl VPN

PriTunl is a VPN solution for small businesses and individuals who want private access to their network.

PriTunl is a fantastic VPN terminator solution that’s perfect for small businesses and individuals who want a quick and simple way to access their network privately. It’s open source, and the basic free version is more than enough to get you started and cover most simple use cases. There is also a paid enterprise version with advanced features like Active Directory integration.

Special considerations on Raspberry Pi 3B+

PriTunl is generally simple to install, but this project—turning a Raspberry Pi 3B+ into a PriTunl VPN appliance—adds some complexity. For one thing, PriTunl is supplied only as AMD64 and i386 binaries, but the 3B+ uses ARM architecture. This means you must compile your own binaries from source. That’s nothing to be afraid of; it can be as simple as copying and pasting a few commands and watching the terminal for a short while.

Another problem: PriTunl seems to require 64-bit architecture. I found this out when I got errors when I tried to compile PriTunl on my Raspberry Pi’s 32-bit operating system. Fortunately, Ubuntu’s beta version of 18.04 for ARM64 boots on the Raspberry Pi 3B+.

Also, the Raspberry Pi 3B+ uses a different bootloader from other Raspberry Pi models. This required a complicated set of steps to install and update the necessary files to get a Raspberry Pi 3B+ to boot.

Installing PriTunl

You can overcome these problems by installing a 64-bit operating system on the Raspberry Pi 3B+ before installing PriTunl. I’ll assume you have basic knowledge of how to get around the Linux command line and a Raspberry Pi.

Start by opening a terminal and downloading the Ubuntu 18.04 ARM64 beta release by entering:

wget http://cdimage.ubuntu.com/releases/18.04/beta/ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img.xz

Unpack the download:

xz -d ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img.xz

Insert the SD card you’ll use with your Raspberry Pi into your desktop or laptop computer. Your computer will assign the SD card a device name—something like /dev/sda or /dev/sdb. Enter the dmesg command and examine the last lines of the output to find out the card’s drive assignment.

Be VERY CAREFUL with the next step! I can’t stress that enough; if you get the drive assignment wrong, you could destroy your system.

Write the image to your SD card with the following command, changing <DRIVE> to your SD card’s drive assignment (obtained in the previous step):

sudo dd if=ubuntu-18.04-beta-preinstalled-server-arm64+raspi3.img of=<DRIVE> bs=8M

After it finishes, insert the SD card into your Pi and power it up. Make sure the Pi is connected to your network, then log in with username/password combination ubuntu/ubuntu.

Enter the following commands on your Pi to install a few things to prepare to compile PriTunl:

sudo apt-get -y install build-essential git bzr python python-dev python-pip net-tools openvpn bridge-utils psmisc golang-go libffi-dev mongodb

There are a few changes from the standard PriTunl source installation instructions on GitHub. Make sure you are logged into your Pi and sudo to root:

sudo su -

This should leave you in root’s home directory. To install PriTunl version 1.29.1914.98, enter (per GitHub):

export VERSION=1.29.1914.98
tee -a ~/.bashrc << EOF
export GOPATH=\$HOME/go
export PATH=/usr/local/go/bin:\$PATH
EOF

source ~/.bashrc
mkdir pritunl && cd pritunl
go get -u github.com/pritunl/pritunl-dns
go get -u github.com/pritunl/pritunl-web
sudo ln -s ~/go/bin/pritunl-dns /usr/bin/pritunl-dns
sudo ln -s ~/go/bin/pritunl-web /usr/bin/pritunl-web
wget https://github.com/pritunl/pritunl/archive/$VERSION.tar.gz
tar -xf $VERSION.tar.gz
cd pritunl-$VERSION
python2 setup.py build
pip install -r requirements.txt
python2 setup.py install --prefix=/usr/local

Now the MongoDB and PriTunl systemd units should be ready to start up. Assuming you’re still logged in as root, enter:

systemctl daemon-reload
systemctl start mongodb pritunl
systemctl enable mongodb pritunl
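
You can confirm that both services came up before proceeding:

systemctl status mongodb pritunl

The browser-based setup will ask for a setup key; in recent PriTunl versions you can generate one with the pritunl setup-key command.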

That’s it! You’re ready to hit PriTunl’s browser user interface and configure it by following PriTunl’s installation and configuration instructions on its website.

Source

NVIDIA GeForce GTX 760/960/1060 / RTX 2060 Linux Gaming & Compute Performance Review

The NVIDIA GeForce RTX 2060 is shipping today as the most affordable Turing GPU option to date at $349 USD. Last week we posted our initial GeForce RTX 2060 Linux review and followed up with more 1080p and 1440p Linux gaming benchmarks after having more time with the card. In this article is a side-by-side performance comparison of the GeForce RTX 2060 up against the GTX 1060 Pascal, GTX 960 Maxwell, and GTX 760 Kepler graphics cards. Not only are we looking at the raw OpenGL, Vulkan, and OpenCL/CUDA compute performance between these four generations, but also the power consumption and performance-per-Watt.

As some interesting tests following the earlier RTX 2060 Linux benchmarks, over the weekend I wrapped up some GTX 760 vs. GTX 960 vs. GTX 1060 vs. RTX 2060 benchmarks on the same Ubuntu 18.04 LTS system with the NVIDIA 415.25 driver on the Linux 4.20 kernel. Here are some of the key specifications as a reminder:

The GeForce RTX 2060 also has the ray-tracing capabilities, tensor cores, USB Type-C VirtualLink, and other advantages over the previous generations.

Via the Phoronix Test Suite a range of graphics/gaming and compute benchmarks were carried out. The Phoronix Test Suite was also polling the AC system power consumption in real-time from a WattsUp Pro power meter in order to generate performance-per-Watt metrics for each game/application under test.

 

Source

Understanding the Boot process — BIOS vs UEFI – Linux Hint

The boot process is a universe unto its own. A lot of steps need to happen before your operating system takes over and presents you with a running system.

In some sense, there is a tiny embedded OS involved in this whole process. While the process differs from one hardware platform to another, and from one OS to another, let’s look at some of the commonalities that will help us gain a practical understanding of the boot process.

Let’s talk about the regular, non-UEFI, boot process first: what happens between the point in time when you press the power button and the point where your OS boots and presents you with a login prompt.

Step 1: Upon startup, the CPU is hardwired to run instructions from a physical component, called NVRAM or ROM. These instructions constitute the system’s firmware, and it is in this firmware that the distinction between BIOS and UEFI is drawn. For now, let’s focus on BIOS.

It is the responsibility of the firmware, the BIOS, to probe various components connected to the system like disk controllers, network interfaces, audio and video cards, etc. It then tries to find and load the next set of bootstrapping code.

The firmware goes through storage devices (and network interfaces) in a predefined order, and tries to find a bootloader stored within them. This process is not something a user typically involves herself with. However, there’s a rudimentary UI that you can use to tweak various parameters concerning the system firmware, including the boot order.

You enter this UI typically by holding the F12, F2, or DEL key as the system boots. To find the specific key in your case, refer to your motherboard’s manual.

Step 2: The BIOS then assumes that the boot device starts with an MBR (Master Boot Record), which contains a first-stage boot loader and a disk partition table. This first block, the boot block, is small, so the bootloader in it is very minimalist; it can’t do much else, such as read a file system or load a kernel image.

So the second stage bootloader is called into being.

Step 3: The second-stage bootloader is responsible for locating and loading the proper operating system kernel into memory. The most common example, for Linux users, is the GRUB bootloader. If you are dual-booting, it even provides you with a simple UI to select the appropriate OS to start.

Even when you have a single OS installed, the GRUB menu lets you boot into advanced mode, or rescue a corrupt system by logging into single-user mode. Other operating systems have different boot loaders: FreeBSD comes with one of its own, as do other Unices.

Step 4: Once the appropriate kernel is loaded, there’s still a whole list of userland processes waiting to be initialized. This includes your SSH server, your GUI, etc., if you are running in multiuser mode, or a set of utilities to troubleshoot your system if you are running in single-user mode.

Either way, an init system is required to handle the initial process creation and the continued management of critical processes. Here, again, we have a list of different options: from the traditional init shell scripts that primitive Unices used, to the immensely complex systemd implementation, which has taken over the Linux world and has its own controversial status in the community. BSDs have their own variant of init which differs from the two mentioned above.
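
On a running Linux system, a quick way to check which init system you have:

$ ps -p 1 -o comm=       # prints the name of PID 1, e.g. systemd or init
$ systemctl get-default  # on systemd machines, shows the default boot target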

This is a brief overview of the boot process. A lot of complexities have been omitted, in order to make the description friendly for the uninitiated.

UEFI specifics

The place where the UEFI vs. BIOS difference shows up is in the very first step. If the firmware is of a more modern variant, called UEFI, or Unified Extensible Firmware Interface, it offers a lot more features and customizations. It is meant to be much more standardized, so motherboard manufacturers don’t have to worry about every specific OS that might run on top of their boards, and vice versa.

One key difference between UEFI and BIOS is that UEFI supports the more modern GPT partitioning scheme, and UEFI firmware has the capability to read files from a small FAT-formatted file system.

Often, this means that your UEFI configuration and binaries sit on a GPT partition on your hard disk. This is known as the ESP (EFI System Partition), typically mounted at /boot/efi.
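
On a UEFI-booted Linux system you can inspect the ESP yourself. A minimal sketch, assuming the common /boot/efi mount point:

$ findmnt /boot/efi   # confirm where the ESP is mounted (a vfat filesystem)
$ ls /boot/efi/EFI    # per-OS bootloader directories, e.g. ubuntu, Microsoft
$ sudo efibootmgr     # list the firmware's boot entries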

Having a mountable file system means that your running OS can read the same file system (and, dangerously enough, edit it as well!). Some malware exploits this capability to infect the very firmware of your system, an infection which persists even after an OS reinstall.

UEFI, being more flexible, eliminates the need for a second-stage boot loader like GRUB. Oftentimes, if you are installing a single (well-supported) operating system like Ubuntu desktop or Windows with UEFI enabled, you can get away with not using GRUB or any other intermediate bootloader.

However, most UEFI systems still support a legacy BIOS option that you can fall back to if something goes wrong. Similarly, if the system is installed with both BIOS and UEFI support in mind, it will have an MBR-compatible block in the first few sectors of the hard disk. And if you need to dual-boot your computer, or just want a second-stage bootloader for other reasons, you are free to use GRUB or any other bootloader that suits your use case.

Conclusion

UEFI was meant to unify the modern hardware platform so operating system vendors could freely develop on top of it. However, it has slowly turned into a somewhat controversial piece of technology, especially if you are trying to run an open source OS on top of it. That said, it does have its merits, and it is better not to ignore its existence.

On the flip side, legacy BIOS is also going to stick around for at least a few more years. Understanding it is equally important in case you need to fall back to BIOS mode to troubleshoot a system. We hope this article has informed you well enough about both of these technologies that the next time you encounter a new system in the wild, you can follow along with the instructions of obscure manuals and feel right at home.

Source
