Trying To Make Ubuntu 18.10 Run As Fast As Intel’s Clear Linux

With the recent six-way Linux OS tests on the Core i9 9900K, there were once again a number of users questioning the optimizations made by Clear Linux out of Intel's Open-Source Technology Center, and remarks about whether changing the compiler flags, CPU frequency scaling governor, or other settings would let other distributions trivially replicate its performance. Here's a look at some tweaked Ubuntu 18.10 Cosmic Cuttlefish benchmarks against the latest Intel Clear Linux rolling release from this i9-9900K 8-core / 16-thread desktop system.

In the forum comments and feedback elsewhere to that previous Linux distribution comparison, there were random comments questioning:

– Whether Clear Linux’s usage of the P-State “performance” governor by default explains its performance advantage with most other distributions opting for the “powersave” governor.

– Clear Linux is faster because it's built with the Intel Compiler (ICC). This is not the case at all, with Clear being built by GCC and LLVM/Clang, but it seems to be a common misconception worth tossing out there.

– Clear Linux is faster because of its aggressive default CFLAGS/CXXFLAGS/FFLAGS. This does certainly help in some built-from-source benchmarks, but that’s not all.

About a year ago I ran a similar experiment of tweaking Ubuntu 17.10 to try to run like Clear Linux; this article is a fresh look. The OS configurations tested were:

– Clear Linux – Clear Linux running on the i9-9900K with its Linux 4.18.16 kernel, Mesa 18.3-devel, GCC 8.2.1, EXT4 file-system, and other default components.

– Ubuntu – The default Ubuntu 18.10 installation on the same system with its Linux 4.18 kernel, Mesa 18.2.2, GCC 8.2.0, EXT4, and other stock components/settings.

– Ubuntu + Perf Gov – The same Ubuntu 18.10 stack but switched over to the P-State performance governor rather than the default P-State powersave mode.

– Ubuntu + Perf + Flags – The P-State performance mode from above on Ubuntu 18.10, but also setting the same CFLAGS/CXXFLAGS/FFLAGS as used by Clear Linux before re-building all of the source-based benchmarks, to compare the performance impact of the default tuning parameters.

– Ubuntu + Perf + Flags + Kernel – The tweaked Ubuntu 18.10 state from above with the P-State performance governor and tuned compiler flags, while also building a Linux 4.18.16 kernel from source with the relevant patches and Kconfig configuration as shipped by Clear Linux. Their kernel configuration and carried patches can be found via clearlinux-pkgs/linux on GitHub.

– Ubuntu + Clear Docker – Ubuntu 18.10 with the P-State performance governor, running on the Clear Linux optimized kernel, and using Docker CE to run the latest Clear Linux Docker image so that all of the Clear user-space components run within this container.

The same system was used for all of the testing: an Intel Core i9 9900K at stock speeds, ASUS PRIME Z390-A motherboard, 16GB of DDR4-3200 memory, a Samsung 970 EVO 250GB NVMe SSD, and a Radeon RX Vega 64 8GB graphics card.
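As a rough sketch of the governor and compiler-flag tweaks described above (the flag values are illustrative; Clear Linux's actual default flag set is more extensive):

```shell
# Check the current intel_pstate scaling governor (Ubuntu defaults to
# "powersave"; Clear Linux ships "performance")
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null ||
  echo "cpufreq interface not available"

# Switching every CPU to "performance" requires root:
# echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Build flags in the spirit of Clear Linux's aggressive defaults; these
# values are illustrative, not the distribution's exact flag set
export CFLAGS="-O3 -march=native -pipe"
export CXXFLAGS="$CFLAGS"
export FFLAGS="$CFLAGS"
```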


I ran 92 different tests on Ubuntu 18.10 and Clear Linux for a wide look at the performance between these distributions ranging from scripting language benchmarks like PHP and Python to various scientific workloads, code compilation, and other tests. With the 92 test runs, here are the key findings from this large round of testing of Clear Linux compared to Ubuntu 18.10 in five different tuned states:

– When comparing the out-of-the-box Clear Linux to Ubuntu 18.10, the Intel distribution was the fastest in 66 of the benchmarks (72%) with Ubuntu only taking the lead in 26 of these different benchmarks.

– Switching to the P-State “performance” governor on Ubuntu 18.10 only allowed it to win over Clear Linux in an extra 5 benchmarks… Clear Linux still came out ahead 66% of the time against Ubuntu either out-of-the-box or with the performance governor.

– The third state, Ubuntu 18.10 with the P-State performance governor and Clear's compiler flags, enhanced Ubuntu's performance relative to the default configuration, but Clear Linux was still leading ~66% of the time.

– When pulling in the Clear Linux kernel modifications to Ubuntu 18.10 and keeping the optimized compiler flags and performance governor, Ubuntu 18.10 just picked up one more win while Clear Linux was still running the fastest in 59 of the 92 benchmarks.

– Lastly, when running the Clear Linux Docker container on Ubuntu 18.10 while keeping the tweaked kernel and P-State performance governor, Clear Linux now won in “just” 54 of the 92 benchmarks, i.e. it was the fastest about 59% of the time.

Going to these varying efforts to tweak Ubuntu for faster performance resulted in Clear Linux's lead shrinking from 72% to about 59%, or about 64% if not counting the run using the Clear Linux Docker container itself on Ubuntu 18.10 for the optimized Clear user-space.

This data shows that Clear Linux still does much more than adjusting a few tunables to reach its leading performance; it's not as trivial as adjusting CFLAGS/CXXFLAGS, opting for the performance governor, etc. Clear additionally makes use of GCC Function Multi-Versioning (FMV) to optimize its binaries to use the fastest code path depending upon the CPU detected at run-time, among other compiler/tooling optimizations. It also often patches its Glibc and other key components, beyond just Linux kernel patches not yet ready to be mainlined. Other misconceptions to clear up about this open-source operating system: it does not use the Intel ICC compiler, it does run on AMD hardware (and does so in a speedy manner as well), and it runs on Intel hardware going back to around Sandy Bridge, just not older generations.

While the prominent performance numbers are already shared, the following pages look at some of the interesting benchmark results from this comparison.

New Quick Start builds a CI/CD pipeline to test AWS CloudFormation templates using AWS TaskCat

Posted On: Oct 30, 2018

This Quick Start deploys a continuous integration and continuous delivery (CI/CD) pipeline on the Amazon Web Services (AWS) Cloud in about 15 minutes to automatically test AWS CloudFormation templates from a GitHub repository.

The CI/CD environment includes AWS TaskCat for testing, AWS CodePipeline for continuous integration, and AWS CodeBuild as your build service.

TaskCat is an open-source tool that tests AWS CloudFormation templates. It creates stacks in multiple AWS Regions simultaneously and generates a report with a pass/fail grade for each region. You can specify the regions, indicate the number of Availability Zones you want to include in the test, and pass in the AWS CloudFormation parameter values you want to test. You can use the CI/CD pipeline to test any AWS CloudFormation templates, including nested templates, from a GitHub repository.

To get started:

You can also download the AWS CloudFormation template that automates the deployment from GitHub, or view the TaskCat source code.

To browse and launch other AWS Quick Start reference deployments, see our complete catalog.

Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices. This Quick Start was built by AWS solutions architects.


Braiins OS Is The First Fully Open Source, Linux-based Bitcoin Mining System

Braiins Systems, the company behind the Slush Pool, has announced Braiins OS. The creators of this bitcoin mining software have claimed that it’s the world’s first fully open source system for cryptocurrency embedded devices.

The initial release of the operating system is based on OpenWrt, which is basically a Linux operating system for embedded devices. You can find its code here.

Those who know OpenWrt are aware that it's very versatile. As a result, Braiins OS can also be extended to other applications in the future.

In a Medium post, Braiins Systems said that the many corner cases of non-standard behavior in mining devices cause tons of issues. With this new mining software, the company wishes to make things easier for mining pool operators and miners.

The OS monitors the hardware and its working conditions to generate error and performance reports. Braiins also claims it can reduce power consumption by 20%.

The very first Braiins OS release provides images for the Antminer S9 and DragonMint T1. The software is currently in the alpha stage, and the developers have asked miners to test it and share feedback.



Install Ubuntu on Raspberry Pi

Canonical released a minimal version of Ubuntu specifically made for IoT devices, called Ubuntu Core. It requires less storage and memory to run, is very lightweight, and is really fast. Ubuntu Core can be installed on Raspberry Pi single-board computers; you need a Raspberry Pi 2 or 3 to install and run it.

In this article, I will show you how to install Ubuntu Core on Raspberry Pi 3 Model B. So, let’s get started.

To follow this article, you need:

  • Raspberry Pi 2 or 3 Single Board Microcomputer.
  • A 16GB or more microSD card.
  • HDMI Cable.
  • A USB Keyboard.
  • Ethernet Cable.
  • Power Adapter for Raspberry Pi.
  • A Laptop or Desktop computer for installing/flashing Ubuntu Core on the SD card.

Setting Up Ubuntu One Account for Ubuntu Core:

If you want to use Ubuntu Core on your Raspberry Pi 3, then you need an Ubuntu One account. If you don’t have an Ubuntu One account, you can create one for free. Just visit https://login.ubuntu.com and click on I don’t have an Ubuntu One account as marked in the screenshot below.

Now, fill in the required details and click on Create account.

Now, verify your email address and your account should be created. Then, visit https://login.ubuntu.com/ and log in to your Ubuntu One account. Click on SSH keys and you should see the following page. Here, you have to import the SSH key of the machine from which you will be connecting to the Ubuntu Core installation on your Raspberry Pi 3 device.

You can generate an SSH key very easily with the following command:
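The command in question is OpenSSH's ssh-keygen. Run it bare to be prompted for the file location and passphrase as the next steps describe; the flags in this sketch preselect those answers (default path, empty passphrase) so it can also run unattended:

```shell
# OpenSSH's key generator; plain "ssh-keygen" asks the questions
# interactively. The guard avoids clobbering an existing key.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
```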

By default, the SSH keys will be saved in the .ssh/ directory of your login user’s HOME directory. If you want to save it somewhere else, just type in the path where you would like to save it and press <Enter>. I will leave the defaults.

Now, press <Enter>.

NOTE: If you want to encrypt the SSH key with password, type it in here and press <Enter>.

Press <Enter> again.

NOTE: If you’ve typed in a password in the earlier step, just re-type the same password and press <Enter>.

Your SSH key should be generated.

Now, read the SSH key with the following command:
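Assuming the key was saved in the default location, printing the public half is a one-liner:

```shell
# Print the public key; this is what gets pasted into the
# Ubuntu One SSH keys page
cat ~/.ssh/id_rsa.pub
```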

Now, copy the SSH key as marked in the screenshot below.

Now, paste it in the Ubuntu One website and click on Import SSH key as marked in the screenshot below.

As you can see, the SSH key is added.

Downloading Ubuntu Core:

Now that you have your Ubuntu One account set up, it’s time to download Ubuntu Core. First, go to the official website of Ubuntu at https://www.ubuntu.com/download/iot/raspberry-pi-2-3

Now, scroll down to the Download Ubuntu Core section and click on the download link for either Raspberry Pi 2 or Raspberry Pi 3 depending on the version of Raspberry Pi you have. I have Raspberry Pi 3 Model B, so I am going for the Raspberry Pi 3 image.

Your download should start.

Flashing Ubuntu Core on microSD Card:

You can flash Ubuntu Core on your microSD card very easily on Windows, Linux and macOS using Etcher. Etcher is a really easy-to-use tool for flashing microSD cards for Raspberry Pi devices. You can download Etcher from its official website at https://etcher.io/

NOTE: I can’t show you how to install Etcher here, as it is out of the scope of this article. You should be able to install it on your own. It’s very easy.

Once you install Etcher, open Etcher and click on Select image.

A file picker should be opened. Now, select the Ubuntu Core image that you just downloaded and click on Open.

Now, insert the microSD card into your computer and click on Select drive.

Now, click to select your microSD card and click on Continue.

Finally, click on Flash!

As you can see, your microSD card is being flashed…

Once your microSD card is flashed, close Etcher.

Preparing Raspberry Pi:

Now that you have flashed Ubuntu Core on the microSD card, insert it into your Raspberry Pi’s microSD card slot. Connect one end of the Ethernet cable to the RJ45 Ethernet port of your Raspberry Pi and the other end to one of the ports on your router or switch. Connect one end of the HDMI cable to your Raspberry Pi and the other end to your monitor. Also, connect the USB keyboard to one of the USB ports of your Raspberry Pi. Finally, plug in the power adapter to your Raspberry Pi.

After connecting everything, my Raspberry Pi 3 Model B looks as follows:

Setting Up Ubuntu Core on Raspberry Pi:

Now, power on your Raspberry Pi device and it should boot into Ubuntu Core as you can see in the screenshot below.

Once you see the following window, press <Enter> to configure Ubuntu Core.

First, you have to configure networking. This is essential for Ubuntu Core to work. To do that, press <Enter> here.

As you can see, Ubuntu Core has automatically configured the network interface using DHCP. The IP address is 192.168.2.15 in my case; yours should be different. Once you’re done, select [ Done ] and press <Enter>.

Now, type in the email address that you used to create your Ubuntu One account. Then, select [ Done ] and press <Enter>.

The configuration is complete. Now press <Enter>.

Now, you should see the following window. You can SSH into your Raspberry Pi with the command as marked in the screenshot below.

Connecting to Raspberry Pi Using SSH:

Now, SSH into your Raspberry Pi device from your computer as follows:

$ ssh dev.shovon8@192.168.2.15

Now, type in yes and press <Enter>.

You should be logged into your Raspberry Pi.

As you can see, I am running Ubuntu Core 16.

It’s using just a few megabytes of memory. It’s very lightweight as I said.

So, that’s how you install Ubuntu Core on Raspberry Pi 2 and Raspberry Pi 3. Thanks for reading this article.


Download Bitnami GitLab Stack Linux 11.4.3-0

Bitnami GitLab Stack is a freely distributed and multiplatform software project that greatly simplifies the installation and hosting of the GitLab application, as well as of its runtime dependencies, on personal computers, so you can easily run your own GitLab server.

What is GitLab?

GitLab is an open-source and self-hosted Git management application, which can be easily described as a secure, stable and fast solution based on Gitolite and Rails. The Bitnami GitLab Stack will install the following packages: GitLab, Apache, Ruby, Rails, Redis, GitLab’s fork of Gitolite, and Git.

Installing Bitnami GitLab Stack

Bitnami GitLab Stack is distributed as native installers, which have been built using BitRock’s cross-platform installer tool. They are available for all GNU/Linux distributions, but won’t work on Microsoft Windows and Mac OS X operating systems.

To install GitLab on your desktop computer or laptop, simply download the package that corresponds to your computer’s hardware architecture (32-bit or 64-bit), make it executable, run it and follow the instructions displayed on the screen.
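The make-it-executable-and-run sequence looks like this; the installer filename is hypothetical, so substitute the one you actually downloaded:

```shell
# Hypothetical installer filename; substitute the package you downloaded
installer="bitnami-gitlab-11.4.3-0-linux-x64-installer.run"
chmod +x "$installer"   # make the downloaded installer executable
sudo "./$installer"     # then run it and follow the on-screen steps
```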

Run GitLab in the cloud

Thanks to Bitnami, customers are now able to run the GitLab application in the cloud with their hosting platform, using a pre-built cloud image for Amazon EC2, Windows Azure, or any other supported cloud hosting provider.

Bitnami’s GitLab virtual appliance

In addition to running GitLab in the cloud or installing it on personal computers, you can also virtualize it using Bitnami’s virtual appliance, which is based on the latest stable release of Ubuntu (64-bit) and designed for VMware ESX/ESXi and Oracle VirtualBox virtualization software.

The GitLab Docker container and LAMP module

Bitnami also provides users with a GitLab Docker container, which can be downloaded from the project’s homepage. Unfortunately, they don’t provide a module that would allow you to deploy GitLab on top of an existing LAMP (Linux, Apache, MySQL and PHP) stack.


Normalizing Filenames and Data with Bash

URLify: convert letter sequences into safe URLs with hex
equivalents.

This is my 155th column. That means I’ve been writing for Linux
Journal
for:

$ echo "155/12" | bc
12

No, wait, that’s not right. Let’s try that again:

$ echo "scale=2;155/12" | bc
12.91

Yeah, that many years. Almost 13 years of writing about shell scripts and
lightweight programming within the Linux environment. I’ve covered a lot
of ground, but I want to go back to something that’s fairly basic and
talk about filenames and the web.

It used to be that if you had filenames that had spaces in them, bad things would
happen: “my mom’s cookies.html” was a recipe for disaster, not
good cookies—um, and not those sorts of web cookies either!

As the web evolved, however, encoding of special characters became the norm,
and every Web browser had to be able to manage it, for better or worse. So
spaces became either “+” or %20 sequences, and everything else that
wasn’t a regular alphanumeric character was replaced by its hex ASCII
equivalent.

In other words, “my mom’s cookies.html” turned into
“my+mom%27s+cookies.html” or “my%20mom%27s%20cookies.html”.
Many symbols took on a second life too, so “&” and “=” and
“?” all got their own meanings, which meant that they needed to be
protected if they were part of an original filename too. And what about if
you had a “%” in your original filename? Ah yes, the recursive nature
of encoding things….

So purely as an exercise in scripting, let’s write a script that
converts any string you hand it into a “web-safe” sequence. Before
starting, however, pull out a piece of paper and jot down how you’d solve
it.

Normalizing Filenames for the Web

My strategy is going to be easy: pull the string apart into individual
characters, analyze each character to identify if it’s an alphanumeric,
and if it’s not, convert it into its hexadecimal ASCII equivalent,
prefacing it with a “%” as needed.

There are a number of ways to break a string into its individual letters,
but let’s use Bash string variable manipulations, recalling that
${#var}
returns the number of characters in variable $var, and that
${var:x:1} will
return just the letter in $var at position x. Quick now, does indexing start
at zero or one?

Here’s my initial loop to break $original into its component letters:

input="$*"

echo $input

for (( counter=0 ; counter < ${#input} ; counter++ ))
do
echo "counter = $counter -- ${input:$counter:1}"
done

Recall that $* is a shortcut for everything from the invoking command line
other than the command name itself—a lazy way to let users quote the
argument or not. It doesn’t address special characters, but that’s
what quotes are for, right?

Let’s give this fragmentary script a whirl with some input from the
command line:

$ sh normalize.sh "li nux?"
li nux?
counter = 0 -- l
counter = 1 -- i
counter = 2 --
counter = 3 -- n
counter = 4 -- u
counter = 5 -- x
counter = 6 -- ?

There’s obviously some debugging code in the script, but it’s
generally a good idea to leave that in until you’re sure it’s working
as expected.

Now it’s time to differentiate between characters that are acceptable
within a URL and those that are not. Turning a character into a hex sequence
is a bit tricky, so I’m using a sequence of fairly obscure
commands. Let’s start with just the command line:

$ echo '~' | xxd -ps -c1 | head -1
7e

Now, the question is whether “~” is actually the hex ASCII sequence
7e or not. A quick glance at http://www.asciitable.com confirms that, yes, 7e is
indeed the ASCII for the tilde. Preface that with a percentage sign, and
the tough job of conversion is managed.

But, how do you know what characters can be used as they are? Because of the weird
way the ASCII table is organized, that’s going to be three ranges:
0–9 is in one area of the table, then A–Z in a second area and
a–z in a
third. There’s no way around it, that’s three range tests.

There’s a really cool way to do that in Bash too:

if [[ "$char" =~ [a-z] ]]

What’s happening here is that this is actually a regular expression (the
=~) and a range [a-z] as the test. Since the action
I want to take after
each test is identical, it’s easy now to implement all three tests:

if [[ "$char" =~ [a-z] ]]; then
output="$output$char"
elif [[ "$char" =~ [A-Z] ]]; then
output="$output$char"
elif [[ "$char" =~ [0-9] ]]; then
output="$output$char"
else

As is obvious, the $output string variable will be built up to have the
desired value.

What’s left? The hex output for anything that’s not an otherwise
acceptable character. And you’ve already seen how that can be implemented:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1)"
output="$output%$hexchar"

A quick run through:

$ sh normalize.sh "li nux?"
li nux? translates to li%20nux%3f

See the problem? Without converting the hex into uppercase, it's a bit
weird looking. Uppercasing the hex digits is just another step in the subshell
invocation:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1 |
tr '[a-z]' '[A-Z]')"

And now, with that tweak, the output looks good:

$ sh normalize.sh "li nux?"
li nux? translates to li%20nux%3F

What about a non-Latin-1 character like an umlaut or an n-tilde? Let’s
see what happens:

$ sh normalize.sh "Señor Günter"
Señor Günter translates to Se%C3B1or%200AG%C3BCnter

Ah, there’s a bug in the script when it comes to these two-byte character
sequences, because each special letter should have two hex byte sequences. In
other words, it should be converted to se%C3%B1or g%C3%BCnter (I restored the
space to make it a bit easier to see what I’m talking about).

In other words, this gets the right sequences, but it’s missing
a percentage sign—%C3B should be %C3%B, and
%C3BC should be %C3%BC.

Undoubtedly, the problem is in the hexchar assignment subshell statement:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1 |
tr '[a-z]' '[A-Z]')"

Is it the -c1 argument to xxd? Maybe. I’m going to leave identifying and
fixing the problem as an exercise for you, dear reader. And while you’re
fixing up the script to support two-byte characters, why not replace
“%20” with “+” too?

Finally, to make this maximally useful, don’t forget that there are a
number of symbols that are valid and don’t need to be converted within
URLs too, notably the set of “-_./!@#=&?”, so you’ll want to
ensure that they don’t get hexified (is that a word?).
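Taking a stab at the two-byte exercise myself, here's one possible complete sketch (bash, and it assumes xxd is installed): it hex-dumps the whole character at once with printf piped to xxd, which avoids the trailing newline echo adds (the source of that stray %0A), then prefixes every byte pair with a percent sign:

```shell
# Bash sketch of one possible fix for the two-byte exercise: printf | xxd
# dumps every byte of the character, then sed puts a % before each pair
urlify() {
  local input="$*" output="" char hexchar counter
  for (( counter=0 ; counter < ${#input} ; counter++ )); do
    char="${input:$counter:1}"
    if [[ "$char" =~ [a-zA-Z0-9] ]]; then
      output="$output$char"
    else
      hexchar="$(printf '%s' "$char" | xxd -ps | tr -d '\n' |
        sed 's/../%&/g' | tr 'a-f' 'A-F')"
      output="$output$hexchar"
    fi
  done
  echo "$output"
}

urlify "li nux?"   # li%20nux%3F
```

With this variant, "Señor" comes out as Se%C3%B1or, both bytes of the ñ properly escaped.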


Ubuntu’s Cosmic Cuttlefish Brings Performance Improvements and More – Linux.com


Canonical has just recently announced that Ubuntu 18.10, code named ‘Cosmic Cuttlefish’, is ready for downloading at the Ubuntu release site. Some of the features of this new release include:

  • the latest version of Kubernetes with improved security and scalability
  • access to 4,100 snaps
  • better support for gaming graphics and hardware including support for the extremely fast Qualcomm Snapdragon 845
  • fingerprint unlocking for compatible systems (e.g., Ubuntu phones)

The new theme

The Yaru Community theme, the new theme for Ubuntu 18.10, is included along with a new desktop wallpaper that displays an artistic rendition of a cuttlefish (a marine animal related to squid, octopuses, and nautiluses).


Papa’s Got a Brand New NAS: the Software

Who needs a custom NAS OS or a web-based GUI when command-line
NAS software is so easy to configure?

In a recent letter to the editor, I was contacted by a reader who
enjoyed my “Papa’s
Got a Brand New NAS”
article, but wished I had
spent more time describing the software I used. When I
wrote the article, I decided not to dive into the software too much,
because it all was pretty standard for serving files under Linux.
But on second thought, if you want to re-create what I made, I
imagine it would be nice to know the software side as well, so this article
describes the software I use in my home NAS.

The OS

My NAS uses the ODROID-XU4 as the main computing platform, and so
far, I’ve found its octo-core ARM CPU and the rest of its resources
to be adequate for a home NAS. When I first set it up, I visited the
official wiki
page
for the computer, which provides a number of OS
images, including Ubuntu and Android images that you can copy onto a
microSD card. Those images are geared more toward desktop use,
however, and I wanted a minimal server image. After some searching,
I found a minimal image for what was the current Debian stable
release at the time (Jessie).

Although this minimal image worked okay for me, I don’t necessarily
recommend just going with whatever OS some volunteer on a forum
creates. Since I first set up the computer, the Armbian project has
been released, and it supports a number of standardized OS images for quite
a few ARM platforms including the ODROID-XU4. So if you
want to follow in my footsteps, you may want to start with the minimal Armbian
Debian image.

If you’ve ever used a Raspberry Pi before, the process of setting
up an alternative ARM board shouldn’t be too different. Use another
computer to write an OS image to a microSD card, boot the ARM board,
and at boot, the image will expand to fill the existing filesystem.
Then reboot and connect to the network, so you can log in with the default
credentials your particular image sets up. Like with Raspbian builds,
the first step you should perform with Armbian or any other OS image
is to change the default password to something else. Even better,
you should consider setting up proper user accounts instead of
relying on the default.

The nice thing about these Debian-based ARM images is that you end
up with a kernel that works with your hardware, but you also have
the wide variety of software that Debian is known for at your
disposal. In general, you can treat this custom board like any other
Debian server. I’ve been using Debian servers for years, and
many online guides describe how to set up servers under Debian, so
it provides a nice base platform for just about anything you’d
like to do with the server.

In my case, since I was migrating to this new NAS from an existing
1U Debian server, including just moving over the physical hard drives
to a new enclosure, the fact that the distribution was the same
meant that as long as I made sure I installed the same packages on
this new computer, I could generally just copy over my configuration
files wholesale from the old computer. This is one of the big
benefits to rolling your own NAS off a standard Linux distribution
instead of using some prepackaged NAS image. The prepackaged solution
may be easier at first, but if you ever want to migrate off of it
to some other OS, it may be difficult, if not impossible, to take
advantage of any existing settings. In my situation, even if I had gone
with another Linux distribution, I still could have copied over all
of my configuration files to the new distribution—in some cases
even into the same exact directories.

NFS

As I mentioned, since I was moving from an existing 1U NAS server
built on top of standard Debian services, setting up my NFS service
was a simple matter of installing the nfs-kernel-server Debian
package, copying my /etc/exports file over from my old server and
restarting the nfs-kernel-server service with:

$ sudo service nfs-kernel-server restart

If you’re not familiar with setting up a traditional NFS server
under Linux, so many different guides exist that I
doubt I’d be adding much to the world of NFS documentation
by rehashing it again here. Suffice it to say that it comes down to
adding entries into your /etc/exports file that tell the NFS server
which directories to share, who to share them with (based on IP)
and what restrictions to use. For instance, here’s a sample entry I
use to share a particular backup archive directory with a particular
computer on my network:

/mnt/storage/archive 192.168.0.50(fsid=715,rw)

This line tells the NFS server to share the local /mnt/storage/archive
directory with the machine that has the IP 192.168.0.50, to give
it read/write privileges and also to assign this particular share
with a certain filesystem ID. I’ve discovered that assigning a
unique fsid value to each entry in /etc/exports can help the NFS
server identify each filesystem it’s exporting explicitly with
this ID, in case it can’t find a UUID for the filesystem (or if you
are exporting multiple directories within the same filesystem).
Once I make a change to the /etc/exports file, I like to tell the
NFS service to reload the file explicitly with:

$ sudo service nfs-kernel-server reload

NFS has a lot of different and complicated options you can apply
to filesystems, and there’s a bit of an art to tuning things exactly
how you want them to be (especially if you are deciding between
version 3 and 4 of the NFS protocol). I typically turn to the exports
man page (type man exports in a terminal) for good descriptions
of all the options and to see configuration examples.
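For completeness, here's what the client side of that export looks like. This is a sketch with an assumed NAS address of 192.168.0.1, since the server's own IP isn't given above:

```shell
# On the client (192.168.0.50 in the export example), create a mount
# point and mount the share; the server address here is assumed
sudo mkdir -p /mnt/archive
sudo mount -t nfs 192.168.0.1:/mnt/storage/archive /mnt/archive
```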

Samba

If you just need to share files with Linux clients, NFS may be all
you need. However, if you have other OSes on your network, or clients
who don’t have good NFS support, you may find it useful to
offer Windows-style SMB/CIFS file sharing using Samba as well. Although Samba
is configured quite differently from NFS, it’s still not too
complicated.

First, install the Samba package for your distribution. In my case,
that meant:

$ sudo apt install samba

Once the package is installed, you will see that Debian provides a
well commented /etc/samba/smb.conf file with ordinary defaults set.
I then edited that /etc/samba/smb.conf file and made sure to restrict
access to my Samba service to only those IPs I wanted to allow by
setting the following options in the networking section of the
smb.conf:

hosts allow = 192.168.0.20, 192.168.0.22, 192.168.0.23
interfaces = 127.0.0.1 192.168.0.1/24
bind interfaces only = Yes

These changes restrict Samba access to only a few IPs, and explicitly
tell Samba to listen to localhost and a particular interface on the
correct IP network.

There are additional ways you can configure access control with
Samba, and by default, Debian sets it up so that Samba uses local
UNIX accounts. This means you can set up local UNIX accounts on the
server, give them a strong password, and then require that users
authenticate with the appropriate user name and password before they
have access to a file share. Because this is already set up in Debian,
all I had left to do was to add some file shares to the end of my
smb.conf file using the commented examples as a reference. This
example shows how to share the same /mnt/storage/archive directory
with Samba instead of NFS:

[archive]
path = /mnt/storage/archive/
revalidate = Yes
writeable = Yes
guest ok = No
force user = greenfly
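Since Samba keeps its own password database alongside the UNIX one, each user also needs to be registered with smbpasswd before connecting; the user name below just mirrors the force user example above:

```shell
# Create the UNIX account if needed, then add it to Samba's own
# password database; both commands require root
sudo adduser greenfly
sudo smbpasswd -a greenfly
```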

As with NFS, there are countless guides on how to configure Samba.
In addition to those guides, you can do as I do and check out the
heavily commented smb.conf or type man smb.conf if you want more
specifics on what a particular option does. As with NFS, when you
change a setting in smb.conf, you need to reload Samba with:

$ sudo service samba reload

Conclusion

What’s refreshing about setting up Linux as a NAS is that file
sharing (in particular, replacing Windows SMB file servers in corporate
environments) is one of the first major forays Linux made in the
enterprise. As a result, as you have seen, setting up Linux to be
a NAS is pretty straightforward even without some nice GUI. What’s
more, since I’m just using a normal Linux distribution instead of
some custom NAS-specific OS, I also can use this same server for
all sorts of other things, such as a local DNS resolver, local mail
relay or any other Linux service I might think of. Plus, down the
road if I ever feel a need to upgrade, it should be pretty easy to
move these configurations over to brand new hardware.


Monthly News – October 2018 – The Linux Mint Blog

Before we talk about new features and project news I’d like to send a huge thank you to all the people who support our project. Many thanks to our donors, our sponsors, our patrons and all the people who are helping us. I’d also like to say we’ve had a lot of fun working on developing Linux Mint lately and we’re excited to share the news with you.

Release schedule

We will be working to get Linux Mint 19.1 out for Christmas this year, with all three editions released at the same time and the upgrade paths open before the holiday season.

Patreon

Following the many requests we received to look into an alternative to Paypal, we’re happy to announce Linux Mint is now on Patreon: https://www.patreon.com/linux_mint.

Our project has received 33 pledges so far, and we decided to use this service to help support Timeshift, a project which is very important to us and adds significant value to Linux Mint.

Mint-Y

Joseph Mccullar continued to improve the Mint-Y theme. Through a series of subtle changes he managed to dramatically increase the theme’s contrast.

The screenshot below shows the Xed text editor using the Mint-Y theme as it was in Mint 19 (on the left), and using the Mint-Y theme with Joseph’s changes (on the right):

The difference is immediately noticeable when the theme is applied to the entire desktop. Labels look sharp and stand out against their backgrounds. So do the icons, which now look darker than before.

The changes also make it easier to visually identify the focused window:

In the above screenshot, the terminal is focused and its titlebar label is darker than in the other windows. This contrast is much more noticeable with Joseph’s changes (below the red line) than before (above the red line).

Status icons

Linux Mint 19 featured monochrome status icons. Although these icons looked nice on dark panels they didn’t work well in white context menus or in cases where the panel background color was changed by the user.

To tackle this issue, Linux Mint 19.1 will ship with support for symbolic icons in Redshift, mate-volume-control-applet, onboard and network-manager-applet.

Xapp

Stephen Collins added an icon chooser to the XApp library.

The icon chooser provides a dialog and a button and will make it easier for our applications to select themed icons and/or icon paths.

Cinnamon

Cinnamon 4.0 will look more modern thanks to a new panel layout. Whether you enjoy the new look or prefer the old one, we want everyone to feel at home in their operating system, so you’ll have the option to embrace the change or to click a button to make Cinnamon look just like it did before.

The idea of a larger and darker panel had been in the roadmap for a while.

Within our team, Jason Hicks and Lars Mueller (Cobinja) maintained two of the most successful third-party Cinnamon applets, "Icing Task Manager" and "CobiWindowList" respectively. Both were attempts at implementing a window list with app grouping and window previews, a feature which had become the norm in other major desktop operating systems, whether in the form of a dock (in Mac OS), a panel (in Windows) or a sidebar (in Ubuntu).

And recently German Franco drew our attention to the need for strict icon sizes to guarantee that icons look crisp rather than blurry.

We talked about all of this and Niko Krause, Joseph, Jason and I started working on a new panel layout for Cinnamon. We forked “Icing Task Manager” and integrated it into Cinnamon itself. That new applet received a lot of attention, many changes and eventually replaced the traditional window list and the panel launchers in the default Cinnamon panel.

Users were given the ability to define a different icon size for each of the three panel zones (left, center and right for horizontal panels, or top, center and bottom for vertical ones). Each panel zone can now have a crisp icon size such as 16, 22, 24, 32, 48 or 64px or it can be made to scale either exactly (to fit the panel size) or optimally (to scale down to the largest crisp icon size which fits in the panel).

Mint-Y-Dark was adapted slightly to look even more awesome and is now the default Cinnamon theme in Linux Mint.

By default, Cinnamon will feature a dark large 40px panel, where icons look crisp everywhere, and where they scale in the left and center zones but are restricted to 24px on the right (where we place the system tray and status icons).

This new look, along with the new workflow defined by the grouped window list, make Cinnamon feel much more modern than before.

We hope you’ll enjoy this new layout; we’re really thrilled with it. And if you don’t, that’s OK too. We made sure everyone would be happy.

As you go through the “First Steps” section of the Linux Mint 19.1 welcome screen, you’ll be asked to choose your favorite desktop layout:

With a click of a button you’ll be able to switch back and forth between old and new and choose whichever default look pleases you the most.

Update Manager

Support for mainline kernels was added to the Update Manager. Thanks to “gm10” for implementing this.

Sponsorships:

Linux Mint is proudly sponsored by:

Donations in September:

A total of $9,932 was raised thanks to the generous contributions of 467 donors:

$500, Marc M.
$200, Anthony W.
$200, Lasse S.
$150 (4th donation), Jan S.
$109 (14th donation), Hendrik S.
$109 (2nd donation), Richard aka “Friendica @ meld.de”
$109 (2nd donation), Adler-Apotheke Ahrensburg
$109, Juan E.
$109, Henning K.
$100 (6th donation), Robert K. aka “usmc_bob”
$100 (5th donation), Michael S.
$100 (5th donation), Kenneth P.
$100 (4th donation), Randall H.
$100 (2nd donation), Timothy M.
$100 (2nd donation), Timothy M.
$100 (2nd donation), Timothy M.
$100, Sherwood O.
$100, John Czuba aka “Minky”
$100, Dorothy
$100, Megan C.
$100, Stephen M.
$100, Philip C.
$100, Ronal M.
$84 (3rd donation), Thomas Ö.
$76, Jean-marc F.
$75 (2nd donation), D. C. .
$74, Mary A.
$54 (14th donation), Dr. R. M.
$54 (9th donation), Volker P.
$54 (3rd donation), Mark P.
$54 (2nd donation), Danilo Cesari aka “Dany”
$54, Bernd W.
$54, Ronald S.
$54, Marc V.
$54, Jean-pierre V.
$54, David P.
$50 (9th donation), James Denison aka “Spearmint2”
$50 (8th donation), Hans J.
$50 (4th donation), Tibor aka “tibbi”
$50 (3rd donation), An L.
$50 (3rd donation), Shermanda E.
$50 (3rd donation), Harry H. I.
$50 (2nd donation), Colin B.
$50 (2nd donation), Katherine K.
$50 (2nd donation), Richard O.
$50, Charles L.
$50, Thomas W.
$50, Dietrich S.
$50, Harrie K.
$50, Martin S.
$50, Philip C.
$50, Randy R.
$50, Joseph D.
$50, Walter D.
$45 (2nd donation), The W.
$44, Den
$42 (23rd donation), Wolfgang P.
$40 (6th donation), Efran G.
$40 (3rd donation), Soumyashant Nayak
$40, Remi L.
$40, Flint W. O.
$40, Ivan Y.
$39, Steve S.
$35 (2nd donation), Joe L.
$33 (103rd donation), Olli K.
$33 (7th donation), NAGY Attila aka “GuBo”
$33 (4th donation), Alfredo T.
$33 (3rd donation), Zerlono
$33 (2nd donation), Luca D. M.
$33 (2nd donation), Stephen M.
$33, aka “kaksikanaa”
$33, Sebastian J. E.
$33, Mario S.
$33, Raxis E.
$30 (3rd donation), John W.
$30 (3rd donation), Fred C.
$30 (2nd donation), Colin H.
$30, Robert P.
$30, Paul W.
$30, Riccardo C.
$27 (6th donation), Ralf D.
$27 (2nd donation), Holger S.
$27, Florian B.
$27, Mirko G.
$27, Lars P.
$27, Horst K.
$27, Henrik K.
$26, Veikko M.
$25 (85th donation), Ronald W.
$25 (24th donation), Larry J.
$25 (5th donation), Lennart J.
$25 (4th donation), B. H. .
$25 (3rd donation), Todd W.
$25 (3rd donation), Troy A.
$25 (3rd donation), William S.
$25 (3rd donation), Peter C.
$25 (2nd donation), William M.
$25 (2nd donation), Garrett R.
$25 (2nd donation), Chungkuan T.
$25 (2nd donation), Lynn H.
$25, Michael G.
$25, Nathan M.
$25, Fred V.
$25, Rory P.
$25, Anibal M.
$25, John S.
$25, Rick Oliver aka “Rick”
$25, Tan T.
$25, Darren K.
$25, Robert M.
$25, Darren E.
$25, Leslie P.
$25, Bob S.
$25, Balázs S.
$25, Eric W.
$25, Robert M.
$22 (19th donation), Derek R.
$22 (5th donation), Nigel B.
$22 (5th donation), David M.
$22 (4th donation), Janne K.
$22 (3rd donation), Ernst L.
$22 (3rd donation), Bernhard J.
$22 (3rd donation), Daniel M.
$22 (3rd donation), Stefan N.
$22 (3rd donation), Bruno Weber
$22 (2nd donation), Bruno T.
$22 (2nd donation), Nicolas R.
$22 (2nd donation), Timm A. M.
$22, Klaus D.
$22, Alexander L.
$22, Vincent G.
$22, Stefan L.
$22, George S.
$22, Roland T.
$22, Peter D.
$22, Pa M.
$22, Thomas H.
$22, David H.
$22, Aritz M. O.
$22, Julien D.
$22, Tanguy R.
$22, Jean-christophe B.
$22, Johan Z.
$22, Alex Mich
$20 (43rd donation), Curt Vaughan aka “curtvaughan”
$20 (10th donation), Lance M.
$20 (9th donation), Kevin Safford
$20 (5th donation), John D.
$20 (4th donation), Marius G.
$20 (4th donation), K. T. .
$20 (3rd donation), Mohamed A.
$20 (3rd donation), Bezantnet, L.
$20 (3rd donation), Bryan F.
$20 (3rd donation), Tim K.
$20 (3rd donation), David F.
$20 (2nd donation), Matthew M.
$20 (2nd donation), Barry D.
$20 (2nd donation), Ronald W.
$20 (2nd donation), Graham M.
$20 (2nd donation), Srikanth P.
$20 (2nd donation), Pixel Motion Film Entertainment, LLC
$20 (2nd donation), Bryan F.
$20, Thomas H.
$20, Eric W.
$20, Arthur S.
$20, Robert G.
$20, Stuart R.
$20, Stephen D.
$20, Joseph M.
$20, Carol V.
$20, David B.
$20, Kevin E.
$20, John K.
$20, Eyal D.
$20, Lawrence M.
$20, Jesse F.
$20, Manuel D. A.
$20, John C. B. J.
$20, Raymundo P.
$20, Nemer A.
$20, Brad S.
$20, Andrew E.
$20, Mixso Qld
$20, David R DeSpain PE
$20, Monka S. aka “Kaz”
$20, Paul B.
$16 (20th donation), Andreas S.
$16 (6th donation), Sabine L.
$16 (2nd donation), Mathias B.
$16 (2nd donation), L. T. .
$16 (2nd donation), Bernard D. B.
$16, Michael N.
$16, Patrick H.
$16, Roland W.
$15 (17th donation), Stefan M. H.
$15 (7th donation), John A.
$15 (6th donation), Hermann W.
$15 (3rd donation), Ishiyama T.
$15 (2nd donation), Eugen T.
$15 (2nd donation), Thomas J. M.
$15, Fred B.
$15, Eric H.
$15, Barnard W.
$15, Francis D.
$15, Lim C. W.
$15, framaga2000
$15, Rodolfo L.
$15, Jonathan D.
$15, Travis B.
$13 (21st donation), Johann J.
$13, Rafael A. O. Paulucci aka “rpaulucci3”
$12 (90th donation), Tony C. aka “S. LaRocca”
$12 (35th donation), JobsHiringNearMe
$12 (20th donation), Johann J.
$11 (16th donation), Alessandro S.
$11 (13th donation), Doriano G. M.
$11 (10th donation), Rufus
$11 (9th donation), Denis D.
$11 (9th donation), Per J.
$11 (6th donation), Annette T.
$11 (5th donation), Pierre G.
$11 (4th donation), Barry J.
$11 (4th donation), Oprea M.
$11 (4th donation), Emanuele Proietti aka “Manuermejo”
$11 (3rd donation), Marcel S.
$11 (3rd donation), Michael B.
$11 (3rd donation), Tangi Midy
$11 (3rd donation), Christian F.
$11 (2nd donation), Dominique M.
$11 (2nd donation), Alisdair L.
$11 (2nd donation), Renaud B.
$11 (2nd donation), Björn M.
$11 (2nd donation), Marius G.
$11 (2nd donation), August F.
$11 (2nd donation), Reinhard P. G.
$11 (2nd donation), David G.
$11, August F.
$11, Jeffrey R.
$11, Kerstin J.
$11, Martin L.
$11, Pjerinjo
$11, Stanislav G. aka “Sgcko7”
$11, Chavdar M.
$11, David C.
$11, Angelos N.
$11, Adam Butler
$11, Daniel C. G.
$11, Marco B.
$11, Anthony M.
$11, Stuart G.
$11, João P. D. aka “jpdiniz”
$11, Sven W.
$11, Radoslav J.
$11, Csaba Z. S.
$11, Alejandro M. G.
$11, Esa T.
$11, Hugo G.
$11, Lauri P.
$11, Johannes R.
$11, Vittorio F.
$10 (34th donation), Thomas C.
$10 (25th donation), Frank K.
$10 (21st donation), Jim A.
$10 (18th donation), Dinu P.
$10 (17th donation), Dinu P.
$10 (12th donation), Tomasz K.
$10 (11th donation), Chris K.
$10 (11th donation), hotelsnearbyme.net
$10 (6th donation), Mattias E.
$10 (4th donation), Frederick M.
$10 (4th donation), John T.
$10 (3rd donation), Roger S.
$10 (3rd donation), Wilfred F.
$10 (3rd donation), Raymond H. aka “Rosko”
$10 (2nd donation), Bobby E.
$10 (2nd donation), Neilor C.
$10 (2nd donation), Sara E.
$10 (2nd donation), Scott O.
$10 (2nd donation), Michael S.
$10 (2nd donation), John W.
$10, Richard R.
$10, George M.
$10, Leszek D.
$10, Eduardo B.
$10, Dmytro L.
$10, Dave G.
$10, Arthur A.
$10, James S.
$10, Polk O.
$10, Reid N.
$10, Geoff H.
$10, Gary G.
$10, Rodney D.
$10, Jeremy P.
$10, Randolph R.
$10, Harry S.
$10, Jett Fuel Productions
$10, Douglas S. aka “AJ Gringo”
$10, Carlos M. P. A.
$10, alphabus
$10, Ivan M.
$10, Lebogang L.
$10, lin pei hung
$10, Glen D.
$10, Brian H.
$10, Christopher D.
$10, Scott M.
$9, Roberto P.
$8 (3rd donation), Cyril U.
$8 (2nd donation), Caio C. M.
$8, Stefan S.
$8, John T.
$7 (8th donation), GaryD
$7 (5th donation), Jan Miszura
$7 (3rd donation), Kiyokawa E.
$7 (3rd donation), Daniel J G II
$7 (2nd donation), Mirko Bukilić aka “Bukela”
$7, Ante B.
$7, Wayne O.
$6.44, Mahmood M.
$6 (2nd donation), Alan H.
$6, Sydney G.
$6, Nancy H.
$5 (28th donation), Eugene T.
$5 (21st donation), Kouji Sugibayashi
$5 (20th donation), Kouji Sugibayashi
$5 (19th donation), Bhavinder Jassar
$5 (14th donation), Dmitry P.
$5 (11th donation), J. S. .
$5 (11th donation), Web Design Company
$5 (10th donation), Lumacad Coupon Advertising
$5 (10th donation), Blazej P. aka “bleyzer”
$5 (7th donation), AlephAlpha
$5 (7th donation), Joseph G.
$5 (7th donation), Халилова А.
$5 (6th donation), Goto M.
$5 (5th donation), Scott L.
$5 (5th donation), Russell S.
$5 (5th donation), Pokies Portal
$5 (4th donation), Giuseppino M.
$5 (4th donation), Adjie aka “AJ”
$5 (4th donation), rptev
$5 (3rd donation), Jalister
$5 (3rd donation), Tomasz R.
$5 (3rd donation), Daniela K.
$5 (2nd donation), Pawel K.
$5 (2nd donation), Ramon O.
$5 (2nd donation), Sergei K.
$5 (2nd donation), Jerry F.
$5 (2nd donation), Joseph J. G.
$5 (2nd donation), Erik P.
$5 (2nd donation), Stefan N.
$5 (2nd donation), Nenad G.
$5, Sergio M.
$5, Paul B.
$5, Sergio G.
$5, Gregory M.
$5, Almir D. A. B. F.
$5, Paul R.
$5, Stamatis S.
$5, The Art of War by Sun Tzu
$5, Borut B.
$5, Mitchell S.
$5, Angela S.
$5, Manny V.
$5, Silviu P.
$5, Lyudmila N.
$5, Ligrani F.
$5, Drug Rehab Thailand aka “Siam Rehab”
$5, Alfredo G.
$5, Mike K.
$5, Peter A. aka “Skwanchi”
$5, Harmen P.
$5, Joseangel S.
$5, Jaime S.
$5, Ruslan A.
$5, Corrie B.
$5, Beverlee H.
$5, Akiva G.
$5, Alexander P.
$5, Kepa M. S.
$5, Christian M.
$4 (9th donation), nordvpn coupon
$4, Alexander Z.
$3.7, Alex H.
$3.6, Allen D.
$3.4, Patricia G.
$3.35, Di_Mok
$3.2, Trina Z.
$3.1, Edward K.
$3.1, Sarie B.
$3 (3rd donation), Lubos S.
$3, Frederik V. D.
$3, Somfalvi J.
$3, Therese N.
$3, Mikko S.
$2.9, Allison C.
$2.8, Marsha E.
$2.8, Joe F.
$2.6, Maureen M.
$2.6, Okneia F.
$2.5, Tonya G.
$2.4 (2nd donation), Tonya G.
$2.4, Jearlin B.
$2.3 (2nd donation), Edward K.
$2.3, Henry H.
$2.3, Pedro P.
$2.2, Joseph Lenzo DOB
$79.87 from 59 smaller donations

If you want to help Linux Mint with a donation, please visit https://www.linuxmint.com/donors.php

Patrons:

Linux Mint is proudly supported by 33 patrons, for a sum of $239 per month.

To become a Linux Mint patron, please visit https://www.patreon.com/linux_mint

Rankings:

  • Distrowatch (popularity ranking): 2249 (2nd)
  • Alexa (website ranking): 4180


MySQL Replication Master Slave Setup


MySQL replication lets you maintain synchronized slave copies of a MySQL server. You can then use the slave to perform backups, and as a recovery option if the master goes offline for any reason. MySQL needs to be installed on both servers.

Install MySQL on both servers:

yum install -y mysql-server mysql-client mysql-devel

Edit /etc/my.cnf on both servers and set a unique numerical server ID (any number is fine as long as the two servers do not use the same one):

server-id = 1
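On the slave, the same option just needs a different value. For example, the two files might differ only in this line (the values here are illustrative; any pair of distinct numbers works):

```
# /etc/my.cnf on the master
[mysqld]
server-id = 1

# /etc/my.cnf on the slave
[mysqld]
server-id = 2
```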

Configure MySQL Replication On The Master

On the master, ensure a binary log is set in /etc/my.cnf:

log_bin = /var/log/mysql/mysql-bin.log

Restart MySQL:

service mysqld restart

Connect to MySQL on the master:

mysql -u root -p

Grant replication privileges to the slave user:

GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'password';

Load the new privileges

FLUSH PRIVILEGES;

Lock the MySQL master so no new updates can be written while you are creating the slave

FLUSH TABLES WITH READ LOCK;

Get the current master status

SHOW MASTER STATUS;

This will return a similar result to this:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

This is the position the slave will start from; save this information for later. You will need to keep the mysql client open on the master: if you close it, the read lock is released, and writes made during the copy will cause replication issues when you try to sync the slave.

Open a new SSH session and dump the databases:

mysqldump -u root -p --all-databases > all.sql

If it is a particularly large MySQL server, you can rsync all of /var/lib/mysql instead.

Once the copy has completed go ahead and type the following on the MySQL master:

UNLOCK TABLES;

You can now quit the mysql client on the master.

Configure MySQL Replication On The Slave

Import the databases on the slave:

mysql < all.sql

You should also have set the server-id in /etc/my.cnf on the slave and restarted MySQL.

Once MySQL has been restarted and the databases have been imported, you can set up replication with the following command in the mysql client:

CHANGE MASTER TO MASTER_HOST='IP ADDRESS OF MASTER', MASTER_USER='slave', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;

Change MASTER_LOG_FILE and MASTER_LOG_POS to the values you got earlier from the master. Once you have entered the above command, go ahead and start the slave:

START SLAVE;

To check the current slave status:

SHOW SLAVE STATUS;
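That output is one very wide row; appending the mysql client's `\G` terminator prints one field per line instead, which makes the health indicators easy to spot. `Slave_IO_Running` and `Slave_SQL_Running` should both read `Yes`, and `Seconds_Behind_Master` shows the current replication lag:

```sql
-- Vertical output: one field per line instead of one very wide row.
SHOW SLAVE STATUS\G
```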

This is a basic master-slave MySQL replication configuration.

Apr 29, 2017, LinuxAdmin.io

