Normalizing Filenames and Data with Bash

URLify: convert letter sequences into safe URLs with hex
equivalents.

This is my 155th column. That means I’ve been writing for Linux
Journal
for:

$ echo "155/12" | bc
12

No, wait, that’s not right. Let’s try that again:

$ echo "scale=2;155/12" | bc
12.91

Yeah, that many years. Almost 13 years of writing about shell scripts and
lightweight programming within the Linux environment. I’ve covered a lot
of ground, but I want to go back to something that’s fairly basic and
talk about filenames and the web.

It used to be that if you had filenames that had spaces in them, bad things would
happen: “my mom’s cookies.html” was a recipe for disaster, not
good cookies—um, and not those sorts of web cookies either!

As the web evolved, however, encoding of special characters became the norm,
and every Web browser had to be able to manage it, for better or worse. So
spaces became either “+” or %20 sequences, and everything else that
wasn’t a regular alphanumeric character was replaced by its hex ASCII
equivalent.

In other words, “my mom’s cookies.html” turned into
“my+mom%27s+cookies.html” or “my%20mom%27s%20cookies.html”.
Many symbols took on a second life too, so “&” and “=” and
“?” all got their own meanings, which meant that they needed to be
protected if they were part of an original filename too. And what about if
you had a “%” in your original filename? Ah yes, the recursive nature
of encoding things….

So purely as an exercise in scripting, let’s write a script that
converts any string you hand it into a “web-safe” sequence. Before
starting, however, pull out a piece of paper and jot down how you’d solve
it.

Normalizing Filenames for the Web

My strategy is going to be easy: pull the string apart into individual
characters, analyze each character to identify if it’s an alphanumeric,
and if it’s not, convert it into its hexadecimal ASCII equivalent,
prefacing it with a “%” as needed.

There are a number of ways to break a string into its individual letters,
but let’s use Bash string variable manipulations, recalling that
${#var}
returns the number of characters in variable $var, and that
${var:x:1} will
return just the letter in $var at position x. Quick now, does indexing start
at zero or one?
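A quick experiment settles that question (spoiler: Bash indexes from zero):

```shell
# Bash substring expansion is zero-indexed: ${var:offset:length}
var="linux"
echo "${#var}"     # prints 5 (number of characters)
echo "${var:0:1}"  # prints l (position 0 is the first character)
echo "${var:4:1}"  # prints x (the last character is at length - 1)
```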

Here’s my initial loop to break $original into its component letters:

input="$*"

echo $input

for (( counter=0 ; counter < ${#input} ; counter++ ))
do
  echo "counter = $counter -- ${input:$counter:1}"
done

Recall that $* is a shortcut for everything from the invoking command line
other than the command name itself—a lazy way to let users quote the
argument or not. It doesn’t address special characters, but that’s
what quotes are for, right?

Let’s give this fragmentary script a whirl with some input from the
command line:

$ sh normalize.sh “li nux?”
li nux?
counter = 0 -- l
counter = 1 -- i
counter = 2 --
counter = 3 -- n
counter = 4 -- u
counter = 5 -- x
counter = 6 -- ?

There’s obviously some debugging code in the script, but it’s
generally a good idea to leave that in until you’re sure it’s working
as expected.

Now it’s time to differentiate between characters that are acceptable
within a URL and those that are not. Turning a character into a hex sequence
is a bit tricky, so I’m using a sequence of fairly obscure
commands. Let’s start with just the command line:

$ echo '~' | xxd -ps -c1 | head -1
7e

Now, the question is whether “~” is actually the hex ASCII sequence
7e or not. A quick glance at http://www.asciitable.com confirms that, yes, 7e is
indeed the ASCII for the tilde. Preface that with a percentage sign, and
the tough job of conversion is managed.
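As an aside, Bash's printf can produce the same hex code without the xxd pipeline, using the POSIX leading-quote trick. This sketch only matches xxd's output for single-byte characters, since printf reports a codepoint rather than raw bytes for multibyte input:

```shell
# to_hex: print the two-digit hex code of a single ASCII character,
# using printf's leading-quote numeric conversion instead of xxd
to_hex() {
  printf '%02x' "'$1"
}

echo "$(to_hex '~')"   # prints 7e, matching the xxd pipeline above
```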

But, how do you know what characters can be used as they are? Because of the weird
way the ASCII table is organized, that’s going to be three ranges:
0–9 is in one area of the table, then A–Z in a second area and
a–z in a
third. There’s no way around it, that’s three range tests.

There’s a really cool way to do that in Bash too:

if [[ "$char" =~ [a-z] ]]

What’s happening here is that this is actually a regular expression (the
=~) and a range [a-z] as the test. Since the action
I want to take after
each test is identical, it’s easy now to implement all three tests:

if [[ "$char" =~ [a-z] ]]; then
  output="$output$char"
elif [[ "$char" =~ [A-Z] ]]; then
  output="$output$char"
elif [[ "$char" =~ [0-9] ]]; then
  output="$output$char"
else

As is obvious, the $output string variable will be built up to have the
desired value.
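Since all three branches do exactly the same thing, they could also collapse into a single bracket-expression test. Here's an equivalent sketch (the keep_char helper name is mine, not part of the column's script):

```shell
# keep_char: succeed when the character needs no encoding
keep_char() {
  [[ "$1" =~ [a-zA-Z0-9] ]]
}

output=""
for char in l i '%' 9; do
  if keep_char "$char"; then
    output="$output$char"
  else
    output="$output#"   # stand-in for the hex-encoding branch
  fi
done
echo "$output"   # prints li#9
```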

What’s left? The hex output for anything that’s not an otherwise
acceptable character. And you’ve already seen how that can be implemented:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1)"
output="$output%$hexchar"

A quick run through:

$ sh normalize.sh “li nux?”
li nux? translates to li%20nux%3f

See the problem? Without converting the hex into uppercase, it’s a bit
weird looking. What’s “nux”? That’s just another step in the subshell
invocation:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1 |
  tr '[a-z]' '[A-Z]')"

And now, with that tweak, the output looks good:

$ sh normalize.sh “li nux?”
li nux? translates to li%20nux%3F

What about a non-Latin-1 character like an umlaut or an n-tilde? Let’s
see what happens:

$ sh normalize.sh “Señor Günter”
Señor Günter translates to Se%C3B1or%200AG%C3BCnter

Ah, there’s a bug in the script when it comes to these two-byte character
sequences, because each special letter should have two hex byte sequences. In
other words, it should be converted to se%C3%B1or g%C3%BCnter (I restored the
space to make it a bit easier to see what I’m talking about).

In other words, this gets the right sequences, but it’s missing
a percentage sign: %C3B1 should be %C3%B1, and
%C3BC should be %C3%BC.

Undoubtedly, the problem is in the hexchar assignment subshell statement:

hexchar="$(echo "$char" | xxd -ps -c1 | head -1 |
  tr '[a-z]' '[A-Z]')"

Is it the -c1 argument to xxd? Maybe. I’m going to leave identifying and
fixing the problem as an exercise for you, dear reader. And while you’re
fixing up the script to support two-byte characters, why not replace
“%20” with “+” too?

Finally, to make this maximally useful, don’t forget that there are a
number of symbols that are valid and don’t need to be converted within
URLs too, notably the set of “-_./!@#=&?”, so you’ll want to
ensure that they don’t get hexified (is that a word?).
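Putting the pieces, fixes and exercises together, here is one possible finished sketch. It is my own completion, not the column's final script: it keeps the safe symbol set, turns spaces into “+”, and encodes multibyte characters byte by byte (via od rather than xxd) so that ñ becomes %C3%B1 rather than %C3B1:

```shell
#!/bin/bash
# urlify: web-safe encode a string; one possible completion of the
# normalize.sh exercise above
urlify() {
  local input="$*" output="" i char byte
  for (( i = 0; i < ${#input}; i++ )); do
    char="${input:$i:1}"
    case "$char" in
      [a-zA-Z0-9._/!@#=?-]|'&')   # alphanumerics plus the safe symbol set
        output="$output$char" ;;
      ' ')                        # the exercise: "+" instead of %20
        output="$output+" ;;
      *)                          # everything else: one %XX per byte, so
                                  # multibyte UTF-8 sequences stay intact
        for byte in $(printf '%s' "$char" | od -An -tx1 | tr 'a-f' 'A-F'); do
          output="$output%$byte"
        done ;;
    esac
  done
  printf '%s\n' "$output"
}

urlify "my mom's cookies.html"   # prints my+mom%27s+cookies.html
```

Note that printf '%s' (rather than echo) keeps a stray newline byte out of the encoding, which is where the mysterious %0A in the Señor Günter output came from.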

Source

Ubuntu’s Cosmic Cuttlefish Brings Performance Improvements and More – Linux.com


Canonical has just recently announced that Ubuntu 18.10, code named ‘Cosmic Cuttlefish’, is ready for downloading at the Ubuntu release site. Some of the features of this new release include:

  • the latest version of Kubernetes with improved security and scalability
  • access to 4,100 snaps
  • better support for gaming graphics and hardware including support for the extremely fast Qualcomm Snapdragon 845
  • fingerprint unlocking for compatible systems (e.g., Ubuntu phones)

The new theme

The Yaru Community theme is included with Ubuntu 18.10, along with a new desktop wallpaper that displays an artistic rendition of a cuttlefish (a marine animal related to squid, octopuses and nautiluses).

Source

Papa’s Got a Brand New NAS: the Software

Who needs a custom NAS OS or a web-based GUI when command-line
NAS software is so easy to configure?

In a recent letter to the editor, I was contacted by a reader who
enjoyed my “Papa’s Got a Brand New NAS” article, but wished I had
spent more time describing the software I used. When I
wrote the article, I decided not to dive into the software too much,
because it all was pretty standard for serving files under Linux.
But on second thought, if you want to re-create what I made, I
imagine it would be nice to know the software side as well, so this article
describes the software I use in my home NAS.

The OS

My NAS uses the ODROID-XU4 as the main computing platform, and so
far, I’ve found its octo-core ARM CPU and the rest of its resources
to be adequate for a home NAS. When I first set it up, I visited the
official wiki page for the computer, which provides a number of OS
images, including Ubuntu and Android images that you can copy onto a
microSD card. Those images are geared more toward desktop use,
however, and I wanted a minimal server image. After some searching,
I found a minimal image for what was the current Debian stable
release at the time (Jessie).

Although this minimal image worked okay for me, I don’t necessarily
recommend just going with whatever OS some volunteer on a forum
creates. Since I first set up the computer, the Armbian project has
been released, and it supports a number of standardized OS images for quite
a few ARM platforms including the ODROID-XU4. So if you
want to follow in my footsteps, you may want to start with the minimal Armbian
Debian image.

If you’ve ever used a Raspberry Pi before, the process of setting
up an alternative ARM board shouldn’t be too different. Use another
computer to write an OS image to a microSD card, boot the ARM board,
and at boot, the image will expand to fill the existing filesystem.
Then reboot and connect to the network, so you can log in with the default
credentials your particular image sets up. Like with Raspbian builds,
the first step you should perform with Armbian or any other OS image
is to change the default password to something else. Even better,
you should consider setting up proper user accounts instead of
relying on the default.

The nice thing about these Debian-based ARM images is that you end
up with a kernel that works with your hardware, but you also have
the wide variety of software that Debian is known for at your
disposal. In general, you can treat this custom board like any other
Debian server. I’ve been using Debian servers for years, and
many online guides describe how to set up servers under Debian, so
it provides a nice base platform for just about anything you’d
like to do with the server.

In my case, since I was migrating to this new NAS from an existing
1U Debian server, including just moving over the physical hard drives
to a new enclosure, the fact that the distribution was the same
meant that as long as I made sure I installed the same packages on
this new computer, I could generally just copy over my configuration
files wholesale from the old computer. This is one of the big
benefits to rolling your own NAS off a standard Linux distribution
instead of using some prepackaged NAS image. The prepackaged solution
may be easier at first, but if you ever want to migrate off of it
to some other OS, it may be difficult, if not impossible, to take
advantage of any existing settings. In my situation, even if I had gone
with another Linux distribution, I still could have copied over all
of my configuration files to the new distribution—in some cases
even into the same exact directories.

NFS

As I mentioned, since I was moving from an existing 1U NAS server
built on top of standard Debian services, setting up my NFS service
was a simple matter of installing the nfs-kernel-server Debian
package, copying my /etc/exports file over from my old server and
restarting the nfs-kernel-server service with:

$ sudo service nfs-kernel-server restart

If you’re not familiar with setting up a traditional NFS server
under Linux, so many different guides exist that I
doubt I’d be adding much to the world of NFS documentation
by rehashing it again here. Suffice it to say that it comes down to
adding entries into your /etc/exports file that tell the NFS server
which directories to share, who to share them with (based on IP)
and what restrictions to use. For instance, here’s a sample entry I
use to share a particular backup archive directory with a particular
computer on my network:

/mnt/storage/archive 192.168.0.50(fsid=715,rw)

This line tells the NFS server to share the local /mnt/storage/archive
directory with the machine that has the IP 192.168.0.50, to give
it read/write privileges and also to assign this particular share
with a certain filesystem ID. I’ve discovered that assigning a
unique fsid value to each entry in /etc/exports can help the NFS
server identify each filesystem it’s exporting explicitly with
this ID, in case it can’t find a UUID for the filesystem (or if you
are exporting multiple directories within the same filesystem).
Once I make a change to the /etc/exports file, I like to tell the
NFS service to reload the file explicitly with:

$ sudo service nfs-kernel-server reload
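To make the fsid advice concrete, a fuller /etc/exports sketch with a unique fsid per entry might look like this (the paths, client IPs and fsid numbers beyond the archive line are made up for illustration):

```
/mnt/storage/archive 192.168.0.50(fsid=715,rw)
/mnt/storage/media   192.168.0.0/24(fsid=716,ro)
/mnt/storage/backups 192.168.0.50(fsid=717,rw)
```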

NFS has a lot of different and complicated options you can apply
to filesystems, and there’s a bit of an art to tuning things exactly
how you want them to be (especially if you are deciding between
version 3 and 4 of the NFS protocol). I typically turn to the exports
man page (type man exports in a terminal) for good descriptions
of all the options and to see configuration examples.

Samba

If you just need to share files with Linux clients, NFS may be all
you need. However, if you have other OSes on your network, or clients
who don’t have good NFS support, you may find it useful to
offer Windows-style SMB/CIFS file sharing using Samba as well. Although Samba
is configured quite differently from NFS, it’s still not too
complicated.

First, install the Samba package for your distribution. In my case,
that meant:

$ sudo apt install samba

Once the package is installed, you will see that Debian provides a
well commented /etc/samba/smb.conf file with ordinary defaults set.
I then edited that /etc/samba/smb.conf file and made sure to restrict
access to my Samba service to only those IPs I wanted to allow by
setting the following options in the networking section of the
smb.conf:

hosts allow = 192.168.0.20, 192.168.0.22, 192.168.0.23
interfaces = 127.0.0.1 192.168.0.1/24
bind interfaces only = Yes

These changes restrict Samba access to only a few IPs, and explicitly
tell Samba to listen to localhost and a particular interface on the
correct IP network.

There are additional ways you can configure access control with
Samba, and by default, Debian sets it up so that Samba uses local
UNIX accounts. This means you can set up local UNIX accounts on the
server, give them a strong password, and then require that users
authenticate with the appropriate user name and password before they
have access to a file share. Because this is already set up in Debian,
all I had left to do was to add some file shares to the end of my
smb.conf file using the commented examples as a reference. This
example shows how to share the same /mnt/storage/archive directory
with Samba instead of NFS:

[archive]
path = /mnt/storage/archive/
revalidate = Yes
writeable = Yes
guest ok = No
force user = greenfly

As with NFS, there are countless guides on how to configure Samba.
In addition to those guides, you can do as I do and check out the
heavily commented smb.conf or type man smb.conf if you want more
specifics on what a particular option does. As with NFS, when you
change a setting in smb.conf, you need to reload Samba with:

$ sudo service samba reload

Conclusion

What’s refreshing about setting up Linux as a NAS is that file
sharing (in particular, replacing Windows SMB file servers in corporate
environments) is one of the first major forays Linux made in the
enterprise. As a result, as you have seen, setting up Linux to be
a NAS is pretty straightforward even without some nice GUI. What’s
more, since I’m just using a normal Linux distribution instead of
some custom NAS-specific OS, I also can use this same server for
all sorts of other things, such as a local DNS resolver, local mail
relay or any other Linux service I might think of. Plus, down the
road if I ever feel a need to upgrade, it should be pretty easy to
move these configurations over to brand new hardware.

Source

Monthly News – October 2018 – The Linux Mint Blog

Before we talk about new features and project news I’d like to send a huge thank you to all the people who support our project. Many thanks to our donors, our sponsors, our patrons and all the people who are helping us. I’d also like to say we’ve had a lot of fun working on developing Linux Mint lately and we’re excited to share the news with you.

Release schedule

We will be working to get Linux Mint 19.1 out for Christmas this year, with all three editions released at the same time and the upgrade paths open before the holiday season.

Patreon

Following the many requests we received to look into an alternative to Paypal, we’re happy to announce Linux Mint is now on Patreon: https://www.patreon.com/linux_mint.

Our project has received 33 pledges so far, and we decided to use this service to help support Timeshift, a project which is very important to us and adds significant value to Linux Mint.

Mint-Y

Joseph Mccullar continued to improve the Mint-Y theme. Through a series of subtle changes he managed to dramatically increase the theme’s contrast.

The screenshot below shows the Xed text editor using the Mint-Y theme as it was in Mint 19 (on the left), and using the Mint-Y theme with Joseph’s changes (on the right):

The difference is immediately noticeable when the theme is applied on the entire desktop. Labels look sharp and stand out on top of their backgrounds. So do the icons which now look darker than before.

The changes also make it easier to visually identify the focused window:

In the above screenshot, the terminal is focused and its titlebar label is darker than in the other windows. This contrast is much more noticeable with Joseph’s changes (below the red line) than before (above the red line).

Status icons

Linux Mint 19 featured monochrome status icons. Although these icons looked nice on dark panels they didn’t work well in white context menus or in cases where the panel background color was changed by the user.

To tackle this issue, Linux Mint 19.1 will ship with support for symbolic icons in Redshift, mate-volume-control-applet, onboard and network-manager-applet.

Xapp

Stephen Collins added an icon chooser to the XApp library.

The icon chooser provides a dialog and a button and will make it easier for our applications to select themed icons and/or icon paths.

Cinnamon

Cinnamon 4.0 will look more modern thanks to a new panel layout. Whether you enjoy the new look or prefer the old one, we want everyone to feel at home in their operating system, so you’ll have the option to embrace the change or to click a button to make Cinnamon look just like it did before.

The idea of a larger and darker panel had been in the roadmap for a while.

Within our team, Jason Hicks and Lars Mueller (Cobinja) maintained two of the most successful third-party Cinnamon applets, “Icing Task Manager” and “CobiWindowList” respectively. Both are attempts at implementing a window list with app grouping and window previews, a feature which has become the norm in other major desktop operating systems, whether in the form of a dock (in macOS), a panel (in Windows) or a sidebar (in Ubuntu).

And recently, German Franco drew our attention to the need for strict icon sizes to guarantee icons look crisp rather than blurry.

We talked about all of this and Niko Krause, Joseph, Jason and I started working on a new panel layout for Cinnamon. We forked “Icing Task Manager” and integrated it into Cinnamon itself. That new applet received a lot of attention, many changes and eventually replaced the traditional window list and the panel launchers in the default Cinnamon panel.

Users were given the ability to define a different icon size for each of the three panel zones (left, center and right for horizontal panels, or top, center and bottom for vertical ones). Each panel zone can now have a crisp icon size such as 16, 22, 24, 32, 48 or 64px or it can be made to scale either exactly (to fit the panel size) or optimally (to scale down to the largest crisp icon size which fits in the panel).

Mint-Y-Dark was adapted slightly to look even more awesome and is now the default Cinnamon theme in Linux Mint.

By default, Cinnamon will feature a dark large 40px panel, where icons look crisp everywhere, and where they scale in the left and center zones but are restricted to 24px on the right (where we place the system tray and status icons).

This new look, along with the new workflow defined by the grouped window list, make Cinnamon feel much more modern than before.

We hope you’ll enjoy this new layout; we’re really thrilled with it. And if you don’t, that’s OK too: we made sure everyone would be happy.

As you go through the “First Steps” section of the Linux Mint 19.1 welcome screen, you’ll be asked to choose your favorite desktop layout:

With a click of a button you’ll be able to switch back and forth between old and new and choose whichever default look pleases you the most.

Update Manager

Support for mainline kernels was added to the Update Manager. Thanks to “gm10” for implementing this.

Sponsorships:

Linux Mint is proudly sponsored by:

Donations in September:

A total of $9,932 was raised thanks to the generous contributions of 467 donors:

$500, Marc M.
$200, Anthony W.
$200, Lasse S.
$150 (4th donation), Jan S.
$109 (14th donation), Hendrik S.
$109 (2nd donation), Richard aka “Friendica @ meld.de
$109 (2nd donation), Adler-Apotheke Ahrensburg
$109, Juan E.
$109, Henning K.
$100 (6th donation), Robert K. aka “usmc_bob”
$100 (5th donation), Michael S.
$100 (5th donation), Kenneth P.
$100 (4th donation), Randall H.
$100 (2nd donation), Timothy M.
$100 (2nd donation), Timothy M.
$100 (2nd donation), Timothy M.
$100, Sherwood O.
$100, John Czuba aka “Minky”
$100, Dorothy
$100, Megan C.
$100, Stephen M.
$100, Philip C.
$100, Ronal M.
$84 (3rd donation), Thomas Ö.
$76, Jean-marc F.
$75 (2nd donation), D. C. .
$74, Mary A.
$54 (14th donation), Dr. R. M.
$54 (9th donation), Volker P.
$54 (3rd donation), Mark P.
$54 (2nd donation), Danilo Cesari aka “Dany”
$54, Bernd W.
$54, Ronald S.
$54, Marc V.
$54, Jean-pierre V.
$54, David P.
$50 (9th donation), James Denison aka “Spearmint2”
$50 (8th donation), Hans J.
$50 (4th donation), Tibor aka “tibbi
$50 (3rd donation), An L.
$50 (3rd donation), Shermanda E.
$50 (3rd donation), Harry H. I.
$50 (2nd donation), Colin B.
$50 (2nd donation), Katherine K.
$50 (2nd donation), Richard O.
$50, Charles L.
$50, Thomas W.
$50, Dietrich S.
$50, Harrie K.
$50, Martin S.
$50, Philip C.
$50, Randy R.
$50, Joseph D.
$50, Walter D.
$45 (2nd donation), The W.
$44, Den
$42 (23rd donation), Wolfgang P.
$40 (6th donation), Efran G.
$40 (3rd donation), Soumyashant Nayak
$40, Remi L.
$40, Flint W. O.
$40, Ivan Y.
$39, Steve S.
$35 (2nd donation), Joe L.
$33 (103rd donation), Olli K.
$33 (7th donation), NAGY Attila aka “GuBo”
$33 (4th donation), Alfredo T.
$33 (3rd donation), Zerlono
$33 (2nd donation), Luca D. M.
$33 (2nd donation), Stephen M.
$33, aka “kaksikanaa”
$33, Sebastian J. E.
$33, Mario S.
$33, Raxis E.
$30 (3rd donation), John W.
$30 (3rd donation), Fred C.
$30 (2nd donation), Colin H.
$30, Robert P.
$30, Paul W.
$30, Riccardo C.
$27 (6th donation), Ralf D.
$27 (2nd donation), Holger S.
$27, Florian B.
$27, Mirko G.
$27, Lars P.
$27, Horst K.
$27, Henrik K.
$26, Veikko M.
$25 (85th donation), Ronald W.
$25 (24th donation), Larry J.
$25 (5th donation), Lennart J.
$25 (4th donation), B. H. .
$25 (3rd donation), Todd W.
$25 (3rd donation), Troy A.
$25 (3rd donation), William S.
$25 (3rd donation), Peter C.
$25 (2nd donation), William M.
$25 (2nd donation), Garrett R.
$25 (2nd donation), Chungkuan T.
$25 (2nd donation), Lynn H.
$25, Michael G.
$25, Nathan M.
$25, Fred V.
$25, Rory P.
$25, Anibal M.
$25, John S.
$25, Rick Oliver aka “Rick”
$25, Tan T.
$25, Darren K.
$25, Robert M.
$25, Darren E.
$25, Leslie P.
$25, Bob S.
$25, Balázs S.
$25, Eric W.
$25, Robert M.
$22 (19th donation), Derek R.
$22 (5th donation), Nigel B.
$22 (5th donation), David M.
$22 (4th donation), Janne K.
$22 (3rd donation), Ernst L.
$22 (3rd donation), Bernhard J.
$22 (3rd donation), Daniel M.
$22 (3rd donation), Stefan N.
$22 (3rd donation), Bruno Weber
$22 (2nd donation), Bruno T.
$22 (2nd donation), Nicolas R.
$22 (2nd donation), Timm A. M.
$22, Klaus D.
$22, Alexander L.
$22, Vincent G.
$22, Stefan L.
$22, George S.
$22, Roland T.
$22, Peter D.
$22, Pa M.
$22, Thomas H.
$22, David H.
$22, Aritz M. O.
$22, Julien D.
$22, Tanguy R.
$22, Jean-christophe B.
$22, Johan Z.
$22, Alex Mich
$20 (43rd donation), Curt Vaughan aka “curtvaughan”
$20 (10th donation), Lance M.
$20 (9th donation), Kevin Safford
$20 (5th donation), John D.
$20 (4th donation), Marius G.
$20 (4th donation), K. T. .
$20 (3rd donation), Mohamed A.
$20 (3rd donation), Bezantnet, L.
$20 (3rd donation), Bryan F.
$20 (3rd donation), Tim K.
$20 (3rd donation), David F.
$20 (2nd donation), Matthew M.
$20 (2nd donation), Barry D.
$20 (2nd donation), Ronald W.
$20 (2nd donation), Graham M.
$20 (2nd donation), Srikanth P.
$20 (2nd donation), Pixel Motion Film Entertainment, LLC
$20 (2nd donation), Bryan F.
$20, Thomas H.
$20, Eric W.
$20, Arthur S.
$20, Robert G.
$20, Stuart R.
$20, Stephen D.
$20, Joseph M.
$20, Carol V.
$20, David B.
$20, Kevin E.
$20, John K.
$20, Eyal D.
$20, Lawrence M.
$20, Jesse F.
$20, Manuel D. A.
$20, John C. B. J.
$20, Raymundo P.
$20, Nemer A.
$20, Brad S.
$20, Andrew E.
$20, Mixso Qld
$20, David R DeSpain PE
$20, Monka S. aka “Kaz”
$20, Paul B.
$16 (20th donation), Andreas S.
$16 (6th donation), Sabine L.
$16 (2nd donation), Mathias B.
$16 (2nd donation), L. T. .
$16 (2nd donation), Bernard D. B.
$16, Michael N.
$16, Patrick H.
$16, Roland W.
$15 (17th donation), Stefan M. H.
$15 (7th donation), John A.
$15 (6th donation), Hermann W.
$15 (3rd donation), Ishiyama T.
$15 (2nd donation), Eugen T.
$15 (2nd donation), Thomas J. M.
$15, Fred B.
$15, Eric H.
$15, Barnard W.
$15, Francis D.
$15, Lim C. W.
$15, framaga2000
$15, Rodolfo L.
$15, Jonathan D.
$15, Travis B.
$13 (21st donation), Johann J.
$13, Rafael A. O. Paulucci aka “rpaulucci3
$12 (90th donation), Tony C. aka “S. LaRocca”
$12 (35th donation), JobsHiringNearMe
$12 (20th donation), Johann J.
$11 (16th donation), Alessandro S.
$11 (13th donation), Doriano G. M.
$11 (10th donation), Rufus
$11 (9th donation), Denis D.
$11 (9th donation), Per J.
$11 (6th donation), Annette T.
$11 (5th donation), Pierre G.
$11 (4th donation), Barry J.
$11 (4th donation), Oprea M.
$11 (4th donation), Emanuele Proietti aka “Manuermejo”
$11 (3rd donation), Marcel S.
$11 (3rd donation), Michael B.
$11 (3rd donation), Tangi Midy
$11 (3rd donation), Christian F.
$11 (2nd donation), Dominique M.
$11 (2nd donation), Alisdair L.
$11 (2nd donation), Renaud B.
$11 (2nd donation), Björn M.
$11 (2nd donation), Marius G.
$11 (2nd donation), August F.
$11 (2nd donation), Reinhard P. G.
$11 (2nd donation), David G.
$11, August F.
$11, Jeffrey R.
$11, Kerstin J.
$11, Martin L.
$11, Pjerinjo
$11, Stanislav G. aka “Sgcko7”
$11, Chavdar M.
$11, David C.
$11, Angelos N.
$11, Adam Butler
$11, Daniel C. G.
$11, Marco B.
$11, Anthony M.
$11, Stuart G.
$11, João P. D. aka “jpdiniz”
$11, Sven W.
$11, Radoslav J.
$11, Csaba Z. S.
$11, Alejandro M. G.
$11, Esa T.
$11, Hugo G.
$11, Lauri P.
$11, Johannes R.
$11, Vittorio F.
$10 (34th donation), Thomas C.
$10 (25th donation), Frank K.
$10 (21st donation), Jim A.
$10 (18th donation), Dinu P.
$10 (17th donation), Dinu P.
$10 (12th donation), Tomasz K.
$10 (11th donation), Chris K.
$10 (11th donation), hotelsnearbyme.net
$10 (6th donation), Mattias E.
$10 (4th donation), Frederick M.
$10 (4th donation), John T.
$10 (3rd donation), Roger S.
$10 (3rd donation), Wilfred F.
$10 (3rd donation), Raymond H. aka “Rosko”
$10 (2nd donation), Bobby E.
$10 (2nd donation), Neilor C.
$10 (2nd donation), Sara E.
$10 (2nd donation), Scott O.
$10 (2nd donation), Michael S.
$10 (2nd donation), John W.
$10, Richard R.
$10, George M.
$10, Leszek D.
$10, Eduardo B.
$10, Dmytro L.
$10, Dave G.
$10, Arthur A.
$10, James S.
$10, Polk O.
$10, Reid N.
$10, Geoff H.
$10, Gary G.
$10, Rodney D.
$10, Jeremy P.
$10, Randolph R.
$10, Harry S.
$10, Jett Fuel Productions
$10, Douglas S. aka “AJ Gringo”
$10, Carlos M. P. A.
$10, alphabus
$10, Ivan M.
$10, Lebogang L.
$10, lin pei hung
$10, Glen D.
$10, Brian H.
$10, Christopher D.
$10, Scott M.
$9, Roberto P.
$8 (3rd donation), Cyril U.
$8 (2nd donation), Caio C. M.
$8, Stefan S.
$8, John T.
$7 (8th donation), GaryD
$7 (5th donation), Jan Miszura
$7 (3rd donation), Kiyokawa E.
$7 (3rd donation), Daniel J G II
$7 (2nd donation), Mirko Bukilić aka “Bukela”
$7, Ante B.
$7, Wayne O.
$6.44, Mahmood M.
$6 (2nd donation), Alan H.
$6, Sydney G.
$6, Nancy H.
$5 (28th donation), Eugene T.
$5 (21st donation), Kouji Sugibayashi
$5 (20th donation), Kouji Sugibayashi
$5 (19th donation), Bhavinder Jassar
$5 (14th donation), Dmitry P.
$5 (11th donation), J. S. .
$5 (11th donation), Web Design Company
$5 (10th donation), Lumacad Coupon Advertising
$5 (10th donation), Blazej P. aka “bleyzer”
$5 (7th donation), AlephAlpha
$5 (7th donation), Joseph G.
$5 (7th donation), Халилова А.
$5 (6th donation), Goto M.
$5 (5th donation), Scott L.
$5 (5th donation), Russell S.
$5 (5th donation), Pokies Portal
$5 (4th donation), Giuseppino M.
$5 (4th donation), Adjie aka “AJ
$5 (4th donation), rptev
$5 (3rd donation), Jalister
$5 (3rd donation), Tomasz R.
$5 (3rd donation), Daniela K.
$5 (2nd donation), Pawel K.
$5 (2nd donation), Ramon O.
$5 (2nd donation), Sergei K.
$5 (2nd donation), Jerry F.
$5 (2nd donation), Joseph J. G.
$5 (2nd donation), Erik P.
$5 (2nd donation), Stefan N.
$5 (2nd donation), Nenad G.
$5, Sergio M.
$5, Paul B.
$5, Sergio G.
$5, Gregory M.
$5, Almir D. A. B. F.
$5, Paul R.
$5, Stamatis S.
$5, The Art of War by Sun Tzu
$5, Borut B.
$5, Mitchell S.
$5, Angela S.
$5, Manny V.
$5, Silviu P.
$5, Lyudmila N.
$5, Ligrani F.
$5, Drug Rehab Thailand aka “Siam Rehab
$5, Alfredo G.
$5, Mike K.
$5, Peter A. aka “Skwanchi”
$5, Harmen P.
$5, Joseangel S.
$5, Jaime S.
$5, Ruslan A.
$5, Corrie B.
$5, Beverlee H.
$5, Akiva G.
$5, Alexander P.
$5, Kepa M. S.
$5, Christian M.
$4 (9th donation), nordvpn coupon
$4, Alexander Z.
$3.7, Alex H.
$3.6, Allen D.
$3.4, Patricia G.
$3.35, Di_Mok
$3.2, Trina Z.
$3.1, Edward K.
$3.1, Sarie B.
$3 (3rd donation), Lubos S.
$3, Frederik V. D.
$3, Somfalvi J.
$3, Therese N.
$3, Mikko S.
$2.9, Allison C.
$2.8, Marsha E.
$2.8, Joe F.
$2.6, Maureen M.
$2.6, Okneia F.
$2.5, Tonya G.
$2.4 (2nd donation), Tonya G.
$2.4, Jearlin B.
$2.3 (2nd donation), Edward K.
$2.3, Henry H.
$2.3, Pedro P.
$2.2, Joseph Lenzo DOB
$79.87 from 59 smaller donations

If you want to help Linux Mint with a donation, please visit https://www.linuxmint.com/donors.php

Patrons:

Linux Mint is proudly supported by 33 patrons, for a sum of $239 per month.

To become a Linux Mint patron, please visit https://www.patreon.com/linux_mint

Rankings:

  • Distrowatch (popularity ranking): 2249 (2nd)
  • Alexa (website ranking): 4180

Source

MySQL Replication Master Slave Setup

MySQL Replication

MySQL replication allows you to maintain synchronized slave copies of a MySQL server. You can then use a slave to perform backups, and as a recovery option if the master goes offline for any reason. MySQL needs to be installed on both servers.

Install MySQL on both servers:

yum install -y mysql-server mysql-client mysql-devel

Edit /etc/my.cnf on both servers and set a unique numerical server ID (any number is fine, as long as the two are not the same):

server-id = 1

Configure MySQL Replication On The Master

On the master, ensure a binary log is configured in /etc/my.cnf:

log_bin = /var/log/mysql/mysql-bin.log

Restart MySQL:

service mysqld restart

Connect to MySQL on the master:

mysql -u root -p

Grant replication privileges to the slave user:

GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'password';

Load the new privileges:

FLUSH PRIVILEGES;

Lock the MySQL master so no new updates can be written while you are creating the slave:

FLUSH TABLES WITH READ LOCK;

Get the current master status:

SHOW MASTER STATUS;

This will return a similar result to this:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

This is the position the slave will start replicating from; save this information for later. You will need to keep the mysql client open on the master. If you close it, the read lock will be released, which will cause replication issues when you try to sync the slave.
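If you are scripting the setup, the two values can be captured into shell variables instead of copying them by hand. The sketch below parses sample \G-style output inlined as a string; on a real master you would pipe from `mysql -u root -p -e 'SHOW MASTER STATUS\G'` instead:

```shell
# Parse binlog coordinates out of SHOW MASTER STATUS\G output.
# Sample output is inlined here for illustration; on the real master:
#   status=$(mysql -u root -p -e 'SHOW MASTER STATUS\G')
status='            File: mysql-bin.000001
        Position: 107'

MASTER_LOG_FILE=$(echo "$status" | awk '/File:/ {print $2}')
MASTER_LOG_POS=$(echo "$status" | awk '/Position:/ {print $2}')
echo "file=$MASTER_LOG_FILE pos=$MASTER_LOG_POS"
```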

Open a new SSH session and dump the databases:

mysqldump -u root -p --all-databases > all.sql

If it is a particularly large MySQL server, you can rsync all of /var/lib/mysql instead.

Once the copy has completed, go ahead and type the following on the MySQL master:

UNLOCK TABLES;

Go ahead and quit the mysql client on the master.

Configure MySQL Replication On The Slave

Import the databases on the slave

mysql -u root -p < all.sql

You should also set server-id in /etc/my.cnf on the slave (to a value different from the master's) and restart MySQL.

Once it has been restarted and the databases have been imported, you can set up replication with the following command in the mysql client:

CHANGE MASTER TO MASTER_HOST='IP ADDRESS OF MASTER', MASTER_USER='slave', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;

Change MASTER_LOG_FILE and MASTER_LOG_POS to the values you got earlier from the master. Once you have entered the above command go ahead and start the slave:

START SLAVE;
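The CHANGE MASTER TO statement shown earlier can also be generated from shell variables, so the coordinates captured on the master are not retyped by hand. The host address below is a placeholder example, not a value from this article:

```shell
# Assumed example values; substitute your own master's address and the
# coordinates you recorded from SHOW MASTER STATUS.
MASTER_HOST='192.0.2.10'
MASTER_LOG_FILE='mysql-bin.000001'
MASTER_LOG_POS=107

# Build the statement; it could then be piped into `mysql -u root -p`.
STMT=$(printf "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='slave', MASTER_PASSWORD='password', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;" \
  "$MASTER_HOST" "$MASTER_LOG_FILE" "$MASTER_LOG_POS")
echo "$STMT"
```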

To check the current slave status:

SHOW SLAVE STATUS;

This is a basic master-slave MySQL replication configuration.

Apr 29, 2017 | LinuxAdmin.io

Source

Amazon ECS-CLI Supports Private Registry Authentication

You can now use the Amazon Elastic Container Service (ECS) Command Line Interface (Amazon ECS-CLI) to create AWS secrets for your private registry credentials.

Previously, in order to use the ECS-CLI to run tasks that used images from a private registry, you had to create the AWS Secrets for your registry credentials yourself first.

Now you can provide the ECS-CLI with an input file that includes the registry names and associated credentials, and the ECS-CLI will create the AWS Secrets as well as an IAM role for you that can be used by ECS to access the secrets.

To learn more about how ECS-CLI supports creating AWS secrets for private registry credentials, read our documentation. To see where ECS is available, visit our region table.

Source

Apache Kafka using Keys for Partition

Apache Kafka is a data streaming platform responsible for streaming data from a number of sources to a lot of targets. The sources are also called producers. The data produced is needed by a completely different group called consumers for various purposes. Kafka is the layer that sits between the producers and consumers and aggregates the data into a usable pipeline. Kafka itself is also a distributed platform, so the Kafka layer is composed of various servers each running a Kafka instance; these servers or nodes are hence known as Kafka Brokers.

That overview is a bit abstract, so let's ground it in a real-world scenario: imagine you need to monitor several web servers, each running its own website, with new logs constantly being generated every second of the day. On top of that, there are a number of email servers that you need to monitor as well.

You may need to store that data for record keeping and billing purposes, which is a batch job that doesn't require immediate attention. You might also want to run analytics on the data to make decisions in real time, which requires accurate and immediate input of data. Suddenly you find yourself in need of streamlining the data in a sensible way for all the various needs. Kafka acts as that layer of abstraction to which multiple sources can publish different streams of data and a given consumer can subscribe to the streams it finds relevant. Kafka will make sure that the data is well-ordered. It is the internals of Kafka that we need to understand before we get to the topic of partitioning and keys.

Kafka Topics are like tables of a database. Each topic consists of data from a particular source of a particular type. For example, your cluster's health can be a topic consisting of CPU and memory utilization information. Similarly, incoming traffic across the cluster can be another topic.

Kafka is designed to be horizontally scalable. That is to say, a single instance of Kafka consists of multiple Kafka brokers running across multiple nodes, each of which can handle streams of data in parallel with the others. Even if a few of the nodes fail, your data pipeline can continue to function. A particular topic can then be split into a number of partitions. This partitioning is one of the crucial factors behind the horizontal scalability of Kafka.

Multiple producers, data sources for a given topic, can write to that topic simultaneously because each writes to a different partition, at any given point. Now, usually data is assigned to a partition randomly, unless we provide it with a key.

Partitioning and Ordering

Just to recap, producers are writing data to a given topic. That topic is actually split into multiple partitions, and each partition lives independently of the others, even for a given topic. This can lead to a lot of confusion when the ordering of data matters. Maybe you need your data in chronological order, but having multiple partitions for your data stream doesn't guarantee perfect ordering.

You can use only a single partition per topic, but that defeats the whole purpose of Kafka’s distributed architecture. So we need some other solution.

Keys for Partitions

As we mentioned before, data from a producer is sent to partitions randomly. Messages are the actual chunks of data. What producers can do besides just sending messages is attach a key that goes along with each one.

All the messages that come with a specific key will go to the same partition. So, for example, a user's activity can be tracked chronologically if that user's data is tagged with a key, so it always ends up in one partition. Let's call this partition p0 and the user u0.

Partition p0 will always pick up the u0-related messages because that key ties them together. But that doesn't mean p0 is tied up with u0 alone. It can also take up messages from u1 and u2 if it has the capacity to do so. Similarly, other partitions can consume data from other users.

The point is that a given user's data isn't spread across different partitions, which ensures chronological ordering for that user. However, the overall topic of user data can still leverage the distributed architecture of Apache Kafka.
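The idea can be sketched with a toy partitioner. Kafka's default partitioner actually uses murmur2 hashing of the key bytes; the cksum-based hash below is only a stand-in to show the property that matters: a fixed key always maps to the same partition number.

```shell
# Toy key-based partitioner: same key -> same partition.
# (Kafka's real default partitioner uses murmur2, not cksum.)
num_partitions=3

partition_for() {
  key="$1"
  hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  echo $(( hash % num_partitions ))
}

p_a=$(partition_for u0)
p_b=$(partition_for u0)   # the same key lands on the same partition
p_c=$(partition_for u1)   # a different key may land elsewhere
echo "u0 -> $p_a, u0 -> $p_b, u1 -> $p_c"
```

With the stock console tools, keyed messages can be sent by enabling the parse.key and key.separator properties on kafka-console-producer.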

Conclusion

While distributed systems like Kafka solve some older problems, like lack of scalability or having a single point of failure, they come with a set of problems that are unique to their own design. Anticipating these problems is an essential job of any system architect. Not only that, sometimes you really have to do a cost-benefit analysis to determine whether the new problems are a worthy trade-off for getting rid of the older ones. Ordering and synchronization are just the tip of the iceberg.

Hopefully, articles like these and the official documentation can help you along the way.

Source

How to Install Google Chrome Web Browser on CentOS 7

Google Chrome is the most widely used web browser in the world. It is a fast, easy-to-use, and secure browser built for the modern web.

Chrome is not an open source browser and it is not included in the CentOS repositories. It is based on Chromium, an open-source browser which is available in the EPEL repositories.

This tutorial explains how to install the Google Chrome web browser on CentOS 7. The same instructions apply to any RHEL-based distribution, including Fedora and Scientific Linux.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Installing Google Chrome on CentOS

Follow the steps listed below to install Google Chrome on your CentOS system:

  1. Start by opening your terminal and download the latest Google Chrome .rpm package with the following wget command:

    wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm

  2. Once the file is downloaded, install Google Chrome on your CentOS 7 system by typing:

    sudo yum localinstall google-chrome-stable_current_x86_64.rpm

    The command above will prompt you to enter your user password and then it will install Chrome and all other required packages.

Starting Google Chrome

Now that you have Google Chrome installed on your CentOS system you can start it either from the command line by typing google-chrome & or by clicking on the Google Chrome icon (Applications -> Internet -> Google Chrome):

When you start Google Chrome for the first time, a window like the following will appear, asking if you want to make Google Chrome your default browser and to send usage statistics and crash reports to Google:

Select according to your preference, and click OK to proceed.

Google Chrome will open and you’ll see the default Chrome welcome page.

At this point, you have Chrome installed on your CentOS machine. You can sign-in to Chrome with your Google Account to sync your bookmarks, history, passwords and other settings on all your devices.

Updating Google Chrome

During the installation process the official Google repository will be added to your system. You can use the cat command to verify the file contents:

cat /etc/yum.repos.d/google-chrome.repo

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub

This ensures that your Google Chrome installation will be updated automatically when a new version is released, through your desktop's standard software update tool.
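A quick way to confirm that the repo enforces package signature checking is to look for gpgcheck=1. The sketch below embeds the file contents shown above as a string for illustration; on a real system you would read /etc/yum.repos.d/google-chrome.repo directly:

```shell
# Repo contents as shown above, inlined here; on a real system:
#   repo=$(cat /etc/yum.repos.d/google-chrome.repo)
repo='[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub'

if echo "$repo" | grep -q '^gpgcheck=1$'; then
  echo "GPG signature checking is enabled"
fi
```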

Conclusion

In this tutorial we’ve shown you how to install Google Chrome on your CentOS 7 desktop machine. If you’ve previously used a different browser, like Firefox or Opera, you can import your bookmarks and settings into Chrome.

Source

Secure net appliance offers optional SFP

Lanner’s “NCA-1515” secure network appliance is equipped with an Atom C3000 SoC, 2x mini-PCIe, M.2 with Nano-SIM, and up to 8x GbE ports with one-pair bypass and optional SFP.

Lanner has launched a security-oriented NCA-1515 network appliance with a variety of networking and storage options. The 231 x 200 x 44mm desktop appliance features an Intel Atom C3000 “Denverton” SoC and is designed for vCPE/uCPE and edge security applications.

Other desktop network appliances based on the Atom C3000 include Nexcom’s vDNA 1160, Advantech’s FWA-1012VC, Axiomtek’s NA362, and Aaeon’s FWS-7360 and FWS-2360.

NCA-1515, front and back

Unlike most of these competitors, the NCA-1515 lacks support for the higher-end 12- and 16-core Atom C3000 models, limiting you to the octa-core C3708, the quad-core C3508, and the dual-core C3308, with clock rates ranging between 1.5GHz and 2.0GHz. No OS support is listed, but Linux is almost certainly supported.

Like most of its rivals, the NCA-1515 supports Intel AES-NI and Intel QuickAssist Technology (Intel QAT), featuring accelerated symmetric encryption and authentication, asymmetric encryption, digital signatures, RSA, DH, ECC, and lossless data compression. Paired with the Atom C3000, QuickAssist “greatly boosts network responsiveness and security by distributing processing power to more critical applications and by offloading computationally intensive compression and encryption/decryption tasks,” says Lanner. The system also provides a secure boot mechanism, support for TPM 2.0, and a Kensington lock for physical device security.

Unlike the vDNA 1160, NA362, and FWS-7360, there are no 10GbE ports. All four models have a bank of 4x MAC- and copper-based Gigabit Ethernet ports. Two of these models feature one-pair bypass. One adds two more copper GbE and two more optical SFP GbE ports (with dedicated LEDs) via an Intel i350 server adapter PCIe board. The other adds only the pair of SFP ports.

The three configurations are:

  • 6x GbE RJ45 with 1 pair Gen3 bypass; 2x GbE SFP
  • 4x GbE RJ45 with 1 pair Gen3 bypass; 2x GbE SFP
  • 4x GbE RJ45 without bypass

You can load up to 32GB of 2400/2133MHz DDR4 RAM with ECC, and there’s a standard allotment of 8GB eMMC. There’s also an option for adding a 2.5-inch SATA bay.

Simplified (left) and full detail views of the NCA-1515

There are plenty of options for wireless. You get dual mini-PCIe slots (PCIe/USB2.0), as well as an M.2 B-key 2242 slot that is linked to dual Nano-SIM slots for adding cellular capability.

The NCA-1515 is further equipped with 2x USB 2.0 ports, an RJ45 console port, and a LOM (Lights Out Management) remote access port with an OPMA (Open Platform Management Architecture) slot. A watchdog, RTC, and LEDs are also onboard.

The system has a 12V DC input jack with power and reset buttons and, depending on the SKU, a 36W or 60W adapter. There’s a passive heatsink and system cooling fan that together support a 0 to 40°C range.

Further information

No pricing or availability information was provided for the NCA-1515. More information may be found on Lanner’s NCA-1515 product page.

Source
