Revisiting the Unix philosophy in 2018

In 1984, Rob Pike and Brian W. Kernighan published an article called “Program Design in the Unix Environment” in the AT&T Bell Laboratories Technical Journal, in which they argued for the Unix philosophy, using BSD’s cat -v implementation as a cautionary example. In a nutshell, that philosophy is: build small, focused programs, in whatever language, that do one thing and do it well, communicate via stdin/stdout, and are connected through pipes.
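
To make that concrete, here is the philosophy in action: a handful of small, single-purpose tools chained into a quick word-frequency counter (essay.txt is a placeholder for any text file):

$ tr -cs 'A-Za-z' '\n' < essay.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -5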

Sound familiar?

Yeah, I thought so. That’s pretty much the definition of microservices offered by James Lewis and Martin Fowler:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it’s the combination of such independently working units that reveals their true benefit and, therefore, their power.

*nix vs. microservices

The following table compares programs (such as cat or lsof) in a *nix environment with services in a microservices environment.

                                   *nix                                        Microservices
Unit of execution                  program using stdin/stdout                  service with HTTP or gRPC API
Data flow                          pipes                                       ?
Configuration & parameterization   command-line arguments, environment         JSON/YAML docs
                                   variables, config files
Discovery                          package manager, man, make                  DNS, environment variables, OpenAPI

Let’s explore each line in slightly greater detail.

Unit of execution

The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you’ll find stateless examples (essentially purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.
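
As a minimal sketch of the two kinds of units: the first line below is a complete *nix filter; the second calls a hypothetical HTTP counterpart (upper.example.com is a made-up endpoint, not a real service):

$ echo "hello" | tr 'a-z' 'A-Z'                                              # *nix: stdin in, stdout out
$ echo "hello" | curl -s --data-binary @- https://upper.example.com/v1/upper # microservice: HTTP in, HTTP out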

Data flow

Traditionally, *nix programs communicate via pipes. In other words, thanks to Doug McIlroy, you don’t need to create temporary files to pass data around, and processes can exchange virtually endless streams of data. To my knowledge, nothing comparable to a pipe has been standardized for microservices, apart from my little Apache Kafka-based experiment from 2017.
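
The difference is easy to see side by side; app.log stands in for any large log file:

$ grep ERROR app.log > /tmp/errors.txt     # without pipes: intermediate results land on disk
$ sort /tmp/errors.txt | uniq -c
$ grep ERROR app.log | sort | uniq -c      # with a pipe: the same work as one stream, no temporary files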

Configuration and parameterization

How do you configure a program or service—either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions, Nomad job specifications, or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
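
As a quick illustration of the three *nix options and of the sed -i escape hatch mentioned above (myprog, deployment.yaml, and the {{IMAGE_TAG}} placeholder are all hypothetical):

$ myprog --log-level=debug                           # command-line argument
$ LOG_LEVEL=debug myprog                             # environment variable
$ echo "log_level = debug" >> ~/.myprogrc            # config file
$ sed -i 's/{{IMAGE_TAG}}/v1.4.2/g' deployment.yaml  # crude templating of a microservice manifest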

Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there’s a bit more automation in finding a service. In addition to bespoke approaches like Airbnb’s SmartStack or Netflix’s Eureka, there usually are environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style.
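
A rough sketch of both discovery styles; the service name and the Kubernetes-style DNS and environment-variable conventions below are assumptions about one particular setup, not a universal standard:

$ apropos compress && man gzip                              # *nix: what exists, and how do I use it?
$ dig +short my-service.default.svc.cluster.local           # microservices: ask DNS...
$ echo "$MY_SERVICE_SERVICE_HOST:$MY_SERVICE_SERVICE_PORT"  # ...or the injected environment variables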

Pros and cons

Both *nix programs and microservices come with a number of challenges and opportunities.

Composability

It’s hard to design something that has a clear, sharp focus and can also play well with others. It’s even harder to get it right across different versions and to introduce the appropriate error-handling capabilities. In microservices, this could mean retry logic and timeouts; perhaps it’s a better option to outsource these features to a service mesh. It’s hard, but if you get it right, the reusability can be enormous.
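
Done by hand on the calling side, that error handling can be as blunt as client-side timeouts and retries around a single HTTP call (the URL is a placeholder; a service mesh would take this over in practice):

$ curl -fsS --max-time 2 --retry 3 --retry-delay 1 http://orders.internal/api/v1/orders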

Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it’s rather straightforward to find the culprit when things go south. But in a pipeline such as yes | tr '\n' x | head -c 450M | grep n, or in a request path in a microservices setup that involves, say, 20 services, how do you even begin to figure out which one is behaving badly? Luckily, we have standards, notably OpenCensus and OpenTracing. Observability still might be the single biggest blocker if you are looking to move to microservices.
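
In its crudest form, distributed tracing just means tagging a request with an ID and following that ID through the logs; real deployments rely on OpenCensus- or OpenTracing-instrumented services instead, and the URL and log path here are placeholders:

$ TRACE_ID=$(uuidgen)
$ curl -s -H "X-Request-ID: $TRACE_ID" http://frontend.internal/checkout
$ grep "$TRACE_ID" /var/log/frontend/access.log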

Global state

While it may not be such a big issue for *nix programs, in microservices global state remains a topic of discussion: namely, how do you make sure the local (persistent) state of each service is managed effectively, and how do you keep the global state consistent with as little effort as possible?

Wrapping up

In the end, the question remains: are you using the right tool for a given task? That is, in the same way a single *nix program implementing a range of functions might be the better choice than a collection of specialized tools for certain use cases or phases, a monolith might be the best option for your organization or workload. Regardless, I hope this article helps you see the many strong parallels between the Unix philosophy and microservices; maybe we can learn something from the former to benefit the latter.

Source

Linux Apps For MediaTek Chromebooks Move A Little Closer

November 7, 2018

If you are the proud owner of a MediaTek-powered Chromebook such as the Acer Chromebook R13 or Lenovo Flex 11, some new features are headed your way.

Spotted in the Canary channel in mid-October, the Crostini Project is now live in the Developer channel for Chromebooks with the ARM-based MediaTek processor. This brings native Linux app functionality to Chromebooks with the MT8173C chipset. Although the number of devices is small, MediaTek Chromebooks are relatively inexpensive and versatile machines.

Here’s the list of Chromebooks with the MediaTek processor.

  • Lenovo 300e Chromebook
  • Lenovo N23 Yoga Chromebook
  • Acer Chromebook R13
  • Poin2 Chromebook 11C
  • Lenovo Chromebook C330
  • Poin2 Chromebook 14
  • Lenovo Chromebook S330

I have the Lenovo 300e at my desk, and it looks to be handling Linux apps like a champ thus far. I know that a low-powered ARM processor isn’t going to draw developers in need of serious horsepower, but for the average user, these devices are great. On top of that, you can pick up most of the Chromebooks on this list for $300 or less. As a second device to pack around when you’re out of the office or relaxing on the couch, they’re tough to beat.

If you’re interested in checking out Linux apps on your MediaTek Chromebook, head to the settings menu and click About Chrome OS > Detailed build information > Change Channel. Keep in mind, the Developer channel can frequently be unstable, and moving back to Beta or Stable will powerwash your device and delete locally saved data. Make sure you back up anything you don’t wish to lose. Once you’re there, head back to settings and you should see a Linux apps menu. Turn it on and wait for the terminal to install.

If you’re new to Linux apps, you can check out how to install the Gnome Software center here and start exploring new apps for your Chromebook.
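
Once the Crostini terminal is up, installing the GNOME Software center from inside the Linux container typically comes down to two commands:

$ sudo apt update
$ sudo apt install -y gnome-software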

Source

Download Mozilla Thunderbird Linux 60.3.0

Sending and receiving emails is like breathing these days, and you will need a reliable and extremely stable application to do it right. Mozilla Thunderbird is one of those rare applications that provides users with a feature-rich, easy-to-use, and extendable email client. Besides being an email client, the software is also a very good RSS news reader, as well as a newsgroup and chat client. It is supported and installed by default in many Linux operating systems.

Features at a glance

Among some of its major highlights, we can mention adaptive junk mail controls, saved search folders, global inbox support, message grouping, privacy protection, and comprehensive mail migration from other email clients.

Mozilla Thunderbird is designed to be very comprehensive. It helps users communicate better in an office setting, allowing them to send and receive emails, chat with their colleagues, and stay updated with the latest news.

Few know that the application provides built-in web browser functionality, using a tabbed user interface and based on its bigger brother, the powerful Mozilla Firefox web browser. Another interesting feature is the ability to add extensions, popularly known as add-ons, which extend the default functionality of the application.

Supported operating systems

Being developed by Mozilla, the software supports multiple operating systems, including Linux, Microsoft Windows, and Mac OS X, on both 64-bit and 32-bit hardware platforms.

Many popular Linux distributions use Mozilla Thunderbird as the default email client application, integrated into a wide range of open source desktop environments, including GNOME, Xfce, LXDE, Openbox, Enlightenment, and KDE.

Bottom line

Using the Mozilla applications in a Linux environment is the best choice one can make. They are without a doubt among the most popular open source email, news reader, newsgroup, chat, and web browsing apps.

Source

GraphQL Gets Its Own Foundation | Linux.com

Addressing the rapidly growing user base around GraphQL, The Linux Foundation has launched the GraphQL Foundation to build a vendor-neutral community around the query language for APIs (application programming interfaces).

“Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL, in a statement.

“GraphQL has redefined how developers work with APIs and client-server interactions,” said Chris Aniszczyk, Linux Foundation vice president of developer relations…

Read more at The New Stack

Source

Install Docker on Raspberry Pi

Docker is a containerization system for Linux. It is used to run lightweight Linux containers on top of another Linux host operating system (a.k.a. the Docker host). If you’re trying to learn Docker on a real computer, then a Raspberry Pi is a very cost-effective solution. As Docker containers are lightweight, you can easily fit 5-10 or more of them on a Raspberry Pi host. I recommend you buy a Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ if you want to set up Docker on it, as these models of Raspberry Pi have 1 GB of memory (RAM). The more memory you have, the better; sadly, no Raspberry Pi released so far has more than 1 GB of memory.

In this article, I will show you how to install Docker on the Raspberry Pi 3 Model B. I will be using the Ubuntu Core operating system on my Raspberry Pi 3 Model B for the demonstration.

You need:

  • A Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ single-board computer.
  • A microSD card with at least 16 GB of storage for installing Ubuntu Core.
  • An Ethernet cable for the internet connection. You can also use the built-in Wi-Fi, but I prefer a wired connection as I think it’s more reliable.
  • An HDMI cable.
  • A monitor with an HDMI port.
  • A USB keyboard for configuring Ubuntu Core for the first time.
  • A power adapter for the Raspberry Pi.

Install Ubuntu Core on Raspberry Pi 3:

I showed you how to install and configure Ubuntu Core on Raspberry Pi 2 and Raspberry Pi 3 in another Raspberry Pi article I wrote on LinuxHint. You can check it at (Link to the Install Ubuntu on Raspberry Pi article)

Powering on Raspberry Pi 3:

Once you have everything set up, connect all the required devices and connectors to your Raspberry Pi and turn it on.

Connecting to Raspberry Pi 3 via SSH:

Once you have Ubuntu Core OS configured, you should be able to connect to your Raspberry Pi 3 via SSH. The required information to connect to your Raspberry Pi via SSH should be displayed on the Monitor connected to your Raspberry Pi as you can see in the marked section of the screenshot below.

Now, from any computer that has an SSH key added to your Ubuntu One account, run the following command to connect to the Raspberry Pi via SSH:

$ ssh dev.shovon8@192.168.2.15

NOTE: Replace the username and the IP address of the command with yours.

You may see an error while connecting to your Raspberry Pi via SSH; in that case, just run the following command:

$ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.2.15

Now, you should be able to connect to your Raspberry Pi via SSH again. If it’s the first time you’re connecting to your Raspberry Pi via SSH, you will be asked to confirm the host’s authenticity. Just type in yes and then press <Enter>.

You should be connected.

Installing Docker on Raspberry Pi 3:

On Ubuntu Core, you can only install snap packages. Luckily, a Docker snap package is available in the official snap repository, so you won’t have any trouble installing Docker on Raspberry Pi 3. To install Docker, run the following command:

$ sudo snap install docker

As you can see, Docker is being installed. It will take a while to complete.

At this point Docker is installed. As you can see, the version of Docker is 18.06.1. It is Docker Community Edition.
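
If you want to confirm the reported version from the terminal yourself, you can run:

$ sudo docker version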

Now, run the following command to connect Docker to the system:

$ sudo snap connect docker:home

Using Docker on Raspberry Pi 3:

In this section, I will show you how to run Docker containers on Raspberry Pi 3. Let’s get started. You can search for Docker images with the following command:

$ sudo docker search KEYWORD

For example, to search for Ubuntu docker images, run the following command:

$ sudo docker search ubuntu

As you can see, the search result is displayed. You can download and use any Docker image from here. The first Docker image in the search result is ubuntu. Let’s download and install it.

To download (pull, in Docker terms) the ubuntu image, run the following command:

$ sudo docker pull ubuntu

As you can see, the Docker ubuntu image is being pulled.

The Docker ubuntu image is pulled.

You can list all the Docker images that you’ve pulled with the following command:

$ sudo docker images

Now, you can create a Docker container using the ubuntu image with the following command:

$ sudo docker run -it ubuntu

As you can see, a Docker container is created and you’re logged into the shell of the new container.

Now, you can run any command you want inside the container.
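
For example, a couple of harmless commands to try inside the Ubuntu container (the prompt reflects that you are root inside the container):

root@0f097e568547:/# cat /etc/os-release
root@0f097e568547:/# uname -m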

To exit out of the shell of the container, run the following command:

$ exit

You can list all the containers you’ve created with the following command:

$ sudo docker ps -a

As you can see, the container I’ve created earlier has the Container ID 0f097e568547. The container is not running anymore.

You can start the container 0f097e568547 again, with the following command:

$ sudo docker start 0f097e568547

As you can see, the container 0f097e568547 is running again.

To log in to the shell of the container, run the following command:

$ sudo docker attach 0f097e568547

As you can see, I am logged into the shell of the container 0f097e568547 again.

You can check how much memory, CPU, disk I/O, and network I/O the running containers are using with the following command:

$ sudo docker stats

As you can see, I have two containers running, and their IDs, names, CPU usage, memory usage, network usage, disk usage, PIDs, etc. are displayed in a nicely formatted way.

I am running Docker and 2 containers on my Raspberry Pi 3 and I still have about 786 MB of memory available/free. Docker on Raspberry Pi 3 is amazing.

So, that’s how you install and use Docker on Raspberry Pi 3. Thanks for reading this article.

Source

Virtualizing the Clock – Linux Journal

Dmitry Safonov wanted to implement a namespace for time information. The
twisted and bizarre thing about virtual machines is that they get more
virtual all the time. There’s always some new element of the host system
that can be given its own namespace and enter the realm of the virtual
machine. But as that process rolls forward, virtual systems have to share
aspects of themselves with other virtual systems and the host system
itself—for example, the date and time.

Dmitry’s idea is that users should be able to set the day and time on their
virtual systems, without worrying about other systems being given the same
day and time. This is actually useful, beyond the desire to live in the past
or future. Being able to set the time in a container is apparently one of
the crucial elements of being able to migrate containers from one physical
host to another, as Dmitry pointed out in his post.

As he put it:

The kernel provides access to several clocks:
CLOCK_REALTIME,
CLOCK_MONOTONIC, CLOCK_BOOTTIME. Last two clocks are monotonous, but the
start points for them are not defined and are different for each running
system. When a container is migrated from one node to another, all clocks
have to be restored into consistent states; in other words, they have to
continue running from the same points where they have been dumped.

Dmitry’s patch wasn’t feature-complete. There were various questions still
to consider. For example, how should a virtual machine interpret the time
changing on the host hardware? Should the virtual time change by the same
offset? Or continue unchanged? Should file creation and modification times
reflect the virtual machine’s time or the host machine’s time?

Eric W. Biederman supported this project overall and liked the code in the
patch, but he did feel that the patch could do more. He thought it was a little
too lightweight. He wanted users to be able to set up new time namespaces at
the drop of a hat, so they could test things like leap seconds before
they actually occurred and see how their own projects’ code worked under
those various conditions.

To do that, he felt there should be a whole “struct timekeeper” data
structure for each namespace. Then pointers to those structures could be
passed around, and the times of virtual machines would be just as
manipulable and useful as times on the host system.

In terms of timestamps for filesystems, however, Eric felt that it might
be best to limit the feature set a little bit. If users could create files
with timestamps in the past, it could introduce some nasty security
problems. He felt it would be sufficient simply to “do what distributed
filesystems do when dealing with hosts with different clocks”.

The two went back and forth on the technical implementation details. At one
point, Eric remarked, in defense of his preference:

My experience with
namespaces is that if we don’t get the advanced features working there is
little to no interest from the core developers of the code, and the
namespaces don’t solve additional problems. Which makes the namespace a
hard sell. Especially when it does not solve problems the developers of the
subsystem have.

At one point, Thomas Gleixner came into the conversation to remind Eric that
the time code needed to stay fast. Virtualization was good, he said, but
“timekeeping_update() is already heavy and walking through a gazillion of
namespaces will just make it horrible.”

He reminded Eric and Dmitry that:

It’s not only timekeeping, i.e. reading time, this is also affecting all
timers which are armed from a namespace.

That gets really ugly because when you do settimeofday() or adjtimex() for a
particular namespace, then you have to search for all armed timers of that
namespace and adjust them.

The original posix timer code had the same issue because it mapped the clock
realtime timers to the timer wheel so any setting of the clock caused a full
walk of all armed timers, disarming, adjusting and requeing them. That’s
horrible not only performance wise, it’s also a locking nightmare of all
sorts.

Add time skew via NTP/PTP into the picture and you might have to adjust
timers as well, because you need to guarantee that they are not expiring
early.

So, there clearly are many nuances to consider. The discussion ended there,
but this is a good example of the trouble with extending Linux to create
virtual machines. It’s almost never the case that a whole feature can be
fully virtualized and isolated from the host system. Security concerns,
speed concerns, and even code complexity and maintainability come into the
picture. Even really elegant solutions can be shot down by, for example, the
possibility of hostile users creating files with unnaturally old timestamps.

Source

Gumstix enhances Geppetto board design service with new Board Builder UI

Nov 7, 2018

Gumstix has expanded its Linux-oriented Geppetto online embedded board development platform with a free “Board Builder” service that offers a checklist interface for selecting modules, ports, and more.

Gumstix has added a free Board Builder service to its Geppetto Design-to-Order (D2O) custom board design service. The Board Builder improvements make the drag-and-drop Geppetto interface even easier to use, enabling customization of ports, layout and other features.

With Board Builder, you can select items from a checklist, including computer-on-modules, memory, network, sensors, audio, USB, and other features. You can then select a custom size, and you’re presented with 2D and 3D views of board diagrams that you can further manipulate.

Geppetto Board Builder design process for a Raspberry Pi CM3 based IoT design

Board Builder will prompt you with suggestions for power and other features. These tips are based on your existing design, as well as Gumstix’s deep knowledge base about embedded Linux boards.

We quickly whipped up a little Raspberry Pi Compute Module 3 based carrier board (above), which admittedly needs a lot of work. Even if you’re not a serious board developer, it’s a painless, and rather addictive way to do hardware prototyping — sort of a Candy Crush for wannabe hardware geeks.

Serious developers, meanwhile, can go on to take full advantage of the Geppetto service. Once the board is created, “free Automated Board Support Package (AutoBSP), technical documentation (AutoDoc) and 3D previews can be instantly downloaded to anyone who designs a hardware device in the Geppetto online D2O,” says Gumstix.

You can then use Geppetto’s fast small-run manufacturing order service to quickly manufacture small runs of the board within 15 days. The initial $1,999 manufacturing price is reduced for higher quantity jobs and repeat board spins.

Since launching its free, web-based Geppetto service several years ago, Gumstix has designed most of its own boards with it. Anyone can use Geppetto to modify Gumstix’s carrier board designs or start from scratch and build a custom board. The Geppetto service supports a growing number of Linux- and Android-driven modules, ranging from the company’s own DuoVero and Overo modules to the Nvidia Jetson TX2 that drives the recent Gumstix Aerocore 2 for Nvidia Jetson.

Further information

The Board Builder interface is available now on the free Geppetto D2O service. More information may be found on the Gumstix Geppetto Board Builder page. You can start right away with Board Builder here.

Source

Overcoming Your Terror of Arch Linux | Software

A recent episode of a Linux news podcast I keep up with featured an interview with a journalist who had written a piece for a non-Linux audience about giving Linux a try. It was surprisingly widely read. The writer’s experience with some of the more popular desktop distributions had been overwhelmingly positive, and he said as much in his piece and during the subsequent podcast interview.

However, when the show’s host asked whether he had tried Arch Linux — partly to gauge the depth of his experimentation and partly as a joke — the journalist immediately and unequivocally dismissed the idea, as if it were obviously preposterous.

Although that reaction came from an enthusiastic Linux novice, it is one that is not uncommon even among seasoned Linux users. Hearing it resurface in the podcast got me contemplating why that is — as I am someone who is comfortable with and deeply respects Arch.

What Are You Afraid Of?

1. “It’s hard to install.”

The most common issue skeptics raise, by far, is that the installation process is challenging and very much hands-on. Compared to modern day installers and wizards, this is undoubtedly true. In contrast to most mainstream Linux distributions (and certainly to proprietary commercial operating systems), installing Arch is a completely command line-driven process.

Parts of the operating system that users are accustomed to getting prefabricated, like the complete graphical user interface that makes up the desktop, have to be assembled from scratch out of the likes of the X Window server, the desired desktop environment, and the display manager (i.e. the startup login screen).
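
As a hedged sketch of what that assembly can look like on a freshly installed system, here is one of many possible combinations (KDE Plasma with the SDDM display manager):

$ sudo pacman -S xorg-server             # the X Window server
$ sudo pacman -S plasma konsole dolphin  # a desktop environment plus a couple of basics
$ sudo pacman -S sddm                    # the display manager (login screen)
$ sudo systemctl enable sddm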

Linux did not always have installers, though, and Arch’s installation process is much closer to how it was in the days of yore. Installers are a huge achievement, and a solution to one of the biggest obstacles to getting non-expert general users to explore and join the Linux community, but they are a relative luxury in the history of Linux.

Also, installers can get it wrong, as I found out when trying to make some modest adjustments to the default Ubuntu installation settings. While Arch let me set up a custom system with a sequence of commands, Ubuntu’s installer nominally offered a menu for selecting the same configuration, but simply could not execute it properly under the hood once the installer was set in motion.

2. “The rolling releases are unstable.”

In my experience, Arch’s implementation of the rolling release model has been overwhelmingly stable, so claims to the contrary are largely overblown as far as I am concerned.

When users have stability problems, it’s generally because they’re trying something that either is highly complicated or something for which there is little to no documentation. These precarious use cases are not unique to Arch. Combining too many programs or straying into uncharted territory are more or less equally susceptible to stability issues in Arch as with any other distribution — or any operating system, for that matter.

Just like any software developers, the Arch developers want people to like and have a good experience using their distro, so they take care to get it right. In a way, Arch’s modular approach, with each package optimized and sent out as soon as it’s ready, actually makes the whole operation run more smoothly.

Each sub-team at Arch receives a package from upstream (wherever that might be), makes the minimum number of changes to integrate it with Arch’s conventions, and then pushes it out to the whole Arch user base.

Because every sub-team is doing this and knows every other sub-team is doing the same, they can be sure of exactly what software environment they will be working with and integrating into: the most recent one.

The only times I’ve ever had an update break my system, the Arch mailing list warned me it would, and the Arch forums laid out exactly how to fix it. In other words, by checking the things that responsible users should check, you should be fine.

3. “I don’t want to have to roll back packages.”

Package downgrades are related to, and probably the more feared manifestation of, the above. Again, if you’re not doing anything crazy with your system and the software on it, and you read from Arch’s ample documentation, you probably won’t have to.

As with the risk of instability that comes from complicated setups on any distribution, package downgrades are potentially necessary on distributions besides Arch as well. In fact, whereas most distributions assume you never will have to perform a downgrade and thus don’t design their package management systems to easily (or at least intuitively) do it, Arch makes it easy and thoroughly outlines the steps.
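
As a rough sketch of those steps, pacman can reinstall any version still sitting in the local package cache (the package name and version below are placeholders):

$ ls /var/cache/pacman/pkg/ | grep <package-name>
$ sudo pacman -U /var/cache/pacman/pkg/<package-name>-<older-version>-x86_64.pkg.tar.xz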

4. “It doesn’t have as many packages,” and “I heard the AUR is scary.”

The criticism of Arch’s relatively smaller base of total available packages usually goes hand-in-hand with that of the unofficial repository being a sort of Wild West. As far as the official repositories are concerned, the number is somewhat smaller than in Debian- or Red Hat-based distributions. Fortunately, the Arch User Repository (AUR) usually contains whatever the official repos lack that most any user possibly could hope for.

This is where most naysayers chime in to note that malicious packages have been found in the AUR. This occasionally has been the case, but what most of us don’t always think about is that this also can be said of the Android Play Store, the Apple App Store, and just about every other software manager that you can think of.

Just as with every app store or software center, if users are careful to give a bit of scrutiny to the software they are considering — in AUR’s case by scanning the (very short) files associated with AUR packages and reading forum pages on the more questionable ones — they will generally be fine.
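
Inspecting an AUR package before building it takes only a couple of commands (the package name is a placeholder):

$ git clone https://aur.archlinux.org/<package-name>.git
$ cd <package-name>
$ less PKGBUILD   # the (very short) build recipe worth a quick read
$ makepkg -si     # build and install once you're satisfied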

Others may counter that it’s not the potential hazards of the AUR that are at issue, but that more so than with, say, Debian-based distributions, there is software that falls outside of both the official Arch repos and the AUR. To start with, this is less the case than it once was, given the meteoric rise in the popularity of the Arch-based Manjaro distribution.

Beyond that, most software that isn’t in any of Arch’s repos can be compiled manually. Just as manual installations like Arch’s were once the norm for Linux, compiling from source was once the default way to install software.
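
For a typical autotools-based project, that manual compile is the classic three-step (the tarball name is a placeholder):

$ tar xf <project>.tar.gz && cd <project>
$ ./configure --prefix=/usr/local
$ make
$ sudo make install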

Arch’s Tricks Come With Some Major Treats

With those points in mind, hopefully Arch doesn’t seem so daunting. If that’s not enough to convince you to give it a whirl, here are a few points in Arch’s favor that are worth considering.

To start off, manual installation not only gives you granular control over your system, but also teaches you where everything is, because you put it there. Things like the root directory structure, the initial RAM filesystem, and the bootloader won’t be mysteries you have to blindly accept, because during installation you directly installed and generated all of them (and more) and arranged them in their proper places.

Manual installation also cuts way down on bloat, since you install everything one package at a time — no more accepting whatever the installer dumps onto your fresh system. This is an especially nice advantage considering that, as many Linux distributions become more geared toward mainstream audiences, their programs become more feature-rich, and therefore bulkier.

Depending on how you install it, Arch running the heaviest desktop environment still can be leaner than Ubuntu running the lightest one, and that kind of efficiency is never a bad thing.

Rolling releases are actually one of Arch’s biggest strengths. Arch’s release model gives you the newest features right away, long before distros with traditional synchronized, batch update models.

Most importantly, with Arch, security patches drop immediately. Every time a major Linux vulnerability comes out — there usually isn’t much malware that exploits these vulnerabilities, but there are a lot of vulnerabilities to potentially exploit — Arch is always the first to get a patch out and into the hands of its users, and usually within a day of the vulnerability being announced.
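
Picking up those patches, along with everything else, is a single full-system upgrade:

$ sudo pacman -Syu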

You’ll probably never have to roll back packages, but if you do, you will be armed with the knowledge to rescue your system from some of the most serious problems.

If you can live-boot the Arch installation image (which doubles as a repair image) from a USB drive, mount your installed system from the live session, chroot into it (that is, switch from the root of the live system to treating your installed system as the temporary root), and install a cached previous version of the problem packages, then you know how to solve a good proportion of the most serious problems any system might have.
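
Condensed into commands and run from the live image, that rescue sequence looks roughly like this (the partition, package name, and version are placeholders for your own system):

$ mount /dev/sdXn /mnt        # your installed root partition
$ arch-chroot /mnt            # treat the installed system as the temporary root
$ pacman -U /var/cache/pacman/pkg/<problem-package>-<previous-version>.pkg.tar.xz
$ exit
$ reboot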

That sounds like a lot, but that’s also why Arch Linux has the best documentation of any Linux distribution, period.

Finally, plumbing the AUR for packages will teach you how to review software for security, and compiling source code will give you an appreciation for how software works. Getting in the habit of spotting sketchy behavior in package build and make files will serve you well as a computer user overall.

It also will prod you to reevaluate your relationship with your software. If you make a practice of seriously weighing every installation, you might start being pickier with what you do choose to install.

Once you’ve compiled a package or two, you will start to realize just how unbounded you are in how you use your system. App stores have gotten us used to thinking of computing devices in terms of what their developers will let us do with them, not in terms of what we want to do with them, or what it’s possible to do with them.

It might sound cheesy, but compiling a program really makes you reshape the way you see computers.

Safely Locked Away in a Virtual World of Its Own

If you’re still apprehensive about Arch but don’t want to pass on it, you can install it as a virtual machine to tinker with the installation configurations before you commit to running it on bare hardware.

Software like VirtualBox allows you to allocate a chunk of your hard drive and blocks of memory to running a little computer inside your computer. Since Linux systems in general, and Arch in particular, don’t demand much of your hardware resources, you don’t have to allocate much space to it.

To create a sandbox for constructing your Arch Linux, tell VirtualBox you want a new virtual system and set the following settings (with those not specified here left to default): 2 GB of RAM (though you can get away with 1 GB) and 8 GB of storage.
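
If you prefer the command line, roughly the same blank system can be created with VBoxManage (the VM name and disk file are placeholders; attaching the installer ISO is covered next):

$ VBoxManage createvm --name archbox --ostype ArchLinux_64 --register
$ VBoxManage modifyvm archbox --memory 2048 --cpus 2
$ VBoxManage createmedium disk --filename archbox.vdi --size 8192
$ VBoxManage storagectl archbox --name SATA --add sata
$ VBoxManage storageattach archbox --storagectl SATA --port 0 --device 0 --type hdd --medium archbox.vdi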

You will now have a blank system to choose in VirtualBox. All you have to do now is tell it where to find the Arch installation image — just enter the system-specific settings, go to storage, and set the Arch ISO as storage.

When you boot the virtual machine, it will live-boot this Arch image, at which point your journey begins. Once your installation is the way you want it, go back into the virtual system’s settings, remove the Arch installer ISO, reboot, and see if it comes to life.

There’s a distinct rush you feel when you get your own Arch system to boot for the first time, so revel in it.

Source

A Look At The AMD EPYC Performance On The Amazon EC2 Cloud

Of the announcements from yesterday’s AMD Next Horizon event, one that came as a surprise was the rollout of current-generation EPYC processors to the Amazon Elastic Compute Cloud (EC2). Available so far are the AMD-powered M5a and R5a instance types, which offer Amazon cloud customers more choice and are priced 10% lower than comparable instances. Here are some initial benchmarks of the AMD performance in the Amazon cloud.

Initially, the AMD EPYC instances on EC2 are the M5a “general purpose” and R5a “memory optimized” instance types. For the purposes of this initial benchmarking over the past day, I focused on the general-purpose performance using the m5a.xlarge, m5a.2xlarge, m5a.4xlarge, and m5a.12xlarge sizes. More details on the different AMD EPYC options available can be found via this AWS blog post. Amazon will also be rolling out T3a instances in the near future.

Amazon says these new AMD instances are powered by “custom AMD EPYC processors running at 2.5 GHz.” In testing the M5a instance types, the reported CPU is an AMD EPYC 7571 at 2.5GHz with 32 cores / 64 threads, though depending upon the instance type only a subset of that computing capacity is exposed. The EPYC 7571 isn’t publicly available but appears to be a slightly faster version of the EPYC 7551.

I compared the AMD M5a instance types to the Intel-powered M5 instance types of the same size. These Intel-based instances offer the same vCPU and ECU ratings as well as the same available system memory and other factors, but the EPYC-based instances are about 10% cheaper thanks to the more competitive pricing of AMD’s current server hardware. The Intel M5 instances were using Xeon Platinum 8175M processors.

Via the Phoronix Test Suite, a range of benchmarks was carried out between these instances, looking not only at raw performance but also at performance-per-dollar based on the on-demand cloud instance pricing in the US West (Oregon) region where the testing was carried out.
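
For context, kicking off a single Phoronix Test Suite benchmark on such an instance looks like the following; the test profile here is an arbitrary example, not necessarily one of the profiles used in this comparison:

$ phoronix-test-suite benchmark pts/compress-7zip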

Amazon EC2 isn’t the only cloud service offering EPYC CPUs; SkySilk, among others, does as well. Hopefully in the coming days I’ll have the time to wrap up some multi-cloud benchmark comparisons for performance and value. While benchmarking all of the instances, Ubuntu 18.04 LTS with the Linux 4.15 kernel was utilized. The default Spectre/Meltdown mitigations on each platform were active.

Source
