The top 5 Linux and open-source stories of 2018

Last year was among the best of times for Linux and open source. It was also among the worst. The top five Linux and open-source stories tell it all.

Spectre/Meltdown

First, last January there were a lot of exhausted and angry Linux kernel developers. That’s because a fundamental chip-design mistake forced Linux, along with every other operating system running on Intel hardware, to deal with the major Spectre and Meltdown security problems.

Also: Researchers discover seven new Meltdown and Spectre attacks

Intel’s refusal to let developers work openly with each other led to massive delays in fixing the problems. As Greg Kroah-Hartman, the stable Linux kernel maintainer, explained, “When we get a kernel security bug, it goes to the Linux kernel security team, we drag in the right people, we work with the distributions getting everyone on the same page and push out patches.” Not this time. “Intel siloed SUSE, they siloed Red Hat, they siloed Canonical. They never told Oracle, and they wouldn’t let us talk to each other.”

Linus Torvalds, Linux’s master developer, added that with the “security issues kept under wraps, we couldn’t do our usual open methods. This made fixing the bugs much more painful than it should be.”

Adding insult to injury, Spectre problems persist to this day, and the fixes cause significant slowdowns for both Linux and all other operating systems. We will be stuck with this until a new generation of CPUs fixes the Spectre family of bugs once and for all.
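You can see which Spectre-family mitigations your own kernel is applying: recent kernels expose one sysfs file per known vulnerability. This is a generic check; the exact files present vary by kernel version and CPU.

```shell
# Each file reports whether the CPU is affected and which
# mitigation (if any) the kernel has enabled.
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    grep -r . /sys/devices/system/cpu/vulnerabilities/
else
    echo "kernel too old to report mitigations"
fi
```

On a patched system you will typically see lines like `spectre_v2:Mitigation: ...` rather than `Vulnerable`.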

IBM Buys Red Hat

I didn’t see this coming. IBM made the biggest software company acquisition of all time when it paid $34 billion for Red Hat. This deal wasn’t about Linux. It was about IBM wanting Red Hat’s cloud, container, and Kubernetes expertise.

Also: How the cloud wars forced IBM to buy Red Hat for $34 billion

Will it work? Maybe. IBM is betting the farm on becoming a hybrid-cloud power. On the other hand, had IBM stood pat with its current offerings, it would have only continued its long slow decline.

To make the deal work, in 2019 IBM must double down on its Red Hat wager. That means putting Red Hat executives in charge of the merged company. I’ll feel much better about this deal’s future if IBM CEO Ginni Rometty retires and is replaced by Red Hat CEO Jim Whitehurst.

Torvalds steps back from running Linux and Linux developers adopt a new code of conduct

Even now it’s hard to believe that Linus Torvalds took a break from running Linux. For more than 25 years, Torvalds was the benevolent dictator for life of Linux. The only way most people could see him leaving was if he were hit by a bus.

Also: Linus Torvalds and Linux Code of Conduct: 7 myths debunked

It turns out that what could make him step back was realizing his take-no-crap-from-anyone management style wasn’t working anymore. Torvalds said, “I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.”

Torvalds wasn’t gone for long. When he came back, a new code of conduct for Linux kernel developers came with him. Despite numerous cries of outrage, mostly from people who weren’t Linux programmers, claiming Linux had been taken over by Social Justice Warriors (SJWs), Linux development has continued on as always.

Google incorporates Linux into Chrome OS

If you look closely, you can see that Linux is the foundation operating system for Google’s Chrome OS. This makes Chrome OS, I argue, the most successful Linux desktop to date.

Also: Google’s Chrome OS gets new app muscle with built-in Linux CNET

It was only in 2018, however, that Google made it possible to run native Linux applications simultaneously with Chrome OS. Curiously, this follows on the heels of Microsoft enabling Windows 10 users to run Linux with the Windows Subsystem for Linux (WSL). We may never have a year of Linux on the desktop, but Linux is nevertheless becoming ever more available as a built-in add-on to other desktop operating systems.

Microsoft buys GitHub and open-sources its patent portfolio

Microsoft buying GitHub, the leading Git-based open-source code collaboration site, was surprising. Microsoft open-sourcing its patent portfolio was shocking.

Also: Pretty much no one quit GitHub over the Microsoft acquisition TechRepublic

By joining the Open Invention Network (OIN), an open-source patent consortium, Microsoft essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members.

This — not Torvalds stepping back from kernel development — was the most surprising Linux news of the year. Years ago, I’d said the one thing Microsoft had to do — to convince everyone in open source that it’s truly an open-source supporter — is stop using its patents against Android vendors. Well, that day finally arrived.

Even now, there are many people who think Microsoft is the Evil Empire, which will stab Linux and open-source in the back. They’re wrong. With this move, Microsoft is putting its own multi-billion dollar intellectual property behind Linux. As unbelievable as it may seem, Microsoft has become a leading open-source and Linux company.

Last year was a heck of a year. While I’m sure there will be many new, major developments for Linux and open-source software in 2019, I find it almost impossible to imagine that 2019 will bring even greater surprises… Well, unless we see MS-Linux. While I think that’s possible, it would also be a real shocker.


Source

Surface Go with Linux Review: almost the perfect open source notepad

You have probably had your fill of Surface Go reviews that seem to split the tech world in two. You’ve also most likely seen the brawls between the Surface Go and the iPad Pro, especially those revolving around the rhetoric of real PCs. So why not have yet another Surface Go review? This time, however, we’ll take a rather different spin and highlight one aspect that really does make the Surface Go a “real PC”: being able to install other operating systems like Linux. And in that regard, it is near perfect as an on-the-go Linux digital notepad.

Specs and Design

I won’t bore you with the details you’ve most likely read before. The Surface Go is by no means a powerful machine. On pure performance, it could very well be outranked by last year’s iPad Pros, especially when it comes to battery life. But just to recap, Microsoft’s smallest Surface runs on a “special” Intel Pentium Gold 4415Y. The 10-inch screen still bears Microsoft’s unique 3:2 ratio, this time at 1800×1200 pixels. The battery is rated at 27Wh and charges either via Microsoft’s usual proprietary Surface Connect or, surprise surprise, a lone USB-C port that handles power, data, and video out.

One point of contention with earlier Surface Go reviews was the fact that most of them reviewed the more expensive model with 8 GB of RAM and 128 GB of SSD, which is also what I bought. While that may almost be a necessity when it comes to Windows 10, especially after breaking out of S Mode, it may be a minor consideration if you have Linux in mind right from the very start. Linux is more efficient with both RAM and storage, though the 64 GB eMMC type on the base model could be a bottleneck. If, however, you plan on dual booting Windows and Linux, at least get the third model with 4 GB of RAM and 128 GB of SSD storage.

The Surface Go is definitely a looker for its size and bears the same design as its larger and more professional siblings. The sleek magnesium chassis makes it look pro even at its diminutive size, while the slightly curved edges and lightweight construction make it comfortable to hold with one hand over longer periods. Not too long, though, because it’s still 1.1 lbs of metal and plastic. All the ports, which include a headphone jack, are on the right side, while the opposite edge is left barren to make room for magnetically attaching a Surface Pen. The top has the power and volume rocker buttons along the plastic antenna area, while the bottom has the groove and POGO pins for the Surface Go Type Cover. Both accessories are sold separately, of course.

Living side by side

It’s quite impressive and comforting how far Linux has come in supporting even new devices that have just come onto the market. Perhaps it helps that many of the components Microsoft used in the Surface Go have also been used in Surface Pros, which have already been tested by daring Linux users.

As such, it fortunately didn’t take much to get Linux cohabiting with Windows 10 on the same machine. It might have been easier to simply wipe off Microsoft’s OS, but I still had use for it. On the Windows side, the biggest steps were to disable BitLocker encryption on the C: drive (if it was even enabled) and then shrink the Windows partition to make room for Linux plus 8 GB or so of swap. As mentioned, Linux isn’t much of a memory hog, and non-critical files can be offloaded to a microSD card anyway. Linux distros have also come a long way in making sure their installers work with modern features like UEFI and Secure Boot, so the process was thankfully straightforward and uneventful.

It’s pleasantly surprising how many things worked properly right out of the box. Wi-Fi needed a bit of coaxing, but that is fortunately already documented. Bluetooth was working from day one. Display, touch, and even the Surface Pen’s pressure sensitivity and buttons worked without a hitch. The Type Cover’s touchpad was also properly detected and supported multi-finger gestures. Even power management was off to a good start. The accelerometer and proximity sensors are also detected, though their use mostly depends on your distro and desktop environment of choice. In this case, I used the Ubuntu-based KDE Neon. Long story short, save for a few pieces we’ll get to later, the Surface Go Linux experience is almost painless, as if you were installing on any other modern laptop.

Performance and Battery

Installing Linux on the Surface Go would have been an exercise in futility if it ended up being unusable. Then again, this piece probably wouldn’t have been written in the first place if that were the case. While it’s harder to benchmark Linux performance due to the lack of popular tools and the sheer number of distro and desktop combinations, one can probably make a generalization and rate it as “Great!”.

The display is bright and crisp. It’s a high-DPI screen, though, so you may have to adjust the resolution or zoom to your comfort level. Touch is completely usable and may even be fun, provided you’re using software that supports it. Linux users might have to work around the gaps, but there is no shortage of utilities and tools for that. Onboard, for example, makes for a great configurable virtual keyboard, while Touchegg on Ubuntu gives you some multi-finger touchscreen gestures as well.

Performance, of course, varies depending on the software you use. Again, Linux and its programs are kinder to CPU and memory, but there will be times when even 8 GB of RAM might cause the system to choke for a bit. That’s especially true when you have multiple tabs open in Chrome or multiple hi-res layers in Krita. Compiling in the background while running several other programs could also result in some noticeable lag, but nothing I threw at it caused the Surface Go to grind to a halt. Yes, you can even play games on it, including those found on Steam for Linux. Your concern, however, will be the middling Intel GPU and throttling due to heat.

Battery life is another one of those metrics that is hard to pin down. Microsoft advertises 9 hours, but none of the reviewers reached that much; they considered themselves lucky to reach 6 hours. On Linux, a 7-hour average is normal and might even be on the low end. The Surface Go makes up for its disappointing longevity with its ability to be topped off from a power bank. That said, not just any power bank will do. One that has USB-C Power Delivery and dishes out 30 to 40 watts is probably best. A slim 18W bank would be the bare minimum but, depending on what you’re doing, it could be a slow trickle or even a slow discharge.
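The arithmetic behind that advice is simple: with a 27Wh pack, the rate that matters is the bank’s output minus whatever the machine is drawing. The 10W draw below is just an assumed figure for light use, not a measured one.

```shell
battery_wh=27   # rated capacity of the Surface Go pack
draw_w=10       # assumed draw under light use
for bank_w in 18 30 40; do
    net=$((bank_w - draw_w))
    # Integer math in tenths of an hour: Wh * 10 / net watts.
    t=$((battery_wh * 10 / net))
    echo "${bank_w}W bank: ~$((t / 10)).$((t % 10))h from empty to full"
done
```

An 18W bank nets only 8W of actual charging under that assumed load, which is why it can feel like a slow trickle; push the machine harder and it becomes a slow discharge.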

Almost Perfect

Unsurprisingly, not everything works, or at least not yet. Neither camera is detected, for one, and while that saves you from being ridiculed for taking photos with a large slab, it does rule out video chats and conferences. Audio is also a bit on the soft side, though the mic does work at least. The biggest problem at the moment, however, is that the Surface Go boots directly into Windows, no matter how properly you installed Linux. You have to go through Windows’ Advanced Restart options to get it to boot into GRUB. Or perhaps don’t reboot at all, since suspend works just fine.

So why go through all that to install Linux? It isn’t just a matter of “because you can”, though there are definitely some bragging rights involved. The Surface Go is actually an impressive piece of tech and is probably the lightest, best-looking, and best-performing Linux tablet you’ll be able to get your hands on. Save for a tablet designed to run Linux from the start, of course.

There is no shortage of small-form Linux computers out there, from Planet Computing’s Gemini PDA to the GPD Pocket “palm top” to the quirky stylus-enabled One Mix Yoga. But when it comes to an eye and finger-friendly general purpose Linux tablet that you can do almost anything on, within reason and limitations, the Surface Go seems to have, rather ironically, come closest to being the Linux iPad Pro. Now that is a real computer.

Source

Midori: A Lightweight Open Source Web Browser

Last updated January 4, 2019

Here’s a quick review of Midori, a lightweight, fast, open-source web browser that has returned from the dead.

If you are looking for a lightweight alternative web browser, try Midori.

Midori is an open-source web browser that focuses more on being lightweight than on providing a ton of features.

If you have never heard of Midori, you might think it is a new application, but Midori was first released in 2007.

Because it focused on speed, Midori soon gathered a niche following and became the default browser in lightweight Linux distributions like Bodhi Linux and SliTaz.

Even elementary OS used Midori as its default browser. But Midori’s development stalled around 2016, and its fans started wondering if Midori was dead already. elementary OS dropped it from its latest release, presumably for this reason.

The good news is that Midori is not dead. After almost two years of inactivity, development resumed in the last quarter of 2018, and a few extensions, including an ad-blocker, were added in the later releases.

Features of Midori web browser


Here are some of the main features of the Midori browser:

  • Written in Vala with GTK+3 and WebKit rendering engine.
  • Tabs, windows and session management
  • Speed dial
  • Saves tabs for the next session by default
  • Uses DuckDuckGo as the default search engine. It can be changed to Google or Yahoo.
  • Bookmark management
  • Customizable and extensible interface
  • Extension modules can be written in C and Vala
  • Supports HTML5
  • An extremely limited set of built-in extensions, including an ad-blocker, colorful tabs, etc. No third-party extensions.
  • Form history
  • Private browsing
  • Available for Linux and Windows

Trivia: Midori is a Japanese word that means green. The Midori developer is not Japanese, in case you were guessing along those lines.

Experiencing Midori

Midori web browser in Ubuntu 18.04

I have been using Midori for the past few days. The experience is mostly fine. It supports HTML5 and renders websites quickly. The ad-blocker is okay. The browsing experience is more or less as smooth as you would expect in any standard web browser.

The lack of extensions has always been a weaker point of Midori so I am not going to talk about that.

What I did notice is that it doesn’t support international languages. I couldn’t find a way to add new language support. It could not render Hindi fonts at all, and I am guessing it’s the same with many other non-Latin scripts.

I also had my fair share of troubles with YouTube videos. Some videos would throw a playback error while others would run just fine.

Midori didn’t eat my RAM like Chrome, so that’s a big plus here.

If you want to try out Midori, let’s see how you can get your hands on it.

Install Midori on Linux

Midori is no longer available in the Ubuntu 18.04 repositories. However, newer versions of Midori can be easily installed as a Snap package.

If you are using Ubuntu, you can find Midori (snap version) in the software center and install it from there.

Midori browser is available in Ubuntu Software Center

For other Linux distributions, make sure that you have Snap support enabled and then you can install Midori using the command below:

sudo snap install midori

You always have the option to compile from the source code. You can download the source code of Midori from its website.

If you like Midori and want to help this open source project, please donate to them or buy Midori merchandise from their shop.

Do you use Midori or have you ever tried it? How’s your experience with it? What other web browser do you prefer to use? Please share your views in the comment section below.

Source

Back on the Block » Linux Magazine

Ubuntu Linux gets back to basics with the Ubuntu 18.10 release – an appealing and practical distro that isn’t worried about conquering the world.

Ubuntu is back. The same Ubuntu that I loved back in 2011 before Unity and Gnome 3 happened. Both were great projects, but they broke my workflow, so I moved to openSUSE and Arch Linux with the Plasma desktop.

Much water has flowed under the bridge since then. Canonical’s dream of taking on Microsoft (Windows), Google (Android), and Apple (iOS) didn’t materialize, and the company decided to reduce its focus on the consumer space.

What was supposed to be bad news for Canonical turned out to be good news for open source communities, because Canonical shut down its in-house projects and returned to upstream projects. The controversial Unity desktop went away, and Gnome reclaimed the throne as the default desktop environment and shell for the world’s most popular Linux distribution.

Source

Using Linux for Logic | Linux Journal

I’ve covered tons of different scientific
applications you can run on your computer to do rather complex
calculations, but so far, I’ve not really given much thought to
the hardware on which this software runs. So in this article, I take a look at
a software package that lets you dive deep down to the level of the
logic gates used to build up computational units.

At a certain point,
you may find yourself asking your hardware to do too much work. In those cases,
you need to understand what your hardware is and how it works. So,
let’s start by looking at the lowest level: the lowly
logic gate. To that end, let’s use a software package named Logisim
in order to play with logic gates in various groupings.

Logisim should be available in most distributions’ package management
systems. For example, in Debian-based distros, install it
with the following command:

sudo apt-get install logisim

You then can start it from your desktop environment’s menu,
or you can open a terminal, type logisim and press
Enter. You should see a main section of the application
where you can start to design your logic circuit. On the left-hand side,
there’s a selection pane with all of the units you can use for your
design, including basic elements like wires and logic gates, and
more complex units like memory or arithmetic units.


Figure 1. When you first start Logisim, you get a blank project where
you can start to design your first logic circuit.

To learn how to start using Logisim, let’s look at how to set up one of
the most basic logic circuits: an AND gate.


Figure 2. You easily can add logic gates to your circuit to model
computations.

If you click the
Gates entry on the left-hand side, you’ll see a full list of available
logic gates. Clicking the AND gate lets you add one to the design
pane by clicking the location where you want it placed. At the bottom
of the left-hand side, you’ll see a pane that displays the attributes
of the selected gate. You can use this pane to edit those attributes to
make the gate behave exactly the way you want. For this example,
let’s change the number of inputs from 5 to 2. The next
step is to add an output pin so you can see whether the output is
1 or 0. You can find pins in the Wiring section.

On the front side of the
AND gate, you’ll want to add pins so you can control input. In the
attributes for each of the pins, you’ll see that you can change whether
the pin is supposed to be an output pin. You also can set whether
the pin is supposed to be a three-state pin.

The last step is to
connect all of these pieces by simply clicking and dragging
between the separate components.


Figure 3. You can add extra items, like inputs and outputs, to your
logic circuit.

By default, the input pins
currently are set to 0, so once the wires are connected, you should see
that the output is set to 0. In order to toggle the input pins, you first need
to select the toggle tool from the toolbar at the top of the window
(the one shaped like a pointing hand). Once you have selected this tool,
you can click on the input pins to change their states. Once both inputs
are set to 1, you should see the output flip to 1 also.

While you can build your circuits up from first principles and see how
they behave, Logisim also lets you define the behavior first and generate
a circuit that gives you the defined behavior. Clicking the
Window→Combinational Analysis menu item pops up a new window where you can
do exactly that.


Figure 4. You can build up your logic circuits in reverse by defining
the behavior you wanted first, then allowing it to generate a circuit that
gives you this required behavior.

The first step is to provide a list of
inputs. You simply add a series of labels, one for each input. For this
example, you’ll define an x, y and z. Next, you’ll need to click
the outputs tab and do the same for the number of outputs you want to
model. Let’s just define a single output for this example.

The last step
is actually to define the behavior linking the inputs to the outputs. This
is done through a truth table. So here, the output is 0 unless
both x and z, or both y and z, are high.
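A table this small is easy to sanity-check outside Logisim before you type it in. Here is the same definition enumerated in shell (1 = high):

```shell
# out is high only when (x AND z) OR (y AND z) is high.
for x in 0 1; do
  for y in 0 1; do
    for z in 0 1; do
      echo "$x $y $z -> $(( (x & z) | (y & z) ))"
    done
  done
done
```

Of the eight input combinations, only 0 1 1, 1 0 1, and 1 1 1 produce a 1, which is exactly the column you will enter in the Combinational Analysis table.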

Figure 5. Logisim includes a tool that allows you to generate logic circuits
based on a truth table that you define to handle the computation you’re
interested in modeling.

Once
you’re happy with the definition, click the Build Circuit
button at the bottom of the window. This pops up a new dialog window
where you can define the name and select the destination project, as
well as choosing whether to use only NAND gates or only 2-input
gates.


Figure 6. By using the Combinational Analysis window, you can create
more complex circuits based purely on their expected behavior.

You can click on the inputs to toggle them and
verify that everything behaves as you had planned.
The Combinational Analysis window has two other tabs: Expression and Minimized. The
Expression tab shows you the logical mathematical expression that
describes the truth table you defined. You can edit your
circuit further by editing this equation directly. The Minimized tab gives you
the logical equation as either the sum of products or the product of sums.
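For this example, the sum-of-products form (x AND z) OR (y AND z) factors down to z AND (x OR y). A quick exhaustive check confirms the two are equivalent, which is what the Minimized tab does symbolically:

```shell
# Compare the sum-of-products form against the factored form
# on every possible input combination.
ok=yes
for x in 0 1; do for y in 0 1; do for z in 0 1; do
  sop=$(( (x & z) | (y & z) ))   # sum of products
  min=$(( z & (x | y) ))         # minimized form
  [ "$sop" -eq "$min" ] || ok=no
done; done; done
echo "equivalent: $ok"
```

The minimized form needs one fewer gate: a single AND fed by z and an OR of x and y.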

Once you finish your circuit, you can save it in a .circ
file. These files define a complete circuit that can be reused as a single
unit. When you do want to reuse them in a larger, more complex circuit,
click Project→Load Library→Logisim Library and
select the saved file. This allows you to build up very
complicated computing circuits rather quickly.

You also can export the circuit itself
by clicking File→Export Image. This allows you
to save the circuit as an image that you can use in a report or
some other process.

This is just a brief introduction, but I hope Logisim helps you learn a bit more
about the fundamentals of computing and logical structures.

Source

How to use Magit to manage Git projects

Git is an excellent version control tool for managing projects, but it can be hard for novices to learn. It’s difficult to work from the Git command line unless you’re familiar with the flags and options and the appropriate situations to use them. This can be discouraging and cause people to be stuck with very limited usage.

Fortunately, most of today’s integrated development environments (IDEs) include Git extensions that make using it a lot easier. One such Git extension available in Emacs is called Magit.

The Magit project has been around for 10 years and defines itself as “a Git porcelain inside Emacs.” In other words, it’s an interface where every action can be managed by pressing a key. This article walks you through the Magit interface and explains how to use it to manage a Git project.

If you haven’t already, install Emacs, then install Magit before you continue with this tutorial…

Source

Changing the month format: a fairly general solution

For a full list of BASHing data blog posts, see the index page.

I sometimes need to change the month format in a dataset, for instance from “Jan” to “01”, or “3” to “March”. There are various clever ways to do this on the command line, but I’m not good at remembering clever. To save time I wrote a table with the 6 different month formats I see most often. It’s the table you see below, and if you highlight and copy it, you should be able to paste it into a text editor as a tab-separated table. Save the file as “months”.

1 01 Jan January i I
2 02 Feb February ii II
3 03 Mar March iii III
4 04 Apr April iv IV
5 05 May May v V
6 06 Jun June vi VI
7 07 Jul July vii VII
8 08 Aug August viii VIII
9 09 Sep September ix IX
10 10 Oct October x X
11 11 Nov November xi XI
12 12 Dec December xii XII

My general strategy is to use AWK to change formats, and to load “months” into an appropriate array in the AWK command. Below are a few examples.

Expand abbreviated month


awk ‘FNR==NR /-/ ‘ months file

Split “full” date into ISO 8601 components


awk -F”t” ‘FNR==NR FNR==1 FNR>1 ‘ months file

Convert “full American” date into Roman month-numeral date


awk -F”t” ‘FNR==NR FNR==1 FNR>1 ‘ months file

Convert date range into ISO 8601 interval date


awk ‘FNR==NR /[0-9]-[0-9]/ / – / && length($0)<21 / – / && length($0)>21 ‘ months file
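The pattern common to all four commands is FNR==NR, which is true only while AWK is reading the first file, letting you load “months” into an array before touching the data. A minimal, hedged sketch of the first transformation (expanding abbreviations like “Jan” to “January”); the two-row table and sample file here are just stand-ins:

```shell
# Stand-in lookup table (fields: number, zero-padded number, abbrev,
# full name, lowercase roman, uppercase roman) and a small data file.
printf '1\t01\tJan\tJanuary\ti\tI\n3\t03\tMar\tMarch\tiii\tIII\n' > months
printf '14 Jan 2018\n7 Mar 2019\n' > file

# While reading "months" (FNR==NR), map abbreviation -> full name;
# then substitute any known abbreviation in each line of "file".
awk -F'\t' 'FNR==NR {full[$3] = $4; next}
            {for (a in full) gsub(a, full[a]); print}' months file
```

With the full 12-row table saved as “months”, the same command expands any abbreviated month in the data file; swapping which fields you index and emit gives the other three transformations.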

To remind myself which fields are which in “months”, I just cat the file before building the AWK command.

Last update: 2018-12-30

Source

Linux Hacker Board Trends in 2018 and Beyond | Linux.com

When I read Brian Benchoff’s recent claim in Hackaday that the maker board market was stalling, I had a sense that there might be some truth to it. The novelty of community-backed, open-spec SBCs has worn off, and there were few new boards in 2018 that seem destined to become Raspberry Pi killers. Yet, the more I researched open-spec Linux/Android maker SBCs for LinuxGizmos’ New Year’s edition SBC catalog, the more I realized that the sector was very much alive — just a bit quieter than before.

There were 19 new SBC entries since our June roundup of 116 SBCs (compared to 13 new products that appeared in that reader survey catalog since the January 2018 New Year’s hacker catalog roundup of 103 boards). Despite the removal from market of several older products in Q2 2018 and the dissolution of The Next Thing and its Chip board — and even after we eliminated several older boards with fading communities, such as the 86Duino and PCDuino8 — we ended up with 122 boards, six more than in June.

Benchoff’s speculation that fewer maker boards were sold in 2018 may well be correct, but I have seen no proof of it. If there has been a slowdown, Benchoff nailed the reason: poor documentation. Other drawbacks to the hacker board scene include buggy software and, less frequently, buggy hardware. In many cases, the documentation and images are fine, but by the time they arrive, your shiny new SBC is already halfway to obsolescence.

Many of the casual Raspberry Pi home automation hobbyists who experimented in recent years with the faster, but not always reliable, Banana Pi, Orange Pi, NanoPi, and other more obscure bargain-basement SBCs have returned to the fold. And why not return to the very capable Raspberry Pi 3 Model B+ — the SBC of the year and the winner of our June reader survey — or the smaller new RPi 3 Model A+? The RPi 3B+ is only a modest improvement over the 3B, but it is close enough to the competition in price, features, and performance to shift the comparison to software and support issues. If the Raspberry Pi Foundation had been faster to transition from ARM11 to Cortex-A SoCs, the hacker board market might be considerably smaller than it is today.

Despite the atypically buggy PoE board for the RPi 3B+, which was quickly resolved with a refund and a reboot, things tend to work more smoothly in Raspberry land. Most buyers are more interested in solid community support and no-doubt software and HAT compatibility than they are in having full schematics.

Yet, there’s a second trend that is leading us toward more diversity: The maker board movement continues to merge with the commercial SBC industry. There appears to be significant growth in small manufacturing customers using open-spec boards with a variety of special features ranging from AI to voice control to Time Sensitive Networking. These technically knowledgeable buyers need solid documentation and schematics for prototyping new products but are usually less interested in other community resources. More to the point, they are tired of the high prices charged by commercial vendors, which only make sense with huge volumes.

New maker SBCs and trends in late 2018

Many of the new boards we’ve seen since June are aimed at relatively niche applications rather than trying to beat the Pi at world domination. There are new router boards such as the Banana Pi BPI-R2, as well as an increasing number of extended temperature SBCs such as the Firefly-PX3-SE. We’ve seen industrial focused newcomers such as the HummingBoard CBi (CAN bus interface) and novel sandwich-style module/carrier board combos like the Khadas Edge. Seeed has introduced a ReSpeaker v2.0 model aimed at far-field voice control applications.

The trend toward Power-over-Ethernet (PoE) continues with products like the Renegade Elite. (The RPi 3B+ may now have a PoE option, but it’s hobbled by a Gigabit Ethernet port that is limited to 300Mbps.)

We’re seeing more SBCs with built-in cellular modems or mini-PCIe and M.2 slots capable of supporting them. FriendlyElec has jumped on this trend this year with 2G, 3G, and 4G themed NanoPi IOT models.

The still fledgling x86 hacker board arena continues to grow with the Intel Gemini Lake based Odroid-H2 board, which appears to be the most powerful hacker SBC in history. The Odroid-H2 is the first hacker board to feature an Atom processor from a recently released Atom/Celeron product family.

Another new Intel-based board — Aaeon’s Apollo Lake based Up Core Plus — is one of several new entries since June designed for machine learning and neural network processing. These AI contenders include the Allwinner V5 based Lindenis V5 and Bitmain Sophon BM1880, which uses a novel TPU-enhanced Arm SoC of Bitmain’s own design. Khadas is selling a version of the Khadas Edge with the AI-enhanced Rockchip RK3399Pro SoC.

Among more mainstream boards, we saw a continuing shift from Allwinner SoCs to the high-end, hexa-core Rockchip RK3399. In early 2018 we saw a lot of RK3399 entries in the $110 to $150 range, but newer models are more affordable.

The new Khadas Edge, Renegade Elite, and 96Boards form-factor Rock960 all start at about $100, and others have been cheaper. FriendlyElec introduced a $65 (2GB) and up NanoPi-M4 and followed up with a compact, $45 NanoPi Neo4. The Neo4 is an impressive feat despite being limited to 1GB of RAM, which is on the low side for an RK3399 board. Also in this range is Pine64’s $60 (2GB) and up RockPro64, which shipped earlier this year.

No single Raspberry Pi killer emerged in 2018. Yet the collective group of RK3399 vendors appears to be acting as a counterpoint to the Pi. The RK3399 is fast and offers x86-like technologies such as PCIe, SATA, and HDMI 2.0, and it has better Linux mainline support than the still-improving Allwinner SoCs.

2019 SBC trends

So what can we expect in 2019’s board market? The Raspberry Pi Foundation has already said there will be no major new Pi models in 2019, so as we await 2020’s Raspberry Pi 4, there will be room for other boards to shine. If there’s a major recession, we will likely see fewer SBC introductions. Otherwise, however, the growth in niche applications will probably block consolidation for the time being.

With TI’s new quad-core Cortex-A53 Sitara AM65x, it might be time for a BeagleBone reboot, and we should see more NXP i.MX8M boards such as TechNexion’s new sandwich-style PICO-PI-IMX8M spin on the Wand-Pi-8M. In the x86 world, meanwhile, a community-backed SBC based on the AMD Ryzen Embedded V1000 chip may arrive, although it may not clear our $200 limit.

So far, no RISC-V based SBCs have slid under our $200 limit, but one is likely to arrive in 2019. We could even see a low-cost Google Fuchsia hacker board. Meanwhile, you can check out the new Linux supported C-SKY ISA for as little as $6 with the new C-SKY board.

Overall, we’ll see a growing focus on 5G and edge analytics. In 2019 we are likely to see at least one 5G-ready board, and many more high-end edge IoT boards with NPUs, VPUs, and smartened up GPUs. So get ready to start talking TOPS and deep learning frameworks.

Sounds like fun. Let’s go.

Source

Unit Testing in the Linux Kernel

Brendan Higgins recently proposed adding unit tests to the Linux kernel,
supplementing other development infrastructure such as
perf, autotest and
kselftest. The whole issue of testing is very dear to kernel developers’
hearts, because Linux sits at the core of the system and often has a very
strong stability/security requirement. Hosts of automated tests regularly
churn through kernel source code, reporting any oddities to the mailing
list.

Unit tests, Brendan said, specialize in testing standalone code snippets.
It was not necessary to run a whole kernel, or even to compile the kernel
source tree, in order to perform unit tests. The code to be tested could be
completely extracted from the tree and tested independently. Among other
benefits, this meant that dozens of unit tests could be performed in less
than a second, he explained.

Giving credit where credit was due, Brendan identified
JUnit, Python’s
unittest.mock and Googletest/Googlemock
for C++ as the inspirations for
this new KUnit testing idea.

Brendan also pointed out that since all code being unit-tested is
standalone and has no dependencies, this meant the tests also
were deterministic. Unlike on a running Linux system, where any number of pieces
of the running system might be responsible for a given problem, unit tests
would identify problem code with repeatable certainty.
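Brendan’s point about determinism is easy to illustrate with a conventional user-space test. Below is a minimal sketch using Python’s built-in unittest (one of the frameworks he cites as inspiration); the parse_flags helper is a hypothetical stand-in for an extracted code snippet, not code from the actual KUnit patches.

```python
import unittest

def parse_flags(flags):
    """Hypothetical standalone helper: split a comma-separated flag
    string into a sorted, de-duplicated list. It touches no kernel,
    hardware, or global state, so its results are fully repeatable."""
    return sorted(set(f.strip() for f in flags.split(",") if f.strip()))

class ParseFlagsTest(unittest.TestCase):
    # Each case exercises the function in isolation; if a test fails,
    # the problem is in parse_flags itself, with repeatable certainty.
    def test_deduplicates_and_sorts(self):
        self.assertEqual(parse_flags("ro,ro,noexec"), ["noexec", "ro"])

    def test_empty_input(self):
        self.assertEqual(parse_flags(""), [])

# Run the suite; dozens of tests like this finish in milliseconds.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseFlagsTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

KUnit’s actual interface is C, of course; the point here is only the structure: a pure function, explicit expected values, and no dependence on the state of the running system.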

Daniel Vetter replied extremely enthusiastically to Brendan’s work. In
particular, he said, “Having proper and standardized infrastructure for
kernel unit tests sounds terrific. In other words: I want.” He added that
he and some others already had been working on a much more specialized set
of unit tests for the Direct Rendering Manager (DRM) driver. Brendan’s
approach, he said, would be much more convenient than his own more
localized efforts.

Dan Williams was also very excited about Brendan’s work,
and he said he had
been doing a half-way job of unit tests on the libnvdimm (non-volatile
device) project code. He felt Brendan’s work was much more general-purpose,
and he wanted to convert his own tests to use KUnit.

Tim Bird replied to Brendan’s initial email as well, saying he thought unit
tests could be useful, but he wanted to make sure the behaviors were
correct. In particular, he wanted clarification on just how it was possible
to test standalone code. If the code were to be compiled independently,
would it then run on the local system? What if the local system had a
different hardware architecture from the system for which the code was
intended?
Also, who would maintain unit tests, and where would the tests live, within
the source tree? Would they clutter up the directory being tested, or would
they live
far away in a special directory reserved for test code? And finally, would
test code be easier to write than the code being tested? In other words,
could new developers cut their teeth on a project by writing test code, as
a gateway to helping work on a given driver or subsystem? Or would unit
tests have to be written by people who had total expertise in the area
already?

Brendan attempted to address each of those issues in turn. To start, he
confirmed that the test code was indeed extracted and compiled on the local
system. Eventually, he said, each test would compile into its own
completely independent test binary, although for the moment, they were all
lumped together into a single user-mode-linux (UML) binary.

In terms of cross-compiling test code for other architectures, Brendan felt
this would be hard to maintain and had decided not to support it. Tests
would run locally and would not depend on architecture-specific
characteristics.

In terms of where the unit tests would live, Brendan said they would be in
the same directory as the code being tested. So every directory would have
its own set of unit tests readily available and visible. The same person
maintaining the code being tested would maintain the tests themselves. The
unit tests, essentially, would become an additional element of every
project. That maintainer would then presumably require that all patches to
that driver or subsystem pass all the unit tests before they could be
accepted into the tree.

In terms of who was qualified to write unit tests for a given project,
Brendan explained:

In order to write a unit test, the person who writes
the test must understand what the code they are testing is supposed to do.
To some extent that will probably require someone with some expertise to
ensure that the test makes sense, and indeed a change that breaks a test
should be accompanied by an update to the test. On the other hand, I think
understanding what pre-existing code does and is supposed to do is much
easier than writing new code from scratch, and probably doesn’t require too
much expertise.

Brendan added that unit tests would probably reduce, rather than increase,
a maintainer’s workload. In spite of representing more code overall:

Code
with unit tests is usually cleaner, the tests tell me exactly what the code
is supposed to do, and I can run the tests (or ideally have an automated
service run the tests) that tell me that the code actually does what the
tests say it should. Even when it comes to writing code, I find that writing
code with unit tests ends up saving me time.

Overall, Brendan was very pleased by all the positive interest, and said he
planned to do additional releases to address the various technical
suggestions that came up during the course of discussion.
No voices really were raised in opposition to any of Brendan’s ideas. It
appears that unit tests may soon become a standard part of many drivers and
subsystems.

Note: if you’re mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Source

CentOS Install Htop – Linux Hint

No matter which system you’re using – Windows, Linux, macOS or anything else – having a handy task manager is always a plus, as it gives you more control over the system. That’s one of the main reasons I love to have Htop at my disposal.

Htop is a great, interactive system monitor and process manager for UNIX-like systems. It’s a CLI tool that runs in text mode; to use Htop, you need “ncurses” present on your system.

This makes it a very powerful solution for enterprise and server environments, where a GUI is mostly avoided. Granted, GUI tools look nicer and are easier to use, but for professionals and server administrators, the CLI is the way to go.

CentOS – the free rebuild of Red Hat Enterprise Linux – is a staple of the server and enterprise world. Today, let’s have a look at installing and using Htop on CentOS.

Htop is already available in the Fedora EPEL repository, where it’s officially maintained. That’s why this is the recommended way of getting htop. Don’t worry; if you wish, you can also download the source and compile it yourself.

1) Installing from EPEL

Make sure that your system has enabled the EPEL repository –

sudo yum install epel-release
sudo yum update

Once EPEL is ready, it’s time to install htop –

sudo yum install htop

2) Installing from source

First, make sure that your system includes the “Development Tools” group –

sudo yum groups mark install “Development Tools”
sudo yum groups mark convert “Development Tools”

sudo yum groupinstall “Development Tools”
sudo yum install glibc-devel glibc-headers kernel-headers kernel-devel gnutls-devel

sudo yum install ncurses-devel

Now, download the latest source code of htop (version 2.2.0 at the time of writing) and extract it –

tar -xvzf htop-2.2.0.tar.gz

Start the building process –

cd htop-2.2.0
./configure
make
sudo make install

Htop usage

Fire up the tool –

htop

This is the window where you’ll find every piece of information about your running system.

On the top, you can check out the memory and swap usage.

For entering the setup, press F2.

Here, you can easily check out what options and info are available on the main window.

Tree view

It’s my favorite view, as it lets you see the hierarchy of each process at a glance. Press F5 or “t” to toggle it.

Killing a process

Select a process and press F9 or the “k” key.

Then, select “SIGKILL”.

You can also kill multiple processes at once. Use “Spacebar” to tag all the processes you want to kill, then press F9.

Processes from a single user

On the main window, hit “u” key.

Then, select the user you want to see.

Monitor a particular process

Highlight the process and press “F”.

Htop will then follow that process, keeping it highlighted even as it moves around the list.

For all the other usage, check out the man page of htop –

man htop

Or, the htop help page –

htop --help

Enjoy!

Source
