Community collaboration makes for some great OpenStack solutions

If you follow the evolution of OpenStack, you know how it’s finding its way into all sorts of workloads, from high-level research to car manufacturing to all-new 5G networks. Organizations are using it for everything from the mundane to the sublime and sharing what they’re learning with the OpenStack community.

Some of the examples offered up at the recent OpenStack Summit Berlin showed that OpenStack is a full-fledged part of the IT mainstream, which means there are a wealth of ideas out there for your own implementation.

In many cases, the advances of others – including Adobe, AT&T, NASA, Oerlikon, SBAB Bank, Volkswagen, Workday and many other companies and organizations, big and small – are being contributed back to the community for you and others to use. This is a critical part of OpenStack and SUSE OpenStack Cloud, which take the best the community has to offer to improve the platform and how organizations solve problems.

Take Workday, the human resources software-as-a-service vendor, which in 2019 expects to have half of all its production workloads living on the 45 OpenStack private-cloud clusters it’s running in its global data centers. That represents about 4,600 servers, up from just 600 in 2016.

To manage the growing demand for its products, Workday created and now manages about 4,000 immutable VM images that are updated on their own cycles, with new versions of Workday deployed every weekend. That means the company needs to regularly tear down and replace thousands of VMs in a very short time and do it without any downtime.

That scale required automation, and the growing complexity required a new effort to gather data about their clusters and OpenStack controllers. They used BigPanda for incident management and Wavefront for monitoring and analytics, looking for anomalies and problems.

As it turns out, they uncovered some real issues with how they deployed images, and solved those problems by extending the OpenStack Nova API to leverage its caching capability to pre-load big images – what they call image pre-fetching. This enabled them to speed up the image deployments so instead of big images slowing down the restart of thousands of VMs, they could pre-load them and relaunch new VM instances quickly.

They did some ingenious stuff, like enabling Glance to serve up images directly to remote OpenStack controllers, and got help from the community for figuring it out. With OpenStack’s complexity, that openness made their work doable, and in the end, they offered their Nova API work back to the community.

Workday is just one example of the companies taking advantage of the power of OpenStack and the open source community to solve real problems. Check out these and other OpenStack successes – including these 51 things you need to know – from the OpenStack Summit Berlin.

Bash’s Built-in printf Function | Linux Journal

Even if you’re already familiar with the printf command, if you got your information via “man printf” you may be missing a couple of useful features that are provided by bash’s built-in version of the standard printf(1) command.

If you didn’t know bash had its own version of printf, then you didn’t heed the note in the man page for the printf(1) command:

NOTE: your shell may have its own version of printf, which usually supersedes the version described here. Please refer to your shell’s documentation for details about the options it supports.

You did read the man page, didn’t you? I must confess, I’d used printf for quite a while before I realized bash had its own.

To find the documentation for the built-in version of printf, just search for “printf” in the bash man page.

In case you’re completely unfamiliar with the printf command, and similar functions in other languages, a couple quick examples should get you up to speed:

$ printf "Hello world\n"
Hello world

$ printf "2 + 2 is %d\n" $((2+2))
2 + 2 is 4

$ printf "%s: %d\n" "a string" 12
a string: 12

You provide printf with a format string and a list of values. It then replaces the %… sequences in the string with the values from the list formatted according to the format specification (the part following the percent sign). There are a dozen or more format specifier characters, but 99% of the time, the only ones you’ll need are the following:

  • d – Format a value as a signed decimal number.
  • u – Format a value as an unsigned decimal number.
  • x – Format a value as a hexadecimal number with lower case a-f.
  • X – Format a value as a hexadecimal number with upper case A-F.
  • s – Format a value as a string.
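
If it helps, here's a quick session exercising the numeric specifiers above (the sample values are arbitrary):

```shell
# d prints signed decimal, u unsigned decimal, x/X lower/upper-case hex
printf '%d in hex is %x (or %X)\n' 255 255 255
printf '%u\n' 42
```

The first line prints "255 in hex is ff (or FF)", the second simply "42".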

Format specifiers can be preceded by a field width to specify the minimum number of characters to print. A positive width causes the value to be right-justified; a negative width causes the value to be left-justified. A width with a leading zero causes numeric fields to be zero-filled. Usually, you want to use negative widths for strings and positive widths for numbers.

Probably not what you want:

$ printf "%20s: %4d\n" "string 1" 12 "string 2" 122
            string 1:   12
            string 2:  122

Still probably not what you want:

$ printf "%-20s: %-4d\n" "string 1" 12 "string 2" 122
string 1            : 12
string 2            : 122

Probably this is what you want:

$ printf "%-20s: %4d\n" "string 1" 12 "string 2" 122
string 1            :   12
string 2            :  122

Note that printf reuses the format if it runs out of format specifiers, which in the examples above allows you to print two lines (four values) with only two format specifiers.
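
That reuse also makes it easy to print a whole table from a single format string; for example (the array contents here are made up):

```shell
# One format, three rows: printf cycles back to the start of the
# format string for each name/number pair in the array
rows=( alpha 1 beta 22 gamma 333 )
printf '%-8s %4d\n' "${rows[@]}"
```

This prints three aligned lines, one per pair, without any explicit loop.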

If you specify the width as an asterisk, then the width is taken from the next value in the list:

$ printf "%*s: %*d\n" -20 "a string" 4 12
a string            :   12

Note that if you want to zero-fill a field and specify the width with an asterisk, put the zero before the asterisk:

$ printf "%*s: %0*d\n" -20 "a string" 4 12
a string            : 0012

So now to the features that bash’s built-in version of printf provides. The first is the -v option, which allows you to put the formatted result into a variable rather than print it out. So instead of:

$ hw=$(printf "Hello world")
$ echo $hw
Hello world

You can do this:

$ printf -v hw "Hello world"
$ echo $hw
Hello world

The second option is for formatting times (and dates):

$ printf "%(%m-%d-%Y %H:%M:%S)T\n" $(date +%s)
01-10-2019 09:11:44

The format specifier here is %(datefmt)T and the value is a system time in seconds from the epoch. The nested datefmt supports the same format options that are supported by strftime(3). You can get a system time value by specifying the +%s format option to the date command.

A couple special arguments are supported by the %(datefmt)T format. From the bash man page:

Two special argument values may be used: -1 represents the current time, and -2 represents the time the shell was invoked. If no argument is specified, conversion behaves as if -1 had been given.

There are a couple of additional features supported by bash’s built-in version of printf, but none that you are likely to need on a regular basis. See the man page for more information.

Easy to Understand Man Pages for Every Linux User

One of the most commonly used and reliable ways of getting help under Unix-like systems is via man pages. Man pages are the standard documentation for every Unix-like system and they correspond to online manuals for programs, functions, libraries, system calls, formal standards and conventions, file formats and so on. However, man pages suffer from many failings, one of which is that they are too long, and some people just don’t like to read too much text on the screen.

The TLDR (short for “Too Long; Didn’t Read”) pages are summarized practical usage examples of commands on different operating systems, including Linux. They simplify man pages by offering practical examples.

TLDR is Internet slang meaning that a post, article, comment or anything such as a manual page was too long, and whoever used the phrase didn’t read it for that reason. The content of TLDR pages is openly available under the permissive MIT License.

In this short article, we will show how to install and use TLDR pages in Linux.

Requirements

  1. Install Latest Nodejs and NPM Version in Linux Systems

Before installing, you can try the live demo of TLDR.

How to Install TLDR Pages in Linux Systems

To conveniently access TLDR pages, you need to install one of the supported clients. The original client for the tldr-pages project is written in Node.js, and we can install it from NPM by running:

$ sudo npm install -g tldr

TLDR is also available as a Snap package; to install it, run:

$ sudo snap install tldr

After installing the TLDR client, you can view the summarized man page of any command, for example the tar command here (you can use any other command here):

$ tldr tar
View Tar Command Man Page

Here is another example of accessing the summarized man page for the ls command.

$ tldr ls
View ls Command Man Page

To list all commands for the chosen platform in the cache, use the -l flag.

$ tldr -l 
List All Linux Commands

To list all supported commands in the cache, use the -a flag.

$ tldr -a

You can update or clear the local cache by running:

$ tldr -u	#update local cache 
OR
$ tldr -c 	#clear local cache 

To search pages using keywords, use the -s option, for example:

$ tldr -s  "list of all files, sorted by modification date"
Search Linux Commands Using Keyword

To change the color theme (simple, base16, ocean), use the -t flag.

$ tldr -t ocean

You can also show a random command, with the -r flag.

$ tldr -r   
View Man Page for Random Linux Command

You can see a complete list of supported options by running:

$ tldr -h

Note: You can find a list of all supported and dedicated client applications for different platforms, in the TLDR clients wiki page.

TLDR Project Homepage: https://tldr.sh/

That’s all for now! The TLDR pages are summarized practical examples of commands provided by the community. In this short article, we’ve shown how to install and use TLDR pages in Linux. Use the feedback form to share your thoughts about TLDR or share with us any similar programs out there.

Top 5 Best Ubuntu Alternatives – Linux Hint

If you asked younger Linux users to tell you what their first Linux distribution was, we bet that Ubuntu would be the most common answer. First released in 2004, Ubuntu has helped establish Linux as a viable alternative to Windows and macOS and convinced millions that not all good things in life cost money.

But we’re now in 2019, and there are many excellent desktop Linux distributions that are not based on Ubuntu, and we’ve selected five of them for this article and sorted them by their popularity.

Manjaro

Manjaro is based on Arch Linux, a rolling-release distribution for computers based on x86-64 architectures that follows the KISS principle (“keep it simple, stupid”), emphasizing elegance, code correctness, minimalism, and simplicity. Manjaro sticks to the KISS principle as closely as possible, but it also focuses on user-friendliness and accessibility to make the distribution suitable for Linux newbies and veterans alike.

One of the most praise-worthy features of Manjaro is pacman, a versatile package manager borrowed from Arch Linux. To make pacman more user-friendly, Manjaro includes front-end GUI package manager tools called Pamac and Octopi. Three flagship editions of Manjaro are available – XFCE, KDE, and GNOME – but users can also choose from several community editions, including OpenBox, Cinnamon, i3, Awesome, Budgie, MATE, and Deepin. All editions of Manjaro come with a GUI installer and embrace the rolling release model.

By combining the user-friendliness of Ubuntu with the customizability of Arch Linux, Manjaro developers have created a Linux distribution that allows beginners to learn and grow with it and experienced users to get more done in less time. Because Manjaro boots into a live system, you can easily try it either using a virtual machine or by running it from a DVD or USB flash drive.

Solus

Unlike most popular Linux distributions that you come across these days, Solus is a completely independent desktop operating system built from scratch. Its main goal is to offer a cohesive desktop computing experience, which is something many Linux distributions have been trying to do, with mixed results.

Solus is built around Budgie, a desktop environment that uses various GNOME technologies and is developed by the Solus project, but other desktop environments are available as well, including MATE and GNOME. Budgie shares many design principles with Windows, but it’s far more customizable and flexible.

Solus ships with a whole host of useful software applications to take care of all your computing needs right out of the box. Content creators can animate in Synfig Studio, produce music in Musescore or Mixxx, design and illustrate in GIMP and Inkscape, and edit video in Avidemux or Shotcut. All applications and system components are continuously updated, so there are no large OS updates to worry about.

Fedora

Fedora would never be the Linux distribution of choice of Linus Torvalds, the creator of the Linux kernel, if it didn’t do something right. First released in 2003, Fedora is known for focusing on innovation and offering cutting-edge features that take months to appear in other Linux distributions. The development of this Linux distribution is sponsored by Red Hat, which uses it as the upstream source of the commercial Red Hat Enterprise Linux distribution.

Thanks to built-in Docker support, you can containerize your own apps or deploy containerized apps out of the box on Fedora. The default desktop environment in Fedora is GNOME 3, which was chosen for its user-friendliness and complete support for open source development tools. That said, several other desktop environments, including XFCE, KDE, MATE, and Cinnamon, are available as well.

Just like Ubuntu, Fedora is also great as a server operating system. It features an enterprise-class, scalable database server powered by the open-source PostgreSQL project, brings a new Modular repository that provides additional versions of software on independent lifecycles, and comes with powerful administration tools to help you monitor your system’s performance and status.

openSUSE

Once known as SUSE Linux and SuSE Linux Professional, openSUSE is a popular Linux distribution that offers two distinct release models: openSUSE Tumbleweed follows a rolling release model, while openSUSE Leap follows a traditional fixed release model.

Regardless of which release model you choose, you can always access all openSUSE tools, including the comprehensive Linux system configuration and installation tool YaST, the open and complete distribution development platform Open Build Service, and the powerful Linux software management engine ZYpp, which provides the backend for zypper, the default command line package management tool for openSUSE.

openSUSE has been around since 2005, and its corporate sponsor, SUSE, is now in the hands of Swedish private equity group EQT Partners, which purchased it for $2.5 billion in July 2018. The acquisition didn’t affect the distribution’s development in any way, and SUSE developers expect the partnership with EQT to help them exploit the excellent market opportunity both in the Linux operating system area and in emerging product groups in the open source space, according to the official press release.

Debian

You probably know that Ubuntu is a Debian-based Linux distribution, but you may not know that Debian is actually a great alternative to Ubuntu. Not only is Debian one of the earliest Linux distributions in the world, but it’s also one of the most active, with over 51,000 packages and translations in 75 languages.

Since its beginning in 1993, Debian has been firmly committed to free software. The famous Debian Social Contract states that the distribution will always remain 100 percent free and will never require the use of a non-free component. It also states that Debian developers will always give back to the free software community by communicating things such as bug fixes to upstream authors.

Before you download and install Debian, you should familiarize yourself with its three main branches. The Stable branch targets stable and well-tested software to provide maximum stability. The Testing branch includes software that has received some testing but is not ready to be included in the Stable branch just yet. Finally, the Unstable branch includes bleeding-edge software that is likely to have some bugs.

Linux Today – Linux 5.0 rc2

Jan 13, 2019, 22:00

By Linus Torvalds

So the merge window had somewhat unusual timing with the holidays, and I was afraid that would affect stragglers in rc2, but honestly, that doesn’t seem to have happened much. rc2 looks pretty normal.

Were there some missing commits that missed the merge window? Yes. But no more than usual. Things look pretty normal.

What’s a bit abnormal is that I’m traveling again, and so for me it’s a Monday release, but it’s (intentionally) the usual “Sunday afternoon” release schedule back home. I’m trying to not surprise people too much.

As to actual changes: all looks fairly normal. Yes, there’s a fair number of perf tooling updates, so that certainly stands out in the diffstat, but if you ignore the tooling and just look at the kernel, it’s about two thirds drivers (networking, gpu, block, scsi..), with the rest being the usual mix of arch updates (ARM, RISC-V, x86, csky), with some filesystem (btrfs, cifs) and vm fixes.

Go test,

Linus

Understanding Load Average on Linux – Linux Hint

Load average is a measurement of the amount of work versus free CPU cycles available on a system processor. In this article I’ll define the term, demonstrate how Linux calculates this value, then provide insight into how to interpret system load.

Before we dive into Linux load averages, we must explore the different ways load is calculated and address the most common measurement of CPU load – a percentage.

Windows calculates load differently from Linux, and since Windows has been historically more popular on the desktop, the Windows definition of load is generally understood by most computer users. Most Windows users have seen the system load in the task manager displayed as a percentage ranging from 0% to 100%.

In Windows this is derived by examining how “busy” the System Idle Process is and using the inverse to represent the system load. For example, if the idle thread is executing 99% of the time, CPU load in Windows would be 1%. This value is easy to understand but provides less overall detail about the true status of the system.

In Linux, the load average is instead represented by a decimal number starting at 0.00. The value can be roughly defined as the number of processes over the past minute that had to wait their turn for execution. Unlike Windows, the Linux load average is not an instant measurement. Load is given in three values – the one-minute average, the five-minute average, and the fifteen-minute average.

Understanding Load Average in Linux

At first, this extra layer of detail seems unnecessary if you simply want to know the current state of CPU load in your system. But since the averages of three time periods are given, rather than an instant measurement, you can get a more complete idea of the change of system load over time in a single glance at three numbers.

Displaying the load average is simple. On the command line, you can use a variety of commands. I simply use the “w” command:

root@virgo [~]# w
21:08:43 up 38 days, 4:34, 4 users, load average: 3.11, 2.75, 2.70

The rest of the command will display who’s logged on and what they’re executing, but for our purposes this information is irrelevant so I’ve clipped it from the above display.
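
If you’d rather script it, the same three averages come straight from the kernel via /proc/loadavg (the variable names below are arbitrary):

```shell
# First three fields of /proc/loadavg are the 1-, 5-, and 15-minute
# averages; the rest is running/total task counts and the last PID
read -r one five fifteen rest < /proc/loadavg
echo "1m=$one 5m=$five 15m=$fifteen"
```

No external commands are needed at all, which makes this handy inside monitoring scripts.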

In an ideal system, no process should be held up by another process (or thread), but in a single processor system, this occurs when the load goes above 1.00.

The words “single processor system” are incredibly important here. Unless you’re running an ancient computer, your machine probably has multiple CPU cores. The machine I’m on has 16 cores.

In this case, a load average of 3.11 is not alarming at all. It simply means that a bit more than three processes were ready to execute and CPU cores were present to handle their execution. On this particular system, the load would have to reach 16 to be considered at “100%”.

To translate this to a percent-based system load, you could use this simple, if somewhat obtuse, command:

cat /proc/loadavg | cut -c 1-4 | echo "scale=2; ($(</dev/stdin)/`nproc`)*100" | bc -l

This command sequence isolates the 1-minute average via cut and echoes it, divided by the number of CPU cores, through bc, a command-line calculator, to derive the percentage.

This value is by no means scientific but does provide a rough approximation of CPU load in percent.
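
A less convoluted way to get roughly the same figure, assuming the coreutils nproc command is available, is to let awk do the arithmetic:

```shell
# 1-minute load average divided by core count, as a rough percentage
awk -v cores="$(nproc)" '{ printf "%.1f%%\n", ($1 / cores) * 100 }' /proc/loadavg
```

On the 16-core machine above, a 1-minute average of 3.11 would come out as about 19.4%.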

A Minute to Learn, a Lifetime to Master

In the previous section I put the “100%” example of a load of 16.0 on a 16 CPU core system in quotes because the calculation of load in Linux is a bit more nebulous than in Windows. The system administrator must keep in mind that:

  • Load is expressed in waiting processes and threads
  • It is not an instantaneous value, but rather an average
  • Its interpretation must include the number of CPU cores, and
  • It may be inflated by I/O waits such as disk reads

Because of this, getting a handle on CPU load on a Linux system is not entirely an empirical matter. Even if it were, CPU load alone is not an adequate measurement of overall system resource utilization. As such, an experienced Linux administrator will consider CPU load in concert with other values such as I/O wait and the ratio of user to kernel time.

I/O Wait

I/O wait is most easily seen via the “top” command:

In the screenshot above I have highlighted the I/O wait value. This is a percentage of time that the CPU was waiting on input or output commands to complete. This is usually indicative of high disk activity. While a high wait percentage alone may not significantly degrade CPU-bound tasks, it will reduce I/O performance for other tasks and will make the system feel sluggish.

High I/O wait without any obvious cause might indicate a problem with a disk. Use the “dmesg” command to see if any errors have occurred.
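
One quick way to do that scan, sketched here with a grep pattern that’s only a starting point (and note that reading dmesg may require root on some systems):

```shell
# Show the most recent kernel messages mentioning I/O or ATA errors;
# no output here is a good sign
dmesg | grep -iE 'i/o error|ata[0-9].*error' | tail -n 20
```

If lines do show up, the device name in the message (sda, nvme0n1, and so on) tells you which disk to investigate.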

User vs. Kernel Time

The above highlighted values represent the user and kernel (system) time. This is a breakdown of the overall consumption of CPU time by users (i.e. applications, etc.) and the kernel (i.e. interaction with system devices). Higher user time will indicate more CPU usage by programs where higher kernel time will indicate more system-level processing.

A Fairly Average Load

Learning the relationship of load average to actual system performance takes time, but before long you’ll see a distinct correlation. Armed with the intricacies of system performance metrics, you’ll be able to make better decisions about hardware upgrades and program resource utilization.

Kaku – web technologies-based music player

My CD collection has taken over my spare room. With very little space to store more, I’m gradually spending more time using streaming services.

Linux is blessed with a mouthwatering array of excellent open source music players. But I’m always on the lookout for fresh and innovative music players.

Kaku bills itself as the next generation music client. Is that self-proclaimed hype? The software is written in JavaScript.

Installation

Users are well catered for irrespective of their operating system, as the project offers binaries for Linux, Mac OS X, and Windows.

For Linux, there are official packages for Debian/Ubuntu (32- and 64-bit). For other distros, there may be packages available from their respective repositories.

The developer also provides an AppImage (32- and 64-bit) which makes it easy to run the software. AppImage is a format for distributing portable software on Linux without needing superuser permissions to install the application. All that’s required is to download the AppImage, and make the file executable by typing:

$ chmod u+x ./Kaku-2.0.1-x86_64.AppImage

In operation

Here’s an image of Kaku in action.

Kaku

First thing you might notice from the above screenshot is that Kaku replaces the maximize, minimize, and close buttons with Mac style buttons. I prefer my applications to have a consistent look and feel, but it’s not a big issue.

At the top right of the window, there’s a search bar. By default, Kaku displays results from YouTube, but there’s also the option to search Vimeo, SoundCloud, MixCloud, or all of them. Videos can be displayed in a list view or an icon view. By default, you’re presented with the top-ranking YouTube music videos.

At the bottom left, there’s a small window that displays the video. There are the standard playback buttons and playback bar, together with the option to cast the output to a device and toggle TV mode. You can make videos appear full screen or occupy all of the Kaku estate.

Latest News

This section is redundant, useless, and hopefully will be removed in a later release. It shows only release notes for early releases of Kaku. You might (incorrectly) conclude that Kaku hasn’t been updated in years. This isn’t the case; the software is under active development. But even if this section offered details of recent improvements to the software, that information is much better placed on the project’s GitHub page rather than cluttering up the application itself.

Search Results

You’re taken to this section whenever you use the search bar, or click Search Results (unless you’re in TV mode). Videos that match your search criteria are easily added to the play queue. The search bar displays even in TV mode, but search results are not displayed.

Play Queue

This section is populated by clicking the “Add to Play Queue” button, which is shown in Home and Search Results. The video currently being played is highlighted in light blue. Right-clicking on a video gives the option to add it to a playlist. You’ll need to have created a playlist first, though.

History

As you might have guessed, this section shows a history of videos that you’ve watched. You can click on any entry and replay that video. Right-clicking on a video gives the option to add it to a playlist.

Settings

This section lets you configure the application. I’ll cover the configuration options on the next page.

Online DJ

Does the prospect of becoming a DJ entice you? This section lets you become your own DJ, offering your choice of music videos to listeners (known as guests). When you create a room, a room key is generated. You share this key with your guests which gives them access to your room. As the DJ, what you play will be offered to your guests. By joining the room, you can also text chat with everyone in the room. Neat!

Other Features

Before looking at some of the other functionality, let’s discuss memory usage.

I recently put Headset under the microscope. Like Kaku, Headset is a YouTube player although it’s implemented in a very different way.

One thing that cropped up with Headset was its excessive memory usage, sometimes topping 1GB of RAM. Kaku is much more frugal with memory, consuming less than 300MB in typical usage.

The settings section offers the following functionality:

  • Enable desktop notifications.
  • Keep Kaku on top. Unlike vlc, this feature actually works!
  • Enable Chatroom. The chatroom is available with the Online DJ functionality.
  • Internationalization support – translations are available for Arabic, Chinese, Czech, Dutch, French, German, Italian, Portuguese, Portuguese (Brazilian), Russian, Spanish, and other languages.
  • Choose the Top Ranking for different countries. Bizarrely, the application defaults to United Arab Emirates.
  • Change the default searcher: YouTube, Vimeo, SoundCloud, Mixcloud, or all of them.
  • Default track format: Best Video, or Best Audio.
  • Import YouTube playlists.
  • Backup data locally or to Dropbox.
  • Sync data locally or from Dropbox.
  • Update Player.
  • Reset the database.

The software also offers playlists, with the ability to rename/remove them.

Summary

Kaku is a capable but not a great music/video player. There’s a good range of streaming services available to use. And it’s much more pleasurable to watch music videos using Kaku than on the streaming service website itself. While the software works well, it’s a tad rough round the edges and idiosyncratic.

Overall, I’m left with a feeling of meh! The Online DJ and text chat functionality is innovative, although it’s clunky.

There’s lots of functionality I’d love added. I’d like shuffle playback, better keyboard shortcuts, an option to choose the playback resolution, and tons more. While the software purports to show the best video quality available, this clearly isn’t the case.

The player is under active development. I’ll be keeping my eagle eyes on new releases.

Website: kaku.rocks
Support: GitHub code repository
Developer: Chia-Lung Chen and contributors
License: MIT License

Compact i.MX6 UL gateway offers WiFi, 4G, LoRa, and ZigBee

Forlinx’s “FCU1101” is a compact embedded gateway with -35 to 70℃ support that runs Linux on an i.MX6 UL and offers 4x isolated RS485 ports, a LAN port, and WiFi, 4G, LoRa, and ZigBee.

A year ago, the wireless-studded, serial-equipped FCU1101 might have been called an IoT gateway, but the name seems to be going out of fashion. A similar system with a more powerful processor than the FCU1101‘s power-efficient, Cortex-A7 based NXP i.MX6 UltraLite (UL) might today be called an edge server. Forlinx calls its mini-PC sized, 105 x 100 x 33mm device what we used to call them back in the day: an embedded computer.

FCU1101 without antennas

The FCU1101 is notable for being one of the few embedded systems we’ve seen without a USB port. Instead, the device turns its limited real estate over to 4x RS485 ports deployed with terminal connectors. The serial ports are 1.5kV-isolated and protected against electrostatic discharge per ESD Level 4. They also support Modbus protocols.

FCU1101 with antennas

The other major component is a set of four wireless radios and three external antennas. The 2.4GHz ZigBee and 433MHz LoRa modems share an antenna. The 4G module and the 802.11b/g/n radio with optional STA and AP mode support have their own antennas.

The antenna for the Netcom 4G module, which lists support only for Chinese carriers, is a tethered, standalone unit to avoid cross-interference. There’s also a SIM slot for 4G and a 10/100 Ethernet port. The spec list suggests there is a similar system available that adds GPS and an audio interface.

FCU1101 front detail view

The FCU1101’s 528MHz, single-core i.MX6 UL SoC is backed up with 256MB DDR3L RAM, 256MB to 1GB NAND flash, and a microSD slot. There are also reset and boot buttons, 2x LEDs, and an RTC.

FCU1101 rear detail view

The system supports -35 to 70℃ temperatures (0 to 70℃ when using WiFi), and appears to offer two different power inputs, neither of which are shown in the detail views above. There’s a 12V input that is said to support 9-36V and offer anti-reverse and over-current protection, as well as a 24V input with 12-24V support and reverse protection.

The Linux 3.14.38 stack ships with a Yaffs2 file system, MQTT, and a wide range of web server and network protocol support.

Further information

No pricing or availability information was provided for the FCU1101. More information may be found on the Forlinx FCU1101 product page.

Source

The Linux Foundation Announces 2019 Events Schedule

The Linux Foundation hosts the premier open source events around the world to enable technologists and other leaders to come together and drive innovation

SAN FRANCISCO, January 15, 2019 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced its 2019 events schedule. Linux Foundation events are where the creators, maintainers and practitioners of the world’s most important open source projects meet. In 2018, Linux Foundation events attracted more than 32,000 developers, architects, community thought leaders, business executives and other industry professionals from more than 11,000 organizations across 113 countries. New events hosted by the Linux Foundation for 2019 include Cephalocon and gRPC Conf.

The Linux Foundation’s 2019 events will gather more than 35,000 open source influencers to learn from each other about new trends in open source and share knowledge of best practices across projects dealing with operating systems, cloud applications, containers, IoT, networking, data processing, security, storage, AI, software architecture, edge computing and more. Events are hosted by the Linux Foundation and its projects, including Automotive Grade Linux, Cloud Foundry, the Cloud Native Computing Foundation and Kubernetes, Hyperledger, LF Networking and ONAP. The events also look at the business side of open source, gathering managers and technical leaders to learn about compliance, governance, building an open source office and other areas.

“Linux Foundation events bring open source leaders, technologists and enthusiasts together in locations around the world to work together, network and advance how open source is expanding and developing in various industries,” said Jim Zemlin, Executive Director at the Linux Foundation. “Our events proudly accelerate progress and creativity within the larger community and provide in-person contact that is vital to successful collaboration.”

With the new year come several new co-located events. After incorporating what was previously known as LinuxCon + ContainerCon + CloudOpen (LC3), the Shanghai event on June 24-26 – KubeCon + CloudNativeCon + Open Source Summit China – will now be the largest open source conference in China. Also, Embedded Linux Conference North America will now be co-located with Open Source Summit North America, as Embedded Linux Conference Europe has been with Open Source Summit Europe for several years.

The complete schedule and descriptions of all 2019 events follow below.

The Linux Foundation’s 2019 Schedule of Events
Automotive Grade Linux (AGL) All Member Meeting
March 5-6, 2019
Tokyo, Japan
The Automotive Grade Linux (AGL) All Member Meeting takes place twice a year and brings the AGL community together to learn about the latest developments, share best practices and collaborate to drive rapid innovation across the industry.

Open Source Leadership Summit
March 12-14, 2019
Half Moon Bay, California
The Linux Foundation Open Source Leadership Summit is the premier forum where open source leaders convene to drive digital transformation with open source technologies and learn how to collaboratively manage the largest shared technology investment of our time. An intimate, by invitation only event, Open Source Leadership Summit fosters innovation, growth and partnerships among the leading projects and corporations working in open technology development.

gRPC Conf 2019
March 21, 2019
Sunnyvale, California
Experts will discuss real-world implementations of gRPC, best practices for developers, and topic expert deep dives. This is a must-attend event for those using gRPC in their applications today as well as those considering gRPC for their enterprise microservices.

Cloud Foundry Summit
April 2-4, 2019
Philadelphia, Pennsylvania
From startups to the Fortune 500, Cloud Foundry is used by businesses around the globe to automate, scale and manage cloud apps throughout their lifecycle. Whether they are a contributor or committer building the platform, or using the platform to attain business goals, Cloud Foundry Summit is where developers, operators, CIOs and other IT professionals go to share best practices and innovate together.

Open Networking Summit North America
April 3-5, 2019
San Jose, California
Open Networking Summit is the industry’s premier open networking event, gathering enterprises, service providers and cloud providers across the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking, including SDN, NFV, orchestration and the automation of cloud, network, & IoT services.

Linux Storage, Filesystem and Memory Management Summit
April 30-May 2, 2019
San Juan, Puerto Rico
The Linux Storage, Filesystem & Memory Management Summit gathers the foremost development and research experts and kernel subsystem maintainers to map out and implement improvements to the Linux filesystem, storage and memory management subsystems that will find their way into the mainline kernel and Linux distributions in the next 24-48 months.

Cephalocon
May 19-20, 2019
Barcelona, Spain
Cephalocon Barcelona aims to bring together more than 800 technologists and adopters from across the globe to showcase Ceph’s history and its future, demonstrate real-world applications, and highlight vendor solutions.

KubeCon + CloudNativeCon Europe
May 20-23, 2019
Barcelona, Spain
The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities. Join developers using Kubernetes, Prometheus, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, Linkerd, Helm, Harbor and etcd as the community gathers for four days to further the education and advancement of cloud native computing.

KubeCon + CloudNativeCon + Open Source Summit China
June 24-26, 2019
Shanghai, China
In 2019, KubeCon + CloudNativeCon and Open Source Summit combine together for one event in China. KubeCon + CloudNativeCon gathers all CNCF projects under one roof. Join leading technologists from open source cloud native communities to further the advancement of cloud native computing. Previously known as LinuxCon + CloudOpen + ContainerCon China (LC3), Open Source Summit gathers technologists and open source industry leaders to collaborate, share information and learn about the newest and most interesting open source technologies, including Linux, IoT, blockchain, AI, networking, and more.

Open Source Summit Japan
July 17-19, 2019
Tokyo, Japan
Open Source Summit Japan is the leading conference in Japan connecting the open source ecosystem under one roof, providing a forum for technologists and open source industry leaders to collaborate and share information, learn about the latest in open source technologies and find out how to gain a competitive advantage by using innovative open solutions.

Automotive Linux Summit
July 17-19, 2019
Tokyo, Japan
Automotive Linux Summit connects the developer community driving the innovation in automotive Linux together with the vendors and users providing and using the code in order to drive the future of embedded devices in the automotive arena.

Linux Security Summit North America
August 19-21, 2019
San Diego, California
The Linux Security Summit (LSS) is a technical forum for collaboration between Linux developers, researchers, and end users with the primary aim of fostering community efforts in analyzing and solving Linux security challenges. LSS is where key Linux security community members and maintainers gather to present and discuss their work and research to peers, joined by those who wish to keep up with the latest in Linux security development and who would like to provide input to the development process.

Open Source Summit + Embedded Linux Conference North America
August 21-23, 2019
San Diego, California
Open Source Summit North America connects the open source ecosystem under one roof. It’s a unique environment for cross-collaboration between developers, sysadmins, devops, architects and others who are driving technology forward. Embedded Linux Conference (ELC) is the premier vendor-neutral technical conference where developers working on embedded Linux and industrial IoT products and deployments gather for education and collaboration, paving the way for innovation. For the first time in 2019, Embedded Linux Conference North America will co-locate with Open Source Summit North America.

Linux Plumbers Conference
September 9-11, 2019
Lisbon, Portugal
The Linux Plumbers Conference is the premier event for developers working at all levels of the plumbing layer and beyond.

Kernel Maintainer Summit
September 12, 2019
Lisbon, Portugal
The Linux Kernel Summit brings together the world’s leading core kernel developers to discuss the state of the existing kernel and plan the next development cycle.

Cloud Foundry Summit Europe
September 11-12, 2019
The Hague, The Netherlands
From startups to the Fortune 500, Cloud Foundry is used by businesses around the globe to automate, scale and manage cloud apps throughout their lifecycle. Whether they are a contributor or committer building the platform, or using the platform to attain business goals, Cloud Foundry Summit Europe is where developers, operators, CIOs and other IT professionals go to share best practices and innovate together.

Open Networking Summit Europe
September 23-25, 2019
Antwerp, Belgium
Open Networking Summit Europe is the industry’s premier open networking event, gathering enterprises, service providers and cloud providers across the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking, including SDN, NFV, orchestration and the automation of cloud, network, & IoT services.

Open Source Summit + Embedded Linux Conference Europe
October 28-30, 2019
Lyon, France
Open Source Summit Europe is the leading conference for developers, architects, and other technologists – as well as open source community and industry leaders – to collaborate, share information, learn about the latest technologies and gain a competitive advantage by using innovative open solutions. The co-located Embedded Linux Conference is the premier vendor-neutral technical conference where developers working on embedded Linux and industrial IoT products and deployments gather for education and collaboration, paving the way for innovation.

Linux Security Summit Europe
October 31-November 1, 2019
Lyon, France
The Linux Security Summit (LSS) is a technical forum for collaboration between Linux developers, researchers, and end users with the primary aim of fostering community efforts in analyzing and solving Linux security challenges.

KubeCon + CloudNativeCon North America
November 18-21, 2019
San Diego, California
The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities. Join developers using Kubernetes, Prometheus, Envoy, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, Linkerd, Helm, Harbor and etcd to learn and advance cloud native computing.

Open FinTech Forum
December 9, 2019
New York, New York
Focusing on the intersection of financial services and open source, Open FinTech Forum will provide CIOs and senior technologists guidance on building internal open source programs as well as an in-depth look at cutting-edge open source technologies – including AI, Blockchain/Distributed Ledger and Kubernetes/Containers – that can be leveraged to drive efficiencies and flexibility.

Event dates and locations will be announced shortly for additional 2019 events including:

  • The API Strategy & Practice Conference (APIStrat)
  • KVM Forum
  • Open Compliance Forum
  • And much more!

Speaking proposals are now being accepted for the following 2019 events:

  • KubeCon + CloudNativeCon Europe (Submission deadline: January 18)
  • Open Networking Summit North America (Submission deadline: January 21)
  • gRPC Conf 2019 (Submission deadline: January 23)
  • Automotive Grade Linux (AGL) All Member Meeting (Submission deadline: January 23)
  • Open Source Leadership Summit (Submission deadline: January 28)
  • Cephalocon (Submission deadline: February 1)
  • KubeCon + CloudNativeCon + Open Source Summit China (Submission deadline: February 15)
  • Automotive Linux Summit (Submission deadline: March 24)
  • Open Source Summit Japan (Submission deadline: March 24)
  • Linux Security Summit North America (Submission details coming soon)
  • Open Source Summit + Embedded Linux Conference North America (Submission deadline: April 2)
  • Linux Plumbers Conference (Submission details coming soon)
  • Kernel Maintainer Summit (Submission details coming soon)
  • Cloud Foundry Summit Europe (Submission details coming soon)
  • Open Networking Summit Europe (Submission deadline: June 16)
  • Open Source Summit + Embedded Linux Conference Europe (Submission deadline: July 1)
  • Linux Security Summit Europe (Submission details coming soon)
  • KubeCon + CloudNativeCon North America (Submission dates: May 6 – July 12)
  • Open FinTech Forum (Submission dates: January 17 – September 22)

Speaking proposals for all events can be submitted at https://linuxfoundation.smapply.io/.

For more information about all Linux Foundation events, please visit: http://events.linuxfoundation.org.


About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

Media Contact:
Dan Brown
The Linux Foundation
415-420-7880
dbrown@linuxfoundation.org

Source

How to Use Netcat to Quickly Transfer Files Between Linux Computers | Linux.com

There’s no shortage of software solutions that can help you transfer files between computers. However, if you do this very rarely, the typical solutions such as NFS and SFTP (through OpenSSH) might be overkill. Furthermore, these services are permanently open to receiving and handling incoming connections. Configured incorrectly, this might make your device vulnerable to certain attacks.

netcat, the so-called “TCP/IP swiss army knife,” can be used as an ad-hoc solution for transferring files through local networks or the Internet. It’s also useful for transferring data to/from your virtual machines or containers when they don’t include the feature out of the box. You can even use it as a copy-paste mechanism between two devices.

How to Install netcat on Various Linux Distributions

Most Linux-based operating systems come with netcat pre-installed. Open a terminal and type:

[Screenshot: nc command not found]
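Presumably the check is just running nc and seeing whether the shell finds it. A safe way to test, assuming the binary is named nc on your distribution:

```shell
# Print the path of nc if it is installed;
# otherwise print the familiar error message
command -v nc || echo "nc: command not found"
```

If you see a path such as /usr/bin/nc, netcat is already installed and you can skip the installation steps below.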

If the command is not found, install the package that contains the BSD variant of netcat. There is also GNU netcat, which has fewer features. You need netcat on both the computer receiving the file and the one sending it.

On Debian-based distributions such as Ubuntu or Linux Mint, install the utility with:
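The missing command is presumably the stock apt install; on Debian-family distributions the BSD variant ships in the netcat-openbsd package:

```shell
# Install the OpenBSD variant of netcat (Debian/Ubuntu/Mint)
sudo apt install netcat-openbsd
```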

With openSUSE, follow the instructions on this page, specific to your exact distribution.

On Arch Linux enter the following command:
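On Arch the BSD variant is presumably the openbsd-netcat package:

```shell
# Install the OpenBSD variant of netcat from the Arch repositories
sudo pacman -S openbsd-netcat
```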

Unfortunately, the RedHat family doesn’t include the BSD or GNU variants of netcat. For some odd reason, they decided to go with nmap-ncat. While similar, some command-line options are not available – for example, -N. This means you will have to replace a line such as nc -vlN 1234 > nc with nc -vl 1234 > nc so that it works on RedHat/Fedora.

To install ncat on RedHat:

And on Fedora:
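Both missing commands are presumably the stock package installs; the nmap-ncat package provides the ncat binary (with an nc symlink on most releases):

```shell
# RHEL/CentOS:
sudo yum install nmap-ncat

# Fedora:
sudo dnf install nmap-ncat
```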

How to Use netcat to Transfer Files Between Linux Computers

On the computer that will receive the file, find the IP address used on your internal network.

After “src” you will see the internal network IP address of the device. If, for some reason, the results are not relevant, you can also try:

[Screenshot: finding the internal IP address]

In the screenshot offered as an example, the IP is 10.11.12.10.
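The commands behind these screenshots are presumably from iproute2: ip route get asks the kernel which source address (the “src” field) it would use to reach an outside host, with ip addr show as a fallback that lists every interface:

```shell
# The "src" field is the address this machine uses on the local network;
# fall back to listing all interfaces if there is no default route
ip route get 8.8.8.8 2>/dev/null || ip addr show
```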

On the same computer, the one that will receive the file, enter this command:

[Screenshot: receiving the file]

And on the computer which will send the file, type this, replacing 10.11.12.10 with the IP you discovered earlier:

[Screenshot: sending the file]
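The screenshots presumably show a pair of commands of this shape (port 44444 and the file names are examples). The last three lines are a local stand-in using the same shell redirections, so the sketch can run on a single machine:

```shell
# Receiver (run first, on the machine with IP 10.11.12.10):
#   nc -vl 44444 > file.png
# Sender (run second, on the other machine):
#   nc -N 10.11.12.10 44444 < Pictures/file.png

# Local stand-in for the same data path, with no network involved:
printf 'example payload' > original.bin
cat < original.bin > received.bin   # < feeds the file in, > captures the output
cmp original.bin received.bin && echo "transfer OK"
```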

Directory and file paths can be absolute or relative. An absolute path is “/home/user/Pictures/file.png.” But if you already are in “/home/user,” you can use the relative path, “Pictures/file.png,” as seen in the screenshot above.

In the first command two parameters were used: -v and -l. -v makes the output verbose, printing more details so you can see what is going on. -l makes the utility “listen” on port 44444, essentially opening a communication channel on the receiving device. If you have firewall rules active, make sure they are not blocking the connection.

In the second command, -N makes netcat close when the transfer is done.

Normally, netcat would output in the terminal everything it receives. > creates a redirect for this output: instead of printing it on the screen, it sends all output to the file specified after >. < works in reverse, taking input from the specified file instead of waiting for input from the keyboard.

If you use the above commands without redirections, e.g., nc -vl 44444 and nc -N 10.11.12.10 44444, you create a rudimentary “chat” between the two devices. If you write something in one terminal and press Enter, it will appear on the other computer. This is how you can copy and paste text from one device to the other. Press Ctrl + D (on the sender) or Ctrl + C (anywhere) to close the connection.

Optimize File Transfers

When you send large files, you can compress them on the fly to speed up the transfer.

On the receiving end enter:

And on the sender, enter the following, replacing 10.11.12.10 with the IP address of your receiving device:
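The missing commands are presumably a gzip pipe on each end (IP and port are examples); the last lines demonstrate the same compress/decompress pipeline locally:

```shell
# Receiver:  nc -vl 44444 | gunzip > file.png
# Sender:    gzip -c file.png | nc -N 10.11.12.10 44444

# The same gzip round trip, demonstrated without a network:
printf 'some large file contents' > big.bin
gzip -c big.bin | gunzip > copy.bin
cmp big.bin copy.bin && echo "compressed round trip OK"
```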

Send and Receive Directories

Obviously, sometimes you may want to send multiple files at once, for example, an entire directory. The following will also compress them before sending through the network.

On the receiving end, use this command:

[Screenshot: receiving a tar-gzipped directory]

On the sending device, use:

[Screenshot: sending a tar-gzipped directory]
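The screenshots presumably show a tar pipeline along these lines (the directory name is an example); the local demo at the end exercises the same tar-gzip stream on one machine:

```shell
# Receiver:  nc -vl 44444 | tar xzvf -
# Sender:    tar czvf - Pictures | nc -N 10.11.12.10 44444

# The same tar-gzip stream, demonstrated locally:
mkdir -p srcdir outdir
echo "hello" > srcdir/file.txt
tar czf - srcdir | tar xzf - -C outdir
cmp srcdir/file.txt outdir/srcdir/file.txt && echo "directory round trip OK"
```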

Conclusion

Preferably, you would only use this on your local area network. The primary reason is that the network traffic is unencrypted: if you sent files to a server across the Internet, your data packets could be intercepted along the network path. If the files you transfer don’t contain sensitive data, that’s not a real issue. However, servers usually come with SSH preconfigured, so you can use SFTP for secure file transfers instead.

Source
