The Linux Foundation Announces 2019 Events Schedule

The Linux Foundation hosts the premier open source events around the world to enable technologists and other leaders to come together and drive innovation

SAN FRANCISCO, January 15, 2019 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced its 2019 events schedule. Linux Foundation events are where the creators, maintainers and practitioners of the world's most important open source projects meet. In 2018, Linux Foundation events attracted more than 32,000 developers, architects, community thought leaders, business executives and other industry professionals from more than 11,000 organizations across 113 countries. New events hosted by the Linux Foundation for 2019 include Cephalocon and gRPC Conf.

The Linux Foundation's 2019 events will gather more than 35,000 open source influencers to learn from each other about new trends in open source and share knowledge of best practices across projects dealing with operating systems, cloud applications, containers, IoT, networking, data processing, security, storage, AI, software architecture, edge computing and more. Events are hosted by the Linux Foundation and its projects, including Automotive Grade Linux, Cloud Foundry, the Cloud Native Computing Foundation and Kubernetes, Hyperledger, LF Networking and ONAP. The events also look at the business side of open source, gathering managers and technical leaders to learn about compliance, governance, building an open source office and other areas.

“Linux Foundation events bring open source leaders, technologists and enthusiasts together in locations around the world to work together, network and advance how open source is expanding and developing in various industries,” said Jim Zemlin, Executive Director at the Linux Foundation. “Our events proudly accelerate progress and creativity within the larger community and provide in-person contact that is vital to successful collaboration.”

With the new year come several new co-located events. After incorporating what was previously known as LinuxCon + ContainerCon + CloudOpen (LC3), the Shanghai event on June 24-26 – KubeCon + CloudNativeCon + Open Source Summit China – will now be the largest open source conference in China. Also, Embedded Linux Conference North America will now be co-located with Open Source Summit North America, as Embedded Linux Conference Europe has been with Open Source Summit Europe for several years.

The complete schedule and descriptions of all 2019 events follow below.

The Linux Foundation’s 2019 Schedule of Events
Automotive Grade Linux (AGL) All Member Meeting
March 5-6, 2019
Tokyo, Japan
The Automotive Grade Linux (AGL) All Member Meeting takes place bi-annually and brings the AGL community together to learn about the latest developments, share best practices and collaborate to drive rapid innovation across the industry.

Open Source Leadership Summit
March 12-14, 2019
Half Moon Bay, California
The Linux Foundation Open Source Leadership Summit is the premier forum where open source leaders convene to drive digital transformation with open source technologies and learn how to collaboratively manage the largest shared technology investment of our time. An intimate, by invitation only event, Open Source Leadership Summit fosters innovation, growth and partnerships among the leading projects and corporations working in open technology development.

gRPC Conf 2019
March 21, 2019
Sunnyvale, California
Experts will discuss real-world implementations of gRPC, best practices for developers, and topic expert deep dives. This is a must-attend event for those using gRPC in their applications today as well as those considering gRPC for their enterprise microservices.

Cloud Foundry Summit
April 2-4, 2019
Philadelphia, Pennsylvania
From startups to the Fortune 500, Cloud Foundry is used by businesses around the globe to automate, scale and manage cloud apps throughout their lifecycle. Whether they are a contributor or committer building the platform, or using the platform to attain business goals, Cloud Foundry Summit is where developers, operators, CIOs and other IT professionals go to share best practices and innovate together.

Open Networking Summit North America
April 3-5, 2019
San Jose, California
Open Networking Summit is the industry’s premier open networking event, gathering enterprises, service providers and cloud providers across the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking, including SDN, NFV, orchestration and the automation of cloud, network, & IoT services.

Linux Storage, Filesystem and Memory Management Summit
April 30-May 2, 2019
San Juan, Puerto Rico
The Linux Storage, Filesystem & Memory Management Summit gathers the foremost development and research experts and kernel subsystem maintainers to map out and implement improvements to the Linux filesystem, storage and memory management subsystems that will find their way into the mainline kernel and Linux distributions in the next 24-48 months.

Cephalocon
May 19-20, 2019
Barcelona, Spain
Cephalocon Barcelona aims to bring together more than 800 technologists and adopters from across the globe to showcase Ceph’s history and its future, demonstrate real-world applications, and highlight vendor solutions.

KubeCon + CloudNativeCon Europe
May 20-23, 2019
Barcelona, Spain
The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities. Join developers using Kubernetes, Prometheus, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, Linkerd, Helm, Harbor and etcd as the community gathers for four days to further the education and advancement of cloud native computing.

KubeCon + CloudNativeCon + Open Source Summit China
June 24-26, 2019
Shanghai, China
In 2019, KubeCon + CloudNativeCon and Open Source Summit combine together for one event in China. KubeCon + CloudNativeCon gathers all CNCF projects under one roof. Join leading technologists from open source cloud native communities to further the advancement of cloud native computing. Previously known as LinuxCon + CloudOpen + ContainerCon China (LC3), Open Source Summit gathers technologists and open source industry leaders to collaborate, share information and learn about the newest and most interesting open source technologies, including Linux, IoT, blockchain, AI, networking, and more.

Open Source Summit Japan
July 17-19, 2019
Tokyo, Japan
Open Source Summit Japan is the leading conference in Japan connecting the open source ecosystem under one roof, providing a forum for technologists and open source industry leaders to collaborate and share information, learn about the latest in open source technologies and find out how to gain a competitive advantage by using innovative open solutions.

Automotive Linux Summit
July 17-19, 2019
Tokyo, Japan
Automotive Linux Summit connects the developer community driving the innovation in automotive Linux together with the vendors and users providing and using the code in order to drive the future of embedded devices in the automotive arena.

Linux Security Summit North America
August 19-21, 2019
San Diego, California
The Linux Security Summit (LSS) is a technical forum for collaboration between Linux developers, researchers, and end users with the primary aim of fostering community efforts in analyzing and solving Linux security challenges. LSS is where key Linux security community members and maintainers gather to present and discuss their work and research to peers, joined by those who wish to keep up with the latest in Linux security development and who would like to provide input to the development process.

Open Source Summit + Embedded Linux Conference North America
August 21-23, 2019
San Diego, California
Open Source Summit North America connects the open source ecosystem under one roof. It’s a unique environment for cross-collaboration between developers, sysadmins, devops, architects and others who are driving technology forward. Embedded Linux Conference (ELC) is the premier vendor-neutral technical conference where developers working on embedded Linux and industrial IoT products and deployments gather for education and collaboration, paving the way for innovation. For the first time in 2019, Embedded Linux Conference North America will co-locate with Open Source Summit North America.

Linux Plumbers Conference
September 9-11, 2019
Lisbon, Portugal
The Linux Plumbers Conference is the premier event for developers working at all levels of the plumbing layer and beyond.

Kernel Maintainer Summit
September 12, 2019
Lisbon, Portugal
The Linux Kernel Summit brings together the world’s leading core kernel developers to discuss the state of the existing kernel and plan the next development cycle.

Cloud Foundry Summit Europe
September 11-12, 2019
The Hague, The Netherlands
From startups to the Fortune 500, Cloud Foundry is used by businesses around the globe to automate, scale and manage cloud apps throughout their lifecycle. Whether they are a contributor or committer building the platform, or using the platform to attain business goals, Cloud Foundry Summit Europe is where developers, operators, CIOs and other IT professionals go to share best practices and innovate together.

Open Networking Summit Europe
September 23-25, 2019
Antwerp, Belgium
Open Networking Summit Europe is the industry’s premier open networking event, gathering enterprises, service providers and cloud providers across the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking, including SDN, NFV, orchestration and the automation of cloud, network, & IoT services.

Open Source Summit + Embedded Linux Conference Europe
October 28-30, 2019
Lyon, France
Open Source Summit Europe is the leading conference for developers, architects, and other technologists – as well as open source community and industry leaders – to collaborate, share information, learn about the latest technologies and gain a competitive advantage by using innovative open solutions. The co-located Embedded Linux Conference is the premier vendor-neutral technical conference where developers working on embedded Linux and industrial IoT products and deployments gather for education and collaboration, paving the way for innovation.

Linux Security Summit Europe
October 31-November 1, 2019
Lyon, France
The Linux Security Summit (LSS) is a technical forum for collaboration between Linux developers, researchers, and end users with the primary aim of fostering community efforts in analyzing and solving Linux security challenges.

KubeCon + CloudNativeCon North America
November 18-21, 2019
San Diego, California
The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities. Join developers using Kubernetes, Prometheus, Envoy, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, Linkerd, Helm, Harbor and etcd to learn and advance cloud native computing.

Open FinTech Forum
December 9, 2019
New York, New York
Focusing on the intersection of financial services and open source, Open FinTech Forum will provide CIOs and senior technologists guidance on building internal open source programs as well as an in-depth look at cutting-edge open source technologies, including AI, Blockchain/Distributed Ledger, Kubernetes/Containers, that can be leveraged to drive efficiencies and flexibility.

Event dates and locations will be announced shortly for additional 2019 events including:

  • The API Strategy & Practice Conference (APIStrat)
  • KVM Forum
  • Open Compliance Forum
  • And much more!

Speaking proposals are now being accepted for the following 2019 events:

  • KubeCon + CloudNativeCon Europe (Submission deadline: January 18)
  • Open Networking Summit North America (Submission deadline: January 21)
  • gRPC Conf 2019 (Submission deadline: January 23)
  • Automotive Grade Linux (AGL) All Member Meeting (Submission deadline: January 23)
  • Open Source Leadership Summit (Submission deadline: January 28)
  • Cephalocon (Submission deadline: February 1)
  • KubeCon + CloudNativeCon + Open Source Summit China (Submission deadline: February 15)
  • Automotive Linux Summit (Submission deadline: March 24)
  • Open Source Summit Japan (Submission deadline: March 24)
  • Linux Security Summit North America (Submission details coming soon)
  • Open Source Summit + Embedded Linux Conference North America (Submission deadline: April 2)
  • Linux Plumbers Conference (Submission details coming soon)
  • Kernel Maintainer Summit (Submission details coming soon)
  • Cloud Foundry Summit Europe (Submission details coming soon)
  • Open Networking Summit Europe (Submission deadline: June 16)
  • Open Source Summit + Embedded Linux Conference Europe (Submission deadline: July 1)
  • Linux Security Summit Europe (Submission details coming soon)
  • KubeCon + CloudNativeCon North America (Submission dates: May 6 – July 12)
  • Open FinTech Forum (Submission dates: January 17 – September 22)

Speaking proposals for all events can be submitted at https://linuxfoundation.smapply.io/.

For more information about all Linux Foundation events, please visit: http://events.linuxfoundation.org.


About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

Media Contact:
Dan Brown
The Linux Foundation
415-420-7880
dbrown@linuxfoundation.org

Source

How to Use Netcat to Quickly Transfer Files Between Linux Computers | Linux.com

There’s no shortage of software solutions that can help you transfer files between computers. However, if you do this very rarely, the typical solutions such as NFS and SFTP (through OpenSSH) might be overkill. Furthermore, these services are permanently open to receiving and handling incoming connections. Configured incorrectly, this might make your device vulnerable to certain attacks.

netcat, the so-called “TCP/IP swiss army knife,” can be used as an ad-hoc solution for transferring files through local networks or the Internet. It’s also useful for transferring data to/from your virtual machines or containers when they don’t include the feature out of the box. You can even use it as a copy-paste mechanism between two devices.

How to Install netcat on Various Linux Distributions

Most Linux-based operating systems come with netcat pre-installed. Open a terminal and type:

[Screenshot: the shell reporting "command not found" for netcat]
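
(The command shown in the missing screenshot is not preserved; assuming the BSD variant provides the nc binary, a quick check would be:)

nc -h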

If the command is not found, install the package that contains the BSD variant of netcat. There is also GNU's version of netcat, which contains fewer features. You need netcat on both the computer receiving the file and the one sending it.

On Debian-based distributions such as Ubuntu or Linux Mint, install the utility with:
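
(The install command itself is missing from this copy; on Debian and Ubuntu the BSD variant is packaged as netcat-openbsd, so the following should do it:)

sudo apt install netcat-openbsd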

With openSUSE, follow the instructions on this page, specific to your exact distribution.

On Arch Linux enter the following command:
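
(Again, the command is missing here; on Arch the BSD variant is packaged as openbsd-netcat:)

sudo pacman -S openbsd-netcat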

Unfortunately, the Red Hat family doesn't include the BSD or GNU variants of netcat. For some odd reason, they decided to go with nmap-ncat. While similar, some command line options are not available, for example -N. This means you will have to replace a line such as nc -vlN 1234 > nc with nc -vl 1234 > nc so that it works on Red Hat/Fedora.

To install ncat on RedHat:
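
(The command is missing from this copy; nmap-ncat is the package name:)

sudo yum install nmap-ncat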

And on Fedora:
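
(Assuming the same package name on Fedora:)

sudo dnf install nmap-ncat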

How to Use netcat to Transfer Files Between Linux Computers

On the computer that will receive the file, find the IP address used on your internal network.
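
(The command itself was lost from this copy; judging from the "src" field mentioned below, it was an ip route query along these lines, with 8.8.8.8 standing in for any external address:)

ip route get 8.8.8.8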

After “src” you will see the internal network IP address of the device. If, for some reason, results are irrelevant, you can also try:

[Screenshot: listing network interface addresses]
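
(The screenshot's command is not preserved; a common alternative for listing the addresses of all interfaces is:)

ip addr show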

In the screenshot offered as an example, the IP is 10.11.12.10.

On the same computer, the one that will receive the file, enter this command:

[Screenshot: netcat listening for an incoming file]
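
(A reconstruction based on the parameters explained below, with file.png as an illustrative output name rather than the article's original:)

nc -vl 44444 > file.png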

And on the computer which will send the file, type this, replacing 10.11.12.10 with the IP you discovered earlier:

[Screenshot: sending a file with netcat]
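
(Again a reconstruction; the -N flag and the relative path match the explanation that follows:)

nc -N 10.11.12.10 44444 < Pictures/file.png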

Directory and file paths can be absolute or relative. An absolute path is “/home/user/Pictures/file.png.” But if you already are in “/home/user,” you can use the relative path, “Pictures/file.png,” as seen in the screenshot above.

In the first command, two parameters were used: -v and -l. -v makes the output verbose, printing more details, so you can see what is going on. -l makes the utility "listen" on port 44444, essentially opening a communication channel on the receiving device. If you have firewall rules active, make sure they are not blocking the connection.

In the second command, -N makes netcat close when the transfer is done.

Normally, netcat would output in the terminal everything it receives. > creates a redirect for this output: instead of printing it on the screen, it sends all output to the file specified after >. < works in reverse, taking input from the specified file instead of waiting for input from the keyboard.

If you use the above commands without redirections, e.g., nc -vl 44444 and nc -N 10.11.12.10 44444, you create a rudimentary "chat" between the two devices. If you write something in one terminal and press Enter, it will appear on the other computer. This is how you can copy and paste text from one device to the other. Press Ctrl + D (on the sender) or Ctrl + C (anywhere) to close the connection.

Optimize File Transfers

When you send large files, you can compress them on the fly to speed up the transfer.

On the receiving end enter:
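
(The original command is missing here; a minimal sketch assuming gzip handles the on-the-fly compression, with file.png as a placeholder name:)

nc -vl 44444 | gzip -d > file.png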

And on the sender, enter the following, replacing 10.11.12.10 with the IP address of your receiving device:
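
(And the matching sender side of that sketch:)

gzip -c file.png | nc -N 10.11.12.10 44444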

Send and Receive Directories

Obviously, sometimes you may want to send multiple files at once, for example, an entire directory. The following will also compress them before sending through the network.

On the receiving end, use this command:

[Screenshot: receiving and unpacking a gzipped tar stream with netcat]
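
(A reconstruction based on the screenshot's title – receive the stream and unpack the gzipped tar archive on the fly:)

nc -vl 44444 | tar -xzvf -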

On the sending device, use:

[Screenshot: sending a gzipped tar stream with netcat]
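
(And the matching sender, with your_directory as a placeholder for the directory you want to send:)

tar -czvf - your_directory | nc -N 10.11.12.10 44444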

Conclusion

Preferably, you would only use this on your local area network. The primary reason is that the network traffic is unencrypted: if you were to send files to a server over the Internet, your data packets could be intercepted along the network path. But if the files you transfer do not contain sensitive data, it's not a real issue. However, servers usually have SSH preconfigured to accept secure FTP connections, so you can use SFTP instead for file transfers.

Source

PlaidML Deep Learning Framework Benchmarks With OpenCL On NVIDIA & AMD GPUs

Pointed out by a Phoronix reader a few days ago and added to the Phoronix Test Suite is the PlaidML deep learning framework that can run on CPUs using BLAS or also on GPUs and other accelerators via OpenCL. Here are our initial benchmarks of this OpenCL-based deep learning framework that is now being developed as part of Intel’s AI Group and tested across a variety of AMD Radeon and NVIDIA GeForce graphics cards.

Over the weekend I carried out a wide variety of benchmarks with PlaidML and its OpenCL back-end for both NVIDIA and AMD graphics cards. The Radeon tests were done with the ROCm 2.0 OpenCL stack, which worked without any trouble, as did NVIDIA's OpenCL driver stack. Benchmarks were done with a variety of neural networks, covering both training and inference.
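
For readers who want to poke at PlaidML outside the Phoronix Test Suite, the project ships its own benchmarking tool. A rough sketch based on PlaidML's upstream quick-start rather than anything in this article, so treat the package and subcommand names as assumptions:

pip install --user plaidml-keras plaidbench
plaidml-setup                  # interactively select the OpenCL device to use
plaidbench keras mobilenet     # run a MobileNet benchmark on the selected device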

The graphics cards available for testing included the following 16 GPUs:

– RX 580

– RX 590

– RX Vega 56

– RX Vega 64

– GTX 980

– GTX 980 Ti

– GTX 1060

– GTX 1070

– GTX 1070 Ti

– GTX 1080

– GTX 1080 Ti

– RTX 2060

– RTX 2070

– RTX 2080

– RTX 2080 Ti

– TITAN RTX

All of the tests were run from an AMD Ryzen Threadripper 2990WX workstation with ASUS ROG ZENITH EXTREME motherboard, 4 x 8GB DDR4-3200 memory, and Samsung 970 EVO 500GB NVMe SSD. Ubuntu 18.10 was running on the system with the Linux 4.20.0 kernel and GCC 8.2 compiler.

These PlaidML benchmarks were carried out using the Phoronix Test Suite. Coming up later this week will be PlaidML CPU benchmarks across a variety of operating systems.
Source

CentOS Install AWS CLI – Linux Hint

Amazon is one of the most popular cloud platform providers. Using its services, any business can access computing power, content delivery, database storage and a number of additional features, which makes AWS a strong platform for businesses to scale and grow. One of the most business-friendly aspects of AWS is the pricing: instead of charging a large sum at the start of each month or year, Amazon bills its services like utilities. You pay only for what you use, for as long as you use it.

With this flexible pricing, AWS offers a platform for almost any use case – data warehousing, directories, content delivery, deployment tools and much more.

Another important factor of any cloud platform is security, and AWS is strong here as well. Broad security certification and accreditation, strong data encryption at rest and in transit, hardware security modules and strong physical security all help make it a solid foundation for IT infrastructure.

To work with AWS services, there is already a powerful console tool available, known as the AWS CLI. It puts the controls for multiple AWS services within a single command-line tool.

For enterprise Linux, CentOS/RHEL is a solid choice, backed by a large community and professional support. Today, let's walk through setting up the AWS CLI tool on CentOS/RHEL.

Setting up the system

To install the AWS CLI, we need to set up "pip" first. pip is the package manager for Python; with it, you can download and install various Python tools directly on your system.

pip is available from the EPEL repository, not the default one. Enable the EPEL repo on your system:

sudo yum install epel-release

Make sure that the "yum" database cache is up-to-date –

sudo yum update

Now, it’s time to install “pip”!

sudo yum install python-pip

Note – The best way to enjoy the software is to use Python 3. Python 2.7 will become obsolete at some point, and Python 3 is going to prevail. Learn how to set up Python 3 on CentOS.

For Python 3, you need "pip3", which is built specifically for the Python 3 platform. Install "pip3" with the following command –

sudo yum install python34-pip

Installing AWS CLI

  • Using "pip" directly

Update the “pip” or “pip3” first.

sudo pip install --upgrade pip

# OR

sudo pip3 install --upgrade pip

Once the PIP tool is installed, we can install the AWS CLI tool. Just run the following command –

pip3 install awscli --upgrade --user

In the above command, pip3 installs "awscli", upgrades any outdated components as necessary and, thanks to the --user flag, installs the tool under the user's own directory (avoiding file conflicts with the system libraries).

  • Using the “bundle” installer

This method also uses the “pip” or “pip3” tool from Python, so everything should work just fine as before.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"

Extract the downloaded file –

unzip awscli-bundle.zip

Now, run the install executable –

sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

If your user account doesn't have permission to run "sudo" commands, follow these steps instead.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

Now, it's time to make sure that the environment variables are set so the shell can find the AWS CLI.

echo $PATH | grep ~/bin

As you can see, "~/bin" is in the PATH environment variable. If not, consider running the following command –

export PATH=~/bin:$PATH

Verifying the installation

Now the installation is complete. However, you should always make sure that whatever you've installed is working properly.

Run the following command –

aws --version

Voila! AWS CLI is installed correctly!

Uninstalling AWS CLI

Depending on your installation process, you can easily uninstall the tool from your system.

  • “pip” or “pip3”

    Run the following command –

    pip uninstall awscli
  • Bundle installer

    Run the following commands –

    sudo rm -rf /usr/local/aws
    sudo rm /usr/local/bin/aws

Enjoy!

Source

VLC Media Player Passes 3 Billion Downloads Mark, AirPlay Support Coming Soon

The open-source VLC Media Player app from VideoLAN reached a major milestone today as it just passed the 3 billion downloads mark on the project’s official website.

VLC is probably the most popular cross-platform media playback application available to date, used by millions of computer users worldwide on all major platforms, including GNU/Linux, Windows, macOS, Android, iOS, Chrome OS, and even Windows Phone OS.

It became one of the most popular media player apps mostly because of its ability to play any type of video without needing a codec pack. Most of the widely used video and audio codecs are incorporated into the application for a hassle-free video playback experience.

But you probably already knew that and are already using VLC as your main video player app on your personal computer, tablet, or mobile phone. What you probably didn't know is that VLC has been downloaded more than 3 billion times from the official website.

The statistics provided by developer VideoLAN aren’t for a specific platform as those are managed via OS app stores, including GNU/Linux distribution repositories. We’re talking here about how many times the app has been downloaded from the official website.

The one billion downloads mark was hit in 2012 and the 2 billion downloads mark in 2016. VideoLAN is present these days at CES 2019 in Las Vegas and celebrated the major milestone with a futuristic countdown at their booth, and said that they’re working on VLC 4, a massive release that would add long-anticipated features.

VLC 3.0.6 is out now, AirPlay support coming soon

The 3 billion downloads mark was probably reached today thanks to the sixth update in the "Vetinari" series. VLC 3.0.6 adds support for 12-bit AV1 decoding, adds HDR support in AV1 when the container provides the metadata, and fixes an issue with DVD subtitles.

You can download VLC 3.0.6 for GNU/Linux, macOS, and Windows right now through our free software portal or directly from the official website if you want to contribute to future download milestones.

Meanwhile, VideoLAN promises to add AirPlay support to VLC for Android, according to developer Jean-Baptiste Kempf (via Variety), allowing users to stream videos from their mobile devices to Apple TVs. AirPlay support should be available in an update next month.

In front of an audience at #CES2019 we reached the 3 billion downloads of #VLC in live. What an amazing journey! See you in a billion 🙂 pic.twitter.com/sncDsiQ1J3
— Ludovic Fauvet (@etixxx) January 10, 2019

Source

Axiomtek announces its first Type 7 module

Axiomtek announced a “CEM700” COM Express Type 7 Basic module with 5th Gen Intel Xeon or Pentium CPUs, 2x 10GBASE-KR, 2x SATA III, plenty of PCIe, extended temperature support, and a new Type 7 carrier board.

The CEM700, which follows earlier Axiomtek COM Express modules such as the Intel Skylake based CEM501, is the company's first COM Express Type 7 module. Axiomtek is starting its Type 7 adventure with a choice of server-class Xeon D-1500 and Pentium D1519 processors from Intel's 5th Generation "Broadwell-DE" family, led by the 16-core, 1.3/2.1GHz Xeon D-1577 and the quad-core, 1.5/2.1GHz Pentium D1519. No OS support is listed, but Linux is a given with headless Type 7 modules.

CEM700, front and back

Designed for edge computing, microserver, data transmission, and other networking applications, the 125 x 95mm module ships standard with 0 to 60°C or -20 to 70°C support, depending on whether you believe the product page or announcement, and there’s an option for a -40 to 85°C model. There’s also 3.5 Grms vibration resistance and an optional heatpipe cooler and heatspreader.

The CEM700 provides dual DDR4-2400 SO-DIMM slots for up to 32GB RAM plus dual SATA III interfaces for storage. The module features dual 10GBASE-KR interfaces, a single GbE controller, and an NC-SI (Network Controller Sideband Interface) for remote management.

I/O includes 4x USB 3.0, 4x USB 2.0, and 2x serial TX/RX interfaces. You also get single LPC, SMBus, SPI, and I2C interfaces plus 4-in, 4-out DIO.

Expansion interfaces include single PCIe x16 Gen3 and PCIe x8 Gen3 interfaces, as well as 8x PCIe x1 Gen2. Other features include a watchdog timer, hardware monitoring, and Trusted Platform Module (TPM) 2.0. There are 12V AT and ATX inputs, with the latter also spec’d at +5VSB.

CEB94701 carrier

The CEM700 is available with a new Type 7 development baseboard called the CEB94701. The 305 x 244mm board offers dual SATA III interfaces, dual 10GBASE-KR SFP+ ports, and a single GbE port.

CEB94701 detail view

The CEB94701 is further equipped with 4x USB 3.0 ports, 2x RS-232/422/485 ports, and 2x TX/RX headers. You'll find the I2C, DIO, SMBus, fan, and front-panel interfaces aligned along the edge of the board along with a BMC console port and VGA port. A buzzer and hardware monitoring are also available.

The dev board carries through the PCIe x16 and PCIe x8 (both Gen3) interfaces and offers PCIe x4 and x1 connections. There are also 2x full-size mini-PCIe slots.

The board is powered via a 24-pin ATX connector as well as a 4-pin 12V connector to power the module. A 3V, 220mAH Lithium battery is also onboard. The board supports -20 to 70°C temperatures.

Further information

The CEM700 and CEB94701 Type 7 carrier are “coming soon,” with pricing undisclosed. More information may be found on Axiomtek’s CEM700 and CEB94701 (PDF) product pages.

Source

Blue Collar Linux: Something Borrowed, Something New | Reviews

Sometimes it takes more than a few tweaks to turn an old-style desktop design into a fresh new Linux distribution. That is the case with the public release of Blue Collar Linux.

“The guidance and design were shaped by real people — blue collar people,” Blue Collar developer Steven A. Auringer told LinuxInsider. “Think useful and guided by Joe and Jane Whitebread in Suburbia.”

Blue Collar Linux has been under development for the last four years. Until its public release this week, it has circulated only through an invitation for private use by the developer’s family, friends and associates looking for an alternative to the Windows nightmare.

Another large part of his user base is the University of Wisconsin, where he engages with the math and computer science departments.

This new release is anything but a just-out-of-beta edition. It is very polished and is constantly updated and improved. A growing cadre of users submit bug reports and contribute feature suggestions based on real-world user requests.

Auringer does not bother with versioning each release, however. Average people do not care about those things, he claims.

“You don’t hear them talking about Windows 10-1824-06b-build257. They use Windows 10,” he noted.

That view in part led Auringer to develop a Linux distro with a goal that responded to typical users who had no interest in learning computer technology. The distro’s goal is to be easy to use and be useful for Joe and Jane Whitebread.

“There are millions of home systems. Most are not powerful state-of-the-art office systems. There are a lot of older systems sitting on closet shelves waiting to be brought back to life. Some are hand-me-downs. Is Joe or Jane going to spend money to use Windows 10?” asked Auringer.

From Shell Script to OS

Auringer is a retired U.S. Marine with a doctoral degree in applied mathematics and a master’s in computer science. He worked 10-plus years as a senior software engineer.

He started developing Blue Collar Linux as a shell script that would add/delete and configure software/fonts/colors/drivers, etc. He used the scripts and shared them for simplifying automated installation routines. That led to developing his own Linux alternative to the Microsoft Windows nightmare.

Auringer was determined to avoid the frustrations nontechnical users experienced with so many Linux distros — overwhelming software packages and the daunting maze of desktops choices. To remove those barriers, Auringer selected easy, yet powerful, Linux applications. He adopted a one-of-each approach.

“I have looked at over 75 Distros. Most — even the supposed easy ones — assume some level of Linux geekiness. I spent a lot of time listening to my beta users. They want to point, click and go,” he said. “They don’t want to search the Net comparing six programs, downloading and polluting their systems just to solve an easy problem.”

Auringer learned from his beta users that they did NOT want three music players, three video players, four text editors, two video editors, three photo editors, a large complex office suite and Visual Studio IDE to develop software.

Most average people are not going to burn an evening trying to get a program to work, he explained. They are not going to log into blogs, ask questions, try six different answers they don’t understand, and still have a broken system.

For example, they have no idea why removing program A broke program B, or why reinstalling program A does not fix program B. They do not know that it also may have broken program C, Auringer added.

Most average people do not know or care about Xfce, KDE, Gnome or Unity desktops. They do not know or care about what a window manager is, he said.

“They want to turn it on and use it to accomplish a goal without turning it into a hobby,” Auringer maintained.

That is precisely what Blue Collar Linux gives nontechnical users. It is difficult not to love Blue Collar Linux. It has all of the usability boxes checked. It does just what the developer designed it to do: make computing simplified!

Blue Collar Overview

Blue Collar Linux offers both home and small business users an ideal computing platform. They are the developer’s intended user base.

What makes Blue Collar ideal? Installation is uncomplicated. When the process is finished, no tools or setup are required.

The desktop has a simple uncluttered look. You have plenty of options to change the default settings. Personalizing the desktop is easy.

Blue Collar Linux's modified Xfce desktop design

Blue Collar Linux’s modified Xfce desktop design has a panel bar with multiple menu buttons, system icons, and a collection of applets to display information on the bar.

Out of the box, everything works. Nothing is confusing. No time must be spent reading online how-to documents.

Blue Collar is Gnome 3.10/GTK-based and runs the Xfce 4.10-based desktop. However, the modifications Auringer built in specifically for his distro are responsible for the tremendous difference in how Xfce works and looks.

For example, the applications and controls/buttons look like they belong together. Unlike other desktop designs, each application’s appearance reinforces the design and gives users the feeling that it is part of a complete system.

Older Code Base vs. New

Blue Collar has one slight downside that might only be a concern for more tech-savvy users. This distro is based on Ubuntu 14.04.5 LTS, a point release in the Trusty Tahr series issued in August 2016. Its long-term support ends this April. That means the developer will eventually be issuing an updated release on a newer code base.

In fact, Auringer is working on using Ubuntu 16.04 as a replacement base for Blue Collar Linux. Ubuntu supports 16.04, dubbed “Xenial Xerus,” until April 2021.

Still, he is happy with the continued performance of 14.04.5 and is not rushing to swap it out. Trusty Tahr code works well today and is not going to drop dead on any certain date in the near future, according to the developer. He plans to support critical issues himself if any develop when the long-term support from Ubuntu runs out, rather than rushing to change the code base.

A major advantage of 14.04.5 is the solid support by third-party drivers. Manufacturers and developers of printers, scanners, wireless and other systems have well-developed and tested drivers. Maintaining existing stability counts more than change.

“That is more of a concern to my user base than bleeding edge. They generally don’t know and don’t care what the base version is. All they know is that it never crashes — or worse, locks their box and loses their work,” Auringer said.

He prefers the Xenial Xerus code base to the current 18.04 LTS, AKA Bionic Beaver, released last year and supported to April 2023. The 18.04 code base is “squirrelly, unfinished and generally not recommended, or recommended [only] to experienced users.”

Only experienced users will put up with Bionic Beaver, just to be bleeding edge, he said.

The code base was impressive when it was released. It included an updated kernel and X stack for new installations to support new hardware. Since it has been an integral part of Blue Collar from the start, stability and reliability are of no concern.

Why Xfce Instead of Other Desktops?

There are several answers to the “Why Xfce?” question, noted Auringer, but they all have to do with Xfce having better desktop functionality and adaptability. Since Blue Collar must run on a wide range of legacy computers, a lightweight but powerful desktop environment is essential.

For example, newer options such as LXDE and LXLE are light, but the menus are sparse. Plus, their configuration is limited. Auringer sees the Cinnamon desktop as bloated, slow, buggy, and difficult to configure.

The Mate desktop lacks comments in the menu for new/beginning users — something Xfce’s Whisker Menu provides. Plus, the Whisker menu in Blue Collar Linux lets you add, delete or rearrange your favorite applications in the main menu. You also can resize the main menu.

MenuLibre, a menu-editing tool included in Xfce, makes it easy for Blue Collar users to arrange menu content their way. Xfce is mature; it runs well on minimal hardware and is fast.

Plankless and Dockless

Another major user benefit with the Xfce desktop is the ability to add or remove application launchers on the panel or the desktop itself. An even nicer feature that you will not find in other Xfce systems is the ability to unlock the panel and move it to the top or side if you prefer.

Some Linux distros use both panel bars and a Cairo-style dock or plank-style application launcher. You will not find modifications in Blue Collar Linux built around docks or planks.

They do not work well in general, according to Auringer. Some distros tried Awn or Plank and then dropped it. The Cairo dock has lots of bells and whistles, but nothing to add in terms of functionality or ease of use.

“I have also found that depending on the version and settings, Cairo can be a little unpredictable,” he said.

One more great feature with Blue Collar’s modified Xfce desktop is the triple menu system. Finding and launching applications is fast, thanks to an application search field built into each menu.

The menus live at either end of the panel bar. On the far left is the Whisker menu. At the far right end of the panel is a GNOME-style full-screen display of application icons. With either menu, hover the mouse over an icon to see a brief explanation of what the application does.

Right-click anywhere on the desktop not covered by a window to launch a third menu style. The bottom label cascades into a list of installed applications. The rest of the column lists various system actions such as creating folders, URL links and application launchers.

Massive Software Inventory

Auringer’s decision to bundle a single software title for each computing task is a win-win. It actually lets the developer bundle more diverse applications without creating bloat.

His goal is not to make Blue Collar Linux minimalist in terms of its software inventory. To the contrary, this distro comes with more preinstalled titles than I see in most distros, whether they are Xfce systems or not.

The included applications are solid choices. They do not require hours of learning how to use them.

For example, typical users do not need feature-heavy office suites with separate components like spreadsheets and database managers they will never use, argues Auringer. So he includes the Abiword word processor with plugins already enabled.

Preinstalled applications include Homebank for personal finance management; LibreCAD, a professional-strength drafting program; Diagram for creating and editing designs; and RedNoteBook — a tool for keeping notes and daily journal entries and calendar.

Specialty Tools Included

This distro also has Wine, a compatibility layer that lets you run Microsoft Windows programs within the Linux environment. I have used Linux for so many years that I no longer rely on Wine.

However, having Wine preinstalled in Blue Collar Linux gives newcomers to the Linux OS an added comfort zone that lets them continue using familiar programs until they find better Linux alternatives.

It creates a pseudo C: drive in the Blue Collar directory to hold Wine-installed Windows programs. It comes with tools to install and uninstall Windows programs as if you were running them on an actual Windows computer.

Another great find in Blue Collar Linux is the Parental Controls feature. I test and review hundreds of Linux distros. This is my first time seeing a parental control application. What a great idea for helping children learn responsible computer behavior.

It is as simple to use as creating an alarm in a computer calendar. You can set the number of hours per day a user can access the computer. You can add a check for the approved days of usage in general, as well as allotted times and days to use the Web browser, email client and Instant Messaging applications.

Using It

One of the essential features that a well-designed operating system can provide is access to virtual workspaces. This functionality lets you view different applications or sets of open application windows on separate screens. Some distros make navigating among workspaces confusing and difficult.

Not Blue Collar Linux. The standard Xfce desktop does a nice job of handling virtual workspaces. Blue Collar Linux goes well beyond the normal functionality.

This distro includes the Brightside Properties tool, which enhances navigation options for workspace switching.

Blue Collar Linux's Brightside Properties Tool

The Brightside Properties Tool is very handy for adding new features to the Xfce desktop for controlling workspace navigation and hot corner actions.

For instance, rolling the wheel in the workspace switcher moves to other workspaces. So does this keyboard shortcut: CTRL-ALT and left/right or up/down arrow keys.

Other options let you change workspaces by moving the mouse pointer off the left or right screen edges, or by clicking the mouse wheel down or using the middle mouse button to display a switcher panel on the screen.

With the Brightside tool, you can set a different wallpaper for each workspace. The tool also lets you turn on hot corners, which usually is not a function available with the Xfce desktop.

You can select special actions from a dropdown list that activates when you push the mouse pointer into a chosen corner of the screen. You also can create your own action command using the custom option in the dropdown list.

One more neat trick is rolling the mouse wheel on the sound icon in the system tray to raise or lower the volume.

Blue Collar Linux's workspace switcher panel

Click the mouse wheel down or use the middle mouse button to display a switcher panel on the screen.

Bottom Line

Blue Collar Linux is a seasoned operating system that will not disappoint you. It runs well on older computers with less-than-modest resources. It runs superbly on more recent hardware.

Even if you are not a fan of the Xfce desktop environment, give this modified iteration a try. What you find in Blue Collar Linux is not the same old thing. This distro is feature-rich. It is easy to install and easier to use.

Source

Linux Today – Understanding Debian GNU/Linux Releases

What is a Debian release?

Debian GNU/Linux is a non-commercial Linux distribution that was started in 1993 by Ian Murdock. Currently, it consists of about 51,000 software packages that are available for a variety of architectures such as Intel (both 32 and 64 bit), ARM, PowerPC, and others [2]. Debian GNU/Linux is maintained freely by a large number of contributors from all over the world. This includes software developers and package maintainers – a single person or a group of people that takes care of a package as a whole [3].

A Debian release is a collection of stable software packages that follow the Debian Free Software Guidelines (DFSG) [4]. These packages are well-tested and fit together in such a way that all the dependencies between the packages are met and you can install and use the software without problems. This results in a reliable operating system for your everyday work. Originally targeted at server systems, Debian no longer has a specific target ("The Universal OS") and nowadays is widely used on desktop systems as well as mobile devices.

In contrast to other Linux distributions like Ubuntu or Linux Mint, the Debian GNU/Linux distribution does not have a release cycle with fixed dates. It rather follows the slogan "Release only when everything is ready" [1]. Nevertheless, a major release comes out about every two years [8]. For example, version 9 came out in 2017, and version 10 is expected to be available in mid-2019. Security updates for Debian stable releases are provided as soon as possible from a dedicated APT repository. Additionally, minor stable releases are published in between, and contain important non-security bug fixes as well as minor security updates. Neither the general selection nor the major version numbers of the software packages change within a release.

In order to see which version of Debian GNU/Linux you are running on your system have a look at the file /etc/debian_version as follows:

$ cat /etc/debian_version
9.6
$

This shows that the command was run on Debian GNU/Linux 9.6. Having installed the package “lsb-release” [14], you can get more detailed information by running the command “lsb_release -a”:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.6 (stretch)
Release: 9.6
Codename: stretch
$

What about these funny release names?


You may have noted that for every Debian GNU/Linux release there is a funny release name. This is called an alias name which is taken from a character of the film series Toy Story [5] released by Pixar [6]. When the first Debian 1.x release was due, the Debian Project Leader back then, Bruce Perens, worked for Pixar [9]. Up to now the following names have been used for releases:

  • Debian 1.0 was never published officially, because a CD vendor shipped a development version accidentally labeled as “1.0” [10], so Debian and the CD vendor jointly announced that “this release was screwed” and Debian released version 1.1 about half a year later, instead.
  • Debian 1.1 Buzz (17 June 1996) – named after Buzz Lightyear, the astronaut
  • Debian 1.2 Rex (12 December 1996) – named after Rex the plastic dinosaur
  • Debian 1.3 Bo (5 June 1997) – named after Bo Peep the shepherd
  • Debian 2.0 Hamm (24 July 1998) – named after Hamm the piggy bank
  • Debian 2.1 Slink (9 March 1999) – named after the dog Slinky Dog
  • Debian 2.2 Potato (15 August 2000) – named after the puppet Mr Potato Head
  • Debian 3.0 Woody (19 July 2002) – named after the cowboy Woody Pride who is the main character of the Toy Story film series
  • Debian 3.1 Sarge (6 June 2005) – named after the Sergeant of the green plastic soldiers
  • Debian 4.0 Etch (8 April 2007) – named after the writing board Etch-A-Sketch
  • Debian 5.0 Lenny (14 February 2009) – named after the pull-out binoculars
  • Debian 6.0 Squeeze (6 February 2011) – named after the green three-eyed aliens
  • Debian 7 Wheezy (4 May 2013) – named after Wheezy the penguin with the red bow tie
  • Debian 8 Jessie (25 April 2015) – named after the cowgirl Jessica Jane “Jessie” Pride
  • Debian 9 Stretch (17 June 2017) – named after the purple octopus
  • Debian 10 Buster (no release date known so far) – named after the puppy dog from Toy Story 2

As of the beginning of 2019, the release names for two future releases are also already known [8]:

  • Debian 11 Bullseye – named after Bullseye, the horse of Woody Pride
  • Debian 12 Bookworm – named after Bookworm, the intelligent worm toy with a built-in flashlight from Toy Story 3.

Relation between alias name and development state

New or updated software packages are uploaded to the unstable branch first. After some days a package migrates to the testing branch if it fulfils a number of criteria. The testing branch later becomes the basis for the next stable release. The release of a distribution contains only stable packages, which are essentially a snapshot of the current testing branch.
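
As an illustration, a system can track a release in /etc/apt/sources.list either by its alias name or by a symbolic suite name; the excerpt below is a hypothetical example, not taken from the article. The first line stays on that particular release, while the second automatically follows whichever release is currently stable:

deb http://deb.debian.org/debian stretch main    # pinned to the "stretch" release
deb http://deb.debian.org/debian stable main     # always follows the current stable release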

Source

Yahoo Japan and EMQ X Join the OpenMessaging Project

 

The OpenMessaging project welcomes Yahoo Japan and EMQ X as new members.

We are excited to announce two new members to the OpenMessaging project: Yahoo Japan, one of the largest portal sites in Japan, and EMQ X, one of the most popular MQTT message middleware vendors. Yahoo Japan and EMQ X join Alibaba, JD.com, China Mobile Cloud, Qing Cloud, and other community members to form a standards community with 13 corporate members.

OpenMessaging is a standards project for messaging and streaming technology. Messaging and Streaming products have been widely used in modern architecture and data processing, for decoupling, queuing, buffering, ordering, replicating, etc. But when data transfers across different messaging and streaming platforms, compatibility problems arise, which always means much additional work. The OpenMessaging community looks to eliminate these challenges through creating a global, cloud-oriented, vendor-neutral industry standard for distributed messaging.

Yahoo Japan, operated by Yahoo Japan Corporation, is one of the largest portal sites in Japan. Under its mission to be a "Problem-Solving Engine," Yahoo Japan Corporation is committed to solving the problems of people and society by leveraging the power of information technology. The company uses various messaging systems (e.g., Apache Pulsar, Apache Kafka and RabbitMQ) to build its services and is creating a centralized pub-sub messaging platform that handles a vast amount of service and application traffic.

“Yahoo Japan Corporation uses various messaging systems (e.g., Apache Pulsar, Apache Kafka and RabbitMQ) to create its services. However, differences in messaging interfaces make the whole system complicated and lead to extra costs in implementation and in studying each system. Thus, we need a standardized and unified interface that can be easily implemented and easily collaborated with other services.” said Nozomi Kurihara, the Manager of the Messaging Platform team in Yahoo Japan. “We think OpenMessaging is the key in achieving our “multi big data” system in which data can be cross-used among different services/applications we provide.”

Originating from an open source IoT project started on GitHub in 2012, EMQ X has become one of the most popular MQTT message middleware projects in the community. EMQ X is based on the Erlang/OTP platform and can support 10 million concurrent MQTT connections with high throughput and low latency. EMQ X now has 500k downloads and 5,000+ customers in 50 countries and regions around the world, such as China, the United States, Australia, Britain, and India.

“Our customers cover different industries, such as financial, IoV, telecom, smart home. We also partnered with Fortune 500 companies, such as HPE, Ericsson, VMware, to provide professional IoT solutions to customers around the world. OpenMessaging is vendor-neutral and language-independent, provides industry guidelines for areas of finance, e-commerce, IoT and Big Data, and aimed to develop messaging and streaming applications across heterogeneous systems and platforms.” said Feng Lee, Co-founder of EMQ X. “We’re glad to join OpenMessaging.”

As an effort to standardize distributed messaging and streaming systems, OpenMessaging is committed to embracing an open, collaborative, intelligent, and cloud-native era with all its community members.

Source

Linux Tools: The Meaning of Dot | Linux.com

Let’s face it: writing one-liners and scripts using shell commands can be confusing. Many of the names of the tools at your disposal are far from obvious in terms of what they do (grep, tee and awk, anyone?) and, when you combine two or more, the resulting “sentence” looks like some kind of alien gobbledygook.

None of the above is helped by the fact that many of the symbols you use to build a chain of instructions can mean different things depending on their context.

Location, location, location

Take the humble dot (.) for example. Used with instructions that are expecting the name of a directory, it means “this directory” so this:

find . -name "*.jpg"

translates to “find in this directory (and all its subdirectories) files that have names that end in .jpg“.

Both ls . and cd . act as expected, so they list and “change” to the current directory, respectively, although including the dot in these two cases is not necessary.

Two dots, one after the other, in the same context (i.e., when your instruction is expecting a directory path) means “the directory immediately above the current one“. If you are in /home/your_directory and run

cd ..

you will be taken to /home. So, you may think this still kind of fits into the “dots represent nearby directories” narrative and is not complicated at all, right?

How about this, then? If you use a dot at the beginning of a directory or file, it means the directory or file will be hidden:

$ touch somedir/file01.txt somedir/file02.txt somedir/.secretfile.txt
$ ls -l somedir/
total 0
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
$ # Note how there is no .secretfile.txt in the listing above
$ ls -la somedir/
total 8
drwxr-xr-x 2 paul paul 4096 Jan 13 19:57 .
drwx------ 48 paul paul 4096 Jan 13 19:57 ..
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 .secretfile.txt
$ # The -a option tells ls to show "all" files, including the hidden ones

And then there’s when you use . as a command. Yep! You heard me: . is a full-fledged command. It is a synonym of source and you use that to execute a file in the current shell, as opposed to running a script some other way (which usually mean Bash will spawn a new shell in which to run it).

Confused? Don’t worry — try this: Create a script called myscript that contains the line

myvar="Hello"

and execute it the regular way, that is, with sh myscript (or by making the script executable with chmod a+x myscript and then running ./myscript). Now try and see the contents of myvar with echo $myvar (spoiler: You will get nothing). This is because, when your script plunks “Hello” into myvar, it does so in a separate bash shell instance. When the script ends, the spawned instance disappears and control returns to the original shell, where myvar never even existed.

However, if you run myscript like this:

. myscript

echo $myvar will print Hello to the command line.
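
The whole experiment can be condensed into a short terminal session (using the same script name and variable as above):

$ echo 'myvar="Hello"' > myscript
$ sh myscript
$ echo $myvar          # prints nothing: myvar only existed in the child shell
$ . myscript
$ echo $myvar          # this time the assignment ran in the current shell
Hello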

You will often use the . (or source) command after making changes to your .bashrc file, like when you need to expand your PATH variable. You use . to make the changes available immediately in your current shell instance.
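
A quick sketch of that workflow, with the PATH line standing in for whatever change you made to .bashrc:

export PATH="$HOME/bin:$PATH"    # line added to ~/.bashrc in an editor
. ~/.bashrc                      # re-read it so the change applies to the current shell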

Double Trouble

Just like the seemingly insignificant single dot has more than one meaning, so has the double dot. Apart from pointing to the parent of the current directory, the double dot (..) is also used to build sequences.

Try this:

echo {1..10}

It will print out the list of numbers from 1 to 10. In this context, .. means "starting with the value on my left, count up to the value on my right".

Now try this:

echo {1..10..2}

You'll get 1 3 5 7 9. The ..2 part of the command tells Bash to step through the sequence not one by one, but two by two. In other words, you'll get all the odd numbers from 1 to 10.

It works backwards, too:

echo {10..1}

You can also pad your numbers with 0s. Doing:

echo {000..121..2}

will print out every even number from 0 to 121 like this:

000 002 004 006 … 050 052 054 … 116 118 120

But how is this sequence-generating construct useful? Well, suppose one of your New Year’s resolutions is to be more careful with your accounts. As part of that, you want to create directories in which to classify your digital invoices of the last 10 years:

mkdir {2009..2018}_Invoices

Job done.

Or maybe you have a hundreds of numbered files, say, frames extracted from a video clip, and, for whatever reason, you want to remove only every third frame between the frames 43 and 61:

rm frame_{043..061..3}

It is likely that, if you have more than 100 frames, they will be named with padded 0s and look like this:

frame_000 frame_001 frame_002 …

That’s why you will use 043 in your command instead of just 43.

Curly~Wurly

Truth be told, the magic of sequences lies not so much in the double dot as in the sorcery of the curly braces ({}). Look how it works for letters, too. Doing:

touch file_{a..z}.txt

creates the files file_a.txt through file_z.txt.

You must be careful, however. Using a sequence like {A..z} will run through a bunch of non-alphanumeric characters (glyphs that are neither numbers nor letters) that live between the uppercase alphabet and the lowercase one. Some of these glyphs are unprintable or have a special meaning of their own. Using them to generate names of files could lead to a whole bevy of unexpected and potentially unpleasant effects.

One final thing worth pointing out about sequences encased between {…} is that they can also contain lists of strings:

touch {blahg,splurg,mmmf}_file.txt

Creates blahg_file.txt, splurg_file.txt and mmmf_file.txt.

Of course, in other contexts, the curly braces have different meanings (surprise!). But that is the stuff of another article.

Conclusion

Bash and the utilities you can run within it have been shaped over decades by system administrators looking for ways to solve very particular problems. To say that sysadmins and their ways are their own breed of special would be an understatement. Consequently, as opposed to other languages, Bash was not designed to be user-friendly, easy or even logical.

That doesn’t mean it is not powerful — quite the contrary. Bash’s grammar and shell tools may be inconsistent and sprawling, but they also provide a dizzying range of ways to do everything you can possibly imagine. It is like having a toolbox where you can find everything from a power drill to a spoon, as well as a rubber duck, a roll of duct tape, and some nail clippers.

Apart from fascinating, it is also fun to discover all you can achieve directly from within the shell, so next time we will delve ever deeper into how you can build bigger and better Bash command lines.

Until then, have fun!

Source
