Linux Task Apps: Plenty of Goodies in These Oldies | Reviews

Feb 7, 2019 12:45 PM PT

If you need a task manager application to run on your Linux operating system, tap into a software category filled with options that go far beyond the to-do list app you have stuffed into your smartphone.

Keeping up to date with multiple daily activity calendars, tons of information, and never-ending must-do lists can become a constant challenge. This week’s Linux Picks and Pans reviews the top open source task management and to-do apps that will serve you well on most Linux distributions.

Over the years, I have used these task management/to-do list applications on my own Linux computers. Few of them were capable of easily syncing their information to my tablet and smartphone. Project management and to-do list tools have proliferated for Android devices in the last few years, but that is not the case with similar apps for Linux.

In fact, several of the better-known Linux apps in this category that I previously used or reviewed have disappeared. Most of the others have not had a feature update in years.

Task management and to-do list apps for Linux are a mixed bag. This category reflects an overlapping of features and functions. These standalone solutions go beyond the integration with Google Calendar provided by Google Tasks.

Several of the products in this roundup offer complex interfaces that let you take the information with you on other devices. Some of the applications have sparser features and show signs of aging.

The applications included in this roundup are not presented in any ranked order. Some are readily available in distro repositories. Other packages require manual installation.

Task Coach Masters Details

How a task manager app handles details determines its real usefulness.
Task Coach goes out of its way to help you keep track of the details. Version 1.4.4, released on Dec. 2, 2018, is simply the latest example of this app’s ability to keep you on target and in control of your projects.

Task Coach is actually two tools in one. It is both a personal task tracker and a to-do manager. It does both routines well. Other apps in this category usually excel at one or the other.

It is a master in combining composite functions with a basic task list. Its features include tracking time on task, categorizing activities, and keeping tabs on subevents aligned with larger projects.

Task Coach screenshot

Task Coach lacks an inviting or intuitive user interface, but it is still very functional.

If Task Coach did just those things, it would be a nearly perfect solution. Its additional two tricks put this app over the top in usability. You can add notes to each task and include attachments.

Task Coach makes it easy to maintain a variety of task lists on multiple computers, mobile devices and operating systems. Versions exist for Windows, OS X, Linux, BSD, iPhone, iPod touch and Android.

Task Coach lacks an inviting or intuitive user interface, but it is still very functional. Its detailed configuration panel gives you numerous choices to fine-tune the way it works.

For example, you get about nine tabs with multiple choices on each to set up the application’s general look and feel. These tabs include Window behavior, Files, Language, Task Dates, Task Reminders, Task Appearance, Features and Font Editor options.

The window display shows existing tasks on the left side of the application window. Next to the task name are the planned start and due dates for each task. Right-click the task name line to access available actions. Click the desired action or use that option’s keyboard shortcut.

You can double-click the task name line to access subcategories for entering additional sub-levels of information about the task. These categories contain the most important detail controls for getting Task Coach to manage and organize your tasks’ activities.

The right side of the application window shows categories and sub-categories you create for a task. This is where you can search for specifics in all of your tasks using filters.

Use Task Coach’s progress slider to track your ongoing completion stages. Double-click on a category to provide a detailed description, add notes about each task, and attach supporting documents to the file package.

The crowning glory of the Task Coach tracking system is the Budget tab. It lets you assign the maximum amount of time you allot for a task. It displays a bar showing the time spent on a task and the time remaining to complete it on schedule.

The Revenue option lets you calculate your billing or earning amounts. This budget feature can eliminate the need for any separate billing calculation tool.

Task Coach is a very useful application to help you drill deep down into sub-tasks and multiple categories. It becomes even more valuable if you work on different computers and need an app that lets you store its data file on a portable drive or in the cloud.

GNOME ToDo: Listicles and More

Gnome ToDo version 3.28 is a task management application that is designed to integrate with the GNOME desktop. Fear not if you run something else. It fits in perfectly with many Linux distros without regard to desktop flavor.

It is a simple app that in many ways mimics the look and feel of Google’s Notes app, but it is not embedded into the Chrome browser. gToDo creates multiple lists, sets alarm notifications when tasks are due, automatically purges completed tasks if you desire, exports tasks to HTML format, and sorts them according to priority.

This app also shows the upcoming due date or status of tasks, and can highlight or hide your tasks until their due time is reached.

Gnome ToDo interface

Gnome ToDo has a simple interface showing little more than a single pane with tasks and related information.

This app’s real beauty lies in its simple interface. It has little more than a single pane that shows tasks and related information.

The interface also shows add/remove buttons and a category filter dropdown box. Otherwise, it is devoid of overlapping right-click menus.

Everything you need is found in a few dropdown menus. The design is simple with high functionality.

gToDo automatically purges old tasks. It also highlights past due items and upcoming tasks.

Hovering over the tray icon displays scheduled tasks and provides for quick updates. It is easy to set up several different categories within a list.

If you prefer to keep separate lists for different activities, you can — and it is just as easy to set alarms and priority notifications, regardless of how you configure one or more lists.

AKIEE: Abandoned Potential

The game plan that drives most ToDo lists and task manager apps is a two-part thought process. Either you have a task to do or you are done with it.
Akiee adds a third part to that plan: doing it.

Akiee has a few other things going for it as well. It makes it easier to stay focused on your next task. Its unique algorithm-based ranking system helps you decide what to do next.

It avoids letting you waste time pondering inconclusive priorities. This approach to ordering your tasks makes it easy to decide what to do next. This, in turn, makes it a reliable tool to build your projects one step at a time.

Akiee screenshot

Akiee adds an in-progress element (Doing) as part of its simple-to-use user interface.

One of Akiee’s best features is its universal access. Akiee does not hide your to-do list in a database. You can store your Akiee task file in your cloud account — think Google or Dropbox — to access it over the Web.

Rather than impose its own platform, Akiee stores your task lists in a Markdown file that is readable in any text editor you use. This gives you access to your tasks on all of your computers and on your smartphone as well. You can arrange the order of your tasks easily, instead of just changing priorities and due dates of your tasks.

It is built with Node-Webkit, Clojurescript and React. It is available for Linux, Mac and Windows.

Akiee’s tasks have three states: To-do, Doing and Done. This way you can focus on the tasks you are currently working on.

Akiee has one major drawback, however. Its developer has not updated the application in more than four years. It is barely into beta phase and may not be compatible with every Linux distro.

To use it, download the package from here, unpack the binary files, and then click on the Akiee file to run it.

Remember the Milk: Forgets Nothing

Remember the Milk used to be one of my favorite to-do apps — but until recently, it was not an app, at least not for Linux users. It was a nifty smartphone and tablet tool. I had to piggyback the RTM service in my browser when I ran my Linux-powered desktop or laptop computers.

Now RTM is available for Linux as a standalone app. However, it is available only in 32-bit and 64-bit versions for Fedora and Ubuntu so far.

The app lets you see your tasks with one click of the cow launcher icon. You also can keep a thin version of the app on your screen at all times. Plus, desktop notifications appear in the notification center to make sure that you do not forget what you need to do.

Remember the Milk to-do app

Remember the Milk sports a somewhat cluttered user interface. Tasks and other features are accessible with a single click in most cases.

The Smart Add feature makes it fast and easy to add your tasks. Enter in a single line the task and its due date, priority, repeat reminder and tags. The app sorts the details and displays them in the appropriate locations within the window display.

The RTM app sends you reminders as you direct by email, text, IM, Twitter and mobile devices. Track your to-do items your way. You can create multiple lists and even organize them with colors and tags.

RTM’s project management feature lets you set subtasks to break down tasks into segments to give you a step-by-step description of what the task entails. Create any number of subtasks for a task, and even add subtasks to your subtasks.

The app makes it easy to track tasks in a project involving a team of collaborators. You easily can send entire task lists or delegated subtasks to your cohorts.

Easily plan and track multipart projects by attaching files to your tasks. The RTM app connects to Google Drive or Dropbox to keep all related information in one place. The supporting data can be documents, spreadsheets, presentations or photos.

RTM’s search wizard lets you search your tasks easily to find what you assigned to a particular person, or subtasks due by a certain date. You can search for tasks by the priority number or tag you assigned.

Two other features make Remember the Milk a top-notch task management tool. One is Smart Lists. These are special lists based on search criteria. Keeping on task is close to foolproof with some 50 different search operators. The other is the ability to synchronize with other tools you use.

For instance, you can integrate your lists with Gmail, Google Calendar, Twitter, Evernote, If This Then That (IFTTT), and more.

If the app is not compatible with your Linux distro, go to the Remember the Milk website and sign up for the free Web-based service. You will have access to most of the same features as the RTM app.

GnoTime: Not Just a Tracking Tool

The GnoTime Tracking Tool, formerly known as “GTT,” comes close to doing it all: keeping to-do lists on target, organizing your ideas, and tracking your projects.

GnoTime also can serve as your diary or work journal. Even better, it can track how much time you spend on projects, and generate reports and invoices based on that time log.

The graphical user interface in GnoTime takes some getting used to, however. This is especially the case if you keep a lot of open panels. The top row of the main application window is typical of a traditional GUI design.

GnoTime's user interface

GnoTime’s user interface is a familiar sight with clickable icons for the app’s features.

The similarity ends there, however. Access to all program features is available from the top row of dropdown menus. A limited toolbar provides quick access to some of the same functions. The categories make a lot of sense.

A limited toolbar row is located below the dropdown menus. You can click icons to open a New Entry, Activity Journal, Timer Stop and Start, Help and Quit. These serve as default shortcuts to the most essential menu options.

The app suffers from a busy interface. Tracking several projects fills in a lot of data in the various display panels of the main application window. For instance, the currently active projects display in a large window under the toolbar row. It shows details that include importance, urgency, status, total time spent, current working time, project title, description, and new diary entry.

Each line contains the summary data for a particular project. Click on a project line to see more specific data in two resizable panels under the project summary window. The Properties menu opens a tabbed panel that lists Projects, Rates, Intervals and Planning. Each tab has even more precise billing and time tracking options to regulate calculations for billing and reporting.

The Journal panel is a dizzying array of links to other panels and windows in the tracking system. The Journal panel presents a series of diary entries. Each one can be a short or long note about a project, a to-do list entry, or any comment you want to add to the mix.

The Journal lists each entry as a hot link that shows in blue the date of the entry and the starting and stopping time of the item. Elapsed time is shown but is not a link. Clicking on any of the linked elements opens a larger window with the related details.

Select Reports/Query to open the Custom Report Generator for the active project. Then select from the dropdown menu the custom report you want to generate. The options are Journal, Big Journal, Invoice, Daily, Status and ToDo. You can refine the date range for the compiled data. Or you can select a Daily, Monthly or Custom activity report. When you have completed all selections, click the Generate Report button. The results display in an XML file format in yet another window that pops open.

More cool features include the ability to maintain multiple to-do lists. This is a huge advantage over having tasks for different activities scrunched together in one list manager.

The Running Timer tallies time totals for each project or task viewable by day, week, month or year. It measures the amount of time that I sit at the computer actually working. When the keyboard or mouse is idle, the clock stops. If it stays stopped for too long, the program nags me to start it up again.

The Gnome Time Tracker is a very flexible and comprehensive tracking toolbox that auto-saves as I work. Despite GnoTime’s propensity for desktop clutter, its interface is simple to use.

GnoTime comes in pre-compiled binaries/packages for the Debian, Ubuntu, RedHat/Fedora, Suse and Gentoo Linux families. Check your distro’s package manager tool. Otherwise, you will have to download a tarball file and manually compile from source. That is the only way to get the latest version, which was last updated on April 17, 2017. In that case, go here.

Toodledo: Cloud-based Organizer Extraordinaire

One of the more modern and highly advanced options for managing your projects and keeping your task lists on schedule is Toodledo. This highly customizable service lives in the cloud and syncs to all of your devices. It is platform-agnostic and connects from your browser to apps on your other supported devices.

Toodledo's user interface

Toodledo’s expansive user interface shows a highlighted view of all data for each module. Each component is a click away in the left panel.

Toodledo is a detailed solution that some users may find to be overkill. The interface provides labels, infinite lists that you can subdivide into categories, and much more.

Toodledo combines to-do lists with project management features with an added ability to tack on notes and attach files. Among this solution’s many talents is the ability to make custom to-do lists, create structured outlines, track your habits, write long notes, and comment on goals and projects.

One of its unique features is the Schedule module. It helps you to make the most of your free time and create repeating tasks. It can send you reminders based on your current location and let you view tasks on a calendar.

It is a great digital assistant for your personal needs. It is a superior method to stay connected and scheduled with your collaborators. You can assign tasks to your associates and track time spent on a project.

You can use Toodledo to record your ideas in the notes section. You can set and track goals. The entire system is based on the Getting Things Done (GTD) method developed by David Allen. This approach organizes tasks to focus your energy and creativity on completing them in a stress-free manner.

The basic version is free. It provides most of the core features but places a limit of 30 items per list or outline, among other restrictions. Standard (US$2.99 per month) and Plus ($4.99 per month) tiers are also available.

Bottom Line

Task management applications for Linux offer an overlapping range of features and user interfaces. I deliberately avoided ranking these Linux products. I also suspended the usual star rating for each one in this roundup.

Task management and to-do list software for Linux is a category being overshadowed by cloud services and dedicated apps on portable devices. That is one reason the open source applications available for the Linux platform include few new contenders.

The titles in this roundup offer a variety of options. They have a range of functionality that may take time to learn and use effectively. Compare the features and find the best choice for your needs.

Source

Linux Today – Ubuntu 14.04 LTS (Trusty Tahr) Reaches End of Life on April 30, 2019

Feb 06, 2019, 19:00

Released on April 17, 2014, the Ubuntu 14.04 LTS (Trusty Tahr) operating system series will reach its end of life in about three months from the moment of writing, on April 30, 2019. Ubuntu 14.04 was an LTS (Long Term Support) release, which means that it received software and security updates for five years. Last year on September 19, Canonical informed Ubuntu 14.04 LTS (Trusty Tahr) users that they would be able to purchase additional support for the operating system through its commercial offering called Extended Security Maintenance (ESM), which proved to be a huge success among Ubuntu 12.04 LTS (Precise Pangolin) users.

Source

A new bottle has been opened with the release of Wine 4.1

The first Wine development build of this year, and the first build toward Wine 5.0, is now out.

Announcing the release last night, Wine’s Alexandre Julliard noted these improvements coming with Wine 4.1:

  • Support for NT kernel spinlocks.
  • Better glyph positioning in DirectWrite.
  • More accurate reporting of CPU information.
  • Context handle fixes in the IDL compiler.
  • Preloader fixes on macOS.
  • Various bug fixes.

With the bug fixes, they noted 30 in total being squashed. These include issues solved with GOG Galaxy, Gas Guzzlers Combat Carnage, Empire Earth and more. As usual, though, some bugs were fixed previously and are only seeing a status update now.

What are you hoping to see working in Wine during this cycle? What are you excited about? Do let us know in the comments.

Source

Download File Roller Linux 3.30.1

File Roller is an open source archive manager application for the GNOME desktop environment. As a matter of fact, the software is simply called Archive Manager and it is a GUI (Graphical User Interface) front-end to various command-line archiving utilities.

Designed for GNOME

Because it integrates well with the GNOME desktop environment, the application allows Linux users to effortlessly extract archives, as well as to view the contents of an archive or add/remove files to/from an existing archive.

In addition, users are able to view and modify a file contained in an archive, and to open and create archives in various formats. The familiar user interface of the application can be used by novice and experienced users alike.

Features at a glance

The application supports numerous archive types, including gzip (tar.gz, tar.xz, tgz), bzip (tar.bz, tbz), bzip2 (tar.bz2, tbz2), Z (tar.Z, taz), lzop (tar.lzo, tzo), zip, jar (jar, ear, war), lha, lzh, rar, ace, 7z, alz, ar, and arj.

In addition, it supports the cab, cpio, deb, iso, cbr, rpm, bin, sit, tar.7z, cbz, and zoo archive types, as well as single files that are compressed with the xz, gzip, bzip, bzip2, lzop, lzip, z or rzip compression algorithms.

It’s compatible with other desktop environments

While File Roller is the default archive manager of the GNOME desktop environment, it can also be used on any other open source desktop environment, such as Xfce, MATE, Cinnamon, LXDE, Openbox or Fluxbox.

When used under GNOME, the application provides some unique functionality integrated into the panel entry, such as the ability to create a new archive, view all files from an archive, view the archive as a folder, as well as to enable the folders view mode.

Bottom line

All in all, the application provides GNOME users with a capable archive manager, crafted to perfection and engineered to extract almost all archive types, as long as the respective command-line programs are installed.

Source

15 Docker Commands You Should Know | Linux.com

In this article we’ll look at 15 Docker CLI commands you should know. If you haven’t yet, check out the rest of this series on Docker concepts, the ecosystem, Dockerfiles, and keeping your images slim. In Part 6 we’ll explore data with Docker. I’ve got a series on Kubernetes in the works too, so follow me to make sure you don’t miss the fun!

There are about a billion Docker commands (give or take a billion). The Docker docs are extensive, but overwhelming when you’re just getting started. In this article I’ll highlight the key commands for running vanilla Docker.

At risk of taking the food metaphor thread running through these articles too far, let’s use a fruit theme. Veggies provided sustenance in the article on slimming down our images. Now tasty fruits will give us nutrients as we learn our key Docker commands.

Overview

Recall that a Docker image is made of a Dockerfile + any necessary dependencies. Also recall that a Docker container is a Docker image brought to life. To work with Docker commands, you first need to know whether you’re dealing with an image or a container.

  • A Docker image either exists or it doesn’t.
  • A Docker container either exists or it doesn’t.
  • A Docker container that exists is either running or it isn’t.

Once you know what you’re working with you can find the right command for the job.

Command Commonalities

Here are a few things to know about Docker commands:

  • Docker CLI management commands start with docker, then a space, then the management category, then a space, and then the command. For example, docker container stop stops a container.
  • A command referring to a specific container or image requires the name or id of that container or image.

For example, docker container run my_app is the command to build and run the container named my_app. I’ll use the name my_container to refer to a generic container throughout the examples. Same goes for my_image, my_tag, etc.

I’ll provide the command alone and then with common flags, if applicable. A flag with two dashes in front is the full name of the flag. A flag with one dash is a shortcut for the full flag name. For example, -p is short for the --publish flag.
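This one-dash/two-dash convention is the standard CLI style, so it can be illustrated with any common tool; the sketch below uses grep as a stand-in so it runs without a Docker daemon:

```shell
# -i and --ignore-case are the same grep flag in short and long form,
# just as Docker pairs each one-dash shortcut with a two-dash flag.
printf 'Apple\nbanana\n' | grep -i apple
printf 'Apple\nbanana\n' | grep --ignore-case apple
# Both lines print: Apple
```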

Flags provide options to commands

The goal is to help these commands and flags stick in your memory and for this guide to serve as a reference. This guide is current for Linux and Docker Engine Version 18.09.1 and API version 1.39.

First, we’ll look at commands for containers and then we’ll look at commands for images. Volumes will be covered in the next article. Here’s the list of 15 commands to know — plus 3 bonus commands!

Containers

Use docker container my_command

create — Create a container from an image.
start — Start an existing container.
run — Create a new container and start it.
ls — List running containers.
inspect — See lots of info about a container.
logs — Print logs.
stop — Gracefully stop a running container.
kill — Stop the main process in a container abruptly.
rm — Delete a stopped container.

Images

Use docker image my_command

build — Build an image.
push — Push an image to a remote registry.
ls — List images.
history — See intermediate image info.
inspect — See lots of info about an image, including the layers.
rm — Delete an image.

Misc

docker version — List info about your Docker Client and Server versions.
docker login — Log in to a Docker registry.
docker system prune — Delete all unused containers, unused networks, and dangling images.

Containers

Container Beginnings

The terms create, start, and run all have similar semantics in everyday life. But each is a separate Docker command that creates and/or starts a container. Let’s look at creating a container first.

docker container create my_repo/my_image:my_tag — Create a container from an image.

I’ll shorten my_repo/my_image:my_tag to my_image for the rest of the article.

There are a lot of possible flags you could pass to create.

docker container create -a STDIN my_image

-a is short for --attach. Attach the container to STDIN, STDOUT or STDERR.

Now that we’ve created a container let’s start it.

docker container start my_container — Start an existing container.

Note that the container can be referred to by either the container’s ID or the container’s name.

docker container start my_container


Now that you know how to create and start a container, let’s turn to what’s probably the most common Docker command. It combines both create and start into one command: run.

docker container run my_image — Create a new container and start it. It also has a lot of options. Let’s look at a few.

docker container run -i -t -p 1000:8000 --rm my_image

-i is short for --interactive. Keep STDIN open even if unattached.

-t is short for --tty. Allocates a pseudo-terminal that connects your terminal with the container’s STDIN and STDOUT.

You need to specify both -i and -t to then interact with the container through your terminal shell.

-p is short for --publish. The port is the interface with the outside world. 1000:8000 maps port 8000 in the container to port 1000 on your machine. If you had an app that output something to the browser, you could then navigate your browser to localhost:1000 and see it.

--rm — Automatically delete the container when it stops running.
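The -p mapping string always reads HOST:CONTAINER — host side before the colon, container side after it. A plain-shell sketch of that split (no Docker needed; the variable names are just illustrative):

```shell
# Split a Docker port mapping of the form HOST:CONTAINER
mapping="1000:8000"
host_port="${mapping%%:*}"       # text before the colon -> host side
container_port="${mapping##*:}"  # text after the colon  -> container side
echo "host=$host_port container=$container_port"
# Prints: host=1000 container=8000
```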

Let’s look at some more examples of run.

docker container run -it my_image my_command

sh is a command you could specify at run time. sh will start a shell session inside your container that you can interact with through your terminal. sh is preferable to bash for Alpine images because Alpine images don’t come with bash installed. Type exit to end the interactive shell session.

Notice that we combined -i and -t into -it.

docker container run -d my_image

-d is short for --detach. Run the container in the background. Allows you to use the terminal for other commands while your container runs.

Checking Container Status

If you have running Docker containers and want to find out which one to interact with, then you need to list them.

docker container ls — List running containers. Also provides useful information about the containers.

docker container ls -a -s

-a is short for --all. List all containers (not just running ones).

-s is short for --size. List the size for each container.

docker container inspect my_container — See lots of info about a container.

docker container logs my_container — Print a container’s logs.


Container Endings

Sometimes you need to stop a running container.

docker container stop my_container — Stop one or more running containers gracefully. Gives the container 10 seconds by default to finish its processes before shutdown.

Or if you are impatient:

docker container kill my_container — Stop one or more running containers abruptly. It’s like pulling the plug on the TV. Prefer stop in most situations.
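The difference comes down to signals: stop sends SIGTERM and only falls back to SIGKILL after the grace period, while kill sends SIGKILL right away. The sketch below shows why SIGTERM is the polite option; it uses a throwaway shell process, so no Docker daemon is required:

```shell
# A process can trap SIGTERM and clean up before exiting -- exactly the
# chance that `docker container stop` gives it. SIGKILL cannot be trapped.
sh -c 'trap "echo caught SIGTERM, exiting cleanly" TERM; kill -TERM $$'
# Prints: caught SIGTERM, exiting cleanly
```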

docker container kill $(docker ps -q) — Kill all running containers.


Then you delete the container with:

docker container rm my_container — Delete one or more containers.

docker container rm $(docker ps -a -q) — Delete all containers that are not running.
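The $(…) wrapper in these two commands is ordinary shell command substitution: the inner command’s output becomes arguments to the outer one. A Docker-free sketch, with printf standing in for docker ps -q (which prints one container ID per line):

```shell
# Command substitution turns one command's output into another's arguments.
ids="$(printf 'abc123\ndef456\n')"  # stand-in for: ids="$(docker ps -a -q)"
echo $ids                           # unquoted, so the IDs split into separate words
# Prints: abc123 def456
```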

Those are the eight essential commands for Docker containers.

To recap, you first create a container. Then, you start the container. Or combine those steps with docker run my_container. Then, your app runs. Yippee!

Then, you stop a container with docker stop my_container. Eventually you delete the container with docker rm my_container.

Now, let’s turn to the magical container-producing molds called images.

Images

Here are seven commands for working with Docker images.

Developing Images

docker image build -t my_repo/my_image:my_tag . — Build a Docker image named my_image from the Dockerfile located at the specified path or URL.

-t is short for --tag. Tells Docker to tag the image with the provided tag, in this case my_tag.

The . (period) at the end of the command tells Docker to build the image according to the Dockerfile in the current working directory.
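The my_repo/my_image:my_tag reference has a fixed shape: repository (or registry path), image name, then tag. A shell sketch of how such a reference decomposes (the names are the article’s placeholders, not a real image):

```shell
# Decompose a tagged image reference of the form REPO/IMAGE:TAG
ref="my_repo/my_image:my_tag"
repo="${ref%%/*}"       # before the first slash
name_tag="${ref#*/}"    # after the first slash
name="${name_tag%%:*}"  # before the colon
tag="${name_tag##*:}"   # after the colon
echo "$repo | $name | $tag"
# Prints: my_repo | my_image | my_tag
```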


Once you have an image built you want to push it to a remote registry so it can be shared and pulled down as needed. Assuming you want to use Docker Hub, go there in your browser and create an account. It’s free. 😄

This next command isn’t an image command, but it’s useful to see here, so I’ll mention it.

docker login — Log in to a Docker registry. Enter your username and password when prompted.


docker image push my_repo/my_image:my_tag — Push an image to a registry.

Once you have some images you might want to inspect them.

Inspecting Images


docker image ls — List your images. Shows you the size of each image, too.

docker image history my_image — Display an image’s intermediate images with sizes and how they were created.

docker image inspect my_image — Show lots of details about your image, including the layers that make up the image.

Sometimes you’ll need to clean up your images.

Removing Images

docker image rm my_image — Delete the specified image. If the image is stored in a remote repository, the image will still be available there.

docker image rm $(docker images -a -q) — Delete all images. Careful with this one! Note that images that have been pushed to a remote registry will be preserved — that’s one of the benefits of registries. 😃

Now you know most essential Docker image-related commands. We’ll cover data-related commands in the next article.

Misc

docker version — List info about your Docker Client and Server versions.

docker login — Log in to a Docker registry. Enter your username and password when prompted.

docker system prune makes an appearance in the next article. Readers on Twitter and Reddit suggested that it would be good to add to this list. I agree, so I’m adding it.

docker system prune — Delete all unused containers, unused networks, and dangling images.

docker system prune -a --volumes

-a is short for --all. Delete unused images, not just dangling ones.

--volumes — Remove unused volumes. We’ll talk more about volumes in the next article.

UPDATE Feb. 7, 2019: Management Commands

In CLI 1.13 Docker introduced management command names that are logically grouped and consistently named. The old commands still work, but the new ones make it easier to get started with Docker. The original version of this article listed the old names. I’ve updated the article to use the management command names based on reader suggestions. Note that this change only introduces two command name changes — in most cases it just means adding container or image to the command. A mapping of the commands is here.

Wrap

If you are just getting started with Docker, these are the three most important commands:

docker container run my_image — Create a new container and start it. You’ll probably want some flags here.

docker image build -t my_repo/my_image:my_tag . — Build an image.

docker image push my_repo/my_image:my_tag — Push an image to a remote registry.

Here’s the larger list of essential Docker commands:

Containers

Use docker container my_command

create — Create a container from an image.
start — Start an existing container.
run — Create a new container and start it.
ls — List running containers.
inspect — See lots of info about a container.
logs — Print logs.
stop — Gracefully stop a running container.
kill — Stop the main process in a container abruptly.
rm — Delete a stopped container.

Images

Use docker image my_command

build — Build an image.
push — Push an image to a remote registry.
ls — List images.
history — See intermediate image info.
inspect — See lots of info about an image, including the layers.
rm — Delete an image.

Misc

docker version — List info about your Docker Client and Server versions.
docker login — Log in to a Docker registry.
docker system prune — Delete all unused containers, unused networks, and dangling images.

To view the CLI reference when using Docker, just enter the command docker in the command line. You can see the Docker docs here.

Now you can really build things with Docker! As my daughter might say in emoji: 🍒 🥝 🍊 🍋 🍉 🍏 🍎 🍇. Which I think translates to “Cool!” So go forth and play with Docker!

If you missed the earlier articles in this series, check them out. Here’s the first one:

In the final article in this series we’ll spice things up with a discussion of data in Docker. Follow me to make sure you don’t miss it!

I hope you found this article helpful. If you did, please give it some love on your favorite social media channels. Docker on! 👏

Source

Red Hat Extends Datacenter Infrastructure Control, Automation with Latest Version of Red Hat CloudForms

RALEIGH, N.C.

February 7, 2019

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced the general availability of Red Hat CloudForms 4.7, the latest version of its highly scalable infrastructure management tool. Red Hat CloudForms 4.7 includes deeper integration with Red Hat Ansible Automation and new infrastructure integrations designed to help streamline and simplify IT management across hybrid cloud infrastructure.


According to Gartner, “the landscape of cloud adoption is one of hybrid clouds and multiclouds. By 2020, 75% of organizations will have deployed a multicloud or hybrid cloud model.”1 Red Hat believes that to embrace hybrid cloud infrastructure, organizations first need foundational management capabilities and oversight across physical, virtual and private cloud environments. The latest version of Red Hat CloudForms helps to enable unified and consistent management across on-premises and virtual resources through policy controlled self-service for consumers of IT services.

Red Hat CloudForms provides the on-premises component of Red Hat’s robust hybrid cloud management portfolio, with Red Hat Ansible Tower offering automation capabilities for public cloud services. Combined, the two products drive a single pane for managing across enterprise IT’s footprints, delivering the capabilities to automate workflows, standardize system configurations and better maintain system stability and reliability across the IT estate.

Enhanced automation

Red Hat CloudForms 4.7 drives additional integration with Red Hat Ansible Automation by making more Ansible capabilities available natively within CloudForms. Users can now execute Red Hat Ansible Tower workflows directly from the CloudForms interface, helping users to more simply implement sophisticated automation. By further fusing the two solutions, Red Hat CloudForms 4.7 can help organizations to speed IT deployments while maintaining improved service stability and performance.

New infrastructure integrations

Red Hat CloudForms 4.7 provides new and enhanced integrations with physical infrastructure and networking providers, including Nuage Networks Virtualized Services Platform (VSP) and Lenovo XClarity, helping enable users to better manage physical and network compute infrastructure alongside virtual and multi-cloud through a single solution. The integration helps users provision physical infrastructure the same way they would virtual – improving performance and simplifying day 1 and day 2 operations.

Supporting Quote

Joe Fitzgerald, vice president, Management, Red Hat
“While migrating applications to hybrid cloud infrastructure has become a priority for many enterprise IT organizations, effective management across physical, virtual and private cloud infrastructure remains a key building block to fully adopting hybrid cloud. Red Hat CloudForms 4.7 provides a necessary stepping stone for organizations seeking to harness public cloud services in tandem with on-premises infrastructure, enabling users to gain increased control over datacenter environments while providing a unified and consistent set of management capabilities for disparate on-premises resources.”

1. Source: Gartner, Inc., “Market Insight: Making Lots of Money in the New World of Hybrid Cloud and Multicloud,” Sid Nag and David Ackerman, September 7, 2018

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-looking statements

Certain statements contained in this press release may constitute “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to our pending merger with International Business Machines Corporation, the ability of the Company to compete effectively; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; delays or reductions in information technology spending; the integration of acquisitions and the ability to market successfully acquired technologies and products; risks related to errors or defects in our offerings and third-party products upon which our offerings depend; risks related to the security of our offerings and other data security vulnerabilities; fluctuations in exchange rates; changes in and a dependence on key personnel; the effects of industry consolidation; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to meet financial and operational challenges encountered in our international operations; and ineffective management of, and control over, the Company’s growth and international operations, as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission’s website at http://www.sec.gov), including those found therein under the captions “Risk Factors” and “Management’s Discussion and Analysis of 
Financial Condition and Results of Operations”. In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company’s views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company’s views as of any date subsequent to the date of this press release.

Source

Top 5 open source network monitoring tools

Maintaining a live network is one of a system administrator’s most essential tasks, and keeping a watchful eye over connected systems helps keep the network functioning at its best.

There are many different ways to keep tabs on a modern network. Network monitoring tools are designed for the specific purpose of monitoring network traffic and response times, while application performance management solutions use agents to pull performance data from the application stack. If you have a live network, you need network monitoring to make sure you aren’t vulnerable to an attacker. Likewise, if you rely on lots of different applications to run your daily operations, you will need an application performance management solution as well.

This article will focus on open source network monitoring tools. These tools help monitor individual nodes and applications for signs of poor performance. Through one window, you can view the performance of an entire network and even get alerts to keep you in the loop if you’re away from your desk.

Before we get into the top five network monitoring tools, let’s look more closely at the reasons you need to use one.

Network monitoring tools are vital to maintaining networks because they allow you to keep an eye on devices connected to the network from a central location. These tools help flag devices with subpar performance so you can step in and run troubleshooting to get to the root of the problem.

Running in-depth troubleshooting can minimize performance problems and prevent security breaches. In practical terms, this keeps the network online and eliminates the risk of falling victim to unnecessary downtime. Regular network maintenance can also help prevent outages that could take thousands of users offline.

A network monitoring tool enables you to:

  • Autodiscover devices connected to your network
  • View live and historic performance data for a range of devices and applications
  • Configure alerts to notify you of unusual activity
  • Generate graphs and reports to analyze network activity in greater depth

The top 5 open source network monitoring tools

Now that you know why you need a network monitoring tool, take a look at the top five open source tools to see which might best meet your needs.

Cacti

If you know anything about open source network monitoring tools, you’ve probably heard of Cacti. It’s a graphing solution that acts as an addition to RRDTool and is used by many network administrators to collect performance data in LANs. Cacti comes with Simple Network Management Protocol (SNMP) support on Windows and Linux to create graphs of traffic data.

Cacti typically works by using data sourced from user-created scripts that ping hosts on a network. The values returned by the scripts are stored in a MySQL database, and this data is used to generate graphs.

This sounds complicated, but Cacti has templates to help speed the process along. You can also create a graph or data source template that can be used for future monitoring activity. If you’d like to try it out, download Cacti for free on Linux and Windows.
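The user-created polling scripts mentioned above can be very small: a Cacti data-input script just prints one value per field on stdout, and Cacti stores and graphs whatever it prints. A minimal hypothetical example that reports the 1-minute load average instead of pinging a host (Linux-specific, since it reads /proc/loadavg):

```shell
#!/bin/sh
# Hypothetical Cacti data-input script: print a single numeric value
# on stdout; Cacti records what the script prints and graphs it.
# Here: the 1-minute load average from /proc/loadavg (Linux only).
cut -d ' ' -f 1 /proc/loadavg
```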

Nagios Core

Nagios Core is one of the most well-known open source monitoring tools. It provides a network monitoring experience that combines open source extensibility with a top-of-the-line user interface. With Nagios Core, you can auto-discover devices, monitor connected systems, and generate sophisticated performance graphs.

Support for customization is one of the main reasons Nagios Core has become so popular. For example, Nagios V-Shell was added as a PHP web interface built in AngularJS, with searchable tables and a RESTful API designed with CodeIgniter.

If you need more versatility, you can check the Nagios Exchange, which features a range of add-ons that can incorporate additional features into your network monitoring. These range from the strictly cosmetic to monitoring enhancements like nagiosgraph. You can try it out by downloading Nagios Core for free.

Icinga 2

Icinga 2 is another widely used open source network monitoring tool. It builds on the groundwork laid by Nagios Core. It has a flexible RESTful API that allows you to enter your own configurations and view live performance data through the dashboard. Dashboards are customizable, so you can choose exactly what information you want to monitor in your network.

Visualization is an area where Icinga 2 performs particularly well. It has native support for Graphite and InfluxDB, which can turn performance data into full-featured graphs for deeper performance analysis.

Icinga 2 also allows you to monitor both live and historical performance data. It offers excellent alerting capabilities for live monitoring, and you can configure it to send notifications of performance problems by email or text. You can download Icinga 2 for free for Windows, Debian, RHEL, SLES, Ubuntu, Fedora, and openSUSE.

Zabbix

Zabbix is another industry-leading open source network monitoring tool, used by companies from Dell to Salesforce on account of its malleable network monitoring experience. Zabbix does network, server, cloud, application, and services monitoring very well.

You can track network information such as bandwidth usage, network health, and configuration changes, and weed out problems that need to be addressed. Performance data in Zabbix is collected through SNMP, Intelligent Platform Management Interface (IPMI), and IPv6.

Zabbix offers a high level of convenience compared to other open source monitoring tools. For instance, you can automatically detect devices connected to your network before using an out-of-the-box template to begin monitoring your network. You can download Zabbix for free for CentOS, Debian, Oracle Linux, Red Hat Enterprise Linux, Ubuntu, and Raspbian.

Prometheus

Prometheus is an open source network monitoring tool with a large community following. It was built specifically for monitoring time-series data. You can identify time-series data by metric name or key-value pairs. Time-series data is stored on local disks so that it’s easy to access in an emergency.
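For illustration, the metric-name-plus-labels model looks like this in Prometheus’s text exposition format (generic sample names and values, not output from any particular exporter):

```
# HELP http_requests_total Total HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="get",code="200"} 98350
```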

Prometheus’ Alertmanager allows you to view notifications every time it raises an event. Alertmanager can send notifications via email, PagerDuty, or OpsGenie, and you can silence alerts if necessary.

Prometheus’ visual elements are excellent and allow you to switch from the browser to the template language and Grafana integration. You can also integrate various third-party data sources into Prometheus from Docker, StatsD, and JMX to customize your Prometheus experience.

As a network monitoring tool, Prometheus is suitable for organizations of all sizes. The onboard integrations and the easy-to-use Alertmanager make it capable of handling any workload, regardless of its size. You can download Prometheus for free.

Which are best?

No matter what industry you’re working in, if you rely on a network to do business, you need to implement some form of network monitoring. Network monitoring tools are an invaluable resource that help provide you with the visibility to keep your systems online. Monitoring your systems will give you the best chance to keep your equipment in working order.

As the tools on this list show, you don’t need to spend an exorbitant amount of money to reap the rewards of network monitoring. Of the five, I believe Icinga 2 and Zabbix are the best options for providing you with everything you need to start monitoring your network to keep it online. Staying vigilant will help to minimize the chance of being caught off-guard by performance issues.

Source

Getting started with acme.sh Let’s Encrypt SSL client

Acme.sh is a simple, powerful, and easy-to-use ACME protocol client written purely in shell (Unix shell) language, compatible with the bash, dash, and sh shells. It helps manage the installation, renewal, and revocation of SSL certificates. It supports the ACME v1 and ACME v2 protocols, as well as ACME v2 wildcard certificates. Being a zero-dependency ACME client makes it even better: you don’t need to download and install half the internet to get it running. The tool does not require root or sudo access, although running it as root is recommended.

Acme.sh supports the following validation methods that you can use to confirm domain ownership:

  • Webroot mode
  • Standalone mode
  • Standalone tls-alpn mode
  • Apache mode
  • Nginx mode
  • DNS mode
  • DNS alias mode
  • Stateless mode

What is Let’s Encrypt

Let’s Encrypt (LE) is a certificate authority (CA) and project that offers free, automated SSL/TLS certificates, with the goal of encrypting the entire web. If you own a domain name and have shell access to your server, you can use Let’s Encrypt to obtain a trusted certificate at no cost. Let’s Encrypt can issue SAN certs covering up to 100 hostnames, as well as wildcard certificates. All certs are valid for a period of 90 days.

Acme.sh usage and basic commands

In this section, I will show some of the most common acme.sh commands and options.

Acme.sh installation

You have a few options to install acme.sh.

Install from web via curl or wget:

curl https://get.acme.sh | sh
source ~/.bashrc

or

wget -O - https://get.acme.sh | sh
source ~/.bashrc

Install from GitHub:

curl https://raw.githubusercontent.com/Neilpang/acme.sh/master/acme.sh | INSTALLONLINE=1 sh

or

wget -O - https://raw.githubusercontent.com/Neilpang/acme.sh/master/acme.sh | INSTALLONLINE=1 sh

Git clone and install:

git clone https://github.com/Neilpang/acme.sh.git
cd ./acme.sh
./acme.sh --install
source ~/.bashrc

The installer will perform 3 actions:

  1. Create and copy acme.sh to your home dir ($HOME): ~/.acme.sh/. All certs will be placed in this folder too.
  2. Create alias for: acme.sh=~/.acme.sh/acme.sh.
  3. Create daily cron job to check and renew the certs if needed.
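Concretely, after a default install you should find an alias line appended to your shell rc file and a daily crontab entry roughly like the ones below (the exact paths and the cron minute vary per install; check your own system with `crontab -l`):

```shell
# Appended to ~/.bashrc (or ~/.profile) by the installer:
alias acme.sh=~/.acme.sh/acme.sh

# Daily renewal check in the user's crontab (minute chosen at install time):
# 47 0 * * * "/home/user/.acme.sh"/acme.sh --cron --home "/home/user/.acme.sh" > /dev/null
```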

Advanced installation:

git clone https://github.com/Neilpang/acme.sh.git
cd acme.sh
./acme.sh --install \
          --home ~/myacme \
          --config-home ~/myacme/data \
          --cert-home ~/mycerts \
          --accountemail "hi@acme.sh" \
          --accountkey ~/myaccount.key \
          --accountconf ~/myaccount.conf \
          --useragent "this is my client."

You don’t need to set all the options, just the ones you care about.

Options explained:

  • --home is the customized directory to install acme.sh in. By default, it installs into ~/.acme.sh.
  • --config-home is a writable folder; acme.sh will write all its files (including certs/keys and configs) there. By default, it’s in --home.
  • --cert-home is a customized dir to save the certs you issue. By default, certs are saved in --config-home.
  • --accountemail is the email used to register an account with Let’s Encrypt; you will receive renewal notice emails there. Default is empty.
  • --accountkey is the file that stores your account private key. By default, it’s saved in --config-home.
  • --useragent is the user-agent header value sent to Let’s Encrypt.

After installation is complete, you can verify it by checking acme.sh version:

acme.sh --version
# v2.8.1

The program has a lot of commands and parameters that can be used. To get help you can run:

acme.sh --help

Issue an SSL cert

If you already have a web server running, you should use webroot mode. You will need write access to the web root folder. Here are some example commands that can be used to obtain a cert via webroot mode:

Single domain + Webroot mode:

acme.sh --issue -d example.com --webroot /var/www/example.com

Multiple domains in the same cert + Webroot mode:

acme.sh --issue -d example.com -d www.example.com -d mail.example.com --webroot /var/www/example.com

Single domain ECC/ECDSA cert + Webroot mode:

acme.sh --issue -d example.com --webroot /var/www/example.com --keylength ec-256

Multiple domains in the same ECC/ECDSA cert + Webroot mode:

acme.sh --issue -d example.com -d www.example.com -d mail.example.com --webroot /var/www/example.com --keylength ec-256

Valid values for --keylength are: 2048 (default), 3072, 4096, 8192 or ec-256, ec-384.

If you don’t have a web server (maybe you are on an SMTP or FTP server) and port 80 is free, you can use standalone mode. To use this mode, you’ll need to install the socat tool first.

Single domain + Standalone mode:

acme.sh --issue -d example.com --standalone

Multiple domains in the same cert + Standalone mode:

acme.sh --issue -d example.com -d www.example.com -d mail.example.com --standalone

If you don’t have a web server (maybe you are on an SMTP or FTP server) and port 443 is free, you can use standalone TLS ALPN mode. Acme.sh has a built-in standalone TLS web server that can listen on port 443 to issue the cert.

Single domain + Standalone TLS ALPN mode:

acme.sh --issue -d example.com --alpn

Multiple domains in the same cert + Standalone TLS ALPN mode:

acme.sh --issue -d example.com -d www.example.com --alpn

Automatic DNS API integration

If your DNS provider has an API, acme.sh can use it to add the DNS TXT record for you automatically. Your cert will be automatically issued and renewed; no manual work is required. Before requesting the certs, configure your API keys and email. Currently, acme.sh has automatic DNS integration with around 60 DNS providers natively, and it can use the Lexicon tool for those that are not supported natively.

Single domain + CloudFlare DNS API mode:

export CF_Key="sdfsdfsdfljlbjkljlkjsdfoiwje"
export CF_Email="xxxx@sss.com"
acme.sh --issue -d example.com --dns dns_cf

Wildcard cert + CloudFlare DNS API mode:

export CF_Key="sdfsdfsdfljlbjkljlkjsdfoiwje"
export CF_Email="xxxx@sss.com"
acme.sh --issue -d example.com -d '*.example.com' --dns dns_cf

If your DNS provider doesn’t support any API access, you can add the TXT record manually.

acme.sh --issue --dns -d example.com -d www.example.com -d cp.example.com

You should get an output like below:

Add the following txt record:
Domain:_acme-challenge.example.com
Txt value:9ihDbjYfTExAYeDs4DBUeuTo18KBzwvTEjUnSwd32-c

Add the following txt record:
Domain:_acme-challenge.www.example.com
Txt value:9ihDbjxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Please add those txt records to the domains. Waiting for the dns to take effect.

Then just rerun with renew argument:

acme.sh --renew -d example.com

Keep in mind that this is DNS manual mode, so you can’t auto-renew your certs. You will have to add a new TXT record to your domain by hand whenever it’s time to renew. Use DNS API mode instead where possible, because it can be automated.

Install Let’s encrypt SSL cert

After the cert(s) are generated, you probably want to install/copy the issued certificate(s) to the correct location on disk. Use this command to copy the certs to the target files; don’t use the cert files in the ~/.acme.sh/ folder directly, as they are for internal use only and the folder structure may change in the future. Before installation, create a sensible directory to store your certificates. That can be /etc/letsencrypt, /etc/nginx/ssl, or /etc/apache2/ssl, for example, depending on your web server software and your own preferences for storing SSL-related files.

Apache example:

acme.sh --install-cert \
        --domain example.com \ 
        --cert-file /path/to/cert/cert.pem \
        --key-file /path/to/keyfile/key.pem \
        --fullchain-file /path/to/fullchain/fullchain.pem \
        --reloadcmd "sudo systemctl reload apache2.service"

Nginx example:

acme.sh --install-cert \
        --domain example.com \ 
        --cert-file /path/to/cert/cert.pem \
        --key-file /path/to/keyfile/key.pem \
        --fullchain-file /path/to/fullchain/fullchain.pem \
        --reloadcmd "sudo systemctl reload nginx.service"

The parameters are stored in the .acme.sh configuration file, so you need to get them right for your system, as this file is read when the cron job runs renewal. The reloadcmd value depends on your operating system and init system.

Renew the Let’s Encrypt SSL certs

You don’t need to renew the certs manually. All the certs will be renewed automatically every 60 days.

However, you can also force a renewal of a cert:

acme.sh --renew -d example.com --force

or, for ECC cert:

acme.sh --renew -d example.com --force --ecc

How to upgrade acme.sh

You can update acme.sh to the latest code with:

acme.sh --upgrade

You can also enable auto upgrade:

acme.sh --upgrade --auto-upgrade

Then acme.sh will be kept up to date automatically.

That’s it. If you get stuck on anything, visit the acme.sh wiki page at https://github.com/Neilpang/acme.sh/wiki.

Source

Disk Encryption for Low-End Hardware

Eric Biggers and Paul Crowley were unhappy with the disk encryption
options available for Android on low-end phones and watches. For
them, it was an ethical issue. Eric said:

We believe encryption is
for everyone, not just those who can afford it. And while it’s
unknown how long CPUs without AES support will be around, there
will likely always be a “low end”; and in any case, it’s immensely
valuable to provide a software-optimized cipher that doesn’t depend
on hardware support. Lack of hardware support should not be an
excuse for no encryption.

Unfortunately, they were not able to find any existing encryption
algorithm that was both fast and secure, and that would work with existing
Linux kernel infrastructure. They therefore designed the Adiantum
encryption mode, which they described in a light, easy-to-read and
completely non-mathematical way.

Essentially, Adiantum is not a new form of encryption; it relies
on the ChaCha stream cipher developed by D. J. Bernstein in 2008.
As Eric put it, “Adiantum is a construction, not a primitive. Its
security is reducible to that of XChaCha12 and AES-256, subject to
a security bound; the proof is in Section 5 of our paper. Therefore,
one need not ‘trust’ Adiantum; they only need trust XChaCha12 and
AES-256.”

Eric reported that Adiantum offered a 20% speed improvement over
his and Paul’s earlier HPolyC encryption mode, and it offered a very
slight improvement in actual security.

Eric posted some patches, adding Adiantum to the Linux kernel’s
crypto API. He remarked, “Some of these patches conflict with the
new ‘Zinc’ crypto library. But I don’t know when Zinc will be
merged, so for now, I’ve continued to base this patchset on the
current ‘cryptodev’.”

Jason A. Donenfeld’s Zinc (“Zinc Is Not crypto/”) is a front-runner
to replace the existing kernel crypto API; it’s simpler and more
low-level than that API, offering a less terrifying coding experience.

Jason replied to Eric’s initial announcement. He was very happy to
see such a good disk encryption alternative for low-end hardware,
but he asked Eric and Paul to hold off on trying to merge their
patches until they could rework them to use the new Zinc security
infrastructure. He said, “In fact, if you already want to build it
on top of Zinc, I’m happy to work with you on that in a shared repo
or similar.”

He also suggested that Eric and Paul send their paper through various
academic circles to catch any unanticipated problems with their
encryption system.

But Paul replied:

Unlike a new primitive whose strength can only
be known through attempts at cryptanalysis, Adiantum is a construction
based on well-understood and trusted primitives; it is secure if
the proof accompanying it is correct. Given that (outside competitions
or standardization efforts) no-one ever issues public statements
that they think algorithms or proofs are good, what I’m expecting
from academia is silence 🙂 The most we could hope for would be
getting the paper accepted at a conference, and we’re pursuing that
but there’s a good chance that won’t happen simply because it’s not
very novel. It basically takes existing ideas and applies them using
a stream cipher instead of a block cipher, and a faster hashing
mode; it’s also a small update from HPolyC. I’ve had some private
feedback that the proof seems correct, and that’s all I’m expecting
to get.

Eric also replied, regarding Zinc integration:

For now
I’m hesitant to completely abandon the current approach and bet the
farm on Zinc. Zinc has a large scope and various controversies
that haven’t yet been fully resolved to everyone’s satisfaction,
including unclear licenses on some of the essential assembly files.
It’s not appropriate to grind kernel crypto development to a halt
while everyone waits for Zinc.

He added that if Zinc is ready, he’d be happy to use it. He just
wasn’t sure whether it was.

However, in spite of the uncertainty, Eric later said, “I started
a branch based on Zinc:
https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git, branch
‘adiantum-zinc’.”

He listed the work he’d done so far and the work that remained to
be done. But regarding Zinc’s remaining non-technical issues, he said:

Both
myself and others have expressed concerns about these issues
previously too, yet they remain unaddressed nor is there a documentation
file explaining things. So please understand that until it’s clear
that Zinc is ready, I still have to have Adiantum ready to go without
Zinc, just in case.

Jason was happy to see the Zinc-based repository and promised to
look it over. He also promised to add a documentation file covering
many of Eric’s concerns before posting another series of Zinc
patches. And as far as Eric and Paul being ready to go without Zinc
integration, he added, “I do really appreciate you taking the time,
though, to try this out with Zinc as well. Thanks for that.”

Meanwhile, Herbert Xu accepted Eric and Paul’s original patch-set,
so there may be a bit of friendly shuffling as both Zinc and Adiantum
progress.

It’s nice to see this sort of attention being given to low-end
hardware. But it’s nothing new. The entire Linux kernel is supposed
to be able to run on absolutely everything—or at least everything
that’s still in use in the world. I don’t think there are too many
actual 386 systems in use anymore, but for real hardware in the
real world, pretty much all of it should be able to run a fully
featured Linux OS.

Note: if you’re mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Source

Bash-it – Bash Framework to Control Your Scripts and Aliases

Bash-it is a bundle of community Bash commands and scripts for Bash 3.2+, which comes with autocompletion, themes, aliases, custom functions, and more. It offers a useful framework for developing, maintaining and using shell scripts and custom commands for your daily work.

If you are using the Bash shell on a daily basis and looking for an easy way to keep track of all your scripts, aliases and functions, then Bash-it is for you! Stop polluting your ~/bin directory and .bashrc file, fork/clone Bash-it and begin hacking away.

How to Install Bash-it in Linux

To install Bash-it, first you need to clone the following repository to a location of your choice, for example:

$ git clone --depth=1 https://github.com/Bash-it/bash-it.git ~/.bash_it

Then run the following command to install Bash-it (it automatically backs up your existing ~/.bash_profile or ~/.bashrc, depending on your OS). You will be asked “Would you like to keep your .bashrc and append bash-it templates at the end? [y/N]”; answer according to your preference.

$ ~/.bash_it/install.sh 
Install Bash-It in Linux

After installation, you can use the ls command to verify the Bash-it installation files and directories, as shown.

$ ls .bash_it/
Verify Bash-It Installation

To start using Bash-it, open a new terminal tab or run:

$ source $HOME/.bashrc

How to Customize Bash-it in Linux

To customize Bash-it, you need to edit your modified ~/.bashrc shell startup file. To list all installed and available aliases, completions, and plugins, run the following commands, which should also show you how to enable or disable them:

$ bash-it show aliases
$ bash-it show completions
$ bash-it show plugins

Next, we will demonstrate how to enable aliases, but first, list the current aliases with the following command.

$ alias 
View Current Aliases in Linux

All the aliases are located in the $HOME/.bash_it/aliases/ directory. Now let’s enable the apt aliases as shown.

$ bash-it enable alias apt
Enable Alias in Linux

Then reload bash-it configs and check the current aliases once more.

$ bash-it reload
$ alias

The output of the alias command now shows that the apt aliases are enabled.

Check Current Aliases in Linux

You can disable a newly enabled alias with the following commands.

$ bash-it disable alias apt
$ bash-it reload
Disable Aliases in Linux

You can use the same steps to enable or disable completions ($HOME/.bash_it/completion/) and plugins ($HOME/.bash_it/plugins/). All enabled features are located in the $HOME/.bash_it/enabled directory.
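Under the hood, enabling a component is essentially a bookkeeping step: bash-it records each enabled alias set, completion, or plugin as a symlink in that enabled directory, and a reload picks them up. The sketch below illustrates the mechanism in a throwaway temporary directory rather than a real installation; the 150--- priority prefix mirrors how recent bash-it versions name the links, but exact names may vary between versions.

```shell
#!/usr/bin/env bash
# Illustration only: mimic roughly what 'bash-it enable alias apt'
# records, using a temporary directory instead of a real installation.
BASE=$(mktemp -d)
mkdir -p "$BASE/aliases/available" "$BASE/enabled"
touch "$BASE/aliases/available/apt.aliases.bash"

# Enabling a component boils down to creating a symlink in enabled/:
ln -s "$BASE/aliases/available/apt.aliases.bash" \
      "$BASE/enabled/150---apt.aliases.bash"

# The symlink marks the apt alias set as enabled:
ls -l "$BASE/enabled"
```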

How to Manage Bash-it Theme

The default theme for bash-it is bobby; you can check this using the BASH_IT_THEME env variable as shown.

$ echo $BASH_IT_THEME
Check Bash-it Theme

You can find over 50 Bash-it themes in the $BASH_IT/themes directory.

$ ls $BASH_IT/themes
View Bash-It Themes

To preview all the themes in your shell before using any, run the following command.

$ BASH_PREVIEW=true bash-it reload
Preview All Bash-It Themes

Once you have identified a theme to use, open your .bashrc file, find the following line, and change its value to the name of the theme you want, for example:

export BASH_IT_THEME='essential'
Change Bash-It Theme

Save and close the file, then source it as before.

$ source $HOME/.bashrc

Note: In case you have built your own custom theme outside of the $BASH_IT/themes directory, point the BASH_IT_THEME variable directly to the theme file:

export BASH_IT_THEME='/path/to/your/custom/theme/'

And to disable theming, leave the above env variable empty.

export BASH_IT_THEME=''

How to Search Plugins, Aliases or Completions

You can easily check which plugins, aliases or completions are available for a specific programming language, framework or environment.

The trick is simple: just search for one or more terms related to the commands you use frequently, for example:

$ bash-it search python pip pip3 pipenv
$ bash-it search git
Search in Bash-It

To view help messages for the aliases, completions and plugins, run:

$ bash-it help aliases
$ bash-it help completions
$ bash-it help plugins

You can create your own custom scripts and aliases in the following files in the respective directories:

aliases/custom.aliases.bash
completion/custom.completion.bash
lib/custom.bash
plugins/custom.plugins.bash
custom/themes/<custom theme name>.theme.bash
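For example, to add a personal alias you would append its definition to aliases/custom.aliases.bash and then reload. The sketch below is a self-contained illustration with a hypothetical update alias, written against a temporary directory so it does not touch a real installation:

```shell
#!/usr/bin/env bash
# Hypothetical example: define a custom alias in custom.aliases.bash.
BASH_IT_DIR=$(mktemp -d)          # stand-in for ~/.bash_it
mkdir -p "$BASH_IT_DIR/aliases"
echo "alias update='sudo apt update'" \
    >> "$BASH_IT_DIR/aliases/custom.aliases.bash"

# Aliases only expand in non-interactive shells with this option set:
shopt -s expand_aliases

# 'bash-it reload' would source this file for you; do it directly here:
source "$BASH_IT_DIR/aliases/custom.aliases.bash"

alias update    # prints the definition that was just added
```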

Updating and Uninstalling Bash-It

To update Bash-it to the latest version, simply run:

$ bash-it update

If you don’t like Bash-it anymore, you can uninstall it by running the following commands.

$ cd $BASH_IT
$ ./uninstall.sh

The uninstall.sh script will restore your previous Bash startup file. Once it has completed the operation, you need to remove the Bash-it directory from your machine by running:

$ rm -rf $BASH_IT  

And remember to start a new shell for the changes to take effect, or source the file again as shown.

$ source $HOME/.bashrc

You can see all usage options by running:

$ bash-it help

Finally, Bash-it comes with a number of cool features related to Git.

For more information, see the Bash-it Github repository: https://github.com/Bash-it/bash-it.

That’s all! Bash-it is an easy and productive way to keep all your bash scripts and aliases under control.

