Install Docker on Raspberry Pi

Docker is a containerization system for Linux. It is used to run lightweight Linux containers on top of another Linux host operating system (a.k.a. the Docker host). If you’re trying to learn Docker on real hardware, then a Raspberry Pi is a very cost-effective option. As Docker containers are lightweight, you can easily fit 5-10 or more of them on a single Raspberry Pi host. I recommend you buy a Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ if you want to set up Docker on it, as these models have 1 GB of memory (RAM). The more memory you have the better, but sadly, no Raspberry Pi released so far has more than 1 GB of memory.

In this article, I will show you how to install Docker on Raspberry Pi 3 Model B. I will be using Ubuntu Core operating system on my Raspberry Pi 3 Model B for the demonstration.

You need:

  • A Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ single-board computer.
  • A microSD card of at least 16 GB for installing Ubuntu Core.
  • An Ethernet cable for the internet connection. You can also use the built-in Wi-Fi, but I prefer a wired connection as I find it more reliable.
  • An HDMI cable.
  • A monitor with an HDMI port.
  • A USB keyboard for configuring Ubuntu Core for the first time.
  • A power adapter for the Raspberry Pi.

Install Ubuntu Core on Raspberry Pi 3:

I showed you how to install and configure Ubuntu Core on Raspberry Pi 2 and Raspberry Pi 3 in another Raspberry Pi article I wrote on LinuxHint. You can check it out at (Link to the Install Ubuntu on Raspberry Pi article)

Powering on Raspberry Pi 3:

Once you have everything set up, connect all the required devices and connectors to your Raspberry Pi and turn it on.

Connecting to Raspberry Pi 3 via SSH:

Once you have Ubuntu Core OS configured, you should be able to connect to your Raspberry Pi 3 via SSH. The required information to connect to your Raspberry Pi via SSH should be displayed on the Monitor connected to your Raspberry Pi as you can see in the marked section of the screenshot below.

Now, from any computer whose SSH key is added to your Ubuntu One account, run the following command to connect to the Raspberry Pi via SSH:

$ ssh dev.shovon8@192.168.2.15

NOTE: Replace the username and IP address in the command with your own.

You may see an error while connecting to your Raspberry Pi via SSH. In that case, just run the following command to remove the stale host key:

$ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.2.15

Now, you should be able to connect to your Raspberry Pi via SSH again. If it’s the first time you’re connecting to your Raspberry Pi via SSH, then you should see the following message. Just type in yes and then press <Enter>.

You should be connected.

Installing Docker on Raspberry Pi 3:

On Ubuntu Core, you can only install snap packages. Luckily, a Docker snap package is available in the official snap store, so you won’t have any trouble installing Docker on Raspberry Pi 3. To install Docker on Raspberry Pi 3, run the following command:

$ sudo snap install docker

As you can see, Docker is being installed. It will take a while to complete.

At this point, Docker is installed. As you can see, the version of Docker is 18.06.1, the Docker Community Edition.

Now, run the following command to connect the Docker snap’s home interface, which gives Docker access to files in your home directory:

$ sudo snap connect docker:home

Using Docker on Raspberry Pi 3:

In this section, I will show you how to run Docker containers on Raspberry Pi 3. Let’s get started. You can search for Docker images with the following command:

$ sudo docker search KEYWORD

For example, to search for Ubuntu docker images, run the following command:

$ sudo docker search ubuntu

As you can see, the search result is displayed. You can download and use any Docker image from here. The first Docker image in the search result is ubuntu. Let’s download and install it.

To download (or, in Docker terms, pull) the ubuntu image, run the following command:

$ sudo docker pull ubuntu

As you can see, the Docker ubuntu image is being pulled.

The Docker ubuntu image is pulled.

You can list all the Docker images that you’ve pulled with the following command:
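
$ sudo docker images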

Now, you can create a Docker container using the ubuntu image with the following command:

$ sudo docker run -it ubuntu

As you can see, a Docker container is created and you’re logged into the shell of the new container.

Now, you can run any command you want here as you can see in the screenshot below.

To exit out of the shell of the container, run the following command:
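
$ exit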

You can list all the containers you’ve created with the following command:
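
$ sudo docker ps -a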

As you can see, the container I created earlier has the Container ID 0f097e568547. The container is not running anymore.

You can start the container 0f097e568547 again, with the following command:

$ sudo docker start 0f097e568547

As you can see, the container 0f097e568547 is running again.

To log in to the shell of the container, run the following command:

$ sudo docker attach 0f097e568547

As you can see, I am logged into the shell of the container 0f097e568547 again.

You can check how much memory, CPU, disk I/O, network I/O etc the running containers are using with the following command:
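
$ sudo docker stats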

As you can see, I have two containers running and their ID, name, CPU usage, memory usage, network usage, disk usage, pid etc are displayed in a nicely formatted way.

I am running Docker and 2 containers on my Raspberry Pi 3 and I still have about 786 MB of memory available/free. Docker on Raspberry Pi 3 is amazing.

So, that’s how you install and use Docker on Raspberry Pi 3. Thanks for reading this article.

Source

Virtualizing the Clock – Linux Journal

Dmitry Safonov wanted to implement a namespace for time information. The
twisted and bizarre thing about virtual machines is that they get more
virtual all the time. There’s always some new element of the host system
that can be given its own namespace and enter the realm of the virtual
machine. But as that process rolls forward, virtual systems have to share
aspects of themselves with other virtual systems and the host system
itself—for example, the date and time.

Dmitry’s idea is that users should be able to set the day and time on their
virtual systems, without worrying about other systems being given the same
day and time. This is actually useful, beyond the desire to live in the past
or future. Being able to set the time in a container is apparently one of
the crucial elements of being able to migrate containers from one physical
host to another, as Dmitry pointed out in his post.

As he put it:

The kernel provides access to several clocks:
CLOCK_REALTIME,
CLOCK_MONOTONIC, CLOCK_BOOTTIME. Last two clocks are monotonous, but the
start points for them are not defined and are different for each running
system. When a container is migrated from one node to another, all clocks
have to be restored into consistent states; in other words, they have to
continue running from the same points where they have been dumped.

Dmitry’s patch wasn’t feature-complete. There were various questions still
to consider. For example, how should a virtual machine interpret the time
changing on the host hardware? Should the virtual time change by the same
offset? Or continue unchanged? Should file creation and modification times
reflect the virtual machine’s time or the host machine’s time?

Eric W. Biederman supported this project overall and liked the code in the
patch, but he did feel that the patch could do more. He thought it was a little
too lightweight. He wanted users to be able to set up new time namespaces at
the drop of a hat, so they could test things like leap seconds before
they actually occurred and see how their own projects’ code worked under
those various conditions.

To do that, he felt there should be a whole “struct timekeeper” data
structure for each namespace. Then pointers to those structures could be
passed around, and the times of virtual machines would be just as
manipulable and useful as times on the host system.

In terms of timestamps for filesystems, however, Eric felt that it might
be best to limit the feature set a little bit. If users could create files
with timestamps in the past, it could introduce some nasty security
problems. He felt it would be sufficient simply to “do what distributed
filesystems do when dealing with hosts with different clocks”.

The two went back and forth on the technical implementation details. At one
point, Eric remarked, in defense of his preference:

My experience with
namespaces is that if we don’t get the advanced features working there is
little to no interest from the core developers of the code, and the
namespaces don’t solve additional problems. Which makes the namespace a
hard sell. Especially when it does not solve problems the developers of the
subsystem have.

At one point, Thomas Gleixner came into the conversation to remind Eric that
the time code needed to stay fast. Virtualization was good, he said, but
“timekeeping_update() is already heavy and walking through a gazillion of
namespaces will just make it horrible.”

He reminded Eric and Dmitry that:

It’s not only timekeeping, i.e. reading time, this is also affecting all
timers which are armed from a namespace.

That gets really ugly because when you do settimeofday() or adjtimex() for a
particular namespace, then you have to search for all armed timers of that
namespace and adjust them.

The original posix timer code had the same issue because it mapped the clock
realtime timers to the timer wheel so any setting of the clock caused a full
walk of all armed timers, disarming, adjusting and requeing them. That’s
horrible not only performance wise, it’s also a locking nightmare of all
sorts.

Add time skew via NTP/PTP into the picture and you might have to adjust
timers as well, because you need to guarantee that they are not expiring
early.

So, there clearly are many nuances to consider. The discussion ended there,
but this is a good example of the trouble with extending Linux to create
virtual machines. It’s almost never the case that a whole feature can be
fully virtualized and isolated from the host system. Security concerns,
speed concerns, and even code complexity and maintainability come into the
picture. Even really elegant solutions can be shot down by, for example, the
possibility of hostile users creating files with unnaturally old timestamps.

Note: if you’re mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Source

Gumstix enhances Geppetto board design service with new Board Builder UI

Nov 7, 2018

Gumstix has expanded its Linux-oriented Geppetto online embedded board development platform with a free “Board Builder” service that offers a checklist interface for selecting modules, ports, and more.

Gumstix has added a free Board Builder service to its Geppetto Design-to-Order (D2O) custom board design service. The Board Builder improvements make the drag-and-drop Geppetto interface even easier to use, enabling customization of ports, layout and other features.

With Board Builder, you can select items from a checklist, including computer-on-modules, memory, network, sensors, audio, USB, and other features. You can then select a custom size, and you’re presented with 2D and 3D views of board diagrams that you can further manipulate.

Geppetto Board Builder design process for a Raspberry Pi CM3 based IoT design

Board Builder will prompt you with suggestions for power and other features. These tips are based on your existing design, as well as Gumstix’s deep knowledge base about embedded Linux boards.

We quickly whipped up a little Raspberry Pi Compute Module 3 based carrier board (above), which admittedly needs a lot of work. Even if you’re not a serious board developer, it’s a painless, and rather addictive way to do hardware prototyping — sort of a Candy Crush for wannabe hardware geeks.

Serious developers, meanwhile, can go on to take full advantage of the Geppetto service. Once the board is created, “free Automated Board Support Package (AutoBSP), technical documentation (AutoDoc) and 3D previews can be instantly downloaded to anyone who designs a hardware device in the Geppetto online D2O,” says Gumstix.

You can then use Geppetto’s fast small-run manufacturing order service to quickly manufacture small runs of the board within 15 days. The initial $1,999 manufacturing price is reduced for higher quantity jobs and repeat board spins.

Since launching its free, web-based Geppetto service several years ago, Gumstix has designed most of its own boards with it. Anyone can use Geppetto to modify Gumstix’s carrier board designs or start from scratch and build a custom board. The Geppetto service supports a growing number of Linux- and Android-driven modules ranging from the company’s own DuoVero and Overo modules to the Nvidia Jetson TX2 that drives the recent Gumstix Aerocore 2 for Nvidia Jetson.

Further information

The Board Builder interface is available now on the free Geppetto D2O service. More information may be found on the Gumstix Geppetto Board Builder page. You can start right away with Board Builder here.

Source

Overcoming Your Terror of Arch Linux | Software

A recent episode of a Linux news podcast I keep up with featured an interview with a journalist who had written a piece for a non-Linux audience about giving Linux a try. It was surprisingly widely read. The writer’s experience with some of the more popular desktop distributions had been overwhelmingly positive, and he said as much in his piece and during the subsequent podcast interview.

However, when the show’s host asked whether he had tried Arch Linux — partly to gauge the depth of his experimentation and partly as a joke — the journalist immediately and unequivocally dismissed the idea, as if it were obviously preposterous.

Although that reaction came from an enthusiastic Linux novice, it is one that is not uncommon even among seasoned Linux users. Hearing it resurface in the podcast got me contemplating why that is — as I am someone who is comfortable with and deeply respects Arch.

What Are You Afraid Of?

1. “It’s hard to install.”

The most common issue skeptics raise, by far, is that the installation process is challenging and very much hands-on. Compared to modern day installers and wizards, this is undoubtedly true. In contrast to most mainstream Linux distributions (and certainly to proprietary commercial operating systems), installing Arch is a completely command line-driven process.

Parts of the operating system that users are accustomed to getting prefabricated, like the complete graphical user interface that makes up the desktop, have to be assembled from scratch out of the likes of the X Window server, the desired desktop environment, and the display manager (i.e. the startup login screen).

Linux did not always have installers, though, and Arch’s installation process is much closer to how it was in the days of yore. Installers are a huge achievement, and a solution to one of the biggest obstacles to getting non-expert general users to explore and join the Linux community, but they are a relative luxury in the history of Linux.

Also, installers can get it wrong, as I found out when trying to make some modest adjustments to the default Ubuntu installation settings. While Arch let me set up a custom system with a sequence of commands, Ubuntu’s installer nominally offered a menu for selecting the same configuration, but simply could not execute it properly under the hood once the installer was set in motion.

2. “The rolling releases are unstable.”

In my experience, Arch’s implementation of the rolling release model has been overwhelmingly stable, so claims to the contrary are largely overblown as far as I am concerned.

When users have stability problems, it’s generally because they’re trying something that either is highly complicated or something for which there is little to no documentation. These precarious use cases are not unique to Arch. Combining too many programs or straying into uncharted territory are more or less equally susceptible to stability issues in Arch as with any other distribution — or any operating system, for that matter.

Just like any software developers, the Arch developers want people to like and have a good experience using their distro, so they take care to get it right. In a way, Arch’s modular approach, with each package optimized and sent out as soon as it’s ready, actually makes the whole operation run more smoothly.

Each sub-team at Arch receives a package from upstream (wherever that might be), makes the minimum number of changes to integrate it with Arch’s conventions, and then pushes it out to the whole Arch user base.

Because every sub-team is doing this and knows every other sub-team is doing the same, they can be sure of exactly what software environment they will be working with and integrating into: the most recent one.

The only times I’ve ever had an update break my system, the Arch mailing list warned me it would, and the Arch forums laid out exactly how to fix it. In other words, by checking the things that responsible users should check, you should be fine.

3. “I don’t want to have to roll back packages.”

Package downgrades are related to, and probably the more feared manifestation of, the above. Again, if you’re not doing anything crazy with your system and the software on it, and you read from Arch’s ample documentation, you probably won’t have to.

As with the risk of instability that comes from complicated setups on any distribution, package downgrades are potentially necessary on distributions besides Arch as well. In fact, whereas most distributions assume you never will have to perform a downgrade and thus don’t design their package management systems to easily (or at least intuitively) do it, Arch makes it easy and thoroughly outlines the steps.

4. “It doesn’t have as many packages,” and “I heard the AUR is scary.”

The criticism of Arch’s relatively smaller base of total available packages usually goes hand-in-hand with that of the unofficial repository being a sort of Wild West. As far as the official repositories are concerned, the number is somewhat smaller than in Debian- or Red Hat-based distributions. Fortunately, the Arch User Repository (AUR) usually contains whatever the official repos lack that most any user possibly could hope for.

This is where most naysayers chime in to note that malicious packages have been found in the AUR. This occasionally has been the case, but what most of us don’t always think about is that the same can be said of the Google Play Store, the Apple App Store, and just about every other app store you can think of.

Just as with every app store or software center, if users are careful to give a bit of scrutiny to the software they are considering — in AUR’s case by scanning the (very short) files associated with AUR packages and reading forum pages on the more questionable ones — they will generally be fine.

Others may counter that it’s not the potential hazards of the AUR that are at issue, but that more so than with, say, Debian-based distributions, there is software that falls outside of both the official Arch repos and the AUR. To start with, this is less the case than it once was, given the meteoric rise in the popularity of the Arch-based Manjaro distribution.

Beyond that, most software that isn’t in any of Arch’s repos can be compiled manually. Just as manual installations like Arch’s were once the norm for Linux, compiling from source was once the default way to install software.

Arch’s Tricks Come With Some Major Treats

With those points in mind, hopefully Arch doesn’t seem so daunting. If that’s not enough to convince you to give it a whirl, here are a few points in Arch’s favor that are worth considering.

To start off, manual installation not only gives you granular control over your system, but also teaches you where everything is, because you put it there. Things like the root directory structure, the initial RAM filesystem (initramfs) and the bootloader won’t be a mystery that computer use requires you to blindly accept, because during installation you directly installed and generated all of these (and more) and arranged them in their proper places.

Manual installation also cuts way down on bloat, since you install everything one package at a time — no more accepting whatever the installer dumps onto your fresh system. This is an especially nice advantage considering that, as many Linux distributions become more geared toward mainstream audiences, their programs become more feature-rich, and therefore bulkier.

Depending on how you install it, Arch running the heaviest desktop environment still can be leaner than Ubuntu running the lightest one, and that kind of efficiency is never a bad thing.

Rolling releases are actually one of Arch’s biggest strengths. Arch’s release model gives you the newest features right away, long before they reach distros with traditional synchronized, batch-update models.

Most importantly, with Arch, security patches drop immediately. Every time a major Linux vulnerability comes out — there usually isn’t much malware that exploits these vulnerabilities, but there are a lot of vulnerabilities to potentially exploit — Arch is always the first to get a patch out and into the hands of its users, usually within a day of the vulnerability being announced.

You’ll probably never have to roll back packages, but if you do, you will be armed with the knowledge to rescue your system from some of the most serious problems.

If you can live-boot the Arch installation image (which doubles as a repair image) from a USB stick, mount your non-booted installed system from the live system, chroot into the non-booted system (i.e. switch from the root of the live system to treating your non-booted system as the temporary root), and install a cached previous version of the problem packages, then you know how to solve a good proportion of the most serious problems any system might have.
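
As a rough sketch of that rescue procedure, run from the root shell of the live image (the partition name and package file below are placeholders; the Arch wiki documents the exact steps):

# mount /dev/sdXn /mnt
# arch-chroot /mnt
# pacman -U /var/cache/pacman/pkg/problem-package-1.0-1-x86_64.pkg.tar.zst
# exit
# reboot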

That sounds like a lot, but that’s also why Arch Linux has the best documentation of any Linux distribution, period.

Finally, plumbing the AUR for packages will teach you how to review software for security, and compiling source code will give you an appreciation for how software works. Getting in the habit of spotting sketchy behavior in package build and make files will serve you well as a computer user overall.

It also will prod you to reevaluate your relationship with your software. If you make a practice of seriously weighing every installation, you might start being pickier with what you do choose to install.

Once you’ve compiled a package or two, you will start to realize just how unbounded you are in how to use your system. App stores have gotten us used to thinking of computing devices in terms of what their developers will let us do with them, not in terms of what we want to do with them, or what it’s possible to do with them.

It might sound cheesy, but compiling a program really makes you reshape the way you see computers.

Safely Locked Away in a Virtual World of Its Own

If you’re still apprehensive about Arch but don’t want to pass on it, you can install it as a virtual machine to tinker with the installation configurations before you commit to running it on bare hardware.

Software like VirtualBox allows you to allocate a chunk of your hard drive and blocks of memory to running a little computer inside your computer. Since Linux systems in general, and Arch in particular, don’t demand much of your hardware resources, you don’t have to allocate much space to it.

To create a sandbox for constructing your Arch Linux, tell VirtualBox you want a new virtual system and set the following settings (with those not specified here left to default): 2 GB of RAM (though you can get away with 1 GB) and 8 GB of storage.

You will now have a blank system to choose in VirtualBox. All you have to do now is tell it where to find the Arch installation image: enter the system-specific settings, go to Storage, and attach the Arch ISO as the optical drive.
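
If you prefer the command line, roughly the same setup can be scripted with VBoxManage; the VM name, disk file and ISO path below are placeholders:

$ VBoxManage createvm --name arch-test --ostype ArchLinux_64 --register
$ VBoxManage modifyvm arch-test --memory 2048
$ VBoxManage createmedium disk --filename arch-test.vdi --size 8192
$ VBoxManage storagectl arch-test --name SATA --add sata
$ VBoxManage storageattach arch-test --storagectl SATA --port 0 --device 0 --type hdd --medium arch-test.vdi
$ VBoxManage storageattach arch-test --storagectl SATA --port 1 --device 0 --type dvddrive --medium archlinux.iso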

When you boot the virtual machine, it will live-boot this Arch image, at which point your journey begins. Once your installation is the way you want it, go back into the virtual system’s settings, remove the Arch installer ISO, reboot, and see if it comes to life.

There’s a distinct rush you feel when you get your own Arch system to boot for the first time, so revel in it.

Source

A Look At The AMD EPYC Performance On The Amazon EC2 Cloud

Of the announcements from yesterday’s AMD Next Horizon event, one that came as a surprise was the rolling out of current-generation EPYC processors to the Amazon Elastic Compute Cloud (EC2). Available so far are the AMD-powered M5a and R5a instance types to offer Amazon cloud customers more choice as well as being priced 10% lower than comparable instances. Here are some initial benchmarks of the AMD performance in the Amazon cloud.

Initially the AMD EPYC instances on EC2 are the M5a “general purpose” and R5a “memory optimized” instance types. For the purposes of this initial benchmarking over the past day, I focused on looking at the general purpose performance using the m5a.xlarge, m5a.2xlarge, m5a.4xlarge, and m5a.12xlarge sizes. More details on the different AMD EPYC options available can be found via this AWS blog post. Amazon will also be rolling out T3a instances in the near future.

Amazon says these new AMD instances are powered by “custom AMD EPYC processors running at 2.5 GHz.” In testing the M5a instance types, the reported CPU is an AMD EPYC 7571 at 2.5GHz with 32 cores / 64 threads, though depending upon the instance type only a subset of that computing capacity is exposed. The EPYC 7571 isn’t publicly available but appears to be a slightly faster version of the EPYC 7551.

I compared the AMD M5a instance types to the Intel-powered M5 instance types of the same size. These Intel-based instances offer the same vCPU and ECU ratings and the same amount of system memory, but the EPYC-based instances are about 10% cheaper thanks to the more competitive pricing of AMD’s current server hardware. The Intel M5 instances were using Xeon Platinum 8175M processors.

Via the Phoronix Test Suite a range of benchmarks were carried out between these instances not only looking at the raw performance but also the performance-per-dollar for the on-demand cloud instance pricing in the US West (Oregon) region where the testing was carried out.

Amazon EC2 isn’t the only cloud service offering EPYC CPUs; SkySilk, among others, offers them as well. Hopefully in the coming days I’ll have the time to wrap up some multi-cloud benchmark comparisons for performance and value. While benchmarking all of the instances, Ubuntu 18.04 LTS with the Linux 4.15 kernel was utilized. The default Spectre/Meltdown mitigations on each platform were active.

Source

imgp – multi-core batch image file resize and rotate

We’ve previously written about good open source software that batch converts image files. Batch conversion offers lots of benefits. Batch image converters let you process hundreds or thousands of images with a few clicks or even a single command. Optimizing the image files displayed on a website conserves bandwidth and storage space and makes pages load faster, which helps provide a better end-user experience.

The last time we surveyed the scene (article: Save Time and Effort with these Excellent Batch Image Processors) there was a fairly limited range available to recommend. But I want to recommend a further utility: imgp, a Python-based command-line tool that lets you resize and rotate JPEG and PNG files. The software can resize (or thumbnail) thousands of images with a single command. It is a standalone utility; it’s not tied to a file manager or other software.

imgp was previously called imgd.

Installation

Packages for Arch Linux, CentOS, Debian, Fedora, openSUSE Leap and Ubuntu are available.

If your distribution doesn’t carry the latest version, you can clone the software’s GitHub repository.

git clone https://github.com/jarun/imgp.git
cd imgp

You can copy the imgp file into a directory in your PATH. There’s nothing to compile. imgp requires Python 3.5 or later.
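
For example, assuming you are still in the cloned repository, copying the script somewhere in your PATH is enough:

$ sudo install -m 755 imgp /usr/local/bin/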

In operation

If you type imgp at a shell, the software outputs the various flags that are available. There’s a pretty good range available with this tool.

For example, imgp can resize a whole directory of PNG files in a single command, optionally skipping files below a minimum size (such as 50 KB).
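
A rough sketch of a basic resize invocation (the -x resolution flag is taken from imgp’s help output; run imgp without arguments to confirm the exact options shipped with your version):

$ imgp -x 1366x768 ~/Pictures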

Features of imgp include:

  • Resize by percentage or resolution.
  • Rotate clockwise and anti-clockwise by specified angle.
  • Adaptive resize considering orientation.
  • Brute force to a resolution.
  • Optimize images to save more space.
  • Limit processing by minimum image size.
  • Nearest neighbor interpolation for PNG files – a simple method of multivariate interpolation in one or more dimensions.
  • Convert PNG images to JPEG format.
  • Support for progressive JPEG files: images created using compression algorithms that load the image in successive waves until the entire image is downloaded. The website visitor perceives that the image loads faster because a rough version of the whole image appears straight away.
  • Erase exif metadata. This metadata contains information about the device that took the picture, the dimensions of the image and, when available, GPS coordinates identifying the location where the picture was taken.
  • Specify output JPEG image quality.
  • Force smaller to larger resize.
  • Process directories recursively. If you need to convert a lot of images in nested directories, this option is a massive time saver.
  • Overwrite source image option.
  • Completion scripts for bash, fish, zsh.
  • Identifies Multi Picture Object (MPO) files, a multi-image extension of the JPEG image format. This extension is often used for stereoscopic images.
  • Minimal dependencies.

Summary

If you need to process a bunch of PNG or JPEG files, imgp is a handy utility.

Arun Prakash Jana is notable for coding other useful open source software. In particular, I’m a regular user of his nnn, a fast console-based file manager, and googler, a console-based tool for searching Google from the command line.

Source

Download Whonix 14.0.0.7.4

Whonix is an open source Linux operating system built around the popular Tor anonymity network software and based on the well known Debian GNU/Linux distribution. It allows users to install a secure, general purpose and anonymous Linux-based operating system that runs entirely in the VirtualBox virtualization software. It is distributed as gateway and workstation editions.

It’s distributed as OVA files

The developers don’t provide regular ISO images for their Linux distribution. Instead, only OVA (Open Virtual Appliance) files are available for download, which can be imported into the VirtualBox application. To import the OVA file, go to the File menu and select the Import Appliance option. Next, browse to the location where you saved the OVA file and import it (the import process will take a long time, because the application needs to create a new disk image).
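
If you prefer to work from a terminal, VirtualBox’s command-line front end can perform the same import; the file name below is a placeholder for whichever Whonix OVA you downloaded:

$ VBoxManage import Whonix-Gateway.ova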

Provides various attractive features

It is important to know that before you import the OVA file, VirtualBox will offer to change various settings. Do not change anything at this step, just click the Import button. After importing, you can fire up the Whonix virtual machine as you normally start your other VMs. The system provides various attractive features, such as anonymous IRC, anonymous publishing, and anonymous email through TorBirdy and Mozilla Thunderbird. It allows users to add a proxy behind Tor, torify almost any application, and circumvent censorship.

Supports only the 32-bit architecture

Whonix also supports DNSSEC (Domain Name System Security Extensions) over Tor, encrypted DNS, full IP/DNS protocol leak protection, and transparent proxying. Additionally, users will be able to tunnel Freenet, I2P, JonDonym, Proxy, Retroshare, SSH, UDP and VPN traffic through Tor, as well as to enforce Tor. It supports only the 32-bit (486/686) architecture. The Workstation edition uses the KDE desktop environment and includes open source applications like the Iceweasel web browser, XChat IRC, Tor Browser, and many more.

Source

Download WebKitGTK+ Linux 2.22.3

WebKitGTK+ is a completely free, versatile, powerful and open source library that aims to port the powerful WebKit rendering engine to the GTK+ GUI toolkit and, of course, the GNOME graphical desktop environment.

The project incorporates WebKit’s full functionality through a set of GObject-based APIs (Application Programming Interfaces), and it is suitable for applications that require any type of web integration, from mature web browsers to hybrid HTML/CSS apps.

Used in Epiphany, Midori, and other powerful apps

WebKitGTK+ is successfully used in popular and powerful applications that work under the GNOME desktop environment or require the GTK+ toolkit, such as the Epiphany and Midori web browsers.

The project is very useful on both desktop and embedded systems, it supports WebKit2, and allows developers to easily build applications that rely on the web platform for increased responsiveness and security.

Uses process separation to support GTK+2 plugins on GTK+3 apps

Another interesting feature is process separation, which is used by WebKitGTK+ to seamlessly support plugins that are written in the 2.x branch of GTK+, such as Adobe Flash Player, in GTK+3 apps.

In addition, WebKitGTK+ offers full support for video and audio streams in web pages through the GStreamer WebKit backend, supports the HTML canvas element, supports WebRTC and WebAudio technologies, as well as accelerated rendering and 3D CSS.

Under the hood

Among WebKitGTK+’s runtime requirements (be aware that the list will change in time, as the project evolves), we can mention GTK+ 3.6.0 or later, gail 3.0 or later, GLib 2.36.0 or higher, libsoup 2.42.0 or later, Cairo 1.10 or higher, Pango 1.30.0 or higher, libxml2 2.6 or later, fontconfig 2.5 or later, FreeType2 9.0 or higher, and libsecret.

Moreover, depending on your configuration options WebKitGTK+ may also require GObject introspection 1.32.0 or higher, libxslt 1.1.7 or later, SQLite 3.0 or later, GStreamer 1.0.3 or higher, gstreamer-plugins-base 1.0.3 or later, Enchant 0.22 or later, Clutter, as well as Clutter GTK+.

Source

17 Fun Linux Commands to Run in the Terminal | Linux.com

The terminal is a very powerful tool, and it’s probably the most interesting part of Unix. Among the plethora of useful commands and scripts you can use, some seem less practical, if not completely useless. Here are some Bash commands that are fun, and some of them are useful as well.

Oneko

This command adds some spice to your terminal by adding a cat to your screen that will chase your (mouse) cursor. Install it with your distribution’s package manager.
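
On Debian- or Ubuntu-based systems, for example, the package is simply called oneko:

$ sudo apt install oneko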

Type oneko to display the cat.

figlet

Figlet is a command for those who love to write in ASCII art. It greatly simplifies this task as it automatically transforms any given string. It comes with a bunch of fonts by default at “/usr/share/figlet/fonts/,” and you can of course add your own.

figlet [-f path to the font] [string]
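
For example, to render a string with the slant font that ships with figlet:

$ figlet -f slant "Linux"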

Read more at MakeTechEasier

Source
