PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI

Last updated October 4, 2018 By Avimanyu Bandyopadhyay Leave a Comment

Facebook already uses its own open source AI framework, PyTorch, quite extensively in its artificial intelligence projects. Recently, the company went a step further by releasing a preview of version 1.0.

For those who are not familiar, PyTorch is a Python-based library for Scientific Computing.

PyTorch harnesses the superior computational power of Graphical Processing Units (GPUs) to carry out complex tensor computations and implement deep neural networks. As a result, it is widely used by researchers and developers around the world.

This new ready-to-use Preview Release was announced at the PyTorch Developer Conference at The Midway, San Francisco, CA on Tuesday, October 2, 2018.

Highlights of PyTorch 1.0 Release Candidate

PyTorch is a Python-based open source AI framework from Facebook

Some of the main new features in the release candidate are:

1. JIT

JIT is a set of compiler tools to bring research closer to production. It includes a Python-based language called Torch Script and ways to make existing code compatible with it.

2. New torch.distributed library: “C10D”

“C10D” enables asynchronous operation on different backends with performance improvements on slower networks and more.

3. C++ frontend (experimental)

Though it has been explicitly marked as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It is intended to enable research in high-performance, low-latency C++ applications installed directly on hardware.

To know more, you can take a look at the complete update notes on GitHub.

The first stable version of PyTorch 1.0 will be released in the summer.

Installing PyTorch on Linux

To install PyTorch v1.0rc0, the developers recommend using conda, though there are also other ways to do it, as shown on their local installation page, where everything necessary is documented in detail.

Prerequisites

  • Linux
  • Pip
  • Python
  • CUDA (For Nvidia GPU owners)

Since we recently showed you how to install and use Pip, let’s see how to install PyTorch with it.

Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware.

Installing old and stable version of PyTorch

If you want the stable release (version 0.4) for your GPU, use:

pip install torch torchvision

Use these two commands in succession for a CPU-only stable release:

pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl
pip install torchvision

Installing PyTorch 1.0 Release Candidate

You can install the PyTorch 1.0 RC GPU version with this command:

pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html

If you do not have a GPU and would prefer a CPU-only version, use:

pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

Verifying your PyTorch installation

Start up the Python console in a terminal with this simple command:

python

Now enter the following sample code line by line to verify your installation:

from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)

You should get an output like:

tensor([[0.3380, 0.3845, 0.3217],
        [0.8337, 0.9050, 0.2650],
        [0.2979, 0.7141, 0.9069],
        [0.1449, 0.1132, 0.1375],
        [0.4675, 0.3947, 0.1426]])

To check whether you can use PyTorch’s GPU capabilities, use the following sample code:

import torch
torch.cuda.is_available()

The resulting output should be:

True

Support for AMD GPUs in PyTorch is still under development, so complete test coverage is not yet provided, as reported here, which suggests this resource in case you have an AMD GPU.

Let’s now look at some research projects that make extensive use of PyTorch:

Ongoing Research Projects based on PyTorch

  • Detectron: Facebook AI Research’s software system for intelligently detecting and classifying objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch joined forces to create the research-plus-production-ready PyTorch 1.0 discussed above.
  • Unsupervised Sentiment Discovery: Such methods are extensively used with social media algorithms.
  • vid2vid: Photorealistic video-to-video translation
  • DeepRecommender (We covered how such systems work on our past Netflix AI article)

Nvidia, the leading GPU manufacturer, covered this in more detail in its own update on this recent development, where you can also read about ongoing collaborative research endeavours.

How should we react to such PyTorch capabilities?

When you consider that Facebook applies such amazingly innovative projects, and more, in its social media algorithms, should we appreciate all this or be alarmed? It is almost Skynet! This newly improved, production-ready pre-release of PyTorch will certainly push things further ahead. Feel free to share your thoughts with us in the comments below!


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source

LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI

Last updated October 6, 2018 By Avimanyu Bandyopadhyay 19 Comments

LinuxBoot is an Open Source alternative to Proprietary UEFI firmware. It was released last year and is now being increasingly preferred by leading hardware manufacturers as default firmware. Last year, LinuxBoot was warmly welcomed into the Open Source family by The Linux Foundation.

This project was an initiative by Ron Minnich, author of LinuxBIOS and lead of coreboot at Google, in January 2017.

Google, Facebook, Horizon Computing Solutions, and Two Sigma collaborated to develop the LinuxBoot project (formerly called NERF) for Linux-based server machines.

Its openness allows Server users to easily customize their own boot scripts, fix issues, build their own runtimes and reflash their firmware with their own keys. They do not need to wait for vendor updates.

Following is a video of Ubuntu Xenial booting for the first time with NERF BIOS:

Let’s talk about some other advantages by comparing it to UEFI in terms of Server hardware.

Advantages of LinuxBoot over UEFI

LinuxBoot vs UEFI

Here are some of the major advantages of LinuxBoot over UEFI:

Significantly faster startup

It can boot up Server boards in less than twenty seconds, versus multiple minutes on UEFI.

Significantly more flexible

LinuxBoot can make use of any devices, filesystems and protocols that Linux supports.

Potentially more secure

Linux device drivers and filesystems receive significantly more scrutiny than they would through UEFI.

We can argue that UEFI is partly open with EDK II and LinuxBoot is partly closed. However, it has been pointed out that even the EDK II code does not undergo the same level of inspection and correctness checking as the Linux kernel, while there is a huge amount of other closed-source code within UEFI development.

On the other hand, LinuxBoot’s binaries are significantly smaller, only a few hundred kilobytes, compared to the 32 MB of UEFI binaries.

To be precise, LinuxBoot fits a whole lot better into the Trusted Computing Base, unlike UEFI.

LinuxBoot has a kexec-based bootloader, which does not support booting Windows or other non-Linux kernels, but that is a minor limitation since most cloud Servers are Linux-based.

LinuxBoot adoption

In 2011, Facebook started the Open Compute Project and open-sourced the designs of some of its Servers, built to make its data centers more efficient. LinuxBoot has been tested on the following Open Compute hardware:

  • Winterfell
  • Leopard
  • Tioga Pass

More OCP hardware is described here in brief. The OCP Foundation runs a dedicated firmware project called Open System Firmware.

Other devices also support LinuxBoot. Late last month, Equus Compute Solutions announced the release of its WHITEBOX OPEN™ M2660 and M2760 Servers, part of its custom, cost-optimized open-hardware Server and storage platforms. Both support LinuxBoot, allowing the Server BIOS to be customized for flexibility, improved security, and a blazingly fast boot experience.

What do you think of LinuxBoot?

LinuxBoot is quite well documented on GitHub. Do you like the features that set it apart from UEFI? Would you prefer using LinuxBoot rather than UEFI for starting up Servers, owing to the former’s open-ended development and future? Let us know in the comments below.



Source

Download Kodachi Linux 4.2

Kodachi Linux is an open source and free distribution of Linux based on the award-winning Debian GNU/Linux operating system and built around the modern GNOME desktop environment. It is an anonymous, secure and anti-forensic OS.

Distributed as a 64-bit Live DVD

This custom Debian-based operating system can be downloaded from its official homepage or via Softpedia (see download link above) as a single Live DVD ISO image that has been engineered to support only 64-bit (x86_64) hardware platforms.

In order to use it, users must burn the ISO image onto a blank DVD disc using any CD/DVD burning software, or write it to a USB flash drive of 4GB or higher capacity, in order to boot it from the BIOS of a computer.

Boot options

The boot menu is quite complex and will allow the user to run the live environment with default boot options, with the nosmp and noapic options, with the smp and noapic options, with splash screen, or in failsafe mode.

In addition, you can drop to a shell prompt, perform a system memory diagnostic test, or boot an existing operating system that is installed on the first disk drive.

Slick desktop environment powered by GNOME 3

Kodachi Linux’s desktop environment is pretty slick, powered by GNOME 3, as it uses no panels, but only a dock (application launcher) located on the bottom edge of the screen, as well as a bunch of system monitoring widgets.

From the dock you can start, stop or restart the VPN (Virtual Private Network) service incorporated into the distribution, or connect to a Tor network that is more appropriate to your current location.

Bottom line

Using GNOME (with GNOME Shell) as its default desktop environment, Kodachi Linux provides a secure, anonymous and anti-forensic operating system that features a VPN connection, a Tor connection, and a DNScrypt service.


Source

Download lighttpd Linux 1.4.51

lighttpd is an open source, totally free, secure, fast, compliant, and very flexible Web (HTTP) server software implemented in C and specifically engineered and optimized for high-performance GNU/Linux environments.

It’s a command-line program that comes with an advanced set of features, including FastCGI (load balanced), CGI (Common Gateway Interface), Auth, Output-Compression, URL-Rewriting, SSL (Secure Sockets Layer), etc.

It’s optimized for a large number of parallel connections

lighttpd is the perfect solution for Linux servers, where high performance AJAX applications are a must, because of its event-driven architecture, which has been optimized to support a large number of parallel connections (keep-alive).

Compared to other popular Web servers, such as Apache or Nginx, lighttpd has a small memory footprint, which means it can be deployed on computers with older hardware, and it manages CPU load effectively.

Getting started with lighttpd

To install and use lighttpd on your GNU/Linux system, you have two options. First, open your favorite package manager and search for lighttpd in the main software repositories of your distribution, and install the package.

If lighttpd is not available in your Linux system’s repos, then you will have to download the latest version from Softpedia, where it’s distributed as a source tarball (tar archive), save the file on your computer, unpack its contents, open a terminal emulator and navigate to the location of the extracted archive file with the ‘cd’ command.

Then, you will be able to compile the software by executing the ‘make’ command in the terminal emulator, followed by the ‘make install’ command as root or with sudo to install it system wide and make it available to all users.

Command-line options

The program comes with a few command-line options, which can be viewed at a glance by running the ‘lighttpd --help’ command in a terminal. These include the ability to specify a configuration file and the location of the modules, test the config file, as well as force the daemon to run in the foreground.


Source

Red Hat Enterprise Linux Identify Management Integration with Active Directory – Red Hat Enterprise Linux Blog

Identity Management in Red Hat Enterprise Linux (IdM) supports two different integration options with Active Directory: synchronization and trust.

I recently got a question about comparison of the two. I was surprised to find that I haven’t yet covered this topic in my blog. So let us close this gap!

The customer was interested in comparison of the two. Here is the question he asked:

To integrate IdM with AD 2016 I want to use winsync rather than trusts.

  • We would like to be able to manage the SUDO, SELinux, SSH key and other options that are not in AD.
  • I understand the advantages and disadvantages of each of the configurations and it seems to me that the synchronization is the best option to get the maximum of functionalities of IdM
  • But I would like to know the reason why Red Hat does not suggest the synchronisation.

Red Hat documentation states:

“In some integration scenarios, the user synchronization may be the only available option, but in general, use of the synchronization approach is discouraged in favor of the cross-realm trust-based integration.”

Is there any special reason why Red Hat recommends trusts (although more complicated) over winsync?

Thank you for asking!

We in fact do not recommend synchronization for several reasons that I will lay down below but we also acknowledge some cases when synchronization might be the only option. So let us dive into the details…

When you have sync you really have two accounts: one in AD and one in IdM. These are two different users, so you need to keep the passwords in sync too. Keeping passwords in sync requires installing a password-intercepting plugin, passsync, on every AD domain controller, because it is never known which domain controller will handle a password change operation. After you deploy the plugin to the domain controllers, you need to reset the password for every account so that the plugin can intercept it and store it in the IdM account. So in fact there is a lot of complexity related to synchronization. Note also that this solution works only for a single domain: if you have more than one domain in a forest, or even several forests, you can’t use sync. Synchronization is also done against one AD domain controller, so if the connection is down, synchronization stops working, and there is no failover.

Another issue to keep in mind is that with synchronization you have two different places where user authentication happens. For compliance purposes, all your audit tools need to be pointed to yet another environment, and they would have to collect and merge logs from IdM and AD. It is usually doable, but yet another complexity to keep in mind. Another aspect is account-related policies: when you have two different accounts, you need to make sure the policies are the same and do not diverge.

Synchronization only works for user accounts, not groups. The group structure needs to be created on the IdM side.

Benefits of Trust

With trust there are no duplicate accounts. Users always authenticate against AD. All the audit trails are there in the single place. Since there is only one account for a user all the settings that apply to the account (password length, strength, expiration, etc.) are always consistent with the company wide policy and you do not need to check and enforce them in more than one place. This makes it easier to pass audits.

Trusts are established on the environment to environment level so there is really no single point of failure.

Trust allows users in all AD domains to access IdM managed environment, and since IdM can establish trusts with multiple AD forests if needed you really can cover all forests in your infrastructure.

With the trust setup, POSIX attributes can be either managed in AD via schema extensions if they are already there, dynamically created on the fly from AD SIDs by IdM and SSSD, or set on the IdM side as explicit overrides. This capability also allows setting different POSIX attributes for different sets of clients, which is usually needed in complicated environments where the UID and GID namespace has duplicates due to NIS history or merges.

AD groups are transparently exposed by IdM to the clients without the need to recreate them. IdM groups can be created on top or in addition to AD groups.

The information above can be summarized in the following table:

                            Synchronization                  Trust
  Accounts per user         Two (AD and IdM)                 One (AD only)
  Passwords                 Kept in sync via passsync        Always validated by AD
  Multiple domains/forests  Not supported                    Supported*
  Failover                  None (single domain controller)  No single point of failure*
  Audit trail               Split between AD and IdM         In a single place
  Account policies          Enforced in two places           Consistent company-wide
  Groups                    Recreated manually in IdM        AD groups exposed transparently

So the promise of the trust setup is a more flexible, reliable and feature-rich solution. But this is the promise; that is why I put an asterisk in the table. The reality is more complex. In practice, there are challenges with the trust setup too. It turns out the trust setup assumes a well-configured and well-behaved AD environment. In multiple deployments, Red Hat consultants uncovered misconfigurations of AD, DNS, firewalls and other elements of the infrastructure that made deployments more painful than we would like them to be. Despite these challenges, some of which are covered in the article Discovery and Affinity published last year, and some of which will be covered in my talk at Red Hat Summit in May, most current deployments find a way to resolve the deficiencies of the existing infrastructure and reach a stable and reliable environment.

So synchronization might be attractive for a small environment, but even there, setting up a trust would not be a big complication.

The only case where I would call out synchronization is two-factor authentication (2FA) using one-time password (OTP) tokens. Customers usually want some subset of users to be able to use OTP tokens to log in to Linux systems. Since AD does not support 2FA natively, some other system needs to assign a token to the AD user. It can be a third-party solution, if the customer has one, or it can be IdM. In this case, to provide centralized OTP-based authentication for the Linux systems managed by IdM, the accounts that will use OTP need to be created in IdM. This can be done in different ways: by syncing them from AD using winsync, by migrating them from AD using the ipa migrate-ds command, by a script that loads user data from some other source using the IdM CLI or an LDAP operation, or just manually. Once the user is created, a password and token can be assigned to them in IdM, or the account can be configured to proxy authentication to an existing 2FA solution via RADIUS. IdM allows 2FA to be enforced for a selected set of systems and services; to learn how, read the Red Hat documentation about authentication indicators. This is the best approach: it allows the general population of users to access systems with their AD passwords, while a selected set of special users is required to use 2FA on a specific subset of hosts. The only limitation is that this approach works on Red Hat Enterprise Linux 7 systems; older systems have limitations with OTP support.

If all the users need to have OTP tokens to log into the Linux systems then trust does not make sense and syncing accounts might be a more attractive option.

Thank you for reading! Comments and suggestions are welcome!

Source

Linux Now Dominates Azure – Slashdot


Linux Now Dominates Azure (zdnet.com)

Posted by msmash on Thursday September 27, 2018 @04:10PM from the gaining-traction dept.

An anonymous reader shares a report:

Three years ago, Mark Russinovich, CTO of Azure, Microsoft’s cloud program, said, “One in four [Azure] instances are Linux.” Then, in 2017, 40 percent of Azure virtual machines (VMs) were Linux. Today, Scott Guthrie, Microsoft’s executive vice president of the cloud and enterprise group, said in an interview, “Slightly over half of Azure VMs are Linux. That’s right. Microsoft’s prize cloud, Linux, not Windows Server, is now the most popular operating system. Windows Server isn’t going to be making a comeback. Every month, Linux goes up,” Guthrie said. And it’s not just Azure users who are turning to Linux.

“Native Azure services are often running on Linux,” Guthrie added. “Microsoft is building more of these services. For example, Azure’s Software Defined Network (SDN) is based on Linux.” It’s not just on Azure that Microsoft is embracing Linux. “Look at our simultaneous release of SQL Server on Linux. All of our projects now run on Linux,” Guthrie said.

 


Source

Charly’s Column – grepcidr » Linux Magazine

Often it is the very simple tools that, when used appropriately, lead to the greatest success. This time, sys admin columnist Charly employs an IP address filter to count the devices in his home and trip up spammers to boot.

Although Linux has many grep variants, you can always find a new one. I only discovered grepcidr [1] a few months ago. As the name suggests, the tool filters input by IP addresses and networks. It works equally well with IPv4 and IPv6. To show grepcidr’s capabilities, I will use it to compile a list of all IPv4 addresses on my home network. I got this from the Syslog on the firewall, which is also the DHCP server:

cd /var/log
grepcidr 10.0.0.0/24 syslog | grep DHCPACK | tail -n 1500 | cut -f9 -d" " | sort | uniq > 1stlist

The 1stlist file now contains 46 IP addresses:

[…]
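If you prefer scripting, the same filtering idea can be sketched in Python with the standard ipaddress module. This is an illustrative stand-in for the grepcidr pipeline above, not the tool itself, and the sample log lines are invented for the example:

```python
# Illustrative Python equivalent of the grepcidr + grep DHCPACK pipeline:
# keep only DHCPACK lines, pull out IPv4 addresses, and report the unique
# ones that fall inside a given network. Sample log lines are made up.
import ipaddress
import re

def addrs_in_network(lines, cidr):
    """Return sorted unique IPv4 addresses from DHCPACK lines inside cidr."""
    net = ipaddress.ip_network(cidr)
    found = set()
    for line in lines:
        if "DHCPACK" not in line:
            continue
        for token in re.findall(r"\d+\.\d+\.\d+\.\d+", line):
            addr = ipaddress.ip_address(token)
            if addr in net:
                found.add(str(addr))
    # sort numerically, not lexically ("10.0.0.5" before "10.0.0.17")
    return sorted(found, key=ipaddress.ip_address)

log = [
    "dhcpd: DHCPACK on 10.0.0.17 to aa:bb:cc:dd:ee:ff via eth0",
    "dhcpd: DHCPACK on 10.0.0.5 to 11:22:33:44:55:66 via eth0",
    "dhcpd: DHCPACK on 192.168.1.9 to ff:ee:dd:cc:bb:aa via eth1",
    "dhcpd: DHCPDISCOVER from aa:bb:cc:dd:ee:ff via eth0",
]
print(addrs_in_network(log, "10.0.0.0/24"))  # ['10.0.0.5', '10.0.0.17']
```

The ipaddress module handles IPv6 networks with the same membership test, mirroring grepcidr’s dual-stack behavior.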


Source

FOSS Project Spotlight: Tutanota, the First Encrypted Email Service with an App on F-Droid

Seven years ago, we started building Tutanota, an encrypted email service
with a strong focus on security, privacy and open source. Long before the
Snowden revelations, we felt there was a need for easy-to-use encryption that
would
allow everyone to communicate online without being snooped upon.

""

Figure 1. The Tutanota team’s motto: “We fight for privacy with automatic
encryption.”

As developers, we know how easy it is to spy on email that travels through the
web. Email, with its federated setup, is great, and that’s why it has
become and remains the main form of online communication. However, from a
security perspective, the federated setup is troublesome, to say the
least.

End-to-end encrypted email is difficult to handle on desktops (with key
generation, key sharing, secure storing of keys and so on), and it’s close to impossible on
mobile devices. For the average, not so tech-savvy internet user, there are a
lot of pitfalls, and the probability of doing something wrong is, unfortunately,
rather high.

That’s why we decided to build Tutanota: a secure email service that
is so easy to use, everyone can send confidential email, not only the
tech-savvy. The entire encryption process runs locally on users’
devices, and it’s fully automated. The automatic encryption also enabled us to build
fully encrypted email apps for Android and iOS.

Finally, end-to-end encrypted email is starting to become the standard:
58% of all email sent from Tutanota is already end-to-end encrypted, and
the percentage is constantly rising.

""

Figure 2. Easy email encryption on desktops and mobile devices is now possible for
everyone.

The Open-Source Email Service to Get Rid of Google

As open-source enthusiasts, our apps have been open source from the start, but
putting them on F-Droid was a challenge. As with all email services, we have used
Google’s FCM for push notifications. On top of that, our encrypted email
service was based on Cordova, which the F-Droid servers are not able to
build.

Not being able to publish our Android app on F-Droid was one of the main
reasons we started to re-build the entire Tutanota web client. We are privacy
and open-source enthusiasts; we ourselves use F-Droid. Consequently, we
thought that our app must be published there, no matter the effort.

When rebuilding our email client, we made sure not to use Cordova anymore and
to replace Google’s FCM for push notifications.

The Challenge to Replace Google’s FCM

GCM (or, as it’s now called, FCM, for Firebase Cloud Messaging) is a service
owned by Google. Unfortunately, FCM includes Google’s tracking code for
analytics purposes, which we didn’t want to use. And, even more
important: to use FCM, you have to send all your notification data to Google.
You also have to use Google’s proprietary libraries.

Because of privacy and security concerns, we didn’t send any info in
the notification messages. Therefore, the push notification mentioned only
that you received a new message, without a reference to the mailbox in
which that message had been placed.

We wanted our users to be able to use Tutanota on every ROM and every device,
without the control of a third-party. That’s why we decided to take on the
challenge and to build a push notification service ourselves.

When we started designing our push system, we set the following goals:

  • It must be secure.
  • It must be fast.
  • It must be power-efficient.

We’ve researched how others (Signal, Wire, Conversations, Riot,
Facebook and Mastodon) have been solving similar problems, and we had several
options in mind, including WebSockets, MQTT, Server Sent Events and HTTP/2
Server Push.

We settled on SSE (Server-Sent Events) because it seemed like a simple
solution. By that, I mean “easy to implement, easy to debug”.
Debugging these types of things can be a major headache, so one should not
underestimate that factor. Another argument in favor of this solution was its
relative power efficiency. We didn’t need upstream messages, and a constant
connection was not our goal.

So, What Is SSE?

SSE is a web API that allows a server to send events to connected
clients. It’s a relatively old API, which is, in my opinion, underused.
We’d never heard of SSE before the federated network Mastodon, which
uses SSE for real-time timeline updates, and it works great.

The protocol itself is very simple and resembles good old polling. The client
opens a connection, and the server keeps it open. It’s different from
classical polling in that we keep this connection open for multiple events.
The server can send events and data messages, they’re just separated by
new lines. So the only thing the client needs to do is to open a connection
with a big timeout and read the stream in a loop.
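That client-side parsing step can be sketched in a few lines of Python. This is illustrative only, not Tutanota’s actual code; it assumes the simple “field: value” lines and blank-line event separators described above:

```python
# Illustrative parser for the Server-Sent Events wire format: each event
# is a group of "field: value" lines, and events are separated by a blank
# line. A sketch of the idea, not Tutanota's actual client code.
def parse_sse(stream: str):
    events = []
    current = {}
    for line in stream.split("\n"):
        if line == "":                      # blank line ends the current event
            if current:
                events.append(current)
                current = {}
        elif ":" in line:
            field, _, value = line.partition(":")
            value = value.lstrip(" ")
            # repeated fields (e.g. multiple "data" lines) are joined
            if field in current:
                current[field] += "\n" + value
            else:
                current[field] = value
    if current:                             # flush a trailing unterminated event
        events.append(current)
    return events

raw = (
    "event: newMail\n"
    "data: you have a new message\n"
    "\n"
    "data: heartbeat\n"
    "\n"
)
for ev in parse_sse(raw):
    print(ev)
```

A real client would read this stream incrementally from an open HTTP response with a long timeout, feeding each received chunk into the same splitting logic.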

SSE fits our needs better than WebSocket would (it’s cheaper and converges
faster, because it’s not duplex). We’ve seen multiple chat apps
trying to use WebSocket for push notifications, and it didn’t seem power-efficient.

We had some experience with WebSocket already, and we knew that firewalls
don’t like keepalive connections. To solve this, we used the same
workaround for SSE that we did for WebSocket: we send empty “heartbeat”
messages every few minutes. We made this interval adjustable from the server
side and randomized it so as not to overwhelm the server.
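The randomized-interval idea can be sketched like this. It is an illustrative Python snippet with made-up numbers, not the app’s actual logic:

```python
# Sketch of a jittered heartbeat interval: each client picks a delay near
# the server-suggested base, randomized so that thousands of clients do
# not all send their keepalive at the same instant. Numbers are invented.
import random

def next_heartbeat_delay(base_seconds: float, jitter_fraction: float = 0.2) -> float:
    """Return a delay within +/- jitter_fraction of base_seconds."""
    jitter = base_seconds * jitter_fraction
    return base_seconds + random.uniform(-jitter, jitter)

# e.g. with a 120 s base, each client waits somewhere between 96 s and 144 s
delay = next_heartbeat_delay(120.0)
print(round(delay, 1))
```

Spreading keepalives out this way smooths the server’s load while still letting each client detect a dead connection within a bounded time.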

In the end, we had to do some work. I could describe loads of challenges
we had to overcome to make this finally work, but maybe some other time. Yet,
it was totally worth it. Our new app is still in beta, but thanks to
non-blocking IO, we’ve been able to maintain thousands of simultaneous
connections without problems. Our users are no longer forced to use Google
Play Services, and we’ve been able to publish our app on F-Droid.

As a side-note: wouldn’t it be great if the user could just pick a
“push notifications provider” in the phone settings and the OS managed
all these hard details by itself, so every app that doesn’t want to be
policed by the platform owner didn’t have to invent the system anew? It
could be end-to-end encrypted between the app and the app server. There’s
no real technical difficulty in that, but as long as our systems are
controlled by big players, we as app developers have to solve this by
ourselves.

Tutanota Is the First App of an Email Service Available on F-Droid

Our app release on F-Droid really excites us, as it proves that it is possible
to build a secure email service that’s completely
Google-free, giving people a real open-source alternative to the data-hungry
market-leader Gmail.

This is a remarkable step, as so far no other email service has managed (or
cared) to publish its app on F-Droid. The reason for this is that, in
general, email services rely on Google’s FCM for push notifications, which
makes an F-Droid release impossible.

The F-Droid team also welcomed our move in the right direction:

We are
happy to see how enthusiastic Tutanota is about F-Droid and free software,
having rewritten their app from scratch so it could be included. Furthermore,
they take special measures to avoid tracking you, and the security looks
solid with support for end-to-end encryption and two-factor
authentication.

We are very excited about this release as well. And, we are thankful for the
dedication and hard work of the numerous F-Droid volunteers helping us to
publish our app there. We are also proud that the new Android app finally
comes without any ties to Google services. As a secure email service, this is
very important to us. We encourage our users to leave Google behind, so
offering a Google-free Android app is a minimum requirement for us.

""

Figure 3. The new Tutanota client comes with a dark theme—a nice and minimalistic
design that lets you easily encrypt email messages to every email address in the
world.

A Privacy-Focused Email Service for Everyone

We’ve been using Tutanota ourselves for a couple of years now. The new
Tutanota client and apps are fast, come with a nice and minimalistic design,
enable search on encrypted data, and support 2FA and auto-sync. Since we’ve
added search, there’s no major feature missing for professional use any
longer, and we’ve noticed the number of new users rising constantly. We
recommend that everyone who wants to stop third parties from reading their
private email just give it a try.

Source

Introducing CCVPN: A Project in Collaboration with China Mobile, Vodafone and Huawei

As operators continue to experience growing demands on their networks in the lead-up to 5G, the need for high-bandwidth, flat, and super high-speed Optical Transport Networks (OTNs) is greater than ever. Combined with an increasingly global market, there is a clear need for service providers to work across international boundaries and provide end-to-end services for their customers that are carrier- and geography-agnostic.

Enter the Cross-domain, Cross-layer VPN (CCVPN) use case, coming with the next ONAP release, Casablanca (due in late 2018). Piloted by Linux Foundation Platinum members China Mobile, Vodafone and Huawei — with contributions from a handful of other vendors — in response to evolving market needs, CCVPN introduces code that allows ONAP to automate and orchestrate cloud-enabled, software-defined VPN services across network operator borders. This means that operators will be able to provision a VPN service that crosses international borders by accessing and orchestrating resources on other carrier networks.

The use case was demonstrated on stage at Open Networking Summit Europe and includes two ONAP instances: one deployed by China Mobile and one deployed by Vodafone. Both instances orchestrate the respective operators' underlay OTN networks and overlay SD-WAN networks, and leverage each other's networks for cross-operator VPN service delivery.

In addition to provisioning cross-domain, cross-layer VPN, this effort represents true collaboration to solve industry challenges. By combining forces, developers from different companies are continuing to work together and with the community to refine features to fully enable CCVPN as part of the Casablanca release. To learn more about ONAP, please visit www.onap.org; more details on the CCVPN project are available on the project Wiki page here. Blog posts from Huawei and Vodafone are also available for additional information.

Source

Linux/Unix desktop fun: sl – a mirror version of ls

One of the most common typing mistakes is entering sl instead of the ls command. You can set up an alias, i.e., alias sl=ls, but then you would miss out on the steam train with a whistle.

sl is a joke program and a classic UNIX game: a steam locomotive runs across your screen if you type "sl" (Steam Locomotive) instead of "ls" by mistake. Now there is a twist on the older sl command.

sl – a mirror version of ls

From the blog post:

I didn’t like it and made another program of the same name. My sl just mirrors the output of ls. It accepts most ls(1) arguments and is best enjoyed with -l.

source code

The program is written in the bash shell. Here is the source code:

#!/bin/bash
# sl - prints a mirror image of ls. (C) 2017 Tobias Girstmair, https://gir.st/, GPLv3

LEN=$(ls "$@" | wc -L) # get the length of the longest line

ls "$@" | rev | while read -r line
do
	printf "%${LEN}.${LEN}s\n" "$line" | sed 's/^\(\s\+\)\(\S\+\)/\2\1/' # flip the leading whitespace to the right
done
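To see what each stage contributes, here is a minimal sketch with a hard-coded line and width standing in for the ls output and the wc -L measurement: rev reverses the characters, printf right-justifies the result to a fixed width, and the sed expression (GNU sed, since \s and \S are GNU extensions) swaps the leading padding with the first field:

```shell
# One pass of the mirroring pipeline on a hard-coded sample line.
line="-rw-r--r-- 1 vivek vivek 410 Feb 10 mk.txt"
LEN=50  # stand-in for $(ls "$@" | wc -L)

# rev flips the characters; printf pads to width $LEN (right-justified);
# sed moves the first (reversed) word in front of the padding.
printf "%${LEN}.${LEN}s\n" "$(printf '%s' "$line" | rev)" \
    | sed 's/^\(\s\+\)\(\S\+\)/\2\1/'
```

The reversed line comes out right-justified; the sed swap then pulls the reversed file name back to the left edge, which is why the file names line up on the left in the sample output.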

Run it as follows

First, create the ~/bin/ directory using the mkdir command:

$ mkdir ~/bin/

Next, store the above source code in ~/bin/. cd into ~/bin/ using the cd command:

$ cd ~/bin/
$ vi sl

Save and close the file. Set executable permission on your shell script using the chmod command:

$ chmod +x sl

Test it:

$ ls -l
$ ./sl -l

Sample outputs from the sl command:

txt.qaf.detaeler.km >- txt.smc.detaeler.km 05:41 32 ceD 91 keviv keviv 1 xwrxwrxwrl
qaf.detaeler.km 72:41 11 ceD 709 keviv keviv 1 x-rx-rxwr-
etalpmet.qaf.detaeler.km 34:51 61 voN 121 keviv keviv 1 –r–r-wr-
txt.qaf.detaeler.km 85:00 01 beF 014 keviv keviv 1 –r–r-wr-
spit.detaeler.km 94:41 32 ceD 709 keviv keviv 1 x-rx-rxwr-
etalpmet.spit.detaeler.km 84:41 32 ceD 121 keviv keviv 1 –r–r-wr-
ssr.setadpu.km 95:00 7 naJ 618 keviv keviv 1 x-rx-rxwr-
etalpmet.ssr.setadpu.km 24:22 2 naJ 463 keviv keviv 1 –r–r-wr-
txt.ssr.setadpu.km 22:12 02 beF 4221 keviv keviv 1 –r–r-wr-
hs.014.xnign 43:11 6 naJ 684 keviv keviv 1 x-rx-rxwr-
hs.103.moc.tfarcxin 5102 52 rpA 631 keviv keviv 1 x-rx-rxwr-
etacsufbo 5102 91 luJ 9931 keviv keviv 1 –r–r-wr-
hs.lapyap 84:41 02 ceD 865 keviv keviv 1 x-rx-rxwr-
txt.lapyap 7102 03 naJ 4131 keviv keviv 1 –r–r-wr-
hs.daolputsop 3102 13 ceD 135 keviv keviv 1 x-rx-rxwr-
hs.daolpuerp 3102 13 ceD 734 keviv keviv 1 x-rx-rxwr-
hs.niamod.eralfduolc.lla.egrup 7102 81 yaM 6401 keviv keviv 1 x-rx-rxwr-
nohtyp 05:20 5 beF 6904 keviv keviv 2 x-rx-rxwrd
ls 92:61 13 raM 672 keviv keviv 1 x-rx-rxwr-
resu.tidder.ecruos 7102 42 naJ 911 keviv keviv 1 x-rx-rxwr-
014.deteled.sgat 95:32 02 raM 97732 keviv keviv 1 –r–r-wr-
hs.teewt 53:10 62 naJ 58653 keviv keviv 1 x-rx-rxwr-
tob-rettiwt 90:32 4 beF 6904 keviv keviv 2 x-rx-rxwrd
smc.elif.daolpu 7102 9 nuJ 907 keviv keviv 1 x-rx-rxwr-
qaf.elif.daolpu 7102 9 nuJ 807 keviv keviv 1 x-rx-rxwr-
pit.elif.daolpu 7102 9 nuJ 907 keviv keviv 1 x-rx-rxwr-
hs.egamidaolpu 3102 81 tcO 3911 keviv keviv 1 x-rx-rxwr-
nalnoekaw 00:41 21 tcO 1325 keviv keviv 1 x-rx-rxwr-
2x 7102 52 nuJ 017 keviv keviv 1 x-rx-rxwr-

How to set up a bash shell alias

The syntax is:

alias name=value

Add the following to the ~/.bashrc file:

echo 'alias sl="/home/$USER/bin/sl -l"' >> ~/.bashrc

Load it:

$ source ~/.bashrc

Test it:

$ sl

How to verify sl command execution path

Use the type command or the command command as follows:

$ type -a sl

sl is aliased to `/home/vivek/bin/sl -l'
sl is /home/vivek/bin/sl
sl is /usr/games/sl

$ command -V sl

alias sl='/home/vivek/bin/sl -l'

You can temporarily disable an alias using any one of the following methods:

\sl
"sl"
command sl
command ls

For more info see this page.
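A quick sketch of the bypass behaviour, using a harmless echo alias rather than the real sl (scripts need shopt -s expand_aliases, since non-interactive bash leaves alias expansion off):

```shell
#!/bin/bash
shopt -s expand_aliases            # bash scripts do not expand aliases by default
alias ls='echo "mirrored!"'        # stand-in for the sl -l alias above

ls                                 # the alias fires and prints: mirrored!
command ls / >/dev/null && echo "command builtin ran the real ls"
\ls / >/dev/null && echo "backslash escape ran the real ls"
```

Both the backslash escape and the command builtin skip alias lookup entirely, so the real /bin/ls runs; quoting the name ("ls") works the same way because aliases are only expanded on unquoted words.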

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source
