Linux Today – Understanding Debian GNU/Linux Releases

What is a Debian release?

Debian GNU/Linux is a non-commercial Linux distribution that was started in 1993 by Ian Murdock. Currently, it consists of about 51,000 software packages that are available for a variety of architectures such as Intel (both 32 and 64 bit), ARM, PowerPC, and others [2]. Debian GNU/Linux is maintained freely by a large number of contributors from all over the world. This includes software developers and package maintainers – a single person or a group of people that takes care of a package as a whole [3].

A Debian release is a collection of stable software packages that follow the Debian Free Software Guidelines (DFSG) [4]. These packages are well-tested and fit together in such a way that all the dependencies between them are met and you can install and use the software without problems. This results in a reliable operating system suited for your every-day work. Originally targeted at server systems, Debian no longer has a specific target (“The Universal OS”) and is nowadays widely used on desktop systems as well as mobile devices.

In contrast to other Linux distributions like Ubuntu or Linux Mint, the Debian GNU/Linux distribution does not have a release cycle with fixed dates. It rather follows the slogan “Release only when everything is ready” [1]. Nevertheless, a major release comes out about every two years [8]. For example, version 9 came out in 2017, and version 10 is expected to be available in mid-2019. Security updates for Debian stable releases are provided as soon as possible from a dedicated APT repository. Additionally, minor stable releases are published in between, and contain important non-security bug fixes as well as minor security updates. Neither the general selection of software packages nor their major version numbers change within a release.
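
As an illustration of how this looks on an installed system (the mirror host names are examples and vary by setup), the APT configuration of a Debian 9 machine typically references the stable suite plus the dedicated security repository mentioned above:

# /etc/apt/sources.list for Debian 9 “stretch” (example mirrors)
deb http://deb.debian.org/debian stretch main
deb http://security.debian.org/debian-security stretch/updates main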

In order to see which version of Debian GNU/Linux you are running on your system, have a look at the file /etc/debian_version as follows:

$ cat /etc/debian_version
9.6
$

This shows that the command was run on Debian GNU/Linux 9.6. Having installed the package “lsb-release” [14], you can get more detailed information by running the command “lsb_release -a”:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.6 (stretch)
Release: 9.6
Codename: stretch
$

What about these funny release names?

You may have noticed that every Debian GNU/Linux release has a funny release name. This alias name is taken from a character in the Toy Story film series [5] released by Pixar [6]. When the first Debian 1.x release was due, the Debian Project Leader at that time, Bruce Perens, worked for Pixar [9]. Up to now the following names have been used for releases:

  • Debian 1.0 was never published officially, because a CD vendor shipped a development version accidentally labeled as “1.0” [10], so Debian and the CD vendor jointly announced that “this release was screwed” and Debian released version 1.1 about half a year later instead.
  • Debian 1.1 Buzz (17 June 1996) – named after Buzz Lightyear, the astronaut
  • Debian 1.2 Rex (12 December 1996) – named after Rex the plastic dinosaur
  • Debian 1.3 Bo (5 June 1997) – named after Bo Peep the shepherdess
  • Debian 2.0 Hamm (24 July 1998) – named after Hamm the piggy bank
  • Debian 2.1 Slink (9 March 1999) – named after the dog Slinky Dog
  • Debian 2.2 Potato (15 August 2000) – named after the toy Mr Potato Head
  • Debian 3.0 Woody (19 July 2002) – named after the cowboy Woody Pride who is the main character of the Toy Story film series
  • Debian 3.1 Sarge (6 June 2005) – named after the Sergeant of the green plastic soldiers
  • Debian 4.0 Etch (8 April 2007) – named after the writing board Etch-A-Sketch
  • Debian 5.0 Lenny (14 February 2009) – named after Lenny the wind-up binoculars
  • Debian 6.0 Squeeze (6 February 2011) – named after the green three-eyed aliens
  • Debian 7 Wheezy (4 May 2013) – named after Wheezy the penguin with the red bow tie
  • Debian 8 Jessie (25 April 2015) – named after the cowgirl Jessica Jane “Jessie” Pride
  • Debian 9 Stretch (17 June 2017) – named after the purple octopus
  • Debian 10 Buster (no release date known so far) – named after the puppy dog from Toy Story 2

As of the beginning of 2019, the release names for two future releases are also already known [8]:

  • Debian 11 Bullseye – named after Bullseye, the horse of Woody Pride
  • Debian 12 Bookworm – named after Bookworm, the intelligent worm toy with a built-in flashlight from Toy Story 3.

Relation between alias name and development state

New or updated software packages are uploaded to the unstable branch first. After some days, a package migrates to the testing branch if it fulfills a number of criteria. The testing branch later becomes the basis for the next stable release. A stable release therefore contains only stable packages, which are essentially a snapshot of the testing branch at the time of release.
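
If you want to follow this migration yourself, the rmadison tool from the devscripts package queries the Debian archive and lists which version of a given package currently sits in each suite; a short sketch, using an arbitrary package name:

$ rmadison bash    # requires the devscripts package and network access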

Source

Yahoo Japan and EMQ X Join the OpenMessaging Project

 

The OpenMessaging project welcomes Yahoo Japan and EMQ X as new members.

We are excited to announce two new members of the OpenMessaging project: Yahoo Japan, one of the largest portal sites in Japan, and EMQ X, one of the most popular MQTT message middleware vendors. Yahoo Japan and EMQ X join Alibaba, JD.com, China Mobile Cloud, Qing Cloud, and other community members to form a standards community with 13 corporate members.

OpenMessaging is a standards project for messaging and streaming technology. Messaging and streaming products are widely used in modern architectures and data processing for decoupling, queuing, buffering, ordering, replicating, and so on. But when data is transferred across different messaging and streaming platforms, compatibility problems arise, which usually means a lot of additional work. The OpenMessaging community aims to eliminate these challenges by creating a global, cloud-oriented, vendor-neutral industry standard for distributed messaging.

Yahoo Japan, operated by Yahoo Japan Corporation, is one of the largest portal sites in Japan. Under its mission to be a “Problem-Solving Engine,” Yahoo Japan Corporation is committed to solving the problems of people and society by leveraging the power of information technology. The company uses various messaging systems (e.g., Apache Pulsar, Apache Kafka and RabbitMQ) to build its services and is creating a centralized pub-sub messaging platform that handles a vast amount of service and application traffic.

“Yahoo Japan Corporation uses various messaging systems (e.g., Apache Pulsar, Apache Kafka and RabbitMQ) to create its services. However, differences in messaging interfaces make the whole system complicated and lead to extra costs in implementation and in studying each system. Thus, we need a standardized and unified interface that can be easily implemented and easily collaborated with other services.” said Nozomi Kurihara, the Manager of the Messaging Platform team in Yahoo Japan. “We think OpenMessaging is the key in achieving our “multi big data” system in which data can be cross-used among different services/applications we provide.”

Originating from an open source IoT project started on GitHub in 2012, EMQ X has become one of the most popular MQTT message middleware products in the community. EMQ X is based on the Erlang/OTP platform, which can support 10 million concurrent MQTT connections with high throughput and low latency. EMQ X now has 500k downloads and 5000+ customers in 50 countries and regions around the world, such as China, the United States, Australia, the United Kingdom, and India.

“Our customers cover different industries, such as finance, IoV, telecom, and smart home. We have also partnered with Fortune 500 companies, such as HPE, Ericsson, and VMware, to provide professional IoT solutions to customers around the world. OpenMessaging is vendor-neutral and language-independent, provides industry guidelines for finance, e-commerce, IoT and big data, and aims to enable messaging and streaming applications across heterogeneous systems and platforms,” said Feng Lee, Co-founder of EMQ X. “We’re glad to join OpenMessaging.”

As an effort to standardize distributed messaging and streaming systems, OpenMessaging is committed to embracing an open, collaborative, intelligent, and cloud-native era with all its community members.

Source

Linux Tools: The Meaning of Dot | Linux.com

Let’s face it: writing one-liners and scripts using shell commands can be confusing. Many of the names of the tools at your disposal are far from obvious in terms of what they do (grep, tee and awk, anyone?) and, when you combine two or more, the resulting “sentence” looks like some kind of alien gobbledygook.

None of the above is helped by the fact that many of the symbols you use to build a chain of instructions can mean different things depending on their context.

Location, location, location

Take the humble dot (.) for example. Used with instructions that are expecting the name of a directory, it means “this directory” so this:

find . -name "*.jpg"

translates to “find in this directory (and all its subdirectories) files that have names that end in .jpg”.

Both ls . and cd . act as expected, so they list and “change” to the current directory, respectively, although including the dot in these two cases is not necessary.

Two dots, one after the other, in the same context (i.e., when your instruction is expecting a directory path) means “the directory immediately above the current one”. If you are in /home/your_directory and run

cd ..

you will be taken to /home. So, you may think this still kind of fits into the “dots represent nearby directories” narrative and is not complicated at all, right?

How about this, then? If you use a dot at the beginning of a directory or file name, it means the directory or file will be hidden:

$ touch somedir/file01.txt somedir/file02.txt somedir/.secretfile.txt
$ ls -l somedir/
total 0
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
$ # Note how there is no .secretfile.txt in the listing above
$ ls -la somedir/
total 8
drwxr-xr-x 2 paul paul 4096 Jan 13 19:57 .
drwx------ 48 paul paul 4096 Jan 13 19:57 ..
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 .secretfile.txt
$ # The -a option tells ls to show “all” files, including the hidden ones

And then there’s when you use . as a command. Yep! You heard me: . is a full-fledged command. It is a synonym of source, and you use it to execute a file in the current shell, as opposed to running a script some other way (which usually means Bash will spawn a new shell in which to run it).

Confused? Don’t worry — try this: Create a script called myscript that contains the line

myvar="Hello"

and execute it the regular way, that is, with sh myscript (or by making the script executable with chmod a+x myscript and then running ./myscript). Now try and see the contents of myvar with echo $myvar (spoiler: You will get nothing). This is because, when your script plunks “Hello” into myvar, it does so in a separate bash shell instance. When the script ends, the spawned instance disappears and control returns to the original shell, where myvar never even existed.

However, if you run myscript like this:

. myscript

echo $myvar will print Hello to the command line.
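
To see the two behaviours side by side, here is a minimal transcript (assuming the one-line myscript from above and a shell where myvar is not already set):

$ sh myscript        # runs in a child shell; myvar disappears with it
$ echo "[$myvar]"
[]
$ . myscript         # runs in the current shell; myvar survives
$ echo "[$myvar]"
[Hello]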

You will often use the . (or source) command after making changes to your .bashrc file, like when you need to expand your PATH variable. You use . to make the changes available immediately in your current shell instance.
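
As a concrete (hypothetical) example, after appending a directory to PATH in .bashrc, sourcing the file applies the change to the shell you are already in:

$ echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
$ . ~/.bashrc        # the new PATH takes effect immediately, no new terminal needed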

Double Trouble

Just like the seemingly insignificant single dot has more than one meaning, so has the double dot. Apart from pointing to the parent of the current directory, the double dot (..) is also used to build sequences.

Try this:

echo {1..10}

It will print out the list of numbers from 1 to 10. In this context, .. means “starting with the value on my left, count up to the value on my right”.

Now try this:

echo {1..10..2}

You’ll get 1 3 5 7 9. The ..2 part of the command tells Bash to step through the sequence not one by one, but two by two. In other words, you’ll get all the odd numbers from 1 to 10.

It works backwards, too:

echo {10..1}

You can also pad your numbers with 0s. Doing:

echo {000..121..2}

will print out every even number from 0 to 121 like this:

000 002 004 006 … 050 052 054 … 116 118 120

But how is this sequence-generating construct useful? Well, suppose one of your New Year’s resolutions is to be more careful with your accounts. As part of that, you want to create directories in which to classify your digital invoices of the last 10 years:

mkdir {2009..2018}_Invoices

Job done.

Or maybe you have hundreds of numbered files, say, frames extracted from a video clip, and, for whatever reason, you want to remove only every third frame between frames 43 and 61:

rm frame_{043..61..3}

It is likely that, if you have more than 100 frames, they will be named with padded 0s and look like this:

frame_000 frame_001 frame_002 …

That’s why you will use 043 in your command instead of just 43.

Curly~Wurly

Truth be told, the magic of sequences lies not so much in the double dot as in the sorcery of the curly braces ({}). Look how it works for letters, too. Doing:

touch file_{a..z}.txt

creates the files file_a.txt through file_z.txt.

You must be careful, however. Using a sequence like {Z..a} will run through a bunch of non-alphanumeric characters (glyphs that are neither numbers nor letters) that live between the uppercase alphabet and the lowercase one. Some of these glyphs are unprintable or have a special meaning of their own. Using them to generate names of files could lead to a whole bevy of unexpected and potentially unpleasant effects.

One final thing worth pointing out about sequences encased between {…} is that they can also contain lists of strings:

touch {blahg,splurg,mmmf}_file.txt

creates blahg_file.txt, splurg_file.txt and mmmf_file.txt.

Of course, in other contexts, the curly braces have different meanings (surprise!). But that is the stuff of another article.

Conclusion

Bash and the utilities you can run within it have been shaped over decades by system administrators looking for ways to solve very particular problems. To say that sysadmins and their ways are their own breed of special would be an understatement. Consequently, as opposed to other languages, Bash was not designed to be user-friendly, easy or even logical.

That doesn’t mean it is not powerful — quite the contrary. Bash’s grammar and shell tools may be inconsistent and sprawling, but they also provide a dizzying range of ways to do everything you can possibly imagine. It is like having a toolbox where you can find everything from a power drill to a spoon, as well as a rubber duck, a roll of duct tape, and some nail clippers.

Apart from being fascinating, it is also fun to discover all you can achieve directly from within the shell, so next time we will delve ever deeper into how you can build bigger and better Bash command lines.

Until then, have fun!

Source

How to Install and Play War Thunder on Ubuntu – Linux Hint

After spending years on Windows, adjusting to a different operating system can be quite hectic for some people. Those who are accustomed to using more than one operating system may not find it hard, but for others the change can be quite daunting and take considerable time. One of the biggest changes I experienced myself was installing anything on Ubuntu. It took me a long while to find out where the graphical user interface for the simple click-and-install method lives.

In the meantime, I slowly got used to the command-line interface for my installation needs, and needless to say, it soon became a normal and even fun thing. This allowed me to get an idea of how Ubuntu operates and made me more interested in using the OS as well. If you came from a similar background, you probably believed Ubuntu isn’t the best option for gaming. As I soon learned, it can turn out to be more than a great experience when it comes to gaming.

One of my favourite games on Windows was War Thunder, and as soon as I made the shift to Ubuntu, I knew which game I was going to download first. After all, I did have to find some way to pass the time at university.

Without any further ado, let’s move on to getting War Thunder installed and ready to play on our systems. Since this is a game that is supported on Steam, we will install it through Steam to make sure that everything is installed the way it was intended to be. Since War Thunder is a free-to-play game, we could have downloaded it from a source other than Steam, but we would then have to install it manually, which can cause problems that may only be properly fixed by installing it through Steam.

If you’ve gamed on Windows, then you are probably familiar with Steam already. If not, you might be asking: what is Steam? Steam is probably the biggest digital game distributor on the market these days. It’s the equivalent of Amazon for all your gaming needs: the go-to place for buying games online and playing with friends. Whatever gaming needs you may have, Steam will most definitely have you covered. If you are new to gaming on Steam, we will guide you through installing it.

The first way to install it is through the Ubuntu Software Center. Simply type in Steam and you will be able to find it. Install it from there and you will be able to start downloading games in a short while.

The other way is to download and install it through the command-line interface (CLI). To do so, type the following command in a terminal window:

sudo apt install steam-installer

Updating Steam

When you start Steam for the first time, it will update itself to the current stable version, which can take a while.

After you’ve followed the above steps, all that remains is to download the game itself: log in, search for the game in the store, make sure the system requirements are met, and start the download, as described below.

The next thing you will need to do is log in to your Steam account, or create a new one if you do not have one already. To log in, simply enter your username and password into the respective fields and press the ‘log in’ button or press Enter on your keyboard. The client will then validate your credentials and take you to the store’s front page. From there you can do many things, such as browse and buy games, manage friends, manage your profile, and so on.

To download War Thunder, go to the store tab by clicking on ‘Store’ in the top toolbar. You will find a search bar on the right side of your screen, a bit below that toolbar. Enter ‘War Thunder’ into it and the search should return some familiar terms, with the main game at the top of the list. Click on the list entry and proceed to the game’s main page. There you can see tons of information on the game, such as reviews, system requirements and trailers.

If you plan to install other games, make sure that the games you install are supported on Linux. To do that, go to the system requirements section and see if there is a tab for Linux OS. A way to make sure that you only search for Linux based games is by typing ‘Linux’ into the search bar. That way, you will only be presented with Linux supported games.

Normally there is a price to pay for each game, but War Thunder is one of the few that are free to play, which means it can be downloaded for no fee. When you click on Play Game, you will be presented with options to make the game searchable within the operating system and to create a desktop shortcut.

Sit back and relax while your system does everything for you

Once it starts downloading, you can continue to use Steam for whatever purpose and keep using your system as well. The download will continue to progress in the background, and any other games you choose to download before the previous one has completed will be queued. You will also have the option to move things to the front of the queue depending on your preference.

Once Steam has finished downloading its files, War Thunder will continue to download the remaining files through its own client. This does not happen for every game, but a few are handled by their own third-party clients that are not controlled by Steam; these are accessed by creating separate accounts on them, just like a Steam account.

Once that has been done, the remaining game files will automatically start downloading, and after that, War Thunder will be playable on your system by accessing it from your library in the Steam client. Do make sure to comment your views on the game, and how it compares if you also played it on Windows. Happy gaming!

Source

9 Cheat Sheets for Linux and Open Source | Linux.com

3 Linux and open source cheat sheets

  1. Python 3.7 Beginner’s Cheat Sheet
    The Python programming language is known for its large community and diverse extension menu. Get acquainted with Python’s built-in pieces.
  2. i3 Linux Window Manager Cheat Sheet
    Learn shortcuts to become even more productive with i3.
  3. Advanced SSH Cheat Sheet
    SSH is a tool for remote login, and it can be used in many other ways. Get common command-line options and their configuration file equivalents.

Source

Use SSH Commands in Windows 10 Command Prompt

How to SSH from a Windows machine to Linux

In many cases, to manage your Linux servers you need to allow remote access, and this can be done via the Secure Shell (SSH). For many years, Linux systems have been able to use the native terminal for SSH, but that was not the case for Windows systems, which needed additional tools to be installed.

Windows systems have seen many improvements, so you no longer need to install a separate tool; you can use the native tools that are available. In this tutorial, we will learn how to SSH into a Linux machine from Windows with the native tools.

What to know about SSH

Secure Shell is an encrypted connection protocol that allows secure remote sign-ins over unsecured connections. It works in client-server mode: the connection is established by the SSH client connecting to the SSH server.

SSH offers several options for user authentication; the most common are the password and public key authentication methods:

  • password: this works like the usual login process on a local computer, which means that you need the username and password of an existing account on the server.
  • public key: the principle is to have a cryptographic key pair, a public key and a private key, where the public key is configured on the server to authorize access and grant anyone who holds a copy of the matching private key access to the server (a short sketch follows this list).
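
As an illustration of the public key method, here is a minimal sketch run from PowerShell once the OpenSSH client described in the next section is installed; the user name, server address and key type are placeholders:

# generate a key pair under ~\.ssh (id_ed25519 and id_ed25519.pub)
ssh-keygen -t ed25519
# append the public key to the server's authorized_keys (password login is used this one time)
type $env:USERPROFILE\.ssh\id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
# later logins can now authenticate with the key
ssh user@server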

1) Install the OpenSSH client feature on Windows 10

Windows machines now allow you to use native tools to establish an SSH connection, but you first need to make sure that the OpenSSH Client feature is installed. Normally it is not installed by default, so you will need to do that first. Go to Windows -> Settings -> Apps -> Manage optional features.

Click Add a feature

Select OpenSSH Client and then install.

Now it’s installed

2) SSH connection with Windows PowerShell and the command prompt

Now you can decide whether to use the command prompt or Windows PowerShell to access your Linux server via SSH.

a) SSH with Windows PowerShell

The native Windows PowerShell tool allows you to connect to a remote server via SSH. Open the Run dialog with Windows + R, type powershell and press Enter.

Now enter the ssh command to connect to your remote Linux server: ssh username@server_ip

b) SSH with the command prompt

To remotely access your server via the command prompt, launch it with the key combination Windows + R and then enter cmd.

Now, in the command prompt, you can use the ssh command just as with PowerShell.

Now you know how to connect to your remote Linux server over SSH with the native tools offered by Windows. You can still choose to use the PuTTY tool as well, but it is now easier and more comfortable to use the tools offered by default.


Source

Why Windows Isn’t Hell Or Why Linux Isn’t Bliss – OSnews

To me, it’s a miracle how every tiny article on OSNews.com, or any other tech site, ends up with people shouting all sorts of nonsense at each other like “Linux is gonna bring back Elvis”, “Windows shot president Kennedy”, “Linux kept the cold war cold” or “Bill Gates wants to buy the moon and charge people for looking at it”. Do these people really know what they are saying, or are they just going with the Open-Source flow? Update: Rebuttal article here. Editorial Notice: All opinions are those of the author and not necessarily those of osnews.com.
General Note: Please forgive any grammar mistakes as the author is not a native English speaker.

Intro

I tend to think the latter. Not because I am not a Linux fan (I happily set up my computer with Mandrake about two years ago; they are still merrily in love), but because I have not heard anything new in the past two years. It is always “my god, not another security hole in Windows 95/98/98SE/ME/2000/XP/Server 2003”, “Microsoft aggressively bought company X”, “Microsoft launches another way to protect their software” and “Microsoft software is too expensive”. And Linux, on the other hand, is all bliss.

Well, I think Linux is not all “bliss”. Linux would be all “bliss” if we forget the slow boot-up/shutdown times, if we forget the lousy hardware support for, let’s say, ATI products (ATI being the number two in graphics cards!), if we forget the “geek” image of Linux, if we forget the fact that some distributions suddenly have to be paid for, if we forget that some distributions suddenly get discontinued, if we forget the crappy way software is installed (with the exception of apt-get, or so I’ve heard).

You can go the same way when it comes to Windows. Windows would be all hell if we forget the ease with which it is installed, if we forget the great hardware support, if we forget the uniform look of all the programs, if we forget InstallShield and look-alikes, if we forget the clear structure (Program Files, My Documents etc., and of course this only goes for the not-so-technical end user), if we forget Windows Update (which still beats the distribution-specific update tools, in my opinion).

If you confront Linux addicts with the disadvantages I just named, you always get the same reaction: “When Linux becomes (more) mainstream, those problems will disappear.” Well, I think you should turn that around: Linux will become (more) mainstream when those problems are solved, or at least addressed. Your OS can be great when it comes to its inner workings, but it is the looks of the OS that really matter to the masses. Would Marilyn Monroe have become as famous if she had not been so darn pretty? I do not think so. I mean, consumers do not want to wait forever for their PC to boot (you can read a Donna Tartt novel in the meantime… twice), they do not want twelve different applications for one task, they do not want to choose between six different window managers, even though all of them are quite good. I mean, do you line up six TVs in your living room just because they look a bit different from each other? Again, I do not think so (imagine the remote-control interference…).

What Should We Do?

So, what should happen to Linux in order to gain more marketshare at the cost of Windows? Well, a lot has been said when it comes to this particular issue.

I think the major distributions should all “join hands” to create one version of Linux, with one desktop, a uniform look, one update system and so on. They can still develop their own distributions (for the fans; I do not think my computer and Mandrake will ever divorce). By creating a standard, you will make it more accessible for the masses. Just look at the DVD recording standards now: the number of standards is really stopping people from buying a DVD recorder. They are heavily influenced by articles stating the risk of buying one: “Your standard may be unsupported in a few years”.

It would be no problem if Linux XP (couldn’t resist the temptation 😉 , sorry) cost something; the money earned could be spent on research. The newly developed applications can first be put into the distributions and, when the community is satisfied, they can be integrated into the next Linux version, Linux Longhorn (okay, this is getting silly). This way you get the best of both worlds: the knowledge, experience and diversity of the Open-Source world, combined with the ease and clarity of standardized software. A very good example is, in my eyes, LindowsOS 4.0. I have used it for a couple of weeks now and I must say I am impressed. Despite criticism from the Open-Source community (“It’s too Windows”, “It’s not free” and “They don’t supply source code”, which is a plain lie, by the way), I believe LindowsOS is kind of what that new standardized Linux should look like.

Of course that kind of takes away the essence of the Open-Source concept. Open-Source is all about letting everybody not only use the software, but also letting everybody improve the software. This has led to a diversity in the available software. This is a good thing if you are an expert willing to put time and effort into your OS, but if you are not, then Linux just isn’t for you at this moment.

But, as always, this is just my opinion. So please, do not send any suicide penguins my way…

Source

Easy File Sharing from Linux Commandline

Transfer.sh – Easy File Sharing from Linux Commandline

Transfer.sh is a simple, easy and fast service for file sharing from the command-line. It allows you to upload up to 10GB of data and files are stored for 14 days, for free.

You can set a maximum number of downloads, and it also supports encryption for security. It supports the local file system (local), as well as the s3 (Amazon S3) and gdrive (Google Drive) cloud storage services.

Transfer.sh – Easy File Sharing in Linux Terminal

It is designed to be used with the Linux shell. In addition, you can preview your files in the browser. In this article, we will show how to use transfer.sh in Linux.

Upload a Single File

To upload a file, you can use the curl program with the --upload-file option as shown.

$ curl --upload-file ./tecmint.txt https://transfer.sh/tecmint.txt
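
As mentioned above, you can also limit how long the file is kept and how many times it can be downloaded by passing the Max-Days and Max-Downloads headers (a sketch; the values and file name are only examples):

$ curl -H "Max-Downloads: 1" -H "Max-Days: 5" --upload-file ./tecmint.txt https://transfer.sh/tecmint.txt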

Download a File

To download your file, a friend or colleague can run the following command.

$ curl https://transfer.sh/Vq3Kg/tecmint.txt -o tecmint.txt 

Upload Multiple Files

You can upload multiple files at once, for example:

$ curl -i -F filedata=@/path/to/tecmint.txt -F filedata=@/path/to/usernames.txt https://transfer.sh/ 

Encrypt Files Before Transfer

To encrypt your files before the transfer, use the following command (you must have the gpg tool installed on the system). You will be prompted to enter a password to encrypt the file.

$ cat usernames.txt | gpg -ac -o- | curl -X PUT --upload-file "-" https://transfer.sh/usernames.txt 

To download and decrypt the above file, use the following command:

$ curl https://transfer.sh/11Rnw5/usernames.txt | gpg -o- > ./usernames.txt

Use Wget Tool

Transfer.sh also supports the wget tool. To upload a file, run.

$ wget --method PUT --body-file=./tecmint.txt https://transfer.sh/tecmint.txt -O - -nv

Create Alias Command

To use the short transfer command, add an alias to your .bashrc or .zshrc startup file.

$ vim ~/.bashrc
OR
$ vim ~/.zshrc

Then add one of the function definitions below (choose only one tool, either curl or wget).

## using curl
transfer() {
    curl --progress-bar --upload-file "$1" "https://transfer.sh/$(basename "$1")" | tee /dev/null;
}

alias transfer=transfer

## using wget
transfer() {
    wget -t 1 -qO - --method=PUT --body-file="$1" --header="Content-Type: $(file -b --mime-type "$1")" "https://transfer.sh/$(basename "$1")";
}

alias transfer=transfer

Save the changes and close the file. Then source it to apply the changes.

$ source ~/.bashrc
OR
$ source ~/.zshrc

From now on, you can upload a file using the transfer command as shown.

$ transfer users.list.gz

To set up your own sharing server instance, download the program code from the GitHub repository.

You can find more information and sample use cases on the project homepage: https://transfer.sh/

Transfer.sh is a simple, easy and fast service for file sharing from the command-line.

Source

Linux has its Nails on UNIX’s Coffin – OSnews

Today we feature a very interesting interview with Havoc Pennington. Havoc works for Red Hat, where he heads the desktop team, and he is also well known for his major contributions to GNOME, his GTK+ programming book, and the freedesktop.org initiative, which aims to standardize the X11 desktop environments. In the following interview we discuss the changes inside Red Hat, Xouvert, freedesktop.org and GNOME’s future, and how Linux, in general, is doing in the desktop market.

1. Looking at Red Hat’s recent press releases and web site lately reveals a new, stronger effort to shift focus further into the Enterprise and leave Red Hat Linux in the hands of the community for the home/desktop market. This seems to leave a “hole” in Red Hat’s previous target of the “Corporate Desktop market”. The new Red Hat Linux might sound like “power to the people”, but to me it sounds like an action that will have consequences (good & bad) for the quality, testing and development of what we got to know as your “corporate/desktop” product. Given the fact that Red Hat is the No. 1 Linux distribution on the planet, do you think that this new direction will slow down Linux penetration in the desktop market?

Havoc Pennington: In my view it’s a mistake to create an “Enterprise vs. Desktop” contrast; these are largely separate dimensions. There are enterprise desktops, enterprise servers, consumer desktops, and consumer servers. Quite possibly small business desktops and servers are another category in between.

I don’t think we’ll see a slowdown in Linux penetration into the desktop market. In fact I hope to see it speed up. Today there are many large software companies making investments in the Linux desktop.

2. How have things changed internally after the [further] focus shift to Enterprise? Is your desktop team still fully working on Gnome/GTK+/X/etc or have developers been pulled into other projects that are more in line with this new focus at Red Hat?

Havoc Pennington: We’re still working on the desktop, more so than ever. (Including applications such as Mozilla, OpenOffice, and Evolution, not just the base environment.)

3. In the past (pre-SCO), Red Hat admitted that it was growing wary of patent issues that might arise in the future. Do you believe that desktop open source software written by many different individuals around the globe might be infringing on patents in some cases without the knowledge of these developers? At the end of the day, we have seen some patents that were issued so shortsightedly that many have said that writing software is almost impossible nowadays. What kind of solution for this issue might OSS software developers find, to ensure a future that is not stricken by lawsuits left and right?

Havoc Pennington: As you know we’ve been more aggressive than other Linux vendors about removing potentially patented software from our distribution, specifically we took a lot of criticism for removing mp3 support.

One strategy for helping defend the open source community is to create defensive patents, as described here.

Another strategy is the one taken by Lawrence Rosen in the Academic Free License and Open Software License.

These licenses contain a “Termination for Patent Action” clause that’s an interesting approach.

Political lobbying and education can’t hurt either. These efforts become stronger as more people rely upon open source software.

4. What major new features are scheduled for GTK+ 2.4/2.6 and for the future in general? You once started a C++ wrapper for GTK+, but then the project went dormant. Do you believe that GNOME needs a C++ option, and if so, do you believe that Gtkmm is a good one? Are there plans to sync GTK+ and Gtkmm more often and include it by default in GNOME releases?

Havoc Pennington: GTK+ 2.4 and 2.6 plans are pretty well described here.

One theme of these releases is to make GTK+ cover all the GUI functionality provided historically by libgnomeui. So there will be a single clear GUI API, rather than “plain GTK+” and “GNOME libs” – at that point being a “GNOME application” is really just a matter of whether you follow the GNOME user interface guidelines, rather than an issue of which libs you link to. This cuts down on bloat and developer confusion.

The main user-visible change in 2.4 is of course the new file selector.

The other user-visible effects of 2.4 and 2.6 will mostly be small tweaks and improved consistency between applications as they use the new standard widgets.

At some point we’ll support Cairo which should allow for some nice themes. Cairo also covers printing.

Regarding C++, honestly I’m not qualified to comment on the current state of gtkmm, because I haven’t evaluated it in some time. I do think a C++ option is important. There are two huge wins I’d consider even more important for your average one-off in-house simple GUI app though. 1) to use a language such as Python, Java, C#, Visual Basic, or whatever with automatic memory management, high-level library functions, and so forth; 2) use a user interface builder such as Glade. Both of those will save you more time than the difference between a C and a C++ UI toolkit.

5. What do you think of the XFree86 fork, Xouvert? Do you support the fork, and if yes, what exactly you want to see changed with Xouvert (feature-wise and architecture-wise for X)?

Havoc Pennington: The huge architectural effort I want to see in the X server is to move to saving all the window contents and using the 3D engine of the graphics cards, allowing transparency, faster redraws, nice visual effects, and thumbnailing/magnification, for example.

The trick is that there are *very* few people in the world with the qualifications to architect this change. I don’t know if the Xouvert guys have the necessary knowledge, but if they do that would be interesting. It may well be that no single person understands how to do this right; we may need a collaboration between toolkit people, X protocol people, and 3D hardware experts.

Aside from that, most of the changes to X I’d like to see aren’t really to the window system. Instead, I’d like us to think of the problem as building a base desktop platform. This platform would include a lot of things currently in the X tarball, a lot of things currently on freedesktop.org, and a lot of things that GNOME and KDE and GTK+ and Qt are doing independently. You can think of it as implementing the common backend or framework that GUI toolkits and applications are ported to when they’re ported to Linux.

This may be of interest. If we can negotiate the scary political waters, I’d like to see the various X projects, freedesktop.org, and the desktop environments and applications work together on a single base desktop platform project. With the new freedesktop.org server I’m trying to encourage such a thing.

6. How are things with freedesktop.org; what is its status? Do these standards get implemented in KDE and Gnome, or do they find resistance by hardcore devs on either projects? When do you think KDE and Gnome will reach a good level of interoperability as defined by freedesktop.org? What work has being done so far?

Havoc Pennington: freedesktop.org is going pretty well; I recently posted about the status of the hosting move, see here. I also had a lot of fun at the KDE conference in Nove Hrady and really enjoyed meeting a lot of quality developers I hadn’t met before.

I find that hardcore devs understand the importance of what we’re trying to do, though they also understand the difficulty of changing huge codebases such as Mozilla, OpenOffice, GNOME, or KDE so are understandably careful.

There are people who think of things in “GNOME vs. KDE” terms but in general the people who’ve invested the most time are interested in the bigger picture of open source vs. proprietary, Linux vs. Microsoft, and democratizing access to software.

Of course everyone has their favorite technologies – I think GNOME is great and have a lot of investment in it, and I also like Emacs and Amazon.com and Red Hat Linux. These preferences change over time. When it comes down to it the reason I’m here is larger than any particular technology.

As to when freedesktop.org will achieve interoperability, keep in mind that currently any app will run with any desktop. The issue is more sustaining that fact as the desktop platforms add new bells and whistles; and factoring new features down into the base desktop platform so that apps are properly integrated into any desktop. So it’s a process that I don’t think will ever end. There are always new features and those will tend to be tried out in several apps or desktops before they get spec’d out and documented on the freedesktop.org level.

7. Gnome 2.4 was released last week. Are you satisfied with the development progress of Gnome? What major features/changes do you want to see in Gnome in the next couple of years?

Havoc Pennington: I’m extremely satisfied with GNOME’s progress. Time-based releases (see here for the long definition) are the smartest thing a free software project can do.

This mail has some of my thoughts on what we need to add.

Honestly though the major missing bits of the Linux desktop are not on the GNOME/KDE level anymore. The desktop environments can be endlessly tweaked but they are pretty usable already.

We need to be looking at issues that span and integrate the large desktop projects – WINE, Mozilla, OpenOffice, Evolution on top of the desktops, X below them. And integrate all of them with the operating system.

Some of the other major problems, as explained here, have “slipped through the cracks” in that they don’t clearly fall under the charter of any of the existing large projects.

And of course manageability, administration, security, and application features.

8. Your fellow Red Hat engineer Mike Harris said recently that “There will be a time and a place for Linux on the home desktop. When and where it will be, and whether it will be something that can turn a profit, remains to be seen. When Red Hat believes it may be a viable market to enter, then I’m sure we will. Personally, in my own opinion, I don’t think it will be viable for at least 1.5 – 2 years minimum.” Do you agree with this time frame and if yes, what parts exactly need to be “fixed/changed” in the whole Linux universe (technical or not) before Linux becomes viable for the home/desktop market?

Havoc Pennington: I wouldn’t try to guess the timeframe exactly. My guess would be something like “0 to 7 years” 😉

On the technology side, we need some improvements to robustness, to hardware handling, to usability.

However the consumer barriers have a lot to do with consumer ISV and IHV support. And you aren’t going to get that until you can point to some desktop marketshare. That’s why you can’t bootstrap the Linux desktop by targeting consumers. You need to get some initial marketshare elsewhere.

There’s also the business issue that targeting consumers involves very expensive mass market advertising.

9. Have you had a look at the Mac OS X 10.3 Panther previews? Apple is introducing some new widgets, like the new tabs that look like buttons instead of tabs, and there is of course Exposé, which, by utilizing the GL-based Quartz Extreme, offers new usability enhancements plus cool and modern eye candy. Do you think that X with GTK+/GNOME will be able to have such innovations in a timely manner, or will it take some years before we see those on a common Linux desktop?

Havoc Pennington: I haven’t tried Panther, though I saw some screenshots and articles.

As I mentioned earlier, the big X server feature I think we need is to move to this kind of 3D-based architecture. If we got the right 2 or 3 people working on it today, we could have demoware in a few months and something usable in a couple of years. I’m just making up those numbers of course.

However, nobody can predict when the right 2 or 3 people will start to work on it. As always in free software, the answer to “when will this be done?” is “faster if you help.”

One stepping stone is to create a robust base desktop platform project where these people could do their work, and some of us are working hard on that task.

10. How do you see the Linux and Unix landscape today? Do you feel that Linux is replacing Unix slowly but steadily, or do they follow parallel and different directions in your opinion?

Havoc Pennington: I would say that the nails are firmly in the UNIX coffin, and it’s just a matter of time.

Source

Python Testing with pytest: Fixtures and Coverage

Python

Improve your Python testing even more.

In my last two articles, I introduced pytest, a library for testing Python code (see “Testing Your Code with Python’s pytest” Part I and Part II). pytest has become quite popular, in no small part because it’s so easy to write tests and integrate those tests into your software development process. I’ve become a big fan, mostly because after years of saying I should get better about testing my software, pytest finally has made it possible.

So in this article, I review two features of pytest that I haven’t had a chance to cover yet: fixtures and code coverage, which will (I hope) convince you that pytest is worth exploring and incorporating into your work.

Fixtures

When you’re writing tests, you’re rarely going to write just one or two. Rather, you’re going to write an entire “test suite”, with each test aiming to check a different path through your code. In many cases, this means you’ll have a few tests with similar characteristics, something that pytest handles with “parametrized tests”.

But in other cases, things are a bit more complex. You’ll want to have some objects available to all of your tests. Those objects might contain data you want to share across tests, or they might involve the network or filesystem. These are often known as “fixtures” in the testing world, and they take a variety of different forms.

In pytest, you define fixtures using a combination of the pytest.fixture decorator, along with a function definition. For example, say you have a function that returns a list of lines from a file, in which each line is reversed:


def reverse_lines(f):
   return [one_line.rstrip()[::-1] + '\n'
           for one_line in f]

Note that in order to keep the newline character from being placed at the start of each reversed line, you remove it from the string before reversing and then append a '\n' to each returned string. Also note that although it probably would be a good idea to use a generator expression rather than a list comprehension, I’m trying to keep things relatively simple here.
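
For reference, the generator-expression variant alluded to above would differ only in the brackets, producing the reversed lines lazily instead of building the whole list up front (a small sketch, not from the original article):


def reverse_lines(f):
   # parentheses instead of square brackets: a generator that reverses lines one at a time
   return (one_line.rstrip()[::-1] + '\n'
           for one_line in f)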

If you’re going to test this function, you’ll need to pass it a file-like object. In my last article, I showed how you could use a StringIO object for such a thing, and that remains the case. But rather than defining global variables in your test file, you can create a fixture that’ll provide your test with the appropriate object at the right time.

Here’s how that looks in pytest:


import pytest
from io import StringIO

@pytest.fixture
def simple_file():
   return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))

On the face of it, this looks like a simple function—one that returns the value you’ll want to use later. And in many ways, it’s similar to what you’d get if you were to define a global variable by the name of “simple_file”.

At the same time, fixtures are used differently from global variables. For example, let’s say you want to include this fixture in one of your tests. You then can mention it in the test’s parameter list. Then, inside the test, you can access the fixture by name. For example:


def test_reverse_lines(simple_file):
   assert reverse_lines(simple_file) == ['cba\n', 'fed\n',
                                         'ihg\n', 'lkj\n']

But it gets even better. Your fixture might act like data, in that you don’t invoke it with parentheses. But it’s actually a function under the hood, which means it executes every time you invoke a test using that fixture. This means that the fixture, in contrast with regular-old data, can make calculations and decisions.

You also can decide how often a fixture is run. For example, as it’s written now, this fixture will run once per test that mentions it. That’s great in this case, when you want to compare with a list or file-like structure. But what if you want to set up an object and then use it multiple times without creating it again? You can do that by setting the fixture’s “scope”. For example, if you set the scope of the fixture to be “module”, it’ll be available throughout your tests but will execute only a single time. You can do this by passing the scope parameter to the @pytest.fixture decorator:


@pytest.fixture(scope='module')
def simple_file():
   return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))

I should note that giving this particular fixture “module” scope is a bad idea, since the second test will end up having a StringIO whose location pointer (checked with file.tell) is already at the end.

These fixtures work quite differently from the traditional setup/teardown system that many other test systems use. However, the pytest people definitely have convinced me that this is a better way.

But wait—perhaps you can see where the “setup” functionality exists in these fixtures. And, where’s the “teardown” functionality? The answer is both simple and elegant. If your fixture uses “yield” instead of “return”, pytest understands that the post-yield code is for tearing down objects and connections. And yes, if your fixture has “module” scope, pytest will wait until all of the functions in the scope have finished executing before tearing it down.
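
As a small illustration (a hypothetical fixture with a made-up file name, not from the article), a module-scoped fixture that opens a file before the first test that uses it and closes it afterwards could look like this:


import pytest

@pytest.fixture(scope='module')
def data_file():
   f = open('data.txt')    # setup: runs once, before the first test in the module that asks for it
   yield f                 # every test that names data_file receives this open file object
   f.close()               # teardown: runs after the last test in the module has finished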

Coverage

This is all great, but if you’ve ever done any testing, you know there’s always the question of how thoroughly you have tested your code. After all, let’s say you’ve written five functions, and that you’ve written tests for all of them. Can you be sure you’ve actually tested all of the possible paths through those functions?

For example, let’s assume you have a very strange function, only_odd_mul, which multiplies only odd numbers:


def only_odd_mul(x, y):
   if x%2 and y%2:
       return x * y
   else:
       raise NoEvenNumbersHereException(f'{x} and/or {y} not odd')

Here’s a test you can run on it:


def test_odd_numbers():
   assert only_odd_mul(3, 5) == 15

Sure enough, the test passed. It works great! The software is terrific!

Oh, but wait—as you’ve probably noticed, that wasn’t a very good job of testing it. There are ways in which the function could give a totally different result (for example, raise an exception) that the test didn’t check.

Perhaps it’s easy to see it in this example, but when software gets larger and more complex, it’s not going to be so easy to eyeball it. That’s where you want to have “code coverage”, checking that your tests have run all of the code.

Now, 100% code coverage doesn’t mean that your code is perfect or that it lacks bugs. But it does give you a greater degree of confidence in the code and the fact that it has been run at least once.

So, how can you include code coverage with pytest? It turns out that there’s a package called pytest-cov on PyPI that you can download and install. Once that’s done, you can invoke pytest with the --cov option. If you don’t say anything more than that, you’ll get a coverage report for every part of the Python library that your program used, so I strongly suggest you provide an argument to --cov, specifying which program(s) you want to test. And, you should indicate the directory into which the report should be written. So in this case, you would say:


pytest --cov=mymul .

Once you’ve done this, you’ll need to turn the coverage report into something human-readable. I suggest using HTML, although other output formats are available:


coverage html

This creates a directory called htmlcov. Open the index.html file in this directory using your browser, and you’ll get a web-based report showing (in red) where your program still lacks coverage. Sure enough, in this case, it showed that the even-number path wasn’t covered. Let’s add a test to do this:


def test_even_numbers():
   with pytest.raises(NoEvenNumbersHereException):
       only_odd_mul(2,4)

And as expected, coverage has now gone up to 100%! That’s definitely something to appreciate and celebrate, but it doesn’t mean you’ve reached optimal testing. You can and should cover different mixtures of arguments and what will happen when you pass them.
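
One convenient way to cover several argument mixtures without writing a separate test function for each is pytest's parametrize decorator; here is a sketch reusing only_odd_mul from above:


import pytest

@pytest.mark.parametrize('x, y, expected', [
   (3, 5, 15),
   (7, 9, 63),
   (1, 1, 1),
])
def test_odd_mixtures(x, y, expected):
   assert only_odd_mul(x, y) == expected

@pytest.mark.parametrize('x, y', [(2, 4), (2, 3), (3, 2)])
def test_even_mixtures(x, y):
   with pytest.raises(NoEvenNumbersHereException):
       only_odd_mul(x, y)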

Summary

If you haven’t guessed from my three-part focus on pytest, I’ve been bowled over by the way this testing system has been designed. After years of hanging my head in shame when talking about testing, I’ve started to incorporate it into my code, including in my online “Weekly Python Exercise” course. If I can get into testing, so can you. And although I haven’t covered everything pytest offers, you now should have a good sense of what it is and how to start using it.

Resources

  • The pytest website is at http://pytest.org.
  • An excellent book on the subject is Brian Okken’s Python testing with pytest, published by Pragmatic Programmers. He also has many other resources, about pytest and code testing in general, at http://pythontesting.net.
  • Brian’s blog posts about pytest’s fixtures are informative and useful to anyone wanting to get started with them.

Source
