How to Use fdisk in Linux – Linux Hint

fdisk is a tool for partitioning hard drives (HDDs), solid state drives (SSDs), USB thumb drives, etc. The best thing about fdisk is that it is installed by default on almost every Linux distribution these days. Fdisk is also very easy to use.

In this article, I will show you how to use fdisk to partition storage devices such as HDDs, SSDs, and USB thumb drives in Linux. So, let’s get started.

In Linux, block devices or hard drives have unique identifiers such as sda, sdb, sdc, etc. Before you start partitioning your hard drive, you must make sure that you’re partitioning the right one. Otherwise, you may lose data in the process.

You can list all the storage/block devices on your Linux computer with the lsblk command:
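The listing shown in the original screenshots was produced by a command along these lines (lsblk needs no root privileges):

```shell
# List all block devices with their partitions, sizes, types, and mount points
lsblk
```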

As you can see, I have a hard drive (sda) and a USB thumb drive (sdb) attached to my computer. The lsblk command also lists the partitions. The raw storage device has the TYPE disk. So, make sure you don’t use a partition identifier instead of a raw disk identifier.

As you can see, the hard drive (sda) is 20GB in size and the USB thumb drive (sdb) is 3.8GB in size.

You can access the device identifier, let’s say sdb, as /dev/sdb.

In the next section, I will show you how to open it with fdisk.

Opening Storage Devices with fdisk:

To open a storage/block device with fdisk, first, you have to make sure that none of its partitions are mounted.

Let’s say, you want to open your USB thumb drive /dev/sdb with fdisk. But, it has a single partition /dev/sdb1, which is mounted somewhere on your computer.

To unmount /dev/sdb1, run the following command:
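A minimal example, assuming the partition is mounted (adjust the device name to match your system):

```shell
# Unmount the partition (not the whole device) before repartitioning
sudo umount /dev/sdb1
```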

Now, open /dev/sdb with fdisk with the following command:
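fdisk must be run as root and is given the whole device, not a partition:

```shell
# Open the raw device in fdisk's interactive mode
sudo fdisk /dev/sdb
```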

As you can see, /dev/sdb storage/block device is opened with fdisk.

In the next sections, I will show you how to use the fdisk command line interface to do common partitioning tasks.

Listing Existing Partitions with fdisk:

You can press p and then press <Enter> to list all the existing partitions of the storage/block device you opened with fdisk.

As you can see in the screenshot below, I have a single partition.

Creating a New Partition Table with fdisk:

A partition table holds information about the partitions of your hard drive, SSD, or USB thumb drive. DOS and GPT are the most common types of partition tables.

DOS is an older partition table scheme. It is good for small storage devices such as USB thumb drives. In a DOS partition table, you can’t create more than 4 primary partitions.

GPT is the newer partition table scheme. In GPT, you can have more than 4 primary partitions. It is good for big storage devices.

With fdisk, you can create both DOS and GPT partition tables.

To create a DOS partition table, press o and then press <Enter>.

To create a GPT partition table, press g and then press <Enter>.

Creating and Removing Partitions with fdisk:

To create a new partition with fdisk, press n and then press <Enter>.

Now, enter the partition number and press <Enter>. Usually, the default partition number is okay. So, you can just leave it as it is unless you want to do something very specific.

Now, enter the sector number on your hard drive from which you want the partition to start. Usually, the default value is alright. So, just press <Enter>.

The last sector number or size is the most important here. Let’s say, you want to create a partition of size 100 MB, you just type in +100M here. For 1GB, you type in +1G here. The same way, for 100KB, +100K. For 2TB, +2T. For 2PB, +2P. Very simple. Don’t type in fractions here, only whole numbers. Otherwise, you will get an error.

As you can see, the 100MB partition is created.

If you had a partition that started and ended in the same sector before, you may see something like this. Just press y and then press <Enter> to remove the partition signature.

As you can see, fdisk tells you that when you write the changes, the signature will be removed.

I am going to create another partition of 1GB in size.

I am going to create another 512MB partition just to show you how to remove partitions with fdisk.

Now, if you list the partitions, you should be able to see the partitions that you created. As you can see, the 100MB, 1GB and 512MB partitions that I just created are listed here.

Now, let’s say you want to delete the third partition /dev/sdb3 or the 512MB partition. To do that, press d and then press <Enter>. Now, type in the partition number and press <Enter>. In my case, it is the partition number 3.

As you can see, partition number 3 is deleted.

As you can see, the 512MB partition or the 3rd partition is no more.

To permanently save the changes to the disk, press w and then press <Enter>. The partition table should be saved.
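Once the table is written, you can confirm the result from the shell without re-entering the interactive mode; for example:

```shell
# Print the (now saved) partition table of the device
sudo fdisk -l /dev/sdb
```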

Formatting and Mounting Partitions:

Now that you’ve created some partitions using fdisk, you can format them and start using them. To format the second partition, let’s say /dev/sdb2, to the ext4 filesystem, run the following command:

$ sudo mkfs.ext4 -L MySmallPartition /dev/sdb2

NOTE: Here, MySmallPartition is the label for the /dev/sdb2 partition. You can put anything meaningful here that describes what this partition is for.

The partition is formatted to ext4 filesystem.

Now that the partition /dev/sdb2 is formatted to ext4, you can use the mount command to mount it on your computer. To mount the partition /dev/sdb2 to /mnt, run the following command:

$ sudo mount /dev/sdb2 /mnt

As you can see, the partition /dev/sdb2 is mounted successfully to /mnt mount point.
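To confirm the mount and see the available space, and to unmount the partition when you’re done, you can run:

```shell
# Show filesystem usage for the mount point
df -h /mnt

# Unmount when finished
sudo umount /mnt
```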

So, that’s how you use fdisk to partition disks in Linux. Thanks for reading this article.


How to Install JetBrains PyCharm on Ubuntu – Linux Hint

PyCharm is an awesome Python IDE from JetBrains. It has a lot of awesome features and a beautiful looking UI (User Interface). It is really easy to use.

In this article, I will show you how to install PyCharm on Ubuntu. The procedure shown here will work on Ubuntu 16.04 LTS and later. I will be using Ubuntu 18.04 LTS for the demonstration in this article. So, let’s get started.

Before you install PyCharm on Ubuntu, you should install some prerequisite packages. Otherwise, PyCharm won’t work correctly.

You have to install the Python interpreters that you want to use with PyCharm to run your project. You also have to install PIP for the Python interpreters that you wish to use.

If you want to use Python 2.x with PyCharm, then you can install all the required packages with the following command:

$ sudo apt install python2.7 python-pip

Now, press y and then press <Enter>.

All the required packages for working with Python 2.x in PyCharm should be installed.

If you want to use Python 3.x with PyCharm, then install all the required packages with the following command:

$ sudo apt install python3-pip python3-distutils

Now, press y and then press <Enter> to continue.

All the required packages for working with Python 3.x in PyCharm should be installed.

Installing PyCharm:

PyCharm has two versions: the Community version and the Professional version. The Community version is free to download and use. The Professional version is not free; you have to purchase a license to use it. The Community version is mostly okay, but it lacks some of the advanced features of the Professional version. So, if you need these features, then buy a license and install the Professional version.

On Ubuntu 16.04 LTS and later, both the PyCharm Community and Professional versions are available as snap packages in the official snap package repository.

To install PyCharm Community version snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install pycharm-community --classic

To install PyCharm Professional version snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install pycharm-professional --classic

In this article, I will go with the PyCharm Community version.

As you can see, PyCharm Community version snap package is being downloaded.

PyCharm Community version is installed.
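Since it was installed as a snap, you can also start it from a terminal. The launcher name below matches the snap package name, which is the usual convention:

```shell
# Launch the PyCharm Community edition installed via snap
pycharm-community
```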

Initial Configuration of PyCharm:

Now that PyCharm is installed, you can start it from the Application Menu of Ubuntu. Just search for pycharm in the Application Menu and you should see PyCharm icon as marked in the screenshot below. Just click on it.

As you’re running PyCharm for the first time, you will have to do some initial configuration. Once you see the following window, click on Do not import settings and click on OK.

Now, you will see the JetBrains license agreement window.

Now, click on I confirm that I have read and accept the terms of this User Agreement and click on Continue to accept the license agreement.

Now, you have to select a UI theme for PyCharm. You can select either the dark theme – Darcula or the Light theme.

Once you select a theme, you can click on Skip Remaining and Set Defaults to leave everything else the default and start PyCharm.

Otherwise, click on Next: Featured plugins.

Once you click on Next: Featured plugins, PyCharm will suggest some common plugins that you may want to install. If you want to install any plugins from here, click on Install.

Now, click on Start using PyCharm.

As you can see, PyCharm is starting.

PyCharm has started. This is the dashboard of PyCharm.

Creating a Project in PyCharm:

In this section, I will show you how to create a Python project in PyCharm. First, open PyCharm and click on Create New Project.

Now, select a location for your new project. This is where all the files of this project will be saved.

If you want, you can also change the default Python version of your project. To do that, click on the Project Interpreter section to expand it.

Here, you can see in the Base interpreter section, Python 3.6 is selected by default. It is the latest version of Python 3 installed on my Ubuntu 18.04 LTS machine. To change the Python version, click on the Base interpreter drop down menu.

As you can see, all the Python versions installed on my Ubuntu 18.04 LTS machine are listed here. You can pick the one you need from the list. If you want any version of Python which is not listed here, just install it on your computer, and PyCharm should be able to detect it.

Once you’re happy with all the settings, click on Create.

The project should be created.

Now, to create a new Python script, right click on the project and go to New > Python File as marked in the screenshot below.

Now, type in a file name for your Python script and click on OK.

As you can see, test.py file is created and opened in the editor section of PyCharm.

I wrote a very basic Python script as you can see.

Now, to run the Python script currently opened in the editor, press <Alt> + <Shift> + <F10> or go to Run > Run… as marked in the screenshot below.

As you can see, the Python script which is currently opened in the editor is shown here. Just press <Enter>.

As you can see, the script is running.

Once I type in all the inputs, I get the desired output as well.

So, that’s how you install and use PyCharm on Ubuntu. Thank you for reading this article.


The Benefits of Multi-Factor Authentication – NoobsLab

Using a password is arguably the most common and popular security measure available for most types of accounts and information storage. But unfortunately, it is often one of the most vulnerable to hacking and other types of cyber-attacks.

A 2016 study revealed that hackers and other cybercriminals usually perform the most significant data breaches with the aim of stealing individual identities. However, after these attacks take place, the most common advice from companies is simply to change your password to avoid having it happen to you.

The problem with this is that using a password has a lot of flaws and imperfections. For starters, passwords don’t generally provide a very strong way to identify a person and basically, anyone who gets their hands on a password can simply log on to an account and do or take whatever they wish.

Additionally, an account’s security level is based only on the strength of the password being used. And most of the time, this isn’t a very strong measure since people don’t usually want to remember long strings of upper and lowercase letters, numbers and special characters. Instead, users typically use a password that is simple and easy to remember. In other words, they unknowingly use something that’s easy to hack.

Because of the deficiencies with using a single password, many companies are now turning to more secure solutions, such as email encryption and multi-factor authentication, to control account access and provide users with alternatives to traditional passwords.

Still, this might leave you wondering what precisely multi-factor authentication is and how it can help improve your online safety and security.

What Is Multi-Factor Authentication?

In simple terms, multi-factor authentication is the process used to identify an individual online by validating more than one claim from several different categories of authentication, which are presented by the user.

The process is also sometimes referred to as advanced authentication, step-up authentication, or two-factor authentication. Regardless, it is simply the means of verifying an individual’s identity by using more than just a simple password as recognition.

In most cases, multi-factor authentication will involve two or more of the following basic elements:

Something known by the user, such as a PIN or password.

Something owned by the user, such as a mobile device or email account.

A user biometric, such as a fingerprint, facial scan, or voice recognition.

Essentially, the concept of multi-factor authentication understands that no authentication factor is completely secure. In fact, any type of authentication factor will have both its own strengths and weaknesses. However, multi-factor authentication improves identification by using a second or third factor to compensate for the weaknesses of the other factors being used.

By now, it should be easy to see why multi-factor authentication is needed to ensure the safety and security of your online presence. Below, we’ll go over a few of the benefits of using multi-factor authentication.

Increased Security

As we’ve already mentioned, traditional password security is no longer as effective as it once was. Fortunately, multi-factor authentication can supplement a traditional password with an additional factor that cannot easily be guessed, such as a verification code sent to your mobile device or a given biometric factor such as your fingerprint or a facial scan.

These additional factors make it much harder for a cybercriminal to get into your accounts unless they actually possess all the factors required by the system.

Simple Account Access with Single Sign-On Software

Most people assume that using multi-factor authentication will make logging into their accounts a more complicated and time-consuming process. However, the added security of multi-factor authentication allows companies to provide their users with login options such as single sign-on.

Single sign-on, or SSO, is the process of authentication which allows a user to gain access to multiple applications or accounts by using a single set of login credentials. Therefore, once a user is authenticated, they will be logged onto the single sign-on account and will have access to any other apps or accounts that are covered by the software.

One of the biggest challenges in implementing multi-factor authentication is the belief that it will make logging in more complicated. But when combined with software such as single sign-on, multi-factor authentication benefits internet users by making it easier to sign into a variety of different apps at the same time.

Regulatory Compliance

Today, there are many different compliance standards when it comes to organisations that use and store their customers’ sensitive information. And, in many cases, some laws specify the need for these organisations to implement safety features such as multi-factor authentication.

So, not only does multi-factor authentication keep your personally identifiable information safe and simplify logging into your accounts, but it is also a step towards regulatory compliance for most companies and organisations, which ensures that your online safety is held in the highest priority.


5 Screen Recorders for the Linux Desktop | Linux.com

There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select.

And, without further ado, let’s get on with the list.

Simple Screen Recorder

I’m starting out with my go-to screen recorder. I use Simple Screen Recorder on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs.

Simple Screen Recorder also:

  • Records audio input
  • Allows you to pause and resume recording
  • Offers a preview during recording
  • Allows for the selection of video containers and codecs
  • Adds timestamp to file name (optional)
  • Includes hotkey recording and sound notifications
  • Works well on slower machines
  • And much more

Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. Simple Screen Recorder can be installed from the standard repositories on many desktops, or via easy to follow instructions on the application download page.
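On Ubuntu and other Debian-based distributions, for instance, the package is usually named simplescreenrecorder (check your distribution’s repositories, as package names vary):

```shell
sudo apt-get install simplescreenrecorder
```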

Gtk-recordmydesktop

The next entry, gtk-recordmydesktop, doesn’t give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn’t mean gtk-recordmydesktop isn’t without appeal. In fact, there are a few features that make this option in the genre fairly appealing. First and foremost, it’s very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2).

Unfortunately, the follow the mouse feature doesn’t always work as expected, so chances are you’ll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand the GUI frontend isn’t nearly as reliable as is the command line version of the tool. From the command line, you could record a specific position of the screen like so:

recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv

where:

  • X_POS is the offset on the X axis
  • Y_POS is the offset on the Y axis
  • WIDTH is the width of the screen to be recorded
  • HEIGHT is the height of the screen to be recorded
  • FILENAME is the name of the file to be saved

To find out more about the command line options, issue the command man recordmydesktop and read through the manual page.

Kazam

If you’re looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual—albeit limited amount of—bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3).

Kazam falls in line with gtk-recordmydesktop, when it comes to features. In other words, it’s slightly limited in what it can do. However, that doesn’t mean you shouldn’t give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point and click all the way. But if you’re looking for serious bells and whistles, look away.

The version of Kazam, with broadcast goodness, can be found in the following repository:

ppa:sylvain-pineau/kazam

For Ubuntu (and Ubuntu-based distributions), install with the following commands:

sudo apt-add-repository ppa:sylvain-pineau/kazam

sudo apt-get update

sudo apt-get install kazam -y

Vokoscreen

The Vokoscreen recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4).

As with most every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), window, or area. Vokoscreen also allows you to select a magnification level (200×200, 400×200, or 600×200). The magnification level makes for a great tool to highlight a specific section of the screen (the magnification window follows your mouse).

Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its GitHub repository.
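On Debian/Ubuntu-based systems, the repository package is typically called vokoscreen (this name is an assumption; newer distributions may ship it as vokoscreen-ng):

```shell
sudo apt-get install vokoscreen
```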

OBS Studio

For many, OBS Studio will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, Mixer.com, DailyMotion, Facebook Live, Restream.io, LiveEdu.tv, Twitter, and more. In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop.

Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you set up your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are this is exactly why, so you won’t want to skip out on configuring your default stream.
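For Ubuntu, the OBS project maintains an official PPA (the current recommended install steps may differ; check the project’s website):

```shell
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt-get update
sudo apt-get install obs-studio
```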

I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally.

There’s More Where That Came From

This is a short list of screen recording solutions for Linux. Although there are plenty more where this came from, you should be able to fill all your desktop recording needs with one of these five apps.


How much data in the world by 2025?

I was doing Enterprise Storage training and one of the presentations quoted an IDC report as having recently revised the estimated total amount of data produced on the planet as reaching 163 Zettabytes by 2025. That is 163 followed by 21 zeros. That is just a number, a very big number to be sure. But what does it really mean? How can we gain some perspective from it?

Then I was watching the rest of the video presentation which was about 4 minutes long. So I googled a few things and discovered the average video clip on YouTube is about 4.2 minutes. Now it also seems that an amazing number of people watch YouTube on their mobile devices, often in low resolution. So if we average the available resolutions to 480p (YouTube default) we discover the average YouTube clip consumes about 21MB of storage. Great! But so what?

Well, if my math is correct, and it could possibly be flawed, we can extrapolate the following:
163 Zettabytes could hold 7.7 quadrillion clips (more than 7 followed by 15 zeros).
With an average length of 4.2 minutes, that equals over 543 trillion hours of video. That is around 62 billion years of YouTube watching.

In 2018 the world population is estimated at over 7.6 billion. So every man, woman, and child on the planet could sit in front of the screen, or mobile, and watch over 71,000 hours of YouTube fun and frolics that no one else has.
So when your partner says there is nothing to watch on the box, you could perhaps point out that he/she is wrong and there must be something of interest.
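The arithmetic above is easy to sanity-check. This awk snippet reproduces the figures using the article’s own assumptions (163 ZB of total data, ~21 MB and 4.2 minutes per average clip):

```shell
# Back-of-envelope check using the article's assumed figures
awk 'BEGIN {
  clips = 163e21 / 21e6          # total bytes / bytes per clip
  hours = clips * 4.2 / 60       # minutes of video -> hours
  years = hours / (24 * 365)
  printf "clips: %.1f quadrillion\n", clips / 1e15
  printf "hours: %.0f trillion\n", hours / 1e12
  printf "years: %.0f billion\n", years / 1e9
}'
```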

Another mind-boggling fact I have discovered during my journey along this path is that when we talk of billions and trillions and quintillions, there are some inconsistencies in what the numbers actually mean. The US tends to place a billion at 1000 million, or 1 followed by 9 zeros. The British have a billion as a million million, or 1 followed by 12 zeros, and so on. In my calculations I have used US terminology, as all these zeros were doing my head in. Commonly referred to as ‘short scale’ and ‘long scale’, it leads me to wonder how, without a crystal ball, you would know which scale is being used for any quoted number.

All this leads to the fact that data collected and stored is accelerating at a mind-blowing rate and will continue to do so. It may well be time for your organisation to think about how much data you need to store and for how long. The amount of data that needs to be stored, especially the 80% that is typically unstructured, may be a surprise. A flexible, infinitely expandable storage solution may be just what your business needs.


Bash Variables: Environmental and Otherwise | Linux.com

Bash variables, including those pesky environment variables, have popped up several times in previous articles, and it’s high time you get to know them better and how they can help you.

So, open your terminal window and let’s get started.

Environment Variables

Consider HOME. Apart from the cozy place where you lay down your hat, in Linux it is a variable that contains the path to the current user’s home directory. Try this:

echo $HOME

This will show the path to your home directory, usually /home/<your username>.

As the name indicates, variables can change according to the context. Indeed, each user on a Linux system will have a HOME variable containing a different value. You can also change the value of a variable by hand:

HOME=/home/<your username>/Documents

will make HOME point to your Documents/ folder.

There are three things to notice here:

  1. There are no spaces between the name of the variable and the = or between the = and the value you are putting into the variable. Spaces have their own meaning in the shell and cannot be used any old way you want.
  2. If you want to put a value into a variable or manipulate it in any way, you just have to write the name of the variable. If you want to see or use the contents of a variable, you put a $ in front of it.
  3. Changing HOME is risky! A lot of programs rely on HOME to do stuff and changing it can have unforeseeable consequences. For example, just for laughs, change HOME as shown above and try typing cd and then [Enter]. As we have seen elsewhere in this series, you use cd to change to another directory. Without any parameters, cd takes you to your home directory. If you change the HOME variable, cd will take you to the new directory HOME points to.
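To see point 3 in action without endangering your real session, you can run the experiment in a throwaway child shell (pointing HOME at /tmp purely for illustration):

```shell
# Run in a child shell so the parent's HOME is untouched
bash -c '
  HOME=/tmp   # note: no spaces around the =
  cd          # cd with no arguments goes to $HOME
  pwd         # now prints /tmp
'
```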

Changes to environment variables like the one described in point 3 above are not permanent. If you close your terminal and open it back up, or even open a new tab in your terminal window and move there, echo $HOME will show its original value.

Before we go on to how you make changes permanent, let’s look at another environment variable that it does make sense to change.

PATH

The PATH variable lists directories that contain executable programs. If you ever wondered where your applications go when they are installed and how come the shell seems to magically know which programs it can run without you having to tell it where to look for them, PATH is the reason.

Have a look inside PATH and you will see something like this:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin

Each directory is separated by a colon (:) and if you want to run an application installed in any directory other than the ones listed in PATH, you will have to tell the shell where to find it:

/home/<user name>/bin/my_program.sh

This will run a program called my_program.sh that you have copied into a bin/ directory in your home directory.

This is a common problem: you don’t want to clutter up your system’s bin/ directories, or you don’t want other users running your own personal scripts, but you don’t want to have to type out the complete path every time you need to run a script you use often. The solution is to create your own bin/ directory in your home directory:

mkdir $HOME/bin

And then tell PATH all about it:

PATH=$PATH:$HOME/bin

After that, your /home/<user name>/bin will show up in your PATH variable. But… Wait! We said that the changes you make in a given shell will not last and will lose effect when that shell is closed.

To make changes permanent for your user, instead of running them directly in the shell, put them into a file that gets run every time a shell is started. That file already exists and lives in your home directory. It is called .bashrc and the dot in front of the name makes it a hidden file — a regular ls won’t show it, but ls -a will.

You can open it with a text editor like kate, gedit, nano, or vim (NOT LibreOffice Writer — that’s a word processor. Different beast entirely). You will see that .bashrc is full of shell commands, the purpose of which is to set up the environment for your user.

Scroll to the bottom and add the following on a new, empty line:

export PATH=$PATH:$HOME/bin

Save and close the file. You’ll be seeing what export does presently. In the meantime, to make sure the changes take effect immediately, you need to source .bashrc:

source .bashrc

What source does is execute .bashrc for the current open shell, and all the ones that come after it. The alternative would be to log out and log back in again for the changes to take effect, and who has the time for that?

From now on, your shell will find every program you dump in /home/<user name>/bin without you having to specify the whole path to the file.

DIY Variables

You can, of course, make your own variables. All the ones we have seen have been written with ALL CAPS, but you can call a variable more or less whatever you want.

Creating a new variable is straightforward: just set a value in it:

new_variable="Hello"

And you already know how to recover a value contained within a variable:

echo $new_variable

You often have a program that requires you to set up a variable for things to work properly. The variable may set an option to “on”, or help the program find a library it needs, and so on. When you run a program in Bash, the shell spawns a daughter process. This means it is not exactly the same shell that executes your program, but a related mini-shell that inherits some of the mother’s characteristics. Unfortunately, variables, by default, are not one of them. This is because, by default again, variables are local. This means that, for security reasons, a variable set in one shell cannot be read in another, even if it is a daughter shell.

To see what I mean, set a variable:

robots="R2D2 & C3PO"

… and run:

bash

You just ran a Bash shell program within a Bash shell program.

Now see if you can read the contents of your variable with:

echo $robots

You should draw a blank.

Still inside your bash-within-bash shell, set robots to something different:

robots="These aren't the ones you are looking for"

Check robots’ value:

$ echo $robots
These aren't the ones you are looking for

Exit the bash-within-bash shell:

exit

And re-check the value of robots:

$ echo $robots
R2D2 & C3PO

This is very useful for avoiding all sorts of messed-up configurations, but it also presents a problem: if a program requires you to set up a variable, but the program can't access it because Bash will execute it in a daughter process, what can you do? That is exactly what export is for.

Try the prior experiment again, but, instead of just starting off by setting robots="R2D2 & C3PO", export it at the same time:

export robots="R2D2 & C3PO"

You’ll notice that, when you enter the bash-within-bash shell, robots still retains the same value it had at the outset.
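If you prefer not to type it interactively, the whole experiment can be scripted, with bash -c playing the part of the bash-within-bash shell (a sketch):

```shell
# The daughter shell (bash -c) cannot see a plain variable,
# but it can see an exported one.
robots="R2D2 & C3PO"
bash -c 'echo "unexported: [$robots]"'   # prints: unexported: []
export robots
bash -c 'echo "exported: [$robots]"'     # prints: exported: [R2D2 & C3PO]
```

Single quotes around the inner command matter: they stop the outer shell from expanding $robots before the daughter shell gets a chance to.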

Interesting fact: While the daughter process will “inherit” the value of an exported variable, if the variable is changed within the daughter process, changes will not flow upwards to the mother process. In other words, changing the value of an exported variable in a daughter process does not change the value of the original variable in the mother process.
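A minimal sketch of this one-way inheritance:

```shell
# A daughter process can change its copy of an exported variable,
# but the change never reaches the mother shell.
export robots="R2D2 & C3PO"
bash -c 'robots="changed"; echo "inside: $robots"'   # prints: inside: changed
echo "outside: $robots"                              # prints: outside: R2D2 & C3PO
```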

You can see all exported variables by running

export -p

The variables you create should be at the end of the list. You will also notice some other interesting variables in the list: USER, for example, contains the current user's user name; PWD points to the current directory; and OLDPWD contains the path to the last directory you visited before this one. That's because, if you run:

cd -

You will go back to the last directory you visited and cd gets the information from OLDPWD.
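Here is the round trip as a sketch, using /tmp and /usr as stand-in directories:

```shell
# Sketch: PWD, OLDPWD and cd - in action.
cd /tmp
cd /usr
echo "$PWD"       # prints: /usr
echo "$OLDPWD"    # prints: /tmp
cd - > /dev/null  # back to the previous directory
echo "$PWD"       # prints: /tmp
```

Note that cd - also prints the directory it switches to, which is why the sketch redirects that line to /dev/null.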

You can also see all the environment variables using the env command.

To un-export a variable, use the -n option:

export -n robots
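A sketch of what un-exporting does (export -n is a bash feature, so run this in bash): the variable keeps its value in the current shell, but daughter shells no longer see it:

```shell
# export -n removes the export flag but keeps the variable and its
# value in the current shell (bash-specific).
export robots="R2D2 & C3PO"
export -n robots
bash -c 'echo "child: [$robots]"'   # prints: child: []
echo "local: $robots"               # prints: local: R2D2 & C3PO
```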

Next Time

You have now reached a level in which you are dangerous to yourself and others. It is time you learned how to protect yourself from yourself by making your environment safer and friendlier through the use of aliases, and that is exactly what we’ll be tackling in the next episode. See you then.

How to Install Jetbrains DataGrip on Ubuntu – Linux Hint

DataGrip is a SQL database IDE from JetBrains. It has autocompletion support for the SQL language. It even analyzes your existing databases and helps you write queries faster. DataGrip can also be used to manage your SQL databases graphically. You can export your databases to various formats such as JSON, CSV, and XML. It is very user friendly and easy to use.

In this article, I will show you how to install DataGrip on Ubuntu. The procedure shown here will work on Ubuntu 16.04 LTS and later. I will use Ubuntu 18.04 LTS in this article for demonstration. So, let’s get started.

On Ubuntu 16.04 LTS and later, the latest version of DataGrip is available as a snap package in the official snap repository. So, you can easily install DataGrip on Ubuntu 16.04 LTS and later.

To install DataGrip snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install datagrip --classic

As you can see, DataGrip is being installed.

DataGrip is installed.

Now, you can start DataGrip from the Application Menu of Ubuntu. Search for datagrip in the Application Menu and you should see the DataGrip icon. Just click on it.

As you’re running DataGrip for the first time, you will have to do some initial configuration. From this window, select Do not import settings and then click on OK.

Now, you will see the activation window. DataGrip is not free. To use DataGrip, you will have to buy it from JetBrains. Once you buy it, you will be able to use this window to activate DataGrip.

If you want to try out DataGrip before you buy it, select Evaluate for free and click on Evaluate.

DataGrip is being started.

Now, you will have to customize DataGrip. From here, select a UI theme. You can either use the Darcula dark theme from JetBrains or the Light theme, depending on your preference. Just select the one you like.

If you don't want to customize DataGrip now and would rather leave the defaults, click on Skip Remaining and Set Defaults.

Otherwise, click on Next: Database Options.

Now, select the default SQL dialect. For example, if you mostly use MySQL, then, you should select MySQL. You may also set the default script directory for your chosen database dialect. It’s optional.

Once you’re done, click on Start using DataGrip.

DataGrip should start. You may click on Close to dismiss the Tip of the Day.

This is the main window of DataGrip.

Connecting to a Database:

In this section, I will show you how to connect to a SQL database with DataGrip.

First, from the Database tab, click on the + icon as marked in the screenshot below.

Now, from Data Source, select the database you want to connect to. I will pick MariaDB.

As this is the first time you are using DataGrip with this database (MariaDB in my case), you will have to download the database driver. You can click on Download as marked in the screenshot below to download the database driver.

As you can see, the required database driver files are being downloaded.

Once the driver is downloaded, fill in all the details and click on Test Connection.

If everything is alright, you should see a green Successful message as shown in the screenshot below.

Finally, click on OK.

You should be connected to your desired database.

Creating Tables with DataGrip:

You can create tables in your database graphically using DataGrip. First, right click your database from the list and go to New > Table as marked in the screenshot below.

Now, type in your table name. To add new columns to the table, click on the + icon as marked in the screenshot below.

Now, type in the column name and type, set a default value if your design calls for one, and check the column attributes such as Auto Increment, Not null, Unique, or Primary key, depending on your needs.

If you want to create another column, just click on the + icon again. As you can see, I created the id, firstName, lastName, address, age, phone, and country columns. You can also use the - icon to remove a column, and the Up and Down arrow icons to change a column's position. Once you're satisfied with your table, click on Execute.

Your table should be created.

You can double click on the table to open it in a graphical editor. From here, you can add, modify, delete table rows very easily. This is the topic of the next section of this article.

Working with Tables in DataGrip:

To add a new row, from the table editor, just click on the + icon as marked in the screenshot below.

A new blank row should show up.

Now, click on the columns and type in the values that you want for the new row. Once you're done, click on the DB upload icon as marked in the screenshot below.

As you can see, the changes are saved permanently in the database.

I added another row of dummy data just to demonstrate how delete and modify works.

To delete a row, select any column of the row you want to delete and click on the – icon marked in the screenshot below.

As you can see, the row is now grayed out. To save the changes, click on the DB upload icon as marked in the screenshot below.

As you can see, the row is gone.

To edit any row, just double click on the column of the row that you want to edit and type in the new value.

Finally, click somewhere else and then click on the DB upload icon for the changes to be saved.

Running SQL Statements in DataGrip:

To run SQL statements, just type in the SQL statement, move the cursor to the end of the SQL statement and press <Ctrl> + <Enter>. It will execute and the result will be displayed as you can see in the screenshot below.

So, that’s how you install and use DataGrip on Ubuntu. Thanks for reading this article.

Interfacing with GitHub API using Python 3 – Linux Hint

GitHub as a web application is a huge and complex entity. Think about all the repositories, users, branches, commits, comments, SSH keys and third party apps that are a part of it. Moreover, there are multiple ways of communicating with it. There are desktop apps for GitHub, extensions for Visual Studio Code and Atom Editor, the git CLI, and Android and iOS apps, to name a few.

People at GitHub, and third party developers alike, can't possibly manage all this complexity without a common interface. This common interface is what we call the GitHub API. Every GitHub utility, like the CLI or the web UI, uses this one common interface to manage resources (resources being entities like repositories, SSH keys, etc).

In this tutorial we will learn a few basics of how to interface with an API, using GitHub API v3 and Python 3. The latest v4 of the GitHub API requires you to learn about GraphQL, which makes for a steeper learning curve. So I will stick to version 3, which is still active and quite popular.

Web APIs are what enable you to use all the services offered by a web app, like GitHub, programmatically, using a language of your choice. We are going to use Python here. Technically, you can do everything you do on GitHub using the API, but we will restrict ourselves to reading the publicly accessible information.

Your Python program will be talking to an API the same way your browser talks to a website. That is to say, mostly via HTTPS requests. These requests contain different 'parts': the method of the request (GET, POST, PUT or DELETE), the URL itself, a query string, HTTP headers, and a body or payload. Most of these are optional; we will, however, need to provide a request method and the URL to which we are making the request.
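As a rough sketch, those parts map directly onto the flags of a curl invocation. The endpoint below is a real GitHub v3 URL and the query string is just an example; to keep the sketch self-contained, we only print the finished command instead of sending it:

```shell
# Assemble the parts of an HTTPS API request. Printing the command
# instead of running it keeps this sketch network-free.
method="GET"                                        # request method
url="https://api.github.com/users/octocat/repos"    # the resource URL
query="per_page=5"                                  # query string
header="Accept: application/vnd.github.v3+json"     # an HTTP header
echo "curl -s -X $method -H '$header' '$url?$query'"
```

Running the printed command on a machine with network access returns the requested resource as JSON, which is exactly what our Python scripts will fetch and parse later.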

What these are and how they are represented in an HTTPS request is something we will see gradually as we start writing Python scripts to interact with GitHub.

Container: Docker Compose on Ubuntu 16.04

docker compose logo

What is Docker Compose

Docker Compose is a tool for running multi-container Docker applications. To configure an application's services with Compose, we use a configuration file; then, with a single command, we can create and start all of the services specified in that configuration.

Docker Compose is useful for many different projects like:

  • Development: with the Compose command line tools we create (and interact with) an isolated environment which will host the application being developed.
    By using the Compose file, developers document and configure all of the application’s service dependencies.
  • Automated testing: this use case requires an environment for running tests in. Compose provides a convenient way to manage isolated testing environments for a test suite. The full environment is defined in the Compose file.

Docker Compose was built on the source code of Fig, a community project that is no longer maintained.

In this tutorial we will see how to install Docker Compose on an Ubuntu 16.04 machine.

Install Docker

We need Docker in order to install Docker Compose. First, add the public key for the official Docker repository:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Next, add the Docker repository to apt sources list:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the packages database and install Docker with apt:

$ sudo apt-get update
$ sudo apt install docker-ce

At the end of the installation process, the Docker daemon should be started and enabled to load at boot time. We can check its status with the following command:

$ sudo systemctl status docker
———————————

● docker.service – Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running)

Install Docker Compose

At this point it is possible to install Docker Compose. Download the current release by executing the following command:

# curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

Make the downloaded binary executable:

# chmod +x /usr/local/bin/docker-compose

Check the Docker Compose version:

$ docker-compose -v

The output should be something like this:

docker-compose version 1.14.0, build c7bdf9e

Testing Docker Compose

The Docker Hub includes a Hello World image for demonstration purposes, illustrating the configuration required to run a container with Docker Compose.

Create a new directory and move into it:

$ mkdir hello-world
$ cd hello-world

Create a new YAML file:

$ $EDITOR docker-compose.yml

In this file paste the following content:

unixmen-compose-test:
  image: hello-world

Note: the first line is used as part of the container name.

Save and exit.
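For reference, here is a sketch of the same one-service file in the versioned Compose format, which this release also supports; the service definition simply moves under a services: key (the version-2 syntax shown here is one option among several):

```yaml
version: "2"
services:
  unixmen-compose-test:
    image: hello-world
```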

Run the container

Next, execute the following command in the hello-world directory:

$ sudo docker-compose up

If everything is correct, this should be the output shown by Compose:

Pulling unixmen-compose-test (hello-world:latest)…
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
Status: Downloaded newer image for hello-world:latest
Creating helloworld_unixmen-compose-test_1 …
Creating helloworld_unixmen-compose-test_1 … done
Attaching to helloworld_unixmen-compose-test_1
unixmen-compose-test_1 |
unixmen-compose-test_1 | Hello from Docker!
unixmen-compose-test_1 | This message shows that your installation appears to be working correctly.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To generate this message, Docker took the following steps:
unixmen-compose-test_1 | 1. The Docker client contacted the Docker daemon.
unixmen-compose-test_1 | 2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
unixmen-compose-test_1 | 3. The Docker daemon created a new container from that image which runs the
unixmen-compose-test_1 | executable that produces the output you are currently reading.
unixmen-compose-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
unixmen-compose-test_1 | to your terminal.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To try something more ambitious, you can run an Ubuntu container with:
unixmen-compose-test_1 | $ docker run -it ubuntu bash
unixmen-compose-test_1 |
unixmen-compose-test_1 | Share images, automate workflows, and more with a free Docker ID:
unixmen-compose-test_1 | https://cloud.docker.com/
unixmen-compose-test_1 |
unixmen-compose-test_1 | For more examples and ideas, visit:
unixmen-compose-test_1 | https://docs.docker.com/engine/userguide/
unixmen-compose-test_1 |
helloworld_unixmen-compose-test_1 exited with code 0

Docker containers only run as long as the command is active, so the container will stop when the test finishes running.

Conclusion

This concludes the tutorial about the installation of Docker Compose on an Ubuntu 16.04 machine. We have also seen how to create a simple project through the Compose file in YAML format.
