How to Install Bacula Systems Enterprise – Linux Hint

Bacula Enterprise is an amazing backup solution for your data. It is easy to install in a virtual machine or on a bare-metal server. Bacula Enterprise also has an easy-to-use, web-based management panel from which you can configure, run, and monitor backups. In this article, I will show you how to install Bacula Enterprise on your computer/server. So, let’s get started.

Downloading Bacula Enterprise:

Bacula Enterprise ISO image can be downloaded from the official website of Bacula Systems. To download Bacula Enterprise ISO image, visit the official website of Bacula Systems at https://www.baculasystems.com/try and click on Download Bacula Enterprise Backup Trial Now.

Now, fill in the details and click on Download Trial.

Now, Bacula Systems will email you a link from which you can download the Bacula Enterprise ISO installer image. Open your email and click on the download link. Then, click on the Download ISO button.

Now, click on the ISO image link as marked in the screenshot below.

Your browser should start downloading the Bacula Enterprise ISO installer image.

Making a Bootable USB of Bacula Enterprise:

Once you have the Bacula Enterprise ISO image downloaded, you can use Rufus to make a bootable USB drive from it. Once you have the Bacula Enterprise bootable USB installer, you can use it to install Bacula Enterprise on your computer/server.

You can download Rufus from the official website of Rufus at https://rufus.ie

If you want to install Bacula Enterprise as a VMware/VirtualBox virtual machine, then you can use the ISO image directly. You don’t have to make a bootable USB thumb drive of Bacula Enterprise.

Installing Bacula Enterprise:

Once you boot Bacula Enterprise from the ISO installer image or the bootable USB thumb drive, you should see the following GRUB menu. Select Install on Virtual Machine if you’ve booted Bacula Enterprise installer in a virtual machine. Otherwise, select Install on Physical Hardware. Then, press <Enter>.

Bacula Enterprise is loading.

Now, select OK and press <Enter>.

Press <Enter> to continue.

Now, you have to set your keyboard layout by typing in its keymap code. For example, the keymap code is us for the United States keyboard layout, uk for the United Kingdom, and so on.

NOTE: For more keymap codes, visit https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-options and scroll down to the keyboard section.

Now, type in the timezone keyword and press <Enter>. For example, if you’re in the US Eastern timezone, then the timezone keyword would be US/Eastern.

You can find a list of supported timezone keywords at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones

Now, all the available storage devices should be listed. I have only one storage device, sda, which is 300 GB in size. Just type in the name of the storage device where you want to install Bacula Enterprise and press <Enter>.

Now, type in the amount of disk space you want to allocate for the root (/) directory in GB and press <Enter>. You should allocate at least 16 GB of disk space here.

Now, type in your swap size in GB and press <Enter>. It should be twice the amount of RAM/memory you have. For example, if you have 4 GB of RAM, type in 8.

Now, type in the amount of disk space you want to allocate for the /var directory and press <Enter>. Allocate at least 4 GB of disk space for the /var directory.

Now, type in the amount of disk space you want to allocate for the /opt directory and press <Enter>. Allocate at least 4 GB of disk space for the /opt directory.

Now, type in the amount of disk space you want to allocate for the /tmp directory and press <Enter>. Allocate at least 4 GB of disk space for the /tmp directory.

Now, type in the amount of disk space you want to allocate for the /catalog directory and press <Enter>. Allocate at least 8 GB of disk space for the /catalog directory.

Now, type in the amount of disk space you want to allocate for the /opt/bacula/working directory and press <Enter>. Allocate at least 8 GB of disk space for the /opt/bacula/working directory.

As you can see, about 184 GB of disk space will be allocated for the OS and 116 GB of disk space is still left for data. Press <Enter> to confirm.

Bacula Enterprise installation should start.

All the required packages are being installed.

Bacula Enterprise is being installed.

Now, type in a password for the root user and press <Enter>.

Now, type in a password for the bacula user and press <Enter>.

Now, type in the hostname for your Bacula Enterprise server and press <Enter>.

Now, you have to configure a network interface. To do that, press y and then press <Enter>.

If you want to use DHCP to configure the network interface, then press y and then press <Enter>. If you want to configure the network interface manually, then press n and then press <Enter>.

If this network interface is the default route, then press y and then press <Enter> to continue.

If you’ve decided to manually configure the network, then you have to type in an IP address for the network at this point and press <Enter>.

Then, type in the netmask and press <Enter>.

Now, type in the default gateway and press <Enter>.

Now, press y and then press <Enter> to confirm the details that you’ve provided.

Now, type in a domain name for your Bacula Enterprise server and press <Enter>.

Now, type in the IP address of your primary DNS server and press <Enter>.

Now, type in the IP address of your secondary DNS server and press <Enter>.

Now, press y and then press <Enter> to confirm.

If you want to configure NTP, press y. Otherwise, press n. Then, press <Enter>. NTP is optional. I am not configuring NTP in this article.

If you want to configure email, press y. Otherwise, press n. Then, press <Enter>. Email configuration is optional. I am not configuring email in this article.

Now, type in the amount of disk space you want to allocate for the Bacula Enterprise file storage and press <Enter>.

If you don’t want to use Virtual Tape Library, then press n. Otherwise press y. Then press <Enter>.

If you want to enable DeDuplication, then press y and then press <Enter>.

Now, type in the amount of disk space you want to allocate for dedupe storage and press <Enter>.

Now, type in the number of deduplication devices you want and press <Enter>. The default is 4.

If you don’t want to set any default storage, then press n. Otherwise, press y. Then press <Enter>.

Normally, you don’t want any demo configuration in a production server. So, press n and then press <Enter>.

Now, type in the number of days Bacula Enterprise will keep backups (retention period) for restore. The default is 90 days. At most, you can keep backups for 365 days.

Now, Bacula Enterprise will install additional software packages depending on how you configured it.

Once Bacula Enterprise is installed, you should be booted into the following GRUB menu. Just press <Enter>.

You should be booted into Bacula Enterprise and you should be able to log into the system. The management IP address is shown on this screen. You can access it from any web browser (Bacula prefers Firefox) to manage your Bacula Enterprise server.

Now, visit the management IP address (in my case https://192.168.21.5) from any web browser and you should see the BWeb dashboard. From here, you can configure Bacula Enterprise and back up your important data.

So, that’s how you install Bacula Enterprise on your computer/server or a Virtual machine. Thanks for reading this article.

Source

How to Install PyCharm on Ubuntu 18.04 and CentOS 7

How to Install PyCharm on Ubuntu 18.04

Install PyCharm on Ubuntu

PyCharm is an intelligent, fully featured IDE for Python developed by JetBrains. It also provides support for JavaScript, TypeScript, CSS and more, and you can extend its features by using plugins. With PyCharm plugins you can also get support for frameworks like Django and Flask, and you can work with HTML, SQL, JavaScript, CSS and other languages as well. In this tutorial, you are going to learn how to install PyCharm on Ubuntu 18.04.

Prerequisites

Before you start to install PyCharm on Ubuntu 18.04, you must have a non-root user account on your system with sudo privileges.

Install Snappy Package Manager

Snappy provides better package management support for Ubuntu 18.04. It’s quick and easy to use. To install the Snappy package manager, type the following command. If it’s already installed on the system, skip to the next step.

NOTE: Ubuntu 18.04 may already have the Snappy package manager installed.

sudo apt install snapd snapd-xdg-open

Install PyCharm

Now, to download and install the PyCharm snap package, run the following command. It will take some time to download and install the package.

sudo snap install pycharm-community --classic

After successfully downloading and installing the package you will get the following output.

pycharm-community 2018.2.4 from 'jetbrains' installed
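If you prefer the Professional edition instead of Community, a snap package exists for it as well (it is not covered by this tutorial and requires a JetBrains license):

sudo snap install pycharm-professional --classic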

Start PyCharm

After successful installation, run the following command to start PyCharm from the terminal.

pycharm-community

You can also start PyCharm from the Activities overview.


You will get the following output after accepting the license and setting up the initial configuration.

PyCharm Launcher Window

Conclusion

You have successfully learned how to install PyCharm on Ubuntu 18.04. If you have any queries regarding this, then please don’t forget to comment below.

—————————————————————————————–

How to Install PyCharm on CentOS 7

Install PyCharm on CentOS 7

PyCharm is an intelligent, fully featured IDE for Python developed by JetBrains. It also provides support for JavaScript, TypeScript, CSS and more, and you can extend its features by using plugins. With PyCharm plugins you can also get support for frameworks like Django and Flask, and you can work with HTML, SQL, JavaScript, CSS and other languages as well. In this tutorial, you are going to learn how to install PyCharm on CentOS 7.

Prerequisites

Before you start to install PyCharm on CentOS 7, you must have a non-root user account on your system with sudo privileges.

Install PyCharm

First, we will download PyCharm from the official PyCharm download page using the wget command. At the time of writing this tutorial, the latest available version is 2018.3.2. You can check for a newer version and install that instead if you want.

sudo wget https://download-cf.jetbrains.com/python/pycharm-professional-2018.3.2.tar.gz

Now, extract the downloaded package using the following command.

tar -xvf pycharm-professional-2018.3.2.tar.gz

Navigate inside the extracted directory.

cd pycharm-professional-2018.3.2

Now, to run PyCharm like a normal program, create a symbolic link to the launcher script using the following command (run it from inside the extracted directory).

sudo ln -s "$(pwd)/bin/pycharm.sh" /usr/bin/pycharm
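Optionally, you can also add a desktop entry so that PyCharm shows up in your application menu. This step is not part of the original instructions, and the file name and contents below are just one reasonable choice:

sudo tee /usr/share/applications/pycharm.desktop > /dev/null << 'EOF'
[Desktop Entry]
Name=PyCharm
Comment=Python IDE by JetBrains
Exec=/usr/bin/pycharm
Type=Application
Terminal=false
Categories=Development;IDE;
EOF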

Start PyCharm

You can launch PyCharm using the following command.

pycharm

On starting PyCharm for the first time, you will be asked to import settings. If you have settings from an older version, you can import them; otherwise, select “Do not import settings”.

PyCharm Import Settings

You will get the following output after accepting the license and setting up the initial configuration.

PyCharm Welcome Screen

Conclusion

You have successfully learned how to install PyCharm on CentOS 7. If you have any queries regarding this, then please don’t forget to comment below.

Source

Back to Basics: Sort and Uniq

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz

sort can operate on STDIN redirection, on input from a pipe, or, in the case of a file, you can just specify the file on the command line. So, the following three commands all accomplish the same thing:


cat test | sort
sort < test
sort test

And the output that you get from all of these commands is:


Bar
Baz
Foo

Sorting Numerical Output

Now, let’s complicate the file by adding three more lines:


Foo
Bar
Baz
1. ZZZ
2. YYY
11. XXX

If you run one of the above sort commands again, this time, you’ll see different output:


11. XXX
1. ZZZ
2. YYY
Bar
Baz
Foo

This is likely not the output you wanted, but it points out an important fact about sort. By default, it sorts alphabetically, not numerically. This means that a line that starts with “11.” is sorted above a line that starts with “1.”, and all of the lines that start with numbers are sorted above lines that start with letters.

To sort numerically, pass sort the -n option:


sort -n test

Bar
Baz
Foo
1. ZZZ
2. YYY
11. XXX

Find the Largest Directories on a Filesystem

Numerical sorting comes in handy for a lot of command-line output—in particular, when your command contains a tally of some kind, and you want to see the largest or smallest in the tally. For instance, if you want to find out what files are using the most space in a particular directory and you want to dig down recursively, you would run a command like this:


du -ckx

This command dives recursively into the current directory and doesn’t traverse any other mountpoints inside that directory. It tallies the file sizes and then outputs each directory in the order it found them, preceded by the size of the files underneath it in kilobytes. Of course, if you’re running such a command, it’s probably because you want to know which directory is using the most space, and this is where sort comes in:


du -ckx | sort -n

Now you’ll get a list of all of the directories underneath the current directory, but this time sorted by file size. If you want to get even fancier, pipe its output to the tail command to see the top ten. On the other hand, if you wanted the largest directories to be at the top of the output, not the bottom, you would add the -r option, which tells sort to reverse the order. So to get the top ten (well, top eight—the first line is the total, and the next line is the size of the current directory):


du -ckx | sort -rn | head
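The tail variant mentioned earlier, which leaves the largest entries at the bottom of the output instead of reversing the sort, would be:

du -ckx | sort -n | tail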

This works, but often people using the du command want to see sizes in more readable output than kilobytes. The du command offers the -h argument that provides “human-readable” output. So, you’ll see output like 9.6G instead of 10024764 with the -k option. When you pipe that human-readable output to sort though, you won’t get the results you expect by default, as it will sort 9.6G above 9.6K, which would be above 9.6M.

The sort command has a -h option of its own, and it acts like -n, but it’s able to parse standard human-readable numbers and sort them accordingly. So, to see the top ten largest directories in your current directory with human-readable output, you would type this:


du -chx | sort -rh | head

Removing Duplicates

The sort command isn’t limited to sorting one file. You might pipe multiple files into it or list multiple files as arguments on the command line, and it will combine them all and sort them. Unfortunately though, if those files contain some of the same information, you will end up with duplicates in the sorted output.

To remove duplicates, you need the uniq command, which by default removes any duplicate lines that are adjacent to each other from its input and outputs the results. So, let’s say you had two files that were different lists of names:


cat namelist1.txt
Jones, Bob
Smith, Mary
Babbage, Walter

cat namelist2.txt
Jones, Bob
Jones, Shawn
Smith, Cathy
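If you simply sort the two files together without uniq, the shared entry shows up twice:

sort namelist1.txt namelist2.txt
Babbage, Walter
Jones, Bob
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary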

You could remove the duplicates by piping to uniq:


sort namelist1.txt namelist2.txt | uniq
Babbage, Walter
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary

The uniq command has more tricks up its sleeve than this. It also can output only the duplicated lines, so you can find duplicates in a set of files quickly by adding the -d option:


sort namelist1.txt namelist2.txt | uniq -d
Jones, Bob

You even can have uniq provide a tally of how many times it has found each entry with the -c option:


sort namelist1.txt namelist2.txt | uniq -c
1 Babbage, Walter
2 Jones, Bob
1 Jones, Shawn
1 Smith, Cathy
1 Smith, Mary

As you can see, “Jones, Bob” occurred the most times, but if you had a lot of lines, this sort of tally might be less useful for you, as you’d like the most duplicates to bubble up to the top. Fortunately, you have the sort command:


sort namelist1.txt namelist2.txt | uniq -c | sort -nr
2 Jones, Bob
1 Smith, Mary
1 Smith, Cathy
1 Jones, Shawn
1 Babbage, Walter

Conclusion

I hope these cases of using sort and uniq with realistic examples show you how powerful these simple command-line tools are. Half the secret with these foundational command-line tools is to discover (and remember) they exist so that they’ll be at your command the next time you run into a problem they can solve.

Source

Drugs on the command line


There’s a lot of raw material on the Web for data auditors to tinker with, but I’ve found only one website that advertises Datasets for data cleaning practice. It’s a 2018 blog post by computational linguist Rachael Tatman. Among the offerings is a link to the National Drug Code Directory website of the US Food and Drug Administration, and one of the FDA downloadables there contains a table with 123,841 product records (2018-12-28 version).

The product table is plain text and tab-separated, but it’s in windows-1252 encoding with a Windows carriage return at the end of each line. (Sigh.) I deleted the carriage returns and converted the table to UTF-8 as the file “prods0”.

Tatman writes: “Issue: Non-trivial duplication (which drugs are different names for the same things?)”

Answering Tatman’s question isn’t straightforward, because the product table contains partially duplicated records. Although each record has a unique product ID, if that ID is ignored there’s a set of more than 1100 duplicates:
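One way to get that count (a reconstruction; the exact command isn’t shown here) is to strip the ID field and ask uniq for the lines that occur more than once:

tail -n +2 prods0 | cut -f1 --complement | sort | uniq -d | wc -l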


Duplicate pair example from “prods0”, with PRODUCTID in red:

0009-0039_5e394712-e775-435b-a4e0-32e1d9647ff5 0009-0039 HUMAN PRESCRIPTION DRUG SOLU-MEDROL methylprednisolone sodium succinate INJECTION, POWDER, FOR SOLUTION INTRAMUSCULAR; INTRAVENOUS 19590402 NDA NDA011856 Pharmacia and Upjohn Company LLC METHYLPREDNISOLONE SODIUM SUCCINATE 40 mg/mL Corticosteroid [EPC],Corticosteroid Hormone Receptor Agonists [MoA] N 20191231

0009-0039_95289567-4341-4b6c-bc3c-aa13036bc9b4 0009-0039 HUMAN PRESCRIPTION DRUG SOLU-MEDROL methylprednisolone sodium succinate INJECTION, POWDER, FOR SOLUTION INTRAMUSCULAR; INTRAVENOUS 19590402 NDA NDA011856 Pharmacia and Upjohn Company LLC METHYLPREDNISOLONE SODIUM SUCCINATE 40 mg/mL Corticosteroid [EPC],Corticosteroid Hormone Receptor Agonists [MoA] N 20191231

I cut away the unique ID and sorted and uniquified the records to build the file “prods1”, retaining the header line:

cat <(cut -f1 --complement prods0 | head -n 1) <(tail -n +2 prods0 | cut -f1 --complement | sort | uniq) > prods1

Next, I focused in “prods1” on the fields SUBSTANCENAME (field 13), ACTIVE_NUMERATOR_STRENGTH (14) and ACTIVE_INGRED_UNIT (15). The FDA’s explainer page describes these as follows:

SubstanceName
This is the active ingredient list. Each ingredient name is the preferred term of the UNII code submitted.

StrengthNumber [older field name?]
These are the strength values (to be used with units below) of each active ingredient, listed in the same order as the SubstanceName field above.

StrengthUnit [older field name?]
These are the units to be used with the strength values above, listed in the same order as the SubstanceName and SubstanceNumber.

If these 3 fields are the same, then the product is the same so far as the active ingredients are concerned. To find these partial duplicates I used the two-pass method described in a previous BASHing data post:

awk -F"\t" 'FNR==NR {a[$13,$14,$15]++; next} $13 != "" && $14 != "" && $15 != "" && a[$13,$14,$15]>1' prods1 prods1 | wc -l

[screenshot: the count of records sharing the same three ingredient fields]

Wow! That’s a lot of “same product” out of 123,205 unique product records. To investigate further, I added the fields STARTMARKETINGDATE (field 8 in prods1), PROPRIETARYNAME (3), PROPRIETARYNAMESUFFIX (4) and LABELERNAME (12) and printed the lot to a new file, “prods2” (no header this time).

awk -F"\t" 'FNR==NR {a[$13,$14,$15]++; next} $13 != "" && $14 != "" && $15 != "" && a[$13,$14,$15]>1 {print $8 FS $3 FS $4 FS $12 FS $13 FS $14 FS $15}' prods1 prods1 > prods2

—–

StartMarketingDate
This is the date that the labeler indicates was the start of its marketing of the drug product.

ProprietaryName
Also known as the trade name. It is the name of the product chosen by the labeler.

ProprietaryNameSuffix
A suffix to the proprietary name, a value here should be appended to the ProprietaryName field to obtain the complete name of the product. This suffix is often used to distinguish characteristics of a product such as extended release (“XR”) or sleep aid (“PM”). Although many companies follow certain naming conventions for suffices, there is no recognized standard.

LabelerName
Name of Company corresponding to the labeler code segment of the ProductNDC.

That still apparently doesn’t capture all the variation in FDA’s database, because “prods2” contains a lot of exact duplicates


and there don’t seem to be any differences between the duplicated records in the original downloaded table (“prods0”), apart from the FDA product code and the unique ID based on that code:

Example from “prods0”:

17518-080_7aa3171b-36c0-48d6-e053-2991aa0a6aec 17518-080 HUMAN OTC DRUG 3M SoluPrep chlorhexidine gluconate and isopropyl alcohol SOLUTION TOPICAL 20181008 NDA NDA208288 3M Company CHLORHEXIDINE GLUCONATE; ISOPROPYL ALCOHOL 20; .7 mg/mL; mL/mL N 20191231

17518-081_7aa3171b-36c0-48d6-e053-2991aa0a6aec 17518-081 HUMAN OTC DRUG 3M SoluPrep chlorhexidine gluconate and isopropyl alcohol SOLUTION TOPICAL 20181008 NDA NDA208288 3M Company CHLORHEXIDINE GLUCONATE; ISOPROPYL ALCOHOL 20; .7 mg/mL; mL/mL N 20191231

Once again I sorted and uniquified, converting “prods2” to “prods3”, which has 92,452 records (a sketch of this step appears after the example below). One source of duplication in “prods3” is in the proprietary name suffix field, because the same basic product can be sold with slightly different formulations not affecting the active ingredients. Here’s an example (from “prods0”) — a dental fluoride paste that comes in 3 different flavours:

65222-401_6155acd9-8ec2-a87d-e053-2991aa0a7b43 65222-401 HUMAN PRESCRIPTION DRUG Nupro Fluorides NaF Oral Solution Mint Sodium Fluoride GEL DENTAL 19000101 UNAPPROVED DRUG OTHER Dentsply LLC. Professional Division Trading as “DENTSPLY Professional” SODIUM FLUORIDE 20 mg/g N 20191231

65222-411_6155acd9-8ec2-a87d-e053-2991aa0a7b43 65222-411 HUMAN PRESCRIPTION DRUG Nupro Fluorides NaF Oral Solution Mandarin Orange Sodium Fluoride GEL DENTAL 19000101 UNAPPROVED DRUG OTHER Dentsply LLC. Professional Division Trading as “DENTSPLY Professional” SODIUM FLUORIDE 20 mg/g N 20191231

65222-421_6155acd9-8ec2-a87d-e053-2991aa0a7b43 65222-421 HUMAN PRESCRIPTION DRUG Nupro Fluorides NaF Oral Solution Apple Cinnamon Sodium Fluoride GEL DENTAL 19000101 UNAPPROVED DRUG OTHER Dentsply LLC. Professional Division Trading as “DENTSPLY Professional” SODIUM FLUORIDE 20 mg/g N 20191231
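The sort-and-uniquify step itself is straightforward, since “prods2” has no header line; something along these lines would do it (a sketch, since the original command isn’t shown):

sort prods2 | uniq > prods3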

A larger source of duplication in “prods3” is the marketing date. Walgreens, for instance, has 2 different registrations for an allergy medicine, differing only in start and end marketing dates (example from “prods0”):

0363-0211_997032ac-0110-4004-9697-d82146ba7128 0363-0211 HUMAN OTC DRUG 24 Hour Allergy Cetirizine HCl CAPSULE ORAL 20130301 NDA NDA022429 Walgreens CETIRIZINE HYDROCHLORIDE 10 mg/1 N 20181231

0363-1219_f3168f2c-27a7-4dd7-9770-e91443a580f1 0363-1219 HUMAN OTC DRUG 24 Hour Allergy Cetirizine HCl CAPSULE ORAL 20180914 NDA NDA022429 Walgreens CETIRIZINE HYDROCHLORIDE 10 mg/1 N 20191231

I generated “prods4” from “prods3” by cutting out marketing date and sorting and uniquifying again. That reduced the set of “basically the same product” records to 84,163. Here are the top 10 formulations:

cut -f4-6 prods4 | sort | uniq -c | sort -nr | head

[screenshot: the 10 most frequent substance / strength / unit combinations, headed by 25 mg DIPHENHYDRAMINE HYDROCHLORIDE]

Those 25 mg lots of diphenhydramine hydrochloride (an antihistamine) were sold by a nominal 188 labelling entities, but again there’s duplication. The FDA lists multiple strings for what’s presumably the same company

Allergy relief     [no suffix]     Topco Associates LLC
Allergy relief     [no suffix]     Topco Associates, LLC
Allergy relief     [no suffix]     TopCo Associates LLC

the same product

Sleep Aid     Nighttime     CVS Pharmacy
Sleep- Aid     Nighttime     CVS Pharmacy
Sleep-Aid     Nighttime     CVS Pharmacy

or both

sleep aid     nighttime     Target Corporation
Sleep Aid     NightTime     TARGET Corporation

Summing up, Tatman’s question about this dataset, namely “Which drugs are different names for the same things?”, has several answers depending on how you define “things”. But even after you’ve decided what you’re looking for, the surprising messiness of the FDA’s data means you have a lot of data cleaning to do before you can start looking. The FDA’s product table is indeed a good dataset for data cleaning practice!


Some of the ingredient fields in the product table contain semicolon-and-space-separated strings, like

ACETALDEHYDE; ARSENIC TRIOXIDE; BALSAM PERU; OYSTER SHELL CALCIUM CARBONATE, CRUDE; PHENOL; CONIUM MACULATUM FLOWERING TOP; COUMARIN; SAFFRON; HISTAMINE DIHYDROCHLORIDE; LACHESIS MUTA VENOM; LYCOPODIUM CLAVATUM SPORE; PHOSPHORUS; SEPIA OFFICINALIS JUICE

Could there be additional duplication in these entries, with the same items listed in different orders in different records? Checking for different orders of items within a single field is an interesting exercise in data auditing: see the next BASHing data post.

Source

How to scan for IP addresses on your network with Linux

Are you having trouble remembering what IP addresses are in use on your network? Jack Wallen shows you how to discover those addresses with two simple commands.

How many times have you tried to configure a static IP address for a machine on your network, only to realize you had no idea what addresses were already taken? If you happen to work with a desktop machine, you could always install a tool like Wireshark to find out what addresses were in use. But what if you’re on a GUI-less server? You certainly won’t rely on a graphical-based tool for scanning IP addresses. Fortunately, there are some very simple-to-use command line tools that can handle this task.

I’m going to show you how to scan your Local Area Network (LAN) for IP addresses in use with two different tools (one of which will be installed on your server by default). I’ll demonstrate on Ubuntu Server 18.04.

Let’s get started.

The arp command

The first tool we’ll use for the task is the built-in arp command. Most IT admins are familiar with arp, as it is used on almost every platform. If you’ve never used arp (which stands for Address Resolution Protocol), the command is used to manipulate (or display) the kernel’s IPv4 network neighbor cache. If you issue arp with no mode specifier or options, it will print out the current content of the ARP table. That’s not what we’re going to do. Instead, we’ll issue the command like so:

arp -a

The -a option uses an alternate BSD-style output and prints all known IP addresses found on your LAN. The output of the command will display IP addresses as well as the associated Ethernet device (Figure A).

Figure A: I have a lot of virtual machines on my LAN.

You now have a listing of each IP address in use on your LAN. The only caveat is that, unless you know the MAC address of every device on your network, you won’t have a clue as to which machine each IP address is assigned. Even without knowing which machine is associated with which address, you at least know which addresses are in use.

Nmap

Next, we use a command that offers more options. Said command is nmap. You won’t find nmap installed on your Linux machine by default, so we must add it to the system. Open a terminal window (or log into your GUI-less server) and issue the command:

sudo apt-get install nmap -y

Once the installation completes, you are ready to scan your LAN with nmap. To find out what addresses are in use, issue the command:

nmap -sP 192.168.1.0/24

Note: You will need to alter the IP address scheme to match yours.
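If you’re not sure what your address scheme is, one quick way to check (not part of the original steps) is to look at your machine’s own address:

ip -4 addr show

An inet line such as inet 192.168.1.20/24 means the 192.168.1.0/24 range used above is the one to scan.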

The output of the nmap command (Figure B) will show you each address found on your LAN.

Figure B: Nmap is now giving us slightly more information.

Let’s make nmap more useful. Because it offers a bit more flexibility, we can also discover what operating system is associated with an IP address. To do this, we’ll use the options -sT (TCP connect scan) and -O (operating system discovery). The command for this is:

sudo nmap -sT -O 192.168.1.0/24

Depending on the size of your network, this command can take some time. And if your network is large, consider sending the output of the command to a file like so:

sudo nmap -sT -O 192.168.1.0/24 > nmap_output

You can then view the file with a text editor to find out what operating system is attached to an IP address (Figure C).

Figure C: Operating systems are associated with IP addresses.
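If you’d rather not scroll through the whole file, you can also pull out just the host and OS lines with grep (the patterns below assume nmap’s usual report wording):

grep -E "Nmap scan report|OS details" nmap_output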

With the help of these two simple commands, you can locate IP addresses on your network that are in use. Now, when you’re assigning a static IP address, you won’t accidentally assign one already in use. We all know what kind of headaches that can cause.

Source


Wipro Joins Linux Foundation Networking (LFN) As Gold Member

 

Leading global IT services provider to help accelerate open technology development and industry adoption

SAN FRANCISCO – January 8, 2019 – The LF Networking Fund (LFN), which facilitates collaboration and operational excellence across open networking projects, continues its membership growth and deepens its global presence with the addition of new Gold member Wipro Limited, a leading global information technology, consulting and business process services company. Wipro Limited joins LFN to support the development of next-generation Open Network Automation Platform (ONAP) technologies and use cases for current and future networks.

Wipro Limited joins six other LFN Gold members, including Accenture, Aptira, Inocybe Technologies, Lumina Networks, Microsoft and Telstra. A full list of LFN members by category is available at https://www.lfnetworking.org/members/.

K.R. Sanjiv, Chief Technology Officer, Wipro Limited, said, “Today, open source has become the preferred computing model for communications, artificial intelligence and analytics-driven technology solutions to facilitate innovation, cost efficiency and greater industry collaboration. Given Wipro’s focus on and investments in 5G, analytics and the Wipro HOLMES™ artificial intelligence platform, we believe ONAP is the right platform for us to leverage, for network management, automation and orchestration. We are committed to bringing best-of-breed open source-based solutions to the market and are excited to be a part of ONAP and LF Networking.”

Wipro is committed to collaborating with partners across the ecosystem to enable technologies that help organizations transform their digital networks. This collaboration will allow Wipro to leverage open source-based solutions, frameworks and accelerators to help enterprises develop open source strategies and enable their application modernization, cloud and digital transformation journeys.

LFN supports the momentum of open source networking, integrating governance of participating projects in order to enhance operational excellence, simplify member engagement, and increase collaboration.

“Wipro caps off a great first year for LFN and the propagation of open source networking technologies,” said Arpit Joshipura, general manager of Networking and Orchestration, The Linux Foundation. “The company’s global expertise will be a great asset as LFN enters its second year and continues to build a strong international community to accelerate continued deployment and global adoption of open source networking technologies by end users and commercial ecosystems.”

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

Source

Migrating to Linux: Network and System Settings | Linux.com


This series provides an overview of fundamentals to help you make the move to Linux; here we cover some common settings you’ll use on your desktop Linux system.

Learn how to transition to Linux in this tutorial series from our archives.

In this series, we provide an overview of fundamentals to help you successfully make the transition to Linux from another operating system. If you missed the earlier articles in the series, you can find them here:

Part 1 – An Introduction

Part 2 – Disks, Files, and Filesystems

Part 3 – Graphical Environments

Part 4 – The Command Line

Part 5 – Using sudo

Part 6 – Installing Software

Linux gives you a lot of control over network and system settings. On your desktop, Linux lets you tweak just about anything on the system. Most of these settings are exposed in plain text files under the /etc directory. Here I describe some of the most common settings you’ll use on your desktop Linux system.

A lot of settings can be found in the Settings program, and the available options will vary by Linux distribution. Usually, you can change the background, tweak sound volume, connect to printers, set up displays, and more. While I won’t talk about all of the settings here, you can certainly explore what’s in there.

Connect to the Internet

Connecting to the Internet in Linux is often fairly straightforward. If you are wired through an Ethernet cable, Linux will usually get an IP address and connect automatically when the cable is plugged in or at startup if the cable is already connected.

If you are using wireless, in most distributions there is a menu, either in the indicator panel or in settings (depending on your distribution), where you can select the SSID for your wireless network. If the network is password protected, it will usually prompt you for the password. Afterward, it connects, and the process is fairly smooth.

You can adjust network settings in the graphical environment by going into settings. Sometimes this is called System Settings or just Settings. Often you can easily spot the settings program because its icon is a gear or a picture of tools (Figure 1).

Figure 1: Gnome Desktop Network Settings Indicator Icon.

Network Interface Names

Under Linux, network devices have names. Historically, these are given names like eth0 and wlan0 — or Ethernet and wireless, respectively. Newer Linux systems have been using different names that appear more esoteric, like enp4s0 and wlp5s0. If the name starts with en, it’s a wired Ethernet interface. If it starts with wl, it’s a wireless interface. The rest of the letters and numbers reflect how the device is connected to hardware.
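To see what the interfaces on your own system are called, you can list them from a terminal (this command isn’t shown in the original article, but it is available on any modern distribution):

ip link show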

Network Management from the Command Line

If you want more control over your network settings, or if you are managing network connections without a graphical desktop, you can also manage the network from the command line.

Note that the most common service used to manage networks in a graphical desktop is the Network Manager, and Network Manager will often override setting changes made on the command line. If you are using the Network Manager, it’s best to change your settings in its interface so it doesn’t undo the changes you make from the command line or someplace else.

Changing settings in the graphical environment is very likely to be interacting with the Network Manager, and you can also change Network Manager settings from the command line using the tool called nmtui. The nmtui tool provides all the settings that you find in the graphical environment but gives it in a text-based semi-graphical interface that works on the command line (Figure 2).

Figure 2: nmtui interface

On the command line, there is an older tool called ifconfig to manage networks and a newer one called ip. On some distributions, ifconfig is considered to be deprecated and is not even installed by default. On other distributions, ifconfig is still in use.

Here are some commands that will allow you to display and change network settings:
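(The interface name enp4s0 below is only a placeholder; substitute the name of your own interface.)

ip addr show                                  # show the addresses on every interface (the older equivalent is ifconfig -a)
sudo ip addr add 192.168.1.15/24 dev enp4s0   # assign an address to an interface
sudo ip link set enp4s0 up                    # bring an interface up
sudo ip link set enp4s0 down                  # take an interface down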

Process and System Information

In Windows, you can go into the Task Manager to see a list of all the programs and services that are running. You can also stop programs from running. And you can view system performance in some of the tabs displayed there.

You can do similar things in Linux both from the command line and from graphical tools. In Linux, there are a few graphical tools available depending on your distribution. The most common ones are System Monitor or KSysGuard. In these tools, you can see system performance, see a list of processes, and even kill processes (Figure 3).

Figure 3: Screenshot of NetHogs.

In these tools, you can also view global network traffic on your system (Figure 4).

Figure 4: Screenshot of Gnome System Monitor.

Managing Process and System Usage

There are also quite a few tools you can use from the command line. The command ps can be used to list processes on your system. By default, it will list processes running in your current terminal session. But you can list other processes by giving it various command line options. You can get more help on ps with the commands info ps, or man ps.
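A few common invocations, as illustrative examples:

ps -e                        # every process on the system
ps aux                       # every process, with owner, CPU and memory columns
ps aux --sort=-%mem | head   # the biggest memory users first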

Most folks though want to get a list of processes because they would like to stop the one that is using up too much memory or CPU time. In this case, there are two commands that make this task much easier. These are top and htop (Figure 5).

Figure 5: Screenshot of top.

The top and htop tools work very similarly to each other. These commands update their list every second or two and re-sort the list so that the task using the most CPU is at the top. You can also change the sorting to sort by other resources as well such as memory usage.

In either of these programs (top and htop), you can type ‘?’ to get help, and ‘q’ to quit. With top, you can press ‘k’ to kill a process and then type in the unique PID number for the process to kill it.

With htop, you can highlight a task by pressing down arrow or up arrow to move the highlight bar, and then press F9 to kill the task followed by Enter to confirm.

The information and tools provided in this series will help you get started with Linux. With a little time and patience, you’ll feel right at home.

Source

How to Create Bootable Ubuntu 18.04 USB Stick on Windows

This tutorial will walk you through the process of creating a bootable Ubuntu USB stick on Windows. You can use this USB stick to boot and test out or install Ubuntu on any computer that supports booting from USB.

  • A 4GB or larger USB stick drive
  • Microsoft Windows XP or later

Creating a bootable Ubuntu 18.04 USB stick on Windows is a relatively straightforward process; just follow the steps outlined below.

To download the Ubuntu ISO file, visit the Ubuntu downloads page, where you can find download links for Ubuntu Desktop, Ubuntu Server and various Ubuntu flavours.

Most likely you will want to download the latest Ubuntu LTS Desktop version.

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this tutorial, we will create a bootable Ubuntu 18.04 USB stick using Etcher.

Etcher is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS and Linux.

Head over to the balenaEtcher downloads page, and download the most recent Etcher for Windows.

Once the installation file is downloaded, double-click on it to launch the installation wizard. Follow the installation wizard’s steps to install Etcher on your Windows desktop.

Creating bootable Ubuntu USB stick with Etcher is an easy task to perform.

  1. Insert the USB flash drive into the USB port and Launch Etcher.
  2. Click on the Select image button and locate your Ubuntu .iso file. If you downloaded the file using a web browser, then it should be stored in the Downloads folder in your user account.
  3. Etcher will autoselect the USB drive if only one drive is present. Otherwise, if more than one SD card or USB stick is attached, make sure you have selected the correct USB drive before flashing the image.
  4. Click on the Flash image button and the flashing process will start.

    Etcher will show a progress bar and ETA while flashing the image.

    The process may take several minutes, depending on the size of the ISO file and the USB stick speed. Once completed the following screen will appear informing you that the image is successfully flashed.

    Click on the [X] to close the Etcher window.

That’s all! You have a bootable Ubuntu on your USB stick.

Source

Linux Apps on Chromebooks Getting Display Scaling for High-Res Devices


In retrospect, the entire project bringing Linux apps to Chrome OS has been a relatively smooth, fast, and painless process for end users. Unlike the years-long Play Store transition (which is still playing out in quite a few ways even a few years later), bringing Linux apps to Chromebooks has been a process that has evolved quite rapidly.

There are still a few notable missing pieces, however, like the lack of proper audio support, GPU support, and scaling for high DPI displays. We know GPU support is inbound in 2019 and that audio support is also in the works (though the team has missed the Chrome OS 71 target set earlier in the year), but little has been solidified on the resolution scaling front.

With Chrome OS 72 in Developer Channel, that all changes.

Discovered by Kevin Tofel over at About Chromebooks, Linux apps on Chromebooks now have a nifty scaling feature users can toggle right from the app shelf. With a right-click on your Linux app, a context menu will appear that will allow you to choose a “Use low density” option.

Select that, restart the app, and now your app will scale much better on your high-res display. Remember, on devices like the Pixelbook, Pixel Slate, HP Chromebook x2, and basically any 1080p device that has a screen smaller than 15 inches, Chrome OS scales the entire interface to make things a bit larger on the screen while keeping icons, text and graphics nice and sharp.

With Linux being unable to take advantage of this graphical trick, interface elements on a 12.3-inch screen with a 3000×2000 pixel resolution will render incredibly small to the point of obfuscation. Allowing Linux apps to leverage display scaling removes this problem.

An added benefit is Chrome OS can actually remember the setting for each app and scale that particular app the same way each time you open it in the future. For general usability, this is a very important step in the process.

As a test, we installed Inkscape: a great SVG editor that has been hard to use because of the ridiculously small interface elements. Take a look below at the before and after pics. Both are screenshots of the app running full-screen on the Pixelbook.

Inkscape running in native resolution.
Inkscape running with the “Use Low Density” option enabled.

As you can see, this little setting makes using apps like this a much better experience. Though Inkscape wouldn’t run when the low density option was selected initially, a quick restart of the Pixelbook allowed it to work. Just remember all this is firmly in the Developer Channel, so you’ll have hiccups here and there. I have little doubt that things will get ironed out as this option makes its way to the Stable Channel.

As we see GPU acceleration and audio fixes come later this year, it is really exciting to think of all that will be possible on a Chromebook if users choose to leverage them.

Source
