Things You Should Know : Wireless Hacking Intermediate

In the previous post in the ‘things you should know’ series I discussed Wireless Hacking basics. It’s recommended that you go through it before starting this tutorial.

Pre-requisites

You should know (all this is covered in Wireless Hacking basics)-

  • The different ‘flavors’ of wireless networks you’ll encounter, and how difficult each is to hack.
  • What hidden networks are, and whether they offer a real challenge to a hacker.
  • A very rough idea of how each of the various ‘flavors’ of wireless networks is actually hacked.

Post-reading

You will know –

  • Even more about the different flavors of wireless networks.
  • How to go about hacking any given wireless network.
  • The common tools and attacks that are used in wireless hacking.
  • A rough idea of the cryptographic aspects of the attacks, the vulnerabilities and the exploits.
  • A rough idea of the cryptographic aspects of each ‘flavor’ of wireless network security.

The last two points will be covered in detail in the coming posts.

Pirates of the Caribbean

Suppose you are in the ship manufacturing business, back in the days when pirates rampaged the seas. You observed how merchant ships all floated unguarded on the seas, and how the pirate industry was booming because of these easy targets. You decided to create fortified ships which could defend themselves against the pirates. For this, you used an alloy, X. Your idea was appreciated by the merchants, and soon everyone was using your ships….

The most iconic pirates of modern times

Unfortunately, your happiness was short lived. Soon, the pirates found out flaws in
your ships and any pirate who knew what he was doing could easily get
past your ship’s defense mechanisms. For a while you tried to fix the
known weaknesses in the ship, but soon realized that there were too many
problems, and that the very design of the ship was flawed.

You knew what flaws the pirates were exploiting, and could build a new and stronger ship. However, the merchants weren’t willing to pay for new ships. You
then found out that by remodeling some parts of the ship in a very
cost efficient way, you could make the ship’s security almost
impenetrable. In the coming years, some pirates found a few structural
weaknesses in alloy X, and some issues with the core design of the ship (remnant weaknesses of the original ship). However,
these weaknesses were rare and your customers were overall happy.

After some time you decided to roll out an altogether new model of the ship. This time, you used a stronger alloy, Y. Also, you knew all the flaws in the
previous versions of the ship, and didn’t make any errors in the design
this time. Finally, you had a ship which could withstand constant
bombardment for months on end, without collapsing. There was still scope
for human error, as the sailors can sometimes be careless, but other
than that, it was an invincible ship.

WEP, WPA and WPA-2

WEP is the flawed ship in the above discussion. The aim of its designers was to make a wireless network (WLAN) as secure as a wired network (LAN). This is why the protocol was called Wired Equivalent Privacy (privacy equivalent to that expected on a traditional wired network). Unfortunately, while in theory the idea behind WEP sounded bullet-proof, the actual implementation was very flawed. The main problems were static keys and weak IVs. For a while attempts were made to fix the problems, but nothing worked well enough (WEP2, WEPplus, etc. were created, but all failed).

WPA was a new WLAN standard which was compatible with devices using WEP encryption. It fixed pretty much all the flaws in WEP, but the limitation of having to work with old hardware meant that some remnants of WEP's problems would continue to haunt WPA. Overall, however, WPA was quite secure. In the above story, this is the remodeled ship.

WPA-2 is the latest and most robust security algorithm for wireless networks. It wasn’t backwards compatible with many devices, but these days all the new devices support WPA-2. This is the invincible ship, the new model with a stronger alloy.

But wait…

In the last tutorial I assumed WPA and WPA-2 are the same thing. In this one, I’m telling you they are quite different. What’s the matter?

Well actually, the two standards are indeed quite different. However, while it’s true there are some remnant flaws in WPA that are absent in WPA-2, from a hacker’s perspective, the technique to hack the two networks is often the same. Why?

 

  • Very few tools exist which carry out the attacks against WPA networks properly (the absence of proof-of-concept scripts means that you have to do everything from scratch, which most people can’t).
  • All these attacks work only under certain conditions (key renewal period must be large, QoS must be enabled, etc.)

Because of these reasons, despite WPA being a little less secure than WPA-2, most of the time a hacker has to use the brute-force/dictionary attacks and other methods that he would use against WPA-2, practically making WPA and WPA-2 the same thing from his perspective.

PS: There’s more to the WPA/WPA-2 story than what I’ve captured here. Actually, “WPA” and “WPA-2” on their own are ambiguous descriptions, and the actual intricacies (PSK, CCMP, TKIP, 802.1X/EAP, and AES, with respect to the cipher and authentication used) would require diving further into the personal and enterprise versions of both WPA and WPA-2.

How to Hack

Now that you know the basics of all these networks, let’s get to how these networks are actually hacked. I will only name the attacks here; further details will be provided in the coming tutorials –

WEP

The initialization vector (IV) passed to the RC4 cipher is the weakness of WEP

Most of the attacks rely on inherent weaknesses in IVs (initialization vectors). Basically, if you collect enough of them, you will get the password.
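To get a feel for why repeating IVs is fatal, here is a small illustrative Python sketch. The RC4 routine is the real cipher, but the key and IV values are made-up toys; the one WEP-specific fact it demonstrates is that WEP seeds RC4 with the (cleartext) IV concatenated with the shared key, so a repeated IV means a repeated keystream:

```python
def rc4_keystream(key, n):
    """Return the first n keystream bytes RC4 produces for the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

secret = b'\x0f\x1e\x2d\x3c\x4b'      # toy 40-bit WEP key; it never changes
iv = b'\x01\x02\x03'                  # 24-bit IV, transmitted in the clear
ks1 = rc4_keystream(iv + secret, 16)  # WEP seeds RC4 with IV || key
ks2 = rc4_keystream(iv + secret, 16)  # same IV again => identical keystream
# XORing two ciphertexts that reused an IV cancels the keystream completely,
# and statistical biases in RC4's early output leak the key itself (FMS/PTW).
```

With only 2^24 possible IVs, a busy network is forced to repeat them sooner or later, which is exactly what the attacks listed next exploit.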

  1. Passive method
    • If you don’t want to leave behind any footprints, then the passive method is the way to go. In this, you simply listen on the channel the network is on and capture the data packets (airodump-ng). These packets will give you IVs, and with enough of these, you can crack the network (aircrack-ng). I already have a tutorial on this method, which you can read here – Hack WEP using aircrack-ng suite.
  2. Active methods
  • ARP request replay – The above method can be incredibly slow, since you need a lot of packets (there’s no way to say how many; it can literally be anything due to the nature of the attack, though usually the number of packets required ends up in 5 digits). Capturing that many packets can be time consuming. However, there are many ways to speed up the process. The basic idea is to initiate some sort of conversation in the network, and then capture the packets that arise as a result of that conversation. The problem is, not all packets have IVs. So, without having the password to the AP, you have to make it generate packets with IVs. One of the best ways to do this is by replaying ARP requests (which have IVs and can be generated easily once you have captured at least one ARP packet). This attack is called the ARP replay attack. We have a tutorial for this attack as well – ARP request replay attack.
  • Chopchop attack
  • Fragmentation attack
  • Caffe Latte attack

I’ll cover all these attacks in detail separately (I really can’t summarize the bottom three). Let’s move on to WPA –

WPA-2 (and WPA)

There are no vulnerabilities here that you can easily exploit. The only two options you have are to guess the password, or to fool a user into giving it to you.

  1. Guess the password – For guessing something, you need two things: guesses (duh) and validation. Basically, you need to be able to make a lot of guesses, and also be able to verify whether each one is correct. The naive way would be to enter the guesses into the password field that your OS provides when connecting to the wifi. That would be slow, since you’d have to do it manually. Even if you wrote a script for it, it would still take time, since you’d have to communicate with the AP for every guess (multiple times for each guess, in fact). Basically, validation by asking the AP every time is slow. So, is there a way to check the correctness of our password without asking the AP? Yes, but only if you have a 4-way handshake. Basically, you need to capture the series of packets transmitted when a valid client connects to the AP. If you have these packets (the 4-way handshake), then you can validate your password guesses against them. More details on this later, but I hope the abstract idea is clear. There are a few different ways of guessing the password:
  • Bruteforce – Tries all possible passwords. It is guaranteed to work, given sufficient time. However, even for alphanumeric passwords of length 8, brute force takes incredibly long. This method might be useful if the password is short and you know that it’s composed only of numbers.
  • Wordlist/Dictionary – In this attack, there’s a list of words which are possible candidates for the password. These wordlist files contain English words, combinations of words, misspellings of words, and so on. There are some huge wordlists which are many GBs in size, and many networks can be cracked using them. However, there’s no guarantee that the password of the network you are trying to crack is in the list. These attacks complete within a reasonable timeframe.
  • Rainbow table – The validation process against the 4-way handshake that I mentioned earlier involves hashing the plaintext password, which is then compared with the hash in the handshake. However, hashing (WPA uses PBKDF2) is a CPU-intensive task and is the limiting factor in the speed at which you can test keys (this is why so many tools use the GPU instead of the CPU to speed up cracking). Now, a possible solution is that the person who created the wordlist/dictionary we are using could also convert the plaintext passwords into hashes, so that they can be checked directly. Unfortunately, WPA/WPA-2 uses a salt while hashing, which means that two networks with the same password can have different hashes if they use different salts. How is the salt chosen? The network’s name (SSID) is used as the salt, so two networks with the same SSID and the same password will have the same hash. So now the guy who made the wordlist would have to create separate hashes for every possible SSID. Practically, what happens is that hashes are generated for the most common SSIDs (the defaults when a router is purchased – linksys, netgear, belkin, etc.). If the target network has one of those SSIDs, then the cracking time is reduced significantly by using the precomputed hashes. Such a precomputed table of hashes is called a rainbow table. Note that these tables are significantly larger than the wordlists themselves. So, while we save some time while cracking the password, we have to use a much larger file (some are 100s of GBs) instead of a smaller one. This is referred to as a time-memory tradeoff. This page has rainbow tables for the 1000 most common SSIDs.

 

  • Fool a user into giving you the password – Basically, this is just a combination of man-in-the-middle and social engineering attacks. More specifically, it is a combination of an evil twin and phishing. In this attack, you first force a client to disconnect from the original WPA-2 network, then force him to connect to a fake open network that you create, and then serve him a login page in his browser asking him to enter the password of the network. You might be wondering why we need to keep the fake network open and ask for the password in the browser (can’t we just create a WPA-2 network and let the user give us the password directly?). The answer lies in the fact that WPA-2 performs mutual authentication during the 4-way handshake. Basically, the client verifies that the AP is legit and knows the password, and the AP verifies the same about the client (throughout the process, the password is never sent in plaintext). Without the password, we simply don’t have the information necessary to complete the 4-way handshake.
  • Bonus : WPS vulnerability and reaver [I have covered it in detail separately, so I’m not explaining it again (I’m only human, and a very lazy one too)]
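The guessing attacks above are easy to demonstrate with Python's standard library. The derivation below is the real WPA/WPA-2 personal-mode PMK computation (PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, 32-byte output), but the passphrase, SSIDs and wordlist are made up for illustration; note also that a real cracker validates each guess against the captured handshake's MIC rather than against a raw PMK:

```python
import hashlib

def wpa_pmk(passphrase, ssid):
    # WPA/WPA-2 (personal): PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID,
    # 4096 iterations, 32-byte output)
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(), ssid.encode(), 4096, 32)

# Same passphrase, different SSID (salt) => completely different hashes,
# which is why a rainbow table has to be precomputed per SSID.
pmk_a = wpa_pmk('hunter42', 'linksys')
pmk_b = wpa_pmk('hunter42', 'netgear')

# A toy dictionary attack: derive the PMK for each candidate and compare.
target = wpa_pmk('hunter42', 'linksys')   # pretend this came from a captured handshake
wordlist = ['password', 'letmein', 'hunter42']
found = next((w for w in wordlist if wpa_pmk(w, 'linksys') == target), None)
```

The 4096 PBKDF2 iterations are what make each guess expensive, and precomputing them per SSID is exactly the time-memory tradeoff the rainbow tables buy you.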

 

The WPA-2 4-way handshake procedure. Both the AP and the client authenticate each other

Tools (Kali)

In this section I’ll name some common wireless hacking tools which come preinstalled in Kali, along with the purposes they are used for.

  1. Capture packets
    • airodump-ng
    • wireshark (a really versatile tool; there are whole books covering just this tool for packet analysis)
  2. WPS
    • reaver
    • pixiewps (performs the “pixie dust attack”)
  3. Cool tools
    • aireplay-ng (WEP mostly)
    • mdk3 (cool stuff)
  4. Automation
    • wifite
    • fluxion (actually it isn’t a common script at all, but since I wrote a tutorial on it, I’m linking it)

You can find more details about all the tools installed on the Kali Tools page.

Okay guys, this is all that I had planned for this tutorial. I hope you learnt a lot of stuff. Will delve into further depths in coming tutorials.

Source

Stranded Deep adds a new experimental couch co-op mode to survive together

Fancy surviving on a desert island with a friend? That’s now possible with a new experimental build of Stranded Deep [Steam].

To go along with this new feature, they also added a player ragdoll for when you’re knocked out or dead. Your partner can help you up with bandages before you bleed out, and bodies can be dragged as well for maximum fun. It’s good to see them add more from their roadmap, with plenty more still to come before it leaves Early Access.

They also added a Raft Passenger Seat, fixed a bunch of bugs and updated Unity to “2017.4.13f1”. Also the shark music won’t play until you’re actually attacked so no more early warnings for you.

To access it, you will need to opt-in to the “experimental” Beta on Steam.

Source

Canta: Best Theme And Icons Pack Around For Ubuntu/Linux Mint – NoobsLab

If you are a person who changes themes on your Linux system frequently, then you are on the right page. Today we present the best theme under development so far for Ubuntu 18.04/Linux Mint 19; it has light and dark variants with different styles: normal, compact and square. Whether you are a fan of material design or not, you will most probably like this theme and icon pack. The initial release of Canta was back in March 2018, and it is released under the GNU General Public License V3.

The Canta theme is based on the Materia GTK theme.

This pack mainly targets the GNOME Shell desktop but can be used on other desktops as well, such as Cinnamon, Xfce, MATE, etc. The Canta icons are supplied in the same pack and designed by the same author. Basically, these icons are designed to go with this theme pack, but you can use them with any theme. Both the theme and the icons are available in our PPAs: the theme for Ubuntu 18.10/18.04 and Linux Mint 19, and the icons for Ubuntu 18.10/18.04/16.04/14.04 and Linux Mint 19/18/17. If you find any kind of bug or problem with this theme, report it to the author and it will get fixed in the next update.

Available for Ubuntu 18.10/18.04 Bionic/Linux Mint 19/and other Ubuntu derivatives
To install Canta themes in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
Available for Ubuntu 18.10/18.04 Bionic/16.04 Xenial/14.04 Trusty/Linux Mint 19/18/17/and other Ubuntu derivatives
To install Canta icons in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
Did you like this pack?

Source

How to Use RAR files in Ubuntu Linux

Last updated September 27, 2018

RAR is quite a good archive file format. But it isn’t the best when you’ve got 7-zip offering great compression ratios and Zip files being easily supported across multiple platforms by default. Still, RAR is one of the most popular archive formats; yet Ubuntu‘s archive manager neither supports extracting RAR files nor lets you create them.

Fret not, we have a solution for you. To enable support for extracting RAR files, you need to install UNRAR – freeware by RARLAB. And to create and manage RAR files, you need to install RAR – which is available as trialware.

RAR files in Ubuntu Linux

Extracting RAR Files

Unless you have it installed, extracting RAR files will show you the error “Extraction not performed“. Here’s how it looks (Ubuntu 18.04):

Error in RAR extraction in Ubuntu

If you want to resolve the error and easily be able to extract RAR files, follow the instructions below to install unrar:

-> Launch the terminal and type in:

sudo apt-get install unrar

-> After installing unrar, you can type in “unrar” (without the inverted commas) to see its usage information and learn how to work with RAR files using it.

The most common usage would obviously be extracting the RAR file you have. So, you can either perform a right-click on the file and proceed to extract it from there or you can do it via the terminal with the help of this command:

unrar x FileName.rar

You can see that in action here:

Using unrar in Ubuntu

If the file isn’t present in the Home directory, then you have to navigate to the target folder by using the “cd” command. For instance, if you have the archive in the Music directory, simply type in “cd Music” to navigate to the location and then extract the RAR file.

Creating & Managing RAR files

Using rar archive in Ubuntu Linux

UNRAR does not let you create RAR files. So, you need to install the RAR command-line tool to be able to create RAR archives.

To do that, you need to type in the following command:

sudo apt-get install rar

Here, we will help you create a RAR file. In order to do that, follow the command syntax below:

rar a ArchiveName File_1 File_2 Dir_1 Dir_2

A command in this format adds every listed file, and every item inside the listed directories, to the archive. If you only want specific files, just mention their exact names/paths.

By default, the RAR file is created in the directory you run the command from (the HOME directory in this case).

In the same way, you can update/manage the RAR files. Just type in a command using the following syntax:

rar u ArchiveName Filename

To get the list of commands for the RAR tool, just type “rar” in the terminal.

Wrapping Up

Now that you know how to use RAR files on Ubuntu, will you prefer it over 7-zip, Zip, or Tar.xz?

Let us know your thoughts in the comments below.

About Ankush Das

A passionate technophile who also happens to be a Computer Science graduate. He has had bylines at a variety of publications that include Ubergizmo & Tech Cocktail. You will usually see cats dancing to the beautiful tunes sung by him.

Source

Download Jenkins Linux 2.147

Jenkins (also known as Jenkins CI) is the world’s most powerful open source continuous integration server, designed from the outset to provide over 300 plugins for building and testing any software project. It is a web-based application that runs on top of a web server, such as Apache.

Features at a glance

With Jenkins, you can monitor the execution of repeated jobs, including those run by cron or a similar automation software. It is easily installable, configurable and supports third-party plugins, distributed builds, as well as file fingerprinting.

In addition, Jenkins’ highlights include after-the-fact tagging, JUnit and TestNG test reporting, support for permanent links, support for mainstream operating systems and architectures, change set support, RSS, Instant Messaging and email integration.

Getting started with Jenkins

Jenkins is an easy-to-use and easy-to-install software project, but it has a great number of advanced features, for which its developers offer a detailed “Getting started with Jenkins” guide teaching you how to start, access and administer Jenkins, as well as how to perform various operations.

For example, you will learn how to build a software project, a Maven project, a matrix project, an Android app, monitor external jobs, use Jenkins plugins, file fingerprint tracking, secure Jenkins, change the timezone, use other shells, split a large job in smaller pieces, use Jenkins for non-Java projects, as well as to access the Jenkins script console, the command-line interface and SSH (Secure Shell).

Additionally, the user will learn how to integrate Jenkins with Drupal, Python, Perl and .NET projects, remove and disable third-party plugins, run Jenkins from behind a HTTP/HTTPS proxy, and many other useful things.

Supported operating systems

Being designed for the Web, Jenkins is a platform-independent application that has been successfully tested on several GNU/Linux distributions, including Ubuntu, Debian, Red Hat Enterprise Linux, Fedora, CentOS, openSUSE and Gentoo, various BSD flavors, including FreeBSD and OpenBSD, Solaris (OpenIndiana), Microsoft Windows and Mac OS X operating systems.


Source

Joint Venture formed to ensure South African businesses seamlessly migrate to the cloud

SUSE, Microsoft, Mint & SAB&T join forces to help businesses do more with less

Blog by Matthew Lee, Cloud and Strategic Alliances Manager at SUSE

I am very pleased today, as SUSE, Microsoft South Africa, Mint and SAB&T have entered into a local joint venture designed to assist organisations across industry sectors in migrating their SAP workloads to Azure given the imminent arrival of two Microsoft data centres in Africa.

SUSE will be providing the SAP-optimised Linux operating system tuned for Azure with Microsoft the cloud infrastructure provider. Mint will deliver the required Azure expertise and SAB&T will offer the SAP partner skills to support companies with the transition.

This joint venture is significant as it shows companies that the local tools, processes, programmes, and skills are in place for a successful SAP migration when the local Microsoft data centres go live. This partnership between these four experts in their field will provide the comfort levels needed when it comes to running SAP in the cloud – something South African businesses are looking for.

For Carel du Toit, CEO of the Mint Group, this partnership reflects a growing trend to deliver customer value propositions that transform their computing, storage, and communication into utilities that are easily available through cloud resources on an as-needed basis. “Opportunities exist for organisations to look at operationalising their current environments, driving down running costs, and aligning their operational cost model with the actual utilisation requirement for their solutions. Azure is a compelling hosting option for customers who are also making use of Office365 – since their SAP and Office environments would essentially be hosted in the same Azure Regions – enabling deep integration between the systems for workflow and reporting,” he says.

According to Riedwaan Bassadien, Azure Open Source Lead at Microsoft SA, cloud migrations are becoming popular with many organisations as they look to downsize their data centre footprint. “This is an opportunity for IT solution providers in the local ecosystem to help customers move to the cloud and for software vendors and start-ups to deliver cloud native solutions to Africa and the world stage. With the advent of Azure data centre regions in SA, it is seen as a big enabler.”

Tinus Brink, Director of Consulting at SAB&T feels part of this migration entails putting the skills in place to deliver an integrated offering to customers that have decided to enhance their SAP environments for a digital world. “The cloud offers numerous opportunities to deliver enhanced business value. This joint venture is designed to provide a comprehensive and professional offering that removes the challenges of migrating to the cloud, so businesses can remain focused on delivering their strategic objectives,” says Brink.

Given the infrastructure challenges that still exist in Africa, the cloud provides a viable alternative that addresses many business continuity concerns. I believe that leveraging the respective skills of our four organisations will create an enabling environment for companies to easily and cost-effectively move to the Azure-based data centres.

With mission-critical systems such as those delivered through SAP environments, companies do not have the luxury of down-time or losing data. Our joint venture is designed to deliver the best value possible and make the cloud journey an empowering one for business.

According to du Toit, a successful SAP on Azure cloud migration requires a solid partner in terms of the cloud infrastructure, an expert on deploying and configuring SAP, and a reliable and cost-effective operating system to use as a platform between these worlds. “By combining the efforts of Mint (as a Microsoft Cloud Gold Partner), SAB&T (as one of the de facto names in SAP knowledge and training in the South African market), and SUSE’s cost effective, performant, resilient, specially-tailored SAP workloads, we give customers a no-compromise value proposition which covers all the bases,” he says.

Bassadien from Microsoft agrees. “I believe that each party in the joint venture brings something special to the market. It speaks of depth of expertise and high levels of trust between each party as well as trust that our joint customers can rely on. Experienced CIOs and business decision-makers know that there is no one organisation that can give you everything. What we have tried to do here is to bring together a dream team of sorts, for the benefit of our joint customers.”

 

Share with friends and colleagues on social media

Source

FOSSPicks » Linux Magazine

Graham reviews Thunderbird 60, Stress-Terminal UI, Taskbook, SolveSpace, Star Ruler 2, and more!

Email client

Thunderbird 60

As much as online proprietary services would like old-school email to go away, it’s not dead yet. The great thing about email is that it’s truly peer-to-peer and open. It enables any of us to run our own mail domain and send and receive messages from our own servers or computers – which is also the major problem with email, because “anyone” includes spammers, and there are thousands of them. There are solutions to spammers (SpamAssassin and Rspamd), and email is still amazingly useful. In the end, we still need a desktop email client. Roundcube and other online services are great, but they can’t compete with the desktop integration and offline access of a proper application like Mozilla’s Thunderbird.

Thunderbird used to be the go-to desktop email application, regardless of your operating system and desktop environment, but its development stalled. Fortunately, there was enough community concern for Thunderbird, and for its pivotal role as one of the only usable open source email clients, that development has restarted. This is the first major Thunderbird release under this new regime, and one hopes the first of many as Mozilla rewrites the codebase, drops the old Firefox technologies, and builds an email client fit for the future. This doesn’t mean that this release doesn’t include lots of updates – it does. After a long period of stable release stasis, version 60 really does contain many new features and fixes. For that reason, it doesn’t automatically upgrade from old versions. Keeping with the times, there are now light and dark themes thanks to the use of Firefox’s Photon design, and excellent FIDO U2F support for two-factor authentication with various devices. There’s also experimental support for conversion between the MBOX and Maildir mail storage formats, which is particularly useful for Linux users who historically started with one and now want to switch to the other.

When composing messages, there are several improvements to the way attachments are handled, allowing you to reorder them. The attachment pane appears when you first start writing an email, and when hidden but non-empty it shows a paperclip. You can also remove recipients by clicking on a delete button that’s displayed when you move your cursor over the To/Cc/Bcc selector, and you can save a message as a template for other messages, creating them with the New Message from Template command. Native Linux notifications have also been reinstated. Besides these changes, there are lots of fixes that aren’t obvious. The calendar now allows for copying, cutting, and deleting across a single or recurring event, and it’s now much easier to see event locations in the week and day calendar views. Thunderbird is starting to feel alive again. While there are still some major features we’d like to see, such as integrated and simplified OpenPGP to strengthen Thunderbird’s privacy credentials, we’re just pleased the project is being worked on at all. Here’s to the next release!

[…]

Use Express-Checkout link below to read the full article (PDF).

Source

Understanding Linux Links: – Linux.com

Along with cp and mv, both of which we talked about at length in the previous installment of this series, links are another way of putting files and directories where you want them to be. The advantage is that links let you have one file or directory show up in several places at the same time.

As noted previously, at the physical disk level, things like files and directories don’t really exist. A filesystem conjures them up for our human convenience. But at the disk level, there is something called a partition table, which lives at the beginning of every partition, and then the data scattered over the rest of the disk.

Although there are different types of partition tables, the ones at the beginning of a partition containing your data will map where each directory and file starts and ends. The partition table acts like an index: When you load a file from your disk, your operating system looks up the entry in the table, and the table says where the file starts on the disk and where it finishes. The disk head moves to the start point, reads the data until it reaches the end point and, hey presto: here’s your file.

Hard Links

A hard link is simply an entry in the partition table that points to an area on a disk that has already been assigned to a file. In other words, a hard link points to data that has already been indexed by another entry. Let’s see how this works.

Open a terminal, create a directory for tests and move into it:

mkdir test_dir
cd test_dir

Create a file by touching it:

touch test.txt

For extra excitement (?), open test.txt in a text editor and add a few words to it.

Now make a hard link by executing:

ln test.txt hardlink_test.txt

Run ls, and you’ll see your directory now contains two files… or so it would seem. As you read before, what you are really seeing is two names for the exact same file: hardlink_test.txt contains the same content, has not filled any more space on the disk (try with a large file to test this), and shares the same inode as test.txt:

$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

ls‘s -i option shows the inode number of a file. The inode is the chunk of information in the partition table that contains the location of the file or directory on the disk, the last time it was modified, and other data. If two files share the same inode, they are, to all practical effects, the same file, regardless of where they are located in the directory tree.
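The same experiment can be scripted end to end. Below is a minimal Python sketch of the hard-link behaviour described above; the file names mirror the article's, and the throwaway temp directory is an assumption of the sketch:

```python
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, 'test.txt')
with open(src, 'w') as f:
    f.write('a few words')

# Equivalent of: ln test.txt hardlink_test.txt
hard = os.path.join(d, 'hardlink_test.txt')
os.link(src, hard)

same_inode = os.stat(src).st_ino == os.stat(hard).st_ino  # one file, two names
link_count = os.stat(src).st_nlink                        # the '2' column in ls -l
same_content = open(hard).read() == 'a few words'         # reading either name is identical
```

Deleting one of the names only decrements the link count; the data stays on disk until the last hard link to it is removed.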

Fluffy Links

Soft links, also known as symlinks, are different: a soft link is really an independent file, it has its own inode and its own little slot on the disk. But it only contains a snippet of data that points the operating system to another file or directory.

You can create a soft link using ln with the -s option:

ln -s test.txt softlink_test.txt

This will create the soft link softlink_test.txt to test.txt in the current directory.

By running ls -li again, you can see the difference between the two different kinds of links:

$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

hardlink_test.txt and test.txt contain the same text and, quite literally, occupy the same space on disk. They also share the same inode number. Meanwhile, softlink_test.txt occupies much less space and has a different inode number, marking it as a different file altogether. ls's -l option also shows the file or directory your soft link points to.
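A quick way to feel the difference is to delete the original file name; this sketch redoes the steps above in a fresh scratch directory so it does not clash with the files you already created:

```shell
# Run in a fresh directory so it does not clash with the files above
cd "$(mktemp -d)"

echo "hello links" > test.txt
ln test.txt hardlink_test.txt
ln -s test.txt softlink_test.txt

rm test.txt                       # delete the original name

cat hardlink_test.txt             # still prints "hello links": the data survives
cat softlink_test.txt             # fails: the soft link now points at nothing
```

The hard link keeps the inode (and the data) alive, while the soft link is left dangling because the name it stored no longer exists.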

Why Use Links?

They are good for applications that come with their own environment. It often happens that your Linux distro does not come with the latest version of an application you need. Take the case of the fabulous Blender 3D design software. Blender allows you to create 3D still images as well as animated films, and who wouldn't want to have that on their machine? The problem is that the current version of Blender is always at least one version ahead of that found in any distribution.

Fortunately, Blender provides downloads that run out of the box. These packages bring with them, apart from the program itself, a complex framework of libraries and dependencies that Blender needs to work. All these bits and pieces come within their own hierarchy of directories.

Every time you want to run Blender, you could cd into the folder you downloaded it to and run:

./blender

But that is inconvenient. It would be better if you could run the blender command from anywhere in your file system, as well as from your desktop command launchers.

The way to do that is to link the blender executable into a bin/ directory. On many systems, you can make the blender command available from anywhere in the file system by linking to it like this:

ln -s /path/to/blender_directory/blender /home/<username>/bin
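A minimal sketch of that setup; /path/to/blender_directory is the same placeholder as above, standing in for wherever you unpacked the download:

```shell
# /path/to/blender_directory is a placeholder, not a real path
mkdir -p "$HOME/bin"
ln -s /path/to/blender_directory/blender "$HOME/bin/blender"

# ~/bin must be on your PATH for this to work; many distros add it by default
export PATH="$HOME/bin:$PATH"
```

If you later download a newer Blender, you only have to repoint the link; the command name the rest of your system uses stays the same.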

Another case in which you will need links is for software that needs outdated libraries. If you list your /usr/lib directory with ls -l, you will see a lot of soft-linked files fly by. Take a closer look, and you will see that the links usually have similar names to the original files they are linking to. You may see libblah linking to libblah.so.2, and then, you may even notice that libblah.so.2 links in turn to libblah.so.2.1.0, the original file.

This is because applications often require older versions of a library than what is installed. The problem is that, even if the more modern versions are still compatible with the older versions (and usually they are), the program will bork if it doesn't find the version it is looking for. To solve this problem, distributions often create links so that the picky application believes it has found the older version when, in reality, it has only found a link and ends up using the more up-to-date version of the library.
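You can rebuild such a chain by hand to see how it works; libblah here is a made-up library name, but the naming pattern is the one distros actually use:

```shell
cd "$(mktemp -d)"                      # scratch directory

touch libblah.so.2.1.0                 # stands in for the real library file
ln -s libblah.so.2.1.0 libblah.so.2    # the version applications ask for
ln -s libblah.so.2 libblah.so          # generic name used at build time

ls -l libblah.so*                      # shows the chain of links
readlink -f libblah.so                 # resolves all the way to libblah.so.2.1.0
```

readlink -f follows every link in the chain and prints the final target, which is handy for working out which file a library name really resolves to.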

Somewhat related is what happens with programs you compile yourself from the source code. These often end up installed under /usr/local: the program itself goes into /usr/local/bin and it looks for the libraries it needs in the /usr/local/lib directory. But say your new program needs libblah, and libblah lives in /usr/lib, which is where all your other programs look for it. You can link it into /usr/local/lib by doing:

ln -s /usr/lib/libblah /usr/local/lib

Or, if you prefer, by cd-ing into /usr/local/lib:

cd /usr/local/lib

… and then linking with:

ln -s ../../lib/libblah .
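Relative links like this are resolved from the directory that contains the link, so it pays to verify where they end up. A sandboxed sketch of the /usr/local/lib case (libblah is hypothetical, as above), using readlink -f to check the result:

```shell
# Recreate the directory layout in a scratch area
root="$(mktemp -d)"
mkdir -p "$root/usr/lib" "$root/usr/local/lib"
touch "$root/usr/lib/libblah"          # stands in for the real library

cd "$root/usr/local/lib"
ln -s ../../lib/libblah .              # two levels up, then into lib/

readlink -f libblah                    # resolves to .../usr/lib/libblah
```

Note that from /usr/local/lib you need to climb two levels (past local/ and usr/'s lib/ sibling) to reach /usr/lib, hence ../../lib rather than ../lib.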

There are dozens more cases in which linking proves useful, and you will undoubtedly discover them as you become more proficient in using Linux, but these are the most common. Next time, we’ll look at some linking quirks you need to be aware of.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Linux Today – KDE Plasma 5.14 Desktop Environment Gets First Point Release, Update Now

Oct 17, 2018, 14:00

(Other stories by Marius Nestor)

Released last week on October 9, 2018, the KDE Plasma 5.14 desktop environment brought improvements to the Plasma Discover package manager, a new Firmware Update feature, various user interface enhancements, new and better desktop effects, as well as slicker animations in the KWin window manager.

Now, the first point release, KDE Plasma 5.14.1, is available with an extra layer of improvements. Among the highlights of the KDE Plasma 5.14.1 point release, we can mention keyboard support for navigating desktop icons and the KonsoleProfiles applet, as well as focus handling fixes, addressed visual artifacts caused by the maximize KWin effect, better Flatpak and Snap support in Plasma Discover, and firmware update (fwupd) improvements.



Linux Logical Volume Manager Video Tutorial

In this series of video tutorials, you will learn what LVM is and when you should use it. You’ll discover how LVM creates and uses layers of abstraction between storage devices and file systems including Physical Volumes, Volume Groups, and Logical Volumes.

More importantly, you’ll learn how to configure LVM, starting with the pvcreate command to configure physical volumes, the vgcreate command to configure volume groups, and the lvcreate command to create logical volumes.

Plus, you’ll see how easy it is to extend file systems and logical volumes using the lvextend command. Likewise, adding more space to the storage pool is painless with the vgextend command.

Next, you’ll learn how to create mirrored logical volumes and even how to migrate data from one storage device to another, without taking any downtime.

Introduction to the Logical Volume Manager (LVM)

Layers of Abstraction in LVM

Creating Physical Volumes (PVs), Volume Groups (VGs), and Logical Volumes (LVs)

Extending Volume Groups and Logical Volumes

Mirroring Logical Volumes

Removing Logical Volumes, Physical Volumes, and Volume Groups

Migrating Data from One Storage Device to Another

Logical Volume Manager – Summary

More Linux System Administration Resources

LVM Companion Workbook

