How to Enable and Disable Root Login in Ubuntu – LinuxCloudVPS Blog

How to Enable and Disable Root Login in Ubuntu

Root access is required when you need to perform administrative operations that are not permitted for regular system users, but at the same time it can be a serious security risk if it is enabled unnecessarily or not used properly. In this tutorial we will show you how to enable and disable root login on a Linux VPS running Ubuntu as the operating system.

What is root?

In Ubuntu, and in Linux in general, there is a superuser named root which can perform any administrative task on the system. Because a single mistyped command run as root can do serious damage, root login is disabled by default in Ubuntu. You can still perform superuser operations with your regular system user by using the sudo command, provided sudo privileges have been granted to that user.

If root login is disabled on your Ubuntu VPS and you want to enable it, we will show you how to do that. Please follow the steps below.

Enable Root Login on Ubuntu

To enable root login on your Ubuntu server, you first need to set a password for the root user, as none is set during the OS installation. You can set the root password with the following command:

sudo passwd root

You will be prompted to enter a new password. Enter the same password twice to confirm it and it will be updated successfully. We recommend using a very strong password for your root user so that it cannot be compromised by brute force. Generally, a password of at least 12 characters that mixes letters, numbers and punctuation symbols is sufficient. Never use passwords based on dictionary words or significant dates.

# sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Now that you have a password set for your root user, check the OpenSSH settings and make sure the root user is allowed to use the service to access the server from remote locations. Open the OpenSSH server configuration file using a text editor of your choice. In our example, we are using nano:

sudo nano /etc/ssh/sshd_config

Find the line that starts with PermitRootLogin and make sure that line is not commented:

PermitRootLogin yes

If the line starts with #, it is commented out; remove the # sign and save the file. Next, you need to restart the OpenSSH service for the changes to take effect. You can do that by using the following command:

sudo systemctl restart ssh.service
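
To confirm that the new setting is active, you can print the effective OpenSSH configuration (assuming your OpenSSH version supports the -T test mode):

sudo sshd -T | grep -i permitrootlogin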

You can now connect to your server via SSH using your root user. Be careful though, with root login comes great responsibility.

You can also consider using SSH keys (private and public key) to login to your server. This method provides a more secure way of connecting to your server, instead of just using a password.
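
For example, a typical key-based setup from your local workstation looks something like this (the server address is a placeholder):

ssh-keygen -t ed25519
ssh-copy-id root@your_server_ip

Once you have confirmed that key-based login works, you can also set PasswordAuthentication no in /etc/ssh/sshd_config and restart the service, so that password logins are refused entirely.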

Disable Root Login on Ubuntu

If you have root login enabled on your Ubuntu VPS and you want to disable it, you can follow the steps below.

First, delete the password of your root user and lock the root user using the following command:

sudo passwd -dl root
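
You can verify that the account is now locked by checking the password status; the second field of the output should show L (locked):

sudo passwd -S root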

Then, open and edit the OpenSSH server configuration file using a text editor of your choice. We are using nano in our example:

sudo nano /etc/ssh/sshd_config

Find the line that starts with PermitRootLogin and make sure the value is set to no.

PermitRootLogin no

Once you make the appropriate changes in the OpenSSH configuration file, you need to restart the OpenSSH service for the changes to take effect. You can do that by using the following command:

sudo systemctl restart ssh.service

Of course, you don’t have to enable or disable root login on Ubuntu, if you use one of our Linux VPS Hosting services, in which case you can simply ask our expert Linux admins to enable or disable the root login on Ubuntu for you. They are available 24×7 and will take care of your request immediately.

PS. If you liked this post on how to Enable and Disable Root Login in Ubuntu, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.


Source

How To Setup mod_rewrite In Apache

Mod_rewrite on Apache

mod_rewrite is an Apache module available on Linux servers that rewrites requested URLs on the fly, so a URL can be served by something other than what it appears to point to. mod_rewrite can also improve SEO by giving dynamic URLs a static appearance.

This guide assumes you already have Apache installed. If you do not, please see How to Install Apache.

Enable mod_rewrite

You will want to edit the main Apache configuration file

nano /etc/httpd/conf/httpd.conf

Add or un-comment the following line

LoadModule rewrite_module modules/mod_rewrite.so

Once you have saved the file you can go ahead and restart Apache

systemctl restart httpd

or in CentOS 6 or below

service httpd restart

You should now see the module loaded by running the following command:

# httpd -M 2>&1|grep rewrite
rewrite_module (shared)

That takes care of enabling the module. mod_rewrite rules can either be inserted directly into the VirtualHost block for a specific domain or placed in a .htaccess file for that domain.
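
As an illustration, a rule placed directly in a VirtualHost block might look like the following (the domain, paths and the rule itself are only examples):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com

    RewriteEngine On
    # example rule: permanently redirect /old-page to /new-page
    RewriteRule ^/old-page$ /new-page [R=301,L]
</VirtualHost>

If you put the rules in a .htaccess file instead, make sure the directory's AllowOverride setting permits it (for example AllowOverride All), and drop the leading slash from the pattern.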

Mod_rewrite Examples

Rewrite domain.com to www.domain.com

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

The above redirect will take all requests to the non-www domain and redirect them with a 301 code to the www.domain.com URL, with the rest of the URL appended to it.

Redirect all requests to https / SSL

RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain.com [NC]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI}

The above redirect will take all non-ssl requests and redirect them to https:// URLs.

Redirect request from one directory to another

RewriteRule ^subdirectory/(.*)$ /anotherdirectory/$1 [R=301,NC,L]

The above redirect will take any requests towards a single directory and redirect them to another directory, with the rest of the URL appended.

Redirect one domain to another

RewriteEngine On
RewriteCond %{HTTP_HOST} ^olddomain.com [NC,OR]
RewriteCond %{HTTP_HOST} ^www.olddomain.com [NC]
RewriteRule ^(.*)$ http://newdomain.com/$1 [L,R=301,NC]

This will redirect any requests destined for the old domain and send them to the new domain. There are numerous redirects you can perform with mod_rewrite; these are just a couple of common examples.

Sep 5, 2017, LinuxAdmin.io

Source

Katello: Separate Lifecycle for Puppet Modules | Lisenet.com :: Linux | Security

Working with Katello. We’re going to configure a separate lifecycle for Puppet modules. This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have Katello installed on a CentOS 7 server:

katello.hl.local (10.11.1.4) – see here for installation instructions

See the image below to identify the homelab part this article applies to.

Separate Lifecycle for Puppet Modules

The idea for using a separate lifecycle for Puppet modules was taken from a Red Hat blog post that was published by Maxim Burgerhout.

We already know that we can create a repository that contains RPM files. We can then create a content view by snapshotting the repository.

We can create a content view with Puppet modules, just like we would do with RPMs. Based on that content view, Katello creates a special directory on the filesystem and it’s where the Puppet master looks for Puppet modules.

Katello creates a Puppet environment from the Puppet module content view the moment we publish it. As a result, using a Puppet module content view as a Puppet environment directly makes it easy to iterate quickly during development of our homelab Puppet modules.

The Plan

Below is a step-by-step plan that we’ll be following in this article.

  1. Step 1: create a Puppet product.
  2. Step 2: build Puppet modules.
  3. Step 3: create a Puppet repository.
  4. Step 4: sync Puppet repository.
  5. Step 5: create a content view.
  6. Step 6: add Puppet modules to the content view.
  7. Step 7: publish Puppet content view.
  8. Step 8: backup Katello configuration.

Configure Katello

Step 1: Create a Puppet Product

# hammer product create --name "puppet"

Step 2: Build Puppet Modules

See here for more info: Build and Import Puppet Modules into Katello

The idea here is to have a single Katello repository containing all our Puppet modules.

A Katello repository may be a plain directory containing a Pulp manifest and packaged Puppet modules. According to the Pulp project documentation, the Pulp manifest is a file listing each Puppet module contained in the directory. Each module is listed on a separate line which has the following format: <name>,<checksum>,<size>. The name is the file name, the checksum is SHA256 digest of the file, and the size is the size of the file in bytes. The Pulp manifest must be named PULP_MANIFEST. Having all this information, we can build Puppet modules manually, generate a Pulp manifest and import everything into Katello.
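
For illustration, a manifest entry for one of the homelab modules listed later in this article could look like the following (the checksum and size are placeholders, not real values):

lisenet-lisenet_firewall-1.0.0.tar.gz,<sha256 digest of the file>,<size in bytes>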

Get the source from GitHub:

# cd /opt
# git clone https://github.com/crylium/build-puppet-modules-for-katello.git

Build the modules, providing the path to the modules’ directory:

# bash ./build-puppet-modules-for-katello/puppet-module-build.sh \
  /etc/puppetlabs/code/environments/homelab/modules/

This will also create the file PULP_MANIFEST.

Step 3: Create a Puppet Repository

# hammer repository create \
  --product "puppet" \
  --name "homelab_modules" \
  --content-type "puppet" \
  --url "file:///etc/puppetlabs/code/environments/homelab/modules/"

Step 4: Synchronise Puppet Repository

# hammer repository synchronize \
  --product "puppet" \
  --name "homelab_modules"

Step 5: Create a Content View

# hammer content-view create \
  --name "puppet_content" \
  --description "Puppet modules"

Step 6: Add Puppet Modules to the Content View

View the module list:

# hammer puppet-module list
---|--------------------------|--------------|---------|-------------------------------------
ID | NAME | AUTHOR | VERSION | UUID
---|--------------------------|--------------|---------|-------------------------------------
38 | graylog | graylog | 0.6.0 | f27d9a89-9e0a-44fe-b72d-f101d94629a4
37 | sudo | saz | 5.0.0 | f088fa68-bfa3-4429-a8f2-f9c893d52bfc
36 | ruby | puppetlabs | 1.0.0 | eaaef4ba-bf52-4275-8eff-0340d98aa3f7
35 | archive | puppet | 2.3.0 | e09d2bc5-ec62-488c-a1a8-df6364448378
34 | elasticsearch | elastic | 6.2.1 | d965e7b4-ec88-4813-b575-745f9e78c2f1
33 | augeasproviders_shellvar | herculesteam | 2.2.2 | cbbe2521-890b-476d-b3b5-beef1b72fd73
32 | haproxy | puppetlabs | 2.1.0 | c9113401-719a-4d19-8ee8-8faca9a30317
31 | mongodb | puppet | 2.1.0 | c8e47d0c-e54c-4cef-9b16-c1bad02e7fba
30 | sysctl | thias | 1.0.6 | c23fabcc-0d62-4ecb-8ac3-ebe06e9772e6
29 | nfs | derdanne | 2.0.7 | c09f3853-43a8-4d30-b81d-7ce160d8b3b8
28 | stdlib | puppetlabs | 4.24.0 | 9ec2939a-3b08-4fbe-a7ff-1c34984350d7
27 | ssh | saz | 3.0.1 | 99b1c530-fbe7-487a-8842-cfeacc688b74
26 | apache | puppetlabs | 2.3.1 | 93f56575-da3d-41b6-964c-a70af87bcb0c
25 | concat | puppetlabs | 2.2.1 | 9379ce64-6135-4b17-a1c3-5731b0ac89c3
24 | mysql | puppetlabs | 5.3.0 | 92695de8-45c0-4271-832c-5721bdb5ffd9
23 | openldap | camptocamp | 1.16.1 | 924b998d-b361-4f75-9e41-55f825d209da
22 | accounts | puppetlabs | 1.3.0 | 8bf8366e-81f1-4dd1-8de6-9e330e7de759
21 | sssd | sgnl05 | 2.7.0 | 8afc1e88-9d4a-46ad-8107-5d457f4cd740
20 | snmp | razorsedge | 3.9.0 | 8aed966e-e973-4d87-af1d-6f4b63051c32
19 | lisenet_firewall | lisenet | 1.0.0 | 8513e8ec-7cdd-4606-8d8c-92a660dc5da5
18 | corosync | puppet | 6.0.0 | 7b4dba49-c793-47f7-b872-a683a4b8d131
17 | augeasproviders_core | herculesteam | 2.1.4 | 77afedf9-65b8-4168-a8a1-5e534e84462d
16 | pe_gem | puppetlabs | 0.2.0 | 5e639097-072a-4486-bc19-0b3ab6a8bbae
15 | keepalived | arioch | 1.2.5 | 4ff5c45b-0a93-4cbd-8574-1b246363378c
14 | firewall | puppetlabs | 1.12.0 | 3a86241a-3c52-4339-a05d-6f6de0a033ac
13 | rsyslog | saz | 5.0.0 | 330447a4-010a-4cfb-8b99-5cbcf327adaa
12 | systemd | camptocamp | 1.1.1 | 2fea15c7-99d4-49cd-9eea-578c5e249657
11 | ntp | puppetlabs | 7.1.1 | 2fd3c5d5-4943-4f54-bd60-3bd1d73af0d3
10 | translate | puppetlabs | 1.1.0 | 2e46f4e3-34f6-41a0-9466-4b163b87f5d9
9 | selinux | puppet | 1.5.2 | 2e12d841-2801-45d2-a70c-e287d134b1e8
8 | postgresql | puppetlabs | 5.3.0 | 28f11fd1-223b-46fe-a92c-cfc485aa28ef
7 | datacat | richardc | 0.6.2 | 24f45f62-7012-4ac1-809e-3efd9d5d9daa
6 | zabbix | puppet | 6.2.0 | 2426fdbc-9dc2-4cf2-8810-a7702fdd7faa
5 | limits | saz | 3.0.2 | 1b893348-11e9-45e7-9d64-5fb2819c1e96
4 | apt | puppetlabs | 4.5.1 | 13c33cf0-acbe-4369-b44e-def9933e6d87
3 | wordpress | hunner | 1.0.0 | 0f928270-7b36-407b-b603-1efe6e261812
2 | staging | puppet | 3.1.0 | 0a6ffb28-5049-4556-923d-7af3850ece63
1 | java | puppetlabs | 2.4.0 | 081cb24f-cec7-4c12-a203-5685edc1936d
---|--------------------------|--------------|---------|-------------------------------------

We can loop the module IDs to add them to the content view:

# for i in $(seq 1 38); do
    hammer content-view puppet-module add \
      --content-view "puppet_content" \
      --id "$i"; done

Step 7: Publish Puppet Content View

Let us check the environments that we have available before we publish the content view:

# hammer environment list
---|-----------
ID | NAME
---|-----------
2 | homelab
1 | production
---|-----------

The production environment is the default one, and the homelab environment is the one we created manually. Publish Puppet content view:

# hammer content-view publish \
  --name "puppet_content" \
  --description "Publishing Puppet modules"

As mentioned earlier, Katello creates a Puppet environment from the Puppet module content view the moment we publish it. Verify:

# hammer environment list
---|------------------------------------
ID | NAME
---|------------------------------------
3 | KT_lisenet_Library_puppet_content_4
2 | homelab
1 | production
---|------------------------------------

We can now associate a host or hostgroup with whatever Puppet environment we want, including the one created for the Puppet module content view.

Step 8: Backup Katello Configuration

Let us create a backup of our Katello configuration so that we don’t lose any changes that we’ve made so far:

# katello-backup /mnt/backup/ --features=all -y

Source

AWS Lambda announces service level agreement

Posted On: Oct 16, 2018

We have published a service level agreement (SLA) for AWS Lambda. We will use commercially reasonable efforts to make Lambda available with a Monthly Uptime Percentage for each AWS region, during any monthly billing cycle, of at least 99.95% (the “Service Commitment”). In the event Lambda does not meet the Service Commitment, you will be eligible to receive a Service Credit as described in the AWS Lambda Service Level Agreement.

AWS Lambda is a compute service that runs your code in response to triggers and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

This SLA is now available in all regions where Lambda is available. For more information on where AWS Lambda is available, see the AWS region table. Please visit our product page for more information about AWS Lambda.

Source

Debian 8.8 MATE Installation on Oracle VirtualBox


This video tutorial shows the Debian 8.8 MATE Desktop installation on Oracle VirtualBox step by step. The tutorial is also helpful if you want to install Debian 8.8 on a physical computer or laptop. We also install Guest Additions on the Debian 8.8 MATE Desktop for better performance and usability features: Automatic Resizing Guest Display, Shared Folder, Seamless Mode, Shared Clipboard, Improved Performance and Drag and Drop.

Debian GNU/Linux 8.8 MATE Desktop Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Debian 8.8 MATE Desktop Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Debian 8.8 MATE Desktop on Oracle VirtualBox

 

Debian 8.8 New Features and Improvements

Debian GNU/Linux 8.8 mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available. Those who frequently install updates from security.debian.org won’t have to update many packages, and most updates from security.debian.org are included in this update. Debian 8.8 is not a new version of Debian. It’s just a Debian 8 image with the latest updates of some of the packages. So, if you’re running a Debian 8 installation with all the latest updates installed, you don’t need to do anything.

Debian Website:

https://www.debian.org/

What is MATE Desktop?

The MATE Desktop Environment is the continuation of GNOME 2. It provides an intuitive and attractive desktop environment using traditional metaphors for Linux and other Unix-like operating systems. MATE is under active development to add support for new technologies while preserving a traditional desktop experience.

MATE Desktop Website:

http://mate-desktop.com/

Hope you found this Debian 8.8 MATE Desktop installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

How to install Ubuntu 18.04 LTS

(Last Updated On: September 1, 2018)

PREPARATION

1. Create bootable DVD or USB media.

* Download ISO image from https://www.ubuntu.com/download/desktop
* You can burn a bootable DVD in Windows 7 and up simply by inserting a blank DVD and then double-clicking the ISO file.
* Creating a bootable USB drive will require you to install software. Find out more here: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0 for Windows users and https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0 for Mac users.

2. Boot Ubuntu 18.04

* You will have to turn off Secure Boot in your computer’s BIOS settings to be able to boot from a DVD or USB drive.
* Once you get Ubuntu booted, select “Try Ubuntu” and take time to play around and ensure that all of your hardware is working properly.
* Check to see if you will need any proprietary drivers for your system.

3. Backup ALL Data You Wish To Keep!

* Do NOT use commercial backup software or the built-in Windows backup utility! Ubuntu MUST be able to read the files you create.
* Backups MUST be stored on a USB drive or other removable media.
* It is OK to store backup data in a Zip file. Ubuntu can open them with Archive Manager.

INSTALLATION

WARNING! Proceed at your own risk. Installing Ubuntu will wipe out your current Windows installation and all data you have stored on the computer. There is no way to “uninstall” Ubuntu!
* It is a good idea to have another computer, smartphone or tablet available so you can have access to the Internet in case you need to look something up.
* Turn off Secure Boot in your computer’s BIOS settings.
* Hook computer to the Internet with an Ethernet cable if drivers will be needed to use Wi-Fi.
* Boot Ubuntu
* Launch Ubuntu’s installer and follow the directions.
* Restart the computer. You are now running Ubuntu!

POST-INSTALLATION SETUP

* Review and change settings for Software Updater.
* Change to local mirrors (Optional)
* Install ALL updates and restart the computer.
* Check for and install drivers (see the example commands after this list).
* Restart the computer again.
* Install GNOME Tweaks
sudo apt install gnome-tweak-tool
* Configure the Desktop
* Setup Timeshift:
sudo apt-add-repository -y ppa:teejee2008/ppa
sudo apt install timeshift
* Optional: Install Google Chrome browser: https://www.google.com/chrome/index.html
Here’s how to activate and install GNOME Extensions with Chrome and Firefox: https://linuxconfig.org/how-to-install-gnome-shell-extensions-on-ubuntu-18-04-bionic-beaver-linux
* Install Ubuntu Restricted Extras:
sudo apt install ubuntu-restricted-extras
* Remove Fluendo mp3 codec:
sudo apt remove gstreamer1.0-fluendo-mp3
* Install GNOME Tracker for faster file operations in Nautilus:
sudo apt install tracker
* Update locate command database to activate search function. ( This command will be run automatically in about a day or so. Running it now is optional.)
sudo updatedb
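
For the driver check mentioned in the list above, Ubuntu ships the ubuntu-drivers tool; a quick way to list the detected devices and install the recommended proprietary drivers is:

sudo ubuntu-drivers devices
sudo ubuntu-drivers autoinstall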

More recommended software:
sudo apt install htop gdebi synaptic net-tools

Ubuntu is now fully ready to use. Have fun!

Please be sure to give EzeeLinux a ‘Like’ on Facebook! Thanks! https://www.facebook.com/EzeeLinux

Joe Collins

Joe Collins worked in radio and TV stations for over 20 years where he installed, maintained and programmed computer automation systems. Joe also worked for Gateway Computer for a short time as a Senior Technical Support Professional in the early 2000’s and has offered freelance home computer technical support and repair for over a decade.

Joe is a fan of Ubuntu Linux and Open Source software and recently started offering Ubuntu installation and support for those just starting out with Linux through EzeeLinux.com. The goal of EzeeLinux is to make Linux easy and start them on the right foot so they can have the best experience possible.

Joe lives in historic Portsmouth, VA in a hundred year old house with three cats, three kids and a network of computers built from scrounged parts, all happily running Linux.

Source

Creating REST API in Python

REST or Representational State Transfer is a software development style used mainly in API or Application Programming Interface design to build interactive and modern web services. It is also known as RESTful web service.

Python is a powerful programming language. It has many libraries for building REST or RESTful APIs. One of the popular libraries for building web apps and writing REST APIs is Flask.

In this article, I will show you how to create REST API in Python using Flask. Let’s get started.

You should have

  • Python 2 or Python 3 installed on your computer.
  • PIP or PIP3 installed on your computer.
  • The basic understanding of Python programming language.
  • The basic understanding of executing commands in the shell.

You should be able to find articles and tutorials on all these topics on LinuxHint.com

I will be using Python 3 on Debian 9 Stretch in this article. If you’re using Python 2, you will have to adjust a little bit. You should be able to figure it out yourself, as it will be as simple as writing python instead of python3 and pip instead of pip3.

Setting Up Virtual Environment:

To put it simply, virtual environment is used to isolate one Python app from another. The Python package used to do that is virtualenv.

You can easily install virtualenv using PIP on your computer with the following command:

$ sudo -H pip3 install virtualenv

Now create a project directory (let’s call it pyrest/) with the following command:
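
$ mkdir pyrest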

Now create a Python virtual environment on the pyrest/ project directory with the following command:
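
$ virtualenv pyrest/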

Now navigate into the project directory with the following command:
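
$ cd pyrest/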

Then, activate the Python virtual environment with the following command:
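
$ source bin/activate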

Finally, run the following command to install the Flask Python library:
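
$ pip install Flask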

Writing Your First Flask Script:

In this section, I will write a hello world program in Python Flask.

First, create a file hello.py in your project directory. Then add a minimal Flask application to the hello.py file and save it; a sketch along the following lines will do (the exact response text does not matter):
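
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # the response text is only an example
    return 'Hello from Flask!\n'

if __name__ == '__main__':
    # run the development server on 127.0.0.1:8080 as used below
    app.run(host='127.0.0.1', port=8080)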

In the next section, I will show you how to run Flask scripts.

Running Flask Script:

Now to start the hello.py Flask server, run the following command:
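
$ python3 hello.py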

As you can see, the server has started on http://127.0.0.1:8080.

Now, you can access the Flask server at http://127.0.0.1:8080 from a web browser or API testing software such as Postman. I am going to use CURL.

$ curl http://127.0.0.1:8080

As you can see, the correct output is printed on the screen.

Congrats! Flask is working.

Accessing Data Using GET in REST API:

A GET request on a REST API is used to fetch information from the API server. You set up some API endpoints and make a GET request to an endpoint. It’s simple.

First, create a new file get.py in your project directory. Then add the following lines to the get.py file and save it. The account entries are dummy data and only illustrative; the layout is arranged so that the line numbers referenced below match this sketch:
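
from flask import Flask, jsonify

app = Flask(__name__)

accounts = [
    {'name': 'Alice', 'balance': 100},
    {'name': 'Bob', 'balance': 200},
]

@app.route('/accounts', methods=['GET'])
def getAccounts():
    return jsonify(accounts)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)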

Here, on line 1, the Flask constructor function and the jsonify function are imported from the flask module.

On line 3, a Flask object is created and stored on app variable.

On line 5, I created a Python array of dictionaries of some dummy data and stored it in the accounts variable.

On line 10, I defined the API endpoint /accounts and the request method, which is GET.

On line 11, I defined the function getAccounts(). getAccounts() function will execute when a GET request to /accounts endpoint is made.

On line 12, which is part of the getAccounts() function, I converted the accounts array of dictionaries to JSON using the jsonify() function and returned it.

On lines 14-15, I called app.run() to tell Flask to run the API server on port 8080.

Now run the Flask API server with the following command:
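
$ python3 get.py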

The server has started on port 8080.

Now make a GET request to the /accounts endpoint with CURL as follows:

$ curl http://127.0.0.1:8080/accounts

As you can see, the accounts data is displayed in JSON format for the GET request on the /accounts endpoint.

You can also get the data of a specific account. To do that, I am going to create another API endpoint /account/<id>. Here, <id> will be the ID of the account holder. The ID here is the index of the array.

Edit the get.py script and add a handler for the new endpoint just above the if __name__ block. A sketch of the lines to add (they fall on lines 14-17 of the updated script) looks like this:
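
@app.route('/account/<id>', methods=['GET'])
def getAccount(id):
    id = int(id) - 1
    return jsonify(accounts[id])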

Here, on line 14, I defined the API endpoint /account/<id> and the method to be used, which is GET.

On lines 15-17, the function getAccount() for the API endpoint /account/<id> is defined. The getAccount() function accepts id as an argument. The value of <id> from the API endpoint is set to the id variable of the getAccount() function.

On line 16, the id variable is converted to an integer, and 1 is subtracted from it, because the array index starts at 0 while I want account IDs to start at 1. So if I put 1 as the account <id>, then 1 – 1 = 0, and I get the element at index 0 of the accounts array.

On line 17, the array at index <id> is returned as JSON.

The rest of the codes are the same.

Now run the API server again.
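
$ python3 get.py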

I requested data for account 1 and 2 separately and I got the expected output as you can see from the screenshot below.

$ curl http://127.0.0.1:8080/account/1
$ curl http://127.0.0.1:8080/account/2

Adding Data Using POST in REST API:

Now I am going to rename get.py to api.py and add an API endpoint /account for adding new data.

Rename get.py to api.py:
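
$ mv get.py api.py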

First, add a POST handler for the /account endpoint to the api.py file (roughly lines 19-26 of the finished script). A sketch follows; the handler name is illustrative, and request must also be added to the flask import on line 1:
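
@app.route('/account', methods=['POST'])
def addAccount():
    # request must also be imported from flask on line 1
    account = request.get_json()
    accounts.append(account)
    return jsonify(account)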

Now run the api.py server:
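
$ python3 api.py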

To insert new data into the /account endpoint, run the following command:

$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Shovon", "balance": 100}' \
http://127.0.0.1:8080/account

NOTE: Here, '{"name": "Shovon", "balance": 100}' is the JSON input data.

The data should be inserted.

As you can see, the new data is added.

So that’s it for this article. Thanks for reading.

Source

Google Adds Kubernetes to Rebranded Cloud Marketplace | Enterprise

Google Adds Kubernetes to Rebranded Cloud Marketplace

Google’s goal is to make containers accessible to everyone, especially the enterprise, according to Anil Dhawan, product manager for the Google Cloud Platform.

When Google released Kubernetes as open source, one of the first challenges that the industry tackled was management, he said.

Google’s hosted Kubernetes Engine takes care of cluster orchestration and management. A bigger challenge is getting apps running on a Kubernetes cluster, which can be a manual, time-consuming process. GCP Marketplace provides prepackaged apps and deploys them onto any cluster, Dhawan noted.

Google makes the process safer by testing and vetting all Kubernetes apps listed on GCP Marketplace. That process includes vulnerability scanning and partner agreements for maintenance and support.

The security umbrella extends to all solutions available through the marketplace. That includes virtual machines, managed services, data sets, application programming interfaces, and Software as a Service.

The name change at one level is purely an effort to heighten the visibility of the Google Cloud Platform brand and to point attention to the new marketplace for ready-to-deploy apps, suggested Charles King, principal analyst at Pund-IT.

“Ideally, it will move interested businesses toward the marketplace, meaning that developers will see improved sales of their apps for GCP,” he told LinuxInsider.

More Behind the Move

Ultimately, Google’s enterprise cloud platform rebranding should make life easier for people managing container environments, said King. That will be the case especially if they happen to be studying or considering apps to buy and deploy.

“The impact on hybrid/multicloud is a bit harder to parse, though,” said King. “If the effort succeeds, it should impact Google’s GCP-related sales and business for the better.”

Google’s marketing move could be important for the future of hybrid and multicloud strategies, said Glen Kosaka, vice president of product at Kubernetes security firm NeuVector.

“This is one really important step towards supporting and simplifying app deployment across clouds,” he told LinuxInsider.

Further, developers now have access to apps that can boost their own apps without having to worry about production deployment and scaling issues, noted Kosaka.

That should be a big deal to many devs, he added.

“Container management of marketplace apps now becomes more simplified, and customers — those responsible for container management — have the confidence that these Google Marketplace apps are tested and compatible with their cloud infrastructure,” Kosaka said.

Broader View Counts

Looking at the news in a strict and narrow sense, Google’s action appears to be little more than a rebranding with a clearer, more descriptive name. That is a fairly sensible move, suggested Alex Gounares, CEO of Polyverse.
“From a broader perspective, this is the proverbial tip of the iceberg around a series of much bigger industry shifts to server-less computing and Containers as a Service,” he told LinuxInsider.

For one thing, Google’s rebranded platform means changes for developers. In the Internet’s early years, you had to build your own data centers, and build and manage your own applications. Effectively, everything was hand-built, on-premises and expensive, Gounares explained.

Then Salesforce.com came along, and the Software as a Service revolution was born. The same apps could be run in the cloud and delivered via a Web page.

That led to Amazon Web Services and to other cloud services providers letting folks effectively rent a data center on demand — the Infrastructure as a Service revolution.

For the application developer, physically acquiring the *hardware* became trivial, but from a software perspective, actually getting everything set up, configured, and running essentially was just as complicated as running things on premises, said Gounares.

Big Deal for Devs

Containers have revolutionized that. Now all of the complexity of something like a database or content management system or similar software can be packaged in a neat little box, according to Gounares.

That box can run alongside all the other pieces needed for a full solution. Configuration and management that used to take days or weeks to accomplish now can be done in a single command line.

“Renaming the service to address making one-click deployment of containers, and to open up new business models for software platform technology is a big, big, big deal,” remarked Gounares. “It is doing for software what Amazon AWS did for hardware and data centers.”

Deployment Factors

A big advantage to containers is their portability across environments. Users can develop their content and then move their workloads to any production environment, noted Google’s Dhawan.

Google works with open source Special Interest Groups, or SIGs, to create standards for Kubernetes apps. This brings the expertise of the open source community to the enterprise.

Google’s enhanced cloud platform speeds deployment on Kubernetes clusters, Kubernetes Engine, on-premises servers or other public clouds. A Marketplace window displays directly in the Kubernetes Engine console. That process involves clicking-to-deploy and specifying the location.

Third-party partners develop commercial Kubernetes apps, which come with support and usage-based billing on many parameters, such as API calls, number of hosts, and storage per month.

Google uses simplified license usage and offers more consumption options. For instance, the usage charges for apps are consolidated and billed through GCP, no matter where they are deployed. However, the non-GCP resources on which they run are not included, according to Dhawan.

A Win-Win Proposition

It is going to be orders of magnitude easier for developers to deploy both commercial and open source applications on Kubernetes. The launch of GCP Marketplace solves problems around scalability, operating in a virtual private cloud, and billing, noted Dan Garfield, chief evangelist at Codefresh.

“For example, with Codefresh, our users can deploy our hybrid agent on Kubernetes so they can build on their own clusters, access code in their virtual private network, and bill everything through Google,” he told LinuxInsider. “As a third party, this makes the billing PO approval process a lot easier with enterprise customers where you normally have to become an approved vendor.”

For those responsible for container management, DevOps teams can consume “official” versions of software built for Kubernetes and keep them up to date, he added. This is especially important when you consider the security issues people have had with Dockerhub.

“The GCP Marketplace supports much more complex software stacks, and you can get the official packages rather than whatever someone has built and pushed to Dockerhub,” Garfield said.

What’s Available

GCP Marketplace features popular open source projects that are ready to deploy into Kubernetes. Each app includes clustered images and documented upgrade steps. This makes them ready to run in production. Packaged and maintained by Google Cloud, they implement best practices for running on Kubernetes Engine and GCP, according to Google.

Here is a sampling of some of the hundreds of apps featured on the GCP Marketplace:

  • WordPress for blogging and content management;
  • InfluxDB, Elasticsearch and Cassandra for big data and database;
  • Apache Spark for big data/analytics;
  • RabbitMQ for networking; and
  • NGinx for Web serving.

A complete listing of apps available on the GCP Marketplace is available here. No signup is needed to view the full list of available solutions.

Good for Hybrid and Multicloud Too

Anything that creates an additional wave of cloud-native applications is really great for the entire cloud marketplace. This includes different public cloud vendors, private cloud solutions, and even edge computing vendors, according to Roman Shaposhnik, VP for product & strategy at Zededa.

“The platform can only be as successful as the number of killer apps that it hosts. This move by Google creates just the right kind of dynamics to get us the next crop of candidates for a killer cloud-native app,” he told LinuxInsider.

Hybrid cloud and multicloud deployments still have some way to go, however. What is missing is a way for seamlessly stretching container orchestrators across geographically distant locations, suggested Gaurav Yadav, founding engineering and product manager at Hedvig.

“Unless we standardize container management operations across data centers and cloud locations separated by an unreliable network, true cloud-agnostic applications will always be hard to materialize,” he told LinuxInsider.

VMware became the de facto standard for virtual machine management because it took it out of the hands of admins, Yadav said. VMware made it simple, automated and scalable.

“For cloud-native applications, containers have the potential of replacing VMs as the standard for resource virtualization,” he suggested. “This is only possible if we bring the advanced capabilities that have been built over a decade for VM orchestration to the container management. This announcement is the next step towards this goal.”

The Right Move

Fundamentally, Google’s actions create a huge incentive for developers to transition their apps to a cloud-native architecture, said Zededa’s Shaposhnik.

“I would expect all of these to start being refactored and packaged for the GCP Marketplace,” he said, “and that is a good thing that goes way beyond immediate value on Google’s own cloud. Once an application is refactored and packaged to be truly cloud-native, you can transition between different clouds — including on-premises private clouds — in a much easier fashion.”

For container management, it is “yet another move in the right direction of making containers a ubiquitous, but completely invisible part of the IT infrastructure,” Shaposhnik added.

The Bottom Line

The changes that Google implemented are good news for developers, said Oleg Atamanenko, lead platform developer at Kublr.

“It is a big step towards simplifying the experience when trying new applications,” he told LinuxInsider.

For IT management, on the other hand, the changes in Google’s GCP Marketplace mean cost reduction, reduced time-to-market, and faster innovation through streamlining application installation, Atamanenko said.

For developers, a change in name means little, but the direction Google has taken means a step forward in the enterprise world, said Stefano Maffulli, community director at Scality.

Still, there is a down side, he told LinuxInsider.

Bitnami, which has been pushing to be the packaging tool to ship applications to the clouds, added support for Kubernetes early on.

“Now Google is making them less relevant on GCP, which can be a threat,” said Maffulli. “I wonder how long Bitnami and Google will stay partners.”

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Source

Things You Should Know : Wireless Hacking Intermediate

In the previous post in the ‘things you should know’ series I discussed Wireless Hacking basics. It’s recommended that you go through it before starting this tutorial.

Pre-requisites

You should know (all this is covered in Wireless Hacking basics)-

  • What are the different flavors of wireless networks you’ll encounter and how difficult it is to hack each of them.
  • What are hidden networks, and whether they offer a real challenge to a hacker.
  • Have a very rough idea how each of the various ‘flavors’ of wireless networks is actually hacked.

Post-reading

You will know –

  • Know even more about different flavors of wireless networks.
  • A rough idea about the cryptographic aspects of each ‘flavor’ of wireless network security, the attacks, the vulnerabilities and the exploits.
  • How to go about hacking any given wireless network.
  • Common tools and attacks that are used in wireless hacking.

The last two points will be covered in detail in the coming posts.

Pirates of the Caribbean

Suppose you are in the ship manufacturing business. These were times when pirates were rampaging the seas. You observed how the merchant ships were all floating unguarded in the seas, and the pirate industry was booming because of easy targets. You decided to create fortified ships, which could defend themselves against the pirates. For this, you used an alloy X. Your idea was appreciated by merchants and everyone started using your ships….

The most iconic pirates of modern times

Unfortunately, your happiness was short lived. Soon, the pirates found out flaws in your ships, and any pirate who knew what he was doing could easily get past your ship’s defense mechanisms. For a while you tried to fix the known weaknesses in the ship, but soon realized that there were too many problems, and that the very design of the ship was flawed.

You knew what flaws the pirates were exploiting, and could build a new and stronger ship. However, the merchants weren’t willing to pay for new ships. You then found out that by remodeling some parts of the ship in a very cost efficient way, you could make the ship’s security almost impenetrable. In the coming years, some pirates found a few structural weaknesses in alloy X, and some issues with the core design of the ship (remnant weaknesses of the original ship). However, these weaknesses were rare and your customers were overall happy.

After some time you decided to roll out an altogether new model of the ship. This time, you used a stronger alloy, Y. Also, you knew all the flaws in the previous versions of the ship, and didn’t make any errors in the design this time. Finally, you had a ship which could withstand constant bombardment for months on end without collapsing. There was still scope for human error, as the sailors can sometimes be careless, but other than that, it was an invincible ship.

WEP, WPA and WPA-2

WEP is the flawed ship in the above discussion. The aim of the Wireless Alliance was to write an algorithm to make wireless networks (WLANs) as secure as wired networks (LANs). This is why the protocol was called Wired Equivalent Privacy (privacy equivalent to the one expected in a traditional wired network). Unfortunately, while in theory the idea behind WEP sounded bullet-proof, the actual implementation was very flawed. The main problems were static keys and weak IVs. For a while attempts were made to fix the problems, but nothing worked well enough (WEP2, WEPplus, etc. were made, but all failed).

WPA was a new WLAN standard which was compatible with devices using WEP encryption. It fixed pretty much all the flaws in WEP encryption, but the limitation of having to work with old hardware meant that some remnants of WEP’s problems would still continue to haunt WPA. Overall, however, WPA was quite secure. In the above story, this is the remodeled ship.

WPA-2 is the latest and most robust security algorithm for wireless networks. It wasn’t backwards compatible with many devices, but these days all the new devices support WPA-2. This is the invincible ship, the new model with a stronger alloy.

But wait…

In the last tutorial I assumed WPA and WPA-2 are the same thing. In this one, I’m telling you they are quite different. What’s the matter?

Well actually, the two standards are indeed quite different. However, while it’s true there are some remnant flaws in WPA that are absent in WPA-2, from a hacker’s perspective, the technique to hack the two networks is often the same. Why?

 

  • Very few tools exist which carry out the attacks against WPA networks properly (the absence of proof-of-concept scripts means that you have to do everything from scratch, which most people can’t).
  • All these attacks work only under certain conditions (key renewal period must be large, QoS must be enabled, etc.)

Because of these reasons, despite WPA being a little less secure than WPA-2, most of the time a hacker has to use brute-force/dictionary attacks and other methods that he would use against WPA-2, practically making WPA and WPA-2 the same thing from his perspective.

PS: There’s more to the WPA/WPA-2 story than what I’ve captured here. Actually, WPA and WPA-2 are ambiguous descriptions, and the actual intricacy (PSK, CCMP, TKIP, X/EAP, AES with respect to the cipher and authentication used) would require diving further into the personal and enterprise versions of WPA as well as WPA-2.

How to Hack

Now that you know the basics of all these networks, let’s get to how these networks are actually hacked. I will only name the attacks; further details will be provided in the coming tutorials-

WEP

The initialization vector (IV) passed to the RC4 cipher is the weakness of WEP

Most of the attacks rely on inherent weaknesses in IVs (initialization vectors). Basically, if you collect enough of them, you will get the password.

  1. Passive method
    • If you don’t want to leave behind any footprints, then passive method is the way to go. In this, you simply listen to the channel on which the network is on, and capture the data packets (airodump-ng). These packets will give you IVs, and with enough of these, you can crack the network (aircrack-ng). I already have a tutorial on this method, which you can read here – Hack WEP using aircrack-ng suite.
  2. Active methods
  • ARP request replay – The above method can be incredibly slow, since you need a lot of packets (there’s no way to say how many; it can literally be anything due to the nature of the attack, though usually the number of packets required ends up in five digits). Getting that many packets can be time consuming. However, there are many ways to speed up the process. The basic idea is to initiate some sort of conversation in the network, and then capture the packets that arise as a result of the conversation. The problem is, not all packets have IVs. So, without having the password to the AP, you have to make it generate packets with IVs. One of the best ways to do this is by requesting ARP packets (which have IVs and can be generated easily once you have captured at least one ARP packet). This attack is called the ARP replay attack. We have a tutorial for this attack as well, ARP request replay attack.
  • Chopchop attack
  • Fragmentation attack
  • Caffe Latte attack

I’ll cover all these attacks in detail separately (I really can’t summarize the bottom three). Let’s move to WPA-

WPA-2 (and WPA)

There are no vulnerabilities here that you can easily exploit. The only two options we have are to guess the password or to fool a user into giving us the password.

  1. Guess the password – For guessing something, you need two things: guesses (duh) and validation. Basically, you need to be able to make a lot of guesses, and also be able to verify whether they are correct or not. The naive way would be to enter the guesses into the password field that your OS provides when connecting to the wifi. That would be slow, since you’d have to do it manually. Even if you write a script for that, it would take time, since you have to communicate with the AP for every guess (multiple times for each guess, in fact). Basically, validation by asking the AP every time is slow. So, is there a way to check the correctness of our password without asking the AP? Yes, but only if you have a 4-way handshake. Basically, you need to capture the series of packets transmitted when a valid client connects to the AP. If you have these packets (the 4-way handshake), then you can validate your password against it. More details on this later, but I hope the abstract idea is clear. There are a few different ways of guessing the password :-
  • Bruteforce – Tries all possible passwords. It is guaranteed that this will work, given sufficient time. However, even for alphanumeric passwords of length 8, bruteforce takes incredibly long. This method might be useful if the password is short and you know that it’s composed only of numbers.
  • Wordlist/Dictionary – In this attack, there’s a list of words which are possible candidates for the password. These wordlist files contain English words, combinations of words, misspellings of words, and so on. There are some huge wordlists which are many GBs in size, and many networks can be cracked using them. However, there’s no guarantee that the network you are trying to crack has its password in the list. These attacks complete within a reasonable timeframe (a sample aircrack-ng session is sketched at the end of this section).
  • Rainbow table – The validation process against the 4-way handshake that I mentioned earlier involves hashing the plaintext password, which is then compared with the hash in the handshake. However, hashing (WPA uses PBKDF2) is a CPU intensive task and is the limiting factor in the speed at which you can test keys (this is the reason why there are so many tools which use the GPU instead of the CPU to speed up cracking). Now, a possible solution to this is that the person who created the wordlist/dictionary we are using can also convert the plaintext passwords into hashes so that they can be checked directly. Unfortunately, WPA-2 uses a salt while hashing, which means that two networks with the same password can have different hashes if they use different salts. How does WPA-2 choose the salt? It uses the network’s name (SSID) as the salt. So two networks with the same SSID and the same password would have the same salt. So, now the guy who made the wordlist has to create separate hashes for all possible SSIDs. In practice, hashes are generated for the most common SSIDs (the defaults when a router is purchased, like linksys, netgear, belkin, etc.). If the target network has one of those SSIDs, then the cracking time is reduced significantly by using the precomputed hashes. This precomputed table of hashes is called a rainbow table. Note that these tables are significantly larger than the wordlist files. So, while we save ourselves some time while cracking the password, we have to use a much larger file (some are 100s of GBs) instead of a smaller one. This is referred to as a time-memory tradeoff. This page has rainbow tables for the 1000 most common SSIDs.

 

  • Fool a user into giving you the password – Basically this is just a combination of man in the middle attacks and social engineering attacks. More specifically, it is a combination of evil twin and phishing. In this attack, you first force a client to disconnect from the original WPA-2 network, then force him to connect to a fake open network that you create, and then send him a login page in his browser where you ask him to enter the password of the network. You might be wondering why we need to keep the network open and then ask for the password in the browser (can’t we just create a WPA-2 network and let the user give us the password directly?). The answer to this lies in the fact that WPA-2 performs mutual authentication during the 4-way handshake. Basically, the client verifies that the AP is legit and knows the password, and the AP verifies that the client is legit and knows the password (throughout the process, the password is never sent in plaintext). We just don’t have the information necessary to complete the 4-way handshake.
  • Bonus: WPS vulnerability and reaver [I have covered it in detail separately, so I’m not explaining it again (I’m only human, and a very lazy one too)]

 

The WPA-2 4 way handshake procedure. Both AP and the client authenticate each other
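
For reference, a typical handshake capture and dictionary attack with the aircrack-ng suite looks roughly like this; the interface name, BSSID, channel and wordlist below are placeholders, not values from any specific network:

airmon-ng start wlan0                                                  # put the card into monitor mode
airodump-ng --bssid AA:BB:CC:DD:EE:FF -c 6 -w wpa_capture wlan0mon     # capture until a handshake is logged
aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF wlan0mon                   # optional: force a client to reconnect
aircrack-ng -w wordlist.txt -b AA:BB:CC:DD:EE:FF wpa_capture-01.cap    # dictionary attack on the capture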

Tools (Kali)

In this section I’ll name some common tools in the wireless hacking category which come preinstalled in Kali, along with the purpose they are used for.

  1. Capture packets
    • airodump-ng
    • wireshark (really versatile tool, there are books just covering this tool for packet analysis)
  2. WPS
    • reaver
    • pixiewps (performs the "pixie dust attack")
  3. Cool tools
    • aireplay-ng (WEP mostly)
    • mdk3 (cool stuff)
  4. Automation
    • wifite
    • fluxion (actually it isn't a common script at all, but since I wrote a tutorial on it, I'm linking it)

You can find more details about all the tools installed on the Kali Tools page.

Okay guys, this is all that I had planned for this tutorial. I hope you learnt a lot of stuff. Will delve into further depths in coming tutorials.

Source

Stranded Deep adds a new experimental couch co-op mode to survive together

Fancy surviving on a desert island with a friend? That’s now possible with a new experimental build of Stranded Deep [Steam].

To go along with this new feature, they also added a Player Ragdoll for when you’re knocked out or dead. Your partner can help you up with bandages before you bleed out, and bodies can be dragged as well for maximum fun. It’s good to see them add more from their roadmap, with plenty more still to come before it leaves Early Access.

They also added a Raft Passenger Seat, fixed a bunch of bugs and updated Unity to “2017.4.13f1”. Also the shark music won’t play until you’re actually attacked so no more early warnings for you.

To access it, you will need to opt-in to the “experimental” Beta on Steam.

Source
