Linux Mint 19 “Tara” Xfce released! – The Linux Mint Blog

The team is proud to announce the release of Linux Mint 19 “Tara” Xfce Edition.

Linux Mint 19 Tara Xfce Edition

Linux Mint 19 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 19 Xfce”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 19 Xfce

System requirements:

  • 1GB RAM (2GB recommended for a comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

Announcements will be made shortly with instructions on how to upgrade from Linux Mint 18.3.

If you are running the BETA, perform a system snapshot, use the Update Manager to apply available updates, run the following commands and reboot:

apt remove ttf-mscorefonts-installer

apt install libreoffice-sdbc-hsqldb sessioninstaller ttf-mscorefonts-installer xserver-xorg-input-synaptics

sudo rm -f /etc/systemd/logind.conf

apt install --reinstall -o Dpkg::Options::="--force-confmiss" systemd

sudo rm -f /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.
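As a sketch of what those checks look like on the command line (the file names below are assumptions and depend on the edition you downloaded and the checksum files published alongside it):

# integrity: compare the ISO's SHA256 sum with the one listed in sha256sum.txt
sha256sum -b linuxmint-19-xfce-64bit.iso
grep linuxmint-19-xfce-64bit.iso sha256sum.txt

# authenticity: verify the signature on the checksum file with the Mint signing key
gpg --verify sha256sum.txt.gpg sha256sum.txt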

Enjoy!

We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun with this new release!

Source

Linux Scoop — Linux Mint 19 Cinnamon Edition

Linux Mint 19 Cinnamon Edition – See What’s New

Linux Mint 19 has been released and announced by the Linux Mint project. It is now available to download in Cinnamon, MATE and Xfce editions, for both 32-bit and 64-bit architectures.

Based on Ubuntu 18.04 LTS and powered by the Linux kernel 4.15 series, Linux Mint 19 includes a new tool for creating snapshots of the system, called Timeshift, which lets users restore a previous version of the system. The Software Manager supports Flatpak packages, the distribution ships with a brand new welcome screen, and the Update Manager has been improved.

Linux Mint 19 Cinnamon Edition features the latest Cinnamon desktop, version 3.8. Cinnamon 3.8 feels snappier because it is faster and more efficient at launching applications and rendering new windows.

The Nemo file manager's search was simplified and is easier to use. Notifications are smarter: they now have a close button (which, unlike the notification itself, doesn't send you to the source application) and no longer fade out on mouse-over. The maximum sound volume can now be set as high as 150%, and more.

Download Linux Mint 19 : https://www.linuxmint.com/download.php

Source

How to Enable and Disable Root Login in Ubuntu – LinuxCloudVPS Blog

How to Enable and Disable Root Login in Ubuntu

Root access is required when you need to perform administrative operations that are not permitted for regular system users, but at the same time root access can be a huge security risk if it is enabled and not used properly. In this tutorial we will show you how to enable and disable root login on a Linux VPS running Ubuntu as the operating system.

What is root?

In Ubuntu, and in Linux in general, there is a superuser named root that can perform any administrative task on the system. Mistyping a command as root can be really dangerous, so root login is disabled by default in Ubuntu. You can still perform superuser operations with your regular system user by using the sudo command, provided sudo privileges are granted to that user.

If root login is disabled on your Ubuntu VPS and you want to enable it, we will show you how to do that. Please follow the steps below.

Enable Root Login on Ubuntu

To enable root login on your Ubuntu server, first you need to set a password for the root user, as it is not set during the OS installation. You can set the root password with the following command:

sudo passwd root

You will be prompted to enter a new password. Enter the same password twice to confirm it and it will be updated successfully. Our recommendation is to use a very strong password for your root user so that it cannot be compromised by brute force. Generally, a password of at least 12 characters including alphanumeric and grammatical symbols is sufficient. Never use passwords based on dictionary words or significant dates.

# sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Now that you have your root user enabled, you can check the OpenSSH settings to make sure the root user can use the service to access the server from remote locations. Open the OpenSSH server configuration file using a text editor of your choice. In our example, we are using nano:

sudo nano /etc/ssh/sshd_config

Find the line that starts with PermitRootLogin and make sure that line is not commented:

PermitRootLogin yes

If the line starts with #, it is commented out; remove the # sign and save the file. Next, you need to restart the OpenSSH service for the changes to take effect. You can do that with the following command:

sudo systemctl restart ssh.service

You can now connect to your server via SSH using your root user. Be careful though, with root login comes great responsibility.

You can also consider using SSH keys (a private and public key pair) to log in to your server. This method provides a more secure way of connecting to your server than just using a password.
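As a minimal sketch (the server address below is a placeholder), generating a key pair on your local machine and copying the public key to the server looks like this:

# generate a key pair locally
ssh-keygen -t ed25519

# copy the public key to the server's root account (password login must still work for this step)
ssh-copy-id root@your-server-ip

# then log in using the key
ssh root@your-server-ip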

Disable Root Login on Ubuntu

If you have root login enabled on your Ubuntu VPS and you want it to be disabled you can follow the steps below.

First, delete the password of your root user and lock the root user using the following command:

sudo passwd -dl root

Then, open and edit the OpenSSH server configuration file using a text editor of your choice. We are using nano in our example:

sudo nano /etc/ssh/sshd_config

Find the line that starts with PermitRootLogin and make sure the value is set to no.

PermitRootLogin no

Once you make the appropriate changes in the OpenSSH configuration file, you need to restart the OpenSSH service for the changes to take effect. You can do that by using the following command:

sudo systemctl restart ssh.service

Of course, you don’t have to enable or disable root login on Ubuntu yourself if you use one of our Linux VPS Hosting services, in which case you can simply ask our expert Linux admins to enable or disable root login on Ubuntu for you. They are available 24×7 and will take care of your request immediately.

PS. If you liked this post on how to Enable and Disable Root Login in Ubuntu, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.


Source

How To Setup mod_rewrite In Apache

Mod_rewrite on Apache

mod_rewrite is an Apache module, installed on Linux servers, that manipulates URLs submitted in the browser so they can be mapped to something other than what they appear to be. mod_rewrite can also improve SEO by giving dynamic URLs a static appearance.

This guide assumes you already have Apache installed; if you do not, please see How to Install Apache.

Enable mod_rewrite

You will want to edit the main Apache configuration file

nano /etc/httpd/conf/httpd.conf

Add or un-comment the following line

LoadModule rewrite_module modules/mod_rewrite.so

Once you have saved the file you can go ahead and restart Apache

systemctl restart httpd

or in CentOS 6 or below

service httpd restart

You should now see the module loaded by doing the following command

# httpd -M 2>&1|grep rewrite
rewrite_module (shared)

That completes enabling the module. mod_rewrite rules can be placed either directly in the VirtualHost block for a specific domain or in a .htaccess file for that domain.
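For reference, a minimal sketch of a VirtualHost that allows rewrite rules in a .htaccess file could look like this (the domain and paths are placeholders):

<VirtualHost *:80>
    ServerName domain.com
    DocumentRoot /var/www/domain.com
    <Directory /var/www/domain.com>
        # AllowOverride All is what lets a .htaccess file in this directory use RewriteEngine/RewriteRule
        AllowOverride All
    </Directory>
</VirtualHost>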

Mod_rewrite Examples

Rewrite domain.com to www.domain.com

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

The above rules will take all requests to the non-www domain and redirect them with a 301 code to the www.domain.com URL, appending the rest of the URL to it.

Redirect all requests to https / SSL

RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain.com [NC]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI}

The above redirect will take all non-ssl requests and redirect them to https:// URLs.

Redirect request from one directory to another

RewriteRule ^subdirectory/(.*)$ /anotherdirectory/$1 [R=301,NC,L]

The above redirect will take any requests towards a single directory and redirect them to another directory with the rest of the URL appended.

Redirect one domain to another

RewriteEngine On
RewriteCond %{HTTP_HOST} ^olddomain.com [NC,OR]
RewriteCond %{HTTP_HOST} ^www.olddomain.com [NC]
RewriteRule ^(.*)$ http://newdomain.com/$1 [L,R=301,NC]

This will redirect any requests with the destination of the old domain and change them to the new domain. There are numerous redirects you can perform with mod_rewrite; these are just a couple of common examples.

Sep 5, 2017 – LinuxAdmin.io

Source

Katello: Separate Lifecycle for Puppet Modules | Lisenet.com :: Linux | Security

Working with Katello, we’re going to configure a separate lifecycle for Puppet modules. This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have Katello installed on a CentOS 7 server:

katello.hl.local (10.11.1.4) – see here for installation instructions

See the image below to identify the homelab part this article applies to.

Separate Lifecycle for Puppet Modules

The idea for using a separate lifecycle for Puppet modules was taken from a Red Hat blog post that was published by Maxim Burgerhout.

We already know that we can create a repository that contains RPM files. We can then create a content view by snapshotting the repository.

We can create a content view with Puppet modules, just like we would do with RPMs. Based on that content view, Katello creates a special directory on the filesystem and it’s where the Puppet master looks for Puppet modules.

Katello creates a Puppet environment from the Puppet module content view the moment we publish it. As a result, using a Puppet module content view as a Puppet environment directly makes it easy to iterate quickly during development of our homelab Puppet modules.

The Plan

Below is a step-by-step plan that we’ll be following in this article.

  1. Step 1: create a Puppet product.
  2. Step 2: build Puppet modules.
  3. Step 3: create a Puppet repository.
  4. Step 4: sync Puppet repository.
  5. Step 5: create a content view.
  6. Step 6: add Puppet modules to the content view.
  7. Step 7: publish Puppet content view.
  8. Step 8: backup Katello configuration.

Configure Katello

Step 1: Create a Puppet Product

# hammer product create --name "puppet"

Step 2: Build Puppet Modules

See here for more info: Build and Import Puppet Modules into Katello

The idea here is to have a single Katello repository containing all our Puppet modules.

A Katello repository may be a plain directory containing a Pulp manifest and packaged Puppet modules. According to the Pulp project documentation, the Pulp manifest is a file listing each Puppet module contained in the directory. Each module is listed on a separate line which has the following format: <name>,<checksum>,<size>. The name is the file name, the checksum is SHA256 digest of the file, and the size is the size of the file in bytes. The Pulp manifest must be named PULP_MANIFEST. Having all this information, we can build Puppet modules manually, generate a Pulp manifest and import everything into Katello.
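The build script used below generates this file for us, but as a sketch of what it contains, a manifest for already-packaged .tar.gz modules could be produced by hand like this (the directory path is assumed to hold the packaged archives):

cd /etc/puppetlabs/code/environments/homelab/modules/
for f in *.tar.gz; do
  # <file name>,<SHA256 digest>,<size in bytes>
  printf '%s,%s,%s\n' "$f" "$(sha256sum "$f" | awk '{print $1}')" "$(stat -c %s "$f")"
done > PULP_MANIFEST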

Get the source from GitHub:

# cd /opt
# git clone https://github.com/crylium/build-puppet-modules-for-katello.git

Build the modules, providing the path to the modules’ directory:

# bash ./build-puppet-modules-for-katello/puppet-module-build.sh \
/etc/puppetlabs/code/environments/homelab/modules/

This will also create the file PULP_MANIFEST.

Step 3: Create a Puppet Repository

# hammer repository create \
--product "puppet" \
--name "homelab_modules" \
--content-type "puppet" \
--url "file:///etc/puppetlabs/code/environments/homelab/modules/"

Step 4: Synchronise Puppet Repository

# hammer repository synchronize \
--product "puppet" \
--name "homelab_modules"

Step 5: Create a Content View

# hammer content-view create \
--name "puppet_content" \
--description "Puppet modules"

Step 6: Add Puppet Modules to the Content View

View the module list:

# hammer puppet-module list
—|————————–|————–|———|————————————-
ID | NAME | AUTHOR | VERSION | UUID
—|————————–|————–|———|————————————-
38 | graylog | graylog | 0.6.0 | f27d9a89-9e0a-44fe-b72d-f101d94629a4
37 | sudo | saz | 5.0.0 | f088fa68-bfa3-4429-a8f2-f9c893d52bfc
36 | ruby | puppetlabs | 1.0.0 | eaaef4ba-bf52-4275-8eff-0340d98aa3f7
35 | archive | puppet | 2.3.0 | e09d2bc5-ec62-488c-a1a8-df6364448378
34 | elasticsearch | elastic | 6.2.1 | d965e7b4-ec88-4813-b575-745f9e78c2f1
33 | augeasproviders_shellvar | herculesteam | 2.2.2 | cbbe2521-890b-476d-b3b5-beef1b72fd73
32 | haproxy | puppetlabs | 2.1.0 | c9113401-719a-4d19-8ee8-8faca9a30317
31 | mongodb | puppet | 2.1.0 | c8e47d0c-e54c-4cef-9b16-c1bad02e7fba
30 | sysctl | thias | 1.0.6 | c23fabcc-0d62-4ecb-8ac3-ebe06e9772e6
29 | nfs | derdanne | 2.0.7 | c09f3853-43a8-4d30-b81d-7ce160d8b3b8
28 | stdlib | puppetlabs | 4.24.0 | 9ec2939a-3b08-4fbe-a7ff-1c34984350d7
27 | ssh | saz | 3.0.1 | 99b1c530-fbe7-487a-8842-cfeacc688b74
26 | apache | puppetlabs | 2.3.1 | 93f56575-da3d-41b6-964c-a70af87bcb0c
25 | concat | puppetlabs | 2.2.1 | 9379ce64-6135-4b17-a1c3-5731b0ac89c3
24 | mysql | puppetlabs | 5.3.0 | 92695de8-45c0-4271-832c-5721bdb5ffd9
23 | openldap | camptocamp | 1.16.1 | 924b998d-b361-4f75-9e41-55f825d209da
22 | accounts | puppetlabs | 1.3.0 | 8bf8366e-81f1-4dd1-8de6-9e330e7de759
21 | sssd | sgnl05 | 2.7.0 | 8afc1e88-9d4a-46ad-8107-5d457f4cd740
20 | snmp | razorsedge | 3.9.0 | 8aed966e-e973-4d87-af1d-6f4b63051c32
19 | lisenet_firewall | lisenet | 1.0.0 | 8513e8ec-7cdd-4606-8d8c-92a660dc5da5
18 | corosync | puppet | 6.0.0 | 7b4dba49-c793-47f7-b872-a683a4b8d131
17 | augeasproviders_core | herculesteam | 2.1.4 | 77afedf9-65b8-4168-a8a1-5e534e84462d
16 | pe_gem | puppetlabs | 0.2.0 | 5e639097-072a-4486-bc19-0b3ab6a8bbae
15 | keepalived | arioch | 1.2.5 | 4ff5c45b-0a93-4cbd-8574-1b246363378c
14 | firewall | puppetlabs | 1.12.0 | 3a86241a-3c52-4339-a05d-6f6de0a033ac
13 | rsyslog | saz | 5.0.0 | 330447a4-010a-4cfb-8b99-5cbcf327adaa
12 | systemd | camptocamp | 1.1.1 | 2fea15c7-99d4-49cd-9eea-578c5e249657
11 | ntp | puppetlabs | 7.1.1 | 2fd3c5d5-4943-4f54-bd60-3bd1d73af0d3
10 | translate | puppetlabs | 1.1.0 | 2e46f4e3-34f6-41a0-9466-4b163b87f5d9
9 | selinux | puppet | 1.5.2 | 2e12d841-2801-45d2-a70c-e287d134b1e8
8 | postgresql | puppetlabs | 5.3.0 | 28f11fd1-223b-46fe-a92c-cfc485aa28ef
7 | datacat | richardc | 0.6.2 | 24f45f62-7012-4ac1-809e-3efd9d5d9daa
6 | zabbix | puppet | 6.2.0 | 2426fdbc-9dc2-4cf2-8810-a7702fdd7faa
5 | limits | saz | 3.0.2 | 1b893348-11e9-45e7-9d64-5fb2819c1e96
4 | apt | puppetlabs | 4.5.1 | 13c33cf0-acbe-4369-b44e-def9933e6d87
3 | wordpress | hunner | 1.0.0 | 0f928270-7b36-407b-b603-1efe6e261812
2 | staging | puppet | 3.1.0 | 0a6ffb28-5049-4556-923d-7af3850ece63
1 | java | puppetlabs | 2.4.0 | 081cb24f-cec7-4c12-a203-5685edc1936d
—|————————–|————–|———|————————————-

We can loop the module IDs to add them to the content view:

# for i in $(seq 1 38); do
hammer content-view puppet-module add \
--content-view "puppet_content" \
--id "$i"; done

Step 7: Publish Puppet Content View

Let us check the environments that we have available before we publish the content view:

# hammer environment list
—|———–
ID | NAME
—|———–
2 | homelab
1 | production
—|———–

The production environment is the default one, and the homelab environment is the one we created manually. Publish Puppet content view:

# hammer content-view publish \
--name "puppet_content" \
--description "Publishing Puppet modules"

As mentioned earlier, Katello creates a Puppet environment from the Puppet module content view the moment we publish it. Verify:

# hammer environment list
—|————————————
ID | NAME
—|————————————
3 | KT_lisenet_Library_puppet_content_4
2 | homelab
1 | production
—|————————————

We can now associate a host or hostgroup with whatever Puppet environment we want, including the one created for the Puppet module content view.

Step 8: Backup Katello Configuration

Let us create a backup of our Katello configuration so that we don’t lose any changes that we’ve made so far:

# katello-backup /mnt/backup/ --features=all -y

Source

AWS Lambda announces service level agreement

Posted On: Oct 16, 2018

We have published a service level agreement (SLA) for AWS Lambda. We will use commercially reasonable efforts to make Lambda available with a Monthly Uptime Percentage for each AWS region, during any monthly billing cycle, of at least 99.95% (the “Service Commitment”). In the event Lambda does not meet the Service Commitment, you will be eligible to receive a Service Credit as described in the AWS Lambda Service Level Agreement.

AWS Lambda is a compute service that runs your code in response to triggers and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

This SLA is now available in all regions where Lambda is available. For more information on where AWS Lambda is available, see the AWS region table. Please visit our product page for more information about AWS Lambda.

Source

Debian 8.8 MATE Installation on Oracle VirtualBox

Debian 8.8 MATE Installation on VirtualBox
Debian GNU/Linux 8.8 MATE Installation on Oracle VirtualBox

This video tutorial shows the Debian 8.8 MATE Desktop installation on Oracle VirtualBox step by step. This tutorial is also helpful if you want to install Debian 8.8 on a physical computer or laptop. We also install Guest Additions on the Debian 8.8 MATE Desktop for better performance and usability features: automatic resizing of the guest display, shared folders, seamless mode, shared clipboard, improved performance, and drag and drop.
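For reference, once Debian is installed, the Guest Additions step roughly comes down to the following commands, run after choosing Devices > Insert Guest Additions CD image in VirtualBox (the mount point and device name are assumptions and may differ on your system):

sudo apt-get install build-essential dkms linux-headers-$(uname -r)
sudo mount /dev/cdrom /mnt
sudo sh /mnt/VBoxLinuxAdditions.run
sudo reboot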

Debian GNU/Linux 8.8 MATE Desktop Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Debian 8.8 MATE Desktop Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Debian 8.8 MATE Desktop on Oracle VirtualBox

 

Debian 8.8 New Features and Improvements

Debian GNU/Linux 8.8 mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available. Those who frequently install updates from security.debian.org won't have to update many packages, and most updates from security.debian.org are included in this point release. Debian 8.8 is not a new version of Debian; it is just a Debian 8 image with the latest updates of some of the packages. So if you're running a Debian 8 installation with all the latest updates installed, you don't need to do anything.

Debian Website:

https://www.debian.org/

What is MATE Desktop?

The MATE Desktop Environment is the continuation of GNOME 2. It provides an intuitive and attractive desktop environment using traditional metaphors for Linux and other Unix-like operating systems. MATE is under active development to add support for new technologies while preserving a traditional desktop experience.

MATE Desktop Website:

http://mate-desktop.com/

Hope you found this Debian 8.8 MATE Desktop installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

How to install Ubuntu 18.04 LTS

(Last Updated On: September 1, 2018)

PREPARATION

1. Create bootable DVD or USB media.

* Download ISO image from https://www.ubuntu.com/download/desktop
* You can burn a bootable DVD in Windows 7 and up simply by inserting a blank DVD and then double-clicking the ISO file.
* Creating a bootable USB drive will require you to install software. Find out more here: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0 for Windows users and https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0 for Mac users.

2. Boot Ubuntu 18.04

* You will have to turn off Secure Boot in your computer’s BIOS settings to be able to boot from a DVD or USB drive.
* Once you get Ubuntu booted, select “Try Ubuntu” and take time to play around and ensure that all of your hardware is working properly.
* Check to see if you will need any proprietary drivers for your system.

3. Backup ALL Data You Wish To Keep!

* Do NOT use commercial backup software or the built-in Windows backup utility! Ubuntu MUST be able to read the files you create.
* Backups MUST be stored on a USB drive or other removable media.
* It is OK to store backup data in a Zip file. Ubuntu can open them with Archive Manager.

INSTALLATION

WARNING! Proceed at your own risk. Installing Ubuntu will wipe out your current Windows installation and all data you have stored on the computer. There is no way to “uninstall” Ubuntu!
* It is a good idea to have another computer, smartphone or tablet available so you can have access to the Internet in case you need to look something up.
* Turn off Secure Boot in your computer’s BIOS settings.
* Hook computer to the Internet with an Ethernet cable if drivers will be needed to use Wi-Fi.
* Boot Ubuntu
* Launch Ubuntu’s installer and follow the directions.
* Restart the computer. You are now running Ubuntu!

POST-INSTALLATION SETUP

* Review and change settings for Software Updater.
* Change to local mirrors (Optional)
* Install ALL updates and restart the computer.
* Check for and install drivers.
* Restart the computer again.
* Install GNOME Tweaks
sudo apt install gnome-tweak-tool
* Configure the Desktop
* Setup Timeshift:
sudo apt-add-repository -y ppa:teejee2008/ppa
sudo apt install timeshift
* Optional: Install Google Chrome browser: https://www.google.com/chrome/index.html
Here’s how to activate and install GNOME Extensions with Chrome and Firefox: https://linuxconfig.org/how-to-install-gnome-shell-extensions-on-ubuntu-18-04-bionic-beaver-linux
* Install Ubuntu Restricted Extras:
sudo apt install ubuntu-restricted-extras
* Remove Fluendo mp3 codec:
sudo apt remove gstreamer1.0-fluendo-mp3
* Install GNOME Tracker for faster file operations in Nautilus:
sudo apt install tracker
* Update locate command database to activate search function. (This command will be run automatically in about a day or so. Running it now is optional.)
sudo updatedb

More recommended software:
sudo apt install htop gdebi synaptic net-tools

Ubuntu is now fully ready to use. Have fun!

Please be sure to give EzeeLinux a ‘Like’ on Facebook! Thanks! https://www.facebook.com/EzeeLinux

Joe Collins

Joe Collins worked in radio and TV stations for over 20 years where he installed, maintained and programmed computer automation systems. Joe also worked for Gateway Computer for a short time as a Senior Technical Support Professional in the early 2000’s and has offered freelance home computer technical support and repair for over a decade.

Joe is a fan of Ubuntu Linux and Open Source software and recently started offering Ubuntu installation and support for those just starting out with Linux through EzeeLinux.com. The goal of EzeeLinux is to make Linux easy for newcomers and start them on the right foot so they can have the best experience possible.

Joe lives in historic Portsmouth, VA in a hundred year old house with three cats, three kids and a network of computers built from scrounged parts, all happily running Linux.

Source

Creating REST API in Python

REST or Representational State Transfer is a software development style used mainly in API or Application Programming Interface design to build interactive and modern web services. It is also known as RESTful web service.

Python is a powerful programming language. It has many libraries for building REST or RESTful APIs. One of the popular libraries for building web apps and writing REST APIs is Flask.

In this article, I will show you how to create REST API in Python using Flask. Let’s get started.

You should have

  • Python 2 or Python 3 installed on your computer.
  • PIP or PIP3 installed on your computer.
  • The basic understanding of Python programming language.
  • The basic understanding of executing commands in the shell.

You should be able to find articles and tutorials on all these topics on LinuxHint.com

I will be using Python 3 on Debian 9 Stretch in this article. If you’re using Python 2, you will have to adjust a little bit. You should be able to figure it out yourself, as it will be as simple as writing python instead of python3 and pip instead of pip3.

Setting Up Virtual Environment:

To put it simply, a virtual environment is used to isolate one Python app from another. The Python package used to do that is virtualenv.

You can easily install virtualenv using PIP on your computer with the following command:

$ sudo -H pip3 install virtualenv

Now create a project directory (let’s call it pyrest/) with the following command:
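$ mkdir pyrest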

Now create a Python virtual environment on the pyrest/ project directory with the following command:
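$ virtualenv pyrest/  # created with the default interpreter; add -p python3 if your system defaults to Python 2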

Now navigate into the project directory with the following command:
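$ cd pyrest/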

Then, activate the Python virtual environment with the following command:
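$ . bin/activate  # the activate script sits directly in pyrest/ because the virtual environment was created there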

Finally, run the following command to install the Flask Python library:
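$ pip3 install Flask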

Writing Your First Flask Script:

In this section, I will write a hello world program in Python Flask.

First, create a file hello.py in your project directory:
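$ nano hello.py  # nano is just an example; use any text editor you like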

Now add the following lines to hello.py file and save it.
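A minimal hello.py that matches the behaviour described below (a Flask server listening on 127.0.0.1:8080) could look like this; the greeting text is an assumption:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # the exact message is an assumption; whatever string is returned here is what curl will print
    return 'Hello from Flask!'

if __name__ == '__main__':
    # port 8080 matches the address used throughout this article
    app.run(host='127.0.0.1', port=8080, debug=True)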

In the next section, I will show you how to run Flask scripts.

Running Flask Script:

Now to start the hello.py Flask server, run the following command:
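$ python3 hello.py  # assumes hello.py calls app.run() itself, as in the sketch above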

As you can see, the server has started on http://127.0.0.1:8080.

Now, you can access the Flask server at http://127.0.0.1:8080 from a web browser or from API testing software such as Postman. I am going to use CURL.

$ curl http://127.0.0.1:8080

As you can see, the correct output is printed on the screen.

Congrats! Flask is working.

Accessing Data Using GET in REST API:

A GET request on a REST API is used to fetch information from the API server. You set up an API endpoint and make a GET request to that endpoint. It’s simple.

First, create a new file get.py in your project directory with the following command:
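$ nano get.py  # nano is just an example; use any text editor you like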

Now add the following lines in your get.py file and save it.
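A sketch of get.py that fits the line-by-line description below could look like this (the dummy account data is an assumption; the layout is arranged so the line numbers referenced below line up):

from flask import Flask, jsonify

app = Flask(__name__)

accounts = [
    {'name': 'Alice', 'balance': 100},
    {'name': 'Bob', 'balance': 200},
]

@app.route('/accounts', methods=['GET'])
def getAccounts():
    return jsonify(accounts)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)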

Here, on line 1, the Flask constructor function and the jsonify function are imported from the flask module.

On line 3, a Flask object is created and stored in the app variable.

On line 5, I created a Python array of dictionaries of some dummy data and stored it in the accounts variable.

On line 10, I defined the API endpoint /accounts and the request method, which is GET.

On line 11, I defined the function getAccounts(). The getAccounts() function will execute when a GET request is made to the /accounts endpoint.

On line 12, which is part of the getAccounts() function, I converted the accounts array of dictionaries to JSON using the jsonify() function and returned it.

On lines 14-15, I called app.run() to tell Flask to run the API server on port 8080.

Now run the Flask API server with the following command:
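$ python3 get.py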

The server has started on port 8080.

Now make a GET request to the /accounts endpoint with CURL as follows:

$ curl http://127.0.0.1:8080/accounts

As you can see, the accounts data is returned in JSON format for a GET request to the /accounts endpoint.

You can also get the data of a specific account. To do that, I am going to create another API endpoint, /account/<id>. Here, <id> will be the ID of the account holder. The ID maps to the index of the array.

Edit the get.py script and add the lines shown below to it.
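A sketch of the added lines, matching the description that follows (they go after getAccounts(), so they land on lines 14-17):

@app.route('/account/<id>', methods=['GET'])
def getAccount(id):
    id = int(id) - 1              # account IDs start at 1, list indices at 0
    return jsonify(accounts[id])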

Here, on line 14, I defined the API endpoint /account/<id> and the method to be used, which is GET.

On lines 15-17, the function getAccount() for the API endpoint /account/<id> is defined. The getAccount() function accepts id as an argument. The value of <id> from the API endpoint is passed to the id parameter of getAccount().

On line 16, the id variable is converted to an integer, and 1 is subtracted from it because array indices start from 0 while I want account IDs to start from 1. So if I put 1 as the account <id>, 1 - 1 = 0, and I get the element at index 0 from the accounts array.

On line 17, the array element at that index is returned as JSON.

The rest of the code is the same.

Now run the API server again.

I requested data for accounts 1 and 2 separately and got the expected output, as you can see below.

$ curl http://127.0.0.1:8080/account/1
$ curl http://127.0.0.1:8080/account/2

Adding Data Using POST in REST API:

Now I am going to rename get.py to api.py and add an API endpoint /account for adding new data.

Rename get.py to api.py:
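$ mv get.py api.py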

First, add the lines (19-26) shown below to the api.py file.
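A sketch of what those lines could look like (the function name is an assumption; note that request also has to be added to the import on line 1):

# line 1 becomes: from flask import Flask, jsonify, request

@app.route('/account', methods=['POST'])
def addAccount():
    # read the JSON body of the POST request, e.g. {"name": "Shovon", "balance": 100}
    account = request.get_json()
    accounts.append(account)
    # return the stored account as confirmation
    return jsonify(account)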

Now run the api.py server:
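$ python3 api.py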

To insert new data into the /account endpoint, run the following command:

$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Shovon", "balance": 100}' \
http://127.0.0.1:8080/account

NOTE: Here, '{"name": "Shovon", "balance": 100}' is the JSON input data.

The data should be inserted.

As you can see, the new data is added.

So that’s it for this article. Thanks for reading this article.

Source

Google Adds Kubernetes to Rebranded Cloud Marketplace | Enterprise

Google Adds Kubernetes to Rebranded Cloud Marketplace

Google’s goal is to make containers accessible to everyone, especially the enterprise, according to Anil Dhawan, product manager for the Google Cloud Platform.

When Google released Kubernetes as open source, one of the first challenges that the industry tackled was management, he said.

Google’s hosted Kubernetes Engine takes care of cluster orchestration and management. A bigger challenge is getting apps running on a Kubernetes cluster, which can be a manual, time-consuming process. GCP Marketplace provides prepackaged apps and deploys them onto any cluster, Dhawan noted.

Google makes the process safer by testing and vetting all Kubernetes apps listed on GCP Marketplace. That process includes vulnerability scanning and partner agreements for maintenance and support.

The security umbrella extends to all solutions available through the marketplace. That includes virtual machines, managed services, data sets, application programming interfaces, and Software as a Service.

The name change at one level is purely an effort to heighten the visibility of the Google Cloud Platform brand and to point attention to the new marketplace for ready-to-deploy apps, suggested Charles King, principal analyst at Pund-IT.

“Ideally, it will move interested businesses toward the marketplace, meaning that developers will see improved sales of their apps for GCP,” he told LinuxInsider.

More Behind the Move

Ultimately, Google’s enterprise cloud platform rebranding should make life easier for people managing container environments, said King. That will be the case especially if they happen to be studying or considering apps to buy and deploy.

“The impact on hybrid/multicloud is a bit harder to parse, though,” said King. “If the effort succeeds, it should impact Google’s GCP-related sales and business for the better.”

Google’s marketing move could be important for the future of hybrid and multicloud strategies, said Glen Kosaka, vice president of product at Kubernetes security firm NeuVector.

“This is one really important step towards supporting and simplifying app deployment across clouds,” he told LinuxInsider.

Further, developers now have access to apps that can boost their own apps without having to worry about production deployment and scaling issues, noted Kosaka.

That should be a big deal to many devs, he added.

“Container management of marketplace apps now becomes more simplified, and customers — those responsible for container management — have the confidence that these Google Marketplace apps are tested and compatible with their cloud infrastructure,” Kosaka said.

Broader View Counts

Looking at the news in a strict and narrow sense, Google’s action appears to be little more than a rebranding with a clearer, more descriptive name. That is a fairly sensible move, suggested Alex Gounares, CEO of Polyverse.

“From a broader perspective, this is the proverbial tip of the iceberg around a series of much bigger industry shifts to server-less computing and Containers as a Service,” he told LinuxInsider.

For one thing, Google’s rebranded platform means changes for developers. In the Internet’s early years, you had to build your own data centers, and build and manage your own applications. Effectively, everything was hand-built, on-premises and expensive, Gounares explained.

Then Salesforce.com came along, and the Software as a Service revolution was born. The same apps could be run in the cloud and delivered via a Web page.

That led to Amazon Web Services and to other cloud services providers letting folks effectively rent a data center on demand — the Infrastructure as a Service revolution.

For the application developer, physically acquiring the *hardware* became trivial, but from a software perspective, actually getting everything set up, configured, and running essentially was just as complicated as running things on premises, said Gounares.

Big Deal for Devs

Containers have revolutionized that. Now all of the complexity of something like a database or content management system or similar software can be packaged in a neat little box, according to Gounares.

That box can run alongside all the other pieces needed for a full solution. Configuration and management that used to take days or weeks to accomplish now can be done in a single command line.

“Renaming the service to address making one-click deployment of containers, and to open up new business models for software platform technology is a big, big, big deal,” remarked Gounares. “It is doing for software what Amazon AWS did for hardware and data centers.”

Deployment Factors

A big advantage to containers is their portability across environments. Users can develop their content and then move their workloads to any production environment, noted Google’s Dhawan.

Google works with open source Special Interest Groups, or SIGs, to create standards for Kubernetes apps. This brings the expertise of the open source community to the enterprise.

Google’s enhanced cloud platform speeds deployment on Kubernetes clusters, Kubernetes Engine, on-premises servers or other public clouds. A Marketplace window displays directly in the Kubernetes Engine console. That process involves clicking-to-deploy and specifying the location.

Third-party partners develop commercial Kubernetes apps, which come with support and usage-based billing on many parameters, such as API calls, number of hosts, and storage per month.

Google has simplified license usage and offers more consumption options. For instance, the usage charges for apps are consolidated and billed through GCP, no matter where they are deployed. However, the non-GCP resources on which they run are not included, according to Dhawan.

A Win-Win Proposition

It is going to be orders of magnitude easier for developers to deploy both commercial and open source applications on Kubernetes. The launch of GCP Marketplace solves problems around scalability, operating in a virtual private cloud, and billing, noted Dan Garfield, chief evangelist at Codefresh.

“For example, with Codefresh, our users can deploy our hybrid agent on Kubernetes so they can build on their own clusters, access code in their virtual private network, and bill everything through Google,” he told LinuxInsider. “As a third party, this makes the billing PO approval process a lot easier with enterprise customers where you normally have to become an approved vendor.”

For those responsible for container management, DevOps teams can consume “official” versions of software built for Kubernetes and keep them up to date, he added. This is especially important when you consider the security issues people have had with Dockerhub.

“The GCP Marketplace supports much more complex software stacks, and you can get the official packages rather than whatever someone has built and pushed to Dockerhub,” Garfield said.

What’s Available

GCP Marketplace features popular open source projects that are ready to deploy into Kubernetes. Each app includes clustered images and documented upgrade steps. This makes them ready to run in production. Packaged and maintained by Google Cloud, they implement best practices for running on Kubernetes Engine and GCP, according to Google.

Here is a sampling of some of the hundreds of apps featured on the GCP Marketplace:

  • WordPress for blogging and content management;
  • InfluxDB, Elasticsearch and Cassandra for big data and database;
  • Apache Spark for big data/analytics;
  • RabbitMQ for networking; and
  • NGinx for Web serving.

A complete listing of apps available on the GCP Marketplace is available here. No signup is needed to view the full list of available solutions.

Good for Hybrid and Multicloud Too

Anything that creates an additional wave of cloud-native applications is really great for the entire cloud marketplace. This includes different public cloud vendors, private cloud solutions, and even edge computing vendors, according to Roman Shaposhnik, VP for product & strategy at Zededa.

“The platform can only be as successful as the number of killer apps that it hosts. This move by Google creates just the right kind of dynamics to get us the next crop of candidates for a killer cloud-native app,” he told LinuxInsider.

Hybrid cloud and multicloud deployments still have some way to go, however. What is missing is a way to seamlessly stretch container orchestrators across geographically distant locations, suggested Gaurav Yadav, founding engineer and product manager at Hedvig.

“Unless we standardize container management operations across data centers and cloud locations separated by an unreliable network, true cloud-agnostic applications will always be hard to materialize,” he told LinuxInsider.

VMware became the de facto standard for virtual machine management because it took it out of the hands of admins, Yadav said. VMware made it simple, automated and scalable.

“For cloud-native applications, containers have the potential of replacing VMs as the standard for resource virtualization,” he suggested. “This is only possible if we bring the advanced capabilities that have been built over a decade for VM orchestration to the container management. This announcement is the next step towards this goal.”

The Right Move

Fundamentally, Google’s actions create a huge incentive for developers to transition their apps to a cloud-native architecture, said Zededa’s Shaposhnik.

“I would expect all of these to start being refactored and packaged for the GCP Marketplace,” he said, “and that is a good thing that goes way beyond immediate value on Google’s own cloud. Once an application is refactored and packaged to be truly cloud-native, you can transition between different clouds — including on-premises private clouds — in a much easier fashion.”

For container management, it is “yet another move in the right direction of making containers a ubiquitous, but completely invisible part of the IT infrastructure,” Shaposhnik added.

The Bottom Line

The changes that Google implemented are good news for developers, said Oleg Atamanenko, lead platform developer at Kublr.

“It is a big step towards simplifying the experience when trying new applications,” he told LinuxInsider.

For IT management, on the other hand, the changes in Google’s GCP Marketplace mean cost reduction, reduced time-to-market, and faster innovation through streamlining application installation, Atamanenko said.

For developers, a change in name means little, but the direction Google has taken means a step forward in the enterprise world, said Stefano Maffulli, community director at Scality.

Still, there is a down side, he told LinuxInsider.

Bitnami, which has been pushing to be the packaging tool to ship applications to the clouds, added support for Kubernetes early on.

“Now Google is making them less relevant on GCP, which can be a threat,” said Maffulli. “I wonder how long Bitnami and Google will stay partners.”

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Source
