Install JetBrains IntelliJ IDEA Java IDE on Ubuntu – Linux Hint

IntelliJ IDEA is a very powerful Java IDE from JetBrains. You can develop Java apps, Java Swing GUI apps, Android apps and much more with IntelliJ IDEA. It has an intelligent auto-completion feature that helps you code efficiently in Java, making it a one-stop solution for all your Java-related tasks. In this article, I will show you how to install JetBrains IntelliJ IDEA on Ubuntu. So, let's get started.

IntelliJ IDEA is not available in the official package repository of Ubuntu. But we can easily download IntelliJ IDEA from the official website of JetBrains and install it on Ubuntu.

First, visit the official website of JetBrains at https://www.jetbrains.com. Once the page loads, go to Tools > IntelliJ IDEA as marked in the screenshot below.

Now, click on Download.

Now, make sure Linux is selected and click on the Download button as marked in the screenshot below.

NOTE: IntelliJ IDEA has two editions, Ultimate and Community. The Community edition is free to use, but the Ultimate edition is not. You must buy a license from JetBrains if you want to use the Ultimate edition of IntelliJ IDEA. In this article, I will go with the Community edition.

Now, select Save File and click on OK.

Your browser should start downloading the IntelliJ IDEA archive file. It may take a while to complete.

Installing and Configuring IntelliJ IDEA:

First, navigate to the directory where you downloaded IntelliJ IDEA (typically ~/Downloads) with the following commands:

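$ cd ~/Downloads
$ ls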
As you can see, the file I just downloaded is here.

Now, run the following command to install IntelliJ IDEA to the /opt directory.

$ sudo tar xzf ideaIC-2018.3.2.tar.gz -C /opt

NOTE: If you want to install IntelliJ IDEA somewhere else, replace /opt with the directory path where you want to install it.

Once the IntelliJ IDEA archive is extracted, you should see a new directory inside /opt, as you can see in the screenshot below. Take note of the directory name; you will need it to run IntelliJ IDEA for the first time.

Now, run IntelliJ IDEA as follows:

$ /opt/idea-IC-183.4886.37/bin/idea.sh

Since you're running IntelliJ IDEA for the first time, you will have to configure it.

Here, select Do not import settings and click on OK.

Now, check the I confirm that I have read and accept the terms of this User Agreement checkbox and click on Continue to confirm the JetBrains Privacy Policy.

Now, click on either button depending on whether you would like to share usage statistics with JetBrains to help them improve their products.

Now, select a UI theme and click on Next: Desktop Entry as marked in the screenshot below.

Now, you have to create a Desktop Entry for IntelliJ IDEA. That way, you can easily access IntelliJ IDEA from the Application Menu of Ubuntu. To do that, make sure the marked checkboxes are checked and click on the Next: Launcher Script button.

If you want to launch IntelliJ IDEA projects from the command line, check the marked checkbox. Otherwise, leave it unchecked. Once you’re done, click on Next: Default plugins.

Now, you can enable/disable plugins from here to tune IntelliJ IDEA to your needs. Once you're done, click on Next: Featured plugins.

Now, IntelliJ IDEA will suggest some plugins that you may need. If you want to install any of them, just click on Install. Once you're done, click on Start using IntelliJ IDEA.

Now, type in your login password and click on Authenticate.

IntelliJ IDEA is loading as you can see in the screenshot below.

As you can see, IntelliJ IDEA is running. This is the dashboard of IntelliJ IDEA.

Now that IntelliJ IDEA is installed, you can also run it from the Application Menu of Ubuntu as you can see in the screenshot below.

Creating a New Java Project in IntelliJ IDEA:

In this section, I will show you how to create a new Java project in IntelliJ IDEA. So, let’s get started.

First, start IntelliJ IDEA and click on Create New Project.

Now, select Java from the list and click on Next.

From here, check Create project from template and select Command Line App. Once you’re done, click on Next.

Now, type in your project name, project location, and package namespace. Once you’re done, click on Finish.

Now, write your Java program. Once you're done, you need to compile and run it. To do that, click on the Play button as marked in the screenshot below.

As you can see, the correct output is displayed.

So, that’s how you install JetBrains IntelliJ IDEA Java IDE on Ubuntu. Thanks for reading this article.

Source

Guide to Sorting a List with the Python sort() Method

How to sort a list in Python

Suppose we have a list of numbers or strings and we want to sort its items. We can use either the sort method or the sorted function to achieve this.

The difference between sort and sorted is that sort is a list method that modifies the list in place (and is not available on other objects such as tuples), whereas sorted is a built-in function that creates a new sorted list and leaves the original unchanged.

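For example, this quick interactive session illustrates the difference: sort() modifies the list in place and returns None, while sorted() returns a new list and leaves the original untouched.

>>> nums = [3, 1, 2]
>>> sorted(nums)
[1, 2, 3]
>>> nums
[3, 1, 2]
>>> print(nums.sort())
None
>>> nums
[1, 2, 3]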
In this article, we will show how to use these to sort a list of numbers, strings, tuples, or virtually any objects, in either ascending or descending order.

We will also mention how to define your own custom sort functions.

1) Sorting a list of numbers

With the sort method, we can sort a list of numbers, whether integers or floats.

Note: the method is called on the list object, as in the example below.

>>> Lst = [20, 25.4, 12, -16, -3.14, 3, -5, 7]
>>> Lst.sort()
>>> Lst
[-16, -5, -3.14, 3, 7, 12, 20, 25.4]

As you can see, the order of the list Lst has changed.

But if we want to keep the original list unchanged and assign the sorted result to a new list, we use the sorted function as below.

>>> Lst = [20, 25.4, 12, -16, -3.14, 3, -5, 7]
>>> Sorted_Lst = sorted(Lst)
>>> Sorted_Lst
[-16, -5, -3.14, 3, 7, 12, 20, 25.4]

If we want to sort the list in descending order, we pass the keyword argument reverse=True, as below.

>>> Lst = [20, 25.4, 12, -16, -3.14, 3, -5, 7]
>>> Sorted_Lst = sorted(Lst, reverse = True)
>>> Sorted_Lst
[25.4, 20, 12, 7, 3, -3.14, -5, -16]

2) Sorting a list of strings

We can use sort or sorted for sorting strings as well.

>>> Lst = ['bmw', 'ford', 'audi', 'toyota']
>>> Lst.sort()
>>> Lst
['audi', 'bmw', 'ford', 'toyota']

We can still use the reverse keyword to sort in descending order.

>>> Lst = ['bmw', 'ford', 'audi', 'toyota']
>>> Lst.sort(reverse=True)
>>> Lst
['toyota', 'ford', 'bmw', 'audi']

Note: Python sorts all uppercase letters before lowercase letters, because of their underlying character codes.

We will see that on the example below.

>>> Lst = ['bmw', 'ford', 'audi', 'Toyota']
>>> Lst.sort()
>>> Lst
['Toyota', 'audi', 'bmw', 'ford']

But we can sort a list of strings in a case-insensitive manner by passing key=str.lower to the sort method, as below.

>>> Lst = ['bmw', 'ford', 'audi', 'Toyota']
>>> Lst.sort(key=str.lower)
>>> Lst
['audi', 'bmw', 'ford', 'Toyota']

3) Sorting a list of tuples

Note that a tuple is an immutable object, while a list is mutable. When we compare two tuples, we start by comparing their first elements; if they are not equal, that comparison decides the result.

>>> (5, 7) < (5, 9)
True
>>> (5, 7) > (5, 9)
False

Of course, we can use the sort method or the sorted function for sorting lists of tuples, as below.

>>> sorted([(5, 4), (3, 3), (3, 10)])
[(3, 3), (3, 10), (5, 4)]

>>> sorted([('watermelon', 'green'), ('apple', ''), ('banana', '')])
[('apple', ''), ('banana', ''), ('watermelon', 'green')]

Note: We can also sort tuples by their second element.

To do that, we use a lambda to define our own key function, as below.

>>> Lst = [("Amanda", 35), ("John", 30), ("Monica", 25)]
>>> Lst.sort(key=lambda x: x[1])
>>> print(Lst)
[('Monica', 25), ('John', 30), ('Amanda', 35)]
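As a side note, the standard library's operator.itemgetter builds the same kind of key function and can replace the lambda here:

>>> from operator import itemgetter
>>> Lst = [("Amanda", 35), ("John", 30), ("Monica", 25)]
>>> Lst.sort(key=itemgetter(1))
>>> print(Lst)
[('Monica', 25), ('John', 30), ('Amanda', 35)]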

4) Sorting a list of objects

Objects of user-defined classes can also be sorted. To do that, we put the objects in a list, as below.
Assume we have a Student class with name and age attributes that looks like this:

class Student:
    def __init__(self, name, age):
        self.name = name
        self.age = age

Now we will create some Student objects and add them to a list so we can sort them.

>>> John = Student('John', 30)
>>> Amanda = Student('Amanda', 35)
>>> Monica = Student('Monica', 25)
>>> Lst = [John, Amanda, Monica]

Sorting the list by the name attribute:

>>> Lst.sort(key=lambda x: x.name)
>>> print([item.name for item in Lst])
['Amanda', 'John', 'Monica']

Sorting the list by the age attribute:

>>> Lst.sort(key=lambda x: x.age)
>>> print([item.name for item in Lst])
['Monica', 'John', 'Amanda']
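Alternatively, the standard library's operator.attrgetter can replace the lambda when sorting by an attribute:

>>> from operator import attrgetter
>>> Lst.sort(key=attrgetter('age'))
>>> print([item.name for item in Lst])
['Monica', 'John', 'Amanda']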

Python has many built-in functions and methods that help us solve problems; sort and sorted can be used to sort any list of numbers, strings, tuples, or objects.


Source

NSA to Open Source its Reverse Engineering Tool GHIDRA

Last updated January 8, 2019 By Ankush Das

GHIDRA – NSA’s reverse engineering tool is getting ready for a free public release this March at the RSA Conference 2019 to be held in San Francisco.

The National Security Agency (NSA) has not officially announced this; however, the session description of senior NSA advisor Robert Joyce on the official RSA Conference website revealed it before any official statement or announcement.

Here's what the session description (originally shared as a screenshot on Twitter) says:

NSA has developed a software reverse engineering framework known as GHIDRA, which will be demonstrated for the first time at RSAC 2019. An interactive GUI capability enables reverse engineers to leverage an integrated set of features that run on a variety of platforms including Windows, Mac OS, and Linux and supports a variety of processor instruction sets. The GHIDRA platform includes all the features expected in high-end commercial tools, with new and expanded functionality NSA uniquely developed, and will be released for free public use at RSA.

What is GHIDRA?

GHIDRA is a software reverse engineering framework developed by the NSA that has been in use by the agency for more than a decade.

Basically, a software reverse engineering tool lets you dig into the compiled code of a proprietary program, which gives you the ability to detect virus threats or potential bugs. You should read how reverse engineering works to know more.

The tool is written in Java, and quite a few people have compared it to high-end commercial reverse engineering tools like IDA.

A Reddit thread contains a more detailed discussion, where you will find some ex-employees giving a good amount of detail ahead of the tool's availability.

GHIDRA was a secret tool, so how do we know about it?

The existence of the tool was uncovered in a series of leaks by WikiLeaks as part of the CIA's Vault 7 documents.

Is it going to be open source?

We do think that the reverse engineering tool to be released could be made open source. There is no official confirmation mentioning "open source", but a lot of people believe that the NSA is targeting the open source community to help improve the tool while also reducing the effort needed to maintain it.

This way the tool can remain free and the open source community can help improve GHIDRA as well.

You can also check out the existing Vault 7 documents at WikiLeaks to make your own prediction.

Is NSA doing a good job here?

The reverse engineering tool is going to be available for Windows, Linux, and Mac OS for free.

Of course, we care about the Linux platform here; GHIDRA could be a very good option for people who do not want to, or cannot, pay for a thousand-dollar license for a reverse engineering tool with best-in-class features.

Wrapping Up

If GHIDRA becomes open source and is available for free, it will definitely help a lot of researchers and students, and, on the other side, competitors will be forced to adjust their pricing.

What are your thoughts about it? Is it a good thing? What do you think about the tool going open source? Let us know what you think in the comments below.

About Ankush Das

A passionate technophile who also happens to be a Computer Science graduate. He has had bylines at a variety of publications that include Ubergizmo & Tech Cocktail. You will usually see cats dancing to the beautiful tunes sung by him.

Source

How to Downgrade RHEL/CentOS to Previous Minor Release

Have you upgraded your kernel and redhat-release packages and run into some issues? Do you want to downgrade to a lower minor release? In this article, we will describe how to downgrade RHEL or CentOS to the previous minor version.

Note: The following steps will only work for downgrades within the same major version (such as from RHEL/CentOS 7.6 to 7.5) but not between major versions (such as from RHEL/CentOS 7.0 to 6.9).

A minor version is a release of RHEL that does not (in most cases) add new features or content. It focuses on solving minor problems, typically bugs or security issues. Most of what makes a specific minor version is included in the kernel, so you will need to find out which kernels are supported as part of the minor version you are targeting.

For the purpose of this article, we will show how to downgrade from 7.6 to 7.5. Before we proceed, note that the kernel version for 7.5 is 3.10.0-862. Go to the Red Hat Enterprise Linux Release Dates page for a complete list of minor releases and associated kernel versions.

Let's check whether the required kernel package "kernel-3.10.0-862" is installed, using the following yum command.

# yum list kernel-3.10.0-862*
Check Kernel Version Package

If the output of the previous command shows that the kernel package is not installed, you need to install it on the system.

# yum install kernel-3.10.0-862.el7

Once the kernel installation is complete, reboot the system to apply the changes.

Then downgrade the redhat-release package to complete the process. The command below targets the latest minor version that is lower than the currently running one, for example going from 7.6 to 7.5, or from 7.5 to 7.4.

# yum downgrade redhat-release

Finally, confirm the downgrade by checking the contents of /etc/redhat-release using the cat command.

# cat /etc/redhat-release
Check Release Version

That’s all! In this article, we have explained how to downgrade RHEL or CentOS distribution to a lower minor release. If you have any queries, use the feedback form below to reach us.

Source

How to Install Apache Tomcat 9 on CentOS 7

Install Apache Tomcat on CentOS

Apache Tomcat is an open-source web server used to serve Java applications. It is an open-source implementation of the Java Servlet, JavaServer Pages and Java Expression Language technologies. In this tutorial, you are going to learn how to install Apache Tomcat 9 on CentOS 7.

Prerequisites

Before you start to install Apache Tomcat on CentOS 7, you must have a non-root user account on your system with sudo privileges.

Install Java with OpenJDK

Tomcat requires Java to be installed on your system. Run the following commands to install it.

First, check whether Java is already installed on your system by running the following command:

java -version

If Java is not installed on your system, install it by running the following command:

sudo yum install java-1.8.0-openjdk-devel

Java is now installed on your system.

Create Tomcat User

For security reasons, Tomcat should not run as the root user. Create a non-root user for Tomcat by typing the following command:

sudo useradd -r -m -U -d /opt/tomcat -s /bin/false tomcat

Now you are ready to install Tomcat on CentOS 7.

Install Tomcat

To install Tomcat 9, you need to download the latest binaries from the Tomcat download page. At the time of creating this tutorial, the latest version is 9.0.14, but you can use the latest stable version available.

First, navigate into the /tmp directory.

cd /tmp

To download Tomcat, run the following command:

wget http://www-eu.apache.org/dist/tomcat/tomcat-9/v9.0.14/bin/apache-tomcat-9.0.14.tar.gz

After downloading, extract the Tomcat archive into the /opt/tomcat directory.

sudo tar xf apache-tomcat-9*.tar.gz -C /opt/tomcat

Now create a symbolic link to the installation directory; when you later migrate to a newer Tomcat version, you will only need to update this link. The link is named latest to match the paths used in the systemd unit file below.

sudo ln -s /opt/tomcat/apache-tomcat-9.0.14 /opt/tomcat/latest

Set Permissions

As Tomcat should run under the tomcat user created previously, that user needs access to the Tomcat installation directory.

Run the following command to give ownership of the installation directory to the tomcat user and tomcat group:

sudo chown -RH tomcat: /opt/tomcat/latest

Make the scripts inside the bin directory executable:

sudo sh -c 'chmod +x /opt/tomcat/latest/bin/*.sh'

Create Systemd Unit File

To run Tomcat as a service you need to create a new unit file.

Run the following command to create a tomcat.service unit file inside the /etc/systemd/system/ directory:

sudo nano /etc/systemd/system/tomcat.service

Copy the following code and paste it inside the above file.
NOTE: Modify the JAVA_HOME path if it does not match the value found on your system (on CentOS 7 with OpenJDK 1.8, it is typically /usr/lib/jvm/jre).

[Unit]
Description=Tomcat 9 servlet container
After=network.target

[Service]
Type=forking

User=tomcat
Group=tomcat

Environment="JAVA_HOME=/usr/lib/jvm/default-java"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom -Djava.awt.headless=true"

Environment="CATALINA_BASE=/opt/tomcat/latest"
Environment="CATALINA_HOME=/opt/tomcat/latest"
Environment="CATALINA_PID=/opt/tomcat/latest/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"

ExecStart=/opt/tomcat/latest/bin/startup.sh
ExecStop=/opt/tomcat/latest/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

Now reload the systemd daemon so it picks up the newly created unit file.

sudo systemctl daemon-reload

Now start the Tomcat service by running the following command:

sudo systemctl start tomcat

Check whether Tomcat is running using the following command:

sudo systemctl status tomcat

If everything is OK, run the following command to start Tomcat automatically at boot:

sudo systemctl enable tomcat

Update The Firewall Settings

If you are running a firewall, you should open port 8080 to be able to access Tomcat from outside your local system.

Run the following commands to allow traffic on port 8080:

sudo firewall-cmd --zone=public --permanent --add-port=8080/tcp
sudo firewall-cmd --reload

Configure Tomcat Web Management Interface

To use the Manager web app, you need to edit the tomcat-users.xml file, which contains users and roles. Open it by running the following command:

sudo nano /opt/tomcat/latest/conf/tomcat-users.xml

Now add a user with the admin-gui and manager-gui roles. Make sure you set a strong username and password.

....
....
<role rolename="admin-gui"/>
<role rolename="manager-gui"/>
<user username="admin" password="admin_password" roles="admin-gui,manager-gui"/>

Save and close the file.

By default, Apache Tomcat restricts access to the Manager and Host Manager apps to connections coming from the server itself. To access them remotely, you need to remove this restriction.

To change the IP address restriction, open the following files.

Open the Manager app context file using the command below:

sudo nano /opt/tomcat/latest/webapps/manager/META-INF/context.xml

Open the Host Manager app context file using the command below:

sudo nano /opt/tomcat/latest/webapps/host-manager/META-INF/context.xml

Comment out the RemoteAddrValve section as shown below:

<Context antiResourceLocking="false" privileged="true" >
<!--
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>

Save and close the file and restart the Tomcat server.

sudo systemctl restart tomcat

NOTE: Alternatively, you can allow connections only from specific IP addresses, as shown below. In the following example, the IP address 192.0.0.0 is added.

<Context antiResourceLocking="false" privileged="true" >
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|192.0.0.0" />
</Context>

Testing Tomcat

Open a browser and visit the following URL: http://YOUR_SERVER_DOMAIN_OR_IP_ADDRESS:8080

You should get the following output for the successful installation.

How to install Tomcat 9 – homepage

Now open the Manager app by visiting http://YOUR_SERVER_DOMAIN_OR_IP_ADDRESS:8080/manager/html. Log in with the username and password you created in the tomcat-users.xml file.

How to install Tomcat 9 – Manager app

The Virtual Host Manager app is available at http://YOUR_SERVER_DOMAIN_OR_IP_ADDRESS:8080/host-manager/html. Using this app, you can manage virtual hosts.

How to install Tomcat 9 – Virtual Host Manager

Conclusion

You have successfully installed Tomcat 9 on CentOS 7. If you have any questions, please don't forget to comment below.

Source

How to Install Jetbrains RubyMine Ruby IDE on Ubuntu – Linux Hint

RubyMine is a powerful Ruby IDE from JetBrains. Like all other JetBrains IDEs, RubyMine has intelligent auto-completion and many other tools to help you write and debug your Ruby applications fast. In this article, I will show you how to install RubyMine on Ubuntu. The procedures shown here should work on Ubuntu 16.04 LTS and later; I will be using Ubuntu 18.04 LTS for the demonstration. So, let's get started.

In order to run Ruby programs in RubyMine, you must have the Ruby programming language installed on your machine.

On Ubuntu, you can install Ruby programming language with the following command:

$ sudo apt install ruby-full

Now, press y and then press <Enter> to continue.

Ruby should be installed.

Installing RubyMine:

On Ubuntu 16.04 LTS and later, RubyMine is available as a snap package, so you can install the latest version of RubyMine from Ubuntu's official snap package repository.

To install RubyMine SNAP package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install rubymine --classic

Now, type in the password of your login user and press <Enter> to continue.

RubyMine snap package is being downloaded.

RubyMine is installed.

Initial Configuration of RubyMine:

Now, you can start RubyMine from the Application Menu of Ubuntu as you can see in the screenshot below.

As you’re running RubyMine for the first time, you may not have any settings to import. Just select Do not import settings and click on OK.

Now, you have to accept the JetBrains User Agreement. To do that, check I confirm that I have read and accept the terms of this User Agreement checkbox and click on Continue.

Now, select a UI theme and click on Next: Keymaps.

Now, select the keymap that you’re comfortable with and click on Next: default plugins.

Now, you can enable/disable certain features to tune RubyMine to your needs. Once you’re done, click on Next: Featured plugins.

Now, JetBrains will suggest some popular plugins for RubyMine. If you like or need any of them, just click on Install to install it. Once you're done, click on Start using RubyMine.

Now, you have to activate RubyMine. RubyMine is not free. In order to use RubyMine, you have to buy a license from JetBrains. Once you have the credentials, you can activate RubyMine from here.

If you want to try out RubyMine before you buy a license, you can do so for 30 days at the time of this writing. To do that, select Evaluate for free and click on Evaluate.

RubyMine is being loaded.

This is the dashboard of RubyMine. From here, you can create new projects and manage existing projects.

Creating a Ruby Project with RubyMine:

In this section, I will show you how to create a new Ruby project with RubyMine and run a simple Ruby program. So, let’s get started.

First, start RubyMine and click on Create New Project.

Now, select your project type. I selected Empty Project. Now, set your project Location (where RubyMine will save the files for this project) and make sure the Ruby SDK is correct. Once you’re done, click on Create.

A new project should be created.

Now, create a new file hello.rb and type in a simple Ruby program, for example:
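puts 'Hello, World!'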

Once you’re done, click on the Play button as marked in the screenshot below to run the hello.rb Ruby program.

At times, the Play button I showed you earlier may be grayed out. Don't worry; you can also run your Ruby program from Run > Run… as you can see in the screenshot below.

Now, select your Ruby program from the list.

Your desired Ruby program should be executed and the correct output should be displayed as you can see in the screenshot below.

So, that’s how you install RubyMine Ruby IDE from JetBrains on Ubuntu. Thanks for reading this article.

Source

Kubernetes Federation Evolution

Wednesday, December 12, 2018

Kubernetes Federation Evolution

Authors: Irfan Ur Rehman (Huawei), Paul Morie (RedHat) and Shashidhara T D (Huawei)

Deploying applications to a Kubernetes cluster is well defined and can in some cases be as simple as kubectl create -f app.yaml. The user's story for deploying apps across multiple clusters has not been that simple. How should an app workload be distributed? Should the app resources be replicated into all clusters, replicated into selected clusters, or partitioned across clusters? How is access to the clusters managed? What happens if some of the resources that the user wants to distribute already exist, in some form, in all or some of the clusters?

In SIG Multicluster, our journey has revealed that there are multiple possible models to solve these problems, and there is probably no single solution that fits all scenarios. Federation, however, is the single biggest Kubernetes open source sub-project in this problem space, and has seen the most interest and contribution from the community. The project initially reused the Kubernetes API to do away with any added usage complexity for an existing Kubernetes user. This approach became non-viable because of problems best discussed in this community update.

What has evolved further is a federation specific API architecture and a community effort which now continues as Federation V2.

Conceptual Overview

Because federation attempts to address a complex set of problems, it pays to break the different parts of those problems down. Let’s take a look at the different high-level areas involved:

Kubernetes Federation V2 Concepts

Federating arbitrary resources

One of the main goals of Federation is to be able to define the APIs and API groups which encompass basic tenets needed to federate any given k8s resource. This is crucial due to the popularity of Custom Resource Definitions as a way to extend Kubernetes with new APIs.

The workgroup did arrive at a common definition of the federation API and API groups as ‘a mechanism that distributes “normal” Kubernetes API resources into different clusters’. The distribution in its most simple form could be imagined as simple propagation of this ‘normal Kubernetes API resource’ across the federated clusters. A thoughtful reader can certainly discern more complicated mechanisms, other than this simple propagation of the Kubernetes resources.

During the journey of defining the building blocks of the federation APIs, one of the near-term goals also evolved: 'to be able to create a simple federation, aka simple propagation, of any Kubernetes resource or CRD, writing almost zero code'. What ensued was a core API group defining the building blocks as a Template resource, a Placement resource and an Override resource per given Kubernetes resource, a TypeConfig to specify sync or no sync for the given resource, and associated controller(s) to carry out the sync. More details follow in the next section, Federating resources: the details. Further sections will also talk about being able to follow a layered behaviour, with higher-level Federation APIs consuming the behaviour of these core building blocks, and users being able to consume the whole API or parts of it along with the associated controllers. Lastly, this architecture also allows users to write additional controllers, or replace the available reference controllers with their own, to carry out the desired behaviour.

The ability to 'easily federate arbitrary Kubernetes resources', together with a decoupled API, divided into building-block APIs, higher-level APIs and possible user-intended types, presented such that different users can consume parts of it and write controllers composing solutions specific to them, makes a compelling case for Federation V2.

Federating resources: the details

Fundamentally, federation must be configured with two types of information:

  • Which API types federation should handle
  • Which clusters federation should target for distributing those resources

For each API type that federation handles, different parts of the declared state live in different API resources:

  • A template type holds the base specification of the resource: for example, a type called FederatedReplicaSet holds the base specification of a ReplicaSet that should be distributed to the targeted clusters.
  • A placement type holds the specification of the clusters the resource should be distributed to: for example, a type called FederatedReplicaSetPlacement holds information about which clusters FederatedReplicaSets should be distributed to.
  • An optional overrides type holds the specification of how the template resource should be varied in some clusters: for example, a type called FederatedReplicaSetOverrides holds information about how a FederatedReplicaSet should be varied in certain clusters.

These types are all associated by name, meaning that for a particular template resource with name foo, the placement and override information for that resource are contained by the override and placement resources which have the same name and namespace as the template.

Higher level behaviour

The architecture of the federation v2 API allows higher-level APIs to be constructed using the mechanics provided by the core API types (template, placement and override) and the associated controllers for a given resource. In the community we uncovered a few use cases and implemented the higher-level APIs and associated controllers useful for those cases. Some of these types, described in the following sections, also provide a useful reference for anybody interested in solving more complex use cases, building on top of the mechanics already available with the federation v2 API.

ReplicaSchedulingPreference

ReplicaSchedulingPreference provides an automated mechanism for distributing and maintaining the total number of replicas for deployment- or replicaset-based federated workloads across federated clusters. This is based on high-level preferences given by the user, including the semantics of weighted distribution and limits (min and max) for distributing the replicas. It also includes semantics that allow dynamic redistribution of replicas when some replica pods remain unscheduled in certain clusters, for example due to insufficient resources in that cluster. More details can be found in the user guide for ReplicaSchedulingPreferences.

Federated Services & Cross-cluster service discovery

Kubernetes services are a very useful construct in a micro-service architecture. There is a clear desire to deploy these services across cluster, zone, region and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios, and improve the level of high availability beyond single-cluster deployments. Customers who want their services to span one or more (possibly remote) clusters need them to be reachable in a consistent manner from both within and outside their clusters.

A Federated Service at its core contains a template (the definition of a Kubernetes service), a placement (which clusters it should be deployed into), an override (optional variations in particular clusters) and a ServiceDNSRecord (specifying details of how to discover it).

Note: The Federated Service has to be of type LoadBalancer in order for it to be discoverable across clusters.

Discovering a Federated Service from pods inside your Federated Clusters

By default, Kubernetes clusters come preconfigured with a cluster-local DNS server, as well as an intelligently constructed DNS search path, which together ensure that DNS queries like myservice, myservice.mynamespace, or some-other-service.other-namespace issued by your software running inside Pods are automatically expanded and resolved correctly to the appropriate service IP of services running in the local cluster.

With the introduction of Federated Services and Cross-Cluster Service Discovery, this concept is extended to cover Kubernetes services running in any other cluster across your Cluster Federation, globally. To take advantage of this extended range, you use a slightly different DNS name (e.g. myservice.mynamespace.myfederation) to resolve federated services. Using a different DNS name also avoids having your existing applications accidentally traversing cross-zone or cross-region networks and you incurring perhaps unwanted network charges or latency, without you explicitly opting in to this behavior.

Let's consider an example, using a service named nginx and the query name format described above.

A Pod in a cluster in the us-central1-a availability zone needs to contact our nginx service. Rather than use the service’s traditional cluster-local DNS name (nginx.mynamespace, which is automatically expanded to nginx.mynamespace.svc.cluster.local) it can now use the service’s Federated DNS name, which is nginx.mynamespace.myfederation. This will be automatically expanded and resolved to the closest healthy shard of my nginx service, wherever in the world that may be. If a healthy shard exists in the local cluster, that service’s cluster-local IP address will be returned (by the cluster-local DNS). This is exactly equivalent to non-federated service resolution.

If the service does not exist in the local cluster (or it exists but has no healthy backend pods), the DNS query is automatically expanded to nginx.mynamespace.myfederation.svc.us-central1-a.us-central1.example.com. Behind the scenes, this finds the external IP of one of the shards closest to my availability zone. This expansion is performed automatically by the cluster-local DNS server, which returns the associated CNAME record. This results in a traversal of the hierarchy of DNS records, and ends up at one of the external IPs of the nearby Federated Service.

It is also possible to target service shards in availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. For example, nginx.mynamespace.myfederation.svc.europe-west1.example.com will resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications.

Discovering a Federated Service from Other Clients Outside your Federated Clusters

For external clients, the automatic DNS expansion described above is currently not possible. External clients need to specify one of the fully qualified DNS names of the federated service, be that a zonal, regional or global name. For convenience, it is often a good idea to manually configure additional static CNAME records for your service, for example:

SHORT NAME          CNAME
eu.nginx.acme.com   nginx.mynamespace.myfederation.svc.europe-west1.example.com
us.nginx.acme.com   nginx.mynamespace.myfederation.svc.us-central1.example.com
nginx.acme.com      nginx.mynamespace.myfederation.svc.example.com

That way your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation.

For further reading, a more elaborate guide for users is available at the Multi-Cluster Service DNS with ExternalDNS guide.

Try it yourself

To get started with Federation V2, please refer to the user guide hosted on GitHub. Deployment can be accomplished with a Helm chart, and once the control plane is available, the user guide's example can be used to get some hands-on experience with using Federation V2.

Federation V2 can be deployed in both cluster-scoped and namespace-scoped configurations. A cluster-scoped deployment requires cluster-admin privileges to both host and member clusters, and may be a good fit for evaluating federation on clusters that are not running critical workloads. A namespace-scoped deployment requires access to only a single namespace on host and member clusters, and is a better fit for evaluating federation on clusters that are running workloads. Most of the user guide refers to cluster-scoped deployment, with the Namespaced Federation section documenting how a namespaced deployment differs. In fact, with Namespaced Federation, the same cluster can host multiple federations, and the same clusters can be part of multiple federations.

Source

Most home routers don’t take advantage of Linux’s improved security features

Linksys WRT32X, the router that scored the highest in the Cyber-ITL security-focused case study.


Many of today’s most popular home router models don’t take full advantage of the security features that come with the Linux operating system, which many of them use as a basis for their firmware.

Security hardening features such as ASLR (Address Space Layout Randomization), DEP (Data Execution Prevention), RELRO (RELocation Read-Only), and stack guards have been found to be missing in a recent security audit of 28 popular home routers.

Security experts from the Cyber Independent Testing Lab (Cyber-ITL) analyzed the firmware of these routers and mapped out the percentage of firmware code that was protected by the four security features listed above.

“The absence of these security features is inexcusable,” said Parker Thompson and Sarah Zatko, the two Cyber-ITL researchers behind the study.

“The features discussed in this report are easy to adopt, come with no downsides, and are standard practices in other market segments (such as desktop and mobile software),” the two added.

While some routers had 100 percent coverage for one feature, none implemented all four. Furthermore, researchers also found inconsistencies in applying the four security features within the same brand, with some router models from one vendor rating extremely high, while others had virtually no protection.

According to the research team, of the 28 router firmware images they analyzed, the Linksys WRT32X model scored highest with 100 percent DEP coverage for all firmware binaries, 95 percent RELRO coverage, 82 percent stack guard coverage, but with a lowly 4 percent ASLR protection.

The full results for each router model are available below. Researchers first looked at the ten home routers recommended by Consumer Reports (table 1), but then expanded their research to include other router models recommended by other news publications such as CNET, PCMag, and TrustCompass (table 2).

Test results for Consumer Reports recommended routers

Test results for CNET, PCMag, TrustCompass recommended routers

As a secondary conclusion of this study, the results also show that the limited hardware resources found in small home routers aren't a valid excuse for shipping router firmware without improved security hardening features like ASLR, DEP, and others.

It is clear that some companies can ship routers with properly secured firmware, and routers can benefit from the same OS hardening features that Linux provides to desktop and server versions.

Last but not least, the study also showed an inherent weakness in routers that use a MIPS Linux kernel. The Cyber-ITL team says that during their analysis of the 28 firmware images (ten of which ran MIPS Linux and 18 of which ran ARM Linux) they also discovered a security weakness in MIPS Linux kernel versions from 2001 to 2016.

"The issue from 2001 to 2016 resulted in stack-based execution being enabled on userland processes," the researchers said, an issue which made DEP protection impossible.

The 2016 MIPS Linux kernel patch re-enabled DEP protection for MIPS Linux, researchers said, but also introduced another bug that allows an attacker to bypass both DEP and ASLR protections. Researchers detailed this MIPS Linux bug in more detail in a separate research paper available here.

Source

10 React Native Libraries Every React Native Developer Should Know About | Linux.com

If you are starting a new app development project, chances are pretty high that you have already decided to write it with React Native. React Native gives you the benefit of leveraging a single codebase to produce apps for two different platforms.

To make React Native app development simpler and spare you the time you would spend writing certain parts of your application, you can take advantage of some magnificent React Native libraries that do the hard work for you, making it simple to integrate both basic and newer features into your app.

However, choosing the best React Native libraries for your project can be a hassle, since there are thousands of libraries out there. Hence, here are 10 of the best React Native libraries that you may find useful while developing an app with React Native.

1) Create-react-native-app

Establishing the initial setup for your React Native app can be time-consuming, especially if you are just starting out on your first app. Create-react-native-app is a library that comes in handy if you want to develop a React Native app without any build configuration. It enables you to work with the majority of components and APIs in React Native, in addition to most of the JavaScript APIs that the Expo app provides.

2) React-native-config

If you are a developer, you might already be familiar with the idea of an app configuration file, which stores any setting that you might want to change between deploys, such as staging settings, environment variables, or production flags. By default, some apps store this config as constants in the code, which is a serious violation of the twelve-factor principles; it is very important to separate the config from the code, since an app's config may vary substantially across deploys.

React-native-config is a cool library which makes it easier for you to adhere to the twelve-factor principles while effectively managing your app's config settings.

3) React-native-permissions

React-native-permissions allows you to check and request user permissions anywhere in a React Native app. Currently, it offers easy access to the following permissions:

  • Location
  • Camera
  • Microphone
  • Photos
  • Contacts
  • Events
  • Reminders (iOS only)
  • Bluetooth (iOS only)
  • Push Notifications (iOS only)
  • Background Refresh (iOS only)
  • Speech Recognition (iOS only)
  • Call Phone (Android Only)
  • Read/Receive SMS (Android only)

4) React Navigation

Widely known as one of the most used navigation libraries in the React Native ecosystem, React Navigation is an enhanced version of Navigator, NavigatorExperimental and Ex-Navigation.

It's written completely in JavaScript, which is a gigantic upside in its own right, since you can ship updates to it over the air and submit patches without having to know Objective-C/Java or each platform's native navigation APIs.

5) React-native-Animatable

Looking for a kickass library for implementing animations in React Native? React Native Animatable is here to help you with just that.

React-native-animatable can be used in two ways to add animations and transitions in React Native app development: declarative and imperative.

Declarative usage is as simple as it sounds: whatever pre-built animation you declare in your code is applied only to the element on which it has been declared. Pretty straightforward, right?

6) React Native Push Notifications

This library is very useful for implementing push notifications in a React Native app. With additional features such as scheduled notifications and notifications that repeat by day, week, time, etc., the library stands out from all other push notification libraries for React Native.

7) React Native Material Kit

React Native Material Kit is a magnificent help for Material Design themed applications. The library gives you customizable yet ready-made UI components such as buttons, cards, loading indicators, and floating-label text fields. Furthermore, there are numerous ways to build every component: pick either the constructor or the JSX approach to best fit the structure of your project. This library is certain to save a huge amount of time and styling effort for any engineer building an application rooted in Material Design guidelines.

8) React Native Code Push

CodePush lets you deploy your React Native code over the air without much hassle. Moreover, you will find CodePush a great help for pushing bug fixes to your app without having to wait for your users to update to a new version.

9) React-Native-Map

React Native Map is an awesome component which helps you avoid unnecessary complications when dealing with Apple and Google maps. You get a flexible and customizable map that can be zoomed or panned, with markers, by just using one simple <MapView> tag in your code. Moreover, the rendered map feels smooth, native and highly performant.

10) React Native Vector Icons

You might be well aware of how much icons contribute to the user experience of an application. React Native Vector Icons may come last, but it is definitely not the least in my list of top 10 React Native libraries. It boasts a myriad of well-crafted icons contributed by renowned publishers, and you can seamlessly integrate them into your app through the library's elegantly designed API.

Conclusion

The libraries mentioned above are some popular React Native libraries that deserve your utmost attention in React Native app development.

Source

How to Check your Debian Linux Version

When you log in to a Debian Linux system for the first time, before doing any work it is always a good idea to check what version of Debian is running on the machine.

Three releases of Debian are always actively maintained:

  • Stable – The latest officially released distribution of Debian. At the time of writing this article the current stable distribution of Debian is version 9 (stretch). This is the version that is recommended for production environments.
  • Testing – The preview distribution that will become the next stable release. It contains packages that are not ready for stable release yet, but they are in the queue for that. This release is updated continually until it is frozen and released as stable.
  • Unstable, always codenamed sid – This is the distribution where the active development of Debian is taking place.

In this tutorial, we’ll show several different commands on how to check what version of Debian Linux is installed on your system.

Checking Debian Version from the Command Line

The preferred method to check your Debian version is to use the lsb_release utility, which displays LSB (Linux Standard Base) information about the Linux distribution. This method will work no matter which desktop environment or Debian version you are running:

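lsb_release -a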
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.5 (stretch)
Release: 9.5
Codename: stretch

Your Debian version will be shown in the Description line. As you can see from the output above I am using Debian GNU/Linux 9.5 (stretch).

Instead of printing all of the above information, you can display only the description line, which shows your Debian version, by passing the -d switch:

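lsb_release -d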
Output should look similar to below:

Description: Debian GNU/Linux 9.5 (stretch)

Alternatively, you can also use the following commands to check your Debian version.

Checking Debian Version using the /etc/issue file

The following cat command will display the contents of the /etc/issue file, which contains a system identification text:

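cat /etc/issue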
The output will look something like below:

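Debian GNU/Linux 9 \n \l
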
Checking Debian Version using the /etc/os-release file

/etc/os-release is a file which contains operating system identification data, and can be found only on newer Debian distributions running systemd.

This method will work only if you have Debian 9 or newer:

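cat /etc/os-release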
The output will look something like below:

PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Checking Debian Version using the hostnamectl command

hostnamectl is a command that allows you to set the hostname, but you can also use it to check your Debian version.

This command will work only on Debian 9 or newer versions:

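hostnamectl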
Static hostname: debian9.localdomain
Icon name: computer-vm
Chassis: vm
Machine ID: a92099e30f704d559adb18ebc12ddac4
Boot ID: 4224ba0d5fc7489e95d0bbc7ffdaf709
Virtualization: qemu
Operating System: Debian GNU/Linux 9 (stretch)
Kernel: Linux 4.9.0-8-amd64
Architecture: x86-64

Conclusion

In this guide we have shown you how to find the version of Debian installed on your system. For more information on Debian releases visit the Debian Releases page.

Feel free to leave a comment if you have any questions.

Source
