6 Methods To Rename Multiple Files At Once In Linux | Linux.com

As you may already know, we use the mv command to rename or move files and directories in Unix-like operating systems. But mv can't rename multiple files at once; it renames only one file at a time. Worry not. There are a few other utilities available especially for batch-renaming files. In this tutorial, we are going to learn how to rename multiple files at once using six different methods. All examples provided here were tested on Ubuntu 18.04 LTS, but they should work on any Linux operating system. Let's get started!

Rename Multiple Files At Once In Linux

There could be many commands and utilities to rename a bunch of files. As of writing this, I know of only the following methods. I will keep updating the list if I come across any other method in the future.

Method 1 – Using mmv

The mmv utility is used to move, copy, append and rename files in bulk using standard wildcards in Unix-like operating systems. It is available in the default repositories of Debian-based systems. To install it on Debian, Ubuntu, Linux Mint, run the following command:

$ sudo apt-get install mmv
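
Once installed, mmv takes a source pattern and a destination pattern, where #1 stands for whatever the first wildcard matched. As a small sketch (the file extensions here are just an example), the following would rename every *.jpeg file in the current directory to *.jpg:

$ mmv '*.jpeg' '#1.jpg'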

Read more at OSTechnix

Source

What now, Larry? AWS boss insists Amazon will have dumped Oracle database by end of 2019.

Clock’s ticking on Ellison’s smack talk


re:Invent AWS boss Andy Jassy has doubled down on claims Amazon will “be done” with Oracle databases by 2019, and used his Re:Invent keynote to throw shade at Big Red.

Speaking at Amazon’s main tech conference in Las Vegas this week, Jassy said that the world of “old guard commercial-grade databases” has been “miserable” for enterprises for the past 20 years.

Targeting cloud rival Oracle, Jassy said these legacy database vendors are too expensive and don’t serve customers well, pointing to aggressive audits and proprietary systems that lock in customers.

He also rubbished Big Red’s market share, showing a slide that was mostly AWS orange, followed by Microsoft at 13.3 per cent, Alibaba at 4.6 per cent and Google at 3.3 per cent.

Oracle was identified by a pop-up Larry Ellison, appearing like a small cartoon villain, in a segment of “other vendors”.

Amusing slide from Andy Jassy in his keynote showing market share (AWS being the big orange segment). @AWSreInvent pic.twitter.com/fLCHYRxsJy

— TechMarketView (@TechMarketView) November 28, 2018

The trading of blows is customary at vendor conferences – Ellison spends huge chunks of his keynotes trash-talking AWS, with the common refrain that Amazon still uses Oracle’s databases.

The online marketplace giant's efforts to shift off its competitor's tech are well documented – less well evidenced – but that hasn't stopped Jassy from expanding on claims the firm is making strides.

In an interview with CNBC at re:Invent, he said: “We’re virtually done moving away from Oracle on the database side… I think by the end of 2019 or mid-2019 we’ll be done.”

He claimed that 88 per cent of the databases currently running on Oracle will be on Amazon's DynamoDB or Aurora by January, and that 97 per cent of mission-critical databases will be on DynamoDB or Aurora by the end of next year.

Jassy also reiterated a previous tweet that Amazon moved its data warehouse from Oracle to Redshift on 1 November.

Elsewhere at the conference, AWS announced DeepRacer, a tiny radio-controlled “self-driving” car – which comes hot on the heels of Ellison’s comments at OpenWorld that Amazon’s database was semi-autonomous at best.

“That’s like a semi-autonomous car. You get in, drive it… and you die,” said Ellison. Of course, no one can get into this one. ®


Source

Open Source Tools for Writers » Linux Magazine

When it comes to writing, using the right tools can free you up to focus on your content.

Sooner or later, open source development comes to every field, and tools for working writers are no exception. However, if you search for the topic, you will find the lists of writing tools are full of apps that are no longer in development and have been dropped from most distributions.

Accordingly, here is a list of useful writing apps that are still available as of late 2018. Some have been around for a long time, while others are newer and little known.

 

Braindump

Over the last two decades, more than half a dozen tools for brainstorming have been released. However, if the proprietary ones are ignored, few free-licensed ones have survived. Technically, Braindump is one of the casualties, having been removed from Calligra 3.0, apparently because of a lack of developers.

Fortunately, Braindump remains available in places like the Debian Stable repository. It remains useful in its current state for brainstorming maps that are almost as quick as pencil and paper (Figure 1). Its support for images, charts, and diagrams gives it a versatility that allows rapid, unimpeded development of ideas.

As an alternative, brainstormers might also want to look at VYM.


Figure 1: Originally part of Calligra Suite, Braindump is a brainstorming tool that is likely to be available for a while.

Zim

Longer works often require background material that the writer needs to know but which seldom finds its way into the story. This is especially true of fantasy. Often described as a desktop wiki, Zim is a convenient place to store such information and to link files together for quick reference. For example, I use Zim to store files with information such as character and historical background, as well as names for different cultures in my fantasy novel attempt (Figure 2).

KDE users might use BasKet instead. Although BasKet advertises itself more humbly as an advanced note taker, its capacities are similar to Zim’s.


Figure 2: Zim is ideal for storing background material.

Artha

Artha promotes itself as an open source thesaurus. At first, I saw nothing in the app that suggested any benefit of being open source. Perhaps, I thought, open source’s influence will only become evident over time, possibly in the speed with which new words and meanings update it.

Meanwhile, Artha is a comprehensive, local thesaurus with some valuable features (Figure 3). Like the online Thesaurus.com, it includes antonyms and alternate meanings. However, Artha also includes jargon, related words, pertainyms (forms of the word that are parts of speech, such as an adverb based on a noun), and derivatives (for instance, "clearing" for "clear"), as well as sound-alikes and regular expressions. Best of all, when you enter a word for lookup, Artha displays a drop-down list of meanings instead of going directly to an arbitrarily defined core meaning.

This drop-down list allows me to use Artha as a concept thesaurus – one based on categories of meaning rather than words – which is by far the most useful structure for writing, although it is rarely seen these days. If that is not enough, Artha also has a hot key feature, which allows users to get a definition of a highlighted word on the desktop.

After discovering all these features, I realized that the evidence of Artha being open source lies in its comprehensiveness – a long-time open source tradition. Within moments of discovering all it could do, Artha became my thesaurus of choice.


Figure 3: Artha is one of the most comprehensive thesauruses available online.

Klipper

Klipper is the clipboard in KDE (Figure 4). What makes it stand out is that it includes a buffer of previously copied or cut items, to which it can revert with a couple of clicks on its icon in the system tray. This feature makes it ideal for copy editing when the same replacements are needed repeatedly. If necessary, items can be typed into the buffer as needed. Why a similar buffer was not added to other desktops years ago is a mystery.


Figure 4: KDE’s long-time clipboard supports multiple items, which is useful in editing.

Diction

When I was a university instructor, I always told students that, if they had enough knowledge to use a grammar checker properly, then they had no need for one, except possibly to catch typos. Too often, the helpful suggestions can lead the unwary to further mistakes.

Diction is an exception to this rule – and a surprising one, considering that it runs from the command line (Figure 5). What makes Diction an exception is that it flags words that are common in grammatical errors and simply gives you the general rules associated with them, leaving you to decide whether to apply them or not. Instead of trustingly clicking a button to make a change, users have to stop and think whether each grammatical rule applies. Mistakes are less likely, and, confronted with these rules, users may actually learn a few points about grammar.

Starting with a plain text file, Diction has options to flag words associated with common beginners' mistakes and/or to suggest better wording. And Diction is thorough, averaging in my writing about 170 suggestions per 2,000 words (most of which, I am happy to say, were false flags). In my experience, such thoroughness is unparalleled in grammar checkers, which makes the extra step of converting a file to plain text for the check well worth it.


Figure 5: Diction shows where grammatical rules might apply, rather than suggesting changes.

Calibre

Many Linux users know Calibre as a convenient app for storing and launching ebooks. However, if you are producing ebooks yourself, Calibre is also a one-stop app for editing ebooks and exporting them to multiple formats (Figure 6).

The simplest way to edit ebooks is to write them in LibreOffice and export them to Calibre. Then, you can use Calibre to edit metadata, add graphics and tables of contents, add new sections, and output the ebook to every major format. Armed with a knowledge of CSS, you can right-click to edit the raw code and validate it.

Calibre would be even more powerful if it included a guide to CSS tags. However, even so, it’s a basic necessity for writers who intend to self-publish online.


Figure 6: Besides being an ebook manager, Calibre also has tools for editing.

LibreOffice Writer

LibreOffice Writer may seem like an obvious choice, considering that it is a full-featured office suite. However, among those tools are several that are especially useful for professionals.

Admittedly, few editors accept manuscripts in LibreOffice’s default Open Document Format (ODT). However, formatting for manuscripts is simple enough that exporting files to MS Word format is no trouble. Moreover, Writer also exports to PDF (Figure 7), with enough options to give you full control over the process. The last few releases have even started to support exports to ePub, the leading free ebook format. Although the support for ePub within Writer is still limited, ODF files can be imported to the Calibre ebook manager and then converted with acceptable quality to ePub, Kindle’s MOBI, or any other popular ebook format.

In addition, Writer supports comments and tracking changes, two features that enable collaboration of exactly the kind that happens between writers and editors or critiquing readers. Using these tools, writers can accept or reject revisions and easily access revisions from within their manuscripts.

For those who are writing very long books, Writer has Master Documents, which are documents that consist of multiple files. These files can be edited separately, which reduces memory requirements and allows writers to work on different parts of the complete document at the same time.

Likewise, professionals may find features like AutoText and personal dictionaries for spell checking and hyphenation useful. Should you want to self-publish, either online or to hard copy, Writer also has the tools for professional layout and design unmatched by other word processors. With this array of tools, Writer is indispensable for serious writing.


Figure 7: Extensive PDF options are one of several reasons for writers to prefer LibreOffice.

What’s Missing

This list of applications is what I consider the best of the best. For example, there are countless text editors and word processors that I might mention; however, some are free to use but do not have free licenses. Neither have I mentioned any online tools, for the simple reason that, when you are a writer with deadlines, even the occasional Internet connection problem is too great a risk. Local apps are simply more reliable.

Also, I have left out most so-called writers' applications. Some, like FocusWriter, promise a distraction-free writing environment that I can get more conveniently in Bluefish or Vim, or even in LibreOffice by using styles and templates – and without the extra time such apps demand for reformatting before submission.

Another category I have left out is databases for fiction like bibisco. Such tools claim to help writers by peppering them with questions about characters, settings, unnecessary links, and organization. I remain deeply skeptical about such tools, because I have yet to hear of a professionally published writer who uses them. Just as importantly, they take much of the joy from writing for me, reducing the experience to something more akin to filling out a seemingly endless survey.

In the end, writing is about writing – or, failing that, streamlining necessary research so that you can return to writing as soon as possible. Properly used, the applications mentioned here should help you do just that.

Source

Logging Into Websites With Python – Linux Hint

The login feature is an important piece of functionality in today's web applications. It keeps special content away from non-users of the site and is also used to identify premium users. Therefore, if you intend to scrape a website, you could come across the login feature when the content is only available to registered users.

Web scraping tutorials have been covered in the past, so this tutorial only covers gaining access to websites by logging in with code instead of doing it manually in the browser.

To understand this tutorial and be able to write scripts for logging into websites, you would need some understanding of HTML. Maybe not enough to build awesome websites, but enough to understand the structure of a basic web page.

This will be done with the Requests and BeautifulSoup Python libraries. Aside from those libraries, you will need a good browser such as Google Chrome or Mozilla Firefox, as it will be important for the initial analysis before writing code.

The Requests and BeautifulSoup libraries can be installed with the pip command from the terminal as seen below:

pip install requests
pip install BeautifulSoup4

To confirm the success of the installation, activate Python’s interactive shell which is done by typing python into the terminal.

Then import both libraries:

import requests
from bs4 import BeautifulSoup

The import is successful if there are no errors.

The process

Logging into a website with scripts requires knowledge of HTML and an idea of how the web works. Let’s briefly look into how the web works.

Websites are made of two main parts, the client-side and the server-side. The client-side is the part of a website that the user interacts with, while the server-side is the part of the website where business logic and other server operations such as accessing the database are executed.

When you try opening a website through its link, you are making a request to the server side to fetch the HTML files and other static files such as CSS and JavaScript. This is known as a GET request. However, when you fill out a form, upload a media file or a document, or create a post and click, say, a submit button, you are sending information to the server side. This is known as a POST request.

Understanding these two concepts will be important when writing our script.

Inspecting the website

To practice the concepts of this article, we would be using the Quotes To Scrape website.

Logging into websites requires information such as the username and a password.

However since this website is just used as a proof of concept, anything goes. Therefore we would be using admin as the username and 12345 as the password.

Firstly, it is important to view the page source, as this gives an overview of the structure of the web page. This can be done by right-clicking on the web page and clicking "View page source". Next, inspect the login form: right-click on one of the login boxes and click Inspect Element. You should see input tags and a parent form tag somewhere above them. This shows that logins are basically forms being POSTed to the server side of the website.

Now, note the name attributes of the input tags for the username and password boxes; they will be needed when writing the code. For this website, the name attributes for the username and the password are username and password, respectively.

Next, we have to know if there are other parameters that are important for the login. Let's quickly explain this. To increase the security of websites, tokens are usually generated to prevent Cross-Site Request Forgery (CSRF) attacks.

Therefore, if those tokens are not added to the POST request, the login will fail. So how do we find out about such parameters?

We would need to use the Network tab. To get this tab on Google Chrome or Mozilla Firefox, open up the Developer Tools and click on the Network tab.

Once you are in the Network tab, try refreshing the current page and you will notice requests coming in. Watch out for the POST request that is sent when we try logging in.

Here is what to do next, with the Network tab open: put in the login details and try logging in; the first request you see should be the POST request.

Click on the POST request and view the form parameters. You will notice the website has a csrf_token parameter with a value. That value is dynamic, so we need to capture it with a GET request before making the POST request.

On other websites you work on, you may not see a csrf_token, but there may be other tokens that are dynamically generated. Over time, you will get better at knowing which parameters truly matter in making a login attempt.

The Code

Firstly, we need to use Requests and BeautifulSoup to get access to the page content of the login page.

from requests import Session
from bs4 import BeautifulSoup as bs

with Session() as s:
    site = s.get("http://quotes.toscrape.com/login")
    print(site.content)

This prints out the content of the login page before we log in. If you search for the "Login" keyword, you will find it in the page content, showing that we are yet to log in.
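
A quick way to run that check from the same session (site.text is simply the decoded text form of site.content):

print("Login" in site.text)  # prints True before we log in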

Next, we search for the csrf_token keyword, which we found as one of the parameters when using the Network tab earlier. If the keyword matches an input tag, then its value can be extracted with BeautifulSoup every time you run the script.

from requests import Session
from bs4 import BeautifulSoup as bs

with Session() as s:
    site = s.get("http://quotes.toscrape.com/login")
    bs_content = bs(site.content, "html.parser")
    token = bs_content.find("input", {"name": "csrf_token"})["value"]
    login_data = {"username": "admin", "password": "12345", "csrf_token": token}
    s.post("http://quotes.toscrape.com/login", login_data)
    home_page = s.get("http://quotes.toscrape.com")
    print(home_page.content)

This prints the page's content after logging in. If you search for the "Logout" keyword, you will find it in the page content, showing that we were able to log in successfully.

Let’s take a look at each line of code.

from requests import Session
from bs4 import BeautifulSoup as bs

The lines of code above are used to import the Session object from the requests library and the BeautifulSoup object from the bs4 library using an alias of bs.

A Requests Session is used when you intend to keep the context of a request, so the cookies and all other information of that session can be stored.

bs_content = bs(site.content, "html.parser")
token = bs_content.find("input", {"name": "csrf_token"})["value"]

This code here utilizes the BeautifulSoup library so the csrf_token can be extracted from the web page and then assigned to the token variable. You can learn about extracting data from nodes using BeautifulSoup.

login_data = {"username": "admin", "password": "12345", "csrf_token": token}
s.post("http://quotes.toscrape.com/login", login_data)

The code here creates a dictionary of the parameters to be used for the login. The keys of the dictionary are the name attributes of the input tags, and the values are the value attributes of the input tags.

The post method is used to send a POST request with those parameters and log us in.

home_page = s.get("http://quotes.toscrape.com")
print(home_page.content)

After logging in, these lines simply fetch the home page's content to show that the login was successful.

Conclusion

The process of logging into websites using Python is quite easy; however, websites are not all set up the same way, so some sites will prove more difficult to log into than others. There is more that can be done to overcome whatever login challenges you face.

The most important things in all of this are a knowledge of HTML, Requests, and BeautifulSoup, and the ability to understand the information obtained from the Network tab of your web browser's developer tools.

Source

Getting started with Jenkins X

Jenkins X is an open source system that offers software developers continuous integration, automated testing, and continuous delivery, known as CI/CD, in Kubernetes. Jenkins X-managed projects get a complete CI/CD process with a Jenkins pipeline that builds and packages project code for deployment to Kubernetes and access to pipelines for promoting projects to staging and production environments.

Developers are already benefiting from running “classic” open source Jenkins and CloudBees Jenkins on Kubernetes, thanks in part to the Jenkins Kubernetes plugin, which allows you to dynamically spin up Kubernetes pods to run Jenkins build agents. Jenkins X adds what’s missing from Jenkins: comprehensive support for continuous delivery and managing the promotion of projects to preview, staging, and production environments running in Kubernetes.

This article is a high-level explanation of how Jenkins X works; it assumes you have some knowledge of Kubernetes and classic Jenkins.

What you get with Jenkins X

If you’re running on one of the major cloud providers (Amazon Elastic Container Service for Kubernetes, Google Kubernetes Engine, or Microsoft Azure Kubernetes Service), installing and deploying Jenkins X is easy. Download the Jenkins X command-line interface and run the jx create cluster command. You’ll be prompted for the necessary information and, if you take the defaults, Jenkins X will create a starter-size Kubernetes cluster and install Jenkins X.
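
As a rough sketch, creating a cluster on Google Kubernetes Engine looks like this (gke here is just one of the supported provider arguments; eks and aks are others):

jx create cluster gke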

When you deploy Jenkins X, a number of services are put in motion to watch your Git repositories and respond by building, testing, and promoting your applications to staging, production, and other environments you define. Jenkins X also deploys a set of supporting services, including Jenkins, Docker Registry, Chart Museum, and Monocular to manage Helm charts, and Nexus, which serves as a Maven and npm repository.

The Jenkins X deployment also creates two Git repositories, one for your staging environment and one for production. These are in addition to the Git repositories you use to manage your project source code. Jenkins X uses these repositories to manage what is deployed to each environment, and promotions are done via Git pull requests (PRs)—this approach is known as GitOps. Each repository contains a Helm chart that specifies the applications to be deployed to the corresponding environment. Each repository also has a Jenkins pipeline to handle promotions.

Creating a new project with Jenkins X

To create a new project with Jenkins X, use the jx create quickstart command. If you don’t specify any options, jx will prompt you to select a project name and a platform—which can be just about anything. SpringBoot, Go, Python, Node, ASP.NET, Rust, Angular, and React are all supported, and the list keeps growing. Once you have chosen your project name and platform, Jenkins X will:

  • Create a new project that includes a “hello-world”-style web project
  • Add the appropriate type of makefile or build script for the chosen platform
  • Add a Jenkinsfile to manage promotions to staging and production environments
  • Add a Dockerfile and Helm charts, created via Draft
  • Add a Skaffold configuration for deploying the application to Kubernetes
  • Create a Git repository and push the new project code there
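
In its simplest form, the command is run with no options and the project name and platform are chosen interactively at the prompts described above:

jx create quickstart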

Next, a webhook from Git will notify Jenkins X that a project changed, and it will run your project’s Jenkins pipeline to build and push your Docker image and Helm charts.

Finally, the pipeline will submit a PR to the staging environment’s Git repository with the changes needed to promote the application.

Once the PR is merged, the staging pipeline will run to apply those changes and do the promotion. A couple of minutes after creating your project, you’ll have end-to-end CI/CD, and your project will be running in staging and available for use.

The figure above illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to staging. Here are the steps:

  1. The developer commits and pushes the change to the project’s Git repository
  2. Jenkins X is notified and runs the project’s Jenkins pipeline in a Docker image that includes the project’s language and supporting frameworks
  3. The project pipeline builds, tests, and pushes the project’s Helm chart to Chart Museum and its Docker image to the registry
  4. The project pipeline creates a PR with changes needed to add the project to the staging environment
  5. Jenkins X automatically merges the PR to Master
  6. Jenkins X is notified and runs the staging pipeline
  7. The staging pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project’s resources, typically a pod, service, and ingress.

Importing your existing projects into Jenkins X

When you import a project via

jx import

, Jenkins X adds the things needed for your project to be deployed to Kubernetes and participate in CI/CD. It will add a Jenkins pipeline, Helm charts, and a Skaffold configuration for deploying the application to Kubernetes. Jenkins X will create a Git repository and push the changes there. Next, a webhook from Git will notify Jenkins X that a project changed, and promotion to staging will happen as described above for new projects.

Promoting your project to production

To promote a version of your project to the production environment, use the jx promote command. This command will prepare a Git PR that contains the Helm chart changes needed to deploy into the production environment and submit this request to the production environment’s Git repository. Once the request is manually approved, Jenkins X will run the production pipeline to deploy your project via Helm.
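
A sketch of what that looks like for a hypothetical application named myapp (run jx promote --help for the exact flags on your version):

jx promote myapp --version 1.0.1 --env production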

This figure illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to production. Here are the steps:

  1. The developer runs the jx promote command to promote a project to production
  2. Jenkins X creates a PR with changes needed to add the project to the production environment
  3. The developer manually approves the PR, and it is merged to Master
  4. Jenkins X is notified and runs the production pipeline
  5. The production pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project’s resources, typically a pod, service, and ingress.

Other features of Jenkins X

Other interesting and appealing features of Jenkins X include:

Preview environments

When you create a PR to add a new feature to your project, you can ask Jenkins X to create a preview environment so you can make your new feature available for preview and testing before the PR is merged.

Extensions

It is possible to create extensions to Jenkins X. An extension is code that runs at specific times in the CI/CD process. An extension can provide code that runs when the extension is installed, uninstalled, as well as before and after each pipeline.

Serverless Jenkins

Instead of running the Jenkins web application, which continually consumes CPU and memory resources, you can run Jenkins only when you need it. During the past year, the Jenkins community created a version of Jenkins that can run classic Jenkins pipelines via the command line with the configuration defined by code instead of HTML forms.

This capability is now available in Jenkins X. When you create a Jenkins X cluster, you can choose to use Serverless Jenkins. If you do, Jenkins X will deploy Prow to handle webhooks from GitHub and Knative to run Jenkins pipelines.

Jenkins X limitations

Jenkins X also has some limitations that should be considered:

  • Jenkins X is currently limited to projects that use Git: Jenkins X is opinionated about CI/CD and assumes everybody wants to run and deploy software to Kubernetes and everybody is happy to use Git for source code and defining environments. Also, the Serverless Jenkins feature currently works only with GitHub.
  • Jenkins X is limited to Kubernetes: It is true that Jenkins X can run automated builds, testing, and continuous integration for any type of software, but the continuous delivery part targets a Kubernetes namespace managed by Jenkins X.
  • Jenkins X requires cluster-admin level Kubernetes access: Jenkins X needs cluster-admin access so it can define and manage a Kubernetes custom resource definition. Hopefully, this is a temporary limitation, because it could be a show-stopper for some.

Conclusions

Jenkins X looks to be a good way to implement CI/CD for Kubernetes, and I’m looking forward to putting it to the test in production. Using Jenkins X is also a good way to learn about some useful open source tools for deploying to Kubernetes, including Helm, Draft, Skaffold, Prow, and more. These are things you might want to use even if you decide Jenkins X is not for you. If you’re deploying to Kubernetes, take Jenkins X for a spin.

Source

Install Redis on CentOS 7.5 – Linux Hint

Redis is a fast, database-like server that can be used as an in-memory cache or data store. It's very popular in the context of scalable websites because it can store data in memory and be sharded to hold large volumes of data, providing lightning-fast results to users on the web. Today we will look at how to install Redis on CentOS 7.5 and get started with its usage.

Update Yum

First start by updating your system to keep other packages up to date with yum update.

Extra Packages for Enterprise Linux(EPEL)

The Redis server is not in the default repository on a standard CentOS 7 install, so we need to install the EPEL package to get access to more packages.

[root@centos7-linuxhint ~]# yum install epel-release

After installing epel, you need to run yum update again.

[root@centos7-linuxhint ~]# yum update

Install Redis Server Package

Now that EPEL has been added, a simple yum install command will install the Redis server software.

[root@centos7-linuxhint ~]# yum -y install redis

After installation, you will have the redis-server and redis-cli commands on your system. You can also see that a redis service has been installed.

Start the Redis Server

Even though you can technically start a Redis server using the built-in commands, let's use the service provided with CentOS to start, stop, and check the status of the Redis server on the system.

[root@centos7-linuxhint ~]# service redis start

It should be running now; check it with the status command:
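
[root@centos7-linuxhint ~]# service redis status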

Storing and Retrieving Data

OK, now that Redis is running, let's start with a trivial example: store a key-value pair and then see how to query it. We will use redis-cli with the default options, which connect to a server on localhost and the default Redis port. Also note that, in the real world, you should set up proper security for your Redis instances.

We will use the set and get commands to store and retrieve a key-value pair on the server. Here is an example session (the key name and value are arbitrary placeholders):
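
[root@centos7-linuxhint ~]# redis-cli
127.0.0.1:6379> set mykey "Hello World"
OK
127.0.0.1:6379> get mykey
"Hello World"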

You can also use the inline help to get a list of all the possible commands and the help text with them. Enter interactive mode from the redis-cli and then type help as shown below:
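
127.0.0.1:6379> help
127.0.0.1:6379> help get

The second form prints the summary, arguments, and command group for a single command (get, in this example).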

Redis: More information

For more information check out the following links below:

Source

Three SSH GUI Tools for Linux | Linux.com

At some point in your career as a Linux administrator, you’re going to use Secure Shell (SSH) to remote into a Linux server or desktop. Chances are, you already have. In some instances, you’ll be SSH’ing into multiple Linux servers at once. In fact, Secure Shell might well be one of the most-used tools in your Linux toolbox. Because of this, you’ll want to make the experience as efficient as possible. For many admins, nothing is as efficient as the command line. However, there are users out there who do prefer a GUI tool, especially when working from a desktop machine to remote into and work on a server.

If you happen to prefer a good GUI tool, you’ll be happy to know there are a couple of outstanding graphical tools for SSH on Linux. Couple that with a unique terminal window that allows you to remote into multiple machines from the same window, and you have everything you need to work efficiently. Let’s take a look at these three tools and find out if one (or more) of them is perfectly apt to meet your needs.

I’ll be demonstrating these tools on Elementary OS, but they are all available for most major distributions.

PuTTY

Anyone who's been around long enough knows about PuTTY. In fact, PuTTY is the de facto standard tool for connecting, via SSH, to Linux servers from the Windows environment. But PuTTY isn't just for Windows: it can also be installed on Linux from within the standard repositories. PuTTY's feature list includes:

  • Saved sessions.
  • Connect via IP address or hostname.
  • Define alternative SSH port.
  • Connection type definition.
  • Logging.
  • Options for keyboard, bell, appearance, connection, and more.
  • Local and remote tunnel configuration
  • Proxy support
  • X11 tunneling support

The PuTTY GUI is mostly a way to save SSH sessions, so it’s easier to manage all of those various Linux servers and desktops you need to constantly remote into and out of. Once you’ve connected, from PuTTY to the Linux server, you will have a terminal window in which to work. At this point, you may be asking yourself, why not just work from the terminal window? For some, the convenience of saving sessions does make PuTTY worth using.

Installing PuTTY on Linux is simple. For example, on a Debian-based distribution, you could issue the command:

sudo apt-get install -y putty

Once installed, you can either run the PuTTY GUI from your desktop menu or issue the command putty. In the PuTTY Configuration window (Figure 1), type the hostname or IP address in the HostName (or IP address) section, configure the port (if not the default 22), select SSH from the connection type, and click Open.

Once the connection is made, you’ll then be prompted for the user credentials on the remote server (Figure 2).

To save a session (so you don’t have to always type the remote server information), fill out the IP address (or hostname), configure the port and connection type, and then (before you click Open), type a name for the connection in the top text area of the Saved Sessions section, and click Save. This will then save the configuration for the session. To then connect to a saved session, select it from the saved sessions window, click Load, and then click Open. You should then be prompted for the remote credentials on the remote server.

EasySSH

Although EasySSH doesn’t offer the amount of configuration options found in PuTTY, it’s (as the name implies) incredibly easy to use. One of the best features of EasySSH is that it offers a tabbed interface, so you can have multiple SSH connections open and quickly switch between them. Other EasySSH features include:

  • Groups (so you can group tabs for an even more efficient experience).
  • Username/password save.
  • Appearance options.
  • Local and remote tunnel support.

Installing EasySSH on a Linux desktop is simple, as the app can be installed via Flatpak (which does mean you must have Flatpak installed on your system). Once Flatpak is installed, add EasySSH with the commands:

sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

sudo flatpak install flathub com.github.muriloventuroso.easyssh

Run EasySSH with the command:

flatpak run com.github.muriloventuroso.easyssh

The EasySSH app will open, where you can click the + button in the upper left corner. In the resulting window (Figure 3), configure your SSH connection as required.

Once you’ve added the connection, it will appear in the left navigation of the main window (Figure 4).

To connect to a remote server in EasySSH, select it from the left navigation and then click the Connect button (Figure 5).

The one caveat with EasySSH is that you must save the username and password in the connection configuration (otherwise the connection will fail). This means anyone with access to the desktop running EasySSH can remote into your servers without knowing the passwords. Because of this, you must always remember to lock your desktop screen any time you are away (and make sure to use a strong password). The last thing you want is to have a server vulnerable to unwanted logins.

Terminator

Terminator is not actually an SSH GUI. Instead, Terminator functions as a single window that allows you to run multiple terminals (and even groups of terminals) at once. Effectively, you can open Terminator, split the window vertically and horizontally (until you have all the terminals you want), and then connect to all of your remote Linux servers by way of the standard SSH command (Figure 6).
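
In each pane, that just means running something along these lines (the user and host are placeholders):

ssh jack@192.168.1.100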

To install Terminator, issue a command like:

sudo apt-get install -y terminator

Once installed, open the tool either from your desktop menu or with the command terminator. With the window open, you can right-click inside Terminator and select either Split Horizontally or Split Vertically. Continue splitting the terminal until you have exactly the number of terminals you need, and then start remoting into those servers.

The caveat to using Terminator is that it is not a standard SSH GUI tool, in that it won't save your sessions or give you quick access to those servers. In other words, you will always have to manually log into your remote Linux servers. However, being able to see your remote Secure Shell sessions side by side does make administering multiple remote machines quite a bit easier.

Few (But Worthwhile) Options

There aren’t a lot of SSH GUI tools available for Linux. Why? Because most administrators prefer to simply open a terminal window and use the standard command-line tools to remotely access their servers. However, if you have a need for a GUI tool, you have two solid options and one terminal that makes logging into multiple machines slightly easier. Although there are only a few options for those looking for an SSH GUI tool, those that are available are certainly worth your time. Give one of these a try and see for yourself.

Source

How to automatically join Ubuntu clients to a Univention Corporate Server domain – NoobsLab

Univention Corporate Server (UCS) provides a shared trust and security context called a domain. This means the members of the domain know and trust each other. To get access to resources and services provided within the domain, users and computers have to join the domain.

Linux clients, such as Ubuntu and its derivatives, have long been able to join a UCS domain, but doing so always involved a rather lengthy process of copying commands from the Univention documentation to the command line of the involved systems.

In April 2018, Univention released the new "Ubuntu Domain Join Assistant", which provides both a graphical interface, making the whole join process much easier and faster, and a CLI tool to automate the domain join of many Ubuntu clients.

Let's get started – the graphical way:

Let's assume we have just installed a Univention Corporate Server in the current version, 4.3, and a plain Ubuntu 18.04 client – both with an out-of-the-box feature set and without any modifications. Using the optional software component "DHCP server" in UCS is not strictly necessary, but recommended.

Now log in to your Ubuntu client with a user. To install on Ubuntu 18.04/16.04/14.04 or Linux Mint 19/18/17, open a terminal (press Ctrl+Alt+T) and run the following commands:
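
The sequence is roughly the following sketch; the PPA path is a placeholder here, so check Univention's announcement for the exact archive name:

sudo add-apt-repository ppa:<univention-domain-join-ppa>
sudo apt-get update
sudo apt-get install univention-domain-join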

Those commands will add the “Personal Package Archive” (PPA) for the Ubuntu Domain Join Assistant, update the package index and then install the package “univention-domain-join”.

A successful installation will add the Ubuntu Domain Join Assistant to the start menu:

Now start the “Ubuntu Domain Join Assistant”. Since we are going to change some system settings, we need to authenticate as a privileged user:

The “Ubuntu Domain Join Assistant” will open. We need to enter the Domain Name (in my example that's “intranet.cabbage.org”) OR the IP address of the UCS Master. Additionally we need a privileged Domain User that is allowed to join computers, e.g. “Administrator”, and the corresponding password:

The join process might take some seconds. Afterwards we need to reboot the Ubuntu client:

Now we can log in with a Domain User:


You will also notice that a corresponding computer object has been created in UCS:

Automate it!
As mentioned above, Univention also provides a CLI tool that can be utilized to automate the whole process. It can be integrated into any automation or configuration management tool, or you can wrap it in your own shell script – we just need to:

– optionally copy a file with the domain join password to the client (or provide it directly with the "--password" option)

– run “univention-domain-join-cli” with the needed options, e.g.:
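
A hypothetical invocation (the option names and values here are placeholders for illustration; check univention-domain-join-cli --help for the exact flags):

sudo univention-domain-join-cli --username Administrator --password-file /root/domain-join.pass --master-ip 10.200.8.10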

This way we could join dozens of Ubuntu clients at once, or even install and configure them with the help of software management software like opsi (https://www.univention.com/blog-en/2018/05/automated-maintenance-of-linux-desktop-clients-in-the-ucs-domain-with-opsi/).

The CLI tool shows more information during the join process than the graphical interface, but both tools provide a logfile underneath “/var/log/univention/”.

We hope this gives you an impression of what is possible with Univention Corporate Server and how easy the administration of a UCS domain can be. If you would like further information on UCS, check out this article: Univention Corporate Server An Enterprise Linux (Overview And Tutorial)

https://www.noobslab.com/2018/10/univention-corporate-server-tutorial.html

Source

Ubuntu Killall Command – Linux Hint

Every single Linux distro is a collection of standard (and some other) tools at its core. Ubuntu, being one of the most popular ones, offers the most popular and powerful Linux tools in the wild. "killall" is one such powerful tool at your disposal. Let's find out what you can do with "killall". It's just like "kill", but with a lot more power in its pocket. It's a CLI tool, and caution is a must, as a wrong command can render your system completely useless.

"killall" follows this structure –
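
killall [OPTION]... [PROCESS_NAME]...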

According to the man page of "killall", the tool sends a signal to the target processes. If no signal type is specified, the default is SIGTERM. If "killall" is able to kill at least one process matching the requirements, it returns a zero return code. A "killall" process never kills itself.

Killing a process

This is the most basic usage of the "killall" command. All you have to do is pass the name of the process.

For example, I've got GNOME Disks open, with the process name "gnome-disks". To kill the process, run the following command –
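
killall gnome-disks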

Asking for permission

When you're running "killall" commands, there's a pretty good chance that you'll kill something unintended. To have "killall" ask for confirmation before each kill, use the "-i" flag.
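
For example, the following will prompt for confirmation before killing each matching process:

killall -i gnome-disks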

Case sensitivity

Generally, “killall” is a case-sensitive tool, so make sure that you type the name correctly.

# Wrong command
killall GNOME-disks

# Correct command
killall gnome-disks

If you want to force "killall" to be case-insensitive, use the "-I" flag.
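
For example, the mistyped name from the previous section would still match:

killall -I GNOME-disks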

Choosing the ENDING signal

There are different types of termination signals available. If you want to send a specific signal, use one of the following forms –

killall -s [SIGNAL] [process_name]
# OR
killall --signal [SIGNAL] [process_name]
# OR
killall -[SIGNAL] [process_name]

To find out the list of available signals, use the "-l" flag.
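
killall -l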

Killing processes by time

You can also tell "killall" to terminate processes depending on how long they have been running!

killall -o [TIME]
# OR
killall --older-than [TIME]

For example,
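
killall -o 2h [process_name]

(Time values are a number plus a unit such as s, m, h, or d; most versions of killall still expect a process name, or a -u user, alongside -o.)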

This command will kill all the processes that have been running for more than 2 hours.

killall -y [TIME]
# OR
killall --younger-than [TIME]

For example,
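
killall -y 2h [process_name]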

This command will kill all the processes that are younger than 2 hours.

Killing all the processes owned by a user

This is a very risky thing to do and may even render your system useless unless you restart it. Make sure that you have all your important tasks finished.

The structure goes like this –
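
killall -u [USERNAME]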

For example,
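
killall -u viktor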

This command will kill everything under the user “viktor”.

Other “killall” commands

There are a number of other options available for "killall". For the short list, use the following command –
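
killall --help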

For an in-depth explanation of every single parameter and option, the man page is the best resource.

You can export the man page to a separate text file for reading later.

man killall > ~/Desktop/killall.txt

Enjoy!

Source

Valve’s card game Artifact is running very well on Linux, releasing next week

Artifact, Valve’s newest game, is due out on November 28th and it will be coming with same-day Linux support. Valve provided me with an early copy and it’s pleasing to see it running well.

We won't have any formal review until after release; however, I do have some rather basic initial thoughts from a few hours with the beta today. Mainly, I just wanted to assure people it's running nicely on Linux. I also don't want to break any rules by saying too much before release…

Some shots of the beta on Ubuntu 18.10 to start with. First up is a look at the three lanes during the hero placement section, which gives you a choice of where to put them. It's interesting, because you can only play coloured cards if you have a hero of that colour in the same lane.

Heroes are your essential cards, of course, for a number of reasons. They can really turn the tide when things get ugly. They can buff up other cards, they have their own special abilities, you can equip items on them to buff them further, and so on. Honestly, I'm a little blown away at the level of detail here.

For those collectors amongst our readers, here’s a little shot while opening a Booster Pack with the last one always being a rare card:

Lanes can extend across the screen, as shown here where I have an additional four cards not shown. You can amass a pretty big army of heroes and creeps. In this particular screenshot, I had already taken down the tower (there’s one in each lane) which was replaced with an Ancient in this lane and so with my current combined attack power this was a fatal finishing blow to my opponent (destroying an Ancient is an instant win).

I haven't so far come across any Linux-specific issues; it certainly looks like Valve has given the Linux version plenty of attention. I would have been surprised if it wasn't running well, given Valve's focus on Linux lately. For those of you who might have had some worries—fear not!

It's worth mentioning they have been through a bit of controversy lately, with a bit of a backlash against the monetization model. This was amplified somewhat because Valve didn't put enough focus into certain areas of the game. Valve responded here to say they've added additional modes to practice and play with friends, along with allowing you to convert unwanted cards into event tickets. It sounds like they're going in the right direction with it, and it is good to see them act on feedback.

It’s going to be interesting to see what more of you think of it once it has released. For me personally, I think I’m going to quite enjoy it. What I honestly thought would confuse the hell out of me, so far, hasn’t in any way. There’s quite a bit to learn of course and certain elements to it are quite complex, but it’s nothing like what I expected.

You can follow along and wishlist it on Steam. As always, do ensure your platform preference is set on Steam in your account preferences at the bottom. More thorough thoughts will be up at release on the 28th.

Source
