6 Best Modern Linux ‘init’ Systems (1992-2023).

In Linux and other Unix-like operating systems, the init (initialization) process is the first process executed by the kernel at boot time. It has a process ID (PID) of 1 and runs in the background until the system is shut down.

The init process starts all other Linux processes, such as daemons, services, and other background processes; it is therefore the mother of all other processes on the system.

A process can start many other child processes on the system, but if a parent process dies, init becomes the parent of the orphan process.
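
You can confirm this on any Linux system from the /proc filesystem; a quick sketch (the process name shown for PID 1 varies by distribution):

```shell
# PID 1 is always the init process; its name can be read from /proc
# (prints "systemd", "init", "runit", etc., depending on the distribution).
cat /proc/1/comm

# Every process records its parent's PID in /proc/<pid>/status;
# orphaned processes are re-parented to init (PPid: 1) or a subreaper.
grep '^PPid:' /proc/$$/status
```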

Linux init Systems (1992-2015)

Over the years, many init systems have emerged in major Linux distributions and in this guide, we shall take a look at some of the best init systems you can work with on the Linux operating system.

1. System V Init

System V (SysV) is a mature and popular init scheme on Unix-like operating systems, and its init process is the parent of all other processes on a Unix/Linux system. System V itself was among the first commercial Unix operating systems.

Almost all Linux distributions first used the SysV init scheme, except Gentoo, which has a custom init, and Slackware, which uses a BSD-style init scheme.

Over the years, owing to its imperfections, several SysV init replacements have been developed in the quest to create more efficient init systems for Linux.

Although these alternatives seek to improve SysV and probably offer new features, they are still compatible with original SysV init scripts.

2. SystemD

SystemD is a relatively new init scheme on the Linux platform. Introduced in Fedora 15, it is an assortment of tools for easy system management. The main purpose is to initialize, manage, and keep track of all system processes in the boot process and while the system is running.

Systemd init is comprehensively distinct from traditional Unix init systems in the way it practically approaches system and service management. It is also compatible with SysV and LSB init scripts.

It has some of the following eminent features:

  • Clean, straightforward, and efficient design
  • Concurrent and parallel processing at bootup
  • Better APIs
  • Enables removal of optional processes
  • Supports event logging using journald
  • Supports job scheduling using systemd calendar timers
  • Storage of logs in binary files
  • Preservation of systemd state for future reference
  • Better integration with GNOME plus many more
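
To give a feel for how systemd manages services, here is a minimal, hypothetical unit file (the daemon path and names are made up for illustration); dropping it into /etc/systemd/system and running systemctl start mydaemon would put the process under systemd's supervision:

```ini
# /etc/systemd/system/mydaemon.service (hypothetical example)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```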

3. Upstart

Upstart is an event-based init system developed by the makers of Ubuntu as a replacement for the SysV init system. It starts different system tasks and processes, inspects them while the system is running, and stops them during system shutdown.

It is a hybrid init system that uses both SysV startup scripts and Systemd scripts. Some of the notable features of the Upstart init system include:

  • Originally developed for Ubuntu Linux but can run on all other distributions
  • Event-based starting and stopping of tasks and services
  • Events are generated during the starting and stopping of tasks and services
  • Events can be sent by other system processes
  • Communication with the init process through D-Bus
  • Users can start and stop their processes
  • Re-spawning of services that die abruptly and many more
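
A minimal Upstart job illustrates the event-based model described above; this is a hypothetical sketch (daemon path and names are made up), placed in /etc/init:

```
# /etc/init/mydaemon.conf (hypothetical Upstart job)
description "Example daemon"

# Events that start and stop the job
start on runlevel [2345]
stop on runlevel [016]

# Re-spawn the daemon if it dies abruptly
respawn
exec /usr/local/bin/mydaemon --foreground
```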

4. OpenRC

OpenRC is a dependency-based init scheme for Unix-like operating systems that is compatible with SysV init. While it brings some improvements over SysV, keep in mind that OpenRC is not an absolute replacement for the /sbin/init file.

It offers some illustrious features including:

  • It can run on many Linux distributions, including Gentoo, and also on BSD systems
  • Supports hardware-initiated init scripts
  • Supports a single configuration file
  • No per-service configurations supported
  • Runs as a daemon
  • Parallel services startup and many more
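
OpenRC service scripts are plain shell files with declarative variables and a depend() block that expresses the dependency-based ordering; a hypothetical sketch (the daemon path is made up):

```sh
#!/sbin/openrc-run
# /etc/init.d/mydaemon (hypothetical OpenRC service script)
command="/usr/local/bin/mydaemon"
command_args="--foreground"
command_background="yes"
pidfile="/run/mydaemon.pid"

depend() {
    need net            # start only after networking is up
}
```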

5. runit

runit is also a cross-platform init system that can run on GNU/Linux, Solaris, *BSD, and Mac OS X. It is an alternative to SysV init that offers service supervision.

It comes with some benefits and remarkable components not found in SysV init and possibly other Linux init systems, including:

  • Service supervision, where each service is associated with a service directory
  • A clean process state: each service is guaranteed to start in a clean process state
  • It has a reliable logging facility
  • Fast system boot-up and shutdown
  • It is also portable
  • Packaging friendly
  • Small code size and many more
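
In runit, each supervised service is just a directory containing an executable run script; a minimal hypothetical example (paths are illustrative):

```sh
#!/bin/sh
# /etc/sv/mydaemon/run (hypothetical runit service directory)
# runsv supervises this script and re-spawns the daemon if it dies.
exec 2>&1
exec /usr/local/bin/mydaemon --foreground
```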

6. s6

s6 offers a compact set of tools for UNIX, tailored for process supervision, similar to daemontools and runit. It facilitates operations on processes and daemons.

Designed as a low-level service administration toolkit, s6 provides diverse tools that can function independently or within its framework. These tools, when combined, deliver robust functionality with minimal code.

As mentioned earlier, the init system starts and manages all other processes on a Linux system. SysV was long the primary init scheme on Linux operating systems, but due to some performance weaknesses, system programmers have developed several replacements for it.

Here, we looked at a few of those replacements, but there could be other init systems that you think are worth mentioning.

Source.

Kubernetes Cheatsheet: Essential Commands and Concepts for Efficient Container Orchestration

Kubernetes Cheatsheet

Kubernetes Basics:

  • kubectl version: Check the Kubernetes client and server versions.
  • kubectl cluster-info: View cluster details.
  • kubectl get pods: List all pods in the current namespace.
  • kubectl get nodes: List all nodes in the cluster.
  • kubectl describe pod [pod-name]: Get detailed information about a pod.

Creating and Managing Resources:

  • kubectl create -f [yaml-file]: Create a resource from a YAML file.
  • kubectl apply -f [yaml-file]: Apply changes to a resource.
  • kubectl delete [resource-type] [resource-name]: Delete a resource.
  • kubectl edit [resource-type] [resource-name]: Edit a resource in the default text editor.
  • kubectl get [resource-type]: List resources of a specific type.
  • kubectl logs [pod-name]: View logs of a pod.
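
As a concrete example of kubectl apply -f, a minimal pod manifest might look like this (the pod name and image are illustrative):

```yaml
# demo-pod.yaml (hypothetical manifest)
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Running kubectl apply -f demo-pod.yaml creates the pod, and kubectl logs demo-pod then shows its output.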

Scaling:

  • kubectl scale deployment [deployment-name] --replicas=[num-replicas]: Scale a deployment.
  • kubectl autoscale deployment [deployment-name] --min=[min-replicas] --max=[max-replicas]: Autoscale a deployment.

Networking:

  • kubectl expose [resource-type] [resource-name] --port=[port] --target-port=[target-port] --type=[service-type]: Expose a resource as a service.
  • kubectl get svc: List services.
  • kubectl port-forward [pod-name] [local-port]:[pod-port]: Forward ports from a local machine to a pod.

Configuration:

  • kubectl config view: View the current configuration.
  • kubectl config use-context [context-name]: Set the current context.

Pods:

  • kubectl exec -it [pod-name] -- [command]: Execute a command in a pod.
  • kubectl run [pod-name] --image=[image-name]: Create a new pod running a specific image.

Namespaces:

  • kubectl create namespace [namespace-name]: Create a new namespace.
  • kubectl get namespaces: List namespaces.
  • kubectl config set-context --current --namespace=[namespace-name]: Set the default namespace.

Secrets and ConfigMaps:

  • kubectl create secret generic [secret-name] --from-literal=[key]=[value]: Create a secret.
  • kubectl create configmap [configmap-name] --from-literal=[key]=[value]: Create a ConfigMap.
  • kubectl describe secret [secret-name]: View secret details.
  • kubectl describe configmap [configmap-name]: View ConfigMap details.

Storage:

  • kubectl get pv: List persistent volumes.
  • kubectl get pvc: List persistent volume claims.

Advanced Troubleshooting:

  • kubectl describe [resource-type] [resource-name]: Get detailed information about a resource.
  • kubectl top [resource-type] [resource-name]: Display resource usage statistics.

Remember to replace placeholders like [resource-type], [resource-name], [pod-name], etc., with your actual resource and object names.

This cheatsheet should help you get started with Kubernetes and serve as a handy reference as you work with containers and orchestration in Kubernetes.

How to Check CPU Cores in Ubuntu.

Understanding the number of CPUs on your Ubuntu system is essential for a variety of tasks, including performance optimization, troubleshooting, and knowledge of system capabilities.

This article will examine several techniques for determining Ubuntu’s CPU count without the use of any external programs. To accommodate various user preferences, we will put a priority on command-line strategies and graphical user interface (GUI) tools.

Using the terminal is one of the simplest ways to check the number of CPUs in your Ubuntu system using various commands.

1. lscpu Command – Show CPU Architecture Information

The lscpu utility in Ubuntu is a useful command that offers comprehensive data on the CPU (Central Processing Unit) structure and its functionalities.

Users can acquire vital information like the number of CPUs or cores, CPU vendor details, cache dimensions, clock rates, and other essential details.

By employing the lscpu command, Ubuntu users can obtain valuable knowledge regarding their system’s CPU setup and utilize this information for diverse objectives such as system enhancement, performance assessment, and problem-solving.

The lscpu tool is part of the util-linux package, which comes preinstalled on Ubuntu; if it is missing, you can install it and then run lscpu:

$ sudo apt-get install util-linux
$ lscpu
Ubuntu CPU Architecture Information

Look for the “CPU(s)” field to identify the number of CPUs.
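
Since lscpu's output is long, you can filter just that field; a small sketch:

```shell
# Print only the logical CPU count line from lscpu's output
lscpu | grep -E '^CPU\(s\):'

# Or extract just the number
lscpu | awk '/^CPU\(s\):/ {print $2}'
```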

2. cat /proc/cpuinfo – Show CPU Processor Info

The cat /proc/cpuinfo command is another way to retrieve detailed information about the CPU(s) on a Ubuntu system. It reads the /proc/cpuinfo file, which contains information about each CPU core.

When you run this command, it displays a comprehensive list of CPU-related details, including hardware configuration, number of CPUs, cores, etc.

$ cat /proc/cpuinfo
Get Ubuntu CPU Core Information

In order to get the total number of CPUs, count the number of distinct processor fields in the output. Each processor field represents a separate CPU core.

For example, let’s say the output of the command contains the following information:

processor  : 0
vendor_id  : GenuineIntel
cpu family : 6
model      : 158
...
processor  : 1
vendor_id  : GenuineIntel
cpu family : 6
model      : 158
...

In this case, there are two distinct processor fields (processor 0 and processor 1), indicating that there are two CPUs or CPU cores in the system.
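
Rather than counting by eye, you can let grep count the processor entries for you:

```shell
# Each core has its own "processor : N" stanza in /proc/cpuinfo,
# so counting those lines gives the number of logical CPUs.
grep -c '^processor' /proc/cpuinfo
```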

3. nproc Command – Show Processing Units or CPU Cores

Using the nproc command, users can quickly display the number of CPUs or CPU cores present in their system. The output is simply a number representing the CPU count.

nproc is part of the coreutils package, which comes preinstalled on Ubuntu; if needed, install it and then run nproc:

$ sudo apt install coreutils
$ nproc
Show Ubuntu CPU Cores

4. Hwinfo Command – Show CPU Hardware Components

The hwinfo command in Ubuntu is a powerful utility that gives thorough hardware details about your system. You can learn about numerous components, including CPUs, RAM, disks, network interfaces, and more, in depth.

You can access a comprehensive report with hardware-related statistics by running the hwinfo command in the terminal. Understanding the setup of your system will help you fix hardware issues and improve performance.

To install hwinfo in Ubuntu:

$ sudo apt install hwinfo

Since hwinfo provides detailed information about hardware components, the output can be quite lengthy. Therefore, we will narrow it down by telling hwinfo to fetch only CPU-related information and filtering it with the grep command.

$ hwinfo --cpu | grep "Units/Processor"
Show CPU Related Information

5. getconf _NPROCESSORS_ONLN Command

You can easily find out how many CPUs or online processors are currently in use on your system with Ubuntu’s “getconf _NPROCESSORS_ONLN” command. You may get an easy-to-understand numeric output showing the number of active CPUs by typing this command into the terminal.

getconf is usually pre-installed on Ubuntu as part of the libc-bin package, but if not, you can install it:

$ sudo apt install libc-bin

To get the number of CPUs:

$ getconf _NPROCESSORS_ONLN
Find Number of Ubuntu CPUs

Numerous graphical user interfaces (GUI) tools provided by Ubuntu allow users to check the number of central processing units (CPUs) in their operating system.

These utilities present a user-friendly interface for showcasing system data, encompassing CPU specifications. Presented below is a selection of well-known GUI utilities that facilitate checking CPU quantity.

6. Gnome System Monitor

GNOME System Monitor is an elegant graphical program included in Ubuntu that offers real-time resource monitoring. The performance of your system’s CPU, memory, network, and disk utilization may be tracked and analyzed using its user-friendly interface.

You can simply monitor resource usage, spot any bottlenecks, and effectively manage activities with GNOME System Monitor.

If you have a Gnome environment then the Gnome system monitor is already installed. If not you can simply install it with the following command:

$ sudo apt install gnome-system-monitor
$ gnome-system-monitor

Head to the resources section to find out the number of CPUs and their usage.

Gnome System Monitor

7. Hardinfo

Hardinfo is an extensive tool for Ubuntu that offers in-depth insights into various hardware components and system configurations. It has a very user-friendly interface which will help you gather information related to your system.

To install the hardinfo graphical tool run:

$ sudo apt install hardinfo
$ hardinfo

Then head towards the Processor tab on the left-hand side of the app:
HardInfo - Check Hardware Information in Ubuntu

Here you can view the number of CPUs in your system.

Conclusion

This article delved into different techniques for verifying the CPU count in Ubuntu. Whether you favor the terminal or a graphical interface, Ubuntu provides multiple pre-installed choices for obtaining CPU data. By grasping the CPU count in your system, you can efficiently oversee system performance, address problems, and enhance resource allocation. Keep in mind to select the approach that aligns with your inclination and relish the advantages of comprehending your Ubuntu system thoroughly. Have a delightful computing experience!

Compare Files in Linux With These Tools.

Whether you’re a programmer, a creative professional, or someone who just wants to browse the web, there are times when you need to find the differences between files.

There are two main tools that you can use for comparing files in Linux:

  • diff: A command line utility that comes preinstalled on most Linux systems. The diff command has a learning curve.
  • Meld: A GUI tool that you can install to compare files and directories. It is easier to use, especially for desktop users.

But there are several other tools with different features for comparing files. Here, let me mention some useful GUI and CLI tools for checking the differences between files and folders.

Note: The tools aren’t ranked in any particular order. Choose what you find the best for you.

1. Diff command

diff command

Diff stands for difference (obviously!) and is used to find the difference between two files by scanning them line by line. It’s a core UNIX utility, developed in the 70s.

Diff will show you lines that are required to change in compared files to make them identical.

Key Features of Diff:

  • Uses special symbols and characters to indicate lines required to change to make both files identical.
  • Goes through line by line to provide the best possible result.

And, the best part is, diff comes pre-installed in every Linux distro.

As you can see in the screenshot above, it’s not easy to understand the diff command output on the first attempt. Worry not. We have a detailed guide on using the diff command for you to explore.
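
To see what that output looks like, here is a tiny self-contained run (file names and contents are made up for illustration):

```shell
# Two throwaway files that differ on line 2
printf 'red\ngreen\nblue\n'  > old.txt
printf 'red\nyellow\nblue\n' > new.txt

# "2c2" means: change line 2 of the first file into line 2 of the second.
# diff exits with status 1 when the files differ, hence the "|| true".
diff old.txt new.txt || true

rm -f old.txt new.txt
```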

2. Colordiff command

colordiff utility

For some reason, if you find Diff utility a bit bland in terms of colors, you can use Colordiff which is a modified version of the diff command utility with enhanced color and highlighting.

Key Features of Colordiff:

  • Syntax highlighting with attractive colors.
  • Improved readability over the Diff utility.
  • Licensed under GPL and has digitally signed source code.
  • Customizable

Installation:

Colordiff is available in the default repository of almost every popular Linux distribution and if you’re using any Debian derivative, you can type in the following:

sudo apt install colordiff

3. Wdiff command

wdiff

Wdiff is a command-line front end to the Diff utility, and it takes a different approach to comparing files: it scans them on a word-per-word basis.

It starts by creating two temporary files and runs Diff over them. Finally, it collects the output, and you’re presented with the word differences between the two files.

Key Features of Wdiff:

  • Supports multiple languages.
  • Ability to add colorized output by integrating with Colordiff.

Installation:

Wdiff is available in the default repository of Debian derivatives and other distros. For Ubuntu-based distros, use the following command to get it installed:

sudo apt install wdiff

4. Vimdiff command

vimdiff

Key Features of Vimdiff:

  • Ability to export the results on an HTML web page.
  • Can also be used with Git.
  • Customization (of course).
  • Ability to use it as CLI and GUI tool.

It’s one of the most powerful features that you get with the Vim editor. Whether you are using Vim in your terminal or the GUI version, you can use the vimdiff command.

Vimdiff works in a more advanced manner than the usual diff utility. For starters, when you enter the vimdiff command, it opens the Vim editor showing the usual diff. However, if you know your way around Vim and its commands, you can perform a variety of tasks along with it.

So, I’d highly recommend you to get familiar with the basic commands of Vim if you intend to use this. Furthermore, having an idea of how to use buffers in Vim will be beneficial.

Installation:

To use Vimdiff, you would need to have Vim installed on your system. We also have a tutorial on how to install the latest Vim on Ubuntu.

You can use the command below to get it installed (if you’re not worried about the version you install):

sudo apt install vim

5. Gitdiff command

gitdiff

As its name suggests, this utility works over a Git repository.

This command utilizes the diff command we discussed earlier and runs it over Git data sources. That can be anything from commits and branches to files and a lot more.

Key features of Gitdiff:

  • Ability to determine changes between multiple git data sources.
  • Can also be used with binary files.
  • Supports highlighting with colors.

Installation:

Gitdiff does not require any separate installation as long as Git is already on your system. And if you’re looking for the most recent version, we have a tutorial on how to install the latest Git version on Ubuntu.

Or, you can just follow the given command to install Git on your Ubuntu-based distro:

sudo apt install git
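
A quick, self-contained demonstration (the repository and file names are made up; assumes Git is installed):

```shell
# Create a throwaway repository with one committed file
git init -q demo && cd demo
echo 'hello' > greeting.txt
git add greeting.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial'

# Modify the file, then ask Git for the unstaged changes
echo 'hello world' > greeting.txt
git diff            # full patch output
git diff --stat     # one-line summary per changed file

cd .. && rm -rf demo
```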

6. Kompare

kompare

Looking for a GUI tool that not just differentiates files, but also allows you to create and apply patches to them?

Then Kompare by KDE will be an interesting choice!

Primarily, it is used to view source files to compare and merge. But, you can get creative with it!

Kompare can be used over multiple files, and directories and supports multiple Diff formats.

Key Features of Kompare:

  • Offers statistics of differences found between compared files.
  • Bézier-based connection widget shows the source and destination of files.
  • Source and destination can also be changed with commands.
  • Easy to navigate UI.
  • Allows you to create and apply patches.
  • Support for various Diff formats.
  • Appearance can be customized to some extent.

Installation:

Being part of the KDE family, Kompare can be found easily on the default repository of popular Linux distros and the software center. But, if you prefer the command-line, here’s the command:

sudo apt install kompare

7. Meld

meld

Tools like Kompare may overwhelm new users as they offer a plethora of features, but if you’re looking for something simple, Meld is a good pick.

Meld provides up to three-way comparison for files and directories and has built-in support for version control systems. You can also refer to a detailed guide on how to compare files using Meld to know more about it.

Key Features of Meld:

  • Supports up to 3-way file comparison.
  • Syntax highlighting.
  • Support for version control systems.
  • Simple text filtering.
  • Minimal and easy-to-understand UI.

Installation:

Meld is popular software and can be found easily on the default repository of almost any Linux distro. And for installation on Ubuntu, you can use this command:

sudo apt install meld

Additional: Sublime Merge (Non-FOSS)

sublime merge

Coming from the developers of the famed Sublime Text editor, Sublime Merge is targeted at programmers who are constantly dealing with version control systems, especially Git, as having the best workflow with Git is its primary focus.

From command line integration, powerful search, and flexibility to Git flow integration, anything that powers your workflow comes with it.

Like Sublime Text, Sublime Merge is also not open source. It is likewise free to evaluate and encourages you to buy a license for continued use, but the evaluation has no time limit, so you can keep using it without purchasing a license.


What’s Your Pick?

There are a few more tools like Sublime Merge. P4Merge and Beyond Compare come to my mind. These are not open source software but they are available for the Linux platform.

In my opinion, the diff command and Meld tools are enough for most of your file comparison needs. Specific scenarios like dealing with Git could benefit from specialized tools like GitDiff.

Source

Docker throws weight behind Windows Subsystem for Linux, chucks Hyper-V option overboard • DEVCLASS

Docker has thrown its support behind Microsoft’s latest rev of the Windows Subsystem for Linux, promising a technical review of Docker Desktop for WSL-2 next month.

In a blog post yesterday, Docker’s Simon Ferquel wrote that while the original WSL was “an impressive effort to emulate a Linux Kernel on top of Windows”, the fundamental differences were such that “it was impossible to run the Docker Engine and Kubernetes directly inside WSL.”

Docker had, consequently, developed “an alternative solution” using Hyper-V and LinuxKit.

However, the container innovator said that the new version, unveiled last month, delivered “a real Linux Kernel running inside a lightweight VM. This approach is architecturally very close to what we do with LinuxKit and Hyper-V today, with the additional benefit that it is more lightweight and more tightly integrated with Windows than Docker can provide alone.”

Consequently, wrote Ferquel, “We will replace the Hyper-V VM we currently use by a WSL 2 integration package.” He said this approach would provide the same features as the current approach: “Kubernetes 1-click setup, automatic updates, transparent HTTP proxy configuration, access to the daemon from Windows, transparent bind mounts of Windows files, and more.”

When it came to running Linux, he continued, “With WSL 2 integration, you will still experience the same seamless integration with Windows, but Linux programs running inside WSL will also be able to do the same.”

This would remove the need for running separate Linux and Windows build scripts, he continued, and “a developer at Docker can now work on the Linux Docker daemon on Windows, using the same set of tools and scripts as a developer on a Linux machine.”

The technical preview “will run side by side with the current version of Docker Desktop, so you can continue to work safely on your existing projects. If you are running the latest Windows Insider build, you will be able to experience this first hand.”

Further features will be added over the coming months, “until the WSL 2 architecture is used in Docker Desktop for everyone running a compatible version of Windows.”

Microsoft and Docker have gotten steadily closer over the last year. The container outfit’s Docker Enterprise product has been tweaked to support ageing Windows architectures, giving Redmond’s customers a reason NOT to consider alternative platforms. At the same time, they have collaborated on specifications for running distributed applications.

Source

How to Install Latest MySQL 8.0 on RHEL/CentOS and Fedora

MySQL is an open source free relational database management system (RDBMS) released under GNU (General Public License). It is used to run multiple databases on any single server by providing multi-user access to each created database.

This article will walk you through the process of installing and updating the latest MySQL 8.0 version on RHEL/CentOS 7/6 and Fedora 28-26 using the official MySQL Yum repository.

Step 1: Adding the MySQL Yum Repository

1. We will use the official MySQL Yum software repository, which provides RPM packages for installing the latest versions of the MySQL server, client, MySQL Utilities, MySQL Workbench, Connector/ODBC, and Connector/Python for RHEL/CentOS 7/6 and Fedora 28-26.

Important: These instructions only work on a fresh installation of MySQL. If MySQL is already installed via a third-party-distributed RPM package, I recommend you upgrade or replace the installed MySQL package using the MySQL Yum repository.

Before upgrading or replacing the old MySQL package, don’t forget to back up all important databases and configuration files.

2. Now download and add the following MySQL Yum repository to your Linux distribution’s repository list to install the latest version of MySQL (i.e. 8.0, released on 27 July 2018).

--------------- On RHEL/CentOS 7 ---------------
# wget https://repo.mysql.com/mysql80-community-release-el7-1.noarch.rpm
--------------- On RHEL/CentOS 6 ---------------
# wget https://dev.mysql.com/get/mysql80-community-release-el6-1.noarch.rpm
--------------- On Fedora 28 ---------------
# wget https://dev.mysql.com/get/mysql80-community-release-fc28-1.noarch.rpm
--------------- On Fedora 27 ---------------
# wget https://dev.mysql.com/get/mysql80-community-release-fc27-1.noarch.rpm
--------------- On Fedora 26 ---------------
# wget https://dev.mysql.com/get/mysql80-community-release-fc26-1.noarch.rpm

3. After downloading the package for your Linux platform, now install the downloaded package with the following command.

--------------- On RHEL/CentOS 7 ---------------
# yum localinstall mysql80-community-release-el7-1.noarch.rpm
--------------- On RHEL/CentOS 6 ---------------
# yum localinstall mysql80-community-release-el6-1.noarch.rpm
--------------- On Fedora 28 ---------------
# dnf localinstall mysql80-community-release-fc28-1.noarch.rpm
--------------- On Fedora 27 ---------------
# dnf localinstall mysql80-community-release-fc27-1.noarch.rpm
--------------- On Fedora 26 ---------------
# yum localinstall mysql80-community-release-fc26-1.noarch.rpm

The above installation command adds the MySQL Yum repository to system’s repository list and downloads the GnuPG key to verify the integrity of the packages.

4. You can verify that the MySQL Yum repository has been added successfully by using following command.

# yum repolist enabled | grep "mysql.*-community.*"
# dnf repolist enabled | grep "mysql.*-community.*"      [On Fedora versions]

Verify MySQL Yum Repository


Step 2: Installing Latest MySQL Version

5. Install the latest version of MySQL (currently 8.0) using the following command.

# yum install mysql-community-server
# dnf install mysql-community-server      [On Fedora versions]

The above command installs all the needed packages for the MySQL server: mysql-community-server, mysql-community-client, mysql-community-common, and mysql-community-libs.

Step 3: Installing MySQL Release Series

6. You can also install a different MySQL version using the different sub-repositories of MySQL Community Server. The sub-repository for the most recent MySQL series (currently MySQL 8.0) is activated by default, and the sub-repositories for all other versions (for example, the MySQL 5.x series) are deactivated by default.

To install specific version from specific sub-repository, you can use --enable or --disable options using yum-config-manager or dnf config-manager as shown:

# yum-config-manager --disable mysql57-community
# yum-config-manager --enable mysql56-community
------------------ Fedora Versions ------------------
# dnf config-manager --disable mysql57-community
# dnf config-manager --enable mysql56-community

Step 4: Starting the MySQL Server

7. After successful installation of MySQL, it’s time to start the MySQL server with the following command:

# service mysqld start

You can verify the status of the MySQL server with the help of following command.

# service mysqld status

This is the sample output of running MySQL under my CentOS 7 box.

Redirecting to /bin/systemctl status  mysqld.service
mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: active (running) since Thu 2015-10-29 05:15:19 EDT; 4min 5s ago
  Process: 5314 ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 5298 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 5317 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─5317 /usr/sbin/mysqld --daemonize

Oct 29 05:15:19 localhost.localdomain systemd[1]: Started MySQL Server.

Check Mysql Status


8. Now finally verify the installed MySQL version using following command.

# mysql --version

mysql  Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)

Check MySQL Installed Version


Step 5: Securing the MySQL Installation

9. The mysql_secure_installation command allows you to secure your MySQL installation by performing important settings like setting the root password, removing anonymous users, disallowing remote root login, and so on.

Note: MySQL version 8.0 or higher generates a temporary random password in /var/log/mysqld.log after installation.

Use below command to see the password before running mysql secure command.

# grep 'temporary password' /var/log/mysqld.log

Once you know the password you can now run following command to secure your MySQL installation.

# mysql_secure_installation

Note: “Enter new Root password” means entering the temporary password from the file /var/log/mysqld.log.

Now follow the onscreen instructions carefully, for reference see the output of the above command below.

Sample Output
Securing the MySQL server deployment.

Enter password for user root: Enter New Root Password

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: y

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 2
Using existing password for root.

Estimated strength of the password: 50 
Change the password for root ? ((Press y|Y for Yes, any other key for No) : y

New password: Set New MySQL Password

Re-enter new password: Re-enter New MySQL Password

Estimated strength of the password: 100 
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done! 
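If the interactive script is unavailable, the same hardening steps can be applied manually from the mysql client. This is a sketch, not part of the original tutorial; the password below is a placeholder, so substitute your own strong value.

```sql
-- Manual equivalents of the mysql_secure_installation steps (a sketch;
-- 'New-Secure-Pass1!' is a placeholder password, not from this tutorial).
ALTER USER 'root'@'localhost' IDENTIFIED BY 'New-Secure-Pass1!';
DELETE FROM mysql.user WHERE User='';            -- remove anonymous users
DELETE FROM mysql.user WHERE User='root'
  AND Host NOT IN ('localhost','127.0.0.1','::1'); -- disallow remote root login
DROP DATABASE IF EXISTS test;                    -- remove the test database
FLUSH PRIVILEGES;                                -- reload privilege tables
```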

Step 6: Connecting to MySQL Server

10. Connect to the newly installed MySQL server by providing the username and password.

# mysql -u root -p

Sample Output:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 19
Server version: 8.0.1 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
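Once connected, you can start working at the mysql> prompt. The sketch below shows a few typical first commands; the database and user names are placeholders for illustration, not objects created by the installer.

```sql
-- A few typical first commands at the mysql> prompt
-- ('testdb' and 'appuser' are placeholder names).
SHOW DATABASES;
CREATE DATABASE testdb;
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'ChangeMe-1!';
GRANT ALL PRIVILEGES ON testdb.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;
```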

Step 7: Updating MySQL with Yum

11. Besides fresh installation, you can also update MySQL products and components with the help of the following commands.

# yum update mysql-server
# dnf update mysql-server       [On Fedora versions]

Update MySQL Version


When new updates are available for MySQL, they will be installed automatically; otherwise, you will get a message saying that no packages are marked for update.

That’s it, you’ve successfully installed MySQL 8.0 on your system. If you run into any trouble during installation, feel free to use the comment section below for solutions.


How to Check MySQL Database Size in Linux

In this article, I will show you how to check the size of MySQL/MariaDB databases and tables via the MySQL shell. You will learn how to determine the real size of a database file on disk, as well as the size of the data stored in a database.

Read Also: 20 MySQL (Mysqladmin) Commands for Database Administration in Linux

By default, MySQL/MariaDB stores all data in the file system, and the size of the data reported by the databases may differ from the actual size of the MySQL data on disk, as we will see later on.

In addition, MySQL uses the information_schema virtual database to store information about your databases and other settings. You can query it to gather information about the size of databases and their tables, as shown.

# mysql -u root -p
MariaDB [(none)]> SELECT table_schema AS "Database Name", 
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size in (MB)" 
FROM information_schema.TABLES 
GROUP BY table_schema; 

Check MySQL Database Size


To find out the size of a single MySQL database called rcubemail (which displays the size of all tables in it) use the following mysql query.

MariaDB [(none)]> SELECT table_name AS "Table Name",
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size in (MB)"
FROM information_schema.TABLES
WHERE table_schema = "rcubemail"
ORDER BY (data_length + index_length) DESC;

Check Size of MySQL Database


Finally, to find out the actual size of all MySQL database files on the disk (filesystem), run the following du command.

# du -h /var/lib/mysql

Check MySQL Size on Disk

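The du command above reports the whole data directory at once. Since each database lives in its own subdirectory under the data directory, a per-database on-disk breakdown is a one-liner. A minimal sketch: the data directory and database names below are simulated placeholders; on a real system, set DATADIR to /var/lib/mysql instead.

```shell
# Simulate a data directory with two databases; on a real system,
# set DATADIR=/var/lib/mysql (the database names below are placeholders).
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/rcubemail" "$DATADIR/testdb"
head -c 40960 /dev/zero > "$DATADIR/rcubemail/users.ibd"

# du -sk on each subdirectory gives the on-disk size per database, in KB.
SIZES=$(du -sk "$DATADIR"/*/ | sort -n)
echo "$SIZES"

rm -rf "$DATADIR"
```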

You might also like to read the following MySQL-related articles.

  1. 4 Useful Commandline Tools to Monitor MySQL Performance in Linux
  2. 12 MySQL/MariaDB Security Best Practices for Linux

For any queries or additional ideas you want to share regarding this topic, use the feedback form below.
