How to Create GUI Applications Under Linux Desktop Using PyGObject

There are different ways to create applications on Linux, but only a limited number of them use the simplest and most functional programming languages and libraries. That's why we're going to take a quick look at creating applications for the Linux desktop using the GTK+ library with the Python programming language, a combination known as "PyGObject".

PyGObject uses GObject Introspection to create bindings for programming languages like Python. PyGObject is the successor of PyGTK; you could say that PyGObject = Python + GTK 3.

Create GUI Applications in Linux – Part 1

Today we're starting a series about creating GUI (Graphical User Interface) applications for the Linux desktop using the GTK+ library and PyGObject. The series will cover the following topics:

Part 1: How to Create GUI Applications Under Linux Desktop Using PyGObject
Part 2: Create More Advanced GUI Applications Using PyGObject Tool in Linux
Part 3: Create Your Own ‘Web Browser’ and ‘Desktop Recorder’ Applications Using PyGObject
Part 4: Package PyGObject Applications and Programs as “.deb” Package for the Linux Desktop
Part 5: Translating PyGObject Applications into Different Languages

About Python

First of all, you must have some basic knowledge of Python; Python is a very modern and easy-to-use programming language, and one of the most famous programming languages in the world. Using Python, you will be able to create many great applications & tools. You may take some free courses like those at codecademy.com, or read one of the many books available about Python.

About GTK+

GTK+ is an open-source, cross-platform toolkit for creating graphical user interfaces for desktop applications. It was first started in 1998 as a GUI toolkit for the GIMP; later, it was used in many other applications and soon became one of the most famous libraries for creating GUIs. GTK+ is released under the LGPL license.
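
PyGObject and the GTK 3 introspection data usually come pre-installed with modern desktops. If they are missing, on Debian/Ubuntu/Mint they can typically be installed with:

$ sudo apt-get install python-gi gir1.2-gtk-3.0

And on RedHat/Fedora/CentOS with:

# yum install pygobject3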

Creating GUI Applications Under Linux

There are two ways to create applications using GTK+ & Python:

  1. Writing the graphical interface using code only.
  2. Designing the graphical interface using the “Glade” program, which is a RAD tool for designing GTK+ interfaces easily. Glade saves the GUI as an XML file that can be used with any programming language to build the interface; after exporting the GUI’s XML file, we’ll be able to link it to our program to do the jobs we want.

We’ll explain both ways in short.

The Code-Only Way

Writing the GUI using code only can be a little hard for novice programmers and quite time-consuming, but by doing so we can create very flexible and functional GUIs for our programs, more so than with tools like Glade.

Let’s take the following example.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from gi.repository import Gtk

class ourwindow(Gtk.Window):

    def __init__(self):
        Gtk.Window.__init__(self, title="My Hello World Program")
        Gtk.Window.set_default_size(self, 400,325)
        Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER)

        button1 = Gtk.Button("Hello, World!")
        button1.connect("clicked", self.whenbutton1_clicked)

        self.add(button1)
        
    def whenbutton1_clicked(self, button):
      print "Hello, World!"

window = ourwindow()        
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()

Copy the above code into a “test.py” file, set 755 permissions on the file, and run it using “./test.py”; this is what you will get.

# nano test.py
# chmod 755 test.py
# ./test.py

Hello World Script

By clicking the button, you see the “Hello, World!” sentence printed out in the terminal:

Test Python Script

Let me explain the code in detail.

  1. #!/usr/bin/python: The default path for the Python interpreter (version 2.7 in most cases); this line must be the first line in every Python file.
  2. # -*- coding: utf-8 -*-: Here we set the default encoding for the file; UTF-8 is the best choice if you want to support non-English languages, so leave it like that.
  3. from gi.repository import Gtk: Here we are importing the GTK 3 library to use it in our program.
  4. class ourwindow(Gtk.Window): Here we are creating a new class called “ourwindow”, which inherits from “Gtk.Window”.
  5. def __init__(self): Nothing new, we’re defining the main window components here.
  6. Gtk.Window.__init__(self, title="My Hello World Program"): We’re using this line to set the “My Hello World Program” title for the “ourwindow” window; you may change the title if you like.
  7. Gtk.Window.set_default_size(self, 400,325): I don’t think this line needs an explanation; here we’re setting the default width and height for our window.
  8. Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER): With this line we set the default position for the window, in this case the center, using the “Gtk.WindowPosition.CENTER” parameter; if you want, you can change it to “Gtk.WindowPosition.MOUSE” to open the window at the mouse pointer position.
  9. button1 = Gtk.Button("Hello, World!"): We created a new Gtk.Button and called it “button1”; the default text for the button is “Hello, World!”. You may create any other Gtk widget here if you want.
  10. button1.connect("clicked", self.whenbutton1_clicked): Here we’re linking the “clicked” signal with the “whenbutton1_clicked” handler, so that when the button is clicked, the “whenbutton1_clicked” handler is run.
  11. self.add(button1): If we want our Gtk widgets to appear, we have to add them to the default window; this simple line adds the “button1” widget to the window, and it’s necessary to do this.
  12. def whenbutton1_clicked(self, button): Now we’re defining the “whenbutton1_clicked” handler, i.e. what’s going to happen when the “button1” widget is clicked; the “(self, button)” parameters are needed so that the handler receives the widget that emitted the signal.
  13. print "Hello, World!": I don’t have to explain more here.
  14. window = ourwindow(): We have to create a new global variable and set it to an instance of the ourwindow() class so that we can use it later with the GTK+ library.
  15. window.connect("delete-event", Gtk.main_quit): Now we’re connecting the “delete-event” signal with the “Gtk.main_quit” action; this is important so that the main loop quits automatically when we close the program window.
  16. window.show_all(): Showing the window.
  17. Gtk.main(): Running the GTK main loop.

That’s it, easy isn’t it? And very functional if we want to create some large applications. For more information about creating GTK+ interfaces using the code-only way, you may visit the official documentation website at:

Python GTK3 Tutorials
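
One side note before moving on: a Gtk.Window can hold only one direct child, so as soon as you want more than one widget in a code-only GUI you pack the widgets into a container such as Gtk.Box. The following is only a minimal sketch of that idea (the class name and labels are invented for illustration), built on the same pattern as the example above:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from gi.repository import Gtk

class boxwindow(Gtk.Window):

    def __init__(self):
        Gtk.Window.__init__(self, title="Two Widgets In One Window")
        # A vertical box container with 6 pixels of spacing between children.
        box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
        self.add(box)

        # Pack a label and a button into the box instead of adding them
        # directly to the window (a window accepts only one child).
        label = Gtk.Label("I am a label")
        button = Gtk.Button("I am a button")
        box.pack_start(label, True, True, 0)
        box.pack_start(button, True, True, 0)

window = boxwindow()
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()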

The Glade Designer Way

As I said at the beginning of the article, Glade is a very easy tool for creating the interfaces we need for our programs; it’s very popular among developers, and many great application interfaces have been created using it. This approach is called “Rapid Application Development” (RAD).

You have to install Glade in order to start using it, on Debian/Ubuntu/Mint run:

$ sudo apt-get install glade

On RedHat/Fedora/CentOS, run:

# yum install glade

After you install and run the program, you will see the available Gtk widgets on the left; click on the “window” widget in order to create a new window.

Create New Widget

You will notice that a new empty window is created.

New Window Widget

You can now add some widgets to it: on the left toolbar, click on the “button” widget, then click on the empty window in order to add the button to the window.

Add Widget

You will notice that the button ID is “button1”; now go to the Signals tab in the right toolbar, search for the “clicked” signal and enter “button1_clicked” as its handler.

Glade Button Properties

Glade Signals Tab

Now that we’ve created our GUI, let’s export it. Click on the “File” menu and choose “Save”, save the file in your home directory under the name “myprogram.glade” and exit.

Glade Export Widget File

Now, create a new “test.py” file, and enter the following code inside it.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from gi.repository import Gtk

class Handler:
    def button1_clicked(self, button):
      print "Hello, World!"

builder = Gtk.Builder()
builder.add_from_file("myprogram.glade")
builder.connect_signals(Handler())

ournewbutton = builder.get_object("button1")
ournewbutton.set_label("Hello, World!")

window = builder.get_object("window1")

window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()

Save the file, give it 755 permissions like before, and run it using “./test.py”; this is what you will get.

# nano test.py
# chmod 755 test.py
# ./test.py

Hello World Window

Click on the button, and you will notice that the “Hello, World!” sentence is printed in the terminal.

Now let’s explain the new things:

  1. class Handler: Here we’re creating a class called “Handler” which will contain the definitions of the handlers for the actions & signals we create for the GUI.
  2. builder = Gtk.Builder(): We created a new global variable called “builder”, which is a Gtk.Builder object; we need it in order to import the .glade file.
  3. builder.add_from_file("myprogram.glade"): Here we’re importing the “myprogram.glade” file to use it as the default GUI for our program.
  4. builder.connect_signals(Handler()): This line connects the .glade file with the handler class, so that the actions and signals we define under the “Handler” class work as expected when we run the program.
  5. ournewbutton = builder.get_object("button1"): Now we’re grabbing the “button1” object from the .glade file and assigning it to the global variable “ournewbutton” to use later in our program.
  6. ournewbutton.set_label("Hello, World!"): We used the “set_label” method to set the default button text to the “Hello, World!” sentence.
  7. window = builder.get_object("window1"): Here we grabbed the “window1” object from the .glade file in order to show it later in the program.

And that’s it! You have successfully created your first program under Linux!

Of course there are a lot more complicated things to do in order to create a real application that does something, which is why I recommend you take a look at the GTK+ documentation and GObject API at:

  1. GTK+ Reference Manual
  2. Python GObject API Reference
  3. PyGObject Reference

Have you developed any applications for the Linux desktop before? What programming language and tools did you use to do it? What do you think about creating applications using Python & GTK 3?

Create More Advanced GUI Applications Using PyGObject Tool in Linux – Part 2

We continue our series about creating GUI applications under the Linux desktop using PyGObject. This is the second part of the series, and today we’ll be talking about creating more functional applications using some advanced widgets.

Create GUI Applications in Linux – Part 2

Requirements

  1. Create GUI Applications Under Linux Using PyGObject – Part 1

In the previous article we said that there are two ways to create GUI applications using PyGObject: the code-only way and the Glade designer way. From now on, we’ll only be explaining the Glade designer way, since it’s much easier for most users; you can learn the code-only way by yourself using the python-gtk3-tutorial.

Creating Advanced GUI Applications in Linux

1. Let’s start programming! Open your Glade designer from the applications menu.

Glade Designer

2. Click on the “Window” button on the left sidebar in order to create a new one.

Create New Window

3. Click on the “Box” widget, then click on the empty window to place it there.

Select Box Widget

4. You will be prompted to enter the number of items (boxes) you want; set it to 3.

Create Boxes

You’ll see that the boxes have been created; these boxes are important for us because they make it possible to add more than just one widget to a window.

5. Now click on the box widget, and change the orientation type from vertical to horizontal.

Make Box Horizontal

6. In order to create a simple program, add a “Text Entry”, a “Combo Box Text” and a “Button” widget, one in each of the boxes; you should have something like this.

Create Simple Program

7. Now click on the “window1” widget from the right sidebar, and change its position to “Center“.

Make Widget Center

Scroll down to the “Appearance” section and add a title for the window: “My Program“.

Add Widget Title

8. You can also choose an icon for the window by clicking on the “Icon Name” box.

Set Widget Icon

9. You can also change the default height & width of the application. After all of that, you should have something like this.

Set Widget Height Width

In any program, one of the most important things is to create an “About” window. To do this, we’ll first have to change the normal button we created before into a stock button, as shown in the picture.

Create About Window
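
(For reference only: a stock button is just a regular button that reuses one of GTK’s predefined label/icon pairs. In code, the rough equivalent of what Glade is doing here would be something like the snippet below, using the “About” stock item as an example.)

from gi.repository import Gtk

# A stock button reuses a predefined GTK label and icon ("gtk-about" here).
about_button = Gtk.Button.new_from_stock(Gtk.STOCK_ABOUT)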

10. Now, we’ll have to modify some signals in order to run specific actions when events occur on our widgets. Click on the text entry widget, switch to the “Signals” tab in the right sidebar, search for the “activate” signal and set its handler to “enter_button_clicked”; the “activate” signal is emitted when the “Enter” key is pressed while the text entry widget has focus.

Set Widget Signals

We’ll also have to add a handler for the “clicked” signal of our about button widget: click on it and set its “clicked” handler to “button_is_clicked“.

Add Widget Handler

11. Go to the “Common” tab and check “Has Focus” as shown below (to give the default focus to the about button instead of the entry).

Set Default Focus

12. Now from the left sidebar, create a new “About Dialog” window.

Create About Dialog

And you will notice that the “About Dialog” window is created.

About Dialog

Let’s modify it; make sure that you apply the following settings to it from the right sidebar.

Add Program Attributes

Select License

Add About Authors

Set Window Appearance

Select Appearance Flags

After applying the above settings, you will get the following About window.

My Program About Window

In the above window, you will notice some empty space, but you can remove it by reducing the number of boxes from 3 to 2, or you can add any widget to it if you want.

Change Window Boxes

13. Now save the file in your home folder under the name “ui.glade”, then open a text editor and enter the following code inside it.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from gi.repository import Gtk
class Handler:

    def button_is_clicked(self, button):
        ## The ".run()" method is used to launch the about window.
        ouraboutwindow.run()
        ## This is just a workaround to enable closing the about window.
        ouraboutwindow.hide()

    def enter_button_clicked(self, button):
        ## The ".get_text()" method is used to grab the text from the entry box. The ".get_active_text()" method is used to get the selected item from the Combo Box Text widget; here, we merged both texts together.
        print ourentry.get_text() + ourcomboboxtext.get_active_text()

## Nothing new here.. We just imported the 'ui.glade' file.
builder = Gtk.Builder()
builder.add_from_file("ui.glade")
builder.connect_signals(Handler())

ournewbutton = builder.get_object("button1")

window = builder.get_object("window1")

## Here we imported the Combo Box widget in order to add some change on it.
ourcomboboxtext = builder.get_object("comboboxtext1")

## Here we defined a list called 'default_text' which will contain all the possible items in the Combo Box Text widget.
default_text = [" World ", " Earth ", " All "]

## This is a for loop that adds every single item of the 'default_text' list to the Combo Box Text widget using the '.append_text()' method.
for x in default_text:
  ourcomboboxtext.append_text(x)

## The '.set_active(n)' method is used to set the default item in the Combo Box Text widget, where n is the index of that item.
ourcomboboxtext.set_active(0)
ourentry = builder.get_object("entry1")

## This line doesn't need an explanation :D
ourentry.set_max_length(15)

## Nor does this one.
ourentry.set_placeholder_text("Enter A Text Here..")

## We just imported the about window here to the 'ouraboutwindow' global variable.
ouraboutwindow = builder.get_object("aboutdialog1")

## Give that developer a cookie !
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()

Save the file in your home directory under the name “myprogram.py”, give it execute permission, and run it.

$ chmod 755 myprogram.py
$ ./myprogram.py

This is what you will get after running the above script.

My Program Window

Enter a text in the entry box, hit the “Enter” key on the keyboard, and you will notice that the sentence is printed at the shell.

Box Output Text

That’s all for now. It’s not a complete application, but I just wanted to show you how to link things together using PyGObject; you can view all the methods for all GTK widgets at gtkobjects.

Just learn the methods, create the widgets using Glade, and connect the signals in the Python file; that’s it! It’s not hard at all, my friend.

We’ll explain more new things about PyGObject in the next parts of the series; until then, stay tuned and don’t forget to give us your comments about the article.

Create Your Own ‘Web Browser’ and ‘Desktop Recorder’ Applications Using PyGObject – Part 3

This is the 3rd part of the series about creating GUI applications under the Linux desktop using PyGObject. Today we’ll talk about using some advanced Python modules & libraries in our programs like ‘os‘, ‘WebKit‘, ‘requests‘ and others, besides some other useful programming information.

Create Own Web Browser and Recorder – Part 3

Requirements

You should go through these previous parts of the series before continuing with the instructions below on creating more advanced applications:

  1. Create GUI Applications Under Linux Desktop Using PyGObject – Part 1
  2. Creating Advanced PyGObject Applications on Linux – Part 2

Modules & libraries in Python are very useful; instead of writing many sub-programs to do complicated jobs, which takes a lot of time and work, you can just import them! Yes, just import the modules & libraries you need into your program and you will save a lot of time and effort completing it.

There are many famous modules for Python, which you can find at Python Module Index.

You can import libraries into your Python program as well; the line “from gi.repository import Gtk” imports the GTK library into the Python program, and there are many other libraries like Gdk, WebKit, etc.
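
One small optional detail, not used in the examples of this article: on systems that ship introspection data for more than one GTK version, PyGObject lets you state explicitly which version you want before importing it, for example:

import gi
# Explicitly request the GTK 3 bindings before importing from gi.repository.
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk, GLib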

Creating Advanced GUI Applications

Today, we’ll create 2 programs:

  1. A simple web browser, which will use the WebKit library.
  2. A desktop recorder using the ‘avconv‘ command, which will use the ‘os’ module from Python.

From now on, I won’t explain how to drag & drop widgets in the Glade designer; I will just tell you the names of the widgets you need to create. Additionally, I will give you the .glade file for each program, as well as the Python file.

Creating a Simple Web Browser

In order to create a web browser, we’ll have to use the “WebKit” engine, which is an open-source web rendering engine; it’s the engine used by Safari and the one Chrome/Chromium was originally based on. For more info about it you may refer to the official Webkit.org website.
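
Note that the WebKit introspection data has to be installed for Python to import it; on Debian/Ubuntu systems of that era it was typically provided by a package such as gir1.2-webkit-3.0:

$ sudo apt-get install gir1.2-webkit-3.0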

First, we’ll have to create the GUI: open the Glade designer and add the following widgets. For more information on how to create widgets, follow Part 1 and Part 2 of this series (links given above).

  1. Create ‘window1’ widget.
  2. Create ‘box1’ and ‘box2’ widget.
  3. Create ‘button1’ and ‘button2’ widget.
  4. Create ‘entry1’ widget.
  5. Create ‘scrolledwindow1’ widget.

Add Widgets

After creating widgets, you will get the following interface.

Glade Interface

There’s nothing new here, except the “Scrolled Window” widget; this widget is important because it allows the WebKit engine to be embedded inside it, and with the “Scrolled Window” widget you will also be able to scroll horizontally and vertically while you browse websites.

You will now have to add the “backbutton_clicked” handler to the Back button’s “clicked” signal, the “refreshbutton_clicked” handler to the Refresh button’s “clicked” signal, and the “enterkey_clicked” handler to the entry’s “activate” signal.

The complete .glade file for the interface is here.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated with glade 3.16.1 -->
<interface>
  <requires lib="gtk+" version="3.10"/>
  <object class="GtkWindow" id="window1">
    <property name="can_focus">False</property>
    <property name="title" translatable="yes">Our Simple Browser</property>
    <property name="window_position">center</property>
    <property name="default_width">1000</property>
    <property name="default_height">600</property>
    <property name="icon_name">applications-internet</property>
    <child>
      <object class="GtkBox" id="box1">
        <property name="visible">True</property>
        <property name="can_focus">False</property>
        <property name="orientation">vertical</property>
        <child>
          <object class="GtkBox" id="box2">
            <property name="visible">True</property>
            <property name="can_focus">False</property>
            <child>
              <object class="GtkButton" id="button1">
                <property name="label">gtk-go-back</property>
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="receives_default">True</property>
                <property name="relief">half</property>
                <property name="use_stock">True</property>
                <property name="always_show_image">True</property>
                <signal name="clicked" handler="backbutton_clicked" swapped="no"/>
              </object>
              <packing>
                <property name="expand">False</property>
                <property name="fill">True</property>
                <property name="position">0</property>
              </packing>
            </child>
            <child>
              <object class="GtkButton" id="button2">
                <property name="label">gtk-refresh</property>
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="receives_default">True</property>
                <property name="relief">half</property>
                <property name="use_stock">True</property>
                <property name="always_show_image">True</property>
                <signal name="clicked" handler="refreshbutton_clicked" swapped="no"/>
              </object>
              <packing>
                <property name="expand">False</property>
                <property name="fill">True</property>
                <property name="position">1</property>
              </packing>
            </child>
            <child>
              <object class="GtkEntry" id="entry1">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <signal name="activate" handler="enterkey_clicked" swapped="no"/>
              </object>
              <packing>
                <property name="expand">True</property>
                <property name="fill">True</property>
                <property name="position">2</property>
              </packing>
            </child>
          </object>
          <packing>
            <property name="expand">False</property>
            <property name="fill">True</property>
            <property name="position">0</property>
          </packing>
        </child>
        <child>
          <object class="GtkScrolledWindow" id="scrolledwindow1">
            <property name="visible">True</property>
            <property name="can_focus">True</property>
            <property name="hscrollbar_policy">always</property>
            <property name="shadow_type">in</property>
            <child>
              <placeholder/>
            </child>
          </object>
          <packing>
            <property name="expand">True</property>
            <property name="fill">True</property>
            <property name="position">1</property>
          </packing>
        </child>
      </object>
    </child>
  </object>
</interface>

Now copy the above code and paste it into the “ui.glade” file in your home folder. Then create a new file called “mywebbrowser.py” and enter the following code inside it; all the explanation is in the comments.

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

## Here we imported both Gtk library and the WebKit engine. 
from gi.repository import Gtk, WebKit 

class Handler: 
  
  def backbutton_clicked(self, button): 
  ## When the user clicks on the Back button, the '.go_back()' method is activated, which will send the user to the previous page automatically, this method is part from the WebKit engine. 
    browserholder.go_back() 

  def refreshbutton_clicked(self, button): 
  ## Same thing here, the '.reload()' method is activated when the 'Refresh' button is clicked. 
    browserholder.reload() 
    
  def enterkey_clicked(self, button): 
  ## To load the URL automatically when the "Enter" key is hit from the keyboard while focusing on the entry box, we have to use the '.load_uri()' method and grab the URL from the entry box. 
    browserholder.load_uri(urlentry.get_text()) 
    
## Nothing new here.. We just imported the 'ui.glade' file. 
builder = Gtk.Builder() 
builder.add_from_file("ui.glade") 
builder.connect_signals(Handler()) 

window = builder.get_object("window1") 

## Here's the new part.. We created a global object called 'browserholder' which will contain the WebKit rendering engine, and we set it to 'WebKit.WebView()' which is the default thing to do if you want to add a WebKit engine to your program. 
browserholder = WebKit.WebView() 

## To disallow editing the webpage. 
browserholder.set_editable(False) 

## The default URL to be loaded, we used the 'load_uri()' method. 
browserholder.load_uri("https://tecmint.com") 

urlentry = builder.get_object("entry1") 
urlentry.set_text("https://tecmint.com") 

## Here we imported the scrolledwindow1 object from the ui.glade file. 
scrolled_window = builder.get_object("scrolledwindow1") 

## We used the '.add()' method to add the 'browserholder' object to the scrolled window, which contains our WebKit browser. 
scrolled_window.add(browserholder) 

## And finally, we showed the 'browserholder' object using the '.show()' method. 
browserholder.show() 
 
## Give that developer a cookie ! 
window.connect("delete-event", Gtk.main_quit) 
window.show_all() 
Gtk.main()

Save the file, and run it.

$ chmod 755 mywebbrowser.py
$ ./mywebbrowser.py

And this is what you will get.

Create Own Web Browser

You may refer to the official WebKitGTK documentation in order to discover more options.

Creating a Simple Desktop Recorder

In this section, we’ll learn how to run local system commands or shell scripts from the Python file using the ‘os‘ module, which will help us to create a simple screen recorder for the desktop using the ‘avconv‘ command.
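
Before wiring this into the GUI, here is a tiny standalone sketch (not part of the recorder itself, and the file name is invented for illustration) of the two ‘os’ features the recorder relies on: exporting a value to the shell environment and running a shell command with os.system():

#!/usr/bin/python
# -*- coding: utf-8 -*-

import os

# Export a value from Python to the shell environment...
os.environ["filename"] = os.environ["HOME"] + "/" + "myrecording-file.avi"

# ...and run a shell command that can read it back as $filename.
os.system("echo The recording will be saved to $filename")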

Open the Glade designer, and create the following widgets:

  1. Create ‘window1’ widget.
  2. Create ‘box1’ widget.
  3. Create ‘button1’, ‘button2’ and ‘button3’ widgets.
  4. Create ‘entry1’ widget.

Create Widgets

After creating the above widgets, you will get the interface below.

Glade UI Interface

Here’s the complete ui.glade file.

<?xml version="1.0" encoding="UTF-8"?> 
<!-- Generated with glade 3.16.1 --> 
<interface> 
  <requires lib="gtk+" version="3.10"/> 
  <object class="GtkWindow" id="window1"> 
    <property name="can_focus">False</property> 
    <property name="title" translatable="yes">Our Simple Recorder</property> 
    <property name="window_position">center</property> 
    <property name="default_width">300</property> 
    <property name="default_height">30</property> 
    <property name="icon_name">applications-multimedia</property> 
    <child> 
      <object class="GtkBox" id="box1"> 
        <property name="visible">True</property> 
        <property name="can_focus">False</property> 
        <child> 
          <object class="GtkEntry" id="entry1"> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">0</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button1"> 
            <property name="label">gtk-media-record</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
            <property name="always_show_image">True</property> 
            <signal name="clicked" handler="recordbutton" swapped="no"/> 
          </object> 
          <packing> 
            <property name="expand">True</property> 
            <property name="fill">True</property> 
            <property name="position">1</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button2"> 
            <property name="label">gtk-media-stop</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
            <property name="always_show_image">True</property> 
            <signal name="clicked" handler="stopbutton" swapped="no"/> 
          </object> 
          <packing> 
            <property name="expand">True</property> 
            <property name="fill">True</property> 
            <property name="position">2</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button3"> 
            <property name="label">gtk-media-play</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
            <property name="always_show_image">True</property> 
            <signal name="clicked" handler="playbutton" swapped="no"/> 
          </object> 
          <packing> 
            <property name="expand">True</property> 
            <property name="fill">True</property> 
            <property name="position">3</property> 
          </packing> 
        </child> 
      </object> 
    </child> 
  </object> 
</interface>

As usual, copy the above code and paste it into the “ui.glade” file in your home directory, then create a new “myrecorder.py” file and enter the following code inside it (every new line is explained in the comments).

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

## Here we imported both Gtk library and the os module. 
from gi.repository import Gtk 
import os 
        
class Handler: 
  def recordbutton(self, button): 
    ## We defined a variable 'filepathandname' and assigned to it the value of the 'HOME' environment variable + "/" + the file name from the text entry box. 
    filepathandname = os.environ["HOME"] + "/" + entry.get_text() 
    
    ## Here we exported the 'filepathandname' variable from Python to the 'filename' variable in the shell environment. 
    os.environ["filename"] = filepathandname 
    
    ## Using 'os.system(COMMAND)' we can execute any shell command or shell script, here we executed the 'avconv' command to record the desktop video & audio. 
    os.system("avconv -f x11grab -r 25 -s `xdpyinfo | grep 'dimensions:'|awk '{print $2}'` -i :0.0 -vcodec libx264 -threads 4 $filename -y & ") 
    
    
  def stopbutton(self, button): 
    ## Run the 'killall avconv' command when the stop button is clicked. 
    os.system("killall avconv") 
    
  def playbutton(self, button): 
  ## Run the 'avplay' command in the shell to play the recorded file when the play button is clicked. 
    os.system("avplay $filename &") 
    
    
## Nothing new here.. We just imported the 'ui.glade' file. 
builder = Gtk.Builder() 
builder.add_from_file("ui.glade") 
builder.connect_signals(Handler()) 

window = builder.get_object("window1") 
entry = builder.get_object("entry1") 
entry.set_text("myrecording-file.avi") 

## Give that developer a cookie ! 
window.connect("delete-event", Gtk.main_quit) 
window.show_all() 
Gtk.main()

Now run the file by applying the following commands in the terminal.

$ chmod 755 myrecorder.py
$ ./myrecorder.py

And you got your first desktop recorder.

Create Desktop Recorder

You can find more information about the ‘os‘ module at Python OS Library.

And that’s it; creating applications for the Linux desktop isn’t hard using PyGObject. You just have to create the GUI, import some modules and link the Python file with the GUI, nothing more, nothing less. There are many useful tutorials about doing this on the PyGObject website.

Have you tried creating applications using PyGObject? What do you think about doing so? What applications have you developed before?

Package PyGObject Applications and Programs as “.deb” Package for the Linux Desktop – Part 4

We continue the PyGObject programming series on the Linux desktop; in this 4th part of the series we’ll explain how to package the programs and applications we created with PyGObject as Debian packages.

Packaging Applications as Deb Package

Debian packages (.deb) are the most widely used format for installing programs under Linux; the “dpkg” system, which deals with .deb packages, is the default on all Debian-based Linux distributions like Ubuntu and Linux Mint. That’s why we’ll only be explaining how to package our programs for Debian.

Create a Debian Package from your PyGObject Applications

First, you should have some basic knowledge about creating Debian packages; the following guide will help you a lot.

  1. Introduction to Debian Packaging

In brief, if you have a project called “myprogram”, it must contain the following files and folders so that you can package it.

Create Deb Package

  1. debian (Folder): This folder includes all the information about the Debian package, divided into several files.
  2. po (Folder): The po folder includes the translation files for the program (we’ll explain it in Part 5).
  3. myprogram (File): This is the Python file we created using PyGObject; it’s the main file of the project.
  4. ui.glade (File): The graphical user interface file. If you created the application’s interface using Glade, you must include this file in your project.
  5. myprogram.desktop (File): This is the file responsible for showing the application in the applications menu.
  6. setup.py (File): This file is responsible for installing any Python program onto the local system; it’s very important in any Python program and has many other uses as well.

Of course, there are many other files and folders that you can include in your project (in fact, you can include anything you want), but those are the basic ones.
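
To make the layout easier to picture, this is roughly how the finished “myprogram” project tree should look by the end of this part (using the file names from this article):

myprogram/
    debian/
        changelog
        compat
        control
        rules
    po/
    myprogram
    myprogram.desktop
    setup.py
    ui.glade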

Now, let’s start packaging a project. Create a new folder called “myprogram”, and inside it create a file called “myprogram” and add the following code to it.

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

## Replace your name and email. 
# My Name <myemail@email.com> 

## Here you must add the license of the file, replace "MyProgram" with your program name. 
# License: 
#    MyProgram is free software: you can redistribute it and/or modify 
#    it under the terms of the GNU General Public License as published by 
#    the Free Software Foundation, either version 3 of the License, or 
#    (at your option) any later version. 
# 
#    MyProgram is distributed in the hope that it will be useful, 
#    but WITHOUT ANY WARRANTY; without even the implied warranty of 
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the 
#    GNU General Public License for more details. 
# 
#    You should have received a copy of the GNU General Public License 
#    along with MyProgram.  If not, see <http://www.gnu.org/licenses/>. 

from gi.repository import Gtk 
import os 

class Handler: 
  
  def openterminal(self, button): 
    ## When the user clicks on the first button, the terminal will be opened. 
    os.system("x-terminal-emulator ") 
  
  def closeprogram(self, button): 
    Gtk.main_quit() 
    
# Nothing new here.. We just imported the 'ui.glade' file. 
builder = Gtk.Builder() 
builder.add_from_file("/usr/lib/myprogram/ui.glade") 
builder.connect_signals(Handler()) 
window = builder.get_object("window1") 
window.connect("delete-event", Gtk.main_quit) 
window.show_all() 
Gtk.main()

Create a ui.glade file and fill it up with this code.

<?xml version="1.0" encoding="UTF-8"?> 
<!-- Generated with glade 3.16.1 --> 
<interface> 
  <requires lib="gtk+" version="3.10"/> 
  <object class="GtkWindow" id="window1"> 
    <property name="can_focus">False</property> 
    <property name="title" translatable="yes">My Program</property> 
    <property name="window_position">center</property> 
    <property name="icon_name">applications-utilities</property> 
    <property name="gravity">center</property> 
    <child> 
      <object class="GtkBox" id="box1"> 
        <property name="visible">True</property> 
        <property name="can_focus">False</property> 
        <property name="margin_left">5</property> 
        <property name="margin_right">5</property> 
        <property name="margin_top">5</property> 
        <property name="margin_bottom">5</property> 
        <property name="orientation">vertical</property> 
        <property name="homogeneous">True</property> 
        <child> 
          <object class="GtkLabel" id="label1"> 
            <property name="visible">True</property> 
            <property name="can_focus">False</property> 
            <property name="label" translatable="yes">Welcome to this Test Program !</property> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">0</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button2"> 
            <property name="label" translatable="yes">Click on me to open the Terminal</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <signal name="clicked" handler="openterminal" swapped="no"/> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">1</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button3"> 
            <property name="label">gtk-preferences</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">2</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button4"> 
            <property name="label">gtk-about</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">3</property> 
          </packing> 
        </child> 
        <child> 
          <object class="GtkButton" id="button1"> 
            <property name="label">gtk-close</property> 
            <property name="visible">True</property> 
            <property name="can_focus">True</property> 
            <property name="receives_default">True</property> 
            <property name="use_stock">True</property> 
            <signal name="clicked" handler="closeprogram" swapped="no"/> 
          </object> 
          <packing> 
            <property name="expand">False</property> 
            <property name="fill">True</property> 
            <property name="position">4</property> 
          </packing> 
        </child> 
      </object> 
    </child> 
  </object> 
</interface>

There’s nothing new so far; we just created a Python file and its interface file. Now create a “setup.py” file in the same folder and add the following code to it; every line is explained in the comments.

# Here we imported the 'setup' module which allows us to install Python scripts to the local system beside performing some other tasks, you can find the documentation here: https://docs.python.org/2/distutils/apiref.html 
from distutils.core import setup 

setup(name = "myprogram", # Name of the program. 
      version = "1.0", # Version of the program. 
      description = "An easy-to-use web interface to create & share pastes easily", # You don't need any help here. 
      author = "TecMint", # Nor here. 
      author_email = "myemail@mail.com",# Nor here :D 
      url = "http://example.com", # If you have a website for you program.. put it here. 
      license='GPLv3', # The license of the program. 
      scripts=['myprogram'], # This is the name of the main Python script file, in our case it's "myprogram", it's the file that we added under the "myprogram" folder. 

# Here you can choose where you want to install your files on the local system; the "myprogram" file will be automatically installed in its correct place later, so you only have to choose where to install the optional files that you ship with the Python script. 
      data_files = [ ("lib/myprogram", ["ui.glade"]), # This is going to install the "ui.glade" file under the /usr/lib/myprogram path. 
                     ("share/applications", ["myprogram.desktop"]) ] ) # And this is going to install the .desktop file under the /usr/share/applications folder. All the folders are automatically installed under the /usr folder in your root partition, so you don't need to add "/usr/" to the path. 

Now create a “myprogram.desktop” file in the same folder, and add the following code; it’s explained in the comments as well.

# This is the .desktop file, this file is the responsible file about showing your application in the applications menu in any desktop interface, it's important to add this file to your project, you can view more details about this file from here: https://developer.gnome.org/integration-guide/stable/desktop-files.html.en 
[Desktop Entry] 
# The default name of the program. 
Name=My Program 
# The name of the program in the Arabic language, this name will be used to display the application under the applications menu when the default language of the system is Arabic, use the languages codes to change the name for each language. 
Name[ar]=برنامجي 
# Description of the file. 
Comment=A simple test program developed by me. 
# Description of the file in Arabic. 
Comment[ar]=برنامج تجريبي بسيط تم تطويره بواسطتي. 
# The command that's going to be executed when the application is launched from the applications menu, you can enter the name of the Python script or the full path if you want like /usr/bin/myprogram 
Exec=myprogram 
# Do you want to run your program from the terminal? 
Terminal=false 
# Leave this like that. 
Type=Application 
# Enter the name of the icon you want to use for the application, you can enter a path for the icon as well like /usr/share/pixmaps/icon.png but make sure to include the icon.png file in your project folder first and in the setup.py file as well. Here we'll use the "system" icon for now. 
Icon=system 
# The category of the file, you can view the available categories from the freedesktop website.
Categories=GNOME;GTK;Utility; 
StartupNotify=false 

We’re almost done now; we just have to create some small files under the “debian” folder in order to provide information about our package to the “dpkg” system.

Create the “debian” folder inside the project folder, open it, and create the following files inside it.

control
compat
changelog
rules

Project Files For Deb Package

control: This file provides the basic information about the Debian package; for more details, please visit Debian Package Control Fields.

Source: myprogram
Maintainer: My Name <myemail@email.com> 
Section: utils 
Priority: optional 
Standards-Version: 3.9.2 
Build-Depends: debhelper (>= 9), python2.7 

Package: myprogram 
Architecture: all 
Depends: python-gi 
Description: My Program 
 Here you can add a short description of your program.

compat: This file tells debhelper which compatibility level to use; it just contains the magic number 9, so leave it like that.

9

changelog: Here you’ll be able to add the changes you make to your program; for more information, please visit Debian Package Changelog Source.

myprogram (1.0) trusty; urgency=medium 

  * Add the new features here. 
  * Continue adding new changes here. 
  * And here. 

 -- My Name Here <myemail@mail.com>  Sat, 27 Dec 2014 21:36:33 +0200

rules: This file is responsible for running the build and installation process on the local machine in order to install the package; you can view more information about this file here: Debian Package Default Rules.

Note that debian/rules is a Makefile, so the command lines under each target must be indented with a tab character. You won’t need anything more than the following for your Python program.

#!/usr/bin/make -f 
# This file is responsible about running the installation process on the local machine to install the package, you can view more information about this file from here: https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#defaultrules Though you won't need anything more for your Python program. 
%: 
    dh $@ 
override_dh_auto_install: 
    python setup.py install --root=debian/myprogram --install-layout=deb --install-scripts=/usr/bin/ # This is going to run the setup.py file to install the program as a Python script on the system, it's also going to install the "myprogram" script under /usr/bin/ using the --install-scripts option, DON'T FORGET TO REPLACE "myprogram" WITH YOUR PROGRAM NAME. 
override_dh_auto_build:

Now that we’ve created all the necessary files for our program, let’s start packaging it. First, make sure that you have installed the dependencies needed for the build process.

$ sudo apt-get update
$ sudo apt-get install devscripts
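
Depending on what is already installed on your system, you may also need the build dependencies declared in debian/control (most notably debhelper):

$ sudo apt-get install debhelper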

Now, assuming the “myprogram” folder is in your home folder (/home/user/myprogram), run the following commands to package it as a Debian package.

$ cd /home/user/myprogram
$ debuild -us -uc
Sample Output
hanny@hanny-HP-Pavilion-15-Notebook-PC:~/Projects/myprogram$ debuild -us -uc
 dpkg-buildpackage -rfakeroot -D -us -uc
dpkg-buildpackage: source package myprogram
dpkg-buildpackage: source version 1.0
dpkg-buildpackage: source distribution trusty
dpkg-buildpackage: source changed by My Name Here
<myemail@email.com>
dpkg-source --before-build myprogram
dpkg-buildpackage: host architecture i386
fakeroot debian/rules clean
dh clean
dh_testdir
dh_auto_clean
....
.....
Finished running lintian.

And that’s it! Your Debian package has been created successfully:

Created Debian Package

In order to install it on any Debian-based distribution, run:

$ sudo dpkg -i myprogram_1.0_all.deb

Don’t forget to replace the file name above with the name of your package. Now, after you install the package, you can run the program from the applications menu.

Run Program

And it will work.

First Packaged Program

Here ends the 4th part of our series about PyGObject. In the next lesson we’ll explain how to localize PyGObject applications easily; until then, stay tuned…

Translating PyGObject Applications into Different Languages – Part 5

We continue the PyGObject programming series, and here in this 5th part we’ll learn how to translate our PyGObject applications into different languages. Translating your application is important if you’re going to publish it to the world; it will be more user-friendly for end-users, because not everybody understands English.

Translating PyGObject Application Language

How the Translation Process Works

The process of translating any program under the Linux desktop can be summarized in these steps:

  1. Extract the translatable strings from the Python file.
  2. Save the strings into a .pot file, a format that allows them to be translated later into other languages.
  3. Start translating the strings.
  4. Export the newly translated strings into a .po file, which will be used automatically when the system language is changed.
  5. Add some small programmatic changes to the main Python file and the .desktop file.

And that’s it! After doing these steps your application will be ready to use for end-users from all around the globe (well… you’d have to translate your program into all the languages around the globe first, though!). Sounds easy, doesn’t it? 🙂

First, to save some time, download the project files from the link below and extract them in your home directory.

  1. https://copy.com/TjyZAaNgeQ6BB7yn

Open the “setup.py” file and notice the changes that we made:

Translation Code

# Here we imported the 'setup' module which allows us to install Python scripts to the local system beside performing some other tasks, you can find the documentation here: https://docs.python.org/2/distutils/apiref.html
from distutils.core import setup

# Those modules will help us in creating the translation files for the program automatically.
from subprocess import call
from glob import glob
from os.path import splitext, split

# DON'T FORGET TO REPLACE 'myprogram' WITH THE NAME OF YOUR PROGRAM IN EVERY FILE IN THIS PROJECT.

data_files = [ ("lib/myprogram", ["ui.glade"]), # This is going to install the "ui.glade" file under the /usr/lib/myprogram path.
                     ("share/applications", ["myprogram.desktop"]) ] 

# This code does everything needed for creating the translation files, first it will look for all the .po files inside the po folder, then it will define the default path for where to install the translation files (.mo) on the local system, then it's going to create the directory on the local system for the translation files of our program and finally it's going to convert all the .po files into .mo files using the "msgfmt" command.
po_files = glob("po/*.po")
for po_file in po_files:
  lang = splitext(split(po_file)[1])[0]
  mo_path = "locale/{}/LC_MESSAGES/myprogram.mo".format(lang)
# Make locale directories
  call("mkdir -p locale/{}/LC_MESSAGES/".format(lang), shell=True)
# Generate mo files
  call("msgfmt {} -o {}".format(po_file, mo_path), shell=True)

# Build the list of generated .mo files (outside the loop, so each one is added only once).
locales = map(lambda i: ('share/'+i, [i+'/myprogram.mo', ]), glob('locale/*/LC_MESSAGES'))

# Here, the installer will automatically add the .mo files to the data files to install them later.
data_files.extend(locales)

setup(name = "myprogram", # Name of the program.
      version = "1.0", # Version of the program.
      description = "An easy-to-use web interface to create & share pastes easily", # You don't need any help here.
      author = "TecMint", # Nor here.
      author_email = "myemail@mail.com",# Nor here :D
      url = "http://example.com", # If you have a website for you program.. put it here.
      license='GPLv3', # The license of the program.
      scripts=['myprogram'], # This is the name of the main Python script file, in our case it's "myprogram", it's the file that we added under the "myprogram" folder.

# Here you can choose where do you want to install your files on the local system, the "myprogram" file will be automatically installed in its correct place later, so you have only to choose where do you want to install the optional files that you shape with the Python script
      data_files=data_files) # This installs the data files listed above; all the folders are automatically installed under the /usr folder in your root partition, so you don't need to add "/usr/" to the path.

Also open the “myprogram” file and see the programmatic changes that we made; all the changes are explained in the comments:

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

## Replace your name and email.
# My Name <myemail@email.com>

## Here you must add the license of the file, replace "MyProgram" with your program name.
# License:
#    MyProgram is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#
#    MyProgram is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with MyProgram.  If not, see <http://www.gnu.org/licenses/>.

from gi.repository import Gtk 
import os, gettext, locale

## This is the programmatic change that you need to add to the Python file, just replace "myprogram" with the name of your program. The "locale" and "gettext" modules will take care about the rest of the operation.
locale.setlocale(locale.LC_ALL, '')
gettext.bindtextdomain('myprogram', '/usr/share/locale')
gettext.textdomain('myprogram')
_ = gettext.gettext
gettext.install("myprogram", "/usr/share/locale")

class Handler: 
  
  def openterminal(self, button): 
    ## When the user clicks on the first button, the terminal will be opened.
    os.system("x-terminal-emulator ")
  
  def closeprogram(self, button):
    Gtk.main_quit()
    
# Nothing new here.. We just imported the 'ui.glade' file. 
builder = Gtk.Builder() 
builder.add_from_file("/usr/lib/myprogram/ui.glade") 
builder.connect_signals(Handler()) 

label = builder.get_object("label1")
# Here's another small change, instead of setting the text to ("Welcome to my Test program!") we must add a "_" char before it in order to allow the responsible scripts about the translation process to recognize that it's a translatable string.
label.set_text(_("Welcome to my Test program !"))

button = builder.get_object("button2")
# And here's the same thing.. You must do this for all the texts in your program, elsewhere, they won't be translated.
button.set_label(_("Click on me to open the Terminal"))


window = builder.get_object("window1") 
window.connect("delete-event", Gtk.main_quit)
window.show_all() 
Gtk.main()

Now let’s start translating our program. First, create the .pot file (a file that contains all the translatable strings in the program) so that you can start translating, using the following command:

$ cd myprogram
$ xgettext --language=Python --keyword=_ -o po/myprogram.pot myprogram

This is going to create the “myprogram.pot” file inside the “po” folder of the main project folder; the file contains the following code:

# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2014-12-29 21:28+0200\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"

#: myprogram:48
msgid "Welcome to my Test program !"
msgstr ""

#: myprogram:52
msgid "Click on me to open the Terminal"
msgstr ""

Now, in order to start translating the strings, create a separate file inside the “po” folder for each language that you want to translate your program into, named using the “ISO-639-1” language codes. For example, if you want to translate your program into Arabic, create a file called “ar.po” and copy the contents of the “myprogram.pot” file into it.

If you want to translate your program into German, create a “de.po” file and copy the contents of the “myprogram.pot” file into it, and so on; you must create a file for each language that you want to translate your program into.
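As an alternative to copying the contents of the .pot file by hand, the gettext tools can generate the per-language file for you. A small sketch (msginit will ask for your email address interactively; check msginit --help on your system for the exact options):

$ cd myprogram
$ msginit --locale=ar --input=po/myprogram.pot --output-file=po/ar.po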

Now, we’ll work on the “ar.po” file: copy the contents of the “myprogram.pot” file into it and edit the following:

  1. SOME DESCRIPTIVE TITLE: you can enter the title of your project here if you want.
  2. YEAR THE PACKAGE’S COPYRIGHT HOLDER: replace it with the year that you created the project.
  3. PACKAGE: replace it with the name of the package.
  4. FIRST AUTHOR <EMAIL@ADDRESS>, YEAR: replace this with your real name, email and the year you translated the file.
  5. PACKAGE VERSION: replace it with the package version from the debian/control file.
  6. YEAR-MO-DA HO:MI+ZONE: doesn’t need explanation; you can change it to any date you want.
  7. FULL NAME <EMAIL@ADDRESS>: also replace it with your name and email.
  8. Language-Team: replace it with the name of the language that you’re translating to, for example “Arabic” or “French”.
  9. Language: here, you must insert the ISO-639-1 code of the language that you’re translating to, for example “ar”, “fr”, “de”, etc.; you can find a complete list here.
  10. CHARSET: this step is important; replace this string with “UTF-8” (without the quotes), which supports most languages.

Now start translating! Add your translation for each string inside the quotes of “msgstr”. Save the file and exit. A good translation file for Arabic, as an example, should look like this:

# My Program
# Copyright (C) 2014
# This file is distributed under the same license as the myprogram package.
# Hanny Helal <youremail@mail.com>, 2014.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: 1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2014-12-29 21:28+0200\n"
"PO-Revision-Date: 2014-12-29 22:28+0200\n"
"Last-Translator: M.Hanny Sabbagh <hannysabbagh<@hotmail.com<\n"
"Language-Team: Arabic <LL@li.org<\n"
"Language: ar\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#: myprogram:48
msgid "Welcome to my Test program !"
msgstr "أهلًا بك إلى برنامجي الاختباري!"

#: myprogram:52
msgid "Click on me to open the Terminal"
msgstr "اضغط عليّ لفتح الطرفية"

There’s nothing more to do; just package the program using the following command:

$ debuild -us -uc

Now try to install the newly created package using the following command.

$ sudo dpkg -i myprogram_1.0_all.deb
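If you want to quickly check a single translation without switching the whole system language, you can also compile the .po file manually with msgfmt and run the program with the LANGUAGE variable. A hedged sketch, assuming the program is already installed and uses the “myprogram” translation domain configured earlier:

$ sudo mkdir -p /usr/share/locale/ar/LC_MESSAGES
$ sudo msgfmt po/ar.po -o /usr/share/locale/ar/LC_MESSAGES/myprogram.mo
$ LANGUAGE=ar myprogram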

Then change the system language to Arabic (or the language you’ve translated your program into) using the “Language Support” tool or any similar program:

Language Support

Language Support

After selecting it, your program will be shown translated into Arabic.

Translated to Arabic

Translated to Arabic

Here ends our series about PyGObject programming for the Linux desktop. Of course, there are many other things that you can learn from the official documentation and the Python GI API reference.

What do you think about the series? Did you find it useful? Were you able to create your first application by following this series? Share your thoughts with us!

Source

How to Install RedHat Enterprise Virtualization (RHEV) 3.5

In this series, we are discussing RHEV 3.5 administration topics. RHEV is the RedHat Enterprise Virtualization solution, which is based on the oVirt project (an open-source virtualization project).

Red Hat Enterprise Virtualization is a complete virtualization management solution for virtualized servers and desktops.

This series will discuss how-to administration topics, including the RHCVA exam objectives.

Part 1How to Install RedHat Enterprise Virtualization (RHEV) 3.5
Part 7How to Manage RedHat Enterprise Virtualization Environment Users and Groups

In this first article, we discuss the RHEV environment and its basic deployment. RHEV consists of two main components: the hypervisor and the management system.

RHEV-H is the hypervisor of the RHEV platform; it is a bare-metal hypervisor used to host virtual machines, and it is based on KVM and RHEL.

RHEVM is the management system of the environment, which controls the environment’s hypervisors. It is also used to create, migrate, modify and control the virtual machines hosted by the hypervisors, along with many other tasks that will be discussed later.

RHEV3.5 Features

  1. Open source solution based on the Red Hat Enterprise Linux kernel with the Kernel-based Virtual Machine (KVM) hypervisor technology.
  2. Supported limits of up to 160 logical CPUs and 4TB of memory per host, and up to 160 vCPUs and 4TB of vRAM per virtual machine.
  3. OpenStack integration.
  4. Support for daily operations such as offline migration, high availability, clustering, etc.

For more features and details read: RedHat Enterprise Virtualization Guide

Prerequisites

Throughout this series, we will work on two hypervisor nodes (hosts) with one manager and one iSCSI storage node. Later, we will add an IPA/DNS server to our environment.

There are two deployment scenarios:

  1. Physical Deployment – a real environment, so you will need at least three physical machines.
  2. Virtual Deployment – a test lab/environment, so you will need one physical machine with plenty of resources, e.g. an i3 or i5 processor with 8 GB or 12 GB of RAM, in addition to virtualization software such as VMware Workstation.

In this series we are working on the second scenario:

Physical Host OS : Fedora 21 x86_64 with kernel 3.18.9-200
RHEV-M  machine OS : RHEL6.6 x86_64
RHEV-H  machines hypervisor : RHEV-H 6.6 
Virtualization software : Vmware workstation 11
Virtual Network interface : vmnet3
Network : 11.0.0.0/24
Physical Host IP : 11.0.0.1
RHEV-M machine : 11.0.0.3

RedHat Enterprise Virtualization Diagram

RedHat Enterprise Virtualization Diagram

In future articles, we will add additional components like storage nodes and an IPA server, so keep your environment as scalable as possible.

For the RHEV-M machine, take care of these prerequisites:

  1. RHEL/CentOS 6.6 x86_64 new minimal installation [clean installation].
  2. Make sure your system is up-to-date.
  3. Static IP for your network configuration.
  4. Your host-name should be something like machine.domain.com.
  5. Update your local /etc/hosts file with the host-name and IP [make sure the host-name is resolvable]; see the example after this list.
  6. The minimum requirement is 4GB of memory and 25GB of hard disk space.
  7. Mozilla Firefox 37 is the recommended browser to access the web UI.
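For example, a minimal /etc/hosts entry could look like the following (the host-name rhevm.mydomain.org is only a hypothetical example; the IP matches the RHEV-M address from the specifications above):

# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain
11.0.0.3        rhevm.mydomain.org rhevm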

Installation of RedHat Enterprise Virtualization Manager 3.5

1. To get access to RHEV packages and updates, you should get a free 60-day trial subscription from the official RedHat site using a corporate email, from here:

  1. RedHat Enterprise Virtualization 60-Day Evaluation

Note: After the 60 days your environment will keep working, but you won’t be able to update your system when new updates are released.

2. Then register your machine to the RedHat channels. The steps are explained here:

  1. Register RHEV Machine to RHN

3. Let’s install the rhevm package and its dependencies using the yum command.

[root@rhevm ~]# yum install rhevm

4. Now it’s time to configure rhevm by running the “engine-setup” command, which will check the status of rhevm and any available updates, and ask a series of questions.

We can summarize the questions into these main sections:

  1. Product Options
  2. Packages
  3. Network Configuration
  4. DataBase Configuration
  5. oVirt Engine Configuration
  6. PKI Configuration
  7. Apache Configuration
  8. System Configuration
  9. Configuration Preview

Hint: Suggested configuration defaults are provided in square brackets; if the suggested value is acceptable for a given step, press Enter to accept that value.

To run the command:

[root@rhevm ~]# engine-setup
Product Options

The first thing you will be asked is whether to install and configure the engine on this host. For our tutorial, keep the default value (Yes). If you want the WebSocket Proxy to be configured on your machine, also keep the default value (Yes).

Product Options

Product Options

Packages

The script will check whether any updates are available for the packages linked to the Manager. No user input is required at this stage.

Package Updates

Package Updates

Network Configuration

Let the script configure your iptables firewall automatically. For now we won’t use DNS, so make sure that your host-name is a fully qualified name by updating /etc/hosts as we did previously.

Network Configuration

Network Configuration

Database Configuration

The default database for RHEV 3.5 is PostgreSQL. You have the option to configure it on the same machine or remotely. For our tutorial we will use the local one and let the script configure it automatically.

Database Configuration

Database Configuration

Ovirt Configuration

In this section you will provide the admin password and the application mode for your environment.

Ovirt Configuration

Ovirt Configuration

PKI Configuration

RHEVM uses certificates to communicate securely with its hosts. Provide the organization name for the certificate.

PKI Configuration

PKI Configuration

Apache Configuration

For the RHEVM web user interface, the manager needs the Apache web server to be installed and configured; let the setup configure it automatically.

Apache Configuration

Apache Configuration

System configuration

The RHEV environment has an ISO library in which you can store many OS ISOs. This ISO library is also called the ISO domain; it is a network shared path, usually shared via NFS. This domain/path will be on the same RHEVM machine, so you can either create it manually or let the script configure it automatically.

System Configuration

System Configuration

Configuration Review

In this section you will review all previous configuration and confirm if everything is OK.

Configuration Review

Configuration Review

Summary

This is the last stage, which shows additional information about how to access the admin panel and how to start the services.

Summary

Summary

Hint: A warning may appear if the available memory is lower than the minimum requirement. For a test environment it’s not critical; just carry on.

To access RHEVM web user interface:

http://$your-ip/ovirt-engine

RedHat Enterprise Virtualization Manager

RedHat Enterprise Virtualization Manager

Then select Administrator Portal and provide your credentials: the username admin and the password you entered during the installation. Click Login.

RedHat Enterprise Virtualization Administrator Portal

RedHat Enterprise Virtualization Administrator Portal

This is the administration portal, which will be discussed later. You will notice that the Hosts tab is empty, as we haven’t added any host/hypervisor to our environment yet.

Administrator Dashboard

Administrator Dashboard

Conclusion

This is the first article in our RHEV 3.5 administration series. We introduced the solution, its features and its main components, then installed RHEV-M for our RHEV environment. In the next article we will discuss RHEV-H installation and adding hypervisors to the RHEV environment under RHEVM management.

How to Deploy RedHat Enterprise Virtualization Hypervisor (RHEV-H) – Part 2

In this second part, we discuss the deployment of RHEV-H, the hypervisor nodes of our environment, with some tips and tricks for your virtual lab or virtual environment.

Deploy RedHat Enterprise Virtualization Hypervisor

Deploy RedHat Enterprise Virtualization Hypervisor – Part 2

As we discussed before, our scenario includes two hypervisors with a separate RHEV-M machine. Deploying the manager on a separate machine is more reliable than deploying it on one of the environment hosts/nodes: if you deploy it (as a virtual machine/appliance) on one of the environment nodes and that node goes down for any reason, the RHEV-M machine/appliance goes down with it. In other words, we don’t want RHEV-M to depend on the environment nodes, so we deploy it on a separate machine that doesn’t belong to the data center/environment nodes.

Deploying RedHat Enterprise Virtualization Hypervisor

1. For our virtual environment, you should by now have the virtual network interface “vmnet3” with the specifications shown below in VMware Workstation 11.

Virtual Network Editor

Virtual Network Editor

2. Let’s deploy our nodes. You will need to create a normal virtual machine with some customization, as presented in the screenshots.

Create New Machine

Create New Machine

Select Hardware Compatibility

Select Hardware Compatibility

Select Install Source

Select Install Source

 

3. Make sure about the OS type in the next step: Other, Other 64-bit.

Select Guest OS Type

Select Guest OS Type

4. Select a suitable name and path for your virtual machine.

Set Name of OS

Set Name of OS

5. If you have more resources, increase the number of cores/processors on demand.

Processor Configuration

Processor Configuration

6. For memory, don’t choose less than 2 GB; we don’t want to suffer later.

Select VM Memory

Select VM Memory

7. For now, select the NAT connection; it doesn’t make a difference, as we will change it later.

Select Network Type

Select Network Type

8. It is very important at this point to select the SAS controller.

Select I/O Controller Types

Select I/O Controller Types

9. Choose SCSI Disk Type.

Select Disk Type

Select Disk Type

10. We will work with shared storage later, so 20 GB is more than enough.

Select Storage Capacity

Select Storage Capacity

Select Storage Drive

Select Storage Drive

11. Before finishing, let’s make some additional modifications… click Customize Hardware.

Customize Hardware

Customize Hardware

The first modification is for the processor: check the two options to enable the virtualization features of our processor.

Enable Virtualization

Enable Virtualization

The second modification is for the network configuration… change it to Custom and choose “vmnet3”.

Network Configuration

Network Configuration

The last modification is the hypervisor ISO path; then close, review, and finish.

Select Hypervisor ISO Path

Select Hypervisor ISO Path

Virtual Machine Summary

Virtual Machine Summary

12. Before starting your virtual machine, we should make some manual modifications to the VM configuration file. Go to the path of your virtual machine; you will find a file with the “vmx” extension.

Virtual Machine Configuration

Virtual Machine Configuration

13. Open it with your preferred editor and add these two options at the end of the file.

vcpu.hotadd = "FALSE"
apic.xapic.enable = "FALSE"

Configure VM

Configure VM

Then save, and go back to our virtual machine; it’s time to start it.

Start VM

Start VM

Press any key; DON’T continue with the automatic boot. This list will appear…

VM Boot Menu

VM Boot Menu

Make sure you selected the 1st line, then press “Tab” to edit some options.

Change Boot Options

Change Boot Options

Remove “quiet” from the boot options and press Enter to continue.

Remove Quiet Option

Remove Quiet Option

VM Booting

VM Booting

14. In the boot console, look for the INFO message saying that virtualization H/W was detected and enabled. Don’t continue with anything else.

Install Hypervisor

Install Hypervisor

15. Choose your preferred language and continue with your local storage.

Select Language

Select Language

Select Disk Storage

Select Disk Storage

Hint: Use the arrow keys to navigate and Space to select and deselect options.

Selected Disk Storage

Selected Disk Storage

16. There is no need to change the default values of the storage volumes and system partitions, so keep the defaults, and make sure to review and confirm the settings.

Storage Volumes

Storage Volumes

Confirm Disk Selection

Confirm Disk Selection

17. For security reasons, the root account isn’t directly available by default; you should log in with the admin account, which has full privileges, and then switch to the root account. So take care of the admin password, as it is equivalent in importance to the root password.

Mainly, the root account is needed for maintenance and troubleshooting purposes. Any configuration or customization is done with the admin account only.

Enter Login Details

Enter Login Details

Wait a few minutes until the installation finishes and then “Reboot” the system.

Hypervisor Installation

Hypervisor Installation

RHEV H Installation

RHEV H Installation

18. After rebooting, provide the admin credentials to log in.

Root Login

Root Login

This is the default console to manage the basic configuration of your hypervisor.

RHEV H Console

RHEV H Console

Hint: Review the status of your hypervisor and make sure it looks like the above.

19. Now, we will make three major, basic and important configurations.

  1. Network
  2. Security
  3. RHEV-M

Network:

Select the Network tab [using the arrow keys].

Network

Network

Change your hypervisor’s hostname and add the DNS IP address.

Set Hostname and DNS

Set Hostname and DNS

Hint: For our VMware workstation environment, we will provide the gateway IP address.

Next, configure Static IP for your NIC.

Configure IP Address

Configure IP Address

Then save and close; wait a few minutes…

Network Interface Configuration

Network Interface Configuration

Test your Internet connectivity using ping.

Test Network Connectivity

Test Network Connectivity

Important: Check the connectivity between the hypervisor nodes and the manager machine using ping.

Security

Select the Security tab. By default SSH is disabled; enable it [by checking it using ‘Space‘] and then save and close.

Enable SSH

Enable SSH

Security Configuration

Security Configuration

RHEV-M

The last and most important thing is to establish the connection between the hypervisors and the manager, then register them under the manager.

RHEV-M Configuration

RHEV-M Configuration

Because the connection uses SSL, you should compare the SSL fingerprint with the internal CA fingerprint from RHEV-M.

Check RHEV-M Fingerprint

Check RHEV-M Fingerprint
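On the RHEV-M machine you can print the CA fingerprint to compare it against what the hypervisor shows. A minimal sketch, assuming the engine CA certificate is at /etc/pki/ovirt-engine/ca.pem (the usual location for the oVirt engine; verify the path on your installation):

[root@rhevm ~]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -fingerprint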

Hint: If you compare it with the fingerprint shown in the Install RedHat Enterprise Virtualization – Part 1 article, you will find them identical.

Then accept and close the tab; next, go to the Status tab and review everything.

Configuring RHEV-M

Configuring RHEV-M

RHEV-H Summary

RHEV-H Summary

20. As a final check, use the RHEVM web interface. If you reviewed the previous article, you’ll remember that there were no hosts under the Hosts tab.

RHEV Interface

RHEV Interface

You will find that the node is registered successfully and is ready for admin approval to be included under the data center.

Conclusion

We’ve discussed how to deploy our environment’s hypervisors and connect them to the manager. In the next article we will see how to deploy a data center with clusters using shared storage. Last thing: if you are working on a real environment, skip the VMware preparation section.

Reference Links:

  1. Red Hat Enterprise Virtualization Installation Guide
  2. Red Hat Enterprise Virtualization Administration Guide

How to Deploy Data-Centers with Cluster and Add ISCSI Storage in RHEV Environment

In this part, we are going to discuss how to deploy a data center with one cluster containing our two hosts in the RHEV environment. Both hosts are connected to shared storage; in addition to the previous preparation, we will add another CentOS 6.6 virtual machine that acts as a storage node.

Create Data-Centers and Cluster in RHEV

Create Data-Centers and Cluster in RHEV – Part 3

Data Center is a term that describes the RHEV environment resources, such as logical, network and storage resources.

A data center consists of clusters, which include a set of nodes that host virtual machines and their related snapshots, templates and pools. Storage domains must be attached to a data center to make it work effectively in enterprise environments. Multiple data centers in the same infrastructure can be managed separately through the same RHEVM portal.

Data Center Diagram

Data Center Diagram

Red Hat Enterprise Virtualization uses a centralized storage system for virtual machine disk images, ISO files and snapshots.

Storage networking can be implemented using:

  1. Network File System (NFS)
  2. GlusterFS
  3. Internet Small Computer System Interface (iSCSI)
  4. Local storage attached directly to the virtualization hosts
  5. Fibre Channel Protocol (FCP)

Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated. For clustering features and enterprise deployment needs, it’s recommended to deploy shared storage in your environment instead of host-based local storage.

In general, the storage node will be accessible by the data center hosts to create, store and snapshot virtual machines, among other important tasks.

Red Hat Enterprise Virtualization platform has three types of storage domains:

  1. Data Domain: used to hold the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain. You must attach a data domain to a data center before you can attach domains of other types to it.
  2. ISO Domain: used to store ISO files which are needed to install and boot operating systems and applications for the virtual machines.
  3. Export Domain: temporary storage repositories that are used to copy and move images between data centers in Red Hat Enterprise Virtualization environments.

In this part we are going to deploy a data domain with the following storage node specifications for our tutorial:

IP Address : 11.0.0.6
Hostname : storage.mydomain.org
Virtual Network : vmnet3
OS : CentOS6.6 x86_64 [Minimal Installation]
RAM : 512M
Number of Hard disks : 2 Disk  [1st: 10G for the entire system,  2nd : 50G to be shared by ISCSI]
Type of shared storage : ISCSI 

Note: You could change the above specs as per your environment needs.

Step 1: Creating New Data Center with Cluster of Two Nodes

By default, RHEVM creates a default data center that contains one empty cluster named Default in our RHEV environment. We will create a new one and add the two (Pending Approval) hosts under it.

Check the current data centers by selecting the Data Centers tab.

Data Centers

Data Centers

1. Click on New to add a new data center to your environment. A wizard window like this will appear; fill it in as shown:

Create New Data Center

Create New Data Center

2. You will be asked to create a new cluster as a part of “Data-Center1”. Click (Configure Cluster) and fill it in as shown.

Configure Cluster

Configure Cluster

Configure Cluster for Data Center

Configure Cluster for Data Center

Important: Make sure that the CPU Type is the correct one and that ALL nodes have the same CPU type. You can modify any setting as per your environment needs. Some settings will be discussed in detail later.

3. Click (Configure Later) to exit the wizard.

Configure Cluster Later

Configure Cluster Later

4. Switch to the Hosts tab to approve and add the (Pending Approval) nodes to our environment. Select your first node and click Approve.

Add Pending Node

Add Pending Node

5. Fill in the wizard that appears with the newly created “Data-Center1” and its first cluster, as shown:

Configure Data Center and Cluster

Configure Data Center and Cluster

Important: You may see a warning about Power Management; just skip it by clicking OK, and repeat the same steps with the second node.

If everything goes well, the status should change from “Pending Approval” to (Installing).

Installing Data Centers

Installing Data Centers

Wait another few minutes; the status should change from “Installing” to (Up).

Data Centers Up

Data Centers Up

You can also check which cluster and data center are assigned to the two nodes.

Step 2: Prepare ISCSI Storage for RHEV Environment

Let’s switch to the storage node to configure the iSCSI storage.

6. First, you need to install the packages needed to configure the iSCSI target.

[root@storage ~]# yum install scsi-target-utils

7. For this tutorial we use sdb as our backing device, so we should add the section below to the configuration file targets.conf.

[root@storage ~]# vim /etc/tgt/targets.conf

Add the following lines to this file, save and close it.

<target iqn.2015-07.org.mydomain:server.target1>
    backing-store /dev/sdb
</target>

Important: Make sure that sdb is a raw device.

8. Start the tgtd service and enable it at system boot.

[root@storage ~]# service tgtd start
[root@storage ~]# chkconfig tgtd on
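You can then verify that the target is actually exported. A quick check using the tgtadm tool shipped with scsi-target-utils (it should list target1 with /dev/sdb as its backing store):

[root@storage ~]# tgtadm --lld iscsi --mode target --op show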

Important: Make sure that ports 860 and 3260 are opened in the firewall, or flush the firewall rules (not recommended).
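On a default CentOS 6 iptables setup, the ports could be opened like this (a hedged sketch; adjust it to your own firewall configuration):

[root@storage ~]# iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
[root@storage ~]# iptables -I INPUT -p tcp --dport 860 -j ACCEPT
[root@storage ~]# service iptables save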

For more details about iSCSI configuration and deployment, check Tecmint’s iSCSI series.

Step 3: Add ISCSI storage Node to RHEV Environment

The Red Hat Enterprise Virtualization platform enables you to assign and manage storage using the Administration Portal’s Storage tab. The Storage results list displays all the storage domains, and the details pane shows general information about the domain.

9. Select Data-Center1 from the left tree, then select the Storage tab as shown.

Select Storage Tab

Select Storage Tab

10. Click on New Domain to add a new storage domain to our Data-Center1, then fill in the wizard as shown.

Add New Domain to Data Center

Add New Domain to Data Center

Important: Make sure you select Storage Type “iSCSI” in the type list.

11. Then click Discover; here you will find the target name of our storage node. Click the arrow button to continue.

Target Name

Target Name

12. You will find that our storage has been discovered. Check it, then click OK as shown; wait a while and check our new storage domain under the Storage tab.

Discovered Storage

Discovered Storage

Storage Domain

Storage Domain

13. Review and check the size, status and the attached data center.

Review All Settings

Review All Settings

Conclusion

Now our RHEV environment has an active data center with one cluster that contains two active and ready nodes with active iSCSI shared storage. So, we are ready to deploy server and desktop virtual machines with virtualization features such as HA, snapshots, pools, etc., which will be discussed in the next articles…

References: RHEV Storage Administration Guide

Don’t Miss:

  1. Install RedHat Enterprise Virtualization (RHEV) 3.5 – Part 1
  2. Deploy RedHat Enterprise Virtualization Hypervisor (RHEV-H) – Part 2

How to Deploy Virtual Machines in RHEV Environment – Part 4

 

Our environment consists of one data center attached to iSCSI shared storage. This data center includes one cluster with two hosts/nodes, which will be used to host our virtual machines.

Deploy Virtual Machines in RHEV ISO Domain

Deploy Virtual Machines in RHEV ISO Domain – Part 4

Basically, in any environment we can deploy physical/virtual machines using popular methods such as ISO/DVD, network, Kickstart and so on. Our environment is no different; we will use the same methods/installation types.

As a start, we are discussing VM deployment using an ISO file/image. The RHEV environment is very well organized, so it has a special storage domain used only for this purpose, i.e. storing the ISO files used to create virtual machines; this domain is called the ISO Domain.

Step 1: Deploy New ISO Domain

Actually, RHEVM creates an ISO Domain during the installation process. To check that, just navigate to the Storage tab of the environment.

Confirm ISO Domains

Confirm ISO Domains

We could use the existing one and attach it to our data center, but let’s create a new one for more practice.

Note: The existing one uses NFS shared storage on the rhevm machine (IP: 11.0.0.3). The newly created one will use NFS shared storage on our storage node (IP: 11.0.0.6).

1. Deploy the NFS service on our storage node:

[root@storage ~]# yum install nfs-utils -y
[root@storage ~]# chkconfig nfs on 
[root@storage ~]# service rpcbind start
[root@storage ~]# service nfs start

2. Create a new directory to be shared using NFS.

[root@storage ~]# mkdir /ISO_Domain

3. Share the directory by adding this line to the /etc/exports file, and then apply the changes.

/ISO_Domain     11.0.0.0/24(rw)
[root@storage ~]# exportfs -a

Important: Change the ownership of the directory to be with uid:36 and gid:36.

[root@storage ~]# chown 36:36 /ISO_Domain/

Note: 36 is the uid of the vdsm user (the RHEVM agent) and the gid of the kvm group.

It is mandatory that the exported directory is accessible by RHEVM. Now your NFS share should be ready to be attached as an ISO Domain to our environment.
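Before attaching it, you can quickly confirm that the export is visible from the RHEV-M machine. A small check, assuming nfs-utils (which provides showmount) is installed on the rhevm machine:

[root@rhevm ~]# showmount -e 11.0.0.6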

4. To create a new ISO domain of NFS type, choose Data-Center1 from the System tab, then click on New Domain from the Storage tab.

Create New ISO Domain

Create New ISO Domain

5. Then fill in the window that appears as shown:

New Domain Details

New Domain Details

Note: Make sure that the Domain Function / Storage Type is ISO / NFS.

Wait a moment and check again under the Storage tab.

Confirm New ISO Domain

Confirm New ISO Domain

Now our ISO Domain is successfully created and attached. So, let’s upload some ISOs to it for deploying VMs.

6. Make sure you have the ISO files on your RHEVM server. We will work with two ISOs: one for Linux {CentOS_6.6} and the other for Windows {Windows_7}.

Confirm ISO Files

Confirm ISO Files

7. RHEVM provides a tool called rhevm-iso-uploader. It is used to upload ISOs to ISO Domains, among other useful tasks.

First, we will use it to list all available ISO Domains.

Check ISO Domains

Check ISO Domains

Hint: The upload operation supports multiple files (separated by spaces) and wildcards.

Second, we will use it to upload the ISOs to our ISO domain “ISO_Domain”.

Upload Files ISO Domain

Upload Files ISO Domain
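The two screenshots above correspond roughly to commands like the following. A hedged sketch of typical rhevm-iso-uploader invocations (you will be prompted for the admin password; the ISO file names are placeholders, and the exact options may vary, so check rhevm-iso-uploader --help on your version):

[root@rhevm ~]# rhevm-iso-uploader list
[root@rhevm ~]# rhevm-iso-uploader --iso-domain=ISO_Domain upload CentOS-6.6-x86_64-minimal.iso Windows_7.iso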

Note: The uploading process takes some time, as it depends on your network.

Hint: The ISO domain can also be on the RHEVM machine; that’s even recommended in some cases. Anyway, it totally depends on your environment and infrastructure needs.

8. Check the uploaded ISOs from the web interface.

Check Uploaded ISO Files

Check Uploaded ISO Files

It’s time for the second section, “Virtual Machines Deployment”.

Step 2: Virtual Machines Deployment – Linux

11. Switch to the Virtual Machines tab and click “New VM”.

Create New Virtual Machine

Create New Virtual Machine

12. Then fill in the window that appears as shown:

New VM Details

New VM Details

To modify some options like memory allocation and boot options, press “Show Advanced Options”.

13. Select “System” to modify the memory and vCPUs.

Configure Memory CPU

Configure Memory CPU

14. Select Boot Options to attach our ISO image to the virtual machine, then press OK.

Select Boot Options

Select Boot Options

15. Before starting your virtual machine, you should create and attach a virtual disk. So, press “Configure Virtual Disks“ in the window that appears automatically.

Configure Virtual Disks

Configure Virtual Disks

16. Then fill in the next window as shown and press OK.

Add Virtual Disk Details

Add Virtual Disk Details

Hint: We discussed the difference between “Pre-allocated” and “Thin Provision” previously in the KVM series article Manage KVM Storage Volumes and Pools – Part 3.

17. Close the window that asks about adding another virtual disk. Now, let’s check our virtual machine.

Check New Virtual Machine

Check New Virtual Machine

Hint: You may need to install the SPICE plug-in to make sure the virtual machine console works fine.

For RedHat-based distros:
# yum install spice-xpi
For Debian-based distros:
# apt-get install browser-plugin-spice

Then restart your Firefox browser.

18. For the first time, we will run the virtual machine using “Run Once”… just click on it and then change the order of the boot options, making CD-ROM the first one.

Run Virtual Machines

Run Virtual Machines

Note: Run Once is used to modify VM settings for a single run only (not permanently), for testing or installation.

19. After clicking (OK), you will notice the state of the virtual machine changes to Starting and then to Up.

Starting Virtual Machine

Up-Virtual-Machine

20. Click on the console icon to open the virtual machine’s console.

Open Virtual Machine

Open Virtual Machine

Basically, we successfully created a Linux server virtual machine, which is hosted on node1 {RHEVHN1}.

Step 3: Virtual Machines Deployment – Windows

So, let’s complete the journey by deploying another virtual machine that acts as a desktop machine; we will discuss the difference between the server and desktop types later. This desktop virtual machine will run Windows 7.

Generally, we will repeat almost all the previous steps with some additional ones. Follow the steps as shown in the next screens:

21. Click New VM and then fill the requested information.

New Virtual Machine

New Virtual Machine

Add Information about New VM

Add Information about New VM

22. Create a new disk and confirm that the Windows VM is created.

Add Windows Virtual Disk

Add Windows Virtual Disk

Confirm Windows VM

Confirm Windows VM

Before continuing to the next steps, note that Windows virtual machines need some special paravirtualization drivers and tools to be installed successfully… you can find them under:

/usr/share/virtio-win/
/usr/share/rhev-guest-tools-iso/

For the ISO used in this tutorial, we will need to upload these files to our ISO Domain and confirm from the web interface.

/usr/share/rhev-guest-tools-iso/RHEV-toolsSetup_3.5_9.iso
/usr/share/virtio-win/virtio-win_amd64.vfd
Upload Windows ISO

Upload Windows ISO

Confirm Windows ISO Files

Confirm Windows ISO Files

23. Click Run Once and don’t forget to attach the virtual floppy disk, then open the VM console.

Run Windows Virtual Machine

Run Windows Virtual Machine

Open Windows VM Console

Open Windows VM Console

24. Follow the Windows instructions to complete the installation. At the disk partitioning stage, you will notice that no disks appear. Click on ”Load Driver” then ”Browse”.

Windows Driver Errors

Windows Driver Errors

Load Windows Drivers

Load Windows Drivers

25. Then locate the path of drivers on the virtual floppy disk and select the two drivers related to Ethernet and SCSI controller.

Browse Drivers

Browse Drivers

Install Drivers

Install Drivers

26. Then click Next and wait some time until our 10 GB virtual disk appears.

Installing Drivers

Installing Drivers

Loaded Disk Drive

Loaded Disk Drive

Complete the installation process until it finishes successfully. Once it has finished, go to the RHEVM web interface and change the attached CD.

Change CD Drive

Change CD Drive

27. Now attach the RHEV tools CD and then go back to the Windows virtual machine; you will find the tools CD attached. Install the RHEV tools as shown.

RHEV Tools Setup

RHEV Tools Setup

Install RHEV Tools on Windows

Install RHEV Tools on Windows

Follow the steps sequentially until it finishes successfully, then reboot the system.

RHEV Tools Installation

RHEV Tools Installation

And finally, your Windows virtual machine is up and running. :)

Conclusion

In this part we discussed the importance and deployment of the ISO Domain, and how to use it to store the ISO files that are later used to deploy virtual machines. Linux and Windows virtual machines have been deployed and are working fine. In the next part, we will discuss the importance of clustering and its tasks, and how to use the clustering features in our environment.

RHEV Clustering and RHEL Hypervisors Installation – Part 5

In this part we are going to discuss some important points related to our RHEV series. In Part 2 of this series, we discussed RHEV hypervisor deployment and installation. In this part we will discuss another way to install a RHEV hypervisor.

RHEV Clustering and RHEL Hypervisors Installation

RHEV Series: RHEV Clustering and RHEL Hypervisors Installation – Part 5

The first way was done by using the dedicated RHEV-H image, customized by RedHat itself without any modification or change from the admin side. The other way is to use a normal RHEL server [minimal installation] that will act as a RHEV hypervisor.

Step 1: Add RHEL Hypervisor to the Environment

1. Install a subscribed RHEL 6 server [minimal installation]. You may grow your virtual environment by adding additional subscribed RHEL 6 servers [minimal installation] acting as hypervisors.

Virtual Machine Specification
OS: RHEL6.6 x86_64
Number of processors: 2
Number of cores : 1
Memory : 3G
Network : vmnet3
I/O Controller : LSI Logic SAS
Virtual Disk : SCSI
Disk Size : 20G
IP: 11.0.0.7
Hostname: rhel.mydomain.org

And make sure you checked the virtualization option in the VM processor settings.

Hint: Make sure your system is subscribed to the RedHat channels and up-to-date. If you don’t know how to subscribe to the RedHat subscription channels, you may read the article Enable Red Hat Subscription Channel.

Tip: To save your resources, you can shut down one of the two currently running hypervisors.

2. To turn your server into a hypervisor {use it as a hypervisor}, you need to install the RHEVM agent on it.

# yum install vdsm

After the package installation completes, go to the RHEVM web interface to add it.
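Optionally, you can confirm that the agent package is in place before adding the host (the vdsmd service itself is normally configured by RHEV-M during host deployment):

# rpm -q vdsm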

3. Unlike a RHEV-H hypervisor, a RHEL hypervisor can only be added in one way, from RHEV-M, using the root credentials of the RHEL hypervisor. So, from the RHEVM WUI switch to the Hosts tab and click New.

Add RHEL Hypervisor

Add RHEL Hypervisor

Then provide your host information as shown.

Add Host Information

Add Host Information

Next, ignore the power management warning and finish; then wait a few minutes and check the status of the newly added host.

New Host Status

New Host Status

Confirm Host Information

Confirm Host Information

For more details about adding a RHEL-based host, check out the official RedHat RHEV documentation.

Step 2: Managing RHEV Clustering

Clustering in RHEV describes a group of hosts of the same CPU type that share the same storage [e.g. over the network] and are used to perform specific tasks [e.g. high availability].

Clustering in general has a lot of additional uses; you can check out the article that explains What is Clustering and its Advantages/Disadvantages.

The main advantage of clustering in RHEV is to enable and manage virtual machines migration between hosts that belong to the same cluster.

So, how do virtual machines migrate between hosts?

RHEV has two strategies:

1. Live Migration
2. High Availability

1. Live Migration

Live migration is used in non-critical situations, meaning everything is working fine in general but you have to do some load-balancing tasks (e.g. you find that one host is more loaded with virtual machines than another, so you may live migrate a virtual machine from one host to another to achieve load balancing).

Note: There is no interruption to the services, applications or users running inside the VM during live migration. Live migration is also referred to as resource re-allocation.

Live migration can be performed manually or automatically according to a pre-defined policy:

  1. Manually: force-select the destination host, then migrate the VM to it manually using the WUI.
  2. Automatically: use one of the cluster policies to manage live migration according to RAM usage, CPU utilization, etc.

Switch to the Clusters tab and select Cluster1, then click on Edit.

Clustering Tab

Clustering Tab

From the window tabs, switch to the Cluster Policy tab.

Cluster Policy

Cluster Policy

Select the evenly_distributed policy. This policy allows you to configure the maximum threshold for CPU utilization on the host and the time the load is allowed to persist before live migration starts.

Hint: As shown, I configured the maximum threshold to be 50% and the duration to be 1 minute.

Configure Cluster Properties

Configure Cluster Properties

Then click OK and switch to the VMs tab.

Select the Linux VM [previously created], then click Edit and check these points.

1. From the Host tab: check that both manual and automatic live migration are allowed for this VM.

Cluster Migration Options

Cluster Migration Options

2. From the HA tab: check the priority level of your virtual machine. In our case it’s not very important, as we are playing with only one VM, but it will be important to set priorities for your VMs in a large environment.

Cluster VM Priorities

Cluster VM Priorities

Then start the Linux VM.

First, we will use manual live migration. The Linux VM is now running on rhel.mydomain.org.

Linux VM Status

Linux VM Status

Let’s run the following command in the VM console before starting the migration.

# ls -lRZ / 

Then select Linux VM and click Migrate.

Linux VM Migrate

Linux VM Migrate

If you select the automatic option, the system will pick the most suitable destination host according to the cluster policy. We will test this later without any interference from the administrator.

Migrate Virtual Machines

Migrate Virtual Machines

So, after selecting the destination manually, click OK, go to the console and monitor the running command. You can also check the VM status.

Monitor VM Status

Monitor VM Status

You may also want to monitor the task events.

Monitor Task Events

Monitor Task Events

After a few seconds, you will notice a change in the VM’s Host field.

Confirm VM Changes

Confirm VM Changes

Your VM has been manually live migrated successfully!

Let’s try automatic live migration. Our target is to make the CPU load on the rhevhn1 host exceed 50%. We will do that by increasing the load on the VM itself, so from the console run this command:

# dd if=/dev/urandom of=/dev/null

And monitor the load on the host.

Monitor VM Load

Monitor VM Load

After a few minutes, the load on the host will exceed 50%.

VM Load Alert

VM Load Alert

Just wait a few more minutes, and live migration will start automatically as shown.

VM Live Migration

VM Live Migration

You can also check the Tasks tab; after a little waiting, your virtual machine is automatically live migrated to the rhel host.

Monitor VM

Monitor VM

VM RHEL Migration

VM RHEL Migration

Important: Make sure that one of your hosts has more resources than the other one. If the two hosts have identical resources, the VM won’t be migrated because there will be no difference!

Hint: Putting a host into maintenance mode will automatically live migrate its up and running VMs to other hosts in the same cluster.

For further information about VM Migrations, read Migrating Virtual Machines Between Hosts.

Hint: Live migration between different clusters isn’t officially supported, except for one case; you can check it here.

2. High Availability

In contrast to live migration, HA is used to cover critical situations, not just load-balancing tasks. What they have in common is that your VM is also migrated to another host, but with reboot downtime.

If you have a failed, non-operational or non-responsive host in your cluster, live migration cannot help you. HA will power off the virtual machine and restart it on another up-and-running host in the same cluster.

To enable HA, you must have at least one power management device [e.g. a power switch] in your environment.

Unfortunately, we aren’t able to do that in our virtual environment. For more about HA in RHEV, please check out Improving Uptime with VM High Availability.

Remember: Live migration and high availability work with hosts in the same cluster that have the same CPU type and are connected to shared storage.

Conclusion:

We have reached a peak point in our series, as we discussed clustering, one of the important features in RHEV, describing what it is and why it matters. We also discussed the second method to deploy RHEV hypervisors, which is based on RHEL [at least 6.6 x86_64].

In the next article, we will perform some operations on virtual machines, such as snapshots, sealing, cloning, exporting and pools.

How to Manage RedHat Enterprise Virtualization (RHEV) Virtual Machines Operations and Tasks – Part 6

In this part of our tutorial we are going to discuss operations and tasks such as taking snapshots, creating pools, making templates and cloning, which are the main operations that can be performed on virtual machines hosted by the RHEV environment.

Before going further, I request you to read the rest of the articles from this RHEV series here:

RedHat Enterprise Virtualization (RHEV) Administration Series – Part 1-7

Manage RHEV VM Operations and Tasks

Manage RHEV VM Operations and Tasks – Part 6

Snapshots

A snapshot is used to save a VM’s state at a specific point in time. This is very useful during software testing, or when something goes wrong on your system, as you can return to the point in time at which you took the snapshot.

1. Start your linux-vm machine and verify the OS version and type before taking the snapshot.

Check Linux OS Version

Img 01: Check Linux OS Version

2. Click on “Create Snapshot”.

Create RHEV Snapshot

Img 02: Create RHEV Snapshot

3. Add a description, select the disks and whether to save memory, then click OK.

Add Snapshot Description

Img 03: Add Snapshot Description

Check the status of the snapshot and the task status from the tasks bar.

Confirm Created Snapshot Status

Img 04: Confirm Created Snapshot Status

After it finishes, you will note that the status of the snapshot changes from Locked to OK, which means that your snapshot is ready and was created successfully.

Check Snapshot Status

Img 05: Check Snapshot Status

4. Let’s go to the VM console and delete the /etc/issue file.

Delete Issue File

Img 06: Delete Issue File

5. For the reverting/restoring process, your virtual machine should be in the down state. Make sure it’s powered off and then click “Preview” to check the snapshot and revert to it on the fly.

Shut Down Snapshot

Img 07: Shut Down Snapshot

Now confirm restoring the memory.

Restore Snapshot Memory

Img 08: Restore Snapshot Memory

Wait for the preview to finish; after a few minutes, you will note that the snapshot status is “In preview”.

Snapshot In Preview State

Img 09: Snapshot In Preview State

Now we have two ways:

6. The first way is to directly “Commit” the restored snapshot to the original virtual machine, finishing the whole reverting process.

The second way is to check the reverted changes before committing the restored snapshot to the original VM. After checking, we proceed with the first way, “Commit”.

For this article, we will start with the second way. So, we need to power up the virtual machine and then check the /etc/issue file. You will find the file there, unchanged.

Check Issue File

Img 10: Check Issue File

7. Your VM should be powered off for the reverting process. After powering it off, commit your snapshot to the VM.

Commit VM Snapshot

Img 11: Commit VM Snapshot

Then watch the commit process; after it finishes, you will find that the snapshot status is “OK”.

Confirm Commit Snapshot

Img 12: Confirm Commit Snapshot

Hints:

  1. If you don’t want to confirm reverting to the snapshot after the preview stage, just click on “Undo” to skip the snapshot.
  2. It’s always recommended to take a snapshot of a powered-down VM instead of a running one.
  3. You can create a new VM from the current snapshot: just select your preferred snapshot and click on “Clone”.

Create VM Snapshot Clone

Img 13: Create VM Snapshot Clone

Templates:

Actually, a template is just a normal copy of a virtual machine, but without any pre-configuration related to the original VM’s operating system. Templates are used to speed up and reduce the time of VM operating system installation.

Creating templates involves two main processes:
  1. (A) Sealing the original virtual machine.
  2. (B) Taking a copy [Create Template] of the sealed VM to become a separate template.

A. Sealing Process:

To seal a RHEL 6 virtual machine, you should take care of these points:

8. Flag the system for re-configuration on the next boot by creating this empty hidden file.

# touch /.unconfigured

9. Remove any SSH host keys, set the hostname to localhost.localdomain in the /etc/sysconfig/network file, and also remove the system udev rules.

# rm -rf /etc/ssh/ssh_host_*
# rm -rf /etc/udev/rules.d/70-*

10. Remove the MAC address from the network interface configuration file, e.g. [/etc/sysconfig/network-scripts/ifcfg-eth0], delete all system logs under /var/log/, and finally power off your virtual machine, as sketched below.

Commands to Follow

Img 14: Commands to Follow
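A minimal sketch of the sealing commands described in steps 9 and 10 (the interface name eth0 and the HWADDR line are assumptions; adjust them to your VM):

# sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
# rm -rf /var/log/*
# poweroff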

B. Creating Templates

11. Select the sealed VM and click “Create Template”.

Create New VM Template

Img 15: Create New VM Template

12. Provide the details and properties of your new template.

Add Template Details

Img 16: Add Template Details

Now you can check the process from the Tasks pane, and you can also switch to the Templates tab to monitor the status of your new template.

Check Template Information

Img 16: Check Template Information

Monitor Template Status

Img 17: Monitor Template Status

Wait a few minutes, then check the template status again.

Check VM Template Again

Img 18: Check VM Template Again

You will note that it has changed from Locked to OK. Now our new template is ready to be used; actually, we will use it in the next section.

Creating Pools:

A pool is a group of identical virtual machines. Pooling is used to create a given number of identical virtual machines in one step. These virtual machines can be based on a pre-created template.

Creating New Pool

13. Switch to the Pools tab, click New, then fill in the wizard window that appears.

Create New Pool

Img 19: Create New Pool

14. Now check the status of the created pool VMs and wait a few minutes; you will note that the status of the virtual machines changes from Locked to Down.

VM Pool Status Locked

Img 20: VM Pool Status Locked

VM Pool Status Down

Img 21: VM Pool Status Down

You can also check the status from the Virtual Machines tab.

Check Pool Status from VM

Img 22: Check Pool Status from VM

15. Let’s try to run one of the pool’s virtual machines.

Run Virtual Machine

Img 22: Run Virtual Machine

That’s it; you will be asked for a new root password and also about the basic authentication configuration. Once finished, your new VM is ready for use.

Select Basic Authentication

23: Select Basic Authentication

You can also monitor the VMs from the Pools tab.

Monitor Virtual Machine

Img 24: Monitor Virtual Machine

Notes:

  1. To delete a pool, you should detach all of the VMs from the pool.
  2. To detach a VM from a pool, the VM must be in the down state.
  3. Compare the VM installation time [normal installation vs. using a template].

Create VM Clones:

Cloning is a normal copy process without any change to the original source. Cloning can be done from the original VM or from a snapshot.

To take a clone:

16. Select the original source [VM or snapshot], then click “Clone VM”.

Create VM Clone

Img 25: Create VM Clone

Hint: If you take the clone from a VM, the VM must be in the down state.

17. Provide a name for your cloned VM and wait a few minutes; you will find that the cloning process is done and the new VM is ready to be used.

Give VM Clone Name

Img 26: Give VM Clone Name

VM Clone Details

Img 27: VM Clone Details

Conclusion

As a RHEV administrator, there are some main tasks to be performed on the environment’s virtual machines. Cloning, creating pools, making templates and taking snapshots are basic and important tasks that should be mastered by a RHEV admin. These tasks are also considered the core tasks of any virtualization environment, so make sure you understand them well, then do more and more practical labs in your private environment.

Resources: RHEV Administration Guide

 

Source

Understanding Shell Initialization Files and User Profiles in Linux

Linux is a multi-user, time-sharing system, implying that more than one user can log in and use the system. System administrators have the task of managing various aspects of how different users operate a system: installing/updating/removing software, the programs they can run, the files they can view/edit, and so on.

Linux also allows users’ environments to be created or maintained in two major ways: using system-wide (global) and user-specific (personal) configurations. Normally, the basic method of working with a Linux system is the shell, and the shell creates an environment depending on certain files it reads during its initialization after a successful user login.

Suggested Read: How to Set Environment Variables in Linux

In this article, we will explain shell initialization files in relation to user profiles for local user management in Linux. We will let you know where to keep custom shell functions, aliases, variables as well as startup programs.

Important: For the purpose of this article, we will focus on bash, a sh compatible shell which is the most popular/used shell on Linux systems out there.

If you are using a different shell (zsh, ash, fish etc..) program, read through its documentation to find out more about some of the related files we will talk about here.

Shell Initialization in Linux

When the shell is invoked, there are certain initialization/startup files it reads which help to setup an environment for the shell itself and the system user; that is predefined (and customized) functions, variables, aliases and so on.

There are two categories of initialization files read by the shell:

  • system-wide startup files – these contain global configurations that apply to all users on the system, and are usually located in the /etc directory. They include: /etc/profile and /etc/bashrc or /etc/bash.bashrc.
  • user-specific startup files – these store configurations that apply to a single user on the system and are normally located in the user’s home directory as dot files. They can override the system-wide configurations. They include: .profile, .bash_profile, .bashrc and .bash_login.

Again, the shell can be invoked in three possible modes:

1. Interactive Login Shell

The shell is invoked after a user successfully logs into the system, using /bin/login, after the credentials stored in the /etc/passwd file have been read.

When the shell is started as an interactive login shell, it reads the /etc/profile and its user-specific equivalent ~/.bash_profile.

Linux Interactive Login Shell

Linux Interactive Login Shell

2. Interactive non-login Shell

The shell is started at the command line using a shell program, for example $ /bin/bash or $ /bin/zsh. It can also be started by running the /bin/su command.

Additionally, an interactive non-login shell can also be invoked from a terminal program such as konsole, terminator or xterm within a graphical environment.

When the shell is started in this state, it copies the environment of the parent shell, and reads the user-specific ~/.bashrc file for additional startup configuration instructions.

$ su
# ls -la
Interactive Non-Login Shell

Interactive Non-Login Shell

3. Non-interactive Shell

The shell is invoked when a shell script is running. In this mode, it is processing a script (a set of shell or generic system commands/functions) and doesn’t require user input between commands unless otherwise specified. It operates using the environment inherited from the parent shell.

Understanding System-wide Shell Startup Files

In this section, we will shed more light on the shell startup files that store configurations for all users on the system. These include:

The /etc/profile file – it stores system-wide environment configurations and startup programs for login setup. All configurations that you want to apply to all system users’ environments should be added in this file.

For instance, you can set the global PATH environment variable here.

# cat /etc/profile
System Wide Configuration File

System Wide Configuration File

Note: In certain systems like RHEL/CentOS 7, you’ll get such warnings as “It’s not recommended to change this file unless you know what you are doing. It’s much better to create a custom .sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates”.

The /etc/profile.d/ directory – stores shell scripts used to make custom changes to your environment:

# cd /etc/profile.d/
# ls  -l 
Stores Custom Shell Scripts

Stores Custom Shell Scripts
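For example, a global setting could live in a small script here. A minimal sketch, where /etc/profile.d/custom.sh and the /opt/mytools/bin directory are hypothetical names used only for illustration:

# cat /etc/profile.d/custom.sh
# Hypothetical example: append a custom tools directory to every user's PATH
export PATH=$PATH:/opt/mytools/bin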

The /etc/bashrc or /etc/bash.bashrc file – contains system-wide functions and aliases including other configurations that apply to all system users.

If your system has multiple types of shells, it is a good idea to put bash-specific configurations in this file.

# cat /etc/bashrc
System Wide Functions and Aliases

System Wide Functions and Aliases

Understanding User-specific Shell Startup Files

Next, we will explain more about the user-specific shell (bash) startup dot files that store configurations for a particular user on the system. They are located in a user’s home directory and include:

# ls -la
User Specific Configuration Files

User Specific Configuration Files

The ~/.bash_profile file – this stores user-specific environment and startup program configurations. You can set your custom PATH environment variable here, as shown in the screenshot below:

# cat ~/.bash_profile
User Bash Profile

User Bash Profile
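
For instance, appending a personal bin directory to the PATH in ~/.bash_profile could look like this minimal sketch:

# include the user's private bin directory in PATH
PATH=$PATH:$HOME/bin
export PATH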

The ~/.bashrc file – this file stores user specific aliases and functions.

# cat ~/.bashrc
User Bashrc File

User Bashrc File
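
A couple of typical user-specific entries in ~/.bashrc might look like this (the aliases are only examples):

# example personal aliases
alias c='clear'
alias update='sudo yum update'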

The ~/.bash_login file – it contains specific configurations that are normally only executed when you log in to the system. When the ~/.bash_profile is absent, this file will be read by bash.

The ~/.profile file – this file is read in the absence of ~/.bash_profile and ~/.bash_login; it can store the same configurations, which can then also be accessed by other shells on the system. Because we have mainly talked about bash here, take note that other shells might not understand the bash syntax.

Next, we will also explain two other important user specific files which are not necessarily bash initialization files:

The ~/.bash_history file – bash maintains a history of commands that have been entered by a user on the system. This list of commands is kept in a user’s home directory in the ~/.bash_history file.

To view this list, type:

$ history 
or 
$ history | less
View Last Executed Commands

View Last Executed Commands

The ~/.bash_logout file – it’s not used for shell startup, but stores user specific instructions for the logout procedure. It is read and executed when a user exits from an interactive login shell.

One practical example would be clearing the terminal window upon logout. This is useful for remote connections, which will then leave a clean window after being closed:

# cat bash_logout 
Clear History After Logout

Clear History After Logout
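
A minimal ~/.bash_logout that simply clears the screen when the login shell exits could contain:

# ~/.bash_logout - executed by bash when a login shell exits
clear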

For additional insights, check out the contents of these shell initialization files on various Linux distros and also read through the bash man page.

 
Source

Understand Linux Shell and Basic Shell Scripting Language Tips (I,II,III parts)

A picture speaks more than words, and the picture below sums up how the Linux shell works.

 

Understanding Linux Shell

Understanding Linux Shell

Read Also

  1. 5 Shell Scripts to Learn Shell Programming – Part II
  2. Sailing Through The World of Linux BASH Scripting – Part III

Understanding Linux Shell

  1. Shell: A command-line interpreter that connects a user to the operating system and allows commands to be executed, either interactively or through text scripts.
  2. Process: Any task that a user runs on the system is called a process. A process is a little more complex than just a task.
  3. File: It resides on the hard disk (hdd) and contains data owned by a user.
  4. X-windows aka windows: A mode of Linux where the screen (monitor) can be split into small parts called windows, which allow a user to do several things at the same time, switch from one task to another easily and view graphics in a nice way.
  5. Text terminal: A monitor that has only the capability of displaying text, with no graphics or only a very basic graphics display.
  6. Session: The time between logging on and logging out of the system.

Types of Shell on a Standard Linux Distribution

Bourne shell : The Bourne shell was one of the major shells used in early versions and became a de facto standard. It was written by Stephen Bourne at Bell Labs. Every Unix-like system has at least one shell compatible with the Bourne shell. The Bourne shell program name is “sh” and it is typically located in the file system hierarchy at /bin/sh.

C shell: The C shell was developed by Bill Joy for the Berkeley Software Distribution. Its syntax is modelled after the C programming language. It is used primarily for interactive terminal use, but less frequently for scripting and operating system control. C shell has many interactive commands.

Beginning the Fun! (Linux Shell)

There exist thousands of commands for the command-line user; how about remembering all of them? Hmmm! You simply can not. The real power of the computer is to ease your work: to automate the process, you need scripts.

Scripts are collections of commands, stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. The shell also provides a variety of useful programming features to make scripts truly powerful.

Basics of Shell Programming

  1. To get a Linux shell, you need to start a terminal.
  2. To see what shell you have, run: echo $SHELL.
  3. In Linux, the dollar sign ($) stands for a shell variable.
  4. The ‘echo‘ command just returns whatever you type in.
  5. The pipeline instruction (|) comes to the rescue when chaining several commands (see the short example after this list).
  6. Linux commands have their own syntax; Linux won't forgive mistakes, whatever they are. If you get a command wrong, you won't flunk or damage anything, but it won't work.
  7. #!/bin/sh – It is called shebang. It is written at the top of a shell script and it passes the instruction to the program /bin/sh.
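
As a quick illustration of points 2-5 above (assuming bash is your default shell; the exact output depends on your system):

$ echo $SHELL
/bin/bash
$ echo "linux is fun" | tr 'a-z' 'A-Z'
LINUX IS FUN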

About shell Script

A shell script is just a simple text file, conventionally given a ".sh" extension, with executable permission.

Process of writing and executing a script

  1. Open terminal.
  2. Navigate to the place where you want to create script using ‘cd‘ command.
  3. cd (press Enter) [This will bring the prompt to your home directory].
  4. touch hello.sh (Here we named the script hello; the ‘.sh‘ extension is conventional rather than compulsory, but it helps identify the file as a shell script).
  5. vi hello.sh (nano hello.sh) [You can use your favourite editor, to edit the script].
  6. chmod 744 hello.sh (making the script executable).
  7. sh hello.sh or ./hello.sh (running the script)
Writing your First Script
#!/bin/bash
# My first script

echo "Hello World!"

Save the above lines on a text file, make it executable and run it, as described above.

Sample Output

Hello World!

In the above code:

#!/bin/bash (is the shebang.)
# My first script (is a comment; anything following '#' is a comment)
echo "Hello World!" (is the main part of this script)
Writing your Second Script

OK, time to move to the next script. This script will tell you your "username" and list the running processes.

#! /bin/bash
echo "Hello $USER"
echo "Hey i am" $USER "and will be telling you about the current processes"
echo "Running processes List"
ps

Create a file with the above code, save it with any name you want but with the extension ".sh", make it executable and run it from your terminal.

Sample Output

Hello tecmint
Hey i am tecmint and will be telling you about the current processes
Running processes List
  PID TTY          TIME CMD
 1111 pts/0    00:00:00 bash
 1287 pts/0    00:00:00 sh
 1288 pts/0    00:00:00 ps

Was this cool? Writing a script is as simple as getting an idea and writing pipelined commands. There are some restrictions, too. Shell scripts are excellent for concise filesystem operations and for scripting the combination of existing functionality in filters and command-line tools via pipes.

When your needs are greater – whether in functionality, robustness, performance, efficiency, etc. – then you can move to a more full-featured language.

If you already know C, Perl, Python or any other programming language, learning this scripting language won't be very difficult.

Writing your Third Script

Moving on, let's write our third and last script for this article. This script acts as an interactive script. Why don't you execute this simple yet interactive script yourself and tell us how you felt?

#! /bin/bash
echo "Hey what's Your First Name?";
read a;
echo "welcome Mr./Mrs. $a, would you like to tell us, Your Last Name";
read b;
echo "Thanks Mr./Mrs. $a $b for telling us your name";
echo "*******************"
echo "Mr./Mrs. $b, it's time to say you good bye"

Sample Output

Hey what's Your First Name?
Avishek
welcome Mr./Mrs. Avishek, would you like to tell us, Your Last Name
Kumar
Thanks Mr./Mrs. Avishek Kumar for telling us your name
******************************************************
Mr./Mrs. Kumar, it's time to say you good bye

Well, this is not the end. We tried to bring you a taste of scripting. In our future articles we will elaborate on this never-ending scripting topic.

5 Shell Scripts for Linux Newbies to Learn Shell Programming – Part II

To learn something you need to do it, without the fear of being unsuccessful. I believe in practicality and hence will be accompanying you into the practical world of the scripting language.

Learn Basic Shell Scripting

Learn Basic Shell Scripting

This article is an extension of our First article Understand Linux Shell and Basic Shell Scripting – Part I, where we gave you a taste of Scripting, continuing that we won’t disappoint you in this article.

Script 1: Drawing a Special Pattern

#!/bin/bash
MAX_NO=0
echo -n "Enter Number between (5 to 9) : "
read MAX_NO
if ! [ $MAX_NO -ge 5 -a $MAX_NO -le 9 ] ; then
   echo "WTF... I ask to enter number between 5 and 9, Try Again"
   exit 1
fi
clear
for (( i=1; i<=MAX_NO; i++ ))
do
    for (( s=MAX_NO; s>=i; s-- ))
    do
       echo -n " "
    done
    for (( j=1; j<=i; j++ ))
    do
       echo -n " ."
    done
    echo ""
done
###### Second stage ######################
for (( i=MAX_NO; i>=1; i-- ))
do
    for (( s=i; s<=MAX_NO; s++ ))
    do
       echo -n " "
    done
    for (( j=1; j<=i;  j++ ))
    do
     echo -n " ."
    done
    echo ""
done
echo -e "\n\n\t\t\t Whenever you need help, Tecmint.com is always there"

Most of the above ‘keywords‘ would be known to you and most of them are self-explanatory. For example, MAX_NO holds the maximum value entered by the user, ‘for‘ is a loop, and anything within the loop keeps executing again and again as long as the loop condition holds for the given input value.

Sample Output
[root@tecmint ~]# chmod 755 Special_Pattern.sh
[root@tecmint ~]# ./Special_Pattern.sh
Enter Number between (5 to 9) : 6
       .
      . .
     . . .
    . . . .
   . . . . .
  . . . . . .
  . . . . . .
   . . . . .
    . . . .
     . . .
      . .
       .

                         Whenever you need help, Tecmint.com is always there

If you are a little aware of any programming language, learning the above script is not difficult; even if you are new to computation, programming and Linux, it is not going to be very hard.

Download Special_Pattern.sh

Script 2: Creating Colorful Script

Who says Linux is colorless and boring? Save the code below to a file such as Colorfull.sh, make it executable and run it. Don't forget to tell me how it was, and think about what you could achieve by implementing it somewhere.

#!/bin/bash
clear 
echo -e "33[1m Hello World"
# bold effect
echo -e "33[5m Blink"
# blink effect
echo -e "33[0m Hello World"
# back to normal
echo -e "33[31m Hello World"
# Red color
echo -e "33[32m Hello World"
# Green color
echo -e "33[33m Hello World"
# See remaining on screen
echo -e "33[34m Hello World"
echo -e "33[35m Hello World"
echo -e "33[36m Hello World"
echo -e -n "33[0m"
# back to normal
echo -e "33[41m Hello World"
echo -e "33[42m Hello World"
echo -e "33[43m Hello World"
echo -e "33[44m Hello World"
echo -e "33[45m Hello World"
echo -e "33[46m Hello World"
echo -e "33[0m Hello World"

Note: Don’t bother about the color codes now; the ones important to you will be on the tip of your tongue, gradually.

Warning: Your terminal might not support blinking text.

Sample Output
[root@tecmint ~]# chmod 755 Colorfull.sh
[root@tecmint ~]# ./Colorfull.sh

Hello World
Blink
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World

Download Colorfull.sh

Script 3: Encrypt a File/Directory

This script will encrypt a file (remember? in Linux, everything – directories, devices, etc. – is treated as a file). A current limitation of the script is that it doesn't support auto-completion of the name using TAB. Moreover, you need to place the script and the file to be encrypted in the same folder. You may need to install the “pinentry-gui” package using yum or apt, if required.

[root@midstage ~]# yum install pinentry-gui
[root@midstage ~]# apt-get install pinentry-gui

Create a file called “Encrypt.sh”, place the following script in it, make it executable and run it as shown.

#!/bin/bash
echo "Welcome, I am ready to encrypt a file/folder for you"
echo "currently I have a limitation, Place me to the same folder, where a file to be 
encrypted is present"
echo "Enter the Exact File Name with extension"
read file;
gpg -c "$file"
echo "I have encrypted the file successfully..."
echo "Now I will be removing the original file"
rm -rf "$file"

Sample Output

[root@tecmint ~]# chmod 755 Encrypt.sh
[root@tecmint ~]# ./Encrypt.sh

Welcome, I am ready to encrypt a file/folder for you
currently I have a limitation, Place me to the same folder, where a file to be

encrypted is present
Enter the Exact File Name with extension

package.xml

                                                   ┌─────────────────────────────────────────────────────┐
                                                   │ Enter passphrase                                    │
                                                   │                                                     │
                                                   │                                                     │
                                                   │ Passphrase *******_________________________________ │
                                                   │                                                     │
                                                   │       <OK>                             <Cancel>     │
                                                   └─────────────────────────────────────────────────────┘

Please re-enter this passphrase

                                                   ┌─────────────────────────────────────────────────────┐
                                                   │ Please re-enter this passphrase                     │
                                                   │                                                     │
                                                   │ Passphrase ********________________________________ │
                                                   │                                                     │
                                                   │       <OK>                             <Cancel>     │
                                                   └─────────────────────────────────────────────────────┘

I have encrypted the file successfully...
Now I will be removing the original file

gpg -c : This will encrypt your file using a passkey, aka password. You probably never thought the process of learning could be this easy. So after encrypting a file, what do you need? Obviously, to decrypt the file. And I want you – the learner, the reader – to write the decryption script yourself. Don't worry, I am not leaving you in the middle; I just want you to gain something out of this article.

Note: gpg -d filename.gpg > filename is what you need to implement in your decryption script. You may post your script in the comments if successful; if not, you may ask me to write it for you.
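
If you want to compare your attempt later, a minimal decryption sketch (assuming the encrypted file keeps the .gpg suffix that gpg -c adds) might look like this:

#!/bin/bash
echo "Enter the exact name of the .gpg file to decrypt"
read file
# decrypt to a file with the .gpg suffix stripped
gpg -d "$file" > "${file%.gpg}"
echo "The decrypted copy has been written to ${file%.gpg}"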

Download Encrypt.sh

Script 4: Checking Server Utilization

Checking server utilization is one of the important tasks of an administrator, and a good administrator is one who knows how to automate his day-to-day tasks. Below is a script that will give you a lot of such information about your server. Check it out yourself.

#!/bin/bash
    date;
    echo "uptime:"
    uptime
    echo "Currently connected:"
    w
    echo "--------------------"
    echo "Last logins:"
    last -a |head -3
    echo "--------------------"
    echo "Disk and memory usage:"
    df -h | xargs | awk '{print "Free/total disk: " $11 " / " $9}'
    free -m | xargs | awk '{print "Free/total memory: " $17 " / " $8 " MB"}'
    echo "--------------------"
    start_log=`head -1 /var/log/messages |cut -c 1-12`
    oom=`grep -ci kill /var/log/messages`
    echo -n "OOM errors since $start_log :" $oom
    echo ""
    echo "--------------------"
    echo "Utilization and most expensive processes:"
    top -b |head -3
    echo
	top -b |head -10 |tail -4
    echo "--------------------"
    echo "Open TCP ports:"
    nmap -p- -T4 127.0.0.1
    echo "--------------------"
    echo "Current connections:"
    ss -s
    echo "--------------------"
    echo "processes:"
    ps auxf --width=200
    echo "--------------------"
    echo "vmstat:"
    vmstat 1 5
Sample Output
[root@tecmint ~]# chmod 755 Server-Health.sh
[root@tecmint ~]# ./Server-Health.sh

Tue Jul 16 22:01:06 IST 2013
uptime:
 22:01:06 up 174 days,  4:42,  1 user,  load average: 0.36, 0.25, 0.18
Currently connected:
 22:01:06 up 174 days,  4:42,  1 user,  load average: 0.36, 0.25, 0.18
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
tecmint   pts/0    116.72.134.162   21:48    0.00s  0.03s  0.03s sshd: tecmint [priv]
--------------------
Last logins:
tecmint   pts/0        Tue Jul 16 21:48   still logged in    116.72.134.162
tecmint   pts/0        Tue Jul 16 21:24 - 21:43  (00:19)     116.72.134.162
--------------------
Disk and memory usage:
Free/total disk: 292G / 457G
Free/total memory: 3510 / 3838 MB
--------------------
OOM errors since Jul 14 03:37 : 0
--------------------
Utilization and most expensive processes:
top - 22:01:07 up 174 days,  4:42,  1 user,  load average: 0.36, 0.25, 0.18
Tasks: 149 total,   1 running, 148 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.3%id,  0.6%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      20   0  3788 1128  932 S  0.0  0.0   0:32.94 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      RT   0     0    0    0 S  0.0  0.0   0:14.07 migration/0

Note: I have given you a script that prints the output in the terminal itself; how about getting the output in a file for future reference? Implement it using the redirect operators (see the example after the list below).

  1. ‘>‘ : the redirection operator creates the file if it does not exist; if it does exist, the contents are overwritten.
  2. ‘>>‘ : when you use >>, you are appending information rather than replacing it.
  3. ‘>>‘ is safer, as compared to ‘>‘.
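
For example, to append each run of the script to a log file (the log file name here is just an illustration):

# ./Server-Health.sh >> /root/server-health.log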

Download Server-Health.sh

Script 5: Check Disk Space and Sends an Email Alert

How about getting an email when disk usage in partition PART is bigger than the maximum allowed? With a little modification it is a life-saver script for web administrators.

#!/bin/bash
MAX=95
EMAIL=USER@domain.com
PART=sda1
USE=`df -h |grep $PART | awk '{ print $5 }' | cut -d'%' -f1`
if [ $USE -gt $MAX ]; then
  echo "Percent used: $USE" | mail -s "Running out of disk space" $EMAIL
fi

Note: Replace “USER@domain.com” with your own email address. You can check mail using the ‘mail‘ command.
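
To run the check automatically, you could schedule the script with cron. For example, the following crontab entry (assuming the script is saved as /root/Check-Disk-Space.sh and made executable) runs it every day at 7 AM:

# crontab -e
0 7 * * * /root/Check-Disk-Space.sh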

Download Check-Disk-Space.sh

Script writing and programming are beyond boundaries; anything and everything can be implemented as required. That's all for now. In my very next article I will be giving you some different flavors of scripting. Till then stay cool and tuned, and enjoy.

Sailing Through The World of Linux BASH Scripting – Part III

The previous articles of the ‘Shell Scripting‘ series were highly appreciated, and hence I am writing this article to extend the never-ending process of learning.

Basic Shell Scripting Part-3

Basic Shell Scripting Part-3

  1. Understand Basic Linux Shell Scripting Language Tips – Part I
  2. 5 Shell Scripts for Linux Newbies to Learn Shell Programming – Part II
Bash Keywords

A keyword is a word or symbol that has a special meaning to a computer language. The following symbols and words have special meanings to Bash when they are unquoted and the first word of a command.

! 			esac 			select 		} 
case 			fi 			then 		[[ 
do 			for 			until 		]] 
done 			function 		while 		elif
if 			time 			else 		in 		{

Unlike most computer languages, Bash allows keywords to be used as variable names, even though this can make scripts difficult to read. To keep scripts understandable, keywords should not be used for variable names.

Command substitution is written in the shell as $(command). You might have to include the full path of the command, e.g., $(/bin/date), for correct execution.

You may find the path of a specific program using the ‘whereis‘ command, e.g., whereis date:

[root@tecmint /]# whereis date
date: /bin/date /usr/share/man/man1/date.1.gz
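
As a small illustration of command substitution in a script:

#!/bin/bash
# store the output of the date command in a variable and print it
TODAY=$(/bin/date)
echo "Today is: $TODAY"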

That's enough theory for now; let's come to the scripts.

Move Current Working Directory

Move from the current working directory to any level up by just providing a numerical value at the end of the script while executing it.

#! /bin/bash 
LEVEL=$1 
for ((i = 1; i <= LEVEL; i++)) 
do 
CDIR=../$CDIR 
done 
cd $CDIR 
echo "You are in: "$PWD 
exec /bin/bash

Save the above codes as “up.sh“, on your desktop. Make it executable (chmod 755 up.sh). Run:

./up.sh 2 (will move the current working directory two levels up).
./up.sh 4 (will move the current working directory four levels up).

Use and Area of Application

In larger projects, which contain folders inside folders holding libraries, binaries, icons, executables, etc. at different locations, you as a developer can implement this script to move to the desired location in a very automated fashion.

Note: ‘for‘ is a loop in the above script and it will continue to execute as long as the loop condition holds for the given value.

Sample Output
[root@tecmint /]# chmod 755 up
[root@tecmint /]# ./up.sh 2
You are in: /

[root@tecmint /]# ./up.sh 4 
You are in: / 

[root@tecmint /]#

Download up.sh

Create a Random File or Folder

Create a random file (folder) with no chance of duplication.

#! /bin/bash

echo "Hello $USER";
echo "$(uptime)" >> "$(date)".txt
echo "Your File is being saved to $(pwd)"

This is a simple script, but its working is not that simple.

  1. ‘echo‘ : Prints everything written within the quotes.
  2. ‘$‘ : Denotes a shell variable.
  3. ‘>>‘ : The output is appended to a file named after the output of the date command, with a .txt extension.

We know the output of the date command is the date and time in hours, minutes and seconds, along with the year. Hence we get output with an organised file name, without the chance of filename duplication. It can be very useful when a user needs files created with a time stamp for future reference.

Sample Output
[root@tecmint /]# ./randomfile.sh  
Hello server 
Your File is being saved to /home/server/Desktop

You can view the file which is created on desktop with Today’s Date and current time.

[root@tecmint /]# nano Sat\ Jul\ 20\ 13\:51\:52\ IST\ 2013.txt 
13:51:52 up  3:54,  1 user,  load average: 0.09, 0.12, 0.08

A more detailed implementation of the above script is given below, which works on the above principle and is very useful in gathering the network information of a Linux server.

Download randomfile.sh

Script to Collect Network Information

This script gathers network information on a Linux server. The script is too large, so it's not possible to post the whole code and the output of the script here. It's better to download the script using the download link below and test it yourself.

Note: You might need to install the lsb-core package and other required packages and dependencies; install them with apt or yum. Obviously, you need to be root to run the script, because most of the commands used here must be run as root.

Sample Output
[root@tecmint /]# ./collectnetworkinfo.sh  

The Network Configuration Info Written To network.20-07-13.info.txt. Please email this file to your_name@service_provider.com.

You can change the email address in the script to have the file mailed to you. The automatically generated file can then be viewed.

Download collectnetworkinfo.sh

Script to Convert UPPERCASE to lowercase

A script that converts UPPERCASE to lowercase and redirects the output to a text file “small.txt”, which can be modified as required.

#!/bin/bash 

echo -n "Enter File Name : " 
read fileName 

if [ ! -f "$fileName" ]; then 
  echo "Filename $fileName does not exist" 
  exit 1 
fi 

tr '[A-Z]' '[a-z]' < "$fileName" >> small.txt

The above script can convert the case of a file of any length from uppercase to lowercase in one go, and vice-versa if required, with a little modification.
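
For the reverse conversion (lowercase to UPPERCASE), only the tr line needs to change; the output file name "big.txt" below is just an example:

tr '[a-z]' '[A-Z]' < "$fileName" >> big.txt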

Sample Output
[root@tecmint /]# ./convertlowercase.sh  
Enter File Name : a.txt 

Initial File: 
A
B
C
D
E
F
G
H
I
J
K
...

New File (small.txt) output:

a
b
c
d
e
f
g
h
i
j
k
...

Download convertlowercase.sh

Simple Calculator Program

#! /bin/bash 
clear 
sum=0 
i="y" 

echo " Enter one no." 
read n1 
echo "Enter second no." 
read n2 
while [ $i = "y" ] 
do 
echo "1.Addition" 
echo "2.Subtraction" 
echo "3.Multiplication" 
echo "4.Division" 
echo "Enter your choice" 
read ch 
case $ch in 
    1)sum=`expr $n1 + $n2` 
     echo "Sum ="$sum;; 
        2)sum=`expr $n1 - $n2` 
     echo "Sub = "$sum;; 
    3)sum=`expr $n1 \* $n2` 
     echo "Mul = "$sum;; 
    4)sum=`expr $n1 / $n2` 
     echo "Div = "$sum;; 
    *)echo "Invalid choice";; 
esac 
echo "Do u want to continue (y/n)) ?" 
read i 
if [ $i != "y" ] 
then 
    exit 
fi 
done
Sample Output
[root@tecmint /]# ./simplecalc.sh 

Enter one no. 
12 
Enter second no. 
14 
1.Addition 
2.Subtraction 
3.Multiplication 
4.Division 
Enter your choice 
1 
Sum =26 
Do u want to continue (y/n)) ? 
y
1.Addition 
2.Subtraction 
3.Multiplication 
4.Division 
Enter your choice 
3 
Mul = 168
Do u want to continue (y/n)) ? 
n

Download simplecalc.sh

So, did you see how easy it is to create a powerful calculation program in such a simple way? It's not the end. We will be coming up with at least one more article in this series, covering a broader perspective from the administration point of view.

That's all for now. Being the reader and the best critic, don't forget to tell us how much you enjoyed this article and what you want to see in future articles. Any question is highly welcome in the comments. Till then stay healthy, safe and tuned. Like and share us and help us spread.

Source

How to Change the SSH Port in Linux

By default, SSH listens on port 22. Changing the default SSH port adds an extra layer of security to your server by reducing the risk of automated attacks.

Instead of changing the port, it is often simpler and more secure to configure your firewall to allow access to port 22 only from specific hosts.

This tutorial explains how to change the default SSH port in Linux. We will also show you how to configure your firewall to allow access to the new SSH port.

Changing the SSH Port

Follow the steps below to change the SSH Port on your Linux system:

1. Choosing a New Port Number

In Linux, port numbers below 1024 are reserved for well-known services and can only be bound to by root. Although you can use a port within the 1-1024 range for the SSH service, to avoid issues with port allocation in the future it is recommended to choose a port above 1024.

In this example we will change the SSH port to 5522; you can choose any port you like.

2. Adjusting Firewall

Before changing the SSH port, first you’ll need to adjust your firewall to allow traffic on the new SSH port.

If you are using UFW, the default firewall configuration tool for Ubuntu, run the following command to open the new SSH port:
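
sudo ufw allow 5522/tcp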

In CentOS the default firewall management tool is FirewallD. To open the new port run the following commands:

sudo firewall-cmd --permanent --zone=public --add-port=5522/tcp
sudo firewall-cmd --reload

CentOS users will also need to adjust the SELinux rules to allow the new SSH port:

sudo semanage port -a -t ssh_port_t -p tcp 5522

If you are using iptables as your firewall, the following command will open the new SSH port:

sudo iptables -A INPUT -p tcp --dport 5522 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

3. Editing the SSH Configuration

Open the SSH configuration file /etc/ssh/sshd_config with your text editor:

sudo nano /etc/ssh/sshd_config

Search for the line starting with Port 22. In most cases, this line will start with a hash #. Remove the hash # and enter your new SSH port number that will be used instead of the standard SSH port 22.

/etc/ssh/sshd_config
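
Using the example port 5522 from this tutorial, the uncommented line should read:

Port 5522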

Be extra careful when modifying the SSH configuration file. The incorrect configuration may cause the SSH service to fail to start.

Once you are done save the file and restart the SSH service to apply the changes:

sudo systemctl restart ssh

In CentOS the ssh service is named sshd:

sudo systemctl restart sshd

To verify that the SSH daemon is listening on the new port 5522, type:
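
For example, using the ss utility:

ss -an | grep 5522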

The output should look something like this:

tcp LISTEN 0 128 0.0.0.0:5522 0.0.0.0:*
tcp ESTAB 0 0 192.168.121.108:5522 192.168.121.1:57638
tcp LISTEN 0 128 [::]:5522 [::]:*

Using the New SSH Port

Now that you have changed the SSH port, when logging in to the remote machine you'll need to specify the new port.

Use the -p <port_number> option to specify the port:
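
For example (replace username and remote_host with your own values):

ssh -p 5522 username@remote_host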

Conclusion

In this tutorial, you have learned how to change the SSH port on your Linux server. You may also want to set up SSH key-based authentication and connect to your Linux servers without entering a password.

If you are regularly connecting to multiple systems, you can simplify your workflow by defining all of your connections in the SSH config file.

If you have any question or feedback feel free to leave a comment.

Source

Linux Today – 7 pieces of contrarian Kubernetes advice

Kubernetes

You can find many great resources for getting smarter about Kubernetes out there. (Ahem, we’ve written a few ourselves.)

That’s good news for IT teams and professionals looking to boost their knowledge and consider how Kubernetes might solve problems in their organization. The excited chatter about Kubernetes has gotten so loud, however, that it can become difficult to make sense of it all. Moreover, it can be challenging to sort the actual business and technical benefits from the sales pitches.

[ Need to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]

So we asked several IT leaders and Kubernetes users to share some advice that goes against the grain.

If you always take conventional wisdom – or straight-up hype – at face value, you’re bound to be disappointed at some point. So consider these bits of contrarian thinking as another important dimension of your Kubernetes learning.

1. Don’t treat Kubernetes as a silver bullet

Interest in Kubernetes is astronomical for good reason: It’s a powerful tool when properly used. But if you treat Kubernetes as a cure-all for anything and everything that ails your applications and infrastructure, expect new challenges ahead.

“Kubernetes is not a silver bullet for all solutions,” says Raghu Kishore Vempati, principal systems engineer at Aricent. “Understand and use it carefully.”

Indeed, the spotlight on Kubernetes has grown so bright as to suggest that it’s some kind of IT sorcery: Just put anything and everything in containers, deploy ’em to production, and let Kubernetes handle the rest while you plan your next vacation.

Even if you’re more realistic about it, it may be tempting to assume Kubernetes will automatically solve existing issues with, say, your application design. It won’t. (Even Kubernetes’ original co-creators agree with this.) Focus on what it’s good at rather than trying to use it as a blanket solution.

“Containers and Kubernetes provide an opportunity to create solutions that previously would have required a lot of effort and code plumbing with higher costs,” Vempati says. “While Kubernetes can provide orchestration, it doesn’t solve any of the inherent design problems or limitations of the applications hosted on it. In short, application overheads cannot be addressed using Kubernetes.”

[ Related read: Getting started with Kubernetes: 5 misunderstandings, explained. ]

2. You don’t have to immediately refactor everything for microservices

Microservices and containers pair well together, so it’s reasonable to assume that Kubernetes and containerized microservices are a good match, too.

“Kubernetes is ideal for new and refactored applications that don’t carry the baggage – and requirements – of traditional and monolithic applications,” says Ranga Rajagopalan, CTO and cofounder at Avi Networks.

Just don’t mistake the ideal scenario as the only scenario.

“The conventional wisdom is to refactor or rewrite your monoliths before deploying them within a Kubernetes environment,” Rajagopalan says. “However, this can be a massive undertaking that risks putting your team in analysis paralysis.”

Rajagopalan notes that you can indeed run a monolith in a container and then begin to incrementally break off pieces of the application as microservices, rather than trying to do everything at once.

“This can jumpstart your modernization efforts and deliver value well before the application has been completely refactored,” Rajagopalan says. “You don’t have to be a purist about microservices.”

Vempati concurs, noting that there might be some legacy applications that you never refactor because the costs outweigh the benefits.

3. Account for feature differences on public clouds

One of the overarching appeals of containers is greater portability among environments, especially as multi-cloud and hybrid cloud environments proliferate. Indeed, Vempati notes that it’s a common scenario for a team to deploy Kubernetes clusters on public cloud platforms. But you can’t always assume “vendor-neutral” as a default.

“The native Kubernetes capabilities on public clouds differ. It is important to understand the features and workflows carefully,” Vempati advises. “All the key public cloud service providers provide native Kubernetes cluster services. While these are much similar, there will be certain features that are different. When designing the solution with an aim to keep it vendor-neutral, while still choosing one of the vendors, such differences must be taken into account.”

Vempati shares as an example that one public cloud provider will assign public (or external) IPs to nodes when a cluster is created with its Kubernetes service, while another does not. “So if there is any dynamic behavior of the apps to infer/use external IP, it may work with [one platform] but not on [another],” Vempati says.

4. It will take time to get automation right

A basic selling point of Kubernetes and orchestration in general is that it makes manageable the operational burden of running production containers at scale, largely through automation. So this is one of those times where it’s best to be reminded that “automation” is not a synonym of “easy,” especially as you’re setting it up. Expect some real effort to get it right, and you’ll get a return on that investment over time.

“Kubernetes is a wonderful platform for building highly scalable and elastic solutions,” Vempati says. “One of the key [selling points] of this platform is that it very effectively supports continuous delivery of microservices hosted in containers for cloud scale.”

This sounds great to any IT team working in multi-cloud or hybrid cloud environments, especially as their cloud-native development increases. Just be ready to do some legwork to reap the benefits.

“To support automated continuous delivery for any Kubernetes-based solution is not [as] simple as it may [first] appear,” Vempati says. “It involves a lot of preparation, simulation of multiple scenarios based on the solution, and several iterations to achieve the [desired results.]”

Red Hat VP and CTO Chris Wright recently wrote about four emerging tools that play into this need for simplification. Read also What’s next for Kubernetes and hybrid cloud.

5. Be judicious in your use of persistent volumes

The original conventional wisdom that containers should be stateless has changed, especially as it has become easier to manage stateful applications such as databases.

[ Read also How to explain Kubernetes Operators in plain English. ]

Persistent volumes are the Kubernetes abstraction that enables storage for stateful applications running in containers. In short, that’s a good thing. But Vempati urges careful attention to avoid longer-term issues, especially if you’re in the earlier phases of adoption.

“The use of persistent volumes in Kubernetes for data-centric apps is a common scenario,” Vempati says. “Understand the storage primitives available so that using persistent volumes doesn’t spike costs.”

Factors such as the type of storage you’re using can lead to cost surprises, especially when PVs are created dynamically in Kubernetes. Vempati offers an example scenario:

“Persistent volumes and their claims can be dynamically created as part of a deployment for a pod. Verify the storage class to make sure that the right kind of storage is used,” Vempati advises. “SSDs on public cloud will have a higher cost associated with them, compared to a standard storage.”

6. Don’t evangelize Kubernetes by shouting “Kubernetes!”

If you’re trying to make the case for Kubernetes in your organization, it may be tempting to simply ride the surging wave of excitement around it.

“The common wisdom out there is that simply mentioning Kubernetes is enough to gain someone’s interest in how you are using it to solve a particular problem,” says Glenn Sullivan, co-founder at SnapRoute, which uses Kubernetes as part of its cloud-native networking software’s underlying infrastructure. “My advice would be to spend less time pointing to Kubernetes as the answer and piggybacking on the buzz that surrounds the platform and focus more on the results of using Kubernetes. When you present Kubernetes as the solution for solving a problem, you will immediately elate some [people] and alienate others.”

One reason they might resist it or tune out is that they simply don’t understand what Kubernetes is. You can explain it to them in plain terms, but Sullivan says the lightbulb moment – and subsequent buy-in – is more likely to occur when you show them the results.

“We find it more advantageous to promote the [benefits] gained from using Kubernetes instead of presenting the integration into Kubernetes itself as the value-add,” Sullivan says.

7. Kubernetes is not an island

Kubernetes is an important piece of the cloud-native puzzle, but it’s only one piece. As Red Hat technology evangelist Gordon Haff notes, “The power of the open source cloud-native ecosystem comes only in part from individual projects such as Kubernetes. It derives, perhaps even more, from the breadth of complementary projects that come together to create a true cloud-native platform.”

This includes service meshes like  Istio, monitoring tools like Prometheus, command-line tools like Podman, distributed tracing from the likes of Jaeger and Kiali, enterprise registries like Quay, and inspection utilities like Skopeo, says Haff. And, of course, Linux, which is the foundation for the containers orchestrated by Kubernetes.

[ Want to learn more about automation and building cloud-native apps? Get the guide: Principles of container-based application design. ]

Source

Introduction to RAID, Concepts of RAID and RAID Levels in Linux

RAID stands for Redundant Array of Inexpensive Disks, though nowadays it is usually called Redundant Array of Independent Disks. Earlier it used to be very costly to buy even a small disk, but nowadays we can buy a large disk for the same amount as before. RAID is simply a collection of disks in a pool that together form a logical volume.

RAID in Linux

Understanding RAID Setups in Linux

RAID is organised in groups, sets or arrays: a combination of drives forms a group of disks known as a RAID array or RAID set. A minimum of 2 disks connected to a RAID controller is needed to make a logical volume, and more drives can be added to a group. Only one RAID level can be applied to a group of disks. RAID is used when we need excellent performance; depending on the selected RAID level, performance will differ. It also protects our data through fault tolerance and high availability.

This series, covering the setup of RAID through Parts 1-9, includes the following topics.

Part 1Introduction to RAID, Concepts of RAID and RAID Levels
Part 2: How to setup RAID0 (Stripe) in Linux
Part 3: How to setup RAID1 (Mirror) in Linux
Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux
Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux
Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux
Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid
Part 8: How to Recover Data and Rebuild Failed Software RAID’s
Part 9: How to Manage Software RAID’s in Linux with ‘Mdadm’ Tool

This is Part 1 of a 9-tutorial series; here we will cover the introduction to RAID, the concepts of RAID and the RAID levels that are required for setting up RAID in Linux.

Software RAID and Hardware RAID

Software RAID has lower performance, because it consumes resources from the host. The RAID software needs to be loaded before data can be read from software RAID volumes, so the OS must boot first to load it. No physical hardware is needed for software RAID, making it a zero-cost investment.

Hardware RAID has high performance. A dedicated RAID controller is physically built using PCI Express cards, so it doesn't use host resources. It has NVRAM cache for reads and writes, and it preserves the cache during a rebuild even if there is a power failure, keeping it alive with battery backup. A very costly investment is needed for a large-scale setup.

Hardware RAID Card will look like below:

Hardware RAID

Hardware RAID

Featured Concepts of RAID

  1. Parity is the method by which RAID regenerates lost content from the saved parity information. RAID 5 and RAID 6 are based on parity.
  2. Stripe means spreading data across multiple disks; no single disk holds the full data. For example, with 2 disks, half of our data will be on each disk.
  3. Mirroring is used in RAID 1 and RAID 10. Mirroring makes a copy of the same data; in RAID 1 the same content is saved to the other disk too.
  4. Hot spare is just a spare drive in our server which can automatically replace a failed drive. If any one of the drives in our array fails, this hot spare drive will be used and the array rebuilt automatically.
  5. Chunks are just blocks of data, which can be as small as 4KB or larger. By defining the chunk size we can tune the I/O performance (see the example command after this list).
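
For instance, when creating an array with mdadm (covered in detail in the later parts of this series), the chunk size can be set with the --chunk option; the device names below are only placeholders:

# mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1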

RAID comes in various levels. Here we will see only the RAID levels which are mostly used in real environments.

  1. RAID0 = Striping
  2. RAID1 = Mirroring
  3. RAID5 = Single Disk Distributed Parity
  4. RAID6 = Double Disk Distributed Parity
  5. RAID10 = Combine of Mirror & Stripe. (Nested RAID)

RAID is managed using the mdadm package in most Linux distributions. Let us take a brief look at each RAID level.

RAID 0 (or) Striping

Striping has excellent performance. In RAID 0 (striping) the data is written to the disks using a shared method: half of the content goes to one disk and the other half is written to the other disk.

Let us assume we have 2 disk drives. For example, if we write the data “TECMINT” to the logical volume, ‘T‘ will be saved on the first disk, ‘E‘ on the second disk, ‘C‘ on the first disk, ‘M‘ on the second disk again, and so on in a round-robin process.

In this situation, if any one of the drives fails we will lose our data, because the half of the data left on the other disk cannot be used to rebuild the RAID. But in terms of write speed and performance, RAID 0 is excellent. We need a minimum of 2 disks to create a RAID 0 (striping). If your data is valuable, don't use this RAID level.

  1. High Performance.
  2. There is Zero Capacity Loss in RAID 0
  3. Zero Fault Tolerance.
  4. Both write and read performance will be good.

RAID 1 (or) Mirroring

Mirroring has good performance. Mirroring makes a copy of the same data we have. Assuming we have two 2TB hard drives, we have 4TB in total, but when the drives are behind the RAID controller to form a logical drive, we can only see a 2TB logical drive.

While we save any data, it is written to both 2TB drives. A minimum of two drives is needed to create a RAID 1 or mirror. If a disk failure occurs, we can restore the RAID set by replacing the failed disk with a new one. If any one of the disks fails in RAID 1, we can get the data from the other one, as there is a copy of the same content on the other disk. So there is zero data loss.

  1. Good Performance.
  2. Here Half of the Space will be lost in total capacity.
  3. Full Fault Tolerance.
  4. Rebuild will be faster.
  5. Writing Performance will be slow.
  6. Reading will be good.
  7. Can be used for operating systems and database for small scale.

RAID 5 (or) Distributed Parity

RAID 5 is mostly used at the enterprise level. RAID 5 works with a distributed parity method: parity information is used to rebuild data from the information left on the remaining good drives. This protects our data from drive failure.

Assume we have 4 drives; if one drive fails, we can rebuild the replacement drive from the parity information while we replace the failed drive. Parity information is stored across all 4 drives: if we have 4 hard drives of 1TB each, roughly 256GB of each drive will hold parity information and the other 768GB of each drive will be available for users. RAID 5 can survive a single drive failure; if more than one drive fails, it will cause loss of data.

  1. Excellent Performance.
  2. Read speed will be extremely good.
  3. Write speed will be average, and slow if we don't use a hardware RAID controller.
  4. Rebuild from the parity information on all drives.
  5. Full Fault Tolerance.
  6. The space of 1 disk will be used for parity.
  7. Can be used in file servers, web servers, very important backups.

RAID 6 Two Parity Distributed Disk

RAID 6 is the same as RAID 5 but with two distributed parity sets. It is mostly used with a large number of disks. We need a minimum of 4 drives; even if 2 drives fail we can rebuild the data while replacing them with new drives.

It is slower than RAID 5, because it writes data to all drives at the same time; speed will be average when using a hardware RAID controller. If we have 6 hard drives of 1TB each, 4 drives' worth of space will be used for data and 2 drives' worth for parity.

  1. Poor Performance.
  2. Read Performance will be good.
  3. Write performance will be poor if we are not using a hardware RAID controller.
  4. Rebuild from 2 parity drives.
  5. Full Fault Tolerance.
  6. The space of 2 disks will be used for parity.
  7. Can be used in large arrays.
  8. Can be used for backups, video streaming and other large-scale purposes.

RAID 10 (or) Mirror & Stripe

RAID 10 can be called 1+0 or 0+1. It does the work of both mirroring and striping. In RAID 10, mirroring is done first and striping second; in RAID 01, striping is first and mirroring second. RAID 10 is better compared to 01.

Assume we have 4 drives. When I write some data to my logical volume, it will be saved across all 4 drives using the mirror and stripe methods.

If I write the data “TECMINT” to RAID 10, it saves the data as follows: first “T” is written to both disks, then “E” is written to both disks, and this step is used for every data write. It makes a copy of every piece of data on the other disk too.

At the same time it uses the RAID 0 method and writes the data as follows: “T” is written to the first disk and “E” to the second disk, then “C” to the first disk again and “M” to the second.

  1. Good read and write performance.
  2. Here Half of the Space will be lost in total capacity.
  3. Fault Tolerance.
  4. Fast rebuild from copying data.
  5. Can be used in Database storage for high performance and availability.

Conclusion

In this article we have seen what RAID is and which RAID levels are mostly used in real environments. We hope this write-up has given you the basic knowledge about RAID that is needed before setting it up.

In the upcoming articles I'm going to cover how to set up and create RAID arrays using the various levels, grow an existing RAID group (array), troubleshoot failed drives and much more.

Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2

RAID stands for Redundant Array of Inexpensive Disks; it is used for high availability and reliability in large-scale environments, where data needs to be better protected than in normal use. RAID is just a collection of disks in a pool that forms a logical volume and contains an array; a combination of drives makes an array, also called a set or group.

RAID can be created if there are a minimum of 2 disks connected to a RAID controller forming a logical volume, and more drives can be added to the array according to the defined RAID level. RAID set up without physical hardware is called software RAID; software RAID is also nicknamed "poor man's RAID".

Setup RAID0 in Linux

Setup RAID0 in Linux

The main purpose of using RAID is to protect data from a single point of failure. If we use a single disk to store data and it fails, there is no chance of getting our data back; to stop such data loss we need a fault-tolerance method, and that is why we use a collection of disks to form a RAID set.

What is Stripe in RAID 0?

Striping means writing data across multiple disks at the same time by dividing the contents. Assume we have two disks: if we save content to the logical volume, it is saved on both physical disks by dividing the content between them. RAID 0 is used for better performance, but we can't recover the data if one of the drives fails, so it isn't a good practice to use RAID 0 for valuable data. If you use RAID 0 volumes, for example for the operating system, keep your important files safe elsewhere.

  1. RAID 0 has High Performance.
  2. Zero Capacity Loss in RAID 0. No Space will be wasted.
  3. Zero Fault Tolerance ( Can’t get back the data if any one of disk fails).
  4. Write and Reading will be Excellent.

Requirements

The minimum number of disks required to create RAID 0 is 2, and you can add more disks to the array. If you have a physical RAID card with enough ports, you can add even more disks.

Here we are not using hardware RAID; this setup depends only on software RAID. If we have a physical hardware RAID card we can access it from its utility UI; some motherboards have a built-in RAID feature whose UI can be accessed using the Ctrl+I keys.

If you're new to RAID setups, please read our earlier article, where we've covered a basic introduction to RAID.

  1. Introduction to RAID and RAID Concepts
My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 :	192.168.0.225
Two Disks	 :	20 GB each

This article is Part 2 of a 9-tutorial RAID series; in this part, we are going to see how to create and set up software RAID 0 (striping) on Linux systems or servers using two 20GB disks named sdb and sdc.

Step 1: Updating System and Installing mdadm for Managing RAID

1. Before setting up RAID0 in Linux, let’s do a system update and then install ‘mdadm‘ package. The mdadm is a small program, which will allow us to configure and manage RAID devices in Linux.

# yum clean all && yum update
# yum install mdadm -y

install mdadm in linux

Install mdadm Tool

Step 2: Verify Attached Two 20GB Drives

2. Before creating RAID 0, make sure that the two attached hard drives are detected, using the following command.

# ls -l /dev | grep sd

Check Hard Drives in Linux

Check Hard Drives

3. Once the new hard drives are detected, it's time to check whether the attached drives are already part of any existing RAID, with the help of the following ‘mdadm’ command.

# mdadm --examine /dev/sd[b-c]

Check RAID Devices in Linux

Check RAID Devices

From the above output, we learn that no RAID has been applied to the sdb and sdc drives yet.

Step 3: Creating Partitions for RAID

4. Now create partitions on sdb and sdc for RAID, with the help of the following fdisk command. Here, I will show how to create a partition on the sdb drive.

# fdisk /dev/sdb

Follow below instructions for creating partitions.

  1. Press ‘n‘ for creating new partition.
  2. Then choose ‘P‘ for Primary partition.
  3. Next select the partition number as 1.
  4. Give the default value by just pressing two times Enter key.
  5. Next press ‘P‘ to print the defined partition.

Create Partitions in Linux

Create Partitions

Follow below instructions for creating Linux raid auto on partitions.

  1. Press ‘L‘ to list all available types.
  2. Type ‘t‘ to change the partition type.
  3. Choose ‘fd‘ for Linux raid auto and press Enter to apply.
  4. Then again use ‘P‘ to print the changes we have made.
  5. Use ‘w‘ to write the changes.

Create RAID Partitions

Create RAID Partitions in Linux

Note: Please follow same above instructions to create partition on sdc drive now.

5. After creating the partitions, verify that both drives are correctly defined for RAID using the following commands.

# mdadm --examine /dev/sd[b-c]
# mdadm --examine /dev/sd[b-c]1

Verify RAID Partitions

Verify RAID Partitions

Step 4: Creating RAID md Devices

6. Now create md device (i.e. /dev/md0) and apply raid level using below command.

# mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
# mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
  1. -C – create
  2. -l – level
  3. -n – No of raid-devices

7. Once the md device has been created, verify the status of the RAID level, devices and array used, with the help of the following series of commands as shown.

# cat /proc/mdstat

Verify RAID Level

Verify RAID Level

# mdadm -E /dev/sd[b-c]1

Verify RAID Device

Verify RAID Device

# mdadm --detail /dev/md0

Verify RAID Array

Verify RAID Array

Step 5: Assigning RAID Devices to Filesystem

8. Create an ext4 filesystem for the RAID device /dev/md0 and mount it under /mnt/raid0.

# mkfs.ext4 /dev/md0

Create ext4 Filesystem in Linux

Create ext4 Filesystem

9. Once the ext4 filesystem has been created for the RAID device, create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it.

# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0/

10. Next, verify that the device /dev/md0 is mounted under /mnt/raid0 directory using df command.

# df -h

11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to the created file and view the content of the file and the directory listing.

# touch /mnt/raid0/tecmint.txt
# echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
# cat /mnt/raid0/tecmint.txt
# ls -l /mnt/raid0/

Verify Mount Device

Verify Mount Device

12. Once you’ve verified mount points, it’s time to create an fstab entry in /etc/fstab file.

# vim /etc/fstab

Add the following entry as described. It may vary according to your mount location and the filesystem you are using.

/dev/md0                /mnt/raid0              ext4    defaults         0 0

Add Device to Fstab

Add Device to Fstab

13. Run mount with the ‘-a‘ flag to check for any errors in the fstab entry.

# mount -av

Check Errors in Fstab

Check Errors in Fstab

Step 6: Saving RAID Configurations

14. Finally, save the RAID configuration to a file to keep the configuration for future use. Again we use the ‘mdadm’ command with the ‘-s‘ (scan) and ‘-v‘ (verbose) options as shown.

# mdadm -E -s -v >> /etc/mdadm.conf
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# cat /etc/mdadm.conf

Save RAID Configurations

Save RAID Configurations

That's it; we have seen here how to configure RAID 0 (striping) using two hard disks. In the next article, we will see how to set up RAID 1 (mirroring).

Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3

RAID mirroring means writing an exact clone (or mirror) of the same data to two drives. A minimum of two disks is required in an array to create RAID 1, and it is useful when read performance and reliability matter more than the data storage capacity.

Create Raid1 in Linux

Setup Raid1 in Linux

Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk, and the failed drive can be replaced in the running computer without any user interruption.

Features of RAID 1

  1. Mirror has Good Performance.
  2. 50% of the space will be lost. That means if we have two disks of 500GB each, 1TB in total, mirroring will only show us 500GB.
  3. No data loss in Mirroring if one disk fails, because we have the same content in both disks.
  4. Reading will be good than writing data to drive.

Requirements

A minimum of two disks is required to create RAID 1, and you can add more disks in even numbers (2, 4, 6, 8). To add more disks, your system must have a physical RAID adapter (hardware card).

Here we're using software RAID, not hardware RAID; if your system has an inbuilt physical hardware RAID card, you can access it from its utility UI or by using the Ctrl+I key.

Read Also: Basic Concepts of RAID in Linux

My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 :	192.168.0.226
Hostname	 :	rd1.tecmintlocal.com
Disk 1 [20GB]	 :	/dev/sdb
Disk 2 [20GB]	 :	/dev/sdc

This article will guide you through step-by-step instructions on how to set up software RAID 1 (mirror) using mdadm (which creates and manages RAID) on a Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.

Step 1: Installing Prerequisites and Examine Drives

1. As I said above, we're using the mdadm utility for creating and managing RAID in Linux. So, let's install the mdadm software package on Linux using the yum or apt-get package manager tool.

# yum install mdadm		[on RedHat systems]
# apt-get install mdadm 	[on Debian systems]

2. Once the ‘mdadm‘ package has been installed, we need to examine our disk drives for any already configured RAID using the following command.

# mdadm -E /dev/sd[b-c]

Check RAID on Disks

Check RAID on Disks

As you can see from the above screen, there is no super-block detected yet, which means no RAID is defined.

Step 2: Drive Partitioning for RAID

3. As I mentioned above, we're using the two drives /dev/sdb and /dev/sdc for creating RAID 1. Let's create partitions on these two drives using the ‘fdisk‘ command and change the type to RAID during partition creation.

# fdisk /dev/sdb
Follow the instructions below:
  1. Press ‘n‘ to create a new partition.
  2. Then choose ‘P‘ for Primary partition.
  3. Next select the partition number as 1.
  4. Give the default full size by just pressing the Enter key twice.
  5. Next press ‘p‘ to print the defined partition.
  6. Press ‘t‘ to change the partition type.
  7. Press ‘L‘ to list all available types.
  8. Choose ‘fd‘ for Linux raid auto and press Enter to apply.
  9. Then again use ‘p‘ to print the changes we have made.
  10. Use ‘w‘ to write the changes.

Create Disk Partitions

Create Disk Partitions

After the ‘/dev/sdb‘ partition has been created, follow the same instructions to create a new partition on the /dev/sdc drive.

# fdisk /dev/sdc

Create Second Partitions

Create Second Partitions

4. Once both partitions are created successfully, verify the changes on both the sdb & sdc drives using the same ‘mdadm‘ command and also confirm the RAID type as shown in the following screen grabs.

# mdadm -E /dev/sd[b-c]

Verify Partitions Changes

Verify Partitions Changes

Check RAID Type

Check RAID Type

Note: As you can see in the above picture, there is no RAID defined on the sdb1 and sdc1 partitions so far; that is why we see no super-blocks detected.

Step 3: Creating RAID1 Devices

5. Next, create the RAID 1 device called ‘/dev/md0‘ using the following command and verify it.

# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
# cat /proc/mdstat

Create RAID Device

Create RAID Device

6. Next, check the RAID device type and the RAID array using the following commands.

# mdadm -E /dev/sd[b-c]1
# mdadm --detail /dev/md0

Check RAID Device type

Check RAID Device type

Check RAID Device Array

Check RAID Device Array

From the above pictures, one can easily see that RAID 1 has been created using the /dev/sdb1 and /dev/sdc1 partitions, and you can also see the status as resyncing.

Step 4: Creating File System on RAID Device

7. Create an ext4 file system on md0 and mount it under /mnt/raid1.

# mkfs.ext4 /dev/md0

Create RAID Device Filesystem

Create RAID Device Filesystem

8. Next, mount the newly created filesystem under ‘/mnt/raid1‘, create some files, and verify the contents under the mount point.

# mkdir /mnt/raid1
# mount /dev/md0 /mnt/raid1/
# touch /mnt/raid1/tecmint.txt
# echo "tecmint raid setups" > /mnt/raid1/tecmint.txt

Mount Raid Device

Mount Raid Device

9. To auto-mount RAID1 on system reboot, you need to make an entry in fstab file. Open ‘/etc/fstab‘ file and add the following line at the bottom of the file.

/dev/md0                /mnt/raid1              ext4    defaults        0 0

Raid Automount Device

Raid Automount Device

10. Run ‘mount -a‘ to check whether there are any errors in fstab entry.

# mount -av

Check Errors in fstab

Check Errors in fstab

11. Next, save the RAID configuration manually to the ‘mdadm.conf‘ file using the command below.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Save Raid Configuration

Save Raid Configuration

The above configuration file is read by the system at reboot to load the RAID devices.
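
If the array ever needs to be brought up by hand, the entries saved in this file can typically be used to reassemble it; a minimal sketch (standard mdadm usage, not shown in the original screenshots) would be:

# mdadm --assemble --scan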

Step 5: Verify Data After Disk Failure

12. Our main purpose is that, even after a hard disk fails or crashes, our data remains available. Let's see what happens when one of the disks in the array becomes unavailable.

# mdadm --detail /dev/md0

Raid Device Verify

Raid Device Verify

In the above image, we can see that there are 2 devices available in our RAID and that the Active Devices count is 2. Now let us see what happens when a disk is unplugged (the sdc disk removed) or fails.

# ls -l /dev | grep sd
# mdadm --detail /dev/md0

Test RAID Devices

Test RAID Devices

Now in the above image, you can see that one of our drives is lost. I unplugged one of the drives from my virtual machine. Now let us check our precious data.

# cd /mnt/raid1/
# cat tecmint.txt

Verify RAID Data

Verify RAID Data

As you can see, our data is still available. From this we come to know the advantage of RAID 1 (mirror). In the next article, we will see how to set up RAID 5 striping with distributed parity. Hope this helps you understand how RAID 1 (mirror) works.

Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4

In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means the parity information and the data are split and striped over multiple disks, which gives good data redundancy.

Setup Raid 5 in CentOS

Setup Raid 5 in Linux

This RAID level requires at least three hard drives or more. RAID 5 is used in large-scale production environments because it is cost effective and provides both performance and redundancy.

What is Parity?

Parity is the simplest common method of detecting errors in data storage. Parity information is stored on every disk; say we have 4 disks, then the equivalent of one disk's space is spread across all the disks to store the parity information. If any one of the disks fails, we can still get the data back by rebuilding it from the parity information after replacing the failed disk.
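
As a quick worked example of the capacity cost (our own arithmetic, not tied to a screenshot): with four 500GB disks in RAID 5, the usable space is (4 - 1) x 500GB = 1.5TB, since one disk's worth of space holds the parity; with the three 20GB disks used in this article, it is (3 - 1) x 20GB = 40GB.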

Pros and Cons of RAID 5

  1. Gives better performance.
  2. Supports redundancy and fault tolerance.
  3. Supports hot spare options.
  4. A single disk's worth of capacity is lost to the parity information.
  5. No data loss if a single disk fails; we can rebuild from parity after replacing the failed disk.
  6. Suits transaction-oriented environments, as reading is faster.
  7. Due to the parity overhead, writing is slower.
  8. Rebuilds take a long time.

Requirements

A minimum of 3 hard drives is required to create RAID 5, and you can add more disks only if you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the ‘mdadm‘ package to create the RAID.

mdadm is a package which allows us to configure and manage RAID devices in Linux. By default there is no configuration file for RAID; we must save the configuration manually, after creating and configuring the RAID setup, in a separate file called mdadm.conf.

Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.

  1. Basic Concepts of RAID in Linux – Part 1
  2. Creating RAID 0 (Stripe) in Linux – Part 2
  3. Setting up RAID 1 (Mirroring) in Linux – Part 3
My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 :	192.168.0.227
Hostname	 :	rd5.tecmintlocal.com
Disk 1 [20GB]	 :	/dev/sdb
Disk 2 [20GB]	 :	/dev/sdc
Disk 3 [20GB]	 :	/dev/sdd

This article is Part 4 of a 9-tutorial RAID series; here we are going to set up software RAID 5 with distributed parity on Linux systems or servers using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.

Step 1: Installing mdadm and Verify Drives

1. As we said earlier, we're using the CentOS 6.5 Final release for this RAID setup, but the same steps can be followed on any Linux-based distribution.

# lsb_release -a
# ifconfig | grep inet

Setup Raid 5 in CentOS

CentOS 6.5 Summary

2. If you're following our RAID series, we assume that you've already installed the ‘mdadm‘ package; if not, use the following command according to your Linux distribution to install it.

# yum install mdadm		[on RedHat systems]
# apt-get install mdadm 	[on Debian systems]

3. After the ‘mdadm‘ package installation, let's list the three 20GB disks which we have added to our system using the ‘fdisk‘ command.

# fdisk -l | grep sd

Install mdadm Tool in CentOS

Install mdadm Tool

4. Now it's time to examine the three attached drives for any existing RAID blocks using the following command.

# mdadm -E /dev/sd[b-d]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd

Examine Drives For Raid

Examine Drives For Raid

Note: The above image shows that no super-block has been detected yet, so there is no RAID defined on any of the three drives. Let us start creating one now.

Step 2: Partitioning the Disks for RAID

5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to the RAID, so let us define the partitions using the ‘fdisk’ command before moving on to the next steps.

# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
Create /dev/sdb Partition

Please follow the instructions below to create a partition on the /dev/sdb drive.

  1. Press ‘n‘ to create a new partition.
  2. Then choose ‘P‘ for Primary partition. Here we are choosing Primary because there are no partitions defined yet.
  3. Then choose ‘1‘ to be the first partition. By default it will be 1.
  4. For the cylinder size we don't have to choose anything, because we need the whole disk for RAID, so just press Enter twice to accept the default full size.
  5. Next press ‘p‘ to print the created partition.
  6. Press ‘t‘ to change the partition type; press ‘L‘ if you need to see every available type.
  7. Here, we are selecting ‘fd‘ as the type, since this is for RAID.
  8. Next press ‘p‘ to print the defined partition.
  9. Then again use ‘p‘ to print the changes we have made.
  10. Use ‘w‘ to write the changes.

Create sdb Partition

Create sdb Partition

Note: We have to follow the steps mentioned above to create partitions for sdc & sdd drives too.

Create /dev/sdc Partition

Now partition the sdc and sdd drives by following the steps given in the screenshots, or simply follow the steps above.

# fdisk /dev/sdc

Create sdc Partition

Create sdc Partition

Create /dev/sdd Partition
# fdisk /dev/sdd

Create sdd Partition

Create sdd Partition

6. After creating partitions, check for changes in all three drives sdb, sdc, & sdd.

# mdadm --examine /dev/sdb /dev/sdc /dev/sdd

or

# mdadm -E /dev/sd[b-d]

Check Partition Changes

Check Partition Changes

Note: The above picture depicts that the partition type is fd, i.e. set for RAID.

7. Now check for RAID blocks in the newly created partitions. If no super-blocks are detected, then we can move forward to create a new RAID 5 setup on these drives.

Check Raid on Partition

Check Raid on Partition

Step 3: Creating md device md0

8. Now create a RAID device ‘md0‘ (i.e. /dev/md0) and apply the RAID level across all the newly created partitions (sdb1, sdc1 and sdd1) using the command below.

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

or

# mdadm -C /dev/md0 -l 5 -n 3 /dev/sd[b-d]1

9. After creating the RAID device, check and verify the RAID, the devices included and the RAID level from the mdstat output.

# cat /proc/mdstat

Verify Raid Device

Verify Raid Device

If you want to monitor the current building process, you can use the ‘watch‘ command; just pass ‘cat /proc/mdstat‘ to watch and it will refresh the screen every second.

# watch -n1 cat /proc/mdstat

Monitor Raid Process

Monitor Raid 5 Process

Raid 5 Process Summary

Raid 5 Process Summary

10. After the creation of the RAID, verify the RAID devices using the following command.

# mdadm -E /dev/sd[b-d]1

Verify Raid Level

Verify Raid Level

Note: The output of the above command will be a little long, as it prints the information for all three drives.

11. Next, verify the RAID array to confirm that the devices we've included in the RAID are running and have started to re-sync.

# mdadm --detail /dev/md0

Verify Raid Array

Verify Raid Array

Step 4: Creating file system for md0

12. Create a file system for ‘md0‘ device using ext4 before mounting.

# mkfs.ext4 /dev/md0

Create md0 Filesystem

Create md0 Filesystem

13. Now create a directory under ‘/mnt‘, mount the created filesystem under /mnt/raid5 and check the files under the mount point; you will see the lost+found directory.

# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/

14. Create a few files under the mount point /mnt/raid5 and append some text to one of the files to verify the content.

# touch /mnt/raid5/raid5_tecmint_{1..5}
# ls -l /mnt/raid5/
# echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
# cat /mnt/raid5/raid5_tecmint_1
# cat /proc/mdstat

Mount Raid 5 Device

Mount Raid Device

15. We need to add an entry in fstab, or else the mount point will not come back after a system reboot. To add an entry, edit the fstab file and append the following line as shown below. The mount point may differ according to your environment.

# vim /etc/fstab

/dev/md0                /mnt/raid5              ext4    defaults        0 0

Raid 5 Automount

Raid 5 Automount

16. Next, run the ‘mount -av‘ command to check whether there are any errors in the fstab entry.

# mount -av

Check Fstab Errors

Check Fstab Errors

Step 5: Save Raid 5 Configuration

17. As mentioned earlier in the requirements section, RAID has no config file by default; we have to save it manually. If this step is not followed, the RAID device will not come back as md0 but under some other random name.

So, we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the system reboot and the RAID will be loaded as well.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Save Raid 5 Configuration

Save Raid 5 Configuration

Note: Saving the configuration will keep the array assembled consistently as the md0 device.

Step 6: Adding Spare Drives

18. What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive will become active, the rebuild process will start and the data will be synced from the other disks, so we get redundancy here.

For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read Step 6 and Step 7 in the following article; a minimal sketch of adding a spare is shown after the link.

  1. Add Spare Drive to Raid 5 Setup
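
As a minimal sketch of that step (assuming a prepared spare partition of type fd, here called /dev/sde1, a device name not used in this particular setup), the spare is attached to the running array like this:

# mdadm --add /dev/md0 /dev/sde1
# mdadm --detail /dev/md0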

Conclusion

Here, in this article, we have seen how to set up RAID 5 using three disks. In my upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace it for recovery.

Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

RAID 6 is an upgraded version of RAID 5 with two sets of distributed parity, which provides fault tolerance even after two drives fail. Mission-critical systems remain operational in case of two concurrent disk failures. It is similar to RAID 5 but more robust, because it uses one more disk for parity.

In our earlier article, we've seen distributed parity in RAID 5; in this article we are going to see RAID 6 with double distributed parity. Don't expect extra performance compared to other RAID levels unless you also install a dedicated RAID controller. In RAID 6, even if we lose 2 disks we can get the data back by replacing them with spare drives and rebuilding from parity.

Setup RAID 6 in CentOS

Setup RAID 6 in Linux

To set up RAID 6, a minimum of 4 disks or more in a set is required. While reading, RAID 6 reads from all the drives, so reading is faster, whereas writing is poorer because it has to calculate parity and stripe over multiple disks.

Now, many of us may wonder why we would use RAID 6 when it does not perform as well as other RAID levels. Those who raise this question should know that if they need high fault tolerance, RAID 6 is the choice. In high-availability environments, especially for databases, RAID 6 is used because the database is the most important asset and needs to be kept safe at any cost; it can also be useful for video streaming environments.

Pros and Cons of RAID 6

  1. Performance is good.
  2. RAID 6 is expensive, as it requires two independent drives to be used for parity functions.
  3. Two disks' worth of capacity is lost to the parity information (double parity); see the example after this list.
  4. No data loss, even after two disks fail; we can rebuild from parity after replacing the failed disks.
  5. Reading will be better than in RAID 5, because it reads from multiple disks, but writing performance will be very poor without a dedicated RAID controller.
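
For example (a quick calculation of our own, using the disks from this article): four 20GB disks give 4 x 20GB = 80GB of raw space, but only (4 - 2) x 20GB = 40GB of usable space, since two disks' worth of capacity goes to the double parity.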

Requirements

A minimum of 4 disks is required to create RAID 6. If you want to add more disks, you can, but you should have a dedicated RAID controller. With software RAID we won't get better performance from RAID 6, so a physical RAID controller is recommended.

Those who are new to RAID setups are recommended to go through the RAID articles below.

  1. Basic Concepts of RAID in Linux – Part 1
  2. Creating Software RAID 0 (Stripe) in Linux – Part 2
  3. Setting up RAID 1 (Mirroring) in Linux – Part 3
My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 :	192.168.0.228
Hostname	 :	rd6.tecmintlocal.com
Disk 1 [20GB]	 :	/dev/sdb
Disk 2 [20GB]	 :	/dev/sdc
Disk 3 [20GB]	 :	/dev/sdd
Disk 4 [20GB]	 : 	/dev/sde

This article is Part 5 of a 9-tutorial RAID series; here we are going to see how to create and set up software RAID 6 (striping with double distributed parity) on Linux systems or servers using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.

Step 1: Installing mdadm Tool and Examine Drives

1. If you're following our last two RAID articles (Part 2 and Part 3), you've already seen how to install the ‘mdadm‘ tool. If you're new to this series, let me explain that ‘mdadm‘ is a tool to create and manage RAID in Linux systems; let's install the tool using the following command according to your Linux distribution.

# yum install mdadm		[on RedHat systems]
# apt-get install mdadm 	[on Debian systems]

2. After installing the tool, it's time to verify the four attached drives that we are going to use for RAID creation using the following ‘fdisk‘ command.

# fdisk -l | grep sd

Check Hard Disk in Linux

Check Disks in Linux

3. Before creating the RAID, always examine the disk drives for any RAID that may already exist on them.

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

Check Raid on Disk

Check Raid on Disk

Note: The above image depicts that there is no super-block detected and no RAID defined on the four disk drives. We may move further and start creating RAID 6.

Step 2: Drive Partitioning for RAID 6

4. Now create partitions for RAID on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of the following fdisk command. Here, we will show how to create a partition on the sdb drive; the same steps are to be followed for the rest of the drives.

Create /dev/sdb Partition
# fdisk /dev/sdb

Please follow the instructions shown below to create the partition.

  1. Press ‘n‘ to create a new partition.
  2. Then choose ‘P‘ for Primary partition.
  3. Next choose the partition number as 1.
  4. Accept the default full size by just pressing the Enter key twice.
  5. Next press ‘p‘ to print the defined partition.
  6. Press ‘t‘ to change the partition type.
  7. Press ‘L‘ to list all available types.
  8. Choose ‘fd‘ for Linux raid auto and press Enter to apply.
  9. Then again use ‘p‘ to print the changes we have made.
  10. Use ‘w‘ to write the changes.

Create sdb Partition

Create /dev/sdb Partition

Create /dev/sdc Partition
# fdisk /dev/sdc

Create sdc Partition

Create /dev/sdc Partition

Create /dev/sdd Partition
# fdisk /dev/sdd

Create sdd Partition

Create /dev/sdd Partition

Create /dev/sde Partition
# fdisk /dev/sde

Create sde Partition

Create /dev/sde Partition

5. After creating the partitions, it is always a good habit to examine the drives for super-blocks. If super-blocks do not exist, then we can go ahead and create a new RAID setup.

# mdadm -E /dev/sd[b-e]1


or

# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Check Raid on New Partitions

Check Raid on New Partitions

Step 3: Creating md device (RAID)

6. Now it's time to create the RAID device ‘md0‘ (i.e. /dev/md0), apply the RAID level to all the newly created partitions, and confirm the RAID using the following commands.

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# cat /proc/mdstat

Create Raid 6 Device

Create Raid 6 Device

7. You can also check the current progress of the RAID build using the watch command, as shown in the screen grab below.

# watch -n1 cat /proc/mdstat

Check Raid 6 Process

Check Raid 6 Process

8. Verify the raid devices using the following command.

# mdadm -E /dev/sd[b-e]1

Note: The above command will display the information for all four disks, which is quite long, so it is not possible to post the output or a screen grab here.

9. Next, verify the RAID array to confirm that the re-syncing has started.

# mdadm --detail /dev/md0

Check Raid 6 Array

Check Raid 6 Array

Step 4: Creating FileSystem on Raid Device

10. Create an ext4 filesystem on ‘/dev/md0‘ and mount it under /mnt/raid6. Here we've used ext4, but you can use any filesystem type of your choice.

# mkfs.ext4 /dev/md0

Create File System on Raid

Create File System on Raid 6

11. Mount the created filesystem under /mnt/raid6 and verify the files under the mount point; we can see the lost+found directory.

# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6/
# ls -l /mnt/raid6/

12. Create some files under the mount point and append some text to one of the files to verify the content.

# touch /mnt/raid6/raid6_test.txt
# ls -l /mnt/raid6/
# echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
# cat /mnt/raid6/raid6_test.txt

Verify Raid Content

Verify Raid Content

13. Add an entry in /etc/fstab to auto-mount the device at system startup by appending the entry below; the mount point may differ according to your environment.

# vim /etc/fstab

/dev/md0                /mnt/raid6              ext4    defaults        0 0

Automount Raid 6 Device

Automount Raid 6 Device

14. Next, execute the ‘mount -a‘ command to verify whether there are any errors in the fstab entry.

# mount -av

Verify Raid Automount

Verify Raid Automount

Step 5: Save RAID 6 Configuration

15. Please note that by default RAID has no config file. We have to save it manually using the command below, and then verify the status of the device ‘/dev/md0‘.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# mdadm --detail /dev/md0

Save Raid 6 Configuration

Save Raid 6 Configuration

Check Raid 6 Status

Check Raid 6 Status

Step 6: Adding a Spare Drive

16. Now the array has 4 disks with two sets of parity information available. Even if one or two of the disks fail, we can still get our data back, because there is double parity in RAID 6.

If a second disk fails, we can add a new one before losing a third disk. It is possible to add a spare drive while creating the RAID set, but I did not define one at creation time. We can, however, add a spare drive after a drive failure or while creating the RAID set. Since we have already created the RAID set, let me add a spare drive for demonstration.
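
For reference, a spare can also be declared at creation time with the --spare-devices option; a minimal sketch (assuming five prepared partitions, the fifth being the /dev/sdf1 we prepare below) would look like this:

# mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]1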

For demonstration purposes, I've hot-plugged a new HDD (i.e. /dev/sdf); let's verify the attached disk.

# ls -l /dev/ | grep sd

Check New Disk

Check New Disk

17. Now again check whether any RAID is already configured on the newly attached disk, using the same mdadm command.

# mdadm --examine /dev/sdf

Check Raid on New Disk

Check Raid on New Disk

Note: As usual, just as we created partitions for the four disks earlier, we have to create a new partition on the newly plugged disk using the fdisk command.

# fdisk /dev/sdf

Create sdf Partition

Create /dev/sdf Partition

18. Again, after creating the new partition on /dev/sdf, confirm that there is no RAID on the partition, add the spare drive to the /dev/md0 RAID device, and verify the added device.

# mdadm --examine /dev/sdf
# mdadm --examine /dev/sdf1
# mdadm --add /dev/md0 /dev/sdf1
# mdadm --detail /dev/md0

Verify Raid on sdf Partition

Verify Raid on sdf Partition

Add sdf Partition to Raid

Add sdf Partition to Raid

Verify sdf Partition Details

Verify sdf Partition Details

Step 7: Check Raid 6 Fault Tolerance

19. Now, let us check whether the spare drive takes over automatically if one of the disks in our array fails. For testing, I've manually marked one of the drives as failed.

Here, we’re going to mark /dev/sdd1 as failed drive.

# mdadm --manage --fail /dev/md0 /dev/sdd1

Check Raid 6 Fault Tolerance

Check Raid 6 Fault Tolerance

20. Let me get the details of the RAID set now and check whether our spare has started to sync.

# mdadm --detail /dev/md0

Check Auto Raid Syncing

Check Auto Raid Syncing

Hurray! Here, we can see that the spare got activated and the rebuilding process started. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor the rebuild process using the following command.

# cat /proc/mdstat

Raid 6 Auto Syncing

Raid 6 Auto Syncing

Conclusion

Here, we have seen how to set up RAID 6 using four disks. This RAID level is one of the more expensive setups, with high redundancy. We will see how to set up nested RAID 10 and much more in the next articles.

Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6

RAID 10 is a combination of RAID 0 and RAID 1. To set up RAID 10, we need at least 4 disks. In our earlier articles, we've seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.

Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume that we have some data saved to a logical volume created with RAID 10. For example, if we are saving the data “apple”, it will be stored across all 4 disks by the following method.

Create Raid 10 in Linux

Create Raid 10 in Linux

Using RAID 0, it will save “A” to the first disk and “p” to the second disk, then again “p” to the first disk and “l” to the second disk, then “e” to the first disk, and the round-robin process continues like this to save the data. From this we come to know that RAID 0 writes half of the data to the first disk and the other half to the second disk.

With the RAID 1 method, the same data is written to the other 2 disks as follows: “A” is written to both the first and second disks, “p” is written to both disks, and again the other “p” is written to both disks. Thus RAID 1 writes to both disks, and this continues in a round-robin process.

Now you know how RAID 10 works by combining RAID 0 and RAID 1. If we have 4 disks of 20 GB each, that is 80 GB in total, but we will get only 40 GB of storage capacity; half of the total capacity is lost to building RAID 10.

Pros and Cons of RAID 10

  1. Gives better performance.
  2. We will lose two disks' worth of capacity in RAID 10.
  3. Reading and writing will be very good, because it writes and reads to/from all 4 disks at the same time.
  4. It can be used for database solutions, which need high I/O disk writes.

Requirements

In RAID 10, we need a minimum of 4 disks: the first 2 disks for RAID 0 and the other 2 disks for RAID 1. Like I said before, RAID 10 is just a combination of RAID 0 & 1. If we need to extend the RAID group, we must increase the disks by a minimum of 4.

My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 	:	192.168.0.229
Hostname	 	:	rd10.tecmintlocal.com
Disk 1 [20GB]	 	:	/dev/sdb
Disk 2 [20GB]	 	:	/dev/sdc
Disk 3 [20GB]	 	:	/dev/sdd
Disk 4 [20GB]	 	:	/dev/sde

There are two ways to set up RAID 10. I'm going to show you both methods, but I recommend you follow the first one, which makes setting up RAID 10 a lot easier.

Method 1: Setting Up Raid 10

1. First, verify that all 4 added disks are detected using the following command.

# ls -l /dev | grep sd

2. Once the four disks are detected, it's time to check whether any RAID already exists on the drives before creating a new one.

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

Verify 4 Added Disks

Verify 4 Added Disks

Note: In the above output, you can see that there isn't any super-block detected yet, which means there is no RAID defined on any of the 4 drives.

Step 1: Drive Partitioning for RAID

3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool.

# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde
Create /dev/sdb Partition

Let me show you how to partition one of the disks (/dev/sdb) using fdisk; these steps will be the same for all the other disks too.

# fdisk /dev/sdb

Please use the steps below to create a new partition on the /dev/sdb drive.

  1. Press ‘n‘ to create a new partition.
  2. Then choose ‘P‘ for Primary partition.
  3. Then choose ‘1‘ to be the first partition.
  4. Next press ‘p‘ to print the created partition.
  5. Press ‘t‘ to change the partition type; press ‘L‘ if you need to see every available type.
  6. Here, we are selecting ‘fd‘ as the type, since this is for RAID.
  7. Next press ‘p‘ to print the defined partition.
  8. Then again use ‘p‘ to print the changes we have made.
  9. Use ‘w‘ to write the changes.

Disk sdb Partition

Disk sdb Partition

Note: Please use the same instructions above to create partitions on the other disks (sdc, sdd and sde).

4. After creating all 4 partitions, you again need to examine the drives for any existing RAID using the following commands.

# mdadm -E /dev/sd[b-e]
# mdadm -E /dev/sd[b-e]1

OR

# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Check All Disks for Raid

Check All Disks for Raid

Note: The above output shows that there is no super-block detected on any of the four newly created partitions, which means we can move forward to create RAID 10 on these drives.

Step 2: Creating ‘md’ RAID Device

5. Now it's time to create an ‘md’ device (i.e. /dev/md0) using the ‘mdadm’ RAID management tool. Before creating the device, your system must have the ‘mdadm’ tool installed; if not, install it first.

# yum install mdadm		[on RedHat systems]
# apt-get install mdadm 	[on Debian systems]

Once the ‘mdadm’ tool is installed, you can create an ‘md’ RAID device using the following command.

# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

6. Next verify the newly created raid device using the ‘cat’ command.

# cat /proc/mdstat

Create md raid Device

Create md raid Device

7. Next, examine all 4 drives using the command below. The output will be long, as it displays the information for all 4 disks.

# mdadm --examine /dev/sd[b-e]1

8. Next, check the details of Raid Array with the help of following command.

# mdadm --detail /dev/md0

Check Raid Array Details

Check Raid Array Details

Note: You can see in the above results that the status of the RAID is active and re-syncing.

Step 3: Creating Filesystem

9. Create an ext4 file system on ‘md0’ and mount it under ‘/mnt/raid10‘. Here, I've used ext4, but you can use any filesystem type if you want.

# mkfs.ext4 /dev/md0

Create md Filesystem

Create md Filesystem

10. After creating the filesystem, mount it under ‘/mnt/raid10‘ and list the contents of the mount point using the ‘ls -l’ command.

# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10/
# ls -l /mnt/raid10/

Next, add some files under the mount point, append some text to one of the files and check the content.

# touch /mnt/raid10/raid10_files.txt
# ls -l /mnt/raid10/
# echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
# cat /mnt/raid10/raid10_files.txt

Mount md Device

Mount md Device

11. For auto-mounting, open the ‘/etc/fstab‘ file and append the entry below; the mount point may differ according to your environment. Save and quit using wq!.

# vim /etc/fstab

/dev/md0                /mnt/raid10              ext4    defaults        0 0

AutoMount md Device

AutoMount md Device

12. Next, verify the ‘/etc/fstab‘ file for any errors using the ‘mount -a‘ command before restarting the system.

# mount -av

Check Errors in Fstab

Check Errors in Fstab

Step 4: Save RAID Configuration

13. By default RAID has no config file, so we need to save it manually after completing all the above steps, to preserve these settings across system boots.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Save Raid10 Configuration

Save Raid10 Configuration

That's it, we have created RAID 10 using method 1; this is the easier one. Now let's move forward and set up RAID 10 using method 2.

Method 2: Creating RAID 10

1. In method 2, we have to define 2 sets of RAID 1 and then define a RAID 0 using those RAID 1 sets. Here, what we will do is first create 2 mirrors (RAID 1) and then stripe over them with RAID 0.

First, list all the disks that are available for creating RAID 10.

# ls -l /dev | grep sd

List 4 Devices

List 4 Devices

2. Partition all 4 disks using the ‘fdisk’ command. For partitioning, you can follow step 3 above.

# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

3. After partitioning all 4 disks, examine them for any existing RAID blocks.

# mdadm --examine /dev/sd[b-e]
# mdadm --examine /dev/sd[b-e]1

Examine 4 Disks

Examine 4 Disks

Step 1: Creating RAID 1

4. First, let me create 2 sets of RAID 1 using the 4 partitions: one set using ‘sdb1’ and ‘sdc1’, and the other set using ‘sdd1’ & ‘sde1’.

# mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
# cat /proc/mdstat

Creating Raid 1

Creating Raid 1

Check Details of Raid 1

Check Details of Raid 1

Step 2: Creating RAID 0

5. Next, create the RAID 0 using md1 and md2 devices.

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
# cat /proc/mdstat

Creating Raid 0

Creating Raid 0

Step 3: Save RAID Configuration

6. We need to save the configuration in ‘/etc/mdadm.conf‘ so that all RAID devices are loaded at every reboot.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

After this, we need to follow step 3 (Creating Filesystem) of method 1.

That's it! We have created RAID 1+0 using method 2. We lose two disks' worth of space here, but the performance will be excellent compared to other RAID setups.

Conclusion

Here we have created RAID 10 using two methods. RAID 10 offers both good performance and redundancy. Hope this helps you understand the RAID 10 (nested RAID) level. We will see how to grow an existing RAID array and much more in upcoming articles.

Growing an Existing RAID Array and Removing Failed Disks in Raid – Part 7

Many newbies get confused by the word array. An array is just a collection of disks; in other words, we can call an array a set or group, just like a carton of eggs contains 6 eggs. Likewise, a RAID array contains a number of disks; it may be 2, 4, 6, 8, 12, 16, etc. Hope you now know what an array is.

Here we will see how to grow (extend) an existing array or RAID group. For example, if we are using 2 disks in an array to form a RAID 1 set and we need more space in that group, we can extend the size of the array using the mdadm --grow command, just by adding a disk to the existing array. After growing (adding a disk to an existing array), we will see how to remove a failed disk from the array.

Grow Raid Array in Linux

Growing Raid Array and Removing Failed Disks

Assume that one of the disks is a little weak and needs to be removed. Until it fails we can leave it in use, but we need to add a spare drive and grow the mirror before it fails, because we want to save our data. When the weak disk fails, we can remove it from the array; this is the concept we are going to see in this topic.

Features of RAID Growth

  1. We can grow (extend) the size of any RAID set.
  2. We can remove the faulty disk after growing the RAID array with a new disk.
  3. We can grow the RAID array without any downtime.

Requirements

  1. To grow a RAID array, we need an existing RAID set (array).
  2. We need extra disks to grow the array.
  3. Here I'm using 1 disk to grow the existing array.

Before we learn about growing and recovering an array, we have to know the basics of RAID levels and setups. Follow the links below to learn about those setups.

  1. Understanding Basic RAID Concepts – Part 1
  2. Creating a Software Raid 0 in Linux – Part 2
My Server Setup
Operating System 	:	CentOS 6.5 Final
IP Address	 	:	192.168.0.230
Hostname		:	grow.tecmintlocal.com
2 Existing Disks 	:	1 GB
1 Additional Disk	:	1 GB

Here, my existing RAID has 2 disks of 1GB each, and we are now adding one more 1GB disk to the existing RAID array.

Growing an Existing RAID Array

1. Before growing an array, first list the existing RAID array using the following command.

# mdadm --detail /dev/md0

Check Existing Raid Array

Check Existing Raid Array

Note: The above output shows that I already have two disks in the RAID array at the raid1 level. Now we are adding one more disk to the existing array.

2. Now let's add the new disk “sdd” and create a partition on it using the ‘fdisk‘ command.

# fdisk /dev/sdd

Please use the instructions below to create a partition on the /dev/sdd drive.

  1. Press ‘n‘ to create a new partition.
  2. Then choose ‘P‘ for Primary partition.
  3. Then choose ‘1‘ to be the first partition.
  4. Next press ‘p‘ to print the created partition.
  5. Press ‘t‘ to change the type; here, we are selecting ‘fd‘ as this is for RAID.
  6. Next press ‘p‘ to print the defined partition.
  7. Then again use ‘p‘ to print the changes we have made.
  8. Use ‘w‘ to write the changes.

Create New Partition in Linux

Create New sdd Partition

3. Once the new sdd partition is created, you can verify it using the command below.

# ls -l /dev/ | grep sd

Confirm sdd Partition

Confirm sdd Partition

4. Next, examine the newly created partition for any existing RAID before adding it to the array.

# mdadm --examine /dev/sdd1

Check Raid on sdd Partition

Check Raid on sdd Partition

Note: The above output shows that the disk has no super-blocks detected, which means we can move forward and add the new disk to the existing array.

5. To add the new partition /dev/sdd1 to the existing array md0, use the following command.

# mdadm --manage /dev/md0 --add /dev/sdd1

Add Disk To Raid-Array

Add Disk To Raid-Array

6. Once the new disk has been added, check for it in our array using the following command.

# mdadm --detail /dev/md0

Confirm Disk Added to Raid

Confirm Disk Added to Raid

Note: In the above output, you can see that the drive has been added as a spare. We already have 2 disks in the array, but we are expecting 3 devices in the array; for that we need to grow the array.

7. To grow the array, we have to use the command below.

# mdadm --grow --raid-devices=3 /dev/md0

Grow Raid Array

Grow Raid Array

Now we can see that the third disk (sdd1) has been added to the array; after adding the third disk, it will sync the data from the other two disks.

# mdadm --detail /dev/md0

Confirm Raid Array

Confirm Raid Array

Note: For large disks it will take hours to sync the contents. Here I have used a 1GB virtual disk, so it completed very quickly, within seconds.
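
If you want to watch the re-sync as it happens, the same watch trick used earlier in this series works here too:

# watch -n1 cat /proc/mdstat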

Removing Disks from Array

8. After the data has been synced to the new disk ‘sdd1‘ from the other two disks, all three disks now have the same contents.

As I said earlier, let's assume that one of the disks is weak and needs to be removed before it fails. So, now assume that the disk ‘sdc1‘ is weak and needs to be removed from the existing array.

Before removing a disk, we have to mark it as failed; only then can we remove it.

# mdadm --fail /dev/md0 /dev/sdc1
# mdadm --detail /dev/md0

Disk Fail in Raid Array

Disk Fail in Raid Array

From the above output, we can clearly see that the disk is marked as faulty at the bottom. Even though it is faulty, we can see that the raid devices count is 3, failed is 1 and the state is degraded.

Now we have to remove the faulty drive from the array and resize the array down to 2 devices, so that the raid-devices count is set back to 2 as before.

# mdadm --remove /dev/md0 /dev/sdc1

Remove Disk in Raid Array

Remove Disk in Raid Array

9. Once the faulty drive is removed, we have to resize the RAID array back to 2 disks.

# mdadm --grow --raid-devices=2 /dev/md0
# mdadm --detail /dev/md0

Grow Disks in Raid Array

Grow Disks in Raid Array

From the above output, you can see that our array now has only 2 devices. If you need to grow the array again, follow the same steps as described above. If you need to add a drive as a spare, simply add it to the array without increasing the device count, so that if a disk fails, it will automatically become active and rebuild, as sketched below.
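
A minimal sketch of adding such a spare (assuming a prepared partition /dev/sde1, a device name not used in this setup): adding a device without raising --raid-devices leaves it as a spare.

# mdadm --manage /dev/md0 --add /dev/sde1
# mdadm --detail /dev/md0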

Conclusion

In this article, we've seen how to grow an existing RAID set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime. During data syncing, system users, files and applications are not affected in any way.

In the next article, I will show you how to manage RAID; till then, stay tuned for updates and don't forget to add your comments.

How to Recover Data and Rebuild Failed Software RAID’s – Part 8

In the previous articles of this RAID series you went from zero to RAID hero. We reviewed several software RAID configurations and explained the essentials of each one, along with the reasons why you would lean towards one or the other depending on your specific scenario.

Recover Rebuild Failed Software RAID's

Recover Rebuild Failed Software RAID’s – Part 8

In this guide we will discuss how to rebuild a software RAID array without data loss in the event of a disk failure. For brevity, we will only consider a RAID 1 setup, but the concepts and commands apply to all cases alike.

RAID Testing Scenario

Before proceeding further, please make sure you have set up a RAID 1 array following the instructions provided in Part 3 of this series: How to set up RAID 1 (Mirror) in Linux.

The only variations in our present case will be:

1) a different version of CentOS (v7) than the one used in that article (v6.5), and
2) different disk sizes for /dev/sdb and /dev/sdc (8 GB each).

In addition, if SELinux is enabled in enforcing mode, you will need to add the corresponding labels to the directory where you’ll mount the RAID device. Otherwise, you’ll run into this warning message while attempting to mount it:

SELinux RAID Mount Error

SELinux RAID Mount Error

You can fix this by running:

# restorecon -R /mnt/raid1

Setting up RAID Monitoring

There is a variety of reasons why a storage device can fail (SSDs have greatly reduced the chances of this happening, though), but regardless of the cause you can be sure that issues can occur anytime and you need to be prepared to replace the failed part and to ensure the availability and integrity of your data.

A word of advice first. Even when you can inspect /proc/mdstat in order to check the status of your RAIDs, there’s a better and time-saving method that consists of running mdadm in monitor + scan mode, which will send alerts via email to a predefined recipient.

To set this up, add the following line in /etc/mdadm.conf:

MAILADDR user@<domain or localhost>

In my case:

MAILADDR gacanepa@localhost

RAID Monitoring Email Alerts

RAID Monitoring Email Alerts

To run mdadm in monitor + scan mode, add the following crontab entry as root:

@reboot /sbin/mdadm --monitor --scan --oneshot

By default, mdadm will check the RAID arrays every 60 seconds and send an alert if it finds an issue. You can modify this behavior by adding the --delay option to the crontab entry above along with the amount of seconds (for example, --delay 1800 means 30 minutes).
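
For example, the crontab entry above could be extended like this (30 minutes is just an illustrative value):

@reboot /sbin/mdadm --monitor --scan --oneshot --delay 1800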

Finally, make sure you have a Mail User Agent (MUA) installed, such as mutt or mailx. Otherwise, you will not receive any alerts.

In a minute we will see what an alert sent by mdadm looks like.

Simulating and Replacing a failed RAID Storage Device

To simulate an issue with one of the storage devices in the RAID array, we will use the --manage and --set-faulty options as follows:

# mdadm --manage --set-faulty /dev/md0 /dev/sdc1  

This will result in /dev/sdc1 being marked as faulty, as we can see in /proc/mdstat:

Simulate Issue with RAID Storage

Simulate Issue with RAID Storage

More importantly, let’s see if we received an email alert with the same warning:

Email Alert on Failed RAID Device

Email Alert on Failed RAID Device

In this case, you will need to remove the device from the software RAID array:

# mdadm /dev/md0 --remove /dev/sdc1

Then you can physically remove it from the machine and replace it with a spare part (/dev/sdd, where a partition of type fd has been previously created):

# mdadm --manage /dev/md0 --add /dev/sdd1

Luckily for us, the system will automatically start rebuilding the array with the part that we just added. We can test this by marking /dev/sdb1 as faulty, removing it from the array, and making sure that the file tecmint.txt is still accessible at /mnt/raid1:

# mdadm --detail /dev/md0
# mount | grep raid1
# ls -l /mnt/raid1 | grep tecmint
# cat /mnt/raid1/tecmint.txt

Confirm Rebuilding RAID Array

Confirm Rebuilding RAID Array

The image above clearly shows that after adding /dev/sdd1 to the array as a replacement for /dev/sdc1, the rebuilding of data was automatically performed by the system without intervention on our part.

Though not strictly required, it's a great idea to have a spare device handy so that the process of replacing a faulty device with a good drive can be done in a snap. To do that, let's re-add /dev/sdb1 and /dev/sdc1:

# mdadm --manage /dev/md0 --add /dev/sdb1
# mdadm --manage /dev/md0 --add /dev/sdc1

Replace Failed Raid Device

Replace Failed Raid Device

Recovering from a Redundancy Loss

As explained earlier, mdadm will automatically rebuild the data when one disk fails. But what happens if 2 disks in the array fail? Let's simulate such a scenario by marking /dev/sdb1 and /dev/sdd1 as faulty:

# umount /mnt/raid1
# mdadm --manage --set-faulty /dev/md0 /dev/sdb1
# mdadm --stop /dev/md0
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1

Attempts to re-create the array the same way it was created at this time (or using the --assume-clean option) may result in data loss, so it should be left as a last resort.

Let’s try to recover the data from /dev/sdb1, for example, into a similar disk partition (/dev/sde1 – note that this requires that you create a partition of type fd in /dev/sde before proceeding) using ddrescue:

# ddrescue -r 2 /dev/sdb1 /dev/sde1

Recovering Raid Array

Recovering Raid Array

Please note that up to this point, we haven’t touched /dev/sdb or /dev/sdd, the partitions that were part of the RAID array.

Now let’s rebuild the array using /dev/sde1 and /dev/sdf1:

# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[e-f]1

Please note that in a real situation, you will typically use the same device names as with the original array, that is, /dev/sdb1 and /dev/sdc1 after the failed disks have been replaced with new ones.

In this article I have chosen to use extra devices to re-create the array with brand new disks and to avoid confusion with the original failed drives.

When asked whether to continue writing to the array, type Y and press Enter. The array should be started and you should be able to watch its progress with:

# watch -n 1 cat /proc/mdstat

When the process completes, you should be able to access the content of your RAID:

Confirm Raid Content

Confirm Raid Content

Summary

In this article we have reviewed how to recover from RAID failures and redundancy losses. However, you need to remember that this technology is a storage solution and DOES NOT replace backups.

The principles explained in this guide apply to all RAID setups alike, as well as the concepts that we will cover in the next and final guide of this series (RAID management).

How to Manage Software RAID’s in Linux with ‘Mdadm’ Tool – Part 9

Regardless of your previous experience with RAID arrays, and whether you followed all of the tutorials in this RAID series or not, managing software RAID in Linux is not a very complicated task once you have become acquainted with the mdadm --manage command.

Manage Raid Devices with Mdadm in Linux

Manage Raid Devices with Mdadm in Linux – Part 9

In this tutorial we will review the functionality provided by this tool so that you can have it handy when you need it.

RAID Testing Scenario

As in the last article of this series, for simplicity we will use a RAID 1 (mirror) array consisting of two 8 GB disks (/dev/sdb and /dev/sdc) and an initial spare device (/dev/sdd) to illustrate, but the commands and concepts listed herein apply to other types of setups as well. That said, feel free to go ahead and add this page to your browser's bookmarks, and let's get started.

Understanding mdadm Options and Usage

Fortunately, mdadm provides a built-in --help flag that provides explanations and documentation for each of the main options.

Thus, let’s start by typing:

# mdadm --manage --help

to see the tasks that mdadm --manage allows us to perform, and how:

Manage RAID with mdadm Tool

Manage RAID with mdadm Tool

As we can see in the above image, managing a RAID array involves performing the following tasks at one time or another:

  1. Adding (or re-adding) a device to the array.
  2. Marking a device as faulty.
  3. Removing a faulty device from the array.
  4. Replacing a faulty device with a spare one.
  5. Starting an array that's partially built.
  6. Stopping an array.
  7. Marking an array as ro (read-only) or rw (read-write).

Managing RAID Devices with mdadm Tool

Note that if you omit the --manage option, mdadm assumes management mode anyway. Keep this fact in mind to avoid running into trouble further down the road.
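
In other words, the two forms below are equivalent (using the add operation from Example 1 further down as an illustration):

# mdadm --manage /dev/md0 --add /dev/sdd1
# mdadm /dev/md0 --add /dev/sdd1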

The highlighted text in the previous image shows the basic syntax to manage RAIDs:

# mdadm --manage RAID options devices

Let’s illustrate with a few examples.

Example 1: Add a device to the RAID array

You will typically add a new device when replacing a faulty one, or when you have a spare part that you want to have handy in case of a failure:

# mdadm --manage /dev/md0 --add /dev/sdd1

Add Device to Raid Array

Add Device to Raid Array

Example 2: Marking a RAID device as faulty and removing it from the array

This is a mandatory step before logically removing the device from the array, and later physically pulling it out from the machine – in that order (if you miss one of these steps you may end up causing actual damage to the device):

# mdadm --manage /dev/md0 --fail /dev/sdb1

Note how the spare device added in the previous example is used to automatically replace the failed disk. Not only that, but the recovery and rebuilding of raid data start immediately as well:

Recover and Rebuild Raid Data

Recover and Rebuild Raid Data

Once the device has been indicated as failed manually, it can be safely removed from the array:

# mdadm --manage /dev/md0 --remove /dev/sdb1

Example 3: Re-adding a device that was previously removed from the array

Up to this point, we have a working RAID 1 array that consists of 2 active devices: /dev/sdc1 and /dev/sdd1. If we attempt to re-add /dev/sdb1 to /dev/md0 right now:

# mdadm --manage /dev/md0 --re-add /dev/sdb1

we will run into an error:

mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible

because the array is already made up of the maximum possible number of drives. So we have 2 choices: a) add /dev/sdb1 as a spare, as shown in Example #1, or b) remove /dev/sdd1 from the array and then re-add /dev/sdb1.

We choose option b), and will start by stopping the array to later reassemble it:

# mdadm --stop /dev/md0
# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

If the above command does not successfully add /dev/sdb1 back to the array, use the command from Example #1 to do it.

Although mdadm will initially detect the newly added device as a spare, it will start rebuilding the data and when it’s done doing so, it should recognize the device to be an active part of the RAID:

Raid Rebuild Status

Raid Rebuild Status

Example 4: Replace a Raid device with a specific disk

Replacing a disk in the array with a spare one is as easy as:

# mdadm --manage /dev/md0 --replace /dev/sdb1 --with /dev/sdd1

Replace Raid Device

Replace Raid Device

This results in the device following the --with switch being added to the RAID, while the disk indicated through --replace is marked as faulty:

Check Raid Rebuild Status

Check Raid Rebuild Status

Example 5: Marking a RAID array as ro or rw

After creating the array, you must have created a filesystem on top of it and mounted it on a directory in order to use it. What you probably didn’t know then is that you can mark the RAID as ro, thus allowing only read operations to be performed on it, or rw, in order to write to the device as well.

To mark the device as ro, it needs to be unmounted first:

# umount /mnt/raid1
# mdadm --manage /dev/md0 --readonly
# mount /mnt/raid1
# touch /mnt/raid1/test1

Set Permissions on Raid Array

Set Permissions on Raid Array

To configure the array to allow write operations as well, use the --readwrite option. Note that you will need to unmount the device and stop it before setting the rw flag:

# umount /mnt/raid1
# mdadm --manage /dev/md0 --stop
# mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
# mdadm --manage /dev/md0 --readwrite
# mount /mnt/raid1
# touch /mnt/raid1/test2

Allow Read Write Permission on Raid

Allow Read Write Permission on Raid

Summary

Throughout this series we have explained how to set up a variety of software RAID arrays used in enterprise environments. If you followed the articles and the examples provided in them, you are prepared to leverage the power of software RAID in Linux.

Perf- A Performance Monitoring and Analysis Tool for Linux

When we talk of performance in computing, we refer to the relationship between our resources and the tasks that they allow us to complete in a given period of time.

Perf- A Performance Monitoring and Analysis Tool for Linux

Perf- A Performance Monitoring and Analysis Tool for Linux

In a day of fierce competition between companies, it is important that we learn how to use what we have to the best of its capacity. The waste of hardware or software resources, or the inability to use them efficiently, ends up being a loss that we just can't afford if we want to be at the top of our game.

At the same time, we must be careful to not take our resources to a limit where sustained use will yield irreparable damage.

In this article we will introduce you to a relatively new performance analysis tool and provide tips that you can use to monitor your Linux systems, including hardware and applications. This will help you ensure that they operate so that you are capable of producing the desired results without wasting resources or your own energy.

Introducing and installing Perf in Linux

Among others, Linux provides a performance monitoring and analysis tool conveniently called perf. So what distinguishes perf from other well-known tools with which you are already familiar?

The answer is that perf provides access to the Performance Monitoring Unit in the CPU, and thus allows us to have a close look at the behavior of the hardware and its associated events.

In addition, it can also monitor software events, and create reports out of the data that is collected.

You can install perf in RPM-based distributions with:

# yum update && yum install perf     [CentOS / RHEL / Fedora]
# dnf update && dnf install perf     [Fedora 23+ releases]

In Debian and derivatives:

# sudo aptitude update && sudo aptitude install linux-tools-$(uname -r) linux-tools-generic

If uname -r in the command above returns extra strings besides the actual version (3.2.0-23-generic in my case), you may have to type linux-tools-3.2.0-23 instead of using the output of uname.
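Once the package is installed, you can quickly verify that the tool is available (the exact output will depend on your kernel and distribution):

# perf --version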

It is also important to note that perf yields incomplete results when run in a guest on top of VirtualBox or VMware, as these hypervisors do not allow access to hardware counters the way other virtualization technologies (such as KVM or Xen) do.

Additionally, keep in mind that some perf commands may be restricted to root by default, which can be disabled (until the system is rebooted) by doing:

# echo 0 > /proc/sys/kernel/perf_event_paranoid

If you need to disable paranoid mode permanently, update the following setting in /etc/sysctl.conf file.

kernel.perf_event_paranoid = 0
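To apply the change without rebooting (assuming the line above was added to /etc/sysctl.conf), you can then reload the sysctl settings:

# sysctl -p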

Subcommands

Once you have installed perf, you can refer to its man page for a list of available subcommands (you can think of subcommands as special options that open a specific window into the system). For the best and most complete results, use perf either as root or through sudo.

Perf list

perf list (without options) returns all the symbolic event types (long list). If you want to view the list of events available in a specific category, use perf list followed by the category name ([hw|sw|cache|tracepoint|pmu|event_glob]), such as:

Display list of software pre-defined events in Linux:

# perf list sw 

List Software Pre-defined Events in Linux

List Software Pre-defined Events in Linux

Perf stat

perf stat runs a command and collects Linux performance statistics while that command executes. What happens in our system when we run dd?

# perf stat dd if=/dev/zero of=test.iso bs=10M count=1

Collects Performance Statistics of Linux Command

Collects Performance Statistics of Linux Command

The stats shown above indicate, among other things:

  1. The execution of the dd command took 21.812281 milliseconds of CPU. If we divide this number by the “seconds time elapsed” value below (23.914596 milliseconds), it yields 0.912 (CPU utilized).
  2. While the command was executed, 15 context-switches (also known as process switches) indicate that the CPUs were switched 15 times from one process (or thread) to another.
  3. 2 CPU migrations are the expected result when, on a 2-core CPU, the workload is distributed evenly across the cores.
    During that time (21.812281 milliseconds), the total number of CPU cycles that were consumed was 62,025,623, which divided by 0.021812281 seconds gives 2.843 GHz.
  4. If we divide the number of cycles by the total instructions count we get 4.9 Cycles Per Instruction, which means each instruction took almost 5 CPU cycles to complete (on average). We can blame this (at least in part) on the number of branches and branch-misses (see below), which end up wasting or misusing CPU cycles.
  5. As the command was executed, a total of 3,552,630 branches were encountered. This is the CPU-level representation of decision points and loops in the code. The more branches, the lower the performance. To compensate for this, all modern CPUs attempt to predict the flow the code will take. 51,348 branch-misses indicate the prediction feature was wrong 1.45% of the time.

The same principle applies to gathering stats (or in other words, profiling) while an application is running. Simply launch the desired application and after a reasonable period of time (which is up to you) close it, and perf will display the stats in the screen. By analyzing those stats you can identify potential problems.
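For instance, one way to profile an application that is already running is to attach perf stat to its PID for a fixed window of time; in the sketch below, myapp is a hypothetical single-instance process and 30 seconds is an arbitrary sampling period:

# perf stat -p $(pidof myapp) sleep 30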

Perf top

perf top is similar to the top command, in that it displays an almost real-time system profile (also known as live analysis).

With the -a option you will display all of the known event types, whereas the -e option will allow you to choose a specific event category (as returned by perf list):

The following will display all cycles events:

# perf top -a 

The following will display all cpu-clock related events:

# perf top -e cpu-clock 

Live Analysis of Linux Performance

Live Analysis of Linux Performance

The first column in the output above represents the percentage of samples taken since the beginning of the run, grouped by function Symbol and Shared Object. More options are available in man perf-top.

Perf record

perf record runs a command and saves the statistical data into a file named perf.data inside the current working directory. It runs similarly to perf stat.

Type perf record followed by a command:

# perf record dd if=/dev/zero of=test.iso bs=10M count=1

Record Command Statistical Data

Record Command Statistical Data
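As a side note, perf record can also capture call graphs with the -g option, or attach to a process that is already running instead of launching a new command (as before, myapp and the 30-second window are just placeholders):

# perf record -g dd if=/dev/zero of=test.iso bs=10M count=1
# perf record -p $(pidof myapp) sleep 30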

Perf report

perf report formats the data collected in perf.data above into a performance report:

# sudo perf report

Perf Linux Performance Report

Perf Linux Performance Report
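If you prefer plain-text output instead of the interactive browser, for example to redirect the report to a file, the --stdio flag can be used:

# perf report --stdio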

All of the above subcommands have a dedicated man page that can be invoked as:

# man perf-subcommand

where subcommand is either list, stat, top, record, or report. These are the most frequently used subcommands; others are listed in the documentation (refer to the Summary section for the link).

Summary

In this guide we have introduced you to perf, a performance monitoring and analysis tool for Linux. We highly encourage you to become familiar with its documentation which is maintained in https://perf.wiki.kernel.org.

If you find applications that are consuming a high percentage of resources, you may consider modifying the source code, or using other alternatives.

Source

Scout_Realtime – Monitor Server and Process Metrics in Linux

In the past, we’ve covered lots of command-line based tools for monitoring Linux performance, such as top, htop, atop, glances and more, and a number of web based tools such as cockpit, pydash, linux-dash, just to mention but a few. You can also run glances in web server mode to monitor remote servers. But all that aside, we have discovered yet another simple server monitoring tool that we would like to share with you, called Scout_Realtime.

Scout_Realtime is a simple, easy-to-use web based tool for monitoring Linux server metrics in real-time, in a top-like fashion. It shows you smooth-flowing charts about metrics gathered from the CPU, memory, disk, network, and processes (top 10), in real-time.

Real Time Linux Server Process Monitoring

Real Time Linux Server Process Monitoring

In this article, we will show you how to install the scout_realtime monitoring tool on Linux systems to monitor a remote server.

Installing Scout_Realtime Monitoring Tool in Linux

1. To install scout_realtime on your Linux server, you must have Ruby 1.9.3+ installed, which you can set up using one of the following commands.

$ sudo apt-get install rubygems		[On Debian/Ubuntu]
$ sudo yum -y install rubygems-devel	[On RHEL/CentOS]
$ sudo dnf -y install rubygems-devel	[On Fedora 22+]
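Before proceeding, you may want to confirm that a suitable Ruby and its gem command are available (the exact versions will differ across distributions):

$ ruby --version
$ gem --version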

2. Once you have installed Ruby on your Linux system, you can install the scout_realtime package using the following command.

$ sudo gem install scout_realtime

3. After successfully installing the scout_realtime package, you need to start the scout_realtime daemon, which will collect server metrics in real time, as shown.

$ scout_realtime

Start Scout Realtime on Server

Start Scout Realtime on Server

4. The scout_realtime daemon is now running on the Linux server that you want to monitor remotely, listening on port 5555. If you are running a firewall, you need to open port 5555, which scout_realtime listens on, to allow requests to it.

---------- On Debian/Ubuntu ----------
$ sudo ufw allow 5555  
$ sudo ufw reload 

---------- On RHEL/CentOS 6.x ----------
$ sudo iptables -A INPUT -p tcp --dport 5555 -j ACCEPT    
$ sudo service iptables restart

---------- On RHEL/CentOS 7.x ----------
$ sudo firewall-cmd --permanent --add-port=5555/tcp       
$ sudo firewall-cmd --reload 
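You can then double-check that the port is actually open, using the tool that matches the firewall you configured above, for example:

---------- On Debian/Ubuntu ----------
$ sudo ufw status

---------- On RHEL/CentOS 7.x ----------
$ sudo firewall-cmd --list-ports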

5. Now, from any other machine, open a web browser and use the URL below to access scout_realtime and monitor your remote Linux server’s performance.

http://localhost:5555 
OR
http://ip-address-or-domain.com:5555 

ScoutRealtime Linux Server Process Monitoring

ScoutRealtime Linux Server Process Monitoring

6. By default, scout_realtime logs are written to .scout/scout_realtime.log on the system, which you can view using the cat command.

$ cat .scout/scout_realtime.log
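If you would rather watch new entries as they are written, following the same file with tail works too:

$ tail -f .scout/scout_realtime.log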

7. To stop the scout_realtime daemon, run the following command.

$ scout_realtime stop

8. To uninstall scout_realtime from the system, run the following command.

$ gem uninstall scout_realtime

For more information, check out Scout_realtime Github repository.

It’s that simple! Scout_realtime is a simple yet useful tool for monitoring Linux server metrics in real-time in a top-like fashion. You can ask any questions or give us your feedback in the comments about this article.

Source

HTTPie – A Modern HTTP Client Similar to Curl and Wget Commands

HTTPie (pronounced aitch-tee-tee-pie) is a cURL-like, modern, user-friendly, and cross-platform command line HTTP client written in Python. It is designed to make CLI interaction with web services easy and as user-friendly as possible.

HTTPie - A Command Line HTTP Client

HTTPie – A Command Line HTTP Client

It has a simple http command that enables users to send arbitrary HTTP requests using a straightforward and natural syntax. It is primarily used for testing, debugging, and interacting with HTTP servers, web services and RESTful APIs.

  • HTTPie comes with an intuitive UI and supports JSON.
  • Expressive and intuitive command syntax.
  • Syntax highlighting, formatted and colorized terminal output.
  • HTTPS, proxies, and authentication support.
  • Support for forms and file uploads.
  • Support for arbitrary request data and headers.
  • Wget-like downloads and extensions.
  • Supports Python 2.7 and 3.x.

In this article, we will show how to install and use httpie with some basic examples in Linux.

How to Install and Use HTTPie in Linux

Most Linux distributions provide an HTTPie package that can be easily installed using the default system package manager, for example:

# apt-get install httpie  [On Debian/Ubuntu]
# dnf install httpie      [On Fedora]
# yum install httpie      [On CentOS/RHEL]
# pacman -S httpie        [On Arch Linux]

Once installed, the syntax for using httpie is:

$ http [options] [METHOD] URL [ITEM [ITEM]]

The most basic usage of httpie is to provide it a URL as an argument:

$ http example.com

Basic HTTPie Usage

Basic HTTPie Usage

Now let’s see some basic usage of httpie command with examples.

Send an HTTP Method

You can send an HTTP method in the request; for example, we will send the GET method, which is used to request data from a specified resource. Note that the name of the HTTP method comes right before the URL argument.

$ http GET tecmint.lan

Send GET HTTP Method

Send GET HTTP Method
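Besides GET, the same pattern works with other methods. As a quick sketch, the POST request below sends a JSON body to the public httpbin.org test service (also used later in this article); the field names are arbitrary, and the := operator marks a non-string JSON value:

$ http POST https://httpbin.org/post name='John Doe' age:=25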

Upload a File

This example shows how to upload a file to transfer.sh using input redirection.

$ http https://transfer.sh < file.txt

Download a File

You can download a file as shown.

$ http https://transfer.sh/Vq3Kg/file.txt > file.txt		#using output redirection
OR
$ http --download https://transfer.sh/Vq3Kg/file.txt  	        #using wget format

Submit a Form

You can also submit data to a form as shown.

$ http --form POST tecmint.lan date='Hello World'

View Request Details

To see the request that is being sent, use the -v option, for example.

$ http -v --form POST tecmint.lan date='Hello World'

View HTTP Request Details

View HTTP Request Details

Basic HTTP Auth

HTTPie also supports basic HTTP authentication from the CLI in the form:

$ http -a username:password http://tecmint.lan/admin/
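If you would rather not put the password on the command line (where it may end up in your shell history), you can pass only the username and HTTPie will prompt for the password interactively:

$ http -a username http://tecmint.lan/admin/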

Custom HTTP Headers

You can also define custom HTTP headers using the Header:Value notation. We can test this using the following URL, which returns the request headers. Here, we have defined a custom User-Agent called ‘TEST 1.0’:

$ http GET https://httpbin.org/headers User-Agent:'TEST 1.0'

Custom HTTP Headers

Custom HTTP Headers

See a complete list of usage options by running:

$ http --help
OR
$ man http

You can find more usage examples from the HTTPie Github repository: https://github.com/jakubroztocil/httpie.

HTTPie is a cURL-like, modern, user-friendly command line HTTP client with simple and natural syntax, and displays colorized output. In this article, we have shown how to install and use httpie in Linux. If you have any questions, reach us via the comment form below.

Source
