Configure Active/Passive NFS Server on a Pacemaker Cluster with Puppet

We’re going to use Puppet to install Pacemaker/Corosync and configure an NFS cluster.

For instructions on how to compile fence_pve on CentOS 7, scroll to the bottom of the page.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have two CentOS 7 servers installed which we want to configure as follows:

storage1.hl.local (10.11.1.15) – Pacemaker cluster node
storage2.hl.local (10.11.1.16) – Pacemaker cluster node

SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

Cluster Requirements

To configure the cluster, we are going to need the following:

  1. A virtual IP address, required for the NFS server.
  2. Shared storage for the NFS nodes in the cluster.
  3. A power fencing device for each node of the cluster.

The virtual IP is 10.11.1.31 (with the DNS name of nfsvip.hl.local).

With regards to shared storage, while I agree that iSCSI would be ideal, the truth is that "we don't have that kind of money". We will have to make do with a disk shared among different VMs on the same Proxmox host.

In terms of fencing, as mentioned earlier, Proxmox does not use libvirt, so Pacemaker clusters cannot be fenced using fence-agents-virsh. There is fence_pve available, but we won't find it in CentOS/RHEL. We'll need to compile it from source.

Proxmox and Disk Sharing

I was unable to find a way to add an existing disk to another VM via the WebUI. The Proxmox forum was somewhat helpful, and I ended up manually editing the VM's config file, since the WebUI would not let me assign the same disk to two VMs.

Take a look at the following image, showing two disks attached to the storage1.hl.local node:

We want to use the smaller (2GB) disk for NFS.

The VM ID of the storage2.hl.local node is 208, therefore we can add the disk by editing the node's configuration file.

# cat /etc/pve/qemu-server/208.conf
boot: cn
bootdisk: scsi0
cores: 1
hotplug: disk,cpu
memory: 768
name: storage2.hl.local
net0: virtio=00:22:FF:00:00:16,bridge=vmbr0
onboot: 1
ostype: l26
scsi0: data_ssd:208/vm-208-disk-1.qcow2,size=32G
scsi1: data_ssd:207/vm-207-disk-3.qcow2,size=2G
scsihw: virtio-scsi-pci
smbios1: uuid=030e28da-72e6-412d-be77-a79f06862351
sockets: 1
startup: order=208

The disk that we’ve added is scsi1. Note how it references the VM ID 207.

The disk will be visible on both nodes as /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.
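Note that the Puppet manifest further down mounts this disk as an ext4 filesystem, but nothing below creates that filesystem. A minimal one-off sketch of that step (run mkfs on one node only, since the disk is shared):

# lsblk /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
# mkfs.ext4 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1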

Configuration with Puppet

The Puppet master runs on the Katello server.

Puppet Modules

We use the puppet-corosync Puppet module to configure the server. We also use puppetlabs-accounts for Linux account creation.

Please see the module documentation for features supported and configuration options available.
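If the modules are not yet present on the Puppet server, they can be pulled from the Forge; a minimal sketch, using the module names referenced above:

# puppet module install puppet-corosync
# puppet module install puppetlabs-accounts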

Configure Firewall

It is essential to ensure that Pacemaker servers can talk to each other. The following needs applying to both cluster nodes:

firewall { '007 accept HA cluster requests':
  dport  => ['2224', '3121', '5403', '21064'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '008 accept HA cluster requests':
  dport  => ['5404', '5405'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '009 accept NFS requests':
  dport  => ['2049'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '010 accept TCP mountd requests':
  dport  => ['20048'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '011 accept UDP mountd requests':
  dport  => ['20048'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '012 accept TCP rpc-bind requests':
  dport  => ['111'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '013 accept UDP rpc-bind requests':
  dport  => ['111'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}

Create Apache User and NFS Mountpoint

Before we configure the cluster, we need to make sure that the nfs-utils package is installed and that the nfs-lock service is disabled, as it will be managed by Pacemaker.

The Apache user is created in order to match ownership and allow web servers to write to the NFS share.

The following needs applying to both cluster nodes:

package { 'nfs-utils': ensure => 'installed' }->
service { 'nfs-lock': enable => false }->
accounts::user { 'apache':
  comment   => 'Apache',
  uid       => '48',
  gid       => '48',
  shell     => '/sbin/nologin',
  password  => '!!',
  home      => '/usr/share/httpd',
  home_mode => '0755',
  locked    => false,
}->
file { '/nfsshare':
  ensure => 'directory',
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
}

Configure Pacemaker/Corosync on storage1.hl.local

We disable STONITH initially because the fence_pve fencing agent is simply not available yet. We will compile it later; however, it's not required in order to get the cluster into an operational state.

We use colocations to keep primitives together. While a colocation defines that a set of primitives must live together on the same node, order definitions define the order in which each primitive is started. This is important, as we want to make sure that cluster resources start in the correct order.

Note how we configure NFS exports to be available to two specific clients only: web1.hl.local and web2.hl.local. In reality there is no need for any other homelab server to have access to the NFS share.

We make the apache user the owner of the NFS share, and export it with no_all_squash.

class { 'corosync':
  authkey                  => '/etc/puppetlabs/puppet/ssl/certs/ca.pem',
  bind_address             => $::ipaddress,
  cluster_name             => 'nfs_cluster',
  enable_secauth           => true,
  enable_corosync_service  => true,
  enable_pacemaker_service => true,
  set_votequorum           => true,
  quorum_members           => [ 'storage1.hl.local', 'storage2.hl.local' ],
}
corosync::service { 'pacemaker':
  ## See: https://wiki.clusterlabs.org/wiki/Pacemaker
  version => '1.1',
}->
cs_property { 'stonith-enabled':
  value => 'false',
}->
cs_property { 'no-quorum-policy':
  value => 'ignore',
}->
cs_primitive { 'nfsshare':
  primitive_class => 'ocf',
  primitive_type  => 'Filesystem',
  provided_by     => 'heartbeat',
  parameters      => { 'device' => '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1', 'directory' => '/nfsshare', 'fstype' => 'ext4' },
}->
cs_primitive { 'nfsd':
  primitive_class => 'ocf',
  primitive_type  => 'nfsserver',
  provided_by     => 'heartbeat',
  parameters      => { 'nfs_shared_infodir' => '/nfsshare/nfsinfo' },
  require         => Cs_primitive['nfsshare'],
}->
cs_primitive { 'nfsroot1':
  primitive_class => 'ocf',
  primitive_type  => 'exportfs',
  provided_by     => 'heartbeat',
  parameters      => { 'clientspec' => 'web1.hl.local', 'options' => 'rw,async,no_root_squash,no_all_squash', 'directory' => '/nfsshare', 'fsid' => '0' },
  require         => Cs_primitive['nfsd'],
}->
cs_primitive { 'nfsroot2':
  primitive_class => 'ocf',
  primitive_type  => 'exportfs',
  provided_by     => 'heartbeat',
  parameters      => { 'clientspec' => 'web2.hl.local', 'options' => 'rw,async,no_root_squash,no_all_squash', 'directory' => '/nfsshare', 'fsid' => '0' },
  require         => Cs_primitive['nfsd'],
}->
cs_primitive { 'nfsvip':
  primitive_class => 'ocf',
  primitive_type  => 'IPaddr2',
  provided_by     => 'heartbeat',
  parameters      => { 'ip' => '10.11.1.31', 'cidr_netmask' => '24' },
  require         => Cs_primitive['nfsroot1','nfsroot2'],
}->
cs_colocation { 'nfsshare_nfsd_nfsroot_nfsvip':
  primitives => [
    [ 'nfsshare', 'nfsd', 'nfsroot1', 'nfsroot2', 'nfsvip' ],
  ],
}->
cs_order { 'nfsshare_before_nfsd':
  first   => 'nfsshare',
  second  => 'nfsd',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsd_before_nfsroot1':
  first   => 'nfsd',
  second  => 'nfsroot1',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsroot1_before_nfsroot2':
  first   => 'nfsroot1',
  second  => 'nfsroot2',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsroot2_before_nfsvip':
  first   => 'nfsroot2',
  second  => 'nfsvip',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
file { '/nfsshare/uploads':
  ensure => 'directory',
  owner  => 'apache',
  group  => 'root',
  mode   => '0755',
}

Configure Pacemaker/Corosync on storage2.hl.local

class { 'corosync':
  authkey                  => '/etc/puppetlabs/puppet/ssl/certs/ca.pem',
  bind_address             => $::ipaddress,
  cluster_name             => 'nfs_cluster',
  enable_secauth           => true,
  enable_corosync_service  => true,
  enable_pacemaker_service => true,
  set_votequorum           => true,
  quorum_members           => [ 'storage1.hl.local', 'storage2.hl.local' ],
}
corosync::service { 'pacemaker':
  version => '1.1',
}->
cs_property { 'stonith-enabled':
  value => 'false',
}

Cluster Status

If all went well, we should have our cluster up and running at this point.

[root@storage1 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Sun Apr 29 17:04:50 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Online: [ storage1.hl.local storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled
[root@storage2 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Sun Apr 29 17:05:04 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Online: [ storage1.hl.local storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled

Test cluster failover by putting the active node into standby:

[root@storage1 ~]# pcs node standby

Services should become available on the other cluster node:

[root@storage1 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Sun Apr 29 17:06:36 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Node storage1.hl.local: standby
Online: [ storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage2.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage2.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage2.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage2.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage2.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled

Do showmount on the virtual IP address:

[root@storage1 ~]# showmount -e 10.11.1.31
Export list for 10.11.1.31:
/nfsshare web1.hl.local,web2.hl.local
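As a final check, we can mount the export manually from one of the permitted clients; web1.hl.local and /mnt here are just the obvious test choices, not part of the cluster configuration:

[root@web1 ~]# mount -t nfs nfsvip.hl.local:/nfsshare /mnt
[root@web1 ~]# touch /mnt/uploads/test && ls -l /mnt/uploads/
[root@web1 ~]# umount /mnt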

Compile fence_pve on CentOS 7

This is where the automated part ends, I'm afraid. However, there is nothing stopping you from putting the manual steps below into a Puppet manifest.

Install Packages

# yum install git gcc make automake autoconf libtool pexpect python-requests

Download Source and Compile

# git clone https://github.com/ClusterLabs/fence-agents.git

Note the configure step: we are interested in compiling one fencing agent only, fence_pve.

# cd fence-agents/
# ./autogen.sh
# ./configure --with-agents=pve
# make && make install

Verify:

# fence_pve --version
4.1.1.51-6e6d

Configure Pacemaker to Use fence_pve

Big thanks to Igor Cicimov’s blog post which helped me to get it working with minimal effort.

To test the fencing agent, do the following:

[root@storage1 ~]# fence_pve --ip=10.11.1.5 --nodename=pve \
  --username=root@pam --password=passwd \
  --plug=208 --action=off

Where 10.11.1.5 is the IP of the Proxmox hypervisor, pve is the name of the Proxmox node, and the plug is the VM ID. In this case we fenced the storage2.hl.local node.
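Since the test above powered the VM off, the same agent can power it back on; same assumptions about the credentials as before:

[root@storage1 ~]# fence_pve --ip=10.11.1.5 --nodename=pve \
  --username=root@pam --password=passwd \
  --plug=208 --action=on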

To configure Pacemaker, we can create two STONITH configurations, one for each node that we want to be able to fence.

[root@storage1 ~]# pcs stonith create my_proxmox_fence207 fence_pve \
  ipaddr="10.11.1.5" inet4_only="true" vmtype="qemu" \
  login="root@pam" passwd="passwd" \
  node_name="pve" delay="15" \
  port="207" \
  pcmk_host_check=static-list \
  pcmk_host_list="storage1.hl.local"
[root@storage1 ~]# pcs stonith create my_proxmox_fence208 fence_pve \
  ipaddr="10.11.1.5" inet4_only="true" vmtype="qemu" \
  login="root@pam" passwd="passwd" \
  node_name="pve" delay="15" \
  port="208" \
  pcmk_host_check=static-list \
  pcmk_host_list="storage2.hl.local"

Verify:

[root@storage1 ~]# stonith_admin -L
my_proxmox_fence207
my_proxmox_fence208
2 devices found
[root@storage1 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage1.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Sun Apr 29 17:50:59 2018
Last change: Sun Apr 29 17:50:55 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
7 resources configured

Online: [ storage1.hl.local ]
OFFLINE: [ storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local
my_proxmox_fence207 (stonith:fence_pve): Started storage1.hl.local
my_proxmox_fence208 (stonith:fence_pve): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled

Note how the storage2.hl.local node is down, because we’ve fenced it.

If you decide to use this test configuration, do not forget to stop the Puppet agent on the cluster nodes, as it will disable STONITH again (we set stonith-enabled to false in the manifest).
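If you want STONITH to stay enabled while you test, one way (a sketch, not part of the original setup) is to pause the agent and flip the property by hand:

[root@storage1 ~]# puppet agent --disable "testing STONITH"
[root@storage1 ~]# pcs property set stonith-enabled=true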

For more info, do the following:

# pcs stonith describe fence_pve

This will give you a list of other STONITH options available.

Source

Create a user and grant permission to a database — The Ultimate Linux Newbie Guide

Here is something that, as a system or database admin, you'll do lots of: create a database, create a database user, and then assign the permissions for that user to operate on that database. We can do the same thing to grant permissions on other databases for that user too.

Here’s what you want to know:

First, log in to your database server as a database admin user. Usually this will be root (note this is not the same root user as your Linux server, this is the database root user).

$ mysql -u root -p

Once logged in, you can create the database and user, and assign the right privileges:

mysql> CREATE DATABASE somedatabase;
mysql> CREATE USER 'new_user'@'localhost' IDENTIFIED BY 'their_password';

mysql> GRANT ALL PRIVILEGES ON somedatabase.* TO 'new_user'@'localhost' IDENTIFIED BY 'their_password';
mysql> FLUSH PRIVILEGES;

Here’s what that all means:

CREATE – This command creates things like databases, users and tables. Note you can’t use usernames with dashes in them (underscores are OK).

GRANT – This command gives (grants) permission to databases, tables and so forth.

ALL PRIVILEGES – This tells it the user will have all standard privileges such as SELECT, INSERT, UPDATE, etc. The only privilege it does not provide is the use of the GRANT query, for obvious reasons!

ON somedatabase.* – this means grant all the privileges on the named database. If you change the * after the dot to a table name, routine or view, then the GRANT will apply to that specified object only.

TO 'new_user'@'localhost' – 'new_user' is the username of the user account you are creating. It is very important to ensure you use single quotes ('). The hostname 'localhost' tells MySQL what hosts the user can connect from. In most cases this will be localhost, because most MySQL servers are only configured to listen on their own host. Opening it up to other hosts (especially on the Internet) is insecure.

IDENTIFIED BY 'their_password' – This sets the password for that user; replace the text their_password with a sensible password!

FLUSH PRIVILEGES – this makes sure that any privileges granted are updated in mysql so that they are ready to use.
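To confirm the grants stuck, you can log in as the new user and list the privileges; a quick one-liner from the shell (CURRENT_USER() resolves to 'new_user'@'localhost'):

$ mysql -u new_user -p -e "SHOW GRANTS FOR CURRENT_USER();"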

Hope this helps. For more information on creating users, refer to the MySQL Reference Guide.

Source

How to Use Git Version Control System in Linux [Comprehensive Guide]

Version Control (revision control or source control) is a way of recording changes to a file or collection of files over time so that you can recall specific versions later. A version control system (or VCS in short) is a tool that records changes to files on a filesystem.

There are many version control systems out there, but Git is currently the most popular and frequently used, especially for source code management. Version control can actually be used for nearly any type of file on a computer, not only source code.

Version control systems/tools offer several features that allow individuals or a group of people to:

  • create versions of a project.
  • track changes accurately and resolve conflicts.
  • merge changes into a common version.
  • rollback and undo changes to selected files or an entire project.
  • access historical versions of a project to compare changes over time.
  • see who last modified something that might be causing a problem.
  • create a secure offsite backup of a project.
  • use multiple machines to work on a single project and so much more.

A project under a version control system such as Git will have mainly three sections, namely:

  • a repository: a database for recording the state of or changes to your project files. It contains all of the necessary Git metadata and objects for the new project. Note that this is normally what is copied when you clone a repository from another computer on a network or remote server.
  • a working directory or area: stores a copy of the project files which you can work on (make additions, deletions and other modification actions).
  • a staging area: a file (known as index under Git) within the Git directory, that stores information about changes, that you are ready to commit (save the state of a file or set of files) to the repository.

There are two main types of VCSs, with the main difference being the number of repositories:

  • Centralized Version Control Systems (CVCSs): here each project team member gets their own local working directory, however, they commit changes to just a single central repository.
  • Distributed Version Control Systems (DVCSs): under this, each project team member gets their own local working directory and Git directory where they can make commits. After an individual makes a commit locally, other team members can’t access the changes until he/she pushes them to the central repository. Git is an example of a DVCS.

In addition, a Git repository can be bare (repository that doesn’t have a working directory) or non-bare (one with a working directory). Shared (or public or central) repositories should always be bare – all Github repositories are bare.

Learn Version Control with Git

Git is a free and open source, fast, powerful, distributed, easy to use, and popular version control system that is very efficient with large projects, and has a remarkable branching and merging system. It is designed to handle data more like a series of snapshots of a mini filesystem, which is stored in a Git directory.

The workflow under Git is very simple: you make modifications to files in your working directory, then selectively add just those files that have changed, to the staging area, to be part of your next commit.

Once you are ready, you do a commit, which takes the files from the staging area and saves that snapshot permanently to the Git directory.
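As a minimal illustration of that cycle (the file name is arbitrary):

$ echo 'echo "hello"' > hello.sh      # modify a file in the working directory
$ git add hello.sh                    # stage the change in the index
$ git commit -m "Add hello.sh"        # save the staged snapshot to the repository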

To install Git in Linux, use the appropriate command for your distribution of choice:

$ sudo apt install git [On Debian/Ubuntu]
$ sudo yum install git [On CentOS/RHEL]

After installing Git, it is recommended that you tell Git who you are by providing your full name and email address, as follows:

$ git config --global user.name "Aaron Kili"
$ git config --global user.email "[email protected]"

To check your Git settings, use the following command.

$ git config --list

View Git Settings

Create a New Git Repository

Shared repositories or centralized workflows are very common and that is what we will demonstrate here. For example, we assume that you have been tasked to set up a remote central repository for system administrators/programmers from various departments in your organization, to work on a project called bashscripts, which will be stored under /projects/scripts/ on the server.

SSH into the remote server and create the necessary directory, create a group called sysadmins (add all project team members to this group, e.g. user admin), and set the appropriate permissions on this directory.

# mkdir -p /projects/scripts/
# groupadd sysadmins
# usermod -aG sysadmins admin
# chown :sysadmins -R /projects/scripts/
# chmod 770 -R /projects/scripts/

Then initialize a bare project repository.

# git init --bare /projects/scripts/bashscripts

Initialize Git Shared Repository

At this point, you have successfully initialized a bare Git directory which is the central storage facility for the project. Try to do a listing of the directory to see all the files and directories in there:

# ls -la /projects/scripts/bashscripts/

List Git Shared Repository

Clone a Git Repository

Now clone the remote shared Git repository to your local computer via SSH (you can also clone via HTTP/HTTPS if you have a web server installed and appropriately configured, as is the case with most public repositories on Github), for example:

$ git clone ssh://admin@remote_server_ip:/projects/scripts/bashscripts

To clone it to a specific directory (~/bin/bashscripts), use the command below.

$ git clone ssh://admin@remote_server_ip:/projects/scripts/bashscripts ~/bin/bashscripts

Clone Shared Git Repository to Local

You now have a local instance of the project in a non-bare repository (with a working directory). You can create the initial structure of the project (i.e. add a README.md file and sub-directories for different categories of scripts, e.g. recon to store reconnaissance scripts, sysadmin to store sysadmin scripts, etc.):

$ cd ~/bin/bashscripts/
$ ls -la

Create Git Project Structure

Check a Git Status Summary

To display the status of your working directory, use the status command, which shows you any changes you have made, which files are not being tracked by Git, which changes have been staged, and so on.

$ git status

Check Git Status

Git Stage Changes and Commit

Next, stage all the changes using the add command with the -A switch and do the initial commit. The -a flag instructs commit to automatically stage files that have been modified, and -m is used to specify a commit message:

$ git add -A
$ git commit -a -m “Initial Commit”

Do Git Commit

Publish Local Commits to Remote Git Repository

As the project team lead, now that you have created the project structure, you can publish the changes to the central repository using the push command as shown.

$ git push origin master

Push Commit to Central Git Repository

Right now, your local git repository should be up-to-date with the project central repository (origin). You can confirm this by running the status command once more.

$ git status

Check Git Status

You can also inform your colleagues to start working on the project by cloning the repository to their local computers.

Create a New Git Branch

Branching allows you to work on a feature of your project or fix issues quickly without touching the codebase (master branch). To create a new branch and then switch to it, use the branch and checkout commands respectively.

$ git branch latest
$ git checkout latest

Alternatively, you can create a new branch and switch to it in one step using the checkout command with the -b flag.

$ git checkout -b latest

You can also create a new branch based on another branch, for instance:

$ git checkout -b latest master

To check which branch you are in, use the branch command (an asterisk character indicates the active branch):

$ git branch

Check Active Branch

After creating and switching to the new branch, make some changes under it and do some commits.

$ vim sysadmin/topprocs.sh
$ git status
$ git add sysadmin/topprocs.sh
$ git commit -a -m 'modified topprocs.sh'

Merge Changes From One Branch to Another

To merge the changes under the branch latest into the master branch, switch to the master branch and do the merge.

$ git checkout master
$ git merge latest

Merge Latest Branch into Master

If you no longer need a particular branch, you can delete it using the -d switch.

$ git branch -d latest

Download Changes From Remote Central Repository

Assuming your team members have pushed changes to the central project repository, you can download any changes to your local instance of the project using the pull command.

$ git pull origin
OR
$ git pull origin master #if you have switched to another branch

Pull Changes from Central Repository

Inspect Git Repository and Perform Comparisons

In this last section, we will cover some useful Git features that keep track of all activities that happened in your repository, thus enabling you to view the project history.

The first feature is Git log, which displays commit logs:

$ git log

View Git Commit Logs

Another important feature is the show command, which displays various types of objects (such as commits, tags, trees, etc.):

$ git show

Git Show Objects

The third vital feature you need to know is the diff command, used to compare or show the difference between branches, display changes between the working directory and the index, changes between two files on disk, and much more.

For instance, to show the difference between the master and latest branches, you can run the following command.

$ git diff master latest

Show Difference Between Branches


Summary

Git allows a team of people to work together using the same file(s), while recording changes to the file(s) over time so that they can recall specific versions later.

This way, you can use Git for managing source code, configuration files or any file stored on a computer. You may want to refer to the Git Online Documentation for further documentation.

Source

Linux Top 3: Parted Magic, Quirky and Ultimate Edition

January 16, 2017
By Sean Michael Kerner

1) Parted Magic 2017_01_08

Parted Magic is a very niche Linux distribution that many users first discover when they're trying either to re-partition a drive or to recover data from an older system. The new Parted Magic 2017_01_08 release is an incremental update that follows the very large 2016_10_18 update, which provided 800 updates. In contrast, the big updates in the new release are:

  • Parted Magic now ships with ZFS on Linux kernel drivers!
  • Added Programs: grub-customizer-5.0.6, x11vnc-0.9.13, fslint-2.44, zerofree-1.0.4, spl-solaris-0.7.0-git12172016, zfs-on-linux-0.7.0-git12172016, and bleachbit-1.12.
  • Updated Programs: bind-9.10.4_P4, btrfs-progs-v4.9, curl-7.51.0, flashplayer-plugin-24.0.0.186, glibc-zoneinfo-2016j, gparted-0.27.0, hdparm-9.50, kernel-firmware-20170106git, libfm-1.2.5, libpng-1.6.27, firefox-50.1.0, ntp-4.2.8p9, pcmanfm-1.2.5, Python-2.7.13, samba-4.4.8, tigervnc-1.7.0.

2) Quirky 8.1.6

The Quirky Linux distribution is part of the Puppy Linux family of distributions, providing users with a lightweight operating system. The new Quirky 8.1.6 update supports Ubuntu 16.04-based applications; Quirky is built using the woofQ Quirky Linux build system.

Quirky Linux 8.1.6 x86_64 is codenamed “Xerus” and is built using the woofQ Quirky Linux build system, with the help of Ubuntu 16.04 binary packages. Thus, Xerus has compatibility with all of the Ubuntu repositories. The Linux kernel is version 4.4.40 and SeaMonkey is upgraded to version 2.46. Quirky is a fork of Puppy Linux, and is mainly differentiated by being a “full installation” only, with special snapshot and recovery features, and Service Pack upgrades.

3) Ultimate Edition 5.1

The Ultimate Edition Linux distribution is yet another Ubuntu 16.04-derived distribution.

“Ultimate Edition 5.1 was built from the Ubuntu 16.04 Xenial Xerius tree using a combination of Tmosb (TheeMahn’s Operating System Builder) & work by hand. Tmosb is also included in this release (1.9.7), allowing you to do the same.”

Sean Michael Kerner is a senior editor at LinuxPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Arch Linux – News: Deprecation of ABS tool and rsync endpoint

Due to the high maintenance cost of scripts related to the Arch Build System, we have decided to deprecate the abs tool, and thus rsync as a way of obtaining PKGBUILDs.

The asp tool, available in [extra], provides similar functionality to abs. asp export pkgname can be used as a direct alternative; more information about its usage can be found in the documentation. Additionally, Subversion sparse checkouts, as described here, can be used to achieve a similar effect. For fetching all PKGBUILDs, the best way is cloning the svntogit mirrors.
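For example, to fetch the build files for a single package with asp (the package name here is just an illustration):

# pacman -S asp
$ asp export linux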

While the extra/abs package has already been dropped, the rsync endpoint (rsync://rsync.archlinux.org/abs) will be disabled by the end of the month.

Source

How to upgrade to LMDE 3 – The Linux Mint Blog

If you’ve been waiting for this I’d like to thank you for your patience.

It is now possible to upgrade the Cinnamon edition of LMDE 2 to version 3.

The upgrade instructions are available at: https://community.linuxmint.com/tutorial/view/2419

Take your time

Read all the instructions and take the time to understand them, ask for help if you’re stuck.

The instructions will ask you to make backups and to prepare system snapshots. Don’t rush into upgrading and do not take shortcuts.

Don’t panic

If you’re stuck or wondering about something don’t hesitate to ask for help:

  • You can post here in the comments section.
  • You can ask for help in the forums.
  • You can connect to the IRC (from within Linux Mint, launch Menu->Internet->Hexchat). If you’re new to IRC, please read this tutorial.

Source

Linux Scoop — Peppermint OS 9

Peppermint OS 9 – See What’s New

Peppermint OS 9 is the latest release of the Ubuntu-based distribution featuring a desktop environment mashup of Xfce and LXDE components. The latest release nearly completes a process begun several upgrades ago of using more Xfce elements and fewer LXDE components.

Based on Ubuntu 18.04 LTS (Bionic Beaver), Peppermint OS 9 uses the Linux 4.15 kernel and supports both 32-bit and 64-bit hardware architectures. Highlights of this release include a new default system theme based on the popular Arc GTK+ theme, and support for both Snap and Flatpak universal binary packages via GNOME Software, which is now displayed in the main menu.

Also installed by default are the Menulibre menu editor and the Xfce Panel Switch utility; xfce4-screenshooter is the default screenshot utility instead of pyshot, and xfce4-display-settings replaces the lxrandr utility for monitor settings. The Htop system monitor utility is available as well and has its own menu item, and Mozilla Firefox is now the default web browser instead of Chromium.

Source

How to get mail statistics from your postfix mail logs

pflogsumm is an amazing tool and will provide you with the following details:

  • Total number of:
    • Messages received, delivered, forwarded, deferred, bounced and rejected
    • Bytes in messages received and delivered
    • Sending and Recipient Hosts/Domains
    • Senders and Recipients
    • Optional SMTPD totals for number of connections, number of hosts/domains connecting, average connect time and total connect time
  • Per-Day Traffic Summary (for multi-day logs)
  • Per-Hour Traffic (daily average for multi-day logs)
  • Optional Per-Hour and Per-Day SMTPD connection summaries
  • Sorted in descending order:
    • Recipient Hosts/Domains by message count, including:
      • Number of messages sent to recipient host/domain
      • Number of bytes in messages
      • Number of defers
      • Average delivery delay
      • Maximum delivery delay
    • Sending Hosts/Domains by message and byte count
    • Optional Hosts/Domains SMTPD connection summary
    • Senders by message count
    • Recipients by message count
    • Senders by message size
    • Recipients by message size

    with an option to limit these reports to the top nn.

  • A Semi-Detailed Summary of:
    • Messages deferred
    • Messages bounced
    • Messages rejected
  • Summaries of warnings, fatal errors, and panics
  • Summary of master daemon messages

Installation:

Installation is very simple: just download the package and unzip it.

  • wget http://jimsun.linxnet.com/downloads/pflogsumm-1.1.5.tar.gz
  • tar -zxf pflogsumm-1.1.5.tar.gz
  • chown root:root pflogsumm-1.1.5

Generate the statistics:

cat /var/log/maillog | ./pflogsumm.pl
(The above command will generate detailed statistics as follows.)
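pflogsumm can also read the log file directly and limit the report to a single day, which is handy in a nightly cron job; for example:

# ./pflogsumm.pl -d yesterday /var/log/maillog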

Grand Totals
————
messages

118 received
319 delivered
1 forwarded
6 deferred (1597 deferrals)
18 bounced
20 rejected (5%)
0 reject warnings
0 held
0 discarded (0%)

5452k bytes received
277987k bytes delivered
76 senders
49 sending hosts/domains
128 recipients
37 recipient hosts/domains

Per-Day Traffic Summary
date          received  delivered  deferred  bounced  rejected
--------------------------------------------------------------
Jan 13 2018         51        251       476       14         9
Jan 14 2018         17         16       522        2         5
Jan 15 2018         43         45       527        2         6
Jan 16 2018          7          7        72

Per-Hour Traffic Daily Average
time       received  delivered  deferred  bounced  rejected
--------------------------------------------------------------
0000-0100         0          1        19        0         0
0100-0200         1          1        13        0         0
0200-0300         1          1        13        0         0
0300-0400         1          1        19        0         0
0400-0500         1          1        14        0         0
0500-0600         0          0         7        0         0
0600-0700         1          1        13        0         0
0700-0800         1          1        13        0         0
0800-0900         0          0         7        0         0
0900-1000         2          2        14        0         1
1000-1100         5         51        32        3         0
1100-1200         1          1        33        0         0
1200-1300         1          4        14        0         0
1300-1400         2          2        20        0         0
1400-1500         2          2        20        0         0
1500-1600         4          4        14        0         0
1600-1700         1          1        20        0         0
1700-1800         2          2        20        0         1
1800-1900         1          2        14        1         0
1900-2000         1          1        13        0         2
2000-2100         1          1        19        0         0
2100-2200         1          1        19        0         0
2200-2300         1          1        13        0         0
2300-2400         1          1        19        0         1


Source

Linux for freshers: How do I login over ssh without using passwordless RSA

Linux system admins normally log in to Linux servers either by supplying a password or by using key-based authentication. sshpass is a tool which allows us to automatically supply the password to the command prompt so that automated scripts can run as desired. sshpass supplies the password to ssh using a dedicated tty, fooling ssh into believing that an interactive user is supplying the password.

Install sshpass under Debian / Ubuntu Linux

Type the following command:

$ sudo apt-get install sshpass

Install sshpass under RHEL/CentOS Linux

$ sudo yum install sshpass

If you are using Fedora Linux, type:

$ sudo dnf install sshpass

Install sshpass under Arch Linux

$ sudo pacman -S sshpass

Install sshpass under OpenSUSE Linux

$ sudo zypper install sshpass

Install sshpass under FreeBSD Unix

To install the port, enter:

# cd /usr/ports/security/sshpass/ && make install clean

To add the package, run:

# pkg install sshpass

Getting Help :

# sshpass -h

Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters

  • -f filename Take password to use from file
  • -d number Use number as file descriptor for getting password
  • -p password Provide password as argument (security unwise)
  • -e Password is passed as env-var "SSHPASS". With no parameters, the password will be taken from stdin
  • -h Show help (this screen)
  • -V Print version information

At most one of -f, -d, -p or -e should be used

How do I use sshpass in Linux or Unix?

Log in to the ssh server example.com with the password redhat@1234:

$ sshpass -p 'redhat@1234' ssh username@example.com

For shell script you may need to disable host key checking:

$ sshpass -p 'redhat@1234' ssh -o StrictHostKeyChecking=no username@example.com

To run a command on the remote server, e.g. to check uptime:

$ sshpass -p 'redhat@1234' ssh username@example.com "uptime"

Sample output

01:04:35 up 126 days, 3:34, 2 users, load average: 0.50, 0.54, 0.55

Reading password from file

Another option is to read the password from a file using the -f option.

The syntax is:

sshpass -f fileNameHere ssh user@server
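For example, keeping the password in a file readable only by you (the file name is arbitrary):

$ echo 'redhat@1234' > ~/.sshpass
$ chmod 600 ~/.sshpass
$ sshpass -f ~/.sshpass ssh username@example.com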

Source

Warframe: Game Guide


They were called Tenno. Warriors of blade and gun: masters of the Warframe armor. Those that survived the old war were left drifting among the ruins. Now they are needed once more.

The Grineer, with their vast armies, are spreading throughout the solar system. A call echoes across the stars summoning the Tenno to an ancient place. They summon you.

Allow the Lotus to guide you. She has rescued you from your cryostasis chamber and given you a chance to survive. The Grineer will find you; you must be prepared. The Lotus will teach you the ways of the Warframes and the secrets to unlocking their powers.

COME TENNO, YOU MUST JOIN THE WAR.

 


Follow my step-by-step guide on installing Warframe in Linux with Steam-Proton.

Note: This guide applies to Steam-Proton, only on Steam. Proton delivers DXVK and Wine, tools by Steam, WineHQ and CodeWeavers.

Tips
To learn more about Proton, see Proton Explained.

I'm using Arch. You can use Ubuntu or any distro of choice.

Arch 64bit Manjaro
Kernel-4.18.9-1
Steam-Proton-3.7-6 Beta
DXVK-patched

Steam Proton Installation Setup

 

Please have the following installed on your Linux system:

Mesa (Intel & AMD only)
Nvidia (with the most up-to-date drivers installed, 396.54)
Steam
Vulkan
Vulkan-32bit

Note: If your game starts in windowed mode every time and your saved resolution does not take effect, remove -fullscreen:0 from line 422 in Launcher.sh, found under:

Code:

/.local/share/Steam/steamapps/common/Warframe/Tools/

For Intel and AMD users with no Nvidia:

Code:

sudo add-apt-repository ppa:paulo-miguel-dias/mesa
sudo apt dist-upgrade
sudo apt install mesa-vulkan-drivers mesa-vulkan-drivers:i386

Install Steam on Ubuntu and Debian (you might need to add a repo if you're on an older Ubuntu):

 

Code:

sudo apt install steam

Install Steam on Arch Linux:

Code:

sudo pacman -S steam

Install Vulkan and Vulkan-32bit

Install Vulkan on Ubuntu and Debian

Code:

sudo apt install libvulkan1 libvulkan1:i386

Install Vulkan on Arch Linux:

Code:

sudo pacman -S vulkan-icd-loader lib32-vulkan-icd-loader

Now that that is done and you have everything, open Steam and go to Settings.

Now go to your account and change to Beta updates for Steam Play. It will restart Steam and update. Let it do its thing; it shouldn't take more than a few seconds.


Once done, go back to Settings > Steam Play, select Proton 3.7-6 Beta and checkmark all three boxes.


It will restart Steam again. Now Proton is set.

Now go to the Store and find Warframe; you should now see an install button. Go to Library and it will say installing, with Steam Proton next to the play button. Wait for it to install, then try to launch it. (It won't launch.) This is OK; we needed an initial start for Proton.

Now go to your $HOME directory and find .local/share/Steam/steamapps/common/. Now find the Proton 3.7 Beta folder.

Go here and change user_settings.sample.py to user_settings.py

Once done, open the python file with a text editor (I'm using Mousepad). Make sure to uncomment DXVK_HUD if you want it.


Now that this is complete.

We have to go get some files from an awesome guy who has been making Warframe work in Lutris for a long time now, using Wine and DXVK. His name is GloriousEggroll. If you like what he has done, thank him and donate to him.

Note: GloriousEggroll says

You will need to plug in a controller. Proton auto-closes warframe after a few minutes if it does not detect one. You do not have to use the controller to play.
I'm using an Xbox 360 controller; I have it plugged in but am not using it.

I haven't tested if it works without one, but any USB controller that works with Steam will do; you do not have to use it if you don't want to.

Go here and download https://gitlab.com/GloriousEggroll/warframe-linux/tree/steamplay-proton. Pick and choose your format; I chose the zip archive. Then extract it somewhere you want to keep it; mine is in Downloads.


Once you have those files extracted, we need to copy all of them to the following location, replacing all (if a dialog box appears):

/home/username/.local/share/Steam/steamapps/common/Warframe/Tools/

Before you copy the files in, copy Launcher.exe and rename the copy to Launcher.backup (or whatever), just so you have a backup of the original.


Now you are good and done and we are finished. Start Warframe in Steam and you should get a white terminal box. It will update everything; let it do its thing. It should only take 10-20 minutes depending on your CPU, probably not even that. Once it's done, it will open Warframe in windowed mode and you will have to set your resolution.


Every time you start Warframe you will get this white terminal box; it will check updates and then open Warframe. This is good.

Optimization
If you want higher FPS, turn off Vsync, but I do not recommend it unless you're not getting the FPS you want. Make sure you know what FPS you get in games and know your internet speed.

Then you can set it to the right FPS. If it's too high, your CPU and GPU usage will go up. Unless the Warframe devs have fixed this.


 

Last edited: Sep 24, 2018

Source
