Aptik – A Tool to Backup/Restore Your Favourite PPAs and Apps in Ubuntu

As we all know, Ubuntu has a six-month release cycle for new versions. After each fresh install, all the PPAs and packages of your choice need to be re-added. To save you from repeating that work, here we bring you a fantastic tool called ‘Aptik‘.

Aptik (Automated Package Backup and Restore) is a GUI application that lets you back up your favourite PPAs and packages. It is very difficult to remember which packages are installed and where they were installed from. With Aptik, we can back up and restore all the PPAs before re-installing or upgrading the OS.

Install Aptik in Ubuntu

Backup/Restore PPAs and Apps

Aptik is an open source package that simplifies the backup and restore of PPAs, applications and packages after a fresh installation or upgrade of Debian-based distributions such as Ubuntu, Linux Mint and other Ubuntu derivatives.

Features of Aptik

  1. Backup custom PPAs and apps
  2. Backup Themes and icons
  3. Backup applications installed via APT cache
  4. Apps installed from Ubuntu Software Centre
  5. Aptik command-line options

How to Backup PPA’s and Packages on Old Systems

By default, the Aptik tool is not available in the Ubuntu Software Centre; you need to use a PPA to install it. Add the following PPA to your system, update the local repository and install the package as shown.

Installation of Aptik

$ sudo apt-add-repository -y ppa:teejee2008/ppa
$ sudo apt-get update
$ sudo apt-get install aptik      [Commandline]
$ sudo apt-get install aptik-gtk  [GUI]

Start ‘Aptik‘ from the applications menu.

Start Aptik

Start Aptik

Create Backup Directory

Create or select a backup directory where all your backed-up sections will be stored for re-use on your new install.

Aptik Backup-Directory

Select Backup Directory

Backup Software Sources

Click the ‘Backup‘ button for Software Sources. A list of installed third-party PPAs will be displayed, along with the names of the packages installed from each PPA.

Aptik Software Sources

Backup Software Sources

Note: A PPA with a green icon is active and has packages installed from it, whereas a yellow icon indicates a PPA that is active but has no packages installed.

Select your favourite PPAs and click on the ‘Backup‘ button to create backup. All PPAs will be stored in a file called ‘ppa.list‘ in the selected backup directory.
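For the curious, what ends up in ‘ppa.list‘ can be approximated by hand. The sketch below is illustrative only (the sample source file and PPA name are made up, and this is not Aptik's actual implementation): it scrapes ppa:owner/name identifiers out of an APT source list.

```shell
# Illustrative only: extract "ppa:owner/name" identifiers from an APT
# source list. A sample file is created here; on a real system you would
# point SRC_DIR at /etc/apt/sources.list.d instead.
SRC_DIR=$(mktemp -d)
cat > "$SRC_DIR/teejee2008-ppa.list" <<'EOF'
deb http://ppa.launchpad.net/teejee2008/ppa/ubuntu trusty main
EOF

grep -rhoE 'ppa\.launchpad\.net/[^/ ]+/[^/ ]+' "$SRC_DIR" \
  | sed -E 's#ppa\.launchpad\.net/([^/]+)/([^/]+)#ppa:\1/\2#' \
  | sort -u > ppa.list
cat ppa.list    # ppa:teejee2008/ppa
```

Restoring such a list is then just a loop of `sudo apt-add-repository` calls over each line.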

Backup Downloaded Packages (APT Cache)

Click the ‘Backup‘ button to copy all the downloaded packages to the backup folder.

Aptik Downloaded Packages

Backup Downloaded Packages

Note: All the downloaded packages stored under your ‘/var/cache/apt/archives‘ folder will be copied to the backup folder.

This step is only useful if you are re-installing the same version of the Linux distribution. It can be skipped when upgrading the system, since all the packages for the new release will be newer than the packages in the system cache.
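Done by hand, this step is essentially a copy of the cached .deb files. A minimal sketch (the directories and the dummy package file below are stand-ins, so nothing real is touched):

```shell
# Sketch of the APT-cache backup step: copy every cached .deb into a
# backup folder; restore later by copying them back before installing.
# Temporary directories and a dummy .deb stand in for the real paths.
CACHE_DIR=$(mktemp -d)        # stands in for /var/cache/apt/archives
BACKUP_DIR=$(mktemp -d)/archives
touch "$CACHE_DIR/aptik_1.0_amd64.deb"   # dummy package file
mkdir -p "$BACKUP_DIR"
cp -n "$CACHE_DIR"/*.deb "$BACKUP_DIR"/   # -n skips files already backed up
ls "$BACKUP_DIR"
```

On a real system you would point CACHE_DIR at /var/cache/apt/archives; restoring the files there before installing saves re-downloading them.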

Backup Software Selections

Clicking the ‘Backup‘ button will show a list of all installed top-level packages.

Aptik Software Selections

Software Selections

Note: By default, all the packages installed by the Linux distribution are un-selected, because those packages are part of the distribution itself. If required, they can be selected for backup.

By default, all extra packages installed by the user are marked as selected, because those packages were installed via the Software Centre or by running the apt-get install command. If required, they can be un-selected.

Select your favourite packages to back up and click the ‘Backup‘ button. A file named ‘packages.list‘ will be created in the backup directory.
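On a Debian-based system, a rough command-line equivalent of this package list is apt-mark, which knows which packages were installed explicitly rather than pulled in as dependencies (the restore one-liner in the comment is a sketch, not Aptik's own mechanism):

```shell
# Approximate "packages.list" by hand: apt-mark lists packages that were
# installed explicitly (not as dependencies of something else).
apt-mark showmanual | sort > packages.list
wc -l packages.list
# restore later with something like:
#   xargs -a packages.list sudo apt-get install -y
```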

Backup Themes and Icons

Click the ‘Backup‘ button to list all the installed themes and icons from the ‘/usr/share/themes‘ and ‘/usr/share/icons‘ directories. Next, select your themes and click on the ‘Backup‘ button to backup.

Aptik Themes Icons

Backup Themes Icons

Aptik Command-line Options

Run ‘aptik --help’ on the terminal to see the full list of available options.

Aptik Command-line

Command-line Options

To restore those backups, you will need to install Aptik from its own PPA on the newly installed system. After this, hit the ‘Restore’ button to restore all your PPAs, packages, themes and icons to your freshly installed system.

Conclusion

You may be wondering why such cool stuff is not available on Ubuntu by default. Ubuntu offers something similar via ‘Ubuntu One‘, but only as a paid service. What do you think about this tool? Share your views in the comment section.

Source: Aptik

How to Auto Backup Files to USB Media When Connected

A backup is the last defense against data loss, offering a means to restore the original data. You can use removable media such as an external hard drive or USB flash disk, a shared network folder, or a remote host to back up your data. It’s very easy (and equally essential) to automatically back up your important files without having to remember to do so.

Read Also: 24 Outstanding Backup Tools for Linux Systems in 2018

In this article, we will learn how to auto backup data to a removable media after connecting it to your Linux machine. We will test with an external disk. This is a basic guide to get you started with using udev for real-life solutions.

For the purpose of this article, we need a modern Linux system with:

  1. systemd systems and services manager
  2. udev device manager
  3. rsync backup tool

How to Configure Udev Rules for Removable Media

Udev is a device manager that enables you to define rules which can, among other things, trigger the execution of a program or script when a device is added to or removed from a running system, as part of device event handling. We can use this feature to execute a backup script after a removable media is added to the running system.

Before we configure the actual rule for device event handling, we need to provide udev with some attributes of the removable media that will be used for the backup. Connect the external disk to the running system and run the following lsusb command to identify its vendor and product ID.

For testing purposes, we will be using a 1TB external hard disk, as shown.

$ lsusb

Find Device Vendor ID of Removable Media

Find Device Vendor ID of Removable Media

From the output of the above command, our device vendor ID is 125f, which we will specify in the udev rules as explained below.
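The ID field has the form vendor:product, so the vendor half can be cut out with a little sed. The lsusb line below is a canned sample (the device name is illustrative); on your system you would pipe real lsusb output instead:

```shell
# Parse the vendor ID (the part before the colon in "ID 125f:xxxx") out
# of an lsusb line. A canned sample line is used here for illustration.
line='Bus 002 Device 003: ID 125f:a31a External USB 3.0 Disk'
vendor_id=$(printf '%s\n' "$line" | sed -E 's/.* ID ([0-9a-f]{4}):.*/\1/')
echo "$vendor_id"    # 125f
```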

First remove the connected media from the system, then create a new udev rules file called 10.autobackup.rules under the directory /etc/udev/rules.d/.

The 10 in the filename specifies the order of rules execution. The order in which rules are parsed is important; you should always create custom rules to be parsed before the defaults.

$ sudo vim /etc/udev/rules.d/10.autobackup.rules

Then add the following rule in it:

SUBSYSTEM=="block", ACTION=="add", ATTRS{idVendor}=="125f", SYMLINK+="external%n", RUN+="/bin/autobackup.sh"

Let’s briefly explain the above rule:

  • "==": is an operator to compare for equality.
  • "+=": is an operator to add the value to a key that holds a list of entries.
  • SUBSYSTEM: matches the subsystem of the event device.
  • ACTION: matches the name of the event action.
  • ATTRS{idVendor}: matches sysfs attribute values of the event device, which is the device vendor ID.
  • RUN: specifies a program or script to execute as part of the event handling.

Save the file and close it.

Create an Auto Backup Script

Now create an auto backup script that will automatically back up files to the removable USB drive when it is connected to the system.

$ sudo vim /bin/autobackup.sh 

Now copy and paste the following script, making sure to replace the values of BACKUP_SOURCE, BACKUP_DEVICE and MOUNT_POINT in the script.

#!/bin/bash
BACKUP_SOURCE="/home/admin/important"
BACKUP_DEVICE="/dev/external1"
MOUNT_POINT="/mnt/external"

#check if mount point directory exists, if not create it
if [ ! -d "$MOUNT_POINT" ] ; then
	/bin/mkdir "$MOUNT_POINT";
fi

/bin/mount -t auto "$BACKUP_DEVICE" "$MOUNT_POINT"

#run a differential backup of files from the source to the mount point
/usr/bin/rsync -auz "$BACKUP_SOURCE" "$MOUNT_POINT" && /bin/umount "$MOUNT_POINT"
exit 0

Then make the script executable with the following command.

$ sudo chmod +x /bin/autobackup.sh

Next, reload udev rules using following command.

$ sudo udevadm control --reload

The next time you connect your external hard disk or whatever device you configured to the system, all your documents from the specified location should be auto backed up to it.

Note: How well this works may depend on the filesystem on your removable media and on the udev rules you write, especially how precisely you capture the device attributes.

For more information, see the udev, mount and rsync man pages.

$ man udev
$ man mount 
$ man rsync 

You might also like to read the following Linux backup-related articles.

  1. rdiff-backup – A Remote Incremental Backup Tool for Linux
  2. Tomb – A File Encryption and Personal Backup Tool for Linux
  3. System Tar and Restore – A Versatile Backup Script for Linux
  4. How to Create Bandwidth-efficient Backups Using Duplicity in Linux
  5. Rsnapshot – A Local/Remote Backup Tool for Linux
  6. How to Sync Two Apache Web Servers/Websites Using Rsync

That’s all for now! In this article, we have explained how to auto backup data to a removable media after connecting it to your Linux machine. We would like to hear from you via the feedback form below.

Source

Unison – An Ultimate Local/Remote File Synchronization Tool for Linux

File synchronization is the process of mirroring files and data in two or more locations according to certain protocols. Files and data are the most valuable things in this era of Information Technology. Through file synchronization, we ensure that one or more copies of our priceless data are always available in case of a disaster of any kind, or when we need to work in many locations.

A good file synchronizer should have the features listed below:

  1. Cryptographic synchronization, as a security measure.
  2. A good data compression ratio.
  3. A solid algorithm to check for data duplication.
  4. Keeping track of changes at the file source.
  5. Scheduled synchronization.

One such tool is Unison. In this article we will be discussing Unison in detail, along with its features, functionality and a lot more.

What is Unison?

Unison is a cross-platform file synchronization application that is useful for synchronizing data between two or more locations, be they computers or storage devices.

Features of Unison

  1. Released under the General Public License (GPL).
  2. Open source and cross-platform, available for Linux, Unix, BSD, Windows and Mac.
  3. Makes the same version of a file available across different machines, regardless of where it was last modified.
  4. Cross-platform synchronization is possible, i.e., a Windows machine can be synchronized with a *nix server.
  5. Communicates over the standard TCP/IP protocol, i.e., synchronization is possible between any two machines over the internet, regardless of geographical location.
  6. Smart management – detects when a file has been modified at both ends and shows the conflict to the user.
  7. Secure SSH connection – encrypted data transfer.
  8. Deploys the rsync algorithm, so only the modified part of a file is transferred and overwritten. Hence, it is fast in execution and maintenance.
  9. Robust in nature.
  10. Written in the OCaml (Objective Caml) programming language.
  11. Mature and stable; no active development required.
  12. It is a user-level program, i.e., the application doesn’t need superuser privileges.
  13. It is known for its clear and precise specification.

Installation of Unison in Linux

The current stable release (Unison-2.40.102) can be downloaded from the link below:

Download Unison 2.40.102 Stable

Alternatively, we can also install “Unison” from the repositories, if it is available there, using the apt or yum commands as shown below.

On Debian/Ubuntu/Linux Mint

Open a terminal using “Ctrl+Alt+T” and run the following command.

$ sudo apt-get install unison

On RHEL/CentOS/Fedora

First, enable EPEL repository and then install using the following command.

$ sudo yum install unison

NOTE: The above command will install Unison without a GUI. If you need to install Unison with GUI support, install the ‘unison-gtk‘ package (only available for Debian-based distros) using the command below.

# apt-get install unison-gtk

How to Use Unison

Unison is used to synchronize a set of files in a directory tree to another location with similar structure, which may be a local host or remote host.

Local File Synchronization

Let’s create 5 files on your Desktop and then synchronize them to a folder called ‘desk-back‘ in your home directory.

$ cd Desktop/
$ touch 1.txt 2.txt 3.txt 4.txt 5.txt
$ ls

1.txt 2.txt 3.txt 4.txt 5.txt
$ mkdir /home/server/desk-back

Now run the ‘unison‘ command to synchronize your Desktop files to the ‘desk-back‘ directory in your home directory.

$ unison /home/server/Desktop /home/server/desk-back/
Sample Output
Contacting server...
Looking for changes
Warning: No archive files were found for these roots, whose canonical names are:
/home/server/Desktop
/home/server/desk-back
This can happen either
because this is the first time you have synchronized these roots,
or because you have upgraded Unison to a new version with a different
archive format.
Update detection may take a while on this run if the replicas are
large.
Unison will assume that the 'last synchronized state' of both replicas
was completely empty. This means that any files that are different
will be reported as conflicts, and any files that exist only on one
replica will be judged as new and propagated to the other replica.
If the two replicas are identical, then no changes will be reported.
If you see this message repeatedly, it may be because one of your machines
is getting its address from DHCP, which is causing its host name to change
between synchronizations. See the documentation for the UNISONLOCALHOSTNAME
environment variable for advice on how to correct this.
Donations to the Unison project are gratefully accepted:
http://www.cis.upenn.edu/~bcpierce/unison
Press return to continue.[]
...
...
Saving synchronizer state
Synchronization complete at 13:52:15 (5 items transferred, 0 skipped, 0 failed)

Now check the location /home/server/desk-back to verify whether the synchronization process was successful.

$ cd /home/server/desk-back/
$ ls

1.txt 2.txt 3.txt 4.txt 5.txt

Remote File Synchronization

For remote file synchronization, you must have the same version of Unison installed on both the local and the remote server. Run the following command to verify that the local unison can start and connect to the remote unison server.

$ unison -testServer /home/ravisaive/Desktop/ ssh://172.16.25.125//home/ravisaive/Desktop/
Sample Output
Contacting server...
ravisaive@172.16.25.125's password: 
Connected [//tecmint//home/ravisaive/Desktop -> //tecmint//home/ravisaive/Desktop]

The above output indicates that the remote server is connected successfully. Now sync the files using the command below.

$ unison -batch /home/ravisaive/Desktop/ ssh://172.16.25.125//home/ravisaive/Desktop/
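Command lines like the one above can grow long; Unison also reads preferences from a profile file under ~/.unison/. A hypothetical profile equivalent to the command above (the host, paths and extra preferences are placeholders, not a tested setup) might look like:

```
# ~/.unison/default.prf  (hypothetical example)
root = /home/ravisaive/Desktop
root = ssh://172.16.25.125//home/ravisaive/Desktop
batch = true
times = true
```

With a profile in place, running plain `unison` picks it up automatically.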

Executing GUI Unison

The first step is to set up a profile, which requires you to provide basic information such as the profile name, what you want to synchronize, the source and destination locations, etc.

To start Unison GUI, run the following command on the terminal.

$ unison-gtk

Create New Unison Profile

Create New Profile

Enter Unison Profile Description

Enter Profile Description

Select Unison Synchronization Type

Select Sync Type

Select Sync Directories

Select Sync Directories

Select Partition Type

Select Partition Type

Unison Profile Created

New Profile Created

Select Created Profile

Select Created Profile

Unison Sync Message

Unison Sync Message

Once the profile is created and the source as well as the destination are entered, we are greeted with the window below.

Unison File Synchronization Process

File Synchronization Process

Just select all the files and click OK. The files will start synchronizing in both directions, based on the last-update timestamp.

Conclusion

Unison is a wonderful tool that makes custom bidirectional synchronization possible, and is available as both a GUI and a command-line utility. Unison delivers what it promises. The tool is very easy to use and requires no extra effort. As a tester, I was very impressed with this application. It has a whole lot of features which can be applied as required. For more information, read the unison manual.

Read Also:

  1. Rsync (Remote Sync) of Files
  2. Rsnapshot (Rsync Based) File Synchronizer

That’s all for now. I’ll soon be here again with another interesting article. Till then stay tuned and connected to Tecmint. Don’t forget to provide us with your valuable feedback in our comment section.

Source

How to Clone/Backup Linux Systems Using – Mondo Rescue Disaster Recovery Tool

Mondo Rescue is an open source, free disaster recovery and backup utility that allows you to easily create complete system (Linux or Windows) clone/backup ISO images to CD, DVD, tape, USB devices, hard disk, and NFS. It can be used to quickly restore or redeploy a working image onto other systems; in the event of data loss, you will be able to restore the entire system’s data from the backup media.

The Mondo program is available freely for download, is released under the GPL (GNU Public License) and has been tested on a large number of Linux distributions.

This article describes the installation of Mondo and the usage of its tools to back up your entire system. Mondo Rescue is a disaster recovery and backup solution that lets system administrators take a full backup of their Linux and Windows filesystem partitions to CD/DVD, tape or NFS, and restore them with the help of the Mondo restore media feature, which is used at boot time.

Installing MondoRescue on RHEL / CentOS / Scientific Linux

The latest Mondo Rescue packages (the current version of Mondo is 3.0.3-1) can be obtained from the “MondoRescue Repository“. Use the “wget” command to download and add the repository to your system. The Mondo repository will install suitable binary software packages such as afio, buffer, mindi, mindi-busybox, mondo and mondo-doc for your distribution, if they are available.

For RHEL/CentOS/SL 6,5,4 – 32-Bit

Download the MondoRescue repository to “/etc/yum.repos.d/” as a file named “mondorescue.repo“. Please download the correct repository for your Linux distribution version.

# cd /etc/yum.repos.d/

## On RHEL/CentOS/SL 6 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/6/i386/mondorescue.repo

## On RHEL/CentOS/SL 5 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/5/i386/mondorescue.repo

## On RHEL/CentOS/SL 4 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/4/i386/mondorescue.repo

For RHEL/CentOS/SL 6,5,4 – 64-Bit

# cd /etc/yum.repos.d/

## On RHEL/CentOS/SL 6 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/6/x86_64/mondorescue.repo

## On RHEL/CentOS/SL 5 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/5/x86_64/mondorescue.repo

## On RHEL/CentOS/SL 4 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/4/x86_64/mondorescue.repo

Once you have successfully added the repository, run “yum” to install the latest Mondo tool.

# yum install mondo

Installing MondoRescue on Debian / Ubuntu / Linux Mint

Debian users can use “wget” to grab the MondoRescue repository for the Debian 6 and 5 distributions. Run the following commands to append “mondorescue.sources.list” to the “/etc/apt/sources.list” file and install the Mondo packages.

On Debian

## On Debian 6 ##
# wget ftp://ftp.mondorescue.org/debian/6/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo
## On Debian 5 ##
# wget ftp://ftp.mondorescue.org/debian/5/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo

On Ubuntu/Linux Mint

To install Mondo Rescue on Ubuntu 12.10, 12.04, 11.10, 11.04, 10.10 and 10.04 or Linux Mint 13, open the terminal and add the MondoRescue repository to the “/etc/apt/sources.list” file. Run the following commands to install the Mondo Rescue packages.

# wget ftp://ftp.mondorescue.org/ubuntu/`lsb_release -r|awk '{print $2}'`/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo

Creating Cloning or Backup ISO Image of System/Server

After installing Mondo, run the “mondoarchive” command as the “root” user. Then follow the screenshots below, which show how to create an ISO-based backup media of your full system.

# mondoarchive

Welcome to Mondo Rescue

Mondo Rescue Welcome Screen

Mondo Rescue Welcome Screen


Please enter the full path name to the directory for your ISO Images. For example: /mnt/backup/

Mondo Rescue Storage Directory

Mondo Rescue Storage Directory

Select the type of compression. For example: bzip2, gzip or lzo.

Select Type of Compression

Select Type of Compression

Select the maximum compression option.

Mondo Rescue Compression Speed

Select Compression Speed

Please enter how large you want each ISO image to be, in MB (megabytes). This should be less than or equal to the size of your media: 700 for CD-R(W)s, 4480 for DVDs.

Mondo Rescue ISO Size

Define Mondo Rescue ISO Size

Please give a filename prefix for your ISO images. For example: tecmint1 to obtain tecmint-[1-9]*.iso files.

Mondo Rescue Prefix

Enter Name of Mondo Rescue

Please add the filesystems to back up (separated by “|“). The default filesystem “/” means a full backup.

Mondo Rescue Backup Paths

Enter Backup Paths

Please exclude the filesystems that you don’t want to back up (separated by “|“). For example, “/tmp” and “/proc” are always excluded. If you want a full backup of your system, just hit enter.

Mondo Rescue Exclude Paths

Enter Exclude File System

Please enter your temporary directory path or accept the default.

Mondo Rescue Temporary Directory

Enter Temporary Directory Name

Please enter your scratch directory path or accept the default.

Mondo Rescue Scratch Directory Name

Enter Scratch Directory Name

If you would like to back up extended attributes, just hit “enter“.

Mondo Rescue Extended Backup

Enter Extended Backup Attributes

If you want to verify your backups after Mondo has created them, click “Yes“.

Mondo Rescue Verify Backups

Verify Backups

If you’re using a stable standalone Linux kernel, click “Yes“; if you are using another kernel, say on “Gentoo” or “Debian”, hit “No“.

Mondo Rescue Kernel

Select Stable Linux Kernel

Click “Yes” to proceed further.

Mondo Rescue Backup Process

Proceed Cloning Process

Creating a catalog of “/” filesystem.

Mondo Rescue Making Catalog

Creating Catalog for File System

Dividing filelist into sets.

Mondo Rescue Dividing File List

Dividing File List

Calling MINDI to create boot+data disk.

Mondo Rescue Boot Data Disk

Creating Boot Data Disk

Backing up the filesystem. This may take a couple of hours, please be patient.

Mondo Rescue Backup Filesystem

Backing up File System

Backing up big files.

Mondo Rescue Big Files Backup

Big Files Backup

Running “mkisofs” to make ISO Image.

Mondo Rescue Creating ISO

Making ISO Image

Verifying ISO Image tarballs.

Mondo Rescue Verify ISO

Verify ISO

Verifying ISO Image Big files.

Mondo Rescue Verify Big Files

Verify Big Files

Finally, Mondo Archive has completed. Hit “Enter” to go back to the shell prompt.

Mondo Rescue Backup Completed

Backup Completed

If you selected the default backup path, you will see an ISO image under “/var/cache/mondo/“, which you can burn to a CD/DVD for later restore.

To restore all files automatically, boot the system with the Mondo ISO image and at the boot prompt type “nuke” to restore files. Here is a detailed video that demonstrates how to restore files automatically from CD/DVD media.

For other distributions, you can also grab Mondo Rescue packages at mondorescue.org download page.

Source

Rclone – Sync Files Directories from Different Cloud Storage

Rclone is a command line program written in Go language, used to sync files and directories from different cloud storage providers such as: Amazon Drive, Amazon S3, Backblaze B2, Box, Ceph, DigitalOcean Spaces, Dropbox, FTP, Google Cloud Storage, Google Drive, etc.

As you see, it supports multiple platforms, which makes it a useful tool to sync your data between servers or to a private storage.

Rclone comes with the following features

  • MD5/SHA1 hash checks at all times to ensure file integrity.
  • Timestamps are preserved on files.
  • Partial syncs supported on a whole file basis.
  • Copy mode for new or changed files.
  • One way sync to make a directory identical.
  • Check mode – hash equality check.
  • Can sync to and from the network, e.g. two different cloud accounts.
  • (Encryption) backend.
  • (Cache) backend.
  • (Union) backend.
  • Optional FUSE mount (rclone mount).

How to Install rclone in Linux Systems

The installation of rclone can be completed in two different ways. The easier one is using their installation script, by issuing the following command.

# curl https://rclone.org/install.sh | sudo bash

What this script does is check the OS type it is run on and download the archive for that OS. It then extracts the archive, copies the rclone binary to /usr/bin/rclone and gives the file 755 permissions.

In the end, when the installation is complete, you should see the following line:

Rclone v1.44 has successfully installed.
Now run “rclone config” for setup, Check https://rclone.org/docs/ for  more details.

The second way to install rclone is by issuing the following commands.

# curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
# unzip rclone-current-linux-amd64.zip
# cd rclone-*-linux-amd64

Now copy the binary file and give it executable permissions.

# cp rclone /usr/bin/
# chown root:root /usr/bin/rclone
# chmod 755 /usr/bin/rclone

Install rclone manpage.

# mkdir -p /usr/local/share/man/man1
# cp rclone.1 /usr/local/share/man/man1/
# mandb 

How to Configure rclone in Linux Systems

Next, you will need to run rclone config to create your config file. It will be used for authentication in future usage of rclone. To run the configuration setup, issue the following command.

# rclone config

You will see the following prompt:

2018/11/13 11:39:58 NOTICE: Config file “/home/user/.config/rclone/rclone.conf” not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q>

The options are as follows:

  • n) – Create new remote connection
  • s) – set password protection for your configuration
  • q) – exit the config

For the purpose of this tutorial, let's press "n" and create a new connection. You will be asked to give the new connection a name. After that you will be prompted to select the type of storage to be configured:

rclone - New Remote Connection

rclone – New Remote Connection

I have named my connection “Google” and selected “Google Drive”, which is number 12. The rest of the questions you can answer by simply accepting the default answer, which is an empty “”.

When asked to, you may select “autoconfig”, which will generate all the required info to connect to your Google Drive and give rclone permissions to use data from Google Drive.

The process looks something like this:

Google Application Client Secret - leave blank normally.
client_secret>
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
 2 / Read-only access to file metadata and file contents.
   \ "drive.readonly"
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ "drive.file"
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ "drive.appfolder"
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ "drive.metadata.readonly"
scope> 1
ID of the root folder - leave blank normally.  Fill in to access "Computers" folders. (see docs).
root_folder_id> 
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id = 
client_secret = 
scope = drive
root_folder_id = 
service_account_file =
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2018-11-13T11:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

How to Use rclone in Linux Systems

Rclone has quite a long list of available options and commands. We will try to cover some of the more important ones:

List Remote Directory

# rclone lsd <remote-dir-name>:

rclone - List Remote Directory

rclone – List Remote Directory

Copy Data with rclone

# rclone copy source:sourcepath dest:destpath

Note that if rclone finds duplicates, those will be ignored:

rclone - Copy Data

rclone – Copy Data

Sync data with rclone

If you want to sync some data between directories, you should use rclone with the sync command.

The command should look like this:

# rclone sync source:path dest:path [flags]

In this case the source is synced to the destination, changing the destination only! This method skips unchanged files. Since the command can cause data loss, you can use it with “--dry-run” to see exactly what will be copied and deleted.

rclone Sync Data

rclone Sync Data

Move Data with rclone

To move data, you can use rclone with the move command. The command should look like this:

# rclone move source:path dest:path [flags]

The content of the source will be moved (deleted from the source) and placed at the selected destination.

Other useful rclone Commands

To create a directory on destination.

# rclone mkdir remote:path

To remove a directory.

# rclone rmdir remote:path

Check if files on source and destination match:

# rclone check source:path dest:path

Delete files:

# rclone delete remote:path

Each rclone command can be used with different flags and includes its own help menu. For example, you can do a selective delete using the delete command. Let's say you want to delete files larger than 100M; the command would look like this.

# rclone --min-size 100M delete remote:path

It is highly recommended to review the manual and the help for each command to get the most out of rclone. The full documentation of rclone is available at: https://rclone.org/

Conclusion

rclone is a powerful command-line utility that helps you manage data between different cloud storage providers. While in this article we have just scratched the surface of rclone's capabilities, there is much more to be achieved with it, especially when used in combination with the cron service (for example).
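As a sketch of that cron combination (the remote name, paths and schedule below are placeholders, not a tested setup), a crontab entry could look like:

```
# m h dom mon dow  command — sync nightly at 02:30 to the "Google" remote
30 2 * * * /usr/bin/rclone sync /home/user/documents Google:backup --log-file=/home/user/rclone.log
```

Edit your crontab with `crontab -e` to add a line like this, and check the log file after the first run.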

Source

8 Best Open Source “Disk Cloning/Backup” Softwares for Linux Servers

Disk cloning is the process of copying data from one hard disk to another. You could in fact do this by copy & paste, but you wouldn’t be able to copy hidden files and folders or in-use files. That’s why you need cloning software to do the job; you can also use the cloning process to save a backup image of your files and folders.

Linux Disk Cloning Tools

8 Linux Disk Cloning Tools

Basically, the cloning software’s job is to take all the disk data, convert it into a single .img file and give it to you, so you can copy it to another hard drive. Here we have the 8 best open source cloning programs to do the job for you.
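The ".img" idea itself is nothing exotic: a raw byte-for-byte copy of a block device. Here is a minimal sketch with dd, using a loopback file instead of a real disk so nothing gets overwritten (on real hardware the input would be a device such as /dev/sdb):

```shell
# Byte-for-byte image of a "disk" (a 1 MB file standing in for a real
# block device), plus a check that the image matches the original.
truncate -s 1M fake-disk
dd if=fake-disk of=disk.img bs=64K status=none
cmp -s fake-disk disk.img && echo "identical"
```

The tools below add what raw dd lacks: compression, skipping unused blocks, bootable restore media and a friendlier interface.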

1. Clonezilla

Clonezilla is a Live CD based on Ubuntu and Debian that clones all your hard drive data or takes a backup. Licensed under GPL 3, it is similar to Norton Ghost on Windows, but more effective.

Features

  1. Support for many filesystems, such as ext2, ext3, ext4, btrfs, xfs, and many others.
  2. Support for BIOS and UEFI.
  3. Support for MBR and GPT partition tables.
  4. Ability to reinstall GRUB 1 and 2 on any attached hard drive.
  5. Works on low-end computers (only 200 MB of RAM is needed).
  6. Many other features.

Clonezilla for Linux

Suggested Read: How to Clone or Backup Linux Disk Using Clonezilla

2. Redo Backup

Redo Backup is also a Live CD tool to clone your drives easily. It is a free and open source live system licensed under GPL 3. Its features, as listed on the website, are:

  1. Easy GUI boots from CD in less than a minute.
  2. No installation required; runs from a CD-ROM or a USB device.
  3. Saves and restores Linux and Windows systems.
  4. Automatically locates local network shares.
  5. Access files even without login.
  6. Recover deleted files, documents, media files quickly.
  7. Internet access with a Chromium browser to download drivers.
  8. Small in size; the Live CD is only 250MB.

Redo Backup

  1. Install Redo Backup to Clone/Backup Linux Systems

3. Mondo Rescue

Unlike other cloning software, Mondo doesn't convert your hard drives into an .img file; instead, it converts them into an .iso image. You can also create a custom Live CD with Mondo using “mindi”, a special tool developed by Mondo Rescue, and clone your data from that Live CD.

It supports most Linux distributions as well as FreeBSD, and it is licensed under the GPL. You can install Mondo Rescue using the following link.

MondoRescue

  1. Install Mondo Rescue to Clone/Backup Linux Systems

4. Partimage

Partimage is open-source backup software. By default it works under Linux and is available to install from the package manager of most Linux distributions. If you don't have a Linux system installed, you can use “SystemRescueCd”, a Live CD that includes Partimage by default, to do the cloning you want.

Partimage is very fast at cloning hard drives, but it has one limitation: it doesn't support the ext4 or btrfs filesystems, although you can use it to clone other filesystems like ext3 and NTFS.

Partimage

Suggested Read: How to Backup or Clone Linux Partitions Using ‘cat’ Command

5. FSArchiver

FSArchiver is a continuation of Partimage and is also a good tool for cloning hard disks. It supports cloning ext4 and NTFS partitions. Here's a list of features:

Features

  1. Support for basic file attributes like owner, permissions, etc.
  2. Support for extended attributes like those used by SELinux.
  3. Support for the basic filesystem attributes (label, UUID, block size) for all Linux filesystems.
  4. Support for NTFS partitions from Windows and ext filesystems from Linux and Unix-like systems.
  5. Support for checksums, which enables you to check for data corruption.
  6. Ability to restore a corrupted archive by just skipping the corrupted file.
  7. Ability to have more than one filesystem in an archive.
  8. Ability to compress the archive in many formats like lzo, gzip, bzip2, lzma/xz.
  9. Ability to split a big archive into several smaller files.

You can download FSArchiver and install it on your system, or you can download SystemRescueCD which also contains FSArchiver.

FSArchiver

6. Partclone

Partclone is a free tool to clone and restore partitions. Written in C and first released in 2007, it supports many filesystems, such as ext2, ext3, ext4, xfs, ntfs, reiserfs, reiser4, hfs+, and btrfs, and it is very simple to use.

Licensed under GPL, it is available as a tool in Clonezilla as well, you can download it as a package.

Partclone

7. G4L

G4L is a free Live CD system to clone hard disks easily. Its main feature is that you can compress the filesystem and send it via FTP, CIFS, SSHFS, or NFS to any location you want. It has also supported GPT partitions since version 0.41. It is licensed under the BSD license and available to download for free.

G4L

Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

8. doClone

doClone is also a free software project developed to clone Linux system partitions easily. Written in C++, it supports up to 12 different filesystems, can perform Grub bootloader restoration, and can transfer the clone image to other computers via LAN. It also supports live cloning, which means you can create a clone of the system even while it is up and running.

doClone

There are many other tools to clone your Linux hard disks. Have you used any cloning software from the above list to back up your hard drives? Which one is the best for you? Also, tell us about any other tool you know of that is not listed here.

Source

rdiff-backup – A Remote Incremental Backup Tool for Linux

rdiff-backup is a powerful and easy-to-use Python script for local/remote incremental backup, which works on any POSIX operating system such as Linux, Mac OS X or Cygwin. It brings together the remarkable features of a mirror and an incremental backup.

Significantly, it preserves subdirectories, dev files, hard links, and critical file attributes such as permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks. It can work in a bandwidth-efficient mode over a pipe, in a similar way as the popular rsync backup tool.

rdiff-backup backs up one directory to another over a network using SSH, implying that the data transfer is encrypted and thus secure. The target directory (on the remote system) ends up as an exact copy of the source directory; however, extra reverse diffs are stored in a special subdirectory of the target directory, making it possible to recover files lost some time ago.

Dependencies

To use rdiff-backup in Linux, you’ll need the following packages installed on your system:

  • Python v2.2 or later
  • librsync v0.9.7 or later
  • pylibacl and pyxattr Python modules are optional but necessary for POSIX access control list(ACL) and extended attribute support respectively.
  • rdiff-backup-statistics requires Python v2.4 or later.

How to Install rdiff-backup in Linux

Important: If you are operating over a network, you'll have to install rdiff-backup on both systems, and preferably both installations of rdiff-backup should be the exact same version.

The script is already present in the official repositories of the mainstream Linux distributions, simply run the command below to install rdiff-backup as well as its dependencies:

On Debian/Ubuntu

$ sudo apt-get update
$ sudo apt-get install librsync-dev rdiff-backup

On CentOS/RHEL 7

# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
# rpm -ivh epel-release-7-9.noarch.rpm
# yum install librsync rdiff-backup

On CentOS/RHEL 6

# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm
# yum install librsync rdiff-backup

On Fedora

# yum install librsync rdiff-backup
# dnf install librsync rdiff-backup [Fedora 22+]

How to Use rdiff-backup in Linux

As I mentioned before, rdiff-backup uses SSH to connect to remote machines on your network, and the default authentication in SSH is the username/password method, which normally requires human interaction.

However, to automate tasks such as automatic backups with scripts and beyond, you will need to configure SSH Passwordless Login Using SSH keys, because SSH keys increases the trust between two Linux servers for easy file synchronization or transfer.

Once you have setup SSH Passwordless Login, you can start using the script with the following examples.

Backup Files to Different Partition

The example below will back up the /etc directory to a Backup directory on another partition:

$ sudo rdiff-backup /etc /media/aaronkilik/Data/Backup/mint_etc.backup

Backup Files to Different Partition

To exclude a particular directory as well as its subdirectories, you can use the --exclude option as follows:

$ sudo rdiff-backup --exclude /etc/cockpit --exclude /etc/bluetooth /etc /media/aaronkilik/Data/Backup/mint_etc.backup

We can include all device files, fifo files, socket files, and symbolic links with the --include-special-files option as below:

$ sudo rdiff-backup --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup

There are two other important flags we can set for file selection: --max-file-size size, which excludes files larger than the given size in bytes, and --min-file-size size, which excludes files smaller than the given size in bytes:

$ sudo rdiff-backup --max-file-size 5M --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup

Backup Remote Files on Local Linux Server

For the purpose of this section, we’ll use:

Remote Server (tecmint)	        : 192.168.56.102 
Local Backup Server (backup) 	: 192.168.56.10

As we stated before, you must install the same version of rdiff-backup on both machines. Now check the version on both machines as follows:

$ rdiff-backup -V

Check rdiff Version on Servers

On the backup server, create a directory which will store the backup files like so:

# mkdir -p /backups

Now from the backup server, run the following commands to make a backup of directories /var/log/ and /root from remote Linux server 192.168.56.102 in /backups:

# rdiff-backup root@192.168.56.102::/var/log/ /backups/192.168.56.102_logs.backup
# rdiff-backup root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup

The screenshot below shows the root files on the remote server 192.168.56.102 and the backed-up files on the backup server 192.168.56.10:

Backup Remote Directory on Local Server

Take note of the rdiff-backup-data directory created inside the backup directory, as seen in the screenshot; it contains vital data concerning the backup process and the incremental files.

rdiff-backup – Backup Process Files

Now, on the server 192.168.56.102, additional files have been added to the root directory as shown below:

Verify Backup Directory

Let's run the backup command once more to pick up the changed data. We can use the -v[0-9] option (where the number specifies the verbosity level; the default is 3) to set the verbosity:

# rdiff-backup -v4 root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup 

Incremental Backup with Summary

And to list the number and date of partial incremental backups contained in the /backups/192.168.56.102_rootfiles.backup directory, we can run:

# rdiff-backup -l /backups/192.168.56.102_rootfiles.backup/
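Although not shown above, increments can also be restored with the -r (--restore-as-of) option. A sketch, using the repository above and a hypothetical destination directory (which must not already exist):

```shell
# Restore the backup as it looked 3 days ago:
rdiff-backup -r 3D /backups/192.168.56.102_rootfiles.backup /tmp/root_3days_ago

# "now" restores the latest mirror state:
rdiff-backup -r now /backups/192.168.56.102_rootfiles.backup /tmp/root_latest
```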

Automating rdiff-backup Using Cron

We can print summary statistics after a successful backup with the --print-statistics option. However, even if we don't set this option, the info will still be available in the session statistics file. Read more about this option in the STATISTICS section of the man page.

And the --remote-schema flag enables us to specify an alternative method of connecting to a remote computer.

Now, let’s start by creating a backup.sh script on the backup server 192.168.56.10 as follows:

# cd ~/bin
# vi backup.sh

Add the following lines to the script file.

#!/bin/bash

#This is a rdiff-backup utility backup script

#Backup command
rdiff-backup --print-statistics --remote-schema 'ssh -C %s "sudo /usr/bin/rdiff-backup --server --restrict-read-only  /"'  root@192.168.56.102::/var/log  /backups/192.168.56.102_logs.back

#Checking rdiff-backup command success/error
status=$?
if [ $status != 0 ]; then
        #append error message in ~/backup.log file
        echo "rdiff-backup exit Code: $status - Command Unsuccessful" >>~/backup.log;
        exit 1;
fi

#Remove incremental backup files older than one month
rdiff-backup --force --remove-older-than 1M /backups/192.168.56.102_logs.back

Save the file and exit. Make the script executable (chmod +x ~/bin/backup.sh), then run the following command to add the script to the crontab on the backup server 192.168.56.10:

# crontab -e

Add this line to run your backup script daily at midnight:

0   0  *  *  * /root/bin/backup.sh > /dev/null 2>&1

Save the crontab and close it; we have now successfully automated the backup process. Ensure that it is working as expected.

Read through the rdiff-backup man page for additional info, exhaustive usage options and examples:

# man rdiff-backup

rdiff-backup Homepage: http://www.nongnu.org/rdiff-backup/

That’s it for now! In this tutorial, we showed you how to install and basically use rdiff-backup, an easy-to-use Python script for local/remote incremental backup in Linux. Do share your thoughts with us via the feedback section below.

Source

Rsnapshot (Rsync Based) – A Local/Remote File System Backup Utility for Linux

rsnapshot is an open source local/remote filesystem backup utility, written in Perl, that takes advantage of the power of rsync and SSH to create scheduled, incremental backups of Linux/Unix filesystems, while taking up only the space of a single full backup plus differences. It can keep those backups on a local drive, a different hard drive, an external USB stick, an NFS-mounted drive, or simply transfer them over the network to another machine via SSH.

Install Rsnapshot Backup Tool

This article will demonstrate how to install, set up, and use rsnapshot to create incremental hourly, daily, weekly, and monthly local backups, as well as remote backups. To perform all the steps in this article, you must be the root user.

Step 1: Installing Rsnapshot Backup in Linux

Installation of rsnapshot using yum and apt differs slightly between Red Hat and Debian based distributions.

On RHEL/CentOS

First you will have to install and enable the third-party repository called EPEL. Please follow the link below to install and enable it on your RHEL/CentOS systems. Fedora users don't require any special repository configuration.

  1. Install and Enable EPEL Repository in RHEL/CentOS 6/5/4

Once you get things setup, install rsnapshot from the command line as shown.

# yum install rsnapshot

On Debian/Ubuntu/Linux Mint

By default, rsnapshot is included in Ubuntu's repositories, so you can install it using the apt-get command as shown.

# apt-get install rsnapshot

Step 2: Setting up SSH Password-less Login

To back up remote Linux servers, your rsnapshot backup server must be able to connect through SSH without a password. To accomplish this, you will need to create SSH public and private keys to authenticate from the rsnapshot server. Please follow the link below to generate a public and private key pair on your rsnapshot backup server.

  1. Create SSH Passwordless Login Using SSH Keygen

Step 3: Configuring Rsnapshot

Now you will need to edit and add some parameters to rsnapshot configuration file. Open rsnapshot.conf file with vi or nano editor.

# vi /etc/rsnapshot.conf

Next, create a backup directory where you want to store all your backups. In my case, the backup directory location is “/data/backup/”. Search for and edit the following parameter to set the backup location. (Note that in rsnapshot.conf, parameters and their values must be separated by tabs, not spaces.)

snapshot_root			 /data/backup/

Also uncomment the “cmd_ssh” line to allow taking remote backups over SSH. To uncomment the line, remove the “#” in front of the following line so that rsnapshot can securely transfer your data to a backup server.

cmd_ssh			/usr/bin/ssh

Next, you need to decide how many old backups you would like to keep, because rsnapshot has no idea how often you want to take snapshots. You need to specify how much data to save: add the intervals to keep, and how many of each.

Well, the default settings are good enough, but I would still like you to enable the “monthly” interval so that you also have longer-term backups in place. Please edit this section to look similar to the settings below.

#########################################
#           BACKUP INTERVALS            #
# Must be unique and in ascending order #
# i.e. hourly, daily, weekly, etc.      #
#########################################

interval        hourly  6
interval        daily   7
interval        weekly  4
interval        monthly 3

One more thing you need to edit is the “ssh_args” variable. If you have changed the default SSH port (22) to something else, you need to specify the port number of your remote backup server.

ssh_args		-p 7851

Finally, add your local and remote backup directories that you want to backup.

Backup Local Directories

If you've decided to back up your directories locally to the same machine, the backup entries would look like this. For example, I am taking a backup of my /tecmint and /etc directories.

backup		/tecmint/		localhost/
backup		/etc/			localhost/

Backup Remote Directories

If you would like to back up a remote server's directories, you need to tell rsnapshot where the server is and which directories you want to back up. Here I am taking a backup of the “/home” directory of my remote server under the “/data/backup” directory on the rsnapshot server.

backup		 root@example.com:/home/ 		/data/backup/

Read Also:

  1. How to Backup/Sync Directories Using Rsync (Remote Sync) Tool
  2. How to Transfer Files/Folders Using SCP Command

Exclude Files and Directories

Here, I'm going to exclude everything and then only specifically define what I want backed up. To do this, you need to create an exclude file.

# vi /data/backup/tecmint.exclude

First get the list of directories that you want backed up, and add ( - * ) at the end to exclude everything else. This will only back up what you listed in the file. My exclude file looks similar to the one below.

+ /boot
+ /data
+ /tecmint
+ /etc
+ /home
+ /opt
+ /root
+ /usr
- /usr/*
- /var/cache
+ /var
- /*

Using the exclude file option can be very tricky due to rsync recursion, so my example above may not be exactly what you are looking for. Next, add the exclude file to the rsnapshot.conf file.

exclude_file    /data/backup/tecmint.exclude

Finally, you are almost finished with the initial configuration. Save the “/etc/rsnapshot.conf” configuration file before moving further. There are many options to explain, but here is my sample configuration file.

config_version  1.2
snapshot_root   /data/backup/
cmd_cp  /bin/cp
cmd_rm  /bin/rm
cmd_rsync       /usr/bin/rsync
cmd_ssh /usr/bin/ssh
cmd_logger      /usr/bin/logger
cmd_du  /usr/bin/du
interval        hourly  6
interval        daily   7
interval        weekly  4
interval        monthly 3
ssh_args	-p 25000
verbose 	2
loglevel        4
logfile /var/log/rsnapshot
exclude_file    /data/backup/tecmint.exclude
rsync_long_args --delete        --numeric-ids   --delete-excluded
lockfile        /var/run/rsnapshot.pid
backup		/tecmint/		localhost/
backup		/etc/			localhost/
backup		root@example.com:/home/ 		/data/backup/

All the above options and argument explanations are as follows:

  1. config_version 1.2 = Configuration file version
  2. snapshot_root = Backup Destination to store snapshots
  3. cmd_cp = Path to copy command
  4. cmd_rm = Path to remove command
  5. cmd_rsync = Path to rsync
  6. cmd_ssh = Path to SSH
  7. cmd_logger = Path to shell command interface to syslog
  8. cmd_du = Path to disk usage command
  9. interval hourly = How many hourly backups to keep.
  10. interval daily = How many daily backups to keep.
  11. interval weekly = How many weekly backups to keep.
  12. interval monthly = How many monthly backups to keep.
  13. ssh_args = Optional SSH arguments, such as a different port (-p )
  14. verbose = Self-explanatory
  15. loglevel = Self-explanatory
  16. logfile = Path to logfile
  17. exclude_file = Path to the exclude file (will be explained in more detail)
  18. rsync_long_args = Long arguments to pass to rsync
  19. lockfile = Self-explanatory
  20. backup = Full path of what is to be backed up, followed by the relative path of placement.

Step 4: Verify Rsnapshot Configuration

Once you are done with all your configuration, it's time to verify that everything works as expected. Run the following command to verify that your configuration has the correct syntax.

# rsnapshot configtest

Syntax OK

If everything is configured correctly, you will receive a “Syntax OK” message. If you get any error messages, you need to correct those errors before running rsnapshot.

Next, do a test run of one of the snapshots to make sure that we are generating correct results. We use the “hourly” parameter with the -t (test) argument. The command below will display a verbose list of the things it will do, without actually doing them.

# rsnapshot -t hourly
Sample Output
echo 2028 > /var/run/rsnapshot.pid 
mkdir -m 0700 -p /data/backup/ 
mkdir -m 0755 -p /data/backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /home \
    /backup/hourly.0/localhost/ 
mkdir -m 0755 -p /backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc \
    /backup/hourly.0/localhost/ 
mkdir -m 0755 -p /data/backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local /data/backup/hourly.0/localhost/ 
touch /data/backup/hourly.0/

Note: The above command tells rsnapshot to create an “hourly” backup. It prints out the commands that it will perform when we execute it for real.

Step 5: Running Rsnapshot Manually

After verifying your results, you can remove the “-t” option to run the command for real.

# rsnapshot hourly

The above command will run the backup script with all the configuration that we added in the rsnapshot.conf file, create a “backup” directory, and then create the directory structure under it that organizes our files. After running the above command, you can verify the results by going to the backup directory and listing the directory structure using the ls -l command as shown.

# cd /data/backup
# ls -l

total 4
drwxr-xr-x 3 root root 4096 Oct 28 09:11 hourly.0
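As a side note, rsnapshot can also report how much disk space the snapshot root consumes, via its du subcommand:

```shell
# Show the disk space used by each snapshot and the total
rsnapshot du
```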

Step 6: Automating the Process

To automate the process, you need to schedule rsnapshot to run at certain intervals from cron. By default, rsnapshot comes with a cron file under “/etc/cron.d/rsnapshot”; if it doesn't exist, create one and add the following lines to it.

By default the rules are commented out, so you need to remove the “#” from in front of the scheduling section to enable these values.

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

0     */4    * * *    root    /usr/bin/rsnapshot hourly
30     3     * * *    root    /usr/bin/rsnapshot daily
0      3     * * 1    root    /usr/bin/rsnapshot weekly
30     2     1 * *    root    /usr/bin/rsnapshot monthly

Let me explain exactly what the above cron rules do:

  1. Runs every 4 hours and creates an hourly directory under the /backup directory.
  2. Runs daily at 3:30am and creates a daily directory under the /backup directory.
  3. Runs weekly on every Monday at 3:00am and creates a weekly directory under the /backup directory.
  4. Runs monthly on the 1st at 2:30am and creates a monthly directory under the /backup directory.

To better understand how cron rules work, I suggest you read our article that describes them.

  1. 11 Cron Scheduling Examples

Step 7: Rsnapshot Reports

rsnapshot provides a nifty little reporting Perl script that sends you an email alert with all the details of what occurred during your data backup. To set up this script, you need to copy the script somewhere under “/usr/local/bin” and make it executable.

# cp /usr/share/doc/rsnapshot-1.3.1/utils/rsnapreport.pl /usr/local/bin
# chmod +x /usr/local/bin/rsnapreport.pl

Next, add the “--stats” parameter to the rsync long arguments section in your “rsnapshot.conf” file.

vi /etc/rsnapshot.conf
rsync_long_args --stats	--delete        --numeric-ids   --delete-excluded

Now edit the crontab rules that were added earlier so they call the rsnapreport.pl script and mail the reports to the specified email address.

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

0     */4    * * *    root    /usr/bin/rsnapshot hourly 2>&1  | /usr/local/bin/rsnapreport.pl | mail -s "Hourly Backup" yourname@email.com
30     3     * * *    root    /usr/bin/rsnapshot daily 2>&1   | /usr/local/bin/rsnapreport.pl | mail -s "Daily Backup" yourname@email.com
0      3     * * 1    root    /usr/bin/rsnapshot weekly 2>&1  | /usr/local/bin/rsnapreport.pl | mail -s "Weekly Backup" yourname@email.com
30     2     1 * *    root    /usr/bin/rsnapshot monthly 2>&1 | /usr/local/bin/rsnapreport.pl | mail -s "Monthly Backup" yourname@email.com

Once you've added the above entries correctly, you will get a report at your e-mail address similar to the one below.

SOURCE           TOTAL FILES	FILES TRANS	TOTAL MB    MB TRANS   LIST GEN TIME  FILE XFER TIME
--------------------------------------------------------------------------------------------------------
localhost/          185734	   11853   	 2889.45    6179.18    40.661 second   0.000 seconds

Reference Links

  1. rsnapshot homepage

That's it for now. If any problems occur during installation, do drop me a comment. Till then, stay tuned to TecMint for more interesting articles on the open source world.

Source

How to Install TeamSpeak Server in CentOS 7

TeamSpeak is a popular, cross-platform VoIP and text chat application for internal business communication, education and training (lectures), online gaming and connecting with friends and family. Its primary priority is delivering a solution that is simpler to use, with strong security standards, superb voice quality, and less system and bandwidth utilization. It uses a client-server architecture and is capable of handling thousands of simultaneous users.

How it Works

Deploy your own TeamSpeak Server on a Linux VPS and share your TeamSpeak Server address with teammates, friends and family or anyone you want to communicate with. Using the free desktop TeamSpeak Client, they connect to your TeamSpeak Server and start talking. It’s that easy!

You can get a 2GB RAM VPS from Linode for $10, but it's unmanaged. If you want a managed VPS, then use our new BlueHost promotion offer; you will get up to 40% OFF on hosting with one free domain for life. If you get a managed VPS, they will probably install TeamSpeak Server for you.

Key Features

  • It is easy to use and highly customizable.
  • Has a decentralized infrastructure and is highly scalable.
  • Supports high security standards.
  • Offers remarkable voice quality.
  • Allows for low system resource and bandwidth usage.
  • Supports powerful file transfer.
  • Also supports a robust permission system.
  • Supports stunning 3D sound effects.
  • Allows for mobile connectivity and lots more.

Requirements

  1. CentOS 7 Server with Minimal System Installation
  2. CentOS 7 Server with Static IP Address

In this tutorial, we will explain how to install TeamSpeak Server on your CentOS 7 instance and a desktop TeamSpeak Client on a Linux machine.

Installing TeamSpeak Server in CentOS 7

1. First, start by updating your CentOS 7 server packages and then install the needed dependencies for the installation process using the following commands.

# yum update
# yum install vim wget perl tar net-tools bzip2

2. Next, you need to create a user for the TeamSpeak Server process to ensure that the TeamSpeak server runs in user mode, detached from other processes.

# useradd teamspeak
# passwd teamspeak

3. Now go to the TeamSpeak Server download page and grab the most recent version (i.e. 3.2.0) using the following wget command, then extract the tarball and copy all of the files to the unprivileged user's home directory as shown.

# wget -c http://dl.4players.de/ts/releases/3.2.0/teamspeak3-server_linux_amd64-3.2.0.tar.bz2
# tar -xvf teamspeak3-server_linux_amd64-3.2.0.tar.bz2
# mv teamspeak3-server_linux_amd64 teamspeak3
# cp -R teamspeak3 /home/teamspeak/
# chown -R teamspeak:teamspeak /home/teamspeak/teamspeak3/

4. Once everything is in place, switch to the teamspeak user and start the teamspeak server manually using the following commands.

# su - teamspeak
$ cd teamspeak3/
$ ./ts3server_startscript.sh start

TeamSpeak Starting

5. To manage the TeamSpeak Server under systemd, you need to create a teamspeak service unit file.

$ su -
# vi  /lib/systemd/system/teamspeak.service

Add the following configuration in the unit file.

[Unit]
Description=Team Speak 3 Server
After=network.target

[Service]
WorkingDirectory=/home/teamspeak/teamspeak3/
User=teamspeak
Group=teamspeak
Type=forking
ExecStart=/home/teamspeak/teamspeak3/ts3server_startscript.sh start inifile=ts3server.ini
ExecStop=/home/teamspeak/teamspeak3/ts3server_startscript.sh stop
PIDFile=/home/teamspeak/teamspeak3/ts3server.pid
RestartSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Save and close the file. Then reload the systemd manager configuration, start the teamspeak service, and enable it to start automatically at system boot as follows.

# systemctl daemon-reload
# systemctl start teamspeak
# systemctl enable teamspeak
# systemctl status teamspeak

Start TeamSpeak Server

6. When you start the teamspeak server for the first time, it generates an administrator token/key which you will use to connect to the server from a TeamSpeak Client. You can view the log file to get the key.

# cat /home/teamspeak/logs/ts3server_2017-08-09__22_51_25.819181_1.log

TeamSpeak Server Token

7. Next, TeamSpeak listens on a number of ports: 9987 UDP (TeamSpeak Voice service), 10011 TCP (TeamSpeak ServerQuery) and 30033 TCP (TeamSpeak FileTransfer).

Therefore modify your firewall rules to open these ports as follows.

# firewall-cmd --zone=public --add-port=9987/udp --permanent
# firewall-cmd --zone=public --add-port=10011/tcp --permanent
# firewall-cmd --zone=public --add-port=30033/tcp --permanent
# firewall-cmd --reload
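To confirm that the rules took effect and that the server is actually listening, a quick check might look like this (output will vary by system):

```shell
# List the ports now open in the public zone
firewall-cmd --zone=public --list-ports

# Verify the TeamSpeak process is bound to the expected ports
ss -tulpn | grep ts3server
```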

Installing TeamSpeak Client in Ubuntu 18.04

8. Log in to your Ubuntu desktop machine (you can use any Linux OS), go to the TeamSpeak Client download page, and grab the most recent version (i.e. 3.1.9) using the following wget command, then install it as shown.

$ wget http://dl.4players.de/ts/releases/3.1.9/TeamSpeak3-Client-linux_amd64-3.1.9.run
$ chmod 755 TeamSpeak3-Client-linux_amd64-3.1.9.run
$ ./TeamSpeak3-Client-linux_amd64-3.1.9.run
$ cd TeamSpeak3-Client-linux_amd64
$ ./ts3client_runscript.sh

TeamSpeak Client on Ubuntu

9. To access the server query admin account, use the login name and password which were created when the server first started. You will also be asked to provide the ServerAdmin privilege key; once you have entered the key, you will see the message below, meaning you now have administrative rights on the teamspeak server you just installed.

Privilege Key successfully used.

For more information, check out the TeamSpeak homepage: https://www.teamspeak.com/en/

In this article, we have explained how to install TeamSpeak Server on CentOS 7 and a client on Ubuntu Desktop. If you have any questions or thoughts to share, use the feedback form below to reach us.

Source

Getting Started with MySQL Clusters as a Service

MySQL Cluster.me has started offering MySQL clusters and MariaDB clusters as a service, based on Galera replication technology.

In this article we will go through the main features of MySQL and MariaDB clusters as a service.

MySQL Clusters as a Service

MySQL Clusters as a Service

What is a MySQL Cluster?

If you have ever wondered how you can increase the reliability and scalability of your MySQL database you might have found that one of the ways to do that is through a MySQL Cluster based on Galera Cluster technology.

This technology allows you to have a complete copy of the MySQL database synchronized across many servers in one or several datacenters. This lets you achieve high database availability – which means that if 1 or more of your database servers crash then you will still have a fully operational database on another server.
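For illustration, the synchronization described above is driven by a handful of `wsrep` settings in each node's MySQL configuration. A minimal sketch follows; the node addresses and cluster name are made-up placeholders, and the provider library path varies by distribution:

```ini
# /etc/mysql/conf.d/galera.cnf  (sketch; addresses are placeholders)
[mysqld]
binlog_format=ROW                    # Galera requires row-based replication
default_storage_engine=InnoDB
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="example_cluster"
wsrep_cluster_address="gcomm://10.0.0.1,10.0.0.2,10.0.0.3"
```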

It is important to note that the minimum number of servers in a MySQL Cluster is 3 because when one server recovers from a crash it needs to copy data from one of the remaining two servers making one of them a “donor“. So in case of crash recovery you must have at least two online servers from which the crashed server can recover the data.
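A related way to look at the three-node minimum (a standard Galera consideration, beyond the donor argument above) is majority quorum: more than half the nodes must stay up for the cluster to keep accepting writes. A quick sketch of the arithmetic:

```shell
# Majority quorum for an N-node Galera cluster: a strict majority of
# nodes must remain online for the cluster to stay writable.
total=3
quorum=$(( total / 2 + 1 ))   # = 2 for a 3-node cluster
echo "A ${total}-node cluster tolerates $(( total - quorum )) node failure(s)"
```

With three nodes the cluster survives the loss of one node; with only two, a single failure would leave no majority.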

Also, a MariaDB cluster is essentially the same thing as a MySQL cluster, just based on MariaDB, a newer and more optimized fork of MySQL.

MySQL Clusters Galera Replications

MySQL Clusters Galera Replications

What is a MySQL Cluster and MariaDB Cluster as a Service?

MySQL clusters as a service offer you a great way to achieve two things at the same time: high database availability and hands-off cluster management.

First, you get high database availability, with close to 100% uptime even in case of datacenter issues.

Secondly, outsourcing the tedious tasks associated with managing a MySQL cluster lets you focus on your business instead of spending time on cluster management.

In fact, managing a cluster on your own may require you to perform the following tasks:

  1. Provision and set up the cluster – it may take an experienced database administrator a few hours to fully set up an operational cluster.
  2. Monitor the cluster – one of your techs must keep an eye on the cluster 24×7, because many issues can happen: cluster desynchronization, a server crash, a disk filling up, etc.
  3. Optimize and resize the cluster – this can be a huge pain if you have a large database and need to resize the cluster. This task needs to be handled with extra care.
  4. Backup management – you need to back up your cluster data to avoid losing it if your cluster fails.
  5. Issue resolution – you need an experienced engineer who can dedicate a lot of effort to optimizing and solving issues with your cluster.
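As a rough illustration of the backup task above, a self-managed daily backup often boils down to a cron entry like the following sketch. The paths, user, and schedule are assumptions, and `--single-transaction` takes a consistent InnoDB dump from one node without locking the others (the `%` in the date format must be escaped inside a crontab):

```shell
# /etc/cron.d/mysql-cluster-backup  (sketch; paths are placeholders)
# Daily at 02:00: dump all databases from one cluster node and compress.
0 2 * * * root mysqldump --single-transaction --all-databases | gzip > /var/backups/mysql/all-$(date +\%F).sql.gz
```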

Instead, you can save a lot of time and money by going with a MySQL Cluster as a Service offered by MySQLcluster.me team.

So what’s included into MySQL Cluster as a Service offered by MySQLcluster.me?

Apart from high database availability with an almost guaranteed uptime of 100%, you get the ability to:

  1. Resize the MySQL cluster at any time – you can increase or decrease cluster resources (RAM, CPU, disk) to adjust for spikes in your traffic.
  2. Optimized disk and database performance – disks can achieve a rate of 100,000 IOPS, which is crucial for database operation.
  3. Datacenter choice – you can decide in which datacenter you would like to host the cluster. Currently supported: Digital Ocean, Amazon AWS, RackSpace, Google Compute Engine.
  4. 24×7 cluster support – if anything happens to your cluster, our team will always assist you and even advise you on your cluster architecture.
  5. Cluster backups – our team sets up backups for you, so your cluster is automatically backed up daily to a secure location.
  6. Cluster monitoring – our team sets up automatic monitoring, so in case of any issue our team starts working on your cluster even if you are away from your desk.

There are a lot of advantages to having your own MySQL cluster, but it must be set up and managed with care and experience.

Speak to the MySQL Cluster team to find the most suitable package for you.

