Redo Backup and Recovery Tool to Backup and Restore Linux Systems

Redo Backup and Recovery is a complete backup and disaster recovery solution that is simple enough for anyone to use. It supports bare-metal restore, which means that even if your hard drive fails completely or is damaged by a virus, you can still get a fully functional system running again in less than 10 minutes.

All your files and settings are restored to the exact state they were in when the most recent snapshot was taken. Redo Backup and Recovery is distributed as a live ISO image built on Ubuntu, which provides a graphical user interface. You can use this tool to back up and restore any system, whether it runs Windows or Linux, and it is open source and completely free for both personal and commercial use.

Features

Key features of the Redo Backup and Recovery tool:

  1. No Installation Needed : You don’t need to install Redo Backup, and you don’t even need a working operating system to restore. Just put the CD into your system and reboot. No need to re-install Windows again!
  2. Boots in Seconds : The system boots from CD in about 30 seconds and automatically detects all your hardware. It consumes little space and few resources; the download is only 250MB and is completely free. No serial key or license required.
  3. It’s Pretty : Redo Backup gives an easy to use interface with network access and a complete desktop via Ubuntu. You can run other applications while your operating system backup is being transferred.
  4. Works with Linux or Windows : Redo Backup works on both operating systems, so any computer user can back up and restore all machines with this tool.
  5. Finds Network Shares : Redo Backup automatically searches your local area network for drives to back up to or restore from. No need to worry about shared drives or attached network storage devices; they are detected automatically.
  6. Recover Lost Data : Redo Backup provides a file recovery tool that automatically finds deleted files and saves them to another drive.
  7. Easy Internet Access : Is your computer crashed or broken, but you need internet access to download drivers? Don’t worry, just insert the Redo Backup CD, reboot, and start browsing the Internet.
  8. Drive Configuration Tools : The Redo Backup start menu provides powerful graphical drive management and partition editing tools to edit, manage and resize partitions.

Download Redo Backup

As mentioned, it is a Live CD image, so you cannot run this program directly from within the operating system. Follow the steps described below to use Redo Backup.

Getting Started

Download the latest version of the Redo Backup live CD.

You will need to burn the ISO disc image using CD burning software, such as the KDE burning tool on Linux; for Windows there are plenty of options to choose from.
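If you prefer the command line, the ISO can also be written from an existing Linux system. The commands below are a minimal sketch: the ISO filename, the optical drive (/dev/sr0) and the USB stick (/dev/sdX) are assumptions, so confirm the device names with lsblk before writing, and note that the USB method only works if the image is hybrid-bootable.

$ wodim -v dev=/dev/sr0 redobackup-livecd.iso                                  [burn the ISO to a blank CD]
$ sudo dd if=redobackup-livecd.iso of=/dev/sdX bs=4M status=progress && sync   [write the ISO to a USB stick, wiping it]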

After burning the ISO image, put the CD in and reboot your computer to use Redo Backup. While the system is starting you may need to press the F8 or F12 key to boot from the CD-ROM drive.

Once you boot the system with the Live CD, a mini operating system is loaded into memory and launches Redo Backup. Now decide what you want to do: back up the machine or restore it from the last saved image. For example, here I’m taking a backup of my own Ubuntu 12.10 system; follow the screen grabs below for your reference.

Click on “Start Redo Backup“.

Redo Backup Boot Screen

Welcome Screen of “Redo Backup“.

Redo Backup Welcome Screen

Easily create a backup image of your computer or completely restore from one. Click on “Backup” to create full system backup.

Backup Linux Server

Select Backup

Select the source drive from the drop-down list that you would like to create a backup image from. Click on “Next“.

Linux Partition Backup

Select Partition

Select which parts of the drive to create backup of. Leave all parts selected if you are unsure. Click on “Next“.

Select Partition Backup Drive

Select the destination drive; it can be a local drive connected to your computer or a shared network drive.

Linux Backup Drive

Select Backup Destination Drive

Next it will ask you to give a unique name for this backup image, such as the “date“. Today’s date is automatically entered for you, for example “20130820“.

Next it will back up your system to the location you selected. This may take an hour or more depending on the speed of your computer and the amount of data you have.

That’s it, you have successfully created a backup image of your computer. If you would like to restore this image on any other computer, follow the same procedure, select “Restore“, and then follow the on-screen instructions.

Reference Link

Redo Backup homepage.

Source

How to Clone or Backup Linux Disk Using Clonezilla

Clonezilla is one of the greatest open source backup tools for Linux. A simple, fast and intuitive guided command line wizard that runs on top of a live Linux kernel, with no graphical user interface required, makes it a perfect candidate backup tool for every sysadmin out there.

With Clonezilla, not only can you perform a full backup of a device’s data blocks directly to another drive, also known as disk cloning, but you can also back up entire disks or individual partitions remotely (using SSH, Samba or NFS shares) or locally to images, which can all be encrypted and stored in a central backup storage, typically a NAS, or even on external hard disks or other USB devices.

Suggested Read: 8 Best Open Source “Disk Cloning/Backup” Softwares for Linux Servers

In case of a drive failure the backed-up images can be easily restored to a new device plugged into your machine, with the remark that the new device must be at least the same size as the failed drive that was backed up.

In simpler terms, if you clone a 120 GB hard-disk which has 80 GB free space, you can’t restore the backed-up image to a new 80 GB hard-drive. The new hard drive which will be used for cloning or restoring the old one must have at least the same size as the source drive (120 GB).

Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

In this tutorial we are going to show you how you can clone a block device, typically a hard disk on top of which we run a CentOS 7 server (or any Linux distribution such as RHEL, Fedora, Debian, Ubuntu, etc.).

In order to clone the target disk you need to physically add a new disk into your machine with at least the same size as source disk used for cloning.

Requirements

  1. Clonezilla ISO image – http://clonezilla.org/downloads.php
  2. New Hard Drive – physically plugged-in into the machine and operational (consult BIOS for device information)

Suggested Read: How to Backup or Clone Linux Partitions Using ‘cat’ Command

How to Clone or Backup CentOS 7 Disk with Clonezilla

  1. After you download and burn the Clonezilla ISO image to a CD/DVD, place the bootable media into your machine optical drive, reboot the machine and press the specific key (F11, F12, ESC, DEL, etc.) in order to instruct the BIOS to boot from the appropriate optical drive.

2. The first screen of Clonezilla should appear on your screen. Select the first option, Clonezilla live and press Enter key to proceed further.

Clonezilla Boot Screen

3. After the system loads the required components into your machine RAM a new interactive screen should appear which will ask you to choose your language.

Use up or down arrow keys to navigate through language menu and press Enter key in order to choose your language and move forward.

Select Clonezilla Language

4. On the next screen you have the option to configure your keyboard. Just press Enter key at Don’t touch keymap option to move to the next screen.

Configuring Console-data

5. On the next screen choose Start Clonezilla in order to enter Clonezilla interactive console menu.

Start Clonezilla for Interactive Menu

6. Because in this tutorial we are going to perform a local disk clone, choose the second option, device-device, and press the Enter key again to proceed further.

Also, make sure that the new hard drive is already physically plugged into your machine and properly detected by it.

Select Disk to Clone

7. On the next screen choose Beginner mode wizard and press Enter key to move to the next screen.

If the new hard disk is larger than the old one, you can choose Expert mode and select the -k1 and -r options, which ensure that the partitions are proportionally created on the target disk and the file system is automatically resized.

Be advised to use the expert mode options with extreme caution.

Select Beginner Mode for Disk Cloning

8. On the next menu choose the disk_to_local_disk option and press Enter to continue. This option performs a full clone of the source disk (MBR, partition table and data) to the target disk.

Select Disk to Local Disk Cloning

9. On the next screen you must choose the source disk that will be used for cloning. Pay attention to the disk names used here. In Linux a disk can be named sda, sdb, etc., meaning that sda is the first disk, sdb the second and so on.

In case you’re not sure of your source disk name, you can physically examine the source disk label and serial number, check the SATA port cabling on the motherboard, or consult the BIOS to obtain disk information.
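As a quick hedged aside, you can also list the block devices from any Linux shell before booting Clonezilla (or from Clonezilla’s shell option); the columns below are standard lsblk fields and show each disk’s name, size, model and serial number:

$ lsblk -o NAME,SIZE,MODEL,SERIAL      [list disks with size, model and serial number]
$ cat /proc/partitions                 [quick overview of detected block devices]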

In this guide we’re using VMware virtual disks for cloning and sda is the source disk that will be used for cloning. After you have successfully identified the source drive, press the Enter key to move to the next screen.

Select Linux Disk to Clone

10. Next, select the second disk that will be used as a target for cloning and press Enter key to continue. Proceed with maximum attention because the cloning process is destructive and will wipe all data from the target disk, including MBR, partition table, data or any boot loader.

Choose Local Disk as Target Cloning

11. If you’re sure the source file system is not corrupted, you can safely choose Skip checking/repairing source file system and press Enter to continue.

Next, the command used for this cloning session will be displayed on your screen and the prompt will wait for you to hit the Enter key in order to continue.

Skip Checking Source Filesystem

12. Before starting the real process of disk cloning, the utility will display some reports concerning its activity and will issue two warning messages.

Press y key twice to agree with both warnings and press y key the third time in order to clone the boot loader on the target device.

Confirm Disk Cloning Warning Messages

13. After you have agreed to all warnings the clone process will start automatically. All data from the source drive will be replicated to the target device with no user interference.

Clonezilla will display a graphical report regarding all data it transfers from one partition to the other, including the time it takes and the transfer speed.

Clonezilla Linux Disk Cloning Process

14. After the cloning process finishes successfully a new report will be displayed on your screen, and the prompt will ask whether you would like to use Clonezilla again by entering the command line or exit the wizard.

Just press Enter key to move to the new wizard and from there select poweroff option in order to halt your machine.

Clonezilla Linux Disk Cloning Completed

Poweroff Machine

That’s all! The cloning process is finished and the new hard disk can now be used in place of the old one, after the old one has been physically detached from the machine. If the old hard drive is still in good shape you can store it in a safe location and use it as a backup for extreme cases.

In case your CentOS file system hierarchy spans multiple disks, you need to make sure that each disk in the hierarchy is also duplicated, so that the data is still recoverable if one of the disks fails.

Source

Amanda – An Advanced Automatic Network Backup Tool For Linux

In the era of information technology, data is priceless. We have to protect data from unauthorised access as well as from any kind of data loss, and each of these concerns has to be managed separately.

Install Amanda in Linux

Amanda Backup Solution

Here, in this article, we will be covering the data backup process, which is a must for most system administrators and, most of the time, a rather boring activity. The tool we will be using is ‘Amanda‘.

What is Amanda

Amanda stands for Advanced Maryland Automatic Network Disk Archiver. It is a very useful backup tool designed to back up and archive computers on the network to disk, tape or cloud.

Amanda History

The Computer Science Department of the University of Maryland (UoM) has long been a source of free, high-quality software on par with proprietary software. The Advanced Maryland Automatic Network Disk Archiver was developed by UoM, but the project is no longer maintained by UoM and is now hosted on SourceForge, where it remains in development.

Features of Amanda

  1. Open source archiving tool written in C and Perl.
  2. Capable of backing up data from multiple computers on a network.
  3. Based on a client-server model.
  4. Scheduled backups supported.
  5. Available as a free Community Edition as well as an Enterprise Edition with full support.
  6. Available for most Linux distributions.
  7. Windows machines supported using Samba or a native win32 client.
  8. Supports tape as well as disk drives for backup.
  9. Supports tape-spanning, i.e., splitting larger files across multiple tapes.
  10. The commercial Enterprise Amanda is developed by Zmanda.
  11. Zmanda includes – Zmanda Management Console (ZMC), scheduler, cloud based service and plugin framework.
  12. The cloud based service works with Amazon S3.
  13. The plugin framework supports applications like Oracle Database, Samba, etc.
  14. Amanda Enterprise (Zmanda) supports image backup, which makes it possible to back up live VMware machines.
  15. Takes less time than other backup tools to create a backup of the same volume of data.
  16. Supports secure connections between server and clients using OpenSSH.
  17. Encryption possible using GPG, and compression is supported.
  18. Recovers gracefully from errors.
  19. Reports detailed results, including errors, via email.
  20. Very configurable, stable and robust because of high quality code.

Installation of Amanda Backup in Linux

We are building Amanda from source and then installing it. The process of building and installing Amanda is the same for any distribution, whether it is YUM based or APT based.

Before compiling from source, we need to install some required packages from the repository using the yum or apt-get command.

On RHEL, CentOS & Fedora
# yum install gcc make gcc-c++ glib2-devel gnuplot perl-ExtUtils-Embed bison flex
On Debian, Ubuntu & Linux Mint
$ sudo apt-get install build-essential gnuplot

Once the required packages are installed, you can download Amanda (latest version Amanda 3.3.5) from the link below.

  1. http://sourceforge.net/projects/amanda/files/latest/download

Alternatively, you may use following wget command to download and compile it from source as shown below.

# wget http://jaist.dl.sourceforge.net/project/amanda/amanda%20-%20stable/3.3.5/amanda-3.3.5.tar.gz
# tar -zxvf amanda-3.3.5.tar.gz
# cd amanda-3.3.5/ 
# ./configure 
# make
# make install		[On Red Hat based systems]
# sudo make install	[On Debian based systems]

After successful installation, verify the amanda installation using the following command.

# amadmin --version

amadmin-3.3.5

Note: Use amadmin administrative interface to control Amanda backups. Also note that amanda configuration file is located at ‘/etc/amanda/intra/amanda.conf’.

Dump Filesystem

Run the following command to dump the whole filesystem using amanda and send the email to the email address listed in configuration file.

# amdump all
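Because Amanda is built around scheduled runs, amdump is normally executed from cron rather than by hand. The sketch below is only an illustration: the configuration name ‘all’ is taken from the command above, and the amdump path and the user it should run as (usually a dedicated Amanda/backup user) may differ on your installation.

$ crontab -e        [edit the crontab of the Amanda backup user]
0 2 * * * /usr/local/sbin/amdump all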

Flush Amanda

If backups are still sitting in Amanda’s holding disk, use amflush to write them out to the configured storage media.

# amflush -f all

Amanda has a lot of options to send backup output to a precise location and create custom backups. Amanda itself is a very vast topic and it is difficult to cover all of it in one article. We will be covering those options and commands in later posts.

That’s all for now. I’ll be here again with another article soon. Till then stay tuned and connected to us and don’t forget to provide us with your valuable feedback in comment section.

Source

System Tar and Restore – A Versatile System Backup Script for Linux

System Tar and Restore is a versatile system backup script for Linux systems. It comes with two bash scripts, the main script star.sh and a GUI wrapper script star-gui.sh, which operate in three modes: backup, restore and transfer.

Read Also: 14 Outstanding Backup Utilities for Linux Systems

Features

  1. Full or partial system backup
  2. Restore or transfer to the same or different disk/partition layout.
  3. Restore or transfer backup to an external drive such as USB, SD card etc.
  4. Restore a BIOS-based system to UEFI and vice versa.
  5. Arrange a system in a virtual machine (such as virtualbox), back it up and restore it in a normal system.

Requirements:

  1. gtkdialog 0.8.3 or later (for the gui).
  2. tar 1.27 or later (acls and xattrs support).
  3. rsync (for Transfer Mode).
  4. wget (for downloading backup archives).
  5. gptfdisk/gdisk (for GPT and Syslinux).
  6. openssl/gpg (for encryption).

How to Install System Tar and Restore Tool in Linux

To install System Tar and Restore program, you need to first install all the required software packages as listed below.

$ sudo apt install git tar rsync wget gptfdisk openssl  [On Debian/Ubuntu]
# yum install git tar rsync wget gptfdisk openssl       [On CentOS/RHEL]
# dnf install git tar rsync wget gptfdisk openssl       [On Fedora]

Once all the required packages are installed, it’s time to download the scripts by cloning the system tar and restore repository to your system. Run these scripts with root user privileges, or otherwise use the sudo command.

$ cd Download
$ git clone https://github.com/tritonas00/system-tar-and-restore.git
$ cd system-tar-and-restore/
$ ls

Install System Tar and Restore

Linux System Backup

First create a directory where your system backup files will be stored (you can actually use any other directory of your choice).

$ sudo mkdir /backups

Now run the following command to create a system backup file in /backups directory, the archive file will be compressed using the xz utility, where the flags are.

  • -i – specifies the operation mode(0 meaning backup mode).
  • -d – specifies destination directory, where the backup file will be stored.
  • -c – defines the compression utility.
  • -u – allows for reading additional tar/rsync options.
$ sudo ./star.sh -i 0 -d /backups -c xz -u "--warning=none"

Perform Linux System Backup

To exclude the /home in the backup, add the -H flag, and use gzip compression utility as shown.

$ sudo ./star.sh -i 0 -d /backups -c gzip -H -u "--warning=none"
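Before relying on an archive, it can be reassuring to peek inside it with tar. This is a hedged example: the exact archive filename produced by star.sh will differ on your system, so adjust it to whatever you find in /backups.

$ ls /backups                              [find the archive name]
$ tar -tJf /backups/backup.tar.xz | head   [list the first entries of an xz-compressed archive]
$ tar -tzf /backups/backup.tar.gz | head   [same, for a gzip-compressed archive]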

Restore Linux System Backup

You can also restore a backup as in the following command.

$ sudo ./star.sh -i 1 -r /dev/sdb1 -G /dev/sdb -f /backups/backup.tar.xz

where the options are:

  • -i – specifies operation mode (1 meaning restore mode).
  • -r – defines targeted root (/) partition.
  • -G – defines the grub partition.
  • -f – specifies the backup file path.

The final example shows how to run it in transfer mode (2). The new option here is -b, which sets the boot partition.

$ sudo ./star.sh -i 2 -r /dev/sdb2 -b /dev/sdb1 -G /dev/sdb

In addition, if you have mounted /usr and /var on separate partitions, considering the previous command, you can specify them using the -t switch, as shown.

$ sudo ./star.sh -i 2 -r /dev/sdb2 -b /dev/sdb1 -t "/var=/dev/sdb4 /usr=/dev/sdb3" -G /dev/sdb

We have just looked at a few basic options of the System Tar and Restore script; you can view all available options using the following command.

$ star.sh --help 

If you are accustomed to graphical user interfaces, you can use the GUI wrapper star-gui.sh instead. But you need to install gtkdialog – used to create graphical (GTK+) interfaces and dialog boxes using shell scripts in Linux.

System Tar and Restore Gui

You can find more command-line usage examples from the System Tar and Restore Github repository: https://github.com/tritonas00/system-tar-and-restore.

Summary

System Tar and Restore is a simple yet powerful, and versatile system backup script for Linux systems. Try it out comprehensively and share your thoughts about it via the feedback form below.

Source

How to Create Encrypted and Bandwidth-efficient Backups Using ‘Duplicity’ in Linux

Experience shows that you can never be too paranoid about system backups. When it comes to protecting and preserving precious data, it is best to go the extra mile and make sure you can depend on your backups if the need arises.

Create Encrypted Linux File System Backups

Duplicity – Create Encrypted Linux File System Backups

Even today, when some cloud and hosting providers offer automated backups for VPS’s at a relatively low cost, you will do well to create your own backup strategy using your own tools in order to save some money and then perhaps use it to buy extra storage or get a bigger VPS.

Sounds interesting? In this article we will show you how to use a tool called Duplicity to back up and encrypt files and directories. In addition, using incremental backups for this task will help us save space.

That said, let’s get started.

Installing Duplicity

To install duplicity in RHEL/CentOS and similar enterprise distros, you will have to enable the EPEL repository first (you can omit this step if you’re using Fedora itself):

# yum update && yum install epel-release

Then run,

# yum install duplicity

For Debian and derivatives:

# aptitude update && aptitude install duplicity

In theory, many methods for connecting to a file server are supported, although only ssh/scp/sftp, local file access, rsync, ftp, HSI, WebDAV and Amazon S3 have been tested in practice so far.

Once the installation completes, we will exclusively use sftp in various scenarios, both to back up and to restore the data.

Our test environment consists of a CentOS 7 box (to be backed up) and a Debian 8 machine (backup server).

Creating SSH keys to access remote servers and GPG keys for encryption

Let’s begin by creating the SSH keys in our CentOS box and transfer them to the Debian backup server.

The commands below assume the sshd daemon is listening on port XXXXX on the Debian server. Replace AAA.BBB.CCC.DDD with the actual IP of the remote server.

# ssh-keygen -t rsa
# ssh-copy-id -p XXXXX root@AAA.BBB.CCC.DDD

Then you should make sure that you can connect to the backup server without using a password:

Create SSH Keys

Now we need to create the GPG keys that will be used for encryption and decryption of our data:

# gpg --gen-key

You will be prompted to enter:

  1. Kind of key
  2. Key size
  3. How long the key should be valid
  4. A passphrase

Create GPG Keys

To create the entropy needed for the creation of the keys, you can log on to the server via another terminal window and perform a few tasks or run some commands to generate entropy (otherwise you will have to wait for a long time for this part of the process to finish).

Once the keys have been generated, you can list them as follows:

# gpg --list-keys

List Generated GPG Keys

The string highlighted in yellow above is known as the public key ID, and is a required argument for encrypting your files.

Creating a backup with Duplicity

To start simple, let’s only backup the /var/log directory, with the exception of /var/log/anaconda and /var/log/sa.

Since this is our first backup, it will be a full one. Subsequent runs will create incremental backups (unless we add the full option with no dashes right next to duplicity in the command below):

PASSPHRASE="YourPassphraseHere" duplicity --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7

Make sure you don’t miss the double slash in the above command! It indicates an absolute path to a directory named /backups/centos7 on the backup box, which is where the backup files will be stored.

Replace YourPassphraseHere, YourPublicKeyIdHere and RemoteServer with the passphrase you entered earlier, the GPG public key ID, and the IP or hostname of the backup server, respectively.
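To force a full (rather than incremental) backup on a later run, the full keyword goes right next to duplicity, as noted above. Below is a sketch reusing the same placeholders; scheduling an occasional full run like this keeps the incremental chain from growing indefinitely.

PASSPHRASE="YourPassphraseHere" duplicity full --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7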

Your output should be similar to the following image:

Create /var Partition Backup

Create Backup using Duplicity

The image above indicates that a total of 86.3 MB was backed up into a 3.22 MB archive at the destination. Let’s switch to the backup server to check on our newly created backup:

Confirm Backup File

A second run of the same command yields a much smaller backup size and time:

Compress Backup

Restoring backups using Duplicity

To successfully restore a file, a directory with its contents, or the whole backup, the destination must not exist (duplicity will not overwrite an existing file or directory). To clarify, let’s delete the cron log in the CentOS box:

# rm -f /var/log/cron

Delete Cron Logs

The syntax to restore a single file from the remote server is:

# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore filename sftp://root@RemoteHost//backups/centos7 /where/to/restore/filename

where,

  1. filename is the file to be extracted, with a relative path to the directory that was backed up
  2. /where/to/restore is the directory in the local system where we want to restore the file to.

In our case, to restore the cron main log from the remote backup we need to run:

# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore cron sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/cron

The cron log should be restored to the desired destination.

Likewise, feel free to delete a directory from /var/log and restore it using the backup:

# rm -rf /var/log/mail
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore mail sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/mail

In this example, the mail directory should be restored to its original location with all its contents.

Other features of Duplicity

At any time you can display the list of archived files with the following command:

# duplicity list-current-files sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7

Delete backups older than 6 months:

# duplicity remove-older-than 6M sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7

Restore myfile inside directory gacanepa as it was 2 days and 12 hours ago:

# duplicity -t 2D12h --file-to-restore gacanepa/myfile sftp://root@AAA.BBB.CCC.DDD:XXXXX//remotedir/backups /home/gacanepa/myfile

In the last command, we can see an example of the usage of the time interval (as specified by -t): a series of pairs, where each one consists of a number followed by one of the characters s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days, weeks, months, or years respectively).
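Another handy check, mentioned here as an aside, is the verify action, which compares the backup against the local files without restoring anything; the options below simply mirror the backup command used earlier:

# PASSPHRASE="YourPassphraseHere" duplicity verify --exclude /var/log/anaconda --exclude /var/log/sa sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log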

Summary

In this article we have explained how to use Duplicity, a backup utility that provides encryption for files and directories out of the box. I highly recommend you take a look at the duplicity project’s web site for further documentation and examples.

We’ve also provided the duplicity man page in PDF format for your reading convenience; it is a complete reference guide.

Feel free to let us know if you have any questions or comments.

Source

Aptik – A Tool to Backup/Restore Your Favourite PPAs and Apps in Ubuntu

As we all know, Ubuntu has a six month release cycle for new versions. After each fresh install, all the PPAs and packages of your choice need to be re-added. To avoid doing all of that and to save your time, here we bring you a fantastic tool called ‘Aptik‘.

Aptik (Automated Package Backup and Restore) is a GUI application that lets you back up your favourite PPAs and packages. It is very difficult to remember which packages are installed and where they were installed from. We can back up all the PPAs before a re-installation or upgrade of the OS and restore them afterwards.

Install Aptik in Ubuntu

Backup/Restore PPAs and Apps

Aptik is an open source package that simplifies backup and restore of PPAs, applications and packages after a fresh installation or upgrade of Debian-based Ubuntu, Linux Mint and other Ubuntu derivatives.

Features of Aptik

  1. Custom PPAs and the Apps
  2. Backup Themes and icons
  3. Backup applications installed via APT cache
  4. Apps installed from Ubuntu Software Centre
  5. Aptik command-line options

How to Backup PPA’s and Packages on Old Systems

By default the Aptik tool is not available in the Ubuntu Software Centre; you need to use a PPA to install it. Add the following PPA to your system, update the local repository and install the package as shown.

Installation of Aptik

$ sudo apt-add-repository -y ppa:teejee2008/ppa
$ sudo apt-get update
$ sudo apt-get install aptik      [Commandline]
$ sudo apt-get install aptik-gtk  [GUI]

Start ‘Aptik‘ from the applications menu.

Start Aptik

Create Backup Directory

Create or select a backup directory to store all your selections for re-use on your new install.

Aptik Backup-Directory

Select Backup Directory

Backup Software Sources

Click the ‘Backup‘ button for Software Sources. A list of installed third-party PPAs will be displayed along with their Packages names that are installed from the PPA.

Aptik Software Sources

Backup Software Sources

Note: A PPA with a green icon is active and has some packages installed, whereas a yellow icon indicates a PPA that is active but has no packages installed.

Select your favourite PPAs and click on the ‘Backup‘ button to create backup. All PPAs will be stored in a file called ‘ppa.list‘ in the selected backup directory.
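As an aside, the same idea can be reproduced by hand. Assuming ppa.list simply holds one ‘ppa:owner/name’ entry per line (a hypothetical layout used here only for illustration), the PPAs could be re-added on a fresh system with a small shell loop like the one below; Aptik’s Restore button automates this for you.

$ while read ppa; do sudo apt-add-repository -y "$ppa"; done < ppa.list
$ sudo apt-get update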

Backup Downloaded Packages (APT Cache)

Click the ‘Backup‘ button to copy all the downloaded packages to backup folder.

Aptik Downloaded Packages

Backup Downloaded Packages

Note: All the downloaded packages stored under your ‘/var/cache/apt/archives‘ folder will be copied to the backup folder.

This step is only useful if you are re-installing the same version of the Linux distribution. It can be skipped when upgrading the system, since all the packages for the new release will be newer than the packages in the system cache.

Backup Software Selections

Clicking the ‘Backup‘ button will show a list of all installed top-level packages.

Aptik Software Selections

Software Selections

Note: By default all the packages installed by the Linux distribution are un-selected, because those packages are part of the Linux distribution. If required, those packages can be selected for backup.

By default all extra packages installed by the user are marked as selected, because those packages were installed via the Software Centre or by running the apt-get install command. If required, they can be un-selected.

Select your favourite packages to backup and click the ‘Backup‘ button. A file named ‘packages.list‘ will be created under the backup directory.

Backup Themes and Icons

Click the ‘Backup‘ button to list all the installed themes and icons from the ‘/usr/share/themes‘ and ‘/usr/share/icons‘ directories. Next, select your themes and click on the ‘Backup‘ button to backup.

Aptik Themes Icons

Backup Themes Icons

Aptik Command-line Options

Run ‘aptik --help’ in the terminal to see the full list of available options.

Aptik Command-line

Command-line Options

To restore those backups, you will need to install Aptik from its own PPA on the newly installed system. After that, hit the ‘Restore’ button to restore all your PPAs, packages, themes and icons to your freshly installed system.

Conclusion

You may be wondering why such cool stuff is not available in Ubuntu by default? Ubuntu does something similar via ‘Ubuntu One‘, and even then only as a paid service. What do you think about this tool? Share your views through our comment section.

Source: Aptik

Source

How to Auto Backup Files to USB Media When Connected

A backup is the last defense against data loss, offering a means to restore original data. You can use either a removable media such as an external hard drive or USB flash disk or a shared network folder, or a remote host to back up your data. It’s very easy (and equally essential) to automatically backup your important files without you having to remember to do so.

Read Also: 24 Outstanding Backup Tools for Linux Systems in 2018

In this article, we will learn how to auto backup data to a removable media after connecting it to your Linux machine. We will test with an external disk. This is a basic guide to get you started with using udev for real-life solutions.

For the purpose of this article, we need a modern Linux system with:

  1. systemd systems and services manager
  2. udev device manager
  3. rsync backup tool

How to Configure Udev Rules for Removable Media

Udev is a device manager that enables you to define rules that can, among other things, trigger the execution of a program or script when a device is added to or removed from a running system, as part of device event handling. We can use this feature to execute a backup script after a removable media is added to the running system.

Before we configure the actual rule for device event handling, we need to provide udev some attributes of the removable media that will be used for the backup. Connect the external disk to the running system and run the following lsusb command to identify its vendor and product ID.

For testing purposes, we will be using a 1TB external hard disk as shown.

$ lsusb

Find Device Vendor ID of Removable Media

From the output of the above command, our device vendor ID is 125f, which we will specify in the udev rules as explained below.
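If you prefer to query udev directly, udevadm can walk the device’s attributes. This is a hedged aside: the device node /dev/sdb below is only an example, so substitute the node of your external disk as reported by lsblk.

$ lsblk                                          [find the device node of the external disk]
$ udevadm info --attribute-walk --name=/dev/sdb | grep -i 'idVendor\|idProduct'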

First remove the connected media from the system and create a new udev rules file called 10.autobackup.rules under the directory /etc/udev/rules.d/.

The 10 in the filename specifies the order of rules execution. The order in which rules are parsed is important; you should always create custom rules to be parsed before the defaults.

$ sudo vim /etc/udev/rules.d/10.autobackup.rules

Then add the following rule in it:

SUBSYSTEM=="block", ACTION=="add", ATTRS{idVendor}=="125f", SYMLINK+="external%n", RUN+="/bin/autobackup.sh"

Let’s briefly explain the above rule:

  • "==": is an operator to compare for equality.
  • "+=": is an operator to add the value to a key that holds a list of entries.
  • SUBSYSTEM: matches the subsystem of the event device.
  • ACTION: matches the name of the event action.
  • ATTRS{idVendor}: matches sysfs attribute values of the event device, which is the device vendor ID.
  • RUN: specifies a program or script to execute as part of the event handling.

Save the file and close it.

Create an Auto Backup Script

Now create an auto backup script that will automatically back up files to the removable USB drive when it is connected to the system.

$ sudo vim /bin/autobackup.sh 

Now copy and paste the following script, making sure to replace the values of BACKUP_SOURCE, BACKUP_DEVICE and MOUNT_POINT in the script.

#!/usr/bin/bash
BACKUP_SOURCE="/home/admin/important"
BACKUP_DEVICE="/dev/external1"
MOUNT_POINT="/mnt/external"

#check if the mount point directory exists, if not create it
if [ ! -d "$MOUNT_POINT" ] ; then
	/bin/mkdir "$MOUNT_POINT";
fi

/bin/mount -t auto "$BACKUP_DEVICE" "$MOUNT_POINT"

#run a differential backup of files to the removable drive, then unmount it
/usr/bin/rsync -auz "$BACKUP_SOURCE" "$MOUNT_POINT" && /bin/umount "$BACKUP_DEVICE"
exit

Then make the script executable with the following command.

$ sudo chmod +x /bin/autobackup.sh

Next, reload the udev rules using the following command.

$ sudo udevadm control --reload

The next time you connect your external hard disk or whatever device you configured to the system, all your documents from the specified location should be auto backed up to it.

Note: How effectively this works may be influenced by the filesystem on your removable media and the udev rules you write, especially capturing the device attributes.

For more information, see the udev, mount and rsync man pages.

$ man udev
$ man mount 
$ man rsync 

You might also like to read these following Linux backup related articles.

  1. rdiff-backup – A Remote Incremental Backup Tool for Linux
  2. Tomb – A File Encryption and Personal Backup Tool for Linux
  3. System Tar and Restore – A Versatile Backup Script for Linux
  4. How to Create Bandwidth-efficient Backups Using Duplicity in Linux
  5. Rsnapshot – A Local/Remote Backup Tool for Linux
  6. How to Sync Two Apache Web Servers/Websites Using Rsync

That’s all for now! In this article, we have explained how to auto backup data to a removable media after connecting it to your Linux machine. We would like to hear from you via the feedback form below.

Source

Unison – An Ultimate Local/Remote File Synchronization Tool for Linux

File synchronization is the process of mirroring files and data in two or more locations in accordance with certain protocols. Files and data are the most valuable things in this era of information technology. Through file synchronization, we ensure that one or more copies of our priceless data are always available in case of a disaster of any kind, or when we need to work in many locations.

A good file synchronizer is supposed to have the features listed below:

  1. Cryptographic synchronisation, as a security implementation.
  2. A good data compression ratio.
  3. A solid algorithm to detect data duplication.
  4. Keeping track of source file changes.
  5. Scheduled synchronisation.

One such tool is Unison. Here in this article we will be discussing “Unison” in detail, along with its features, functionality and a lot more.

What is Unison?

Unison is a cross-platform file synchronization application which is useful for synchronizing data between two or more locations, be they computers or storage devices.

Features of Unison

  1. Released under the General Public License (GPL).
  2. Open source and cross platform, available for Linux, Unix, BSD, Windows and Mac.
  3. Makes the same version of a file available across different machines, regardless of where it was last modified.
  4. Cross platform synchronization is possible, i.e., a Windows machine can be synchronized with a *nix server.
  5. Communicates over the standard TCP/IP protocol, i.e., synchronization is possible between any two machines over the internet regardless of geographical location.
  6. Smart management – shows a conflict when a file has been modified on both sides and presents it to the user.
  7. Secured SSH connection – encrypted data transfer.
  8. The rsync algorithm is used, so only the modified part is transferred and overwritten. Hence, it’s fast in execution and maintenance.
  9. Robust in nature.
  10. Written in the “Objective Caml” programming language.
  11. Mature and stable, with no active development required.
  12. It is a user-level program, i.e., the application doesn’t need superuser privileges.
  13. It is known for its clear and precise specification.

Installation of Unison in Linux

The current stable release (Unison-2.40.102) can be downloaded from the link below:

Download Unison 2.40.102 Stable

Alternatively, we can also download and Install “Unison”, if it is available in repo using apt or yum command as shown below.

On Debian/Ubuntu/Linux Mint

Open a terminal using “Ctrl+Alt+T” and run the following command on the terminal.

$ sudo apt-get install unison
On RHEL/CentOS/Fedora

First, enable EPEL repository and then install using the following command.

$ sudo yum install unison

NOTE: The above command will Install Unison without GUI. If you need to Install Unison with GUI support, install ‘unison-gtk‘ package (Only available for Debian based distros) using the below command.

# apt-get install unison-gtk

How to Use Unison

Unison is used to synchronize a set of files in a directory tree to another location with similar structure, which may be a local host or remote host.

Local File Synchronization

Let’s create 5 files under your Desktop and then synchronize them to a folder called ‘desk-back‘ in your home directory.

$ cd Desktop/
$ touch 1.txt 2.txt 3.txt 4.txt 5.txt
$ ls

1.txt 2.txt 3.txt 4.txt 5.txt
$ mkdir /home/server/desk-back

Now run the ‘unison‘ command to synchronize your Desktop files to ‘desk-back‘ in your home directory.

$ unison /home/server/Desktop /home/server/desk-back/
Sample Output
Contacting server...
Looking for changes
Warning: No archive files were found for these roots, whose canonical names are:
/home/server/Desktop
/home/server/desk-back
This can happen either
because this is the first time you have synchronized these roots,
or because you have upgraded Unison to a new version with a different
archive format.
Update detection may take a while on this run if the replicas are
large.
Unison will assume that the 'last synchronized state' of both replicas
was completely empty. This means that any files that are different
will be reported as conflicts, and any files that exist only on one
replica will be judged as new and propagated to the other replica.
will be judged as new and propagated to the other replica.
If the two replicas are identical, then no changes will be reported. If you see this message repeatedly, it may be because one of your machines
is getting its address from DHCP, which is causing its host name to change
between synchronizations. See the documentation for the UNISONLOCALHOSTNAME
environment variable for advice on how to correct this.
Donations to the Unison project are gratefully accepted:
http://www.cis.upenn.edu/~bcpierce/unison
Press return to continue.[]
...
...
Saving synchronizer state
Synchronization complete at 13:52:15 (5 items transferred, 0 skipped, 0 failed)

Now check the location /home/server/desk-back to see whether the synchronization process was successful:

$ cd /home/server/desk-back/
$ ls

1.txt 2.txt 3.txt 4.txt 5.txt

Remote File Synchronization

For remote file synchronization, you must have the same version of Unison installed on both the local and remote servers. Run the following command to verify that the local unison can start and connect to the remote unison server.

$ unison -testServer /home/ravisaive/Desktop/ ssh://172.16.25.125//home/ravisaive/Desktop/
Sample Output
Contacting server...
ravisaive@172.16.25.125's password: 
Connected [//tecmint//home/ravisaive/Desktop -> //tecmint//home/ravisaive/Desktop]

The result above indicates that the remote server was connected successfully; now sync the files using the command below.

$ unison -batch /home/ravisaive/Desktop/ ssh://172.16.25.125//home/ravisaive/Desktop/
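To avoid retyping the roots every time, Unison can also read them from a profile stored under ~/.unison. Below is a minimal sketch; the profile name ‘desktop-sync’ and the paths are assumptions based on the example above, so adjust them to your own setup.

$ cat ~/.unison/desktop-sync.prf
root = /home/ravisaive/Desktop
root = ssh://172.16.25.125//home/ravisaive/Desktop
batch = true

$ unison desktop-sync      [run the profile non-interactively]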

Executing GUI Unison

The first step is to set up a profile, which requires you to provide basic information such as the profile name, what you want to synchronize, the source and destination locations, etc.

To start Unison GUI, run the following command on the terminal.

$ unison-gtk

Create New Unison Profile

Create New Profile

Enter Unison Profile Description

Enter Profile Description

Select Unison Synchronization Type

Select Sync Type

Select Sync Directories

Select Partition Type

Unison Profile Created

New Profile Created

Select Created Profile

Unison Sync Message

Once the profile is created and the source as well as the destination are entered, we are welcomed with the window below.

Unison Flle Synchronization Process

File Synchronization Process

Just select all the files and click on OK. The files will start synchronizing in both directions, based on the last update timestamp.

Conclusion

Unison is a wonderful tool which makes it possible to have custom synchronisation either way (Bidirectional), available in GUI as well as command Line Utility. Unison provides what it promises. This tool is very easy to use and requires no extra effort. As a tester I was very much impressed with this application. It has a whole lot of features which can be implemented as required. For more information read unison-manual.

Read Also:

  1. Rsync (Remote Sync) of Files
  2. Rsnapshot (Rsync Based) File Synchronizer

That’s all for now. I’ll soon be here again with another interesting article. Till then stay tuned and connected to Tecmint. Don’t forget to provide us with your valuable feedback in our comment section.

Source

How to Clone/Backup Linux Systems Using – Mondo Rescue Disaster Recovery Tool

Mondo Rescue is an open source, free disaster recovery and backup utility that allows you to easily create complete system (Linux or Windows) clone/backup ISO images to CD, DVD, tape, USB devices, hard disk, and NFS. It can be used to quickly restore or redeploy a working image onto other systems; in the event of data loss, you will be able to restore the entire system data from the backup media.

Mondo program is available freely for download and released under GPL (GNU Public License) and has been tested on a large number of Linux distributions.

This article describes the Mondo installation and the usage of the Mondo tools to back up your entire system. Mondo Rescue is a disaster recovery and backup solution that lets system administrators take full backups of their Linux and Windows file system partitions to CD/DVD, tape or NFS, and restore them with the help of the Mondo restore media feature, which is used at boot time.

Installing MondoRescue on RHEL / CentOS / Scientific Linux

The latest Mondo Rescue packages (current version of Mondo is 3.0.3-1) can be obtained from the “MondoRescue Repository“. Use the “wget” command to download and add the repository to your system. The Mondo repository will install suitable binary software packages such as afio, buffer, mindi, mindi-busybox, mondo and mondo-doc for your distribution, if they are available.

For RHEL/CentOS/SL 6,5,4 – 32-Bit

Download the MondoRescue repository under “/etc/yum.repos.d/” as file name “mondorescue.repo“. Please download correct repository for your Linux OS distribution version.

# cd /etc/yum.repos.d/

## On RHEL/CentOS/SL 6 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/6/i386/mondorescue.repo

## On RHEL/CentOS/SL 5 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/5/i386/mondorescue.repo

## On RHEL/CentOS/SL 4 - 32-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/4/i386/mondorescue.repo

For RHEL/CentOS/SL 6,5,4 – 64-Bit

# cd /etc/yum.repos.d/

## On RHEL/CentOS/SL 6 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/6/x86_64/mondorescue.repo

## On RHEL/CentOS/SL 5 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/5/x86_64/mondorescue.repo

## On RHEL/CentOS/SL 4 - 64-Bit ##
# wget ftp://ftp.mondorescue.org/rhel/4/x86_64/mondorescue.repo

Once you have successfully added the repository, run “yum” to install the latest Mondo tool.

# yum install mondo

Installing MondoRescue on Debian / Ubuntu / Linux Mint

Debian users can use “wget” to grab the MondoRescue repository for Debian 6 and 5 distributions. Run the following commands to append “mondorescue.sources.list” to the “/etc/apt/sources.list” file and install the Mondo packages.

On Debian

## On Debian 6 ##
# wget ftp://ftp.mondorescue.org/debian/6/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo
## On Debian 5 ##
# wget ftp://ftp.mondorescue.org/debian/5/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo

On Ubuntu/Linux Mint

To install Mondo Rescue in Ubuntu 12.10, 12.04, 11.10, 11.04, 10.10 and 10.04 or Linux Mint 13, open the terminal and add the MondoRescue repository to the “/etc/apt/sources.list” file. Run the following commands to install the Mondo Rescue packages.

# wget ftp://ftp.mondorescue.org/ubuntu/`lsb_release -r|awk '{print $2}'`/mondorescue.sources.list
# sh -c "cat mondorescue.sources.list >> /etc/apt/sources.list" 
# apt-get update 
# apt-get install mondo

Creating Cloning or Backup ISO Image of System/Server

After installing Mondo, run the “mondoarchive” command as the “root” user. Then follow the screenshots, which show how to create an ISO-based backup media of your full system.

# mondoarchive

Welcome to Mondo Rescue

Mondo Rescue Welcome Screen


Please enter the full path name to the directory for your ISO Images. For example: /mnt/backup/

Mondo Rescue Storage Directory

Select the type of compression. For example: bzip2, gzip or lzo.

Select Type of Compression

Select the maximum compression option.

Mondo Rescue Compression Speed

Select Compression Speed

Please enter how large you want each ISO image in MB (Megabytes). This should be less than or equal to the size of the CD-R(W)’s (i.e. 700) and for DVD’s (i.e. 4480).

Mondo Rescue ISO Size

Define Mondo Rescue ISO Size

Please give a name of your ISO image filename. For example: tecmint1 to obtain tecmint-[1-9]*.iso files.

Mondo Rescue Prefix

Enter Name of Mondo Rescue

Please add the filesystems to back up (separated by “|“). The default filesystem “/” means a full backup.

Mondo Rescue Backup Paths

Enter Backup Paths

Please exclude the filesystems that you don’t want to back up (separated by “|“). For example, “/tmp” and “/proc” are always excluded. If you want a full backup of your system, just hit enter.

Mondo Rescue Exclude Paths

Enter Exclude File System

Please enter your temporary directory path or select default one.

Mondo Rescue Temporary Directory

Enter Temporary Directory Name

Please enter your scratch directory path or select default one.

Mondo Rescue Scratch Directory Name

Enter Scratch Directory Name

If you would like to back up extended attributes, just hit “enter“.

Mondo Rescue Extended Backup

Enter Extended Backup Attributes

If you want to verify your backups after Mondo has created them, click “Yes“.

Mondo Rescue Verify Backups

Verify Backups

If you’re using a stable standalone Linux kernel, click “Yes“; if you are using another kernel, say “Gentoo” or “Debian“, hit “No“.

Mondo Rescue Kernel

Select Stable Linux Kernel

Click “Yes” to proceed further.

Mondo Rescue Backup Process

Proceed Cloning Process

Creating a catalog of “/” filesystem.

Mondo Rescue Making Catalog

Creating Catalog for File System

Dividing filelist into sets.

Mondo Rescue Dividing File List

Dividing File List

Calling MINDI to create boot+data disk.

Mondo Rescue Boot Data Disk

Creating Boot Data Disk

Backing up the filesystem. This may take a couple of hours, please be patient.

Mondo Rescue Backup Filesystem

Backing up File System

Backing up big files.

Mondo Rescue Big Files Backup

Big Files Backup

Running “mkisofs” to make ISO Image.

Mondo Rescue Creating ISO

Making ISO Image

Verifying ISO Image tarballs.

Mondo Rescue Verify ISO

Verify ISO

Verifying ISO Image Big files.

Mondo Rescue Verify Big Files

Verify Big Files

Finally, Mondo Archive has completed. Please hit “Enter” to go back to the shell prompt.

Mondo Rescue Backup Completed

Backup Completed

If you selected the default backup path, you will find an ISO image under “/var/cache/mondo/“, which you can burn to a CD/DVD for a later restore.

To restore all files automatically, boot the system with the Mondo ISO image and at the boot prompt type “nuke” to restore files. Here is a detailed video that demonstrates how to restore files automatically from CD/DVD media.
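For unattended runs, mondoarchive can also be driven from the command line instead of the interactive wizard. The sketch below mirrors the choices made in the screens above; the flags (-O backup, -i ISO mode, -d destination, -s media size, -p filename prefix, -E exclude paths) are as commonly documented for mondoarchive, but verify them against “man mondoarchive” on your version before relying on them.

# mondoarchive -Oi -d /var/cache/mondo -s 4480m -p tecmint1 -E "/tmp|/proc"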

For other distributions, you can also grab Mondo Rescue packages at mondorescue.org download page.

Source

Rclone – Sync Files Directories from Different Cloud Storage

Rclone is a command line program written in Go language, used to sync files and directories from different cloud storage providers such as: Amazon Drive, Amazon S3, Backblaze B2, Box, Ceph, DigitalOcean Spaces, Dropbox, FTP, Google Cloud Storage, Google Drive, etc.

As you see, it supports multiple platforms, which makes it a useful tool to sync your data between servers or to a private storage.

Rclone comes with the following features

  • MD5/SHA1 hash checks at all times to ensure file integrity.
  • Timestamps are preserved on files.
  • Partial syncs supported on a whole file basis.
  • Copy mode for new or changed files.
  • One way sync to make a directory identical.
  • Check mode – hash equality check.
  • Can sync to and from network, eg two different cloud accounts.
  • (Encryption) backend.
  • (Cache) backend.
  • (Union) backend.
  • Optional FUSE mount (rclone mount).

How to Install rclone in Linux Systems

The installation of rclone can be completed in two different ways. The easier one is using their installation script, by issuing the following command.

# curl https://rclone.org/install.sh | sudo bash

What this script does is check the OS type it is run on and download the archive for that OS. Then it extracts the archive, copies the rclone binary to /usr/bin/rclone and sets 755 permissions on the file.

In the end, when the installation is complete, you should see the following line:

Rclone v1.44 has successfully installed.
Now run “rclone config” for setup, Check https://rclone.org/docs/ for  more details.

The second way to install rclone is by issuing the following commands.

# curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
# unzip rclone-current-linux-amd64.zip
# cd rclone-*-linux-amd64

Now copy the binary file and give it executable permissions.

# cp rclone /usr/bin/
# chown root:root /usr/bin/rclone
# chmod 755 /usr/bin/rclone

Install rclone manpage.

# mkdir -p /usr/local/share/man/man1
# cp rclone.1 /usr/local/share/man/man1/
# mandb 

How to Configure rclone in Linux Systems

Next what you will need to do is run the rclone config to create your config file. It will be used for authentication for future usage of rclone. To run the configuration setup run the following command.

# rclone config

You will see the following prompt:

2018/11/13 11:39:58 NOTICE: Config file “/home/user/.config/rclone/rclone.conf” not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q>

The options are as follows:

  • n) – Create new remote connection
  • s) – set password protection for your configuration
  • q) – exit the config

For the purpose of this tutorial let's press "n" and create a new connection. You will be asked to give the new connection a name. After that you will be prompted to select the type of storage to be configured:

rclone – New Remote Connection

I have named my connection “Google” and selected “Google Drive”, which is under number 12. The rest of the questions you can answer by simply leaving the default answer, which is an empty “”.

When asked to, you may select “autoconfig”, which will generate all the required info to connect to your Google Drive and give rclone permissions to use data from Google Drive.

The process looks something like this:

Google Application Client Secret - leave blank normally.
client_secret>
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
 2 / Read-only access to file metadata and file contents.
   \ "drive.readonly"
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ "drive.file"
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ "drive.appfolder"
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ "drive.metadata.readonly"
scope> 1
ID of the root folder - leave blank normally.  Fill in to access "Computers" folders. (see docs).
root_folder_id> 
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id = 
client_secret = 
scope = drive
root_folder_id = 
service_account_file =
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2018-11-13T11:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

How to Use rclone in Linux Systems

Rclone has quite a long list of available options and commands to be used with. We will try to cover some of the more important ones:

List Remote Directory

# rclone lsd <remote-dir-name>:

rclone – List Remote Directory

Copy Data with rclone

# rclone copy source:sourcepath dest:destpath

Note that if rclone finds duplicates, those will be ignored:

rclone – Copy Data

Sync data with rclone

If you want to sync some data between directories, you should use rclone with sync command.

The command should look like this:

# rclone sync source:path dest:path [flags]

In this case the source is synced to the destination, changing the destination only! This method skips unchanged files. Since the command can cause data loss, you can first run it with “--dry-run” to see exactly what will be copied and deleted.
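For example, using the “Google” remote configured earlier (the local path and remote folder below are only placeholders), a dry run followed by the real sync would look like this:

# rclone sync /home/user/Documents Google:Backups/Documents --dry-run
# rclone sync /home/user/Documents Google:Backups/Documents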

rclone Sync Data

Move Data with rclone

To move data, you can use rclone with move command. The command should look like this:

# rclone move source:path dest:path [flags]

The content from the source will be moved (deleted from the source) and placed at the selected destination.

Other useful rclone Commands

To create a directory on destination.

# rclone mkdir remote:path

To remove a directory.

# rclone rmdir remote:path

Check if files on source and destination match:

# rclone check source:path dest:path

Delete files:

# rclone delete remote:path

Each of the rclone commands can be used with different flags and includes its own help menu. For example, you can do a selective delete using the delete command. Let's say you want to delete files larger than 100M; the command would look like this.

# rclone --min-size 100M delete remote:path

It is highly recommended to review the manual and help for each command to get the most out of rclone. The full documentation of rclone is available at: https://rclone.org/

Conclusion

rclone is a powerful command line utility to help you manage data between different cloud storage providers. While in this article we have just scratched the surface of rclone's capabilities, there is much more to be achieved with it, especially when used in combination with the cron service (for example).
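As a hedged illustration of the cron idea, a nightly sync of a local directory to the “Google” remote could be scheduled as shown below; the paths and the remote folder are assumptions, and logging to a file makes failures easier to spot.

$ crontab -e
0 3 * * * /usr/bin/rclone sync /home/user/Documents Google:Backups/Documents --log-file=/home/user/rclone.log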

Source
