12 Open Source Cloud Storage Software to Store and Sync Your Data Quickly and Safely

The name "Cloud" suggests something vast, spread over a large area. True to the name, in the technical field the Cloud is something virtual that provides services to end users in the form of storage, application hosting, or virtualization of physical hardware. Nowadays, cloud computing is used by small and large organizations alike, either for data storage or to provide customers with the kinds of services described below.

12 Free Open Source Cloud Storage Software for Linux

Three types of services are commonly associated with the Cloud:

  1. SaaS (Software as a Service) – users access publicly available applications hosted on a provider's cloud to store their data, for example Gmail.
  2. PaaS (Platform as a Service) – apps or software are hosted on a provider's public cloud, for example Google App Engine, which hosts users' apps.
  3. IaaS (Infrastructure as a Service) – a physical machine is virtualized and offered to customers, giving them the feel of a real machine.

Cloud Storage

Cloud storage means storing data away from the user's local system, across dedicated servers meant for this purpose. One of the earliest examples was CompuServe, which in 1983 offered its customers 128K of disk space to store files. Although the field faces potential threats, including loss of data, hacking, masquerading and other attacks, it is under active development and will remain so; many organizations have come forward with their own solutions for cloud storage and data privacy, which is strengthening and stabilizing its future.

In this article, we present a selection of such solutions that are open source and have been successfully adopted by large numbers of users and big organizations.

1. OwnCloud

A Dropbox replacement for Linux users, offering many features similar to Dropbox's, ownCloud is a self-hosted file sync and share server.

Being open source, it gives users access to an unlimited amount of storage space. The project started in January 2010 with the aim of providing an open source replacement for proprietary cloud storage providers. It is written in PHP and JavaScript, is available for Windows, Linux and OS X desktops, and also provides mobile clients for Android and iOS.

OwnCloud employs a WebDAV server for remote access and can integrate with a large number of databases, including SQLite, MariaDB, MySQL, Oracle Database and PostgreSQL.
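Because the WebDAV endpoint behaves like a network drive, you can mount it directly from a Linux client. Here is a minimal sketch, assuming the davfs2 package is available on a Debian-based client and using a hypothetical server URL (the /remote.php/webdav path is ownCloud's standard WebDAV endpoint):

$ sudo apt-get install davfs2
$ sudo mkdir -p /mnt/owncloud
$ sudo mount -t davfs https://cloud.example.com/remote.php/webdav /mnt/owncloud   # log in with your ownCloud credentials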

It provides a large number of features, notable among which are: file storage and encryption, music streaming, content sharing across URLs, Mozilla Sync hosting, an RSS/Atom feed reader, one-click app installation, video and PDF viewers, and many more.

The latest version of ownCloud, 8.2, adds further features, including an improved design and the ability for admins to notify users and set retention limits on files in the trash.

OwnCloud

Read More: Install OwnCloud 8 to Create Personal Cloud Storage in Linux

2. Seafile

Seafile is another open source file hosting system, offering its users all the advantages they would expect from good cloud storage software. It is written in C and Python, with the latest stable release, 4.4.3, published on 15th October 2015.

Seafile provides desktop clients for Windows, Linux and OS X, and mobile clients for Android, iOS and Windows Phone. Along with a community edition released under the General Public License, it also has a professional edition released under a commercial license, which provides extra features not supported in the community edition, namely user logging and text search.

Since it was open sourced in July 2012, it has gained international attention. Its main features are syncing and sharing, with a strong focus on data safety. Other features, which have made it popular in universities such as the University of Mainz, HU Berlin and the University of Strasbourg, as well as among thousands of other users worldwide, are: online file editing, differential sync to minimize the required bandwidth, and client-side encryption to secure client data.

Seafile Cloud Storage

Read More: Install Seafile Secure Cloud Storage in Linux

3. Pydio

Earlier known as AjaXplorer, Pydio is free software aiming to provide file hosting, sharing and syncing. The project was initiated in 2009 by Charles du Jeu and, since 2010, it has shipped on all NAS equipment supplied by LaCie.

Pydio is written in PHP and JavaScript and is available for Windows, Mac OS and Linux, and additionally for iOS and Android. With nearly 500,000 downloads on SourceForge, and adoption by companies like Red Hat and Oracle, Pydio is one of the most popular cloud storage packages on the market.

In itself, Pydio is just a core which runs on a web server and can be accessed through any browser. Its integrated WebDAV interface makes it ideal for online file management, and SSL/TLS encryption secures the transmission channels, protecting the data and ensuring its privacy. Other features which come with this software are: a text editor with syntax highlighting, audio and video playback, integration with Amazon S3, FTP or MySQL databases, an image editor, and file or folder sharing, even through public URLs.

Pydio Cloud Storage

4. Ceph

Ceph was initially started by Sage Weil for his doctoral dissertation; in fall 2007 he continued the project full time and expanded the development team. In April 2014, Red Hat brought its development in-house. Eight releases of Ceph have been published so far, the latest being Hammer on April 7, 2015. Ceph is a highly scalable, freely available distributed storage cluster written in C++ and Perl.

Data can be stored in Ceph as a block device, as a file, or as an object through the RADOS gateway, which provides support for the Amazon S3 and OpenStack Swift APIs. Apart from being secure, scalable and reliable, other features provided by Ceph are (a brief usage sketch follows the list):

  1. A network file system aiming for high performance and large data storage.
  2. Compatibility with VM clients.
  3. Allowance of partial/complete reads and writes.
  4. Object-level mappings.
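As a rough sketch of day-to-day usage, assuming a cluster is already configured and a pool named mypool exists (the pool and image names are hypothetical), you could check cluster health and carve out a block device like this:

# ceph -s                                # summary of cluster health and capacity
# rbd create mypool/disk1 --size 4096    # create a 4 GB block device image
# rbd map mypool/disk1                   # map it to a local /dev/rbd* device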

Ceph Storage

5. Syncany

Released around March 2014, Syncany is one of the lightest open source cloud storage and file sharing applications. It is actively developed by Philipp C. Heckel and, as of today, is available as a command-line tool for all supported platforms, with a GUI version under active development.

One of the most important things about Syncany is that it is only a tool: it requires you to bring your own storage, which can be FTP or SFTP storage, WebDAV or Samba shares, Amazon S3 buckets, etc.

Other features which make it an awesome tool to have are: 128-bit AES+Twofish/GCM encryption for all data leaving the local machine, file sharing support so you can share your files with friends, offsite storage chosen by the user instead of provider-based storage, interval-based or on-demand backups, binary-compatible file versioning, and local deduplication of files. It is especially advantageous for companies that want to use their own storage space rather than trusting a provider's storage.
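A minimal command-line session might look like the following sketch, assuming the sy client is installed (the repository setup prompts are interactive, so details will vary with your chosen backend):

$ sy init      # pick a storage backend (e.g. SFTP) and an encryption password
$ sy up        # encrypt, deduplicate and upload local changes
$ sy down      # pull changes made on other machines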

Syncany Cloud Storage

6. Cozy

Cozy is not just a file sharing or synchronization tool; it is bundled as a complete package of functions that can help you build your own app engine.

Like Syncany, Cozy gives the user flexibility in terms of storage space: you can either use your own personal storage or trust the Cozy team's servers. It relies on other open source software for its functioning, namely CouchDB for database storage and Whoosh for indexing. It is available for all platforms, including smartphones.

The main features which make it a must-have cloud storage package are: the ability to store all your contacts, files and calendars in the cloud and sync them between laptop and smartphone; the ability to create your own apps and share them with other users simply by sharing the Git URL of the repository; and hosting of static websites or HTML5 video games.

As a further step toward availability even on cheap hardware, the Cozy team has introduced Cozy Light, which performs well on modest devices such as the Raspberry Pi or a small DigitalOcean VPS.

Cozy Cloud Storage

7. GlusterFS

GlusterFS is a network-attached file storage system. Initially started by Gluster Inc., the project is now under Red Hat Inc., following its purchase of Gluster Inc. in 2011; Red Hat integrated GlusterFS with its Red Hat Storage Server, renaming it Red Hat Gluster Storage. It is available for platforms including Linux, OS X, NetBSD and OpenSolaris, with some of its parts licensed under GPLv3 while others are dual-licensed under GPLv2. It has been used as a foundation for academic research.

GlusterFS uses a client-server model, with servers deployed as storage bricks. Clients connect to servers using a custom protocol over TCP/IP, InfiniBand or SDP and store files on the GlusterFS server. The functionalities it provides over the files include file-based mirroring and replication, file-based striping, load balancing, scheduling and disk caching, to name a few.

Another very useful property is its flexibility: data is stored on native file systems such as XFS, ext4, etc.
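To make this concrete, here is a hedged sketch of setting up a simple two-node replicated volume; the host names server1/server2 and the brick path /data/brick1/gv0 are hypothetical:

# gluster peer probe server2                 # run on server1 to form the trusted pool
# gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
# gluster volume start gv0
# mount -t glusterfs server1:/gv0 /mnt       # on a client, mount the volume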

GlusterFS Storage

Read More: How to Install GlusterFS in Linux Systems

8. StackSync

StackSync is a Dropbox-like tool running on top of OpenStack Swift, specially designed to meet organizations' need to sync their data in one place. It is written in Java and released under the GNU General Public License v3.

Its framework is composed of three main components: a synchronization server, OpenStack Swift, and desktop and mobile clients. The synchronization server processes metadata and logic, OpenStack Swift stores the actual data, and the desktop and mobile clients let users sync their data to their personal cloud.

StackSync employs various data optimizations that allow it to scale and cater to the needs of thousands of people while making efficient use of cloud resources. Its other features are: a RESTful API provided as a Swift module, which allows mobile apps and other third-party applications to use it to sync data; separation between data and metadata, which makes it flexible to deploy in different configurations; and both a public configuration, useful for public cloud providers, and a private configuration, which addresses the needs of big organizations looking for a better cloud storage solution.

StackSync Cloud Storage

9. Git-annex

Git-annex is another file synchronization tool, developed by Joey Hess and released in October 2010, which also aims to solve file sharing and synchronization problems, but independently of any commercial service or central server. It is written in Haskell and available for Linux, Android, OS X and Windows.

Git-annex manages files with git without checking their contents into git. Instead, it stores only a link to each file in the git repository and manages the file contents associated with that link in a separate place. It can also ensure that enough copies of a file exist, which helps when recovery of lost data is required.

Further, it makes file contents available on demand, as and when required, so files do not need to be present on every system; this greatly reduces storage overhead. Notably, git-annex is available in various Linux distributions, including Fedora, Ubuntu and Debian.
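A minimal session, assuming git and git-annex are installed (the directory and file names are hypothetical), might look like this:

$ git init ~/annex && cd ~/annex
$ git annex init "laptop"
$ git annex add big-video.mp4     # commits a symlink; the content lives under .git/annex
$ git commit -m "Add big file"
$ git annex get big-video.mp4     # fetch the content from a remote that has it
$ git annex drop big-video.mp4    # free local space once enough copies exist elsewhere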

Git-Annex

10. Yandex.Disk

Yandex.Disk is a cloud storage and synchronization service released in April 2012 and available on all major platforms, including Linux, Windows, OS X, Android, iOS and Windows Phone. It allows users to synchronize data between devices and share it with others online.

Features Yandex.Disk provides to its users include: a built-in flash player that lets people preview songs, file sharing via download links, synchronization of files between a user's devices, unlimited storage, and WebDAV support, allowing easy file management from any application supporting the WebDAV protocol.

Yandex-Disk

11. Bitcasa

Developed by Bitcasa Inc., a California-based company, Bitcasa is yet another cloud storage and synchronization solution, available for Windows, OS X, Android and Linux. It is not directly open source software, but it is still part of the open source community, since many of the components it builds on are open source, such as gcc/clang, libcurl, OpenSSL, APR and RapidJSON.

Besides its main features of file storage, access and sharing, other features that make it popular among customers across more than 140 countries worldwide are: its convergent encryption protocol, which is mostly safe but carries some risks, as reported in one article, and its provision of secure APIs and white-label storage applications for OEMs, network operators and software developers.

Bitcasa Storage

12. NAS4Free

NAS is an acronym for 'Network Attached Storage', and '4Free' indicates its free and open source nature. NAS4Free was released under this name in March 2012. It is network-attached storage server software with a user interface written in PHP, released under the Simplified BSD License. It supports the i386/IA-32 and x86-64 platforms.

NAS4Free supports sharing across multiple operating systems. It also includes ZFS, disk encryption and more, with protocols such as Samba, CARP, Bridge, FTP, RSYNC, TFTP and NFS. Unlike other packages, NAS4Free can be installed and operated from a USB/SSD key or hard disk, or can even be booted from a LiveCD or LiveUSB with a small USB key for config storage. NAS4Free has won awards including Project of the Month (August 2015) and Project of the Week (May 2015).

NAS4Free Network Storage

Conclusion

These are some well-known open source cloud storage and synchronization packages which have either gained a lot of popularity over the years or have just entered and made their mark in this industry, with a long way to go. You can share any software that you or your organization might be using, and we will add it to this list.


ELRepo – Community Repo for Enterprise Linux (RHEL, CentOS & SL)

If you are using an Enterprise Linux distribution (Red Hat Enterprise Linux or one of its derivatives, such as CentOS or Scientific Linux) and need support for specific or new hardware, you are in the right place.

In this article we will discuss how to enable the ELRepo repository, a software source that includes everything from filesystem drivers to webcam drivers with everything in between (support for graphics, network cards, sound devices, and even new kernels).

Enabling ELRepo in Enterprise Linux

Although ELRepo is a third-party repository, it is well supported by an active community on Freenode (#elrepo) and a mailing list for users.

If you are still apprehensive about adding an independent repository to your software sources, note that the CentOS project lists it as trustworthy in its wiki (see here). If you still have concerns, feel free to ask away in the comments!

It is important to note that ELRepo not only provides support for Enterprise Linux 7, but also for previous versions. Considering that CentOS 5 is reaching its end of life (EOL) at the end of this month (March 2017) that may not seem like a big deal, but keep in mind that CentOS 6 won’t reach its EOL until March 2020.

Regardless of the EL version, you will need to import the repository’s GPG key before actually enabling it:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Enable ELRepo in EL5

# rpm -Uvh http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm

Enable ELRepo in EL6

# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

Enable ELRepo in EL7

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

In this article we will only deal with EL7, and share a few examples in the next section.

Understand ELRepo Channels

To better organize the software contained in this repository, ELRepo is divided into 4 separate channels:

  • elrepo is the main channel and is enabled by default. It does not contain packages present in the official distribution.
  • elrepo-extras contains packages that replace some provided by the distribution. It is not enabled by default. To avoid confusion, when a package needs to be installed or updated from this repository, it can be temporarily enabled via yum as follows (replace package with an actual package name):
# yum --enablerepo=elrepo-extras install package
  • elrepo-testing provides packages that will at some point be part of the main channel but are still under testing.
  • elrepo-kernel provides long-term and stable mainline kernels that have been specially configured for EL.

Both elrepo-testing and elrepo-kernel are disabled by default and can be enabled, as in the case of elrepo-extras, if we need to install or update a package from them.
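For example, to pull the latest mainline kernel from the elrepo-kernel channel (kernel-ml is the mainline kernel package; kernel-lt is the long-term one), something like this should work:

# yum --enablerepo=elrepo-kernel install kernel-ml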

To list the available packages in each channel, run one of the following commands:

# yum --disablerepo="*" --enablerepo="elrepo" list available
# yum --disablerepo="*" --enablerepo="elrepo-extras" list available
# yum --disablerepo="*" --enablerepo="elrepo-testing" list available
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

The following image illustrates the first example:

List ELRepo Available Packages

Summary

In this post we have explained what ELRepo is and under which circumstances you may want to add it to your software sources.

If you have any questions or comments about this article, feel free to use the form below to reach us. We look forward to hearing from you!


How to Install and Configure ‘NethServer’ – A CentOS Based All-in-One Linux Distribution

NethServer is a powerful and secure open source Linux distribution, built on top of CentOS 6.6 and designed for small offices and medium enterprises. Bundled with a large number of modules that can simply be installed through its web interface, NethServer can turn your box into a mail server, FTP server, web server, web filter, firewall, VPN server, file cloud server, Windows file sharing server or email groupware server based on SOGo in no time, with just a few clicks.

Released in two editions, a free Community Edition and an Enterprise Edition that comes with paid support, this tutorial will cover the installation procedure of NethServer Free Edition (version 6.6) from an ISO image, although it can also be installed from repositories on a pre-installed CentOS system, using the yum command to download software packages from the web.

For example, if you wish to install NethServer on a pre-installed CentOS system, you can simply execute below commands to transform your current CentOS into NethServer.

# yum localinstall -y http://mirror.nethserver.org/nethserver/nethserver-release-6.6.rpm
# nethserver-install

To install additional nethserver modules, mention the name of the module as a parameter to the install script as shown below.

# nethserver-install nethserver-mail nethserver-nut

As I said above, this guide will only show installation procedure of NethServer Free Edition from an ISO image…

Download NethServer

The NethServer ISO image can be obtained from the following download link:

  1. http://www.nethserver.org/getting-started-with-nethserver/

Before starting the installation procedure, be aware that this method, based on the CD ISO image, will format and destroy all previous data on all of your machine's hard disks. As a security measure, make sure you remove all unwanted disk drives and keep only the disks where the system will be installed.

After the installation finishes you can re-attach the rest of the disks and add them to your NethServer LVM partitions (VolGroup-lv_root and VolGroup-lv_swap).

Step 1: Installation of NethServer

1. After you have downloaded the ISO image, burn it to a CD or create a bootable USB drive, place the CD/USB into your machine's CD drive/USB port and instruct the BIOS to boot from CD/USB. To boot from CD/USB, press the F12 key while the BIOS is loading, or consult your motherboard manual for the necessary boot key.

2. After the BIOS boot sequence completes, the first NethServer screen should appear. Choose NethServer interactive install and press the Enter key to continue.

NethServer Boot Menu

3. Wait a few seconds for the installer to load and a Welcome screen should appear. From this screen choose your preferred language, go to the Next button using the TAB or arrow keys and press Enter again to continue.

Choose Installation Language

4. On the next screen, choose your network interface for the internal network (green), through which you will administer the server, then jump to Next using the Tab key and press Enter to move to the interface and configure your network settings accordingly. When you're done with the network IP settings, choose the Next tab and hit Enter to continue.

Choose Network Interface

Network Configuration

5. Finally, choose the Install tab and hit the Enter key in order to install NethServer.

Important: Be aware that this step is data destructive and will erase and format all of your machine's disks. After this step the installer will automatically configure and install the system until it reaches the end.

Select NethServer Install

Installation Process

Installing Packages

Step 2: Setting Up Root Password

6. After the installation finishes and the system reboots, login into your NethServer console using the following default credentials:

User : root
Password: Nethesis,1234

Once logged into the system, issue the following command in order to change the default root password (make sure you choose a strong password of at least 8 characters, with at least one upper-case letter, one number and a special symbol):

# passwd root

Change NethServer Root Password

Step 3: Initial NethServer Configurations

7. After the root password has been changed, it's time to log in to the NethServer web administration interface and do the initial configuration, by navigating to the server IP address you configured during installation for the internal (green) network interface, on port 980, using the HTTPS protocol:

https://nethserver_IP:980

The first time you navigate to the above URL, a security warning should be displayed in your browser. Accept the self-signed certificate in order to proceed, and the log-in page should appear.

Log in with the root username and the root password you have already changed, and the Welcome page should appear. Now hit the Next button to proceed with the initial configuration.

Accept SSL Certificate

NethServer Login Credentials

NethServer Control Panel

8. Next, set up your server Hostname, enter your Domain name and hit Next to move forward.

Set Hostname and Domain

9. Choose your server physical Time zone from the list and hit Next button again.

Set Date and Timezone

10. The next page will ask you to change the SSH server's default port. It is good practice to take this security measure and change the SSH port to an arbitrary port of your choice. Once the SSH port value field is set, hit the Next button to continue.

Change SSH Port for NethServer

11. On the next page, choose the No, thanks option if you prefer not to send statistics to nethserver.org, and hit the Next button again to proceed further.

Usage Statistics


12. Now we have reached the final configuration step. Review all the settings so far and, once you're done, hit the Apply button to write the changes to your system. Wait a few seconds for the tasks to complete.

Review NethServer Configuration

Applying Changes

13. Once the task finishes, go to the Dashboard and review your machine's Status, Services, and Disk Usage, as illustrated in the screenshots below.

Check System Status

Check NethServer Services

Check Disk Usage

Step 4: Login through Putty and Update NethServer

14. The final step of this guide is to update your NethServer with the latest packages and security patches. Although this step can be done from the server's console or through the web interface (Software Center -> Updates), it is a good opportunity to log in remotely through SSH using Putty, as illustrated in the screenshots below, and perform the upgrade procedure by issuing the following command:

# yum upgrade

Open Putty

SSH to NethServer

Update NethServer

As the upgrade process runs, you will be asked whether you accept a series of keys. Answer all with yes (y) and, when the upgrade process finishes, reboot your system with the init 6 or reboot command in order to boot the system with the newly installed kernel.

# init 6
OR
# reboot

That's all! Your machine is now ready to become a mail and filter server, web server, firewall, IDS, VPN, file server, DHCP server or whatever other configuration best suits your premises.

Reference Link: http://www.nethserver.org/


How to Setup Local HTTP Yum Repository on CentOS 7

A software repository ("repo" for short) is a central file storage location where software packages are kept and maintained, and from which users can retrieve packages and install them on their computers.

Repositories are often stored on servers on a network, for example the internet, so they can be accessed by multiple users. However, you can also create and configure a local repository on your computer and access it as a single user, or allow access to other machines on your LAN (Local Area Network).

One advantage of setting up a local repository is that you don't need an internet connection to install software packages.

YUM (Yellowdog Updater Modified) is a widely used package management tool for RPM (RedHat Package Manager) based Linux systems, which makes software installation easy on Red Hat/CentOS Linux.

In this article, we will explain how to set up a local YUM repository over HTTP (using the Nginx web server) on a CentOS 7 VPS, and also show you how to find and install software packages on client CentOS 7 machines.

Our Testing Environment

Yum HTTP Repository Server:	CentOS 7 [192.168.0.100]
Client Machine:		CentOS 7 [192.168.0.101]

Step 1: Install Nginx Web Server

1. First start by installing Nginx HTTP server from the EPEL repository using the YUM package manager as follows.

# yum install epel-release
# yum install nginx 

2. Once you have installed Nginx web server, you can start it for the first time and enable it to start automatically at system boot.

 
# systemctl start nginx
# systemctl enable nginx
# systemctl status nginx

3. Next, you need to open ports 80 and 443 to allow web traffic to the Nginx service. Update the system firewall rules to permit inbound packets on HTTP and HTTPS using the commands below.

# firewall-cmd --zone=public --permanent --add-service=http
# firewall-cmd --zone=public --permanent --add-service=https
# firewall-cmd --reload

4. Now you can confirm that your Nginx server is up and running, using the following URL; if you see the default Nginx web page, all is well.

http://SERVER_DOMAIN_NAME_OR_IP 

Nginx Default Page

Step 2: Create Yum Local Repository

5. In this step, you need to install the required packages for creating, configuring and managing your local repository.

# yum install createrepo  yum-utils

6. Next, create the necessary directories (yum repositories) that will store packages and any related information.

# mkdir -p /var/www/html/repos/{base,centosplus,extras,updates}

7. Then use the reposync tool to synchronize CentOS YUM repositories to the local directories as shown.

# reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/www/html/repos/
Sample Output
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.fibergrid.in
 * epel: mirror.xeonbd.com
 * extras: mirrors.fibergrid.in
 * updates: mirrors.fibergrid.in
base/7/x86_64/group                                                    | 891 kB  00:00:02     
No Presto metadata available for base
(1/9911): 389-ds-base-snmp-1.3.7.5-18.el7.x86_64.rpm                   | 163 kB  00:00:02     
(2/9911): 389-ds-base-devel-1.3.7.5-18.el7.x86_64.rpm                  | 267 kB  00:00:02     
(3/9911): ElectricFence-2.2.2-39.el7.i686.rpm                          |  35 kB  00:00:00     
(4/9911): ElectricFence-2.2.2-39.el7.x86_64.rpm                        |  35 kB  00:00:00     
(5/9911): 389-ds-base-libs-1.3.7.5-18.el7.x86_64.rpm                   | 695 kB  00:00:04     
(6/9911): GConf2-devel-3.2.6-8.el7.i686.rpm                            | 110 kB  00:00:00     
(7/9911): GConf2-devel-3.2.6-8.el7.x86_64.rpm                          | 110 kB  00:00:00     
(8/9911): GConf2-3.2.6-8.el7.i686.rpm                                  | 1.0 MB  00:00:06     

In the above commands, the options mean:

  • -g – enables removing of packages that fail GPG signature checking after downloading.
  • -l – enables yum plugin support.
  • -d – enables deleting of local packages no longer present in repository.
  • -m – enables downloading of comps.xml files.
  • --repoid – specifies the repository ID.
  • --newest-only – tell reposync to only pull the latest version of each package in the repos.
  • --download-metadata – enables downloading all the non-default metadata.
  • --download_path – specifies the path to download packages.

8. Next, check the contents of your local directories to ensure that all the packages have been synchronized locally.

# ls -l /var/www/html/repos/base/
# ls -l /var/www/html/repos/base/Packages/
# ls -l /var/www/html/repos/centosplus/
# ls -l /var/www/html/repos/centosplus/Packages/
# ls -l /var/www/html/repos/extras/
# ls -l /var/www/html/repos/extras/Packages/
# ls -l /var/www/html/repos/updates/
# ls -l /var/www/html/repos/updates/Packages/

9. Now create new repodata for the local repositories by running the following commands, where the -g flag is used to update the package group information using the specified .xml file.

# createrepo -g comps.xml /var/www/html/repos/base/  
# createrepo -g comps.xml /var/www/html/repos/centosplus/	
# createrepo -g comps.xml /var/www/html/repos/extras/  
# createrepo -g comps.xml /var/www/html/repos/updates/  

10. To enable viewing the repositories and the packages in them via a web browser, create an Nginx server block which points to the root of your repositories, as shown.

# vim /etc/nginx/conf.d/repos.conf 

Add the following configuration to the file repos.conf.

server {
        listen   80;
        server_name  repos.test.lab;	#change  test.lab to your real domain 
        root   /var/www/html/repos;
        location / {
                index  index.php index.html index.htm;
                autoindex on;	#enable listing of directory index
        }
}

Save the file and close it.

11. Then test the configuration and restart your Nginx server:
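The nginx -t syntax check is a quick way to catch typos in the server block before restarting:

# nginx -t
# systemctl restart nginx

Now view the repositories from a web browser using the following URL.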

http://repos.test.lab

View Local Yum Repositories

Step 3: Create Cron Job to Synchronize and Create Repositories

12. Next, add a cron job that will automatically synchronize your local repos with the official CentOS repos to grab the updates and security patches.

# vim /etc/cron.daily/update-localrepos

Add these commands in the script.

#!/bin/bash
##specify all local repositories in a single variable
LOCAL_REPOS="base centosplus extras updates"
##a loop to update repos one at a time 
for REPO in ${LOCAL_REPOS}; do
reposync -g -l -d -m --repoid=$REPO --newest-only --download-metadata --download_path=/var/www/html/repos/
createrepo -g comps.xml /var/www/html/repos/$REPO/  
done

Save and close the script, then set the appropriate permissions on it.

# chmod 755 /etc/cron.daily/update-localrepos

Step 4: Setup Local Yum Repository on Client Machines

13. Now on your CentOS client machines, add your local repos to the YUM configuration.

# vim /etc/yum.repos.d/local-repos.repo

Copy and paste the configuration below in the file local-repos.repo (make changes where necessary).

[local-base]
name=CentOS Base
baseurl=http://repos.test.lab/base/
gpgcheck=0
enabled=1

[local-centosplus]
name=CentOS CentOSPlus
baseurl=http://repos.test.lab/centosplus/
gpgcheck=0
enabled=1

[local-extras]
name=CentOS Extras
baseurl=http://repos.test.lab/extras/
gpgcheck=0
enabled=1

[local-updates]
name=CentOS Updates
baseurl=http://repos.test.lab/updates/
gpgcheck=0
enabled=1

Save the file and start using your local YUM mirrors.
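To verify that packages now resolve from the local mirrors only, you can restrict yum to them explicitly; the httpd package here is just an arbitrary example:

# yum --disablerepo="*" --enablerepo="local-base,local-updates" install httpd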

14. Next, run the following command to view your local repos in the list of available YUM repos on the client machines.

#  yum repolist
OR
# yum repolist all

View Local Yum Repositories on Client

That’s all! In this article, we have explained how to setup a local YUM repository on CentOS 7. We hope that you found this guide useful. If you have any questions, or any other thoughts to share, use the comment form below.


Manage Log Messages Under Systemd Using Journalctl [Comprehensive Guide]

Systemd is a cutting-edge system and service manager for Linux systems: an init daemon replacement intended to start processes in parallel at system boot. It is now supported by a number of current mainstream distributions, including Fedora, Debian, Ubuntu, OpenSuSE, Arch, RHEL, CentOS, etc.

Earlier on, we explained the story behind ‘init’ and ‘systemd’; where we discussed what the two daemons are, why ‘init’ technically needed to be replaced with ‘systemd’ as well as the main features of systemd.

One of the main advantages of systemd over other common init systems is its support for centralized management of system and process logging using a journal. In this article, we will learn how to manage and view log messages under systemd using the journalctl command in Linux.

Important: Before moving further in this guide, you may want to learn how to manage ‘Systemd’ services and units using ‘Systemctl’ command, and also create and run new service units in systemd using shell scripts in Linux. However, if you are okay with all the above, continue reading through.

Configuring Journald for Collecting Log Messages Under Systemd

journald is a daemon which gathers and writes journal entries from the entire system; these are essentially boot messages, messages from the kernel and from syslog or various applications, and it stores all the messages in a central location – a journal file.

You can control the behavior of journald via its default configuration file, /etc/systemd/journald.conf, which is generated at compile time. This file contains options whose values you may change to suit your local environment requirements.

Below is a sample of what the file looks like, viewed using the cat command.

$ cat /etc/systemd/journald.conf 
Journald Configuration File
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

Note that various packages install and use configuration extracts in /usr/lib/systemd/*.conf.d/, and runtime configurations can be found in /run/systemd/journald.conf.d/*.conf, which you may not necessarily use.

Enable Journal Data Storage On Disk

A number of Linux distributions, including Ubuntu and its derivatives like Linux Mint, do not enable persistent storage of boot messages on disk by default.

It is possible to enable this by setting the “Storage” option to “persistent” as shown below. This will create the /var/log/journal directory and all journal files will be stored under it.

$ sudo vi /etc/systemd/journald.conf 
OR
$ sudo nano /etc/systemd/journald.conf 
[Journal]
Storage=persistent
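For the new storage setting to take effect, restart the journal daemon (or reboot the system):

$ sudo systemctl restart systemd-journald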

For additional settings, find the meaning of all the options that can be configured under the "[Journal]" section by typing:

$ man journald.conf

Setting Correct System Time Using Timedatectl Command

For reliable log management under systemd using the journald service, ensure that the time settings, including the timezone, are correct on the system.

In order to view the current date and time settings on your system, type.

$ timedatectl 
OR
$ timedatectl status

Local time: Thu 2017-06-15 13:29:09 EAT
Universal time: Thu 2017-06-15 10:29:09 UTC
RTC time: Thu 2017-06-15 10:29:09
Time zone: Africa/Kampala (EAT, +0300)
Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

To set the correct timezone and possibly system time, use the commands below.

$ sudo timedatectl set-timezone  Africa/Kampala
$ sudo timedatectl set-time "13:50:00"

Viewing Log Messages Using Journalctl Command

journalctl is a utility used to view the contents of the systemd journal (which is written by journald service).

To show all collected logs without any filtering, type.

$ journalctl
View Log Messages
-- Logs begin at Wed 2017-06-14 21:56:43 EAT, end at Thu 2017-06-15 12:28:19 EAT
Jun 14 21:56:43 tecmint systemd-journald[336]: Runtime journal (/run/log/journal
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpuset
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpu
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpuacct
Jun 14 21:56:43 tecmint kernel: Linux version 4.4.0-21-generic (buildd@lgw01-21)
Jun 14 21:56:43 tecmint kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-4.4.0-21-
Jun 14 21:56:43 tecmint kernel: KERNEL supported cpus:
Jun 14 21:56:43 tecmint kernel:   Intel GenuineIntel
Jun 14 21:56:43 tecmint kernel:   AMD AuthenticAMD
Jun 14 21:56:43 tecmint kernel:   Centaur CentaurHauls
Jun 14 21:56:43 tecmint kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x01: 'x87 flo
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x02: 'SSE reg
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x04: 'AVX reg
Jun 14 21:56:43 tecmint kernel: x86/fpu: Enabled xstate features 0x7, context si
Jun 14 21:56:43 tecmint kernel: x86/fpu: Using 'eager' FPU context switches.
Jun 14 21:56:43 tecmint kernel: e820: BIOS-provided physical RAM map:
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000090000-0x00000000000
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000100000-0x000000001ff
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000020000000-0x00000000201
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000020200000-0x00000000400

View Log messages Based On Boots

You can display a list of boot numbers (relative to the current boot), their IDs, and the timestamps of the first and last message corresponding to the boot with the --list-boots option.

$ journalctl --list-boots

-1 9fb590b48e1242f58c2579defdbbddc9 Thu 2017-06-15 16:43:36 EAT—Thu 2017-06-15 1
 0 464ae35c6e264a4ca087949936be434a Thu 2017-06-15 16:47:36 EAT—Thu 2017-06-15 1 

To view the journal entries from the current boot (number 0), use the -b switch like this (same as the sample output above).

$ journalctl -b

and to see a journal from the previous boot, use the -1 relative pointer with the -b option as below.

$ journalctl -b -1

Alternatively, use the boot ID like this.

$ journalctl -b 9fb590b48e1242f58c2579defdbbddc9

Filtering Log Messages Based On Time

To display timestamps in Coordinated Universal Time (UTC), add the --utc option as follows.

$ journalctl --utc

To see all of the entries since a particular date and time, e.g. June 15th, 2017 at 8:15 AM, type this command.

$ journalctl --since "2017-06-15 08:15:00"
$ journalctl --since today
$ journalctl --since yesterday
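You can also combine --since with --until to bound the window on both ends, for example:

$ journalctl --since "2017-06-15 08:00:00" --until "2017-06-15 09:00:00"
$ journalctl --since yesterday --until now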

Viewing Recent Log Messages

To view recent log messages (10 by default), use the -n flag as shown below.

$ journalctl -n
$ journalctl -n 20 

Viewing Log Messages Generated By Kernel

To see only kernel messages, similar to the dmesg command output, you can use the -k flag.

$ journalctl -k 
$ journalctl -k -b 
$ journalctl -k -b 9fb590b48e1242f58c2579defdbbddc9

Viewing Log Messages Generated By Units

To view all journal entries for a particular unit, use the -u switch as follows.

$ journalctl -u apache2.service

To zero in on the current boot, type this command.

$ journalctl -b -u apache2.service

To show logs from the previous boot, use this.

$ journalctl -b -1 -u apache2.service

Below are some other useful commands:

$ journalctl -u apache2.service  
$ journalctl -u apache2.service --since today
$ journalctl -u apache2.service -u nagios.service --since yesterday

Viewing Log Messages Generated By Processes

To view logs generated by a specific process, specify its PID like this.

$ journalctl _PID=19487
$ journalctl _PID=19487 --since today
$ journalctl _PID=19487 --since yesterday

Viewing Log Messages Generated By User or Group ID

To view logs generated by a specific user or group, specify its user or group ID like this.

$ journalctl _UID=1000
$ journalctl _UID=1000 --since today
$ journalctl _UID=1000 -b -1 --since today

Viewing Logs Generated By a File

To show all logs generated by a file (possibly an executable), such as the D-Bus executable or bash executables, simply type.

$ journalctl /usr/bin/dbus-daemon
$ journalctl /usr/bin/bash

Viewing Log Messages By Priority

You can also filter output based on message priorities or priority ranges using the -p flag. The possible values are: 0 – emerg, 1 – alert, 2 – crit, 3 – err, 4 – warning, 5 – notice, 6 – info, 7 – debug.

$ journalctl -p err

To specify a range, use the format below (emerg to warning).

$ journalctl -p 1..4
OR
$ journalctl -p emerg..warning

View Log Messages in Real-Time

You can practically watch logs as they are being written with the -f option (similar to tail -f functionality).

$ journalctl -f

Handling Journal Display Formatting

If you want to control the output formatting of the journal entries, add the -o flag and use one of these options: cat, export, json, json-pretty, json-sse, short, short-iso, short-monotonic, short-precise and verbose (check the meaning of each option in the man page).

The cat option shows the actual message of each journal entry without any metadata (timestamp and so on).

$ journalctl -b -u apache2.service -o cat
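For machine-readable output, the json-pretty format is handy; for instance (the unit name here is just an example):

$ journalctl -u apache2.service -n 3 -o json-pretty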

Managing Journals On a System

To check the journal file for internal consistency, use the --verify option. If all is well, the output should indicate a PASS.

$ journalctl --verify

PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system.journal                               
491f68: Unused data (entry_offset==0)                                                                
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000003184-000551f9866c3d4d.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000001fc8-000551f5d8945a9e.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000000d4f-000551f1becab02f.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000000001-000551f01cfcedff.journal

Deleting Old Journal Files

You can also display the current disk usage of all journal files with the --disk-usage option. It shows the sum of the disk usage of all archived and active journal files:

$ journalctl --disk-usage

To delete old (archived) journal files run the commands below:

$ sudo journalctl --vacuum-size=50M  #delete files until the disk space they use falls below the specified size
$ sudo journalctl --vacuum-time=1years	#delete files so that all journal files contain no data older than the specified timespan
$ sudo journalctl --vacuum-files=4     #delete files so that no more than the specified number of separate journal files remain in storage location

Rotating Journal Files

Last but not least, you can instruct journald to rotate journal files with the --rotate option. Note that this command does not return until the rotation operation is finished:

$ sudo journalctl --rotate

For an in-depth usage guide and options, view the journalctl man page as follows.

$ man journalctl

Do check out some useful articles.

  1. Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
  2. Petiti – An Open Source Log Analysis Tool for Linux SysAdmins
  3. How to Setup and Manage Log Rotation Using Logrotate in Linux
  4. lnav – Watch and Analyze Apache Logs from a Linux Terminal

That's it for now. Use the feedback form below to ask any questions or add your thoughts on this topic.


Understanding APT, APT-Cache and Their Frequently Used Commands

If you’ve ever used Debian or a Debian based distribution like Ubuntu or Linux Mint, then chances are that you’ve used the APT package system to install or remove software. Even if you’ve never dabbled on the command line, the underlying system that powers your package manager GUI is the APT system.

Understanding APT and APT-Cache

Today, we are going to take a look at some familiar commands, and dive into some less or more frequently used APT commands, and shed some light on this brilliantly designed system.

What is APT?

APT stands for Advanced Package Tool. It was first seen in Debian 2.1 back in 1999. Essentially, APT is a management system for dpkg packages, as seen with the extension *.deb. It was designed to not only manage packages and updates, but to solve the many dependency issues when installing certain packages.

As anyone who used Linux back in those pioneering days will remember, we were all too familiar with the term "dependency hell" when trying to compile something from source, or even when dealing with a number of Red Hat's individual RPM files.

APT solved all of these dependency issues automatically, making the installation of any package, regardless of its size or number of dependencies, a one-line command. To those of us who laboured for hours on these tasks, this was one of those "sun parting the clouds" moments in our Linux lives!

Understanding APT Configuration

The first file we are going to look at is one of APT's configuration files.

$ sudo cat /etc/apt/sources.list
Sample Output
deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise main
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise main

deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates main
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates main

deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise universe
deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates universe

deb http://security.ubuntu.com/ubuntu precise-security main
deb-src http://security.ubuntu.com/ubuntu precise-security main
deb http://security.ubuntu.com/ubuntu precise-security universe
deb-src http://security.ubuntu.com/ubuntu precise-security universe

As you can probably deduce from my sources.list file, I’m using Ubuntu 12.04 (Precise Pangolin). I’m also using three repositories:

  1. Main Repository
  2. Universe Repository
  3. Ubuntu Security Repository

The syntax of this file is relatively simple:

deb (url) release repository

The accompanying line is the source file repository. It follows a similar format:

deb-src (url) release repository

This file is pretty much the only thing you'll ever have to edit using APT, and chances are that the defaults will serve you quite well and you will never need to edit it at all.

However, there are times when you might want to add third-party repositories. You would simply enter them using the same format, and then run the update command:

$ sudo apt-get update

NOTE: Be very mindful of adding third party repositories!!! Only add from trusted and reputable sources. Adding dodgy repositories or mixing releases can seriously mess up your system!
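For example, adding a hypothetical third-party repository might look like this (the URL is a placeholder; use the one your vendor documents):

$ echo "deb http://repo.example.com/ubuntu precise main" | sudo tee /etc/apt/sources.list.d/example.list
$ sudo apt-get update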

We’ve taken a look at our sources.list file and now know how to update it, so what’s next? Let’s install some packages. Let’s say that we are running a server and we want to install WordPress. First let’s search for the package:

$ sudo apt-cache search wordpress
Sample Output
blogilo - graphical blogging client
drivel - Blogging client for the GNOME desktop
drupal6-mod-views - views modules for Drupal 6
drupal6-thm-arthemia - arthemia theme for Drupal 6
gnome-blog - GNOME application to post to weblog entries
lekhonee-gnome - desktop client for wordpress blogs
libmarkdown-php - PHP library for rendering Markdown data
qtm - Web-log interface program
tomboy-blogposter - Tomboy add-in for posting notes to a blog
wordpress - weblog manager
wordpress-l10n - weblog manager - language files
wordpress-openid - OpenID plugin for WordPress
wordpress-shibboleth - Shibboleth plugin for WordPress
wordpress-xrds-simple - XRDS-Simple plugin for WordPress
zine - Python powered blog engine

What is APT-Cache?

Apt-cache is a command that simply queries the APT cache. We passed the search parameter to it, stating that, obviously, we want to search the APT cache. As we can see above, searching for "wordpress" returned a number of packages related to the search string, with a short description of each package.

From this, we see the main package of “wordpress – weblog manager,” and we want to install it. But wouldn’t it be nice to see exactly what dependencies are going to be installed along with it? APT can tell us that as well:

$ sudo apt-cache showpkg wordpress
Sample Output
Versions:
3.3.1+dfsg-1 (/var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages)
 Description Language:
                 File: /var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages
                  MD5: 3558d680fa97c6a3f32c5c5e9f4a182a
 Description Language: en
                 File: /var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_i18n_Translation-en
                  MD5: 3558d680fa97c6a3f32c5c5e9f4a182a

Reverse Depends:
  wordpress-xrds-simple,wordpress
  wordpress-shibboleth,wordpress 2.8
  wordpress-openid,wordpress
  wordpress-l10n,wordpress 2.8.4-2
Dependencies:
3.3.1+dfsg-1 - libjs-cropper (2 1.2.1) libjs-prototype (2 1.7.0) libjs-scriptaculous (2 1.9.0) libphp-phpmailer (2 5.1) libphp-simplepie (2 1.2) libphp-snoopy (2 1.2.4) tinymce (2 3.4.3.2+dfsg0) apache2 (16 (null)) httpd (0 (null)) mysql-client (0 (null)) libapache2-mod-php5 (16 (null)) php5 (0 (null)) php5-mysql (0 (null)) php5-gd (0 (null)) mysql-server (2 5.0.15) wordpress-l10n (0 (null))
Provides:
3.3.1+dfsg-1 -
Reverse Provides:

This shows us that wordpress 3.3.1 is the version to be installed, the repository it is to be installed from, its reverse dependencies, and the other packages it depends on, plus their version numbers.

NOTE: null means that the version is not defined, and the latest version in the repository will be installed.

Now, the actual install command:

$ sudo apt-get install wordpress

That command will install WordPress-3.3.1 and all dependencies that are not currently installed.

Of course, that is not all you can do with APT. Some other useful commands are as follows:

NOTE: It is a good practice to run apt-get update before running any series of APT commands. Remember, apt-get update parses your /etc/apt/sources.list file and updates its database.
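A common pattern, for instance, is to refresh the package database and then apply all pending upgrades in one line:

$ sudo apt-get update && sudo apt-get upgrade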

Uninstalling a package is just as easy as installing the package:

$ sudo apt-get remove wordpress

Unfortunately, the apt-get remove command leaves all of the configuration files intact. To remove those as well, you'll want to use apt-get purge:

$ sudo apt-get purge wordpress

Every now and then, you might run across a situation where there are broken dependencies. This usually happens when you don’t run apt-get update properly, mangling the database. Fortunately, APT has a fix for it:

$ sudo apt-get -f install

Since APT downloads all of the *.deb files from the repository right to your machine (it stores them in /var/cache/apt/archives), you might want to periodically remove them to free up disk space:

$ sudo apt-get clean

This is just a small fraction of APT, APT-Cache and their useful commands. There is still a lot to learn, and you can explore some more advanced commands in the article below.

  1. 25 Useful and Advanced Commands of APT-GET and APT-CACHE

As always, please have a look at the man pages for even more options. Once one gains a familiarity with APT, it is possible to write awesome Cron scripts to keep the system up to date.


The 2018 Web Developer Roadmap: An illustrated guide to becoming a Frontend or Backend Developer, with links to courses

Web Developer in 2018

Here’s where you’ll start. You can choose either the Front-end, or Back-end path below. Regardless, there are eight recommendations in yellow that you should learn for either path.

Recommended learning for either path

Frontend Path & Courses for Learning Front End

Focus on the yellow boxes and grow from there. Below the map are additional resources to aid your learning.

The Web Development Bootcamp

You need to learn the basics and build a solid foundation of web development principles. There are many ways to do this, but in my opinion, The Web Development Bootcamp is the best and easiest way.

The Advanced Web Development Bootcamp

Now that you’ve taken the first bootcamp and know how to build full stack web applications, it’s time to take your learning a little deeper. The Advanced Web Development Bootcamp introduces complex technologies, frameworks, and tools you can use to build beautiful, responsive, web applications.

HTML / CSS

Beginner JavaScript

Advanced JavaScript

React JS

Angular JS

Vue JS

Backend

Focus on the yellow boxes and go from there. Below the map are additional resources to aid your learning.

Node JS

Ruby

Python

PHP

Java

MySQL

Closing Notes

You made it to the end of the article… Good luck on your Web Development journey! It's certainly not going to be easy, but by following this guide, you are one step closer to accomplishing your goal.


Kurly – An Alternative to Most Widely Used Curl Program

Kurly is a free and open source, simple but effective, cross-platform alternative to the popular curl command-line tool. It is written in the Go programming language and works in the same way as curl, but aims to offer only the common usage options and procedures, with an emphasis on HTTP(S) operations.

In this tutorial we will learn how to install and use the kurly program – an alternative to the widely used curl command in Linux.

Requirements:

  1. GoLang (Go Programming Language) 1.7.4 or higher.

How to Install Kurly (Curl Alternative) in Linux

Once you have installed Golang on your Linux machine, you can proceed to install kurly by cloning its git repository as shown.

$ go get github.com/davidjpeacock/kurly

Alternatively, you can install it via snapd – a package manager for snaps, on a number of Linux distributions. To use snapd, you need to install it on your system as shown.

$ sudo apt update && sudo apt install snapd	[On Debian/Ubuntu]
$ sudo dnf update && sudo dnf install snapd     [On Fedora 22+]

Then install kurly snap using the following command.

$ sudo snap install kurly

On Arch Linux, you can install it from the AUR using an AUR helper, as follows (run as a regular user; these helpers invoke sudo themselves when needed).

$ pacaur -S kurly
OR
$ yaourt -S kurly

On CentOS/RHEL, you can download and install its RPM package using the yum package manager, as shown.

# wget -c https://github.com/davidjpeacock/kurly/releases/download/v1.2.1/kurly-1.2.1-0.x86_64.rpm
# yum install kurly-1.2.1-0.x86_64.rpm

How to Use Kurly (Curl Alternative) in Linux

Since kurly focuses on the HTTP(S) realm, we will use Httpbin, an HTTP request and response service, to partly demonstrate how kurly operates.

The following command will return the user agent, as defined in the http://www.httpbin.org/user-agent endpoint.

$ kurly http://httpbin.org/user-agent

Check User Agent

Next, you can use kurly to download a file (for example, the Tomb-2.5.tar.gz encryption tool source code), preserving the remote filename, using the -O flag.

$ kurly -O https://files.dyne.org/tomb/Tomb-2.5.tar.gz

To preserve the remote timestamp and follow 3xx redirects, use the -R and -L flags respectively, as follows.

$ kurly -R -O -L https://files.dyne.org/tomb/Tomb-2.5.tar.gz

Download File Using Kurly

You can set a new name for the downloaded file, using the -o flag as shown.

$ kurly -R -o tomb.tar.gz -L https://files.dyne.org/tomb/Tomb-2.5.tar.gz  

Rename File While Downloading

This example shows how to upload a file, where the -T flag is used to specify the location of the file to upload. The http://httpbin.org/put endpoint will echo back the PUT data, as shown in the screenshot.

$ kurly -T ~/Pictures/kali.jpg https://httpbin.org/put

Upload File Using Kurly

To view headers only from a URL use the -I or --head flag.

$ kurly -I https://google.com

View Website Headers from Terminal

To run it quietly, use the -s switch; this way, kurly will not produce any output.

$ kurly -s -R -O -L https://files.dyne.org/tomb/Tomb-2.5.tar.gz

Last but not least, you can set the maximum time to wait for an operation to complete, in seconds, with the -m flag.

$ kurly -s -m 20 -R -O -L https://files.dyne.org/tomb/Tomb-2.5.tar.gz

To get a list of all kurly usage flags, consult its command-line help message.

$ kurly -h

For more information, visit the Kurly Github repository: https://github.com/davidjpeacock/kurly

Kurly is a curl-like tool that covers a few of the most commonly used features in the HTTP(S) realm; many of curl’s other features are yet to be added to it.

Source

Learn How to Set Your $PATH Variables Permanently in Linux

In Linux (and UNIX), $PATH is an environment variable used to tell the shell where to look for executable files. The $PATH variable provides great flexibility and security to Linux systems, and it is safe to say that it is one of the most important environment variables.

Don’t Miss: How to Set and Unset Local, User and System Wide Environment Variables

Programs and scripts located in the $PATH directories can be executed directly in your shell, without specifying the full path to them. In this tutorial you are going to learn how to set the $PATH variable globally and locally.

First, let’s see the current value of your $PATH. Open a terminal and issue the following command:

$ echo $PATH

The result should be something like this:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

The result shows a list of directories separated by colons. You can easily add more directories by editing your user’s shell profile file.

In different shells this can be:

  1. Bash shell -> ~/.bash_profile, ~/.bashrc or ~/.profile
  2. Korn Shell -> ~/.kshrc or ~/.profile
  3. Z shell -> ~/.zshrc or ~/.zprofile

Please note that, depending on how you log in to the system in question, a different file might be read. Here is what the bash manual says; keep in mind that the files are similar for other shells (a quick way to check which kind of shell you are in is sketched after the list):

/bin/bash - The bash executable
/etc/profile - The systemwide initialization file, executed for login shells
~/.bash_profile - The personal initialization file, executed for login shells
~/.bashrc - The individual per-interactive-shell startup file
~/.bash_logout - The individual login shell cleanup file, executed when a login shell exits
~/.inputrc - The individual readline initialization file
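
Since login shells read ~/.bash_profile while interactive non-login shells read ~/.bashrc, it helps to know which kind you are in. A minimal, bash-specific check:

$ shopt -q login_shell && echo "login shell" || echo "non-login shell"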

Considering the above, you can add more directories to the $PATH variable by adding the following line to the corresponding file that you will be using:

export PATH=$PATH:/path/to/newdir

Of course, in the above example, you should replace “/path/to/newdir” with the exact path that you wish to add. Once you have modified your .*rc or .*_profile file, you will need to reload it using the “source” command.

For example in bash you can do this:

$ source ~/.bashrc

Below, you can see an example of my $PATH environment on a local computer:

marin@[TecMint]:[/home/marin] $ echo $PATH

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/marin/bin

It is actually good practice to create a local “bin” folder for users, where they can place their executable files. Each user will have a separate folder to store their contents. This is also a good measure to keep your system secure.
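As a minimal sketch of this practice for bash (the directory name and profile file are illustrative; adjust them for your shell):

$ mkdir -p ~/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
$ source ~/.bashrc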

Source

How to Connect Wi-Fi from Linux Terminal Using Nmcli Command

There are several command-line tools for managing a wireless network interface in Linux systems. A number of these can be used to simply view the wireless network interface status (whether it is up or down, or if it is connected to any network), such as iw, iwlist, ip, ifconfig and others.

Others are used to connect to a wireless network; these include nmcli, a command-line tool used to create, show, edit, delete, enable, and disable network connections, as well as to control and display network device status.

First, check the name of your wireless network device using the following command. From the output of this command, the device name/interface is wlp1s0, as shown.

$ iw dev

phy#0
	Interface wlp1s0
		ifindex 3
		wdev 0x1
		addr 38:b1:db:7c:78:c7
		type managed

Next, check the Wi-Fi device connection status using the following command.

$ iw wlp1s0 link

Not connected.

The output above shows that the device is not connected to any network. Run the following command to scan for available Wi-Fi networks.

$ sudo iw wlp1s0 scan

command failed: Network is down (-100)

Considering the output of the above command, the network device/interface is DOWN; you can turn it on (UP) with the ip command as shown.

$ sudo ip link set wlp1s0 up

If you get the following error, it means your Wi-Fi is hard-blocked on your laptop or computer.

RTNETLINK answers: Operation not possible due to RF-kill

To unblock it, run the following commands (the hp_wmi blacklist applies to HP machines; skip that line on other hardware).

$ echo "blacklist hp_wmi" | sudo tee /etc/modprobe.d/hp.conf
$ sudo rfkill unblock all
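
Before or after unblocking, you can inspect the current block state with the rfkill utility (assuming it is installed); a “Hard blocked: yes” line usually points to a physical wireless switch or a function key.

$ rfkill list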

Then try to turn ON the network device once more, and it should work this time around.

$ sudo ip link set wlp1s0 up

If you know the ESSID of the Wi-Fi network you wish to connect to, move on to the next step; otherwise, issue the command below to scan for available Wi-Fi networks again.

$ sudo iw wlp1s0 scan

And lastly, connect to the Wi-Fi network using the following command, where Hackernet is the Wi-Fi network SSID and localhost22 is the password/pre-shared key.

$ nmcli dev wifi connect Hackernet password localhost22
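
You can also confirm the connection state with nmcli itself; the following command lists each network device along with its type, state, and the connection it is using.

$ nmcli dev status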

Once connected, verify your connectivity by pinging an external machine and analyzing the output of the ping, as shown.

$ ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=48 time=61.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=48 time=61.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=48 time=61.6 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=48 time=61.3 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=48 time=63.9 ms
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 61.338/62.047/63.928/0.950 ms

That’s it! I hope this article helped you to set up your Wi-Fi network from the Linux command line.

Source
