How to Install ionCube Loader in CentOS 7

ionCube is a commercial software suite consisting of a PHP encoder, package foundry, bundler, a real-time site intrusion detection and error reporting application, as well as a loader.

The PHP Encoder is an application for PHP software protection: it is used to secure, encrypt, and license PHP source code. The ionCube Loader is an extension used to load PHP files protected and encoded with the PHP Encoder. It is mostly used by commercial software applications to protect their source code and prevent it from being visible.

Read Also: How to Install ionCube Loader in Debian and Ubuntu

In this article, we will show how to install and configure ionCube Loader with PHP on CentOS 7 and RHEL 7 distributions.

Prerequisites:

Your server must have a running web server (Apache or Nginx) with PHP installed. If you don’t have a web server and PHP on your system, you can install them using the yum package manager as shown below.

Step 1: Install Apache or Nginx Web Server with PHP

1. If you already have a running Apache or Nginx web server with PHP installed on your system, you can skip to Step 2; otherwise, use the following yum command to install them.

-------------------- Install Apache with PHP --------------------
# yum install httpd php php-cli php-mysql

-------------------- Install Nginx with PHP -------------------- 
# yum install nginx php php-fpm php-cli php-mysql

2. After installing Apache or Nginx with PHP on your system, start the web server and make sure to enable it to auto-start at system boot using the following commands.

-------------------- Start Apache Web Server --------------------
# systemctl start httpd
# systemctl enable httpd

-------------------- Start Nginx + PHP-FPM Server --------------------
# systemctl start nginx
# systemctl enable nginx
# systemctl start php-fpm
# systemctl enable php-fpm

Step 2: Download ionCube Loader

3. Go to the ionCube website to download the installation files, but first check whether your system is running on a 64-bit or 32-bit architecture using the following command.

# uname -a

Linux tecmint.com 4.15.0-1.el7.elrepo.x86_64 #1 SMP Sun Jan 28 20:45:20 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

The above output clearly shows that the system is running on a 64-bit architecture.
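If you only need the architecture, `uname -m` prints it directly. The snippet below is a small sketch that maps the result to the matching download from the next step:

```shell
# Print only the machine architecture and pick the matching archive
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ]; then
    echo "Download ioncube_loaders_lin_x86-64.tar.gz"
else
    echo "Download ioncube_loaders_lin_x86.tar.gz"
fi
```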

Depending on your Linux system architecture, download the ionCube loader files into the /tmp directory using the following wget command.

-------------------- For 64-bit System --------------------
# cd /tmp
# wget https://downloads.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz

-------------------- For 32-bit System --------------------
# cd /tmp
# wget https://downloads.ioncube.com/loader_downloads/ioncube_loaders_lin_x86.tar.gz

4. Then extract the downloaded file using the tar command and move into the extracted folder. Run the ls command to list the numerous ionCube loader files for the different PHP versions.

# tar -xvf ioncube_loaders_lin_x86*
# cd ioncube/
# ls -l

Ioncube Loader Files

Step 3: Install ionCube Loader for PHP

5. There are different ionCube loader files for the various PHP versions; you need to select the right loader for the PHP version installed on your server. To find out which PHP version is installed, run:

# php -v

Verify PHP Version

The above output clearly shows that the system is using PHP 5.4.16; in your case it may be a different version.

6. Next, find the location of the extension directory for PHP 5.4; this is where the ionCube loader file will be installed. From the output of the command below, the directory is /usr/lib64/php/modules.

# php -i | grep extension_dir

extension_dir => /usr/lib64/php/modules => /usr/lib64/php/modules

7. Next, copy the ionCube loader for our PHP 5.4 version to the extension directory (/usr/lib64/php/modules).

# cp /tmp/ioncube/ioncube_loader_lin_5.4.so /usr/lib64/php/modules

Note: Make sure to replace the PHP version and extension directory in the above command according to your system configuration.
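The version and directory lookups from steps 5 and 6 can also be scripted. The helper below is a sketch that assumes the archive was extracted to /tmp/ioncube as in step 4; verify the values it prints before relying on it:

```shell
# Detect the installed PHP branch (e.g. "5.4") and the extension directory,
# then copy the matching loader. Assumes the archive sits in /tmp/ioncube.
PHP_VER=$(php -v | head -1 | awk '{print $2}' | cut -d. -f1,2)
EXT_DIR=$(php -i | grep '^extension_dir' | awk '{print $3}')
echo "Copying loader for PHP $PHP_VER into $EXT_DIR"
cp "/tmp/ioncube/ioncube_loader_lin_${PHP_VER}.so" "$EXT_DIR"
```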

Step 4: Configure ionCube Loader for PHP

8. Now we need to configure the ionCube loader to work with PHP, in the php.ini file.

# vim /etc/php.ini

Then add the line below as the first line of the php.ini file.

zend_extension = /usr/lib64/php/modules/ioncube_loader_lin_5.4.so

Enable ionCube Loader in PHP

Note: Make sure to replace the extension directory and PHP version in the above command according to your system configuration.
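As an alternative to editing php.ini directly, CentOS loads every .ini file under /etc/php.d/, so the directive can live in its own file. The file name below (the 00- prefix is a common convention to load it before other extensions) is an assumption, not a requirement:

```shell
# Write the zend_extension directive to a dedicated config file
# (adjust the loader path to your PHP version and extension directory)
echo "zend_extension = /usr/lib64/php/modules/ioncube_loader_lin_5.4.so" \
    > /etc/php.d/00-ioncube.ini
```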

9. Then save and exit the file. Now restart the Apache or Nginx web server for the ionCube loader to take effect.

-------------------- Restart Apache Web Server --------------------
# systemctl restart httpd

-------------------- Restart Nginx + PHP-FPM Server --------------------
# systemctl restart nginx
# systemctl restart php-fpm

Step 5: Test ionCube Loader

10. To test whether the ionCube loader is now installed and properly configured on your server, check your PHP version once more. You should see a message indicating that PHP is configured with the ionCube loader extension, as shown in the following screenshot.

# php -v

Test ionCube Loader

The above output confirms that PHP is now loaded and enabled with the ionCube loader.
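A quick non-interactive check is to grep the PHP version banner and the loaded-modules list for the loader:

```shell
# Both commands should print a line mentioning ionCube if the loader is active
php -v | grep -i ioncube
php -m | grep -i ioncube
```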

The ionCube loader is a PHP extension for loading files secured and encoded with the PHP Encoder. We hope everything worked fine while following this guide; otherwise, use the feedback form below to send us your queries.

How to Install pgAdmin4 in CentOS 7

pgAdmin 4 is an easy-to-use web interface for managing PostgreSQL databases. It can be used on multiple platforms such as Linux, Windows, and Mac OS X. pgAdmin 4 also marks the migration from Bootstrap 3 to Bootstrap 4.

In this tutorial we are going to install pgAdmin 4 on a CentOS 7 system.

Note: This tutorial assumes that you already have PostgreSQL 9.2 or above installed on your CentOS 7 system. For installation instructions, you can follow our guide: How to Install PostgreSQL 10 on CentOS and Fedora.

How to Install pgAdmin 4 in CentOS 7

Adding the PostgreSQL repository should have been completed during the installation of PostgreSQL, but if you haven’t done it yet, you can complete it with:

# yum -y install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm

Now you are ready to install pgAdmin with:

# yum -y install pgadmin4

During the installation, the following two dependencies will be installed as well: pgadmin4-web and the httpd web server.

How to Configure pgAdmin 4 in CentOS 7

There are a few minor configuration changes needed to get pgAdmin 4 running. First, rename the sample conf file from pgadmin4.conf.sample to pgadmin4.conf:

# mv /etc/httpd/conf.d/pgadmin4.conf.sample /etc/httpd/conf.d/pgadmin4.conf

Adjust the file so it looks like this:

<VirtualHost *:80>
LoadModule wsgi_module modules/mod_wsgi.so
WSGIDaemonProcess pgadmin processes=1 threads=25
WSGIScriptAlias /pgadmin4 /usr/lib/python2.7/site-packages/pgadmin4-web/pgAdmin4.wsgi

<Directory /usr/lib/python2.7/site-packages/pgadmin4-web/>
        WSGIProcessGroup pgadmin
        WSGIApplicationGroup %{GLOBAL}
        <IfModule mod_authz_core.c>
                # Apache 2.4
                Require all granted
        </IfModule>
        <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Deny,Allow
                Deny from All
                Allow from 127.0.0.1
                Allow from ::1
        </IfModule>
</Directory>
</VirtualHost>

Next we will create logs and lib directories for pgAdmin4 and set their ownership:

# mkdir -p /var/lib/pgadmin4/
# mkdir -p /var/log/pgadmin4/
# chown -R apache:apache /var/lib/pgadmin4
# chown -R apache:apache /var/log/pgadmin4

And then we can extend the contents of our config_distro.py.

# vi /usr/lib/python2.7/site-packages/pgadmin4-web/config_distro.py

And add the following lines:

LOG_FILE = '/var/log/pgadmin4/pgadmin4.log'
SQLITE_PATH = '/var/lib/pgadmin4/pgadmin4.db'
SESSION_DB_PATH = '/var/lib/pgadmin4/sessions'
STORAGE_DIR = '/var/lib/pgadmin4/storage'
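The session and storage paths referenced above must exist and be writable by Apache. Depending on the package version they may not be created automatically, so creating them explicitly is a safe precaution (a sketch matching the configuration above):

```shell
# Create the directories referenced in config_distro.py and hand them to Apache
mkdir -p /var/lib/pgadmin4/sessions /var/lib/pgadmin4/storage
chown -R apache:apache /var/lib/pgadmin4
```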

Finally we will create our user account, with which we will authenticate in the web interface. To do this, run:

# python /usr/lib/python2.7/site-packages/pgadmin4-web/setup.py

Create PgAdmin4 User

Now you can visit http://your-server-ip/pgadmin4 or http://localhost/pgadmin4 to reach the pgAdmin 4 interface:

PgAdmin4 Login

To authenticate, use the email address and password that you set earlier. Once authenticated, you should see the pgAdmin 4 interface:

PgAdmin4 Dashboard

At your first login, you will need to add a new server to manage. Click on “Add New Server”. You will need to configure the PostgreSQL connection. In the first tab, “General”, enter the following settings:

  • Name – a name for the server you are configuring.
  • Comment – a comment describing the instance.

Add New Server to PgAdmin4

The second tab, “Connection”, is the more important one, as you will have to enter:

  • Host – host/IP address of the PostgreSQL instance.
  • Port – default port is 5432.
  • Maintenance database – this should be postgres.
  • Username – the user that will be connecting. You can use the postgres user.
  • Password – password for the above user.

PgAdmin4 Server Connection Settings

When you have filled in everything, save the changes. If the connection was successful, you should see the following page:

PgAdmin4 Database Summary

That was it. Your pgAdmin 4 installation is complete and you can start managing your PostgreSQL databases.

How to Recover or Rescue Corrupted Grub Boot Loader in CentOS 7

In this tutorial we’ll cover the process of rescuing a corrupted boot loader in CentOS 7 or Red Hat Enterprise Linux 7 and recovering a forgotten root password.

The GRUB boot loader can sometimes be damaged, compromised, or deleted in CentOS due to various issues, such as hardware or software failures, or it can be overwritten by another operating system when dual-booting. A corrupted GRUB boot loader makes a CentOS/RHEL system unable to boot and hand control over to the Linux kernel.

The GRUB boot loader stage one is installed in the first 446 bytes at the beginning of the hard disk, an area typically known as the Master Boot Record (MBR).

Read Also: How to Rescue, Repair and Recover Grub Boot Loader in Ubuntu

The MBR is 512 bytes long in total. If for some reason the first 446 bytes are overwritten, CentOS or Red Hat Enterprise Linux cannot be loaded unless you boot the machine from a CentOS ISO image in rescue mode, or use another boot-loading method, and reinstall the GRUB boot loader to the MBR.

Requirements

  1. Download CentOS 7 DVD ISO Image

Recover GRUB Boot Loader in CentOS 7

1. First, download the latest version of the CentOS 7 ISO image and burn it to a DVD or create a bootable USB stick. Place the bootable media in the appropriate drive and reboot the machine.

While the BIOS performs its POST tests, press the appropriate key (Esc, F2, F11, F12, or Del, depending on the motherboard) to enter the BIOS settings and modify the boot sequence so that the bootable DVD/USB image boots first at machine start-up, as illustrated in the image below.

System Boot Menu

2. After the CentOS 7 bootable media has been detected, the first screen will appear on your monitor. From the first menu choose the Troubleshooting option and press [Enter] to continue.

Select CentOS 7 Troubleshooting

3. On the next screen choose the ‘Rescue a CentOS system’ option and press [Enter] to move on. A new screen will appear with the message ‘Press the Enter key to begin the installation process’. Just press [Enter] again to load the CentOS system into memory.

Rescue CentOS 7 System

Rescue CentOS 7 Process

4. After the installer loads into your machine’s RAM, the rescue environment prompt will appear on your screen. At this prompt, type 1 to Continue with the system recovery process, as illustrated in the image below.

CentOS 7 Rescue Prompt

5. At the next prompt, the rescue program will inform you that your system has been mounted under the /mnt/sysimage directory. As the rescue program suggests, type chroot /mnt/sysimage to switch the root of the file system tree from the ISO image to the root partition mounted from your disk.

Mount CentOS 7 Image

6. Next, identify your machine’s hard drive by issuing the below command at the rescue prompt.

# ls /dev/sd*

In case your machine uses an old physical RAID controller, the disks will have other names, such as /dev/cciss. Also, if your CentOS system is installed in a virtual machine, the hard disks may be named /dev/vda or /dev/xvda.
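Regardless of the naming scheme, lsblk gives a clearer picture of the disks and partitions than listing device nodes:

```shell
# List block devices with their size, type, and mount point
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```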

Once you’ve identified your machine’s hard disk, you can install the GRUB boot loader by issuing the commands below.

# ls /sbin | grep grub2  # Identify GRUB installation command
# /sbin/grub2-install /dev/sda  # Install the boot loader in the boot partition of the first hard disk
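If the GRUB configuration file itself is missing or damaged, it can be regenerated while still inside the chroot; this step is optional when only the MBR code was overwritten:

```shell
# Rebuild /boot/grub2/grub.cfg from the installed kernels (run inside the chroot)
grub2-mkconfig -o /boot/grub2/grub.cfg
```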

Install Grub Boot Loader in CentOS 7

7. After the GRUB2 boot loader is successfully installed in the MBR area of your hard disk, type exit to return to the CentOS boot ISO image tree, and reboot the machine by typing init 6 in the console, as illustrated in the screenshot below.

Exit CentOS 7 Grub Prompt

8. After the machine restarts, first enter the BIOS settings and change the boot order (place the hard disk with the newly installed MBR boot loader in the first position of the boot menu).

Save the BIOS settings and reboot the machine again to apply the new boot order. After the reboot, the machine should start directly into the GRUB menu, as shown in the image below.

CentOS 7 Grub Menu

Congratulations! You’ve successfully repaired your CentOS 7 system’s damaged GRUB boot loader. Be aware that sometimes, after restoring the GRUB boot loader, the machine needs to restart once or twice to apply the new GRUB configuration.

Recover Root Password in CentOS 7

9. If you’ve forgotten the root password and cannot log in to your CentOS 7 system, you can reset (blank) the password by booting the CentOS 7 ISO DVD image in rescue mode and following the same steps as above, up to step 6. While chrooted into your CentOS installation file system, issue the following command to edit the Linux account password file.

# vi /etc/shadow

In the shadow file, find the root password line (usually the first line), enter vi’s edit mode by pressing the i key, and delete the entire string between the first colon “:” and the second colon “:”, as illustrated in the screenshot below.

Root Encrypted Password

Delete Root Encrypted Password

After you finish, save the file by pressing the following keys in order: Esc -> : -> wq!
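If you prefer a non-interactive approach, the same edit can be done with sed. The command below is a sketch that blanks everything between the first and second colon of the root line; back up the file first:

```shell
# Back up the shadow file, then blank the root password field
cp /etc/shadow /etc/shadow.bak
sed -i 's/^root:[^:]*:/root::/' /etc/shadow
```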

10. Finally, exit the chrooted console and type init 6 to reboot the machine. After the reboot, log in to your CentOS system with the root account, which now has no password configured, and set a new password for the root user by executing the passwd command, as illustrated in the screenshot below.

Set New Root Password in CentOS 7

That’s all! Booting a physical machine or a VM from a CentOS 7 DVD ISO image in rescue mode can help system administrators perform various troubleshooting tasks on a broken system, such as recovering data or the tasks described in this tutorial.

12 Open Source Cloud Storage Software to Store and Sync Your Data Quickly and Safely

The name “cloud” suggests something very large, spread over a wide area. In technical terms, a cloud is something virtual that provides services to end users in the form of storage, application hosting, or virtualization of physical infrastructure. Nowadays, cloud computing is used by small as well as large organizations for data storage, or to provide customers with the advantages listed above.

12 Free Open Source Cloud Storage Software for Linux

Three main types of services are associated with the cloud: SaaS (Software as a Service), which lets users access publicly available clouds of large organizations to store their data, e.g. Gmail; PaaS (Platform as a Service), for hosting apps or software on a public cloud, e.g. Google App Engine, which hosts users’ apps; and IaaS (Infrastructure as a Service), which virtualizes physical machines and offers them to customers to give them the feel of a real machine.

Cloud Storage

Cloud storage means storing data away from the user’s local system, across dedicated servers meant for this purpose. As early as 1983, CompuServe offered its customers 128K of disk space that could be used to store files. While this field remains under active development, largely because of potential threats such as data loss, hacking, and masquerading attacks, many organizations have come forward with their own solutions for cloud storage and data privacy, which is strengthening and stabilizing its future.

In this article, we will present a selection of open source solutions to this problem that have been successfully adopted by large numbers of users and big organizations.

1. OwnCloud

A Dropbox replacement for Linux users, offering much of the same functionality, ownCloud is a self-hosted file sync and share server.

Being open source, it gives users access to an unlimited amount of storage space. The project started in January 2010 with the aim of providing an open source replacement for proprietary cloud storage providers. It is written in PHP and JavaScript, is available for Windows, Linux, and OS X desktops, and also provides mobile clients for Android and iOS.

ownCloud employs a WebDAV server for remote access and can integrate with a large number of databases, including SQLite, MariaDB, MySQL, Oracle Database, and PostgreSQL.

It provides a large number of features, including file storage and encryption, music streaming, content sharing via URLs, Mozilla Sync hosting, an RSS/Atom feed reader, one-click app installation, a video and PDF viewer, and many more.

The latest version of ownCloud, 8.2, adds other new features, including an improved design and the ability for admins to notify users and set retention limits on files in the trash.

OwnCloud

Read More: Install OwnCloud 8 to Create Personal Cloud Storage in Linux

2. Seafile

Seafile is another open source file hosting system that offers its users all the advantages they expect from good cloud storage software. It is written in C and Python, with the latest stable release, 4.4.3, released on 15th October 2015.

Seafile provides desktop clients for Windows, Linux, and OS X and mobile clients for Android, iOS, and Windows Phone. Along with a community edition released under the General Public License, it also has a professional edition released under a commercial license that provides extra features not supported in the community edition, i.e. user logging and text search.

Since it was open sourced in July 2012, it has gained international attention. Its main features are syncing and sharing, with a focus on data safety. Other features, which have made Seafile common in universities such as the University of Mainz, HU Berlin, and the University of Strasbourg, as well as among thousands of other people worldwide, are: online file editing, differential sync to minimize the required bandwidth, and client-side encryption to secure client data.

Seafile Cloud Storage

Read More: Install Seafile Secure Cloud Storage in Linux

3. Pydio

Earlier known by the name AjaXplorer, Pydio is free software aiming to provide file hosting, sharing, and syncing. The project was initiated in 2009 by Charles du Jeu, and since 2010 it has shipped on all NAS equipment supplied by LaCie.

Pydio is written in PHP and JavaScript and is available for Windows, Mac OS, and Linux, and additionally for iOS and Android. With nearly 500,000 downloads on SourceForge, and adoption by companies like Red Hat and Oracle, Pydio is one of the most popular cloud storage solutions on the market.

In itself, Pydio is just a core that runs on a web server and can be accessed through any browser. Its integrated WebDAV interface makes it ideal for online file management, and SSL/TLS encryption secures the transmission channels, protecting the data and ensuring its privacy. Other features that come with this software are: a text editor with syntax highlighting, audio and video playback, integration with Amazon S3, FTP, or MySQL databases, an image editor, and file or folder sharing, even through public URLs.

Pydio Cloud Storage

4. Ceph

Ceph was initially started by Sage Weil for his doctoral dissertation; in fall 2007 he continued working on the project full time and expanded the development team. In April 2014, Red Hat brought its development in-house. Eight releases of Ceph have come out so far, the latest being Hammer, released on April 7, 2015. Ceph is a freely available, highly scalable distributed storage cluster written in C++ and Perl.

Data can be stored in Ceph as a block device, a file, or as objects through the RADOS gateway, which offers support for the Amazon S3 and OpenStack Swift APIs. Apart from being secure, scalable, and reliable, other features provided by Ceph are:

  1. a network file system aiming for high performance and large data storage.
  2. compatibility with VM clients.
  3. support for partial/complete reads and writes.
  4. object-level mappings.

Ceph Storage

5. Syncany

Released around March 2014, Syncany is one of the lightest open source cloud storage and file sharing applications. It is actively developed by Philipp C. Heckel and, as of today, is available as a command line tool for all supported platforms, with a GUI version under active development.

One of the most important things about Syncany is that it is only a tool: it requires you to bring your own storage, which can be FTP or SFTP storage, WebDAV or Samba shares, Amazon S3 buckets, etc.

Other features that make it an awesome tool to have are: 128-bit AES+Twofish/GCM encryption for all data leaving the local machine, file sharing support so you can share your files with friends, offsite storage chosen by the user instead of provider-based storage, interval-based or on-demand backups, binary-compatible file versioning, and local deduplication of files. It can be especially advantageous for companies that want to use their own storage space rather than trusting a provider’s storage.

Syncany Cloud Storage

6. Cozy

Not just a file sharing or synchronization tool, Cozy is bundled as a complete package of functions that can help you build your own app engine.

Like Syncany, Cozy gives users flexibility in terms of storage space. You can either use your own personal storage or trust the Cozy team’s servers. It relies on other open source software for its functioning: CouchDB for database storage and Whoosh for indexing. It is available for all platforms, including smartphones.

The main features that make it a must-have cloud storage solution are: the ability to store all your contacts, files, calendars, etc. in the cloud and sync them between laptop and smartphone; the ability to create your own apps and share them with other users just by sharing the Git URL of the repository; and hosting static websites or HTML5 video games.

As a step further toward availability even on cheap hardware, the Cozy team has introduced Cozy Light, which performs well even on inexpensive hardware such as a Raspberry Pi or a small Digital Ocean VPS.

Cozy Cloud Storage

7. GlusterFS

GlusterFS is a network-attached file storage system. Initially started by Gluster Inc., the project is now under Red Hat Inc., following its purchase of Gluster Inc. in 2011. Red Hat integrated GlusterFS into its Red Hat Storage Server, changing its name to Red Hat Gluster Storage. It is available for platforms including Linux, OS X, NetBSD, and OpenSolaris, with some of its parts licensed under GPLv3 while others are dual-licensed under GPLv2. It has been used as a foundation for academic research.

GlusterFS uses a client-server model, with servers deployed as storage bricks. Clients can connect to servers with a custom protocol over TCP/IP, InfiniBand, or SDP and store files on the GlusterFS server. Various functions it provides over the files include file-based mirroring and replication, file-based striping, load balancing, scheduling, and disk caching, to name a few.

Another very useful feature is its flexibility: data is stored on native file systems such as xfs and ext4.

GlusterFS Storage

Read More: How to Install GlusterFS in Linux Systems

8. StackSync

StackSync is a Dropbox-like tool running on top of OpenStack Swift, specially designed to tackle the needs of organizations that want to sync their data in one place. It is written in Java and released under the GNU General Public License v3.

Its framework is composed of three main components: a synchronization server, OpenStack Swift, and desktop and mobile clients. While the server processes metadata and logic, OpenStack Swift is focused on storing the data, and the desktop and mobile clients help users sync their data to their personal cloud.

StackSync employs various data optimizations that allow it to scale to the needs of thousands of users while making efficient use of cloud resources. Its other features are: a RESTful API provided as a Swift module, which allows mobile apps and other third-party applications to use it to sync data; separation of data and metadata, which makes deployment flexible across different configurations; and both a public configuration, useful for public cloud providers, and a private configuration, which addresses the needs of big organizations seeking a better cloud storage solution.

StackSync Cloud Storage

9. Git-annex

Git-annex is another file synchronization tool, developed by Joey Hess and released in October 2010, which also aims to solve file sharing and synchronization problems, independently of any commercial service or central server. It is written in Haskell and available for Linux, Android, OS X, and Windows.

Git-annex manages the user’s files with git without checking the file contents into git itself. Instead, it stores only a link to the file in the git repository and manages the file content associated with that link in a separate place. It can also keep multiple copies of a file, which is needed when recovery of lost data is required.

Further, it makes file data available on demand, without requiring the files to be present on every system, which greatly reduces storage overhead. Notably, git-annex is available in various Linux distributions, including Fedora, Ubuntu, and Debian.

Git-Annex

10. Yandex.Disk

Yandex.Disk is a cloud storage and synchronization service released in April 2012 and available on all major platforms including: Linux, Windows, OS X, Android, iOS and Windows Phone. It allows users to synchronize data between different devices and share it with others online.

Various features Yandex.Disk provides to its users are: a built-in flash player that lets people preview songs, sharing of files with others via download links, synchronization of files between a user’s different devices, unlimited storage, and WebDAV support, allowing easy management of files by any application that supports the WebDAV protocol.

Yandex-Disk

11. Bitcasa

Developed by Bitcasa Inc., a California-based company, Bitcasa is yet another cloud storage and synchronization solution, available for Windows, OS X, Android, and Linux. It is not directly open source software, but it is still part of the open source community, as it builds on software much of which is open source, such as gcc/clang, libcurl, OpenSSL, APR, and RapidJSON.

Its main features are file storage, access, and sharing. Other features that make it popular among customers in more than 140 countries worldwide are its convergent encryption protocol, which is mostly safe but carries some risks as reported in one article, and its provision of secure APIs and white-label storage applications for OEMs, network operators, and software developers.

Bitcasa Storage

12. NAS4Free

NAS is an acronym for ‘Network Attached Storage‘, and ‘4Free‘ indicates its free and open source nature. NAS4Free was released under this name in March 2012. It is network-attached storage server software with a user interface written in PHP, released under the Simplified BSD License. It supports the i386/IA-32 and x86-64 platforms.

NAS4Free supports sharing across multiple operating systems. It also includes ZFS, disk encryption, etc., with protocols such as Samba, CARP, Bridge, FTP, RSYNC, TFTP, and NFS. Unlike other software, NAS4Free can be installed on and run from a USB/SSD key or hard disk, or even booted from a LiveCD or LiveUSB with a small USB key for config storage. NAS4Free has won awards including Project of the Month (August 2015) and Project of the Week (May 2015).

NAS4Free Network Storage

Conclusion

These are some well-known open source cloud storage and synchronization applications that have either gained a lot of popularity over the years or have just entered and made their mark in this industry, with a long way to go. You can share any software that you or your organization might be using, and we will add it to this list.

ELRepo – Community Repo for Enterprise Linux (RHEL, CentOS & SL)

If you are using an Enterprise Linux distribution (Red Hat Enterprise Linux or one of its derivatives, such as CentOS or Scientific Linux) and need support for specific or new hardware, you are in the right place.

In this article we will discuss how to enable the ELRepo repository, a software source that includes everything from filesystem drivers to webcam drivers with everything in between (support for graphics, network cards, sound devices, and even new kernels).

Enabling ELRepo in Enterprise Linux

Although ELRepo is a third-party repository, it is well supported by an active community on Freenode (#elrepo) and a mailing list for users.

If you are still apprehensive about adding an independent repository to your software sources, note that the CentOS project lists it as trustworthy in its wiki (see here). If you still have concerns, feel free to ask away in the comments!

It is important to note that ELRepo not only provides support for Enterprise Linux 7, but also for previous versions. Considering that CentOS 5 is reaching its end of life (EOL) at the end of this month (March 2017) that may not seem like a big deal, but keep in mind that CentOS 6 won’t reach its EOL until March 2020.

Regardless of the EL version, you will need to import the repository’s GPG key before actually enabling it:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Enable ELRepo in EL5

# rpm -Uvh http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm

Enable ELRepo in EL6

# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

Enable ELRepo in EL7

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
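The three commands above differ only in the release RPM filename. As a hedged sketch (the version numbers in these URLs are taken from the commands above and may lag behind what elrepo.org currently publishes), the right URL can be picked from the EL major version:

```shell
#!/bin/bash
# Map an EL major version to its ELRepo release RPM URL,
# following the three commands shown above.
elrepo_release_url() {
  case "$1" in
    5) echo "http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm" ;;
    6) echo "http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm" ;;
    7) echo "http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm" ;;
    *) echo "unsupported EL version: $1" >&2; return 1 ;;
  esac
}

# Example: print the URL you would pass to rpm -Uvh on an EL7 host.
elrepo_release_url 7
```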

In this article we will only deal with EL7, and share a few examples in the next section.

Understand ELRepo Channels

To better organize the software contained in this repository, ELRepo is divided into 4 separate channels:

    • elrepo is the main channel and is enabled by default. It does not contain packages present in the official distribution.
    • elrepo-extras contains packages that replace some provided by the distribution. It is not enabled by default. To avoid confusion, when a package needs to be installed or updated from this repository, it can be temporarily enabled via yum as follows (replace package with an actual package name):

# yum --enablerepo=elrepo-extras install package

    • elrepo-testing provides packages that will at one point be part of the main channel but are still under testing.
    • elrepo-kernel provides long-term and stable mainline kernels that have been specially configured for EL.

Both elrepo-testing and elrepo-kernel are disabled by default and can be enabled, as in the case of elrepo-extras, if we need to install or update a package from them.

To list the available packages in each channel, run one of the following commands:

# yum --disablerepo="*" --enablerepo="elrepo" list available
# yum --disablerepo="*" --enablerepo="elrepo-extras" list available
# yum --disablerepo="*" --enablerepo="elrepo-testing" list available
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
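Since the four commands above differ only in the channel name, they can be generated from a small helper; this is a dry-run sketch that only prints the commands, so nothing is actually queried:

```shell
#!/bin/bash
# Build (not run) the per-channel listing command shown above.
list_available_cmd() {
  echo "yum --disablerepo=\"*\" --enablerepo=\"$1\" list available"
}

# Print the command for every ELRepo channel; pipe a line into
# `sh -c` if you want to actually execute it.
for chan in elrepo elrepo-extras elrepo-testing elrepo-kernel; do
  list_available_cmd "$chan"
done
```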

The following image illustrates the first example:

List ELRepo Available Packages


Summary

In this post we have explained what ELRepo is and under what circumstances you may want to add it to your software sources.

If you have any questions or comments about this article, feel free to use the form below to reach us. We look forward to hearing from you!

Source

How to Install and Configure ‘NethServer’ – A CentOS Based All-in-One Linux Distribution

NethServer is a powerful and secure open source Linux distribution, built on top of CentOS 6.6 and designed for small offices and medium enterprises. It comes with a large number of modules which can easily be installed through its web interface; in just a few clicks, NethServer can turn your box into a mail server, FTP server, web server, web filter, firewall, VPN server, file cloud server, Windows file sharing server or email groupware server based on SOGo.

NethServer is released in two editions: Community Edition, which is free, and Enterprise Edition, which comes with paid support. This tutorial will cover the installation procedure of NethServer Free Edition (version 6.6) from an ISO image, although it can also be installed from repositories on a pre-installed CentOS system, using the yum command to download software packages from the web.

For example, if you wish to install NethServer on a pre-installed CentOS system, you can simply execute below commands to transform your current CentOS into NethServer.

# yum localinstall -y http://mirror.nethserver.org/nethserver/nethserver-release-6.6.rpm
# nethserver-install

To install additional NethServer modules, pass the name of each module as a parameter to the install script as shown below.

# nethserver-install nethserver-mail nethserver-nut

As I said above, this guide will only show installation procedure of NethServer Free Edition from an ISO image…

Download NethServer

The NethServer ISO image can be obtained from the following download link:

  1. http://www.nethserver.org/getting-started-with-nethserver/

Before starting the installation procedure, be aware that this method, based on the CD ISO image, will format and destroy all previous data on all of your machine's hard disks. As a safety measure, make sure you remove all unwanted disk drives and keep only the disks where the system will be installed.

After the installation finishes you can re-attach the rest of the disks and add them into your NethServer LVM partitions (VolGroup-lv_root and VolGroup-lv-swap).

Step 1: Installation of NethServer

1. After you have downloaded the ISO image, burn it to a CD or create a bootable USB drive, insert the CD/USB into your machine and instruct your machine's BIOS to boot from it. To boot from CD/USB, press the F12 key while the BIOS is loading, or consult your motherboard manual for the correct boot key.

2. After the BIOS boot sequence completes, the first screen of NethServer should appear on your screen. Choose NethServer interactive install and press Enter key to continue further.

NethServer Boot Menu


3. Wait a few seconds for the installer to load and a Welcome screen should appear. From this screen choose your preferred language, move to the Next button using TAB or the arrow keys and press Enter again to continue.

Choose Installation Language


4. On the next screen choose your network interface for the internal network (green), through which you will administer the server, then jump to Next using the Tab key and press Enter to configure your network settings accordingly. When you're done with the network IP settings, choose the Next tab and hit Enter to continue.

Choose Network Interface


Network Configuration


5. Finally, choose the Install tab and hit the Enter key in order to install NethServer.

Important: Be aware that this step is destructive and will erase and format all of your machine's disks. After this step, the installer will automatically configure and install the system until it completes.

Select NethServer Install


Installation Process


Installing Packages


Step 2: Setting Up Root Password

6. After the installation finishes and the system reboots, log into your NethServer console using the following default credentials:

User : root
Password: Nethesis,1234

Once logged into the system, issue the following command to change the default root password (make sure you choose a strong password at least 8 characters long, with at least one upper-case letter, one number and a special symbol):

# passwd root
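The password policy just described (length of at least 8, an upper-case letter, a digit and a special symbol) can be checked with a small helper before running passwd. This is a hedged illustrative sketch, not part of NethServer:

```shell
#!/bin/bash
# Return 0 if $1 meets the policy described above: length >= 8, at least
# one upper-case letter, one digit and one non-alphanumeric symbol.
strong_password() {
  p="$1"
  [ "${#p}" -ge 8 ] || return 1
  printf '%s' "$p" | grep -q '[A-Z]'        || return 1
  printf '%s' "$p" | grep -q '[0-9]'        || return 1
  printf '%s' "$p" | grep -q '[^A-Za-z0-9]' || return 1
}

# Example: the default password happens to satisfy the policy,
# but it is publicly known, so it must still be changed.
strong_password 'Nethesis,1234' && echo "meets policy" || echo "too weak"
```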

Change NethServer Root Password


Step 3: Initial NethServer Configurations

7. After the root password has been changed, it's time to log in to the NethServer web administration interface and perform the initial configuration, by navigating to the server IP address configured during installation for the internal network interface (green interface), on port 980 over HTTPS:

https://nethserver_IP:980

The first time you navigate to the above URL, a security warning should be displayed in your browser. Accept the self-signed certificate in order to proceed, and the login page should appear.

Login with the root username and the root password you have already changed and the Welcome page should appear. Now, hit Next button to proceed with the initial configurations.

Accept SSL Certificate


NethServer Login Credentials


NethServer Control Panel


8. Next, set up your server Hostname, enter your Domain name and hit Next to move forward.

Set Hostname and Domain


9. Choose your server physical Time zone from the list and hit Next button again.

Set Date and Timezone


10. The next page will ask you to change the default SSH server port. It's good practice to apply this security measure and change the SSH port to an arbitrary port of your choice. Once the SSH port field is set, hit the Next button to continue.

Change SSH Port for NethServer


11. On the next page, choose the No, thanks option if you prefer not to send usage statistics to nethserver.org, and hit the Next button again to proceed further.

Usage Statistics



12. Now we have reached the final configuration screen. Review all the settings so far and, once you're done, hit the Apply button to write the changes to your system. Wait a few seconds for the tasks to complete.

Review NethServer Configuration


Applying Changes


13. Once the task finishes, go to the Dashboard and review your machine's Status, Services, and Disk Usage, as illustrated in the screenshots below.

Check System Status


Check NethServer Services


Check Disk Usage


Step 4: Login through Putty and Update NethServer

14. The final step of this guide is to update your NethServer with the latest packages and security patches. This step can be performed from the server's console or through the web interface (Software Center -> Updates).

It is also a good opportunity to log in remotely through SSH using Putty, as illustrated in the screenshots below, and perform the upgrade by issuing the following command:

# yum upgrade

Open Putty


SSH to NethServer


Update NethServer


While the upgrade process runs you will be asked whether you accept a series of GPG keys. Answer yes (y) to all, and when the upgrade finishes, reboot your system with the init 6 or reboot command in order to boot into the newly installed kernel.

# init 6
OR
# reboot

That's all! Now your machine is ready to become a mail and filter server, web server, firewall, IDS, VPN, file server, DHCP server or whatever other configuration best suits your premises.

Reference Link: http://www.nethserver.org/

Source

How to Setup Local HTTP Yum Repository on CentOS 7

A software repository (“repo” in short) is a central file storage location to keep and maintain software packages, from which users can retrieve packages and install on their computers.

Repositories are often stored on servers on a network for example the internet, which can be accessed by multiple users. However, you can create and configure a local repository on your computer and access it as a single user or allow access to other machines on your LAN (Local Area Network).

One advantage of setting up a local repository is that you don't need an internet connection to install software packages.

YUM (Yellowdog Updater Modified) is a widely used package management tool for RPM (RedHat Package Manager) based Linux systems, which makes software installation easy on Red Hat/CentOS Linux.

In this article, we will explain how to setup a local YUM repository over HTTP (Nginx) web server on CentOS 7 VPS and also show you how to find and install software packages on client CentOS 7 machines.

Our Testing Environment

Yum HTTP Repository Server:	CentOS 7 [192.168.0.100]
Client Machine:		CentOS 7 [192.168.0.101]

Step 1: Install Nginx Web Server

1. First start by installing Nginx HTTP server from the EPEL repository using the YUM package manager as follows.

# yum install epel-release
# yum install nginx 

2. Once you have installed Nginx web server, you can start it for the first time and enable it to start automatically at system boot.

 
# systemctl start nginx
# systemctl enable nginx
# systemctl status nginx

3. Next, you need to open ports 80 and 443 to allow web traffic to the Nginx service; update the system firewall rules to permit inbound packets on HTTP and HTTPS using the commands below.

# firewall-cmd --zone=public --permanent --add-service=http
# firewall-cmd --zone=public --permanent --add-service=https
# firewall-cmd --reload

4. Now you can confirm that your Nginx server is up and running, using the following URL; if you see the default Nginx web page, all is well.

http://SERVER_DOMAIN_NAME_OR_IP 

Nginx Default Page


Step 2: Create Yum Local Repository

5. In this step, you need to install the required packages for creating, configuring and managing your local repository.

# yum install createrepo  yum-utils

6. Next, create the necessary directories (yum repositories) that will store packages and any related information.

# mkdir -p /var/www/html/repos/{base,centosplus,extras,updates}

7. Then use the reposync tool to synchronize CentOS YUM repositories to the local directories as shown.

# reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/www/html/repos/
# reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/www/html/repos/
Sample Output
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.fibergrid.in
 * epel: mirror.xeonbd.com
 * extras: mirrors.fibergrid.in
 * updates: mirrors.fibergrid.in
base/7/x86_64/group                                                    | 891 kB  00:00:02     
No Presto metadata available for base
(1/9911): 389-ds-base-snmp-1.3.7.5-18.el7.x86_64.rpm                   | 163 kB  00:00:02     
(2/9911): 389-ds-base-devel-1.3.7.5-18.el7.x86_64.rpm                  | 267 kB  00:00:02     
(3/9911): ElectricFence-2.2.2-39.el7.i686.rpm                          |  35 kB  00:00:00     
(4/9911): ElectricFence-2.2.2-39.el7.x86_64.rpm                        |  35 kB  00:00:00     
(5/9911): 389-ds-base-libs-1.3.7.5-18.el7.x86_64.rpm                   | 695 kB  00:00:04     
(6/9911): GConf2-devel-3.2.6-8.el7.i686.rpm                            | 110 kB  00:00:00     
(7/9911): GConf2-devel-3.2.6-8.el7.x86_64.rpm                          | 110 kB  00:00:00     
(8/9911): GConf2-3.2.6-8.el7.i686.rpm                                  | 1.0 MB  00:00:06     

In the above commands, the option:

  • -g – enables removing of packages that fail GPG signature checking after downloading.
  • -l – enables yum plugin support.
  • -d – enables deleting of local packages no longer present in repository.
  • -m – enables downloading of comps.xml files.
  • --repoid – specifies the repository ID.
  • --newest-only – tells reposync to only pull the latest version of each package in the repos.
  • --download-metadata – enables downloading all the non-default metadata.
  • --download_path – specifies the path to download packages.
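Since the four reposync invocations shown above differ only in the --repoid value, they can be generated from a loop. This sketch only builds and prints the commands (a dry run); pipe a line into `sh -c` to actually synchronize:

```shell
#!/bin/bash
# Build (not run) one reposync command per repository ID, matching the
# invocations shown above.
reposync_cmd() {
  echo "reposync -g -l -d -m --repoid=$1 --newest-only --download-metadata --download_path=/var/www/html/repos/"
}

for repoid in base centosplus extras updates; do
  reposync_cmd "$repoid"
done
```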

8. Next, check the contents of your local directories to ensure that all the packages have been synchronized locally.

# ls -l /var/www/html/repos/base/
# ls -l /var/www/html/repos/base/Packages/
# ls -l /var/www/html/repos/centosplus/
# ls -l /var/www/html/repos/centosplus/Packages/
# ls -l /var/www/html/repos/extras/
# ls -l /var/www/html/repos/extras/Packages/
# ls -l /var/www/html/repos/updates/
# ls -l /var/www/html/repos/updates/Packages/

9. Now create new repodata for the local repositories by running the following commands, where the -g flag is used to update the package group information using the specified .xml file.

# createrepo -g comps.xml /var/www/html/repos/base/  
# createrepo -g comps.xml /var/www/html/repos/centosplus/	
# createrepo -g comps.xml /var/www/html/repos/extras/  
# createrepo -g comps.xml /var/www/html/repos/updates/  

10. To enable viewing of the repositories and the packages in them via a web browser, create an Nginx server block which points to the root of your repositories, as shown.

# vim /etc/nginx/conf.d/repos.conf 

Add the following configuration to the repos.conf file.

server {
        listen   80;
        server_name  repos.test.lab;	#change  test.lab to your real domain 
        root   /var/www/html/repos;
        location / {
                index  index.php index.html index.htm;
                autoindex on;	#enable listing of directory index
        }
}

Save the file and close it.

11. Then restart your Nginx server and view the repositories from a web browser using the following URL.

http://repos.test.lab

View Local Yum Repositories


Step 3: Create Cron Job to Synchronize and Create Repositories

12. Next, add a cron job that will automatically synchronize your local repos with the official CentOS repos to grab the updates and security patches.

# vim /etc/cron.daily/update-localrepos

Add these commands in the script.

#!/bin/bash
##specify all local repositories in a single variable
LOCAL_REPOS="base centosplus extras updates"
##a loop to update repos one at a time 
for REPO in ${LOCAL_REPOS}; do
reposync -g -l -d -m --repoid=$REPO --newest-only --download-metadata --download_path=/var/www/html/repos/
createrepo -g comps.xml /var/www/html/repos/$REPO/  
done

Save and close the script, then set the appropriate permissions on it.

# chmod 755 /etc/cron.daily/update-localrepos

Step 4: Setup Local Yum Repository on Client Machines

13. Now on your CentOS client machines, add your local repos to the YUM configuration.

# vim /etc/yum.repos.d/local-repos.repo

Copy and paste the configuration below in the file local-repos.repo (make changes where necessary).

[local-base]
name=CentOS Base
baseurl=http://repos.test.lab/base/
gpgcheck=0
enabled=1

[local-centosplus]
name=CentOS CentOSPlus
baseurl=http://repos.test.lab/centosplus/
gpgcheck=0
enabled=1

[local-extras]
name=CentOS Extras
baseurl=http://repos.test.lab/extras/
gpgcheck=0
enabled=1

[local-updates]
name=CentOS Updates
baseurl=http://repos.test.lab/updates/
gpgcheck=0
enabled=1
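The four sections above follow the same pattern, so the file can also be generated with a loop. This is a hedged sketch assuming the server is reachable as repos.test.lab (the name field is free-form, so the loop reuses the repo ID); redirect the output to /etc/yum.repos.d/local-repos.repo:

```shell
#!/bin/bash
# Print one [local-*] section per repository, matching the file above.
repo_section() {
  printf '[local-%s]\nname=CentOS %s\nbaseurl=http://repos.test.lab/%s/\ngpgcheck=0\nenabled=1\n\n' \
    "$1" "$1" "$1"
}

for repo in base centosplus extras updates; do
  repo_section "$repo"
done
```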

Save the file and start using your local YUM mirrors.

14. Next, run the following command to view your local repos in the list of available YUM repos, on the client machines.

#  yum repolist
OR
# yum repolist all

View Local Yum Repositories on Client


That’s all! In this article, we have explained how to setup a local YUM repository on CentOS 7. We hope that you found this guide useful. If you have any questions, or any other thoughts to share, use the comment form below.

Source

Manage Log Messages Under Systemd Using Journalctl [Comprehensive Guide]

Systemd is a cutting-edge system and service manager for Linux systems: an init daemon replacement intended to start processes in parallel at system boot. It is now supported in a number of current mainstream distributions, including Fedora, Debian, Ubuntu, OpenSuSE, Arch, RHEL, CentOS, etc.

Earlier on, we explained the story behind ‘init’ and ‘systemd’; where we discussed what the two daemons are, why ‘init’ technically needed to be replaced with ‘systemd’ as well as the main features of systemd.

One of the main advantages of systemd over other common init systems is support for centralized management of system and process logging using a journal. In this article, we will learn how to manage and view log messages under systemd using the journalctl command in Linux.

Important: Before moving further in this guide, you may want to learn how to manage ‘Systemd’ services and units using ‘Systemctl’ command, and also create and run new service units in systemd using shell scripts in Linux. However, if you are okay with all the above, continue reading through.

Configuring Journald for Collecting Log Messages Under Systemd

journald is a daemon which gathers and writes journal entries from the entire system; these are essentially boot messages, messages from the kernel and from syslog or various applications, and it stores all of them in a central location – the journal file.

You can control the behavior of journald via its default configuration file, /etc/systemd/journald.conf, which is generated at compile time. This file contains options whose values you may change to suit your local environment requirements.

Below is a sample of what the file looks like, viewed using the cat command.

$ cat /etc/systemd/journald.conf 
Journald Configuration File
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

Note that various packages install and use configuration extracts in /usr/lib/systemd/*.conf.d/, and runtime configurations can be found in /run/systemd/journald.conf.d/*.conf, which you may not necessarily use.

Enable Journal Data Storage On Disk

A number of Linux distributions, including Ubuntu and its derivatives like Linux Mint, do not enable persistent storage of boot messages on disk by default.

It is possible to enable this by setting the “Storage” option to “persistent” as shown below. This will create the /var/log/journal directory and all journal files will be stored under it.

$ sudo vi /etc/systemd/journald.conf 
OR
$ sudo nano /etc/systemd/journald.conf 
[Journal]
Storage=persistent

For additional settings, find the meaning of all options which can be configured under the “[Journal]” section by typing.

$ man journald.conf

Setting Correct System Time Using Timedatectl Command

For reliable log management under systemd using the journald service, ensure that the time settings, including the timezone, are correct on the system.

In order to view the current date and time settings on your system, type.

$ timedatectl 
OR
$ timedatectl status

Local time: Thu 2017-06-15 13:29:09 EAT
Universal time: Thu 2017-06-15 10:29:09 UTC
RTC time: Thu 2017-06-15 10:29:09
Time zone: Africa/Kampala (EAT, +0300)
Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

To set the correct timezone and possibly system time, use the commands below.

$ sudo timedatectl set-timezone  Africa/Kampala
$ sudo timedatectl set-time “13:50:00”

Viewing Log Messages Using Journalctl Command

journalctl is a utility used to view the contents of the systemd journal (which is written by journald service).

To show all collected logs without any filtering, type.

$ journalctl
View Log Messages
-- Logs begin at Wed 2017-06-14 21:56:43 EAT, end at Thu 2017-06-15 12:28:19 EAT
Jun 14 21:56:43 tecmint systemd-journald[336]: Runtime journal (/run/log/journal
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpuset
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpu
Jun 14 21:56:43 tecmint kernel: Initializing cgroup subsys cpuacct
Jun 14 21:56:43 tecmint kernel: Linux version 4.4.0-21-generic (buildd@lgw01-21)
Jun 14 21:56:43 tecmint kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-4.4.0-21-
Jun 14 21:56:43 tecmint kernel: KERNEL supported cpus:
Jun 14 21:56:43 tecmint kernel:   Intel GenuineIntel
Jun 14 21:56:43 tecmint kernel:   AMD AuthenticAMD
Jun 14 21:56:43 tecmint kernel:   Centaur CentaurHauls
Jun 14 21:56:43 tecmint kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x01: 'x87 flo
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x02: 'SSE reg
Jun 14 21:56:43 tecmint kernel: x86/fpu: Supporting XSAVE feature 0x04: 'AVX reg
Jun 14 21:56:43 tecmint kernel: x86/fpu: Enabled xstate features 0x7, context si
Jun 14 21:56:43 tecmint kernel: x86/fpu: Using 'eager' FPU context switches.
Jun 14 21:56:43 tecmint kernel: e820: BIOS-provided physical RAM map:
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000090000-0x00000000000
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000000100000-0x000000001ff
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000020000000-0x00000000201
Jun 14 21:56:43 tecmint kernel: BIOS-e820: [mem 0x0000000020200000-0x00000000400

View Log messages Based On Boots

You can display a list of boot numbers (relative to the current boot), their IDs, and the timestamps of the first and last message corresponding to the boot with the --list-boots option.

$ journalctl --list-boots

-1 9fb590b48e1242f58c2579defdbbddc9 Thu 2017-06-15 16:43:36 EAT—Thu 2017-06-15 1
 0 464ae35c6e264a4ca087949936be434a Thu 2017-06-15 16:47:36 EAT—Thu 2017-06-15 1 

To view the journal entries from the current boot (number 0), use the -b switch like this (same as the sample output above).

$ journalctl -b

and to see a journal from the previous boot, use the -1 relative pointer with the -b option as below.

$ journalctl -b -1

Alternatively, use the boot ID like this.

$ journalctl -b 9fb590b48e1242f58c2579defdbbddc9

Filtering Log Messages Based On Time

To use time in Coordinated Universal Time (UTC) format, add the --utc options as follows.

$ journalctl --utc

To see all of the entries since a particular date and time, e.g. June 15th, 2017 at 8:15 AM, type this command.

$ journalctl --since "2017-06-15 08:15:00"
$ journalctl --since today
$ journalctl --since yesterday

Viewing Recent Log Messages

To view recent log messages (10 by default), use the -n flag as shown below.

$ journalctl -n
$ journalctl -n 20 

Viewing Log Messages Generated By Kernel

To see only kernel messages, similar to the dmesg command output, you can use the -k flag.

$ journalctl -k 
$ journalctl -k -b 
$ journalctl -k -b 9fb590b48e1242f58c2579defdbbddc9

Viewing Log Messages Generated By Units

To view all journal entries for a particular unit, use the -u switch as follows.

$ journalctl -u apache2.service

To narrow down to the current boot, type this command.

$ journalctl -b -u apache2.service

To show logs from the previous boot, use this.

$ journalctl -b -1 -u apache2.service

Below are some other useful commands:

$ journalctl -u apache2.service  
$ journalctl -u apache2.service --since today
$ journalctl -u apache2.service -u nagios.service --since yesterday

Viewing Log Messages Generated By Processes

To view logs generated by a specific process, specify its PID like this.

$ journalctl _PID=19487
$ journalctl _PID=19487 --since today
$ journalctl _PID=19487 --since yesterday

Viewing Log Messages Generated By User or Group ID

To view logs generated by a specific user or group, specify its user or group ID like this.

$ journalctl _UID=1000
$ journalctl _UID=1000 --since today
$ journalctl _UID=1000 -b -1 --since today

Viewing Logs Generated By a File

To show all logs generated by a file (possibly an executable), such as the D-Bus executable or bash executables, simply type.

$ journalctl /usr/bin/dbus-daemon
$ journalctl /usr/bin/bash

Viewing Log Messages By Priority

You can also filter output based on message priorities or priority ranges using the -p flag. The possible values are: 0 – emerg, 1 – alert, 2 – crit, 3 – err, 4 – warning, 5 – notice, 6 – info, 7 – debug.

$ journalctl -p err

To specify a range, use the format below (emerg to warning).

$ journalctl -p 1..4
OR
$ journalctl -p emerg..warning
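Because the name-to-number mapping listed above is fixed, converting between the two forms of the -p argument is mechanical; a small sketch:

```shell
#!/bin/bash
# Map a syslog priority name (as accepted by journalctl -p) to its number,
# following the 0..7 table listed above.
prio_num() {
  case "$1" in
    emerg)   echo 0 ;;
    alert)   echo 1 ;;
    crit)    echo 2 ;;
    err)     echo 3 ;;
    warning) echo 4 ;;
    notice)  echo 5 ;;
    info)    echo 6 ;;
    debug)   echo 7 ;;
    *) echo "unknown priority: $1" >&2; return 1 ;;
  esac
}

# Example: build the numeric equivalent of "journalctl -p emerg..warning".
echo "journalctl -p $(prio_num emerg)..$(prio_num warning)"
```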

View Log Messages in Real-Time

You can practically watch logs as they are being written with the -f option (similar to tail -f functionality).

$ journalctl -f

Handling Journal Display Formatting

If you want to control the output formatting of the journal entries, add the -o flag and use one of these options: cat, export, json, json-pretty, json-sse, short, short-iso, short-monotonic, short-precise and verbose (check the meaning of each option in the man page).

The cat option shows the actual message of each journal entry without any metadata (timestamp and so on).

$ journalctl -b -u apache2.service -o cat

Managing Journals On a System

To check the journal file for internal consistency, use the --verify option. If all is well, the output should indicate a PASS.

$ journalctl --verify

PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system.journal                               
491f68: Unused data (entry_offset==0)                                                                
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000003184-000551f9866c3d4d.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000001fc8-000551f5d8945a9e.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000000d4f-000551f1becab02f.journal
PASS: /run/log/journal/2a5d5f96ef9147c0b35535562b32d0ff/system@816533ecd00843c4a877a0a962e124f2-0000000000000001-000551f01cfcedff.journal

Deleting Old Journal Files

You can also display the current disk usage of all journal files with the --disk-usage options. It shows the sum of the disk usage of all archived and active journal files:

$ journalctl --disk-usage

To delete old (archived) journal files run the commands below:

$ sudo journalctl --vacuum-size=50M  #delete files until the disk space they use falls below the specified size
$ sudo journalctl --vacuum-time=1years	#delete files so that all journal files contain no data older than the specified timespan
$ sudo journalctl --vacuum-files=4     #delete files so that no more than the specified number of separate journal files remain in storage location

Rotating Journal Files

Last but not least, you can instruct journald to rotate journal files with the --rotate option. Note that this directive does not return until the rotation operation is finished:

$ sudo journalctl --rotate

For an in-depth usage guide and options, view the journalctl man page as follows.

$ man journalctl

Do check out some useful articles.

  1. Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
  2. Petiti – An Open Source Log Analysis Tool for Linux SysAdmins
  3. How to Setup and Manage Log Rotation Using Logrotate in Linux
  4. lnav – Watch and Analyze Apache Logs from a Linux Terminal

That’s it for now. Use the feedback form below to ask any questions or add your thoughts on this topic.

Source

Understanding APT, APT-Cache and Their Frequently Used Commands

If you’ve ever used Debian or a Debian based distribution like Ubuntu or Linux Mint, then chances are that you’ve used the APT package system to install or remove software. Even if you’ve never dabbled on the command line, the underlying system that powers your package manager GUI is the APT system.

Understanding APT and APT-Cache

Today, we are going to take a look at some familiar commands, dive into some less frequently used APT commands, and shed some light on this brilliantly designed system.

What is APT?

APT stands for Advanced Package Tool. It was first seen in Debian 2.1 back in 1999. Essentially, APT is a management system for dpkg packages, as seen with the extension *.deb. It was designed to not only manage packages and updates, but to solve the many dependency issues when installing certain packages.

As anyone who was using Linux back in those pioneer days will remember, we were all too familiar with the term “dependency hell” when trying to compile something from source, or even when dealing with a number of Red Hat’s individual RPM files.

APT solved all of these dependency issues automatically, making the installation of any package, regardless of its size or number of dependencies, a one-line command. To those of us who laboured for hours over these tasks, this was one of those “sun parting the clouds” moments in our Linux lives!

Understanding APT Configuration

The first file we are going to look at is one of APT’s configuration files.

$ sudo cat /etc/apt/sources.list
Sample Output
deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise main
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise main

deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates main
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates main

deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise universe
deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ precise-updates universe

deb http://security.ubuntu.com/ubuntu precise-security main
deb-src http://security.ubuntu.com/ubuntu precise-security main
deb http://security.ubuntu.com/ubuntu precise-security universe
deb-src http://security.ubuntu.com/ubuntu precise-security universe

As you can probably deduce from my sources.list file, I’m using Ubuntu 12.04 (Precise Pangolin). I’m also using three repositories:

  1. Main Repository
  2. Universe Repository
  3. Ubuntu Security Repository

The syntax of this file is relatively simple:

deb (url) release repository

The accompanying line is the source file repository. It follows a similar format:

deb-src (url) release repository

This file is pretty much the only thing you’ll ever have to edit using APT, and chances are that the defaults will serve you quite well and you will never need to edit it at all.
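As a quick check of the syntax above, you can split one of the entries into its fields with the shell (the line below is taken verbatim from the sample sources.list output earlier):

```shell
# A sources.list entry: type, URL, release, component.
line="deb http://security.ubuntu.com/ubuntu precise-security main"
set -- $line    # split the line on whitespace into $1..$4
echo "type=$1 url=$2 release=$3 repository=$4"
# → type=deb url=http://security.ubuntu.com/ubuntu release=precise-security repository=main
```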

However, there are times that you might want to add third-party repositories. You would simply enter them using the same format, and then run the update command:

$ sudo apt-get update

NOTE: Be very mindful of adding third party repositories!!! Only add from trusted and reputable sources. Adding dodgy repositories or mixing releases can seriously mess up your system!
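A minimal sketch of what that looks like in practice, assuming a hypothetical vendor repository (`repo.example.com` is a placeholder; substitute the real line the vendor publishes):

```shell
# Hypothetical third-party entry; a separate file under
# /etc/apt/sources.list.d/ is tidier than editing sources.list itself.
echo "deb http://repo.example.com/ubuntu precise main" | \
    sudo tee /etc/apt/sources.list.d/example.list

# Refresh the package database so APT sees the new repository.
sudo apt-get update
```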

We’ve taken a look at our sources.list file and now know how to update it, so what’s next? Let’s install some packages. Let’s say that we are running a server and we want to install WordPress. First let’s search for the package:

$ apt-cache search wordpress
Sample Output
blogilo - graphical blogging client
drivel - Blogging client for the GNOME desktop
drupal6-mod-views - views modules for Drupal 6
drupal6-thm-arthemia - arthemia theme for Drupal 6
gnome-blog - GNOME application to post to weblog entries
lekhonee-gnome - desktop client for wordpress blogs
libmarkdown-php - PHP library for rendering Markdown data
qtm - Web-log interface program
tomboy-blogposter - Tomboy add-in for posting notes to a blog
wordpress - weblog manager
wordpress-l10n - weblog manager - language files
wordpress-openid - OpenID plugin for WordPress
wordpress-shibboleth - Shibboleth plugin for WordPress
wordpress-xrds-simple - XRDS-Simple plugin for WordPress
zine - Python powered blog engine

What is APT-Cache?

Apt-cache is a command that simply queries the APT cache; it needs no root privileges since it only reads data. We passed the search parameter to it, telling it to search the cache for a string. As we can see above, searching for “wordpress” returned a number of packages related to the search string, with a short description of each package.

From this, we see the main package of “wordpress – weblog manager,” and we want to install it. But wouldn’t it be nice to see exactly what dependencies are going to be installed along with it? APT can tell us that as well:

$ apt-cache showpkg wordpress
Sample Output
Versions:
3.3.1+dfsg-1 (/var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages)
 Description Language:
                 File: /var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages
                  MD5: 3558d680fa97c6a3f32c5c5e9f4a182a
 Description Language: en
                 File: /var/lib/apt/lists/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise_universe_i18n_Translation-en
                  MD5: 3558d680fa97c6a3f32c5c5e9f4a182a

Reverse Depends:
  wordpress-xrds-simple,wordpress
  wordpress-shibboleth,wordpress 2.8
  wordpress-openid,wordpress
  wordpress-l10n,wordpress 2.8.4-2
Dependencies:
3.3.1+dfsg-1 - libjs-cropper (2 1.2.1) libjs-prototype (2 1.7.0) libjs-scriptaculous (2 1.9.0) libphp-phpmailer (2 5.1) libphp-simplepie (2 1.2) libphp-snoopy (2 1.2.4) tinymce (2 3.4.3.2+dfsg0) apache2 (16 (null)) httpd (0 (null)) mysql-client (0 (null)) libapache2-mod-php5 (16 (null)) php5 (0 (null)) php5-mysql (0 (null)) php5-gd (0 (null)) mysql-server (2 5.0.15) wordpress-l10n (0 (null))
Provides:
3.3.1+dfsg-1 -
Reverse Provides:

This shows us that wordpress 3.3.1 is the version to be installed, the repository it is to be installed from, reverse dependencies, and other packages it depends on, plus their version numbers.

NOTE: (null means that the version is not defined, and the latest version in the repository will be installed.)

Now, the actual install command:

$ sudo apt-get install wordpress

That command will install WordPress 3.3.1 and all of its dependencies that are not currently installed.

Of course, that is not all you can do with APT. Some other useful commands are as follows:

NOTE: It is a good practice to run apt-get update before running any series of APT commands. Remember, apt-get update parses your /etc/apt/sources.list file and updates its database.
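Following that advice, the two steps can be chained so that the install only runs if the update succeeds (a sketch; the `-y` flag auto-answers the confirmation prompt):

```shell
# '&&' runs the second command only if the first exits successfully,
# so a failed update never triggers an install against a stale database.
sudo apt-get update && sudo apt-get install -y wordpress
```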

Uninstalling a package is just as easy as installing the package:

$ sudo apt-get remove wordpress

Unfortunately, the apt-get remove command leaves all of the configuration files intact. To remove those as well, you’ll want to use apt-get purge:

$ sudo apt-get purge wordpress

Every now and then, you might run across a situation where there are broken dependencies. This usually happens when you don’t run apt-get update properly, mangling the database. Fortunately, APT has a fix for it:

$ sudo apt-get -f install

Since APT downloads all of the *.deb files from the repository right to your machine (storing them in /var/cache/apt/archives), you might want to periodically remove them to free up disk space:

$ sudo apt-get clean
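To see how much space the cache is actually using before you clean it (`du` is a standard tool; the path is the one mentioned above):

```shell
# Report the total on-disk size of the downloaded .deb cache.
du -sh /var/cache/apt/archives
```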

This is just a small fraction of what APT and APT-Cache can do. There is still a lot to learn, and you can explore some more advanced commands in the article below.

  1. 25 Useful and Advanced Commands of APT-GET and APT-CACHE

As always, please have a look at the man pages for even more options. Once you gain familiarity with APT, it is possible to write awesome cron scripts to keep the system up to date.


The 2018 Web Developer Roadmap: An illustrated guide to becoming a Frontend or Backend Developer, with links to courses

Web Developer in 2018

Here’s where you’ll start. You can choose either the Front-end, or Back-end path below. Regardless, there are eight recommendations in yellow that you should learn for either path.

Recommended learning for either path

Frontend Path & Courses for Learning Front End

Focus on the yellow boxes and grow from there. Below the map are additional resources to aid your learning.

The Web Development Bootcamp

You need to learn the basics and build a solid foundation of web development principles. There are many ways to do this, but in my opinion, The Web Development Bootcamp is the best and easiest way.

The Advanced Web Development Bootcamp

Now that you’ve taken the first bootcamp and know how to build full stack web applications, it’s time to take your learning a little deeper. The Advanced Web Development Bootcamp introduces complex technologies, frameworks, and tools you can use to build beautiful, responsive, web applications.

HTML / CSS

Beginner JavaScript

Advanced JavaScript

React JS

Angular JS

Vue JS

Backend

Focus on the yellow boxes and go from there. Below the map are additional resources to aid your learning.

Node JS

Ruby

Python

PHP

Java

MySQL

Closing Notes

You made it to the end of the article… Good luck on your Web Development journey! It’s certainly not going to be easy, but by following this guide, you are one step closer to accomplishing your goal.

