The Professional Approach to Upgrading Linux Servers

With the release of Ubuntu 18.04, I thought it would be the perfect time to talk about server upgrades. Specifically, I’m going to share with you the process that I’m using to perform upgrades.

I don’t shy away from work, but I hate doing work that really isn’t needed. That’s why my first question when it comes to upgrades is:

Is this upgrade even necessary?

The first thing to know is the EOL (End of Life) for support for the OS you’re using. Here are the current EOLs for Ubuntu:

Ubuntu 14.04 LTS: April 2019
Ubuntu 16.04 LTS: April 2021
Ubuntu 18.04 LTS: April 2023

(By the way, Red Hat Enterprise Linux delivers at least 10 years of support for their major releases. This is just one example why you’ll find RHEL being used in large organizations.)

So, if you are thinking of upgrading from Ubuntu 16.04 to 18.04, consider if the service that server provides is even needed beyond April 2021. If the server is going away in the next couple of years, then it probably isn’t worth your time to upgrade it.
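To put a number on that decision, you can compute how many days remain before an EOL date. A quick sketch, assuming GNU date is available (the date below is Ubuntu 16.04’s EOL):

```shell
#!/bin/sh
# Days remaining until Ubuntu 16.04's EOL (April 2021); assumes GNU date.
eol="2021-04-30"
now=$(date +%s)
end=$(date -d "$eol" +%s)
echo "$(( (end - now) / 86400 )) days until $eol"
```

A negative number means the release is already past its EOL.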

If you do decide to go ahead with the upgrade, then…

Determine What Software Is Being Used

Hopefully, you have a script or used some sort of documented process to build the existing server. If so, then you have a good idea of what’s already on the server.

If you don’t, it’s time to start researching.

Look at the running processes with the “ps” command. I like using “ps -ef” because it shows every process (-e) with a full-format listing (-f).

ps -ef

Look at any non-default users in /etc/passwd. What processes are they running? You can show the processes of a given user by using the “-u” option to “ps.”

ps -fu www-data
ps -fu haproxy

Determine what ports are open and what processes have those ports open:

sudo netstat -nutlp
sudo lsof -nPi

Look for any cron jobs being used.

sudo ls -lR /etc/cron*
sudo ls -lR /var/spool/cron

Look for other miscellaneous clues such as disk usage and sudo configurations.

df -h
sudo du -h /home | sort -h
sudo cat /etc/sudoers
sudo ls -l /etc/sudoers.d

Determine the Current Software Versions

Now that you have a list of software that is running on your server, determine what versions are being used. Here’s an example list for an Ubuntu 16.04 system:

  • HAProxy 1.6.3
  • Nginx 1.10.3
  • MariaDB 10.0.34

One way to get the versions is to look at the packages like so:

dpkg -l haproxy nginx mariadb-server
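If you want the list as clean name/version pairs, awk can trim the “dpkg -l” output, since package lines start with “ii”. A sketch, run here against a captured sample of the listing rather than a live system:

```shell
#!/bin/sh
# Sample 'dpkg -l' output (captured); on a real server you would run:
#   dpkg -l haproxy nginx | awk '$1 == "ii" { print $2, $3 }'
dpkg_output='ii  haproxy  1.6.3-1ubuntu0.1  amd64  fast and reliable load balancing
ii  nginx  1.10.3-0ubuntu0.1  amd64  small, powerful, scalable web server'
echo "$dpkg_output" | awk '$1 == "ii" { print $2, $3 }'
```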

Determine the New Software Versions

Now it’s time to see what version of each piece of software ships with the new distro version. For Ubuntu 18.04 you can use “apt show PKG_NAME” (package names are lowercase):

apt show haproxy

To display just the version, grep it out like so:

apt show haproxy | grep -i version

Here’s our list for Ubuntu 18.04:

  • HAProxy 1.8.8
  • Nginx 1.14.0
  • MariaDB 10.1.29

Read the Release Notes

Now, find the release notes for each version of each piece of software. In this example, we are upgrading HAProxy from 1.6.3 to 1.8.8. Most software these days conforms to Semantic Versioning guidelines. In short, given a version number MAJOR.MINOR.PATCH, increment the:

MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.

This means we’re most concerned with major versions, somewhat concerned with minor versions, and can pretty much ignore the patch version. In practice, we can think of this upgrade as going from 1.6 to 1.8.
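Shell parameter expansion makes it easy to reduce a version string to MAJOR.MINOR for this comparison; a small sketch with some example old/new pairs:

```shell
#!/bin/sh
# ${v%.*} strips the trailing .PATCH component, leaving MAJOR.MINOR.
while read -r pkg old new; do
    printf '%s: %s -> %s\n' "$pkg" "${old%.*}" "${new%.*}"
done <<'EOF'
haproxy 1.6.3 1.8.8
nginx 1.10.3 1.14.0
mariadb 10.0.34 10.1.29
EOF
```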

Because it’s the same major version (1), we should be fine to just perform the upgrade. That’s the theory, anyway. It doesn’t always work in practice.

In this case, read the release notes for HAProxy versions 1.7 and 1.8. Look for any signs of backward compatibility issues such as configuration syntax changes. Also look for new default values and then consider how those new default values could affect the environment.

Repeat this process for the other major pieces of software. In this example that would be going from Nginx 1.10 to 1.14 and MariaDB 10.0 to 10.1.

Make Changes Based on the Release Notes

Based on the information from the release notes, make any required or desired adjustments to the configuration files.

If you have your configuration files stored in version control, make your changes there. If you have configuration files or modifications performed by your build scripts, make your changes there. If you aren’t doing either one of those, DO IT FOR THIS DEPLOYMENT/UPGRADE. 😉 Seriously, just make a copy of the configuration files and make your changes to them. That way you can push them to your new server when it’s time to test.

If you’re not sure what configuration file or files a given service uses, refer to its documentation. You can read the man page or visit the website for the software.

Also, you can list the contents of its package and look for “etc”, “conf”, and “cfg”. Here’s an example from an Ubuntu 16.04 system:

dpkg -L haproxy | grep -E 'etc|cfg|conf'

The “dpkg -L” command lists the files in the package while the grep command matches “etc”, “cfg”, or “conf”. The “-E” option is for extended regular expressions. The pipe (|) acts as an “or” in regular expressions.

You can also use the locate command.

locate haproxy | grep -E 'etc|cfg|conf'

In case you’re wondering, the main configuration file for haproxy is haproxy.cfg.

Install the Software on the Upgraded Server

Now install the major pieces of software on a new server running the new release of the distro.

Of course, use a brand new server installation. You want to test your changes before you put them into production.

By the way, if you have a dedicated non-production (test/dev) network, use it for this test. If you have a service on the server you are upgrading that connects to other servers/services, it’s a good idea to isolate it from production. You don’t want to accidentally perform a production action when you’re testing. This means you may need to replicate those other servers in your non-production environment before you can fully test the particular upgrade that you’re working on.

If you have deployment scripts you can use them to perform the installs. If you use Ansible or the like, use it against the new server. Or you can manually perform the install, making notes of all the commands you run so that you can put them in a script later on. For example, to manually install HAProxy on Ubuntu 18.04, run:

apt install -y haproxy

Next, put the configuration files in place.

Start the Services

If the software that you are installing is a service, make sure it starts at boot time.

sudo systemctl enable haproxy

Start the service:

sudo systemctl start haproxy

If your existing deployment script starts the service automatically, perform a restart to make sure that any of the new configuration file changes are being used.

sudo systemctl restart haproxy

See if the service is running.

sudo systemctl status haproxy

If it failed, read the error message and make the required corrections. Perhaps there is a configuration option that worked with the previous version that isn’t valid with the new version, for example.
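Many services can validate a configuration file without (re)starting, which catches exactly this kind of error early: HAProxy supports “haproxy -c -f FILE” and Nginx supports “nginx -t”. A defensive sketch that only runs the checks for tools that are actually installed (on a real server, run the checks with sudo):

```shell
#!/bin/sh
# Run each service's config-check mode if the binary exists.
for check in "haproxy -c -f /etc/haproxy/haproxy.cfg" "nginx -t"; do
    tool=${check%% *}
    if command -v "$tool" >/dev/null 2>&1; then
        $check || echo "$tool: config check FAILED"
    else
        echo "$tool not installed, skipping"
    fi
done
```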

Import the Data

If you have services that store data, such as a database service, then import test data into the system.

If you don’t have test data, then copy over your production data to the new server.

If you are using production data, you need to be very careful at this point.

1) You don’t want to accidentally destroy or alter any production data and…

2) You don’t want your new system taking any unwanted actions based on production data.

On point #1, you don’t want to make a costly mistake such as getting your source and destinations mixed up and end up overwriting (or deleting) production data. Pro tip: make sure you have good production backups that you can restore.

On point #2, you don’t want to do something like double charge the business’s customers or send out duplicate emails, etc. To this end, stop all the software and services that are not required for the import before you do it. For example, disable cron jobs and stop any in-house written software running on the test system that might kick off an action.

It’s a good idea to have TEST data. If you don’t have test data, perhaps you can use this upgrade as an opportunity to create some. Take a copy of the production data and anonymize it. Change real email addresses to fake ones, etc.
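As a tiny sketch of what anonymizing can look like, here is a hypothetical customers.csv where every real address is replaced with a fake one derived from the row’s id (the file name and columns are made up for illustration; .invalid is a reserved, never-routable domain):

```shell
#!/bin/sh
# Build a sample file, then rewrite the email column to a reserved test domain.
printf 'id,name,email\n1,Alice,alice@example.org\n2,Bob,bob@corp.example\n' > customers.csv
awk -F, 'NR == 1 { print; next } { print $1 "," $2 ",user" $1 "@test.invalid" }' customers.csv
```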

As previously mentioned, do your tests on a non-production network that cannot directly touch production.

Perform Service Checks

If you have a service availability monitoring tool (and why wouldn’t you???), then point it at the new server. Let it do its job and tell you if something isn’t working correctly. For example, you may have installed and started HAProxy, but perhaps it didn’t open up the proper port because you accidentally forgot to copy over the configuration.

Whether or not you have a service availability monitoring tool, use what you know about the service to see if it’s working properly. For example, did it open up the proper port or ports? (Use the “netstat” and “lsof” commands from above). Are there any error messages you should be concerned about?

If you’re at all familiar with the service, test it. If it’s a web server, does it serve up the proper web pages? If it’s a database server, can you run queries against it?
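For a quick “did it open its port” check, bash can probe a TCP port with no extra tools via its /dev/tcp pseudo-device; the host and ports below are placeholders for your own service:

```shell
#!/bin/bash
# Probe TCP ports using bash's built-in /dev/tcp redirection (bash-only).
host=127.0.0.1
for port in 80 443; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo "port $port open"
    else
        echo "port $port closed"
    fi
done
```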

If you’re not familiar with the service or a normal user of the service, it’s time to enlist help. If you have a team that is responsible for testing, hand it over to them. Maybe it’s time for someone in the business who uses the service to check it out and see if it works as expected.

If you don’t have a regression testing process in place, now would be a good time to create one. The goal is to make changes and know that those changes haven’t broken the service. Upgrading the OS is a major change that has the potential to break things in a major way.

Prepare for Production

Once you’ve completed this entire process and tested your work, put all your notes into a production implementation plan. Use that plan as a checklist when you’re ready to go into production. It’s probably worth it to test your plan on another newly installed system to make sure everything goes smoothly. This is especially true when you are working on a really important system.

By the way, don’t think less of yourself for having a detailed plan and checklist. It actually shows your professionalism and commitment to doing good work.

For example, would you rather fly on a plane with a pilot who uses a checklist or one who just “wings it”? I don’t care how smart or talented that pilot is; I want them to double-check their work when it comes to my life.

Yes, It’s a Lot of Work

You might be thinking to yourself, “Wow, this is a very tedious and time-consuming process.” And you’d be right.

If you want to be a good/great Linux professional, this is exactly what it takes. Attention to detail and hard work are part of the job.

The good news is that you get compensated in proportion to your professionalism and level of responsibility.

If it was fast and easy, everyone would be doing it, right?

Hopefully, this post gave you some ideas beyond just blindly upgrading and hoping for the best. 😉

Speaking of the best…. I wish you the best!

Jason

P.S. If you’re ready to level-up your Linux skills, check out the courses I created for you here.

Source

LMDE 3 “Cindy” Cinnamon – BETA Release – The Linux Mint Blog

This is the BETA release for LMDE 3 “Cindy”.

LMDE 3 Cindy

LMDE is a Linux Mint project and it stands for “Linux Mint Debian Edition”. Its main goal is for the Linux Mint team to see how viable our distribution would be and how much work would be necessary if Ubuntu was ever to disappear. LMDE aims to be as similar as possible to Linux Mint, but without using Ubuntu. The package base is provided by Debian instead.

There are no point releases in LMDE. Other than bug fixes and security fixes, Debian base packages stay the same, but Mint and desktop components are updated continuously. When ready, newly developed features land directly in LMDE, whereas they are staged for inclusion in the next upcoming Linux Mint point release.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for LMDE 3

System requirements:

  • 1GB RAM (2GB recommended for a comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.

Bug reports:

  • Bugs in this release should be reported on Github at https://github.com/linuxmint/lmde-3-cinnamon-beta/issues.
  • Create one issue per bug.
  • As described in the Linux Mint Troubleshooting Guide, do not report or create issues for observations.
  • Be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or which cause we understand are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images, it is your responsibility to check you are downloading the official ones.

Enjoy!

We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

Source

Linux Scoop — Ubuntu Budgie 18.04 LTS Ubuntu…

Ubuntu Budgie 18.04 LTS – See What’s New

Ubuntu Budgie 18.04 LTS is the latest release of Ubuntu Budgie. As part of the Ubuntu 18.04 family of flavors, this release ships with the latest Budgie desktop 10.4 as the default desktop environment. Powered by the Linux 4.15 kernel and shipping with the same internals as Ubuntu 18.04 LTS (Bionic Beaver), the Ubuntu Budgie 18.04 LTS official flavor will be supported for three years, until April 2021.

Prominent new features include support for adding OpenVPN connections through the NetworkManager applet, better font handling for Chinese and Korean languages, improved keyboard shortcuts, color emoji support for GNOME Characters and other GNOME apps, as well as window-shuffler capability.

Source

How to change the color of your BASH prompt | Elinux.co.in | Linux Cpanel/ WHM blog

You can change the color of your BASH prompt to green with this command:

export PS1="\e[0;32m[\u@\h \W]\$ \e[m"

This changes the color of the prompt for the current session only. To make it permanent, add the line to your ~/.bash_profile:

vi ~/.bash_profile

Paste in the export line above, save the file, and you are done.

For other colors please see the attached list:

Color Code
Black 0;30
Blue 0;34
Green 0;32
Cyan 0;36
Red 0;31
Purple 0;35
Brown 0;33

Light Color Code
Light Black 1;30
Light Blue 1;34
Light Green 1;32
Light Cyan 1;36
Light Red 1;31
Light Purple 1;35
Light Brown 1;33
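You can preview the whole table at once by printing each escape sequence and letting the terminal render it (the trailing reset sequence returns the terminal to its default color):

```shell
#!/bin/sh
# Render each normal (0;3X) and light (1;3X) color code in its own color.
for code in 30 31 32 33 34 35 36; do
    printf '\033[0;%sm normal %s \033[0m \033[1;%sm light %s \033[0m\n' \
        "$code" "0;$code" "$code" "1;$code"
done
```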

Source

How to Force a User to Change Their Password on Next Login in Linux

Method 1:

To force a user to change his/her password, the password must first have expired. To expire a user’s password, you can use the passwd command, specifying the -e or --expire switch along with the username, as shown.

# passwd --expire ravi

# chage -l ravi

Last password change : password must be changed

Password expires : password must be changed

Password inactive : password must be changed

Account expires : never

Minimum number of days between password change : 0

Maximum number of days between password change : 99999

Number of days of warning before password expires : 7

After running the passwd command above, you can see from the output of the chage command that the user’s password must be changed. The next time the user ravi tries to log in, he will be prompted to change his password before he can access a shell.

Method 2:

Using chage command:

chage command – Change user password expiry information

Use the following syntax to force a user to change their password at next logon on a Linux:

# chage -d 0 user-name

In this example, force ravi to change his password at next logon, enter:

# chage -d 0 ravi

  • -d 0 : Set the number of days since January 1st, 1970 when the password was last changed. The date may also be expressed in the format YYYY-MM-DD. By setting it to zero, you are going to force user to change password upon first login.
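Under the hood, both methods zero out the “last changed” value, which is the third colon-separated field of the user’s /etc/shadow entry. A sketch that checks this on a sample line rather than the real file:

```shell
#!/bin/sh
# Field 3 of /etc/shadow holds days-since-epoch of the last password change;
# 0 means the password must be changed at next login. Sample line only.
shadow_line='ravi:$6$salt$hashedpw:0:0:99999:7:::'
echo "$shadow_line" | awk -F: '$3 == 0 { print $1 " must change password at next login" }'
```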

Source

OWASP Security Shepherd – Insecure Cryptographic Storage Challenge 1 Solution – LSB – ls /blog


Thanks for visiting and today we have another OWASP Security Shepherd Solution for you. This time it’s the Insecure Cryptographic Storage Challenge. Cryptography is usually the safest way to communicate online but this method of encryption is not secure at all.


[Screenshot: the challenge description]

That’s all very straightforward. The key has been encrypted using a “Roman Cipher”. This is incorrect terminology; the correct term is Caesar Cipher. A Caesar cipher takes a letter from the alphabet, say A, and shifts it by a number, like 5. This would change an A to an F, moving 5 places in the alphabet.


So we need to copy the cipher and go to a decoder that’s available online. We just need to paste the code into the decoder and try 5, 6, 7 places and so on.

https://www.dcode.fr/caesar-cipher easily does this for us.
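If you’d rather decode in the terminal than in a browser, tr can apply the reverse shift. A sketch with a made-up ciphertext encoded at a shift of 5, not the challenge’s actual key:

```shell
#!/bin/sh
# Encoding with a shift of 5 maps A->F; decoding maps A-Z back to V-ZA-U.
echo "MJQQT BTWQI" | tr 'A-Z' 'V-ZA-U'
```

This prints “HELLO WORLD”. For an unknown shift you would loop over all 25 rotations and eyeball the output.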

[Screenshot: the cipher pasted into the decoder]

We will leave out how many places in the alphabet this cipher moves, as we would like you to try it yourself. Another challenge down, check!

Thanks for reading, and if you enjoyed this post, please leave a comment. Don’t forget to follow for more tutorials and challenges, too. Peace.

QuBits 2018-09-21


Source

How to Install Rocket.Chat on Debian 9 – LinuxCloudVPS Blog

How to install rocket.chat on Debian 9

Rocket.chat is an open source application that can be used as a team communication solution and can be deployed on your own server. There are many options for this application, such as chatting with team members and friends, using audio and video chat, interacting with website visitors in real time, sharing files and more. In this tutorial, we will install and deploy Rocket.Chat on Debian 9 server. Let’s get started!

1. Update the system

Once you are logged in to your server, you need to update the APT package index and upgrade the installed packages. You can do both with the following commands:

sudo apt update
sudo apt upgrade -y

2. Install Dependencies

Before starting the Rocket.Chat installation, you need to install the following required dependencies so that the application can work:

Node.js – an open-source, cross-platform JavaScript run-time environment.
MongoDB – an open-source, leading NoSQL database program written in C++.
cURL – known as “Client URL”, a command-line tool for transferring data.
GraphicsMagick – a collection of tools and image-processing libraries. GraphicsMagick is an ImageMagick fork.

First, we are going to install cURL, MongoDB, and GraphicsMagick with this command:

sudo apt install -y curl mongodb graphicsmagick

Run the next command to install the Node.js:

curl -sL https://deb.nodesource.com/setup | sudo bash -

sudo apt install -y nodejs

You also need to install the n npm package:

sudo npm install -g n

Use n to download and install Node.js version 8.9.3, which is required by Rocket.Chat:

sudo n 8.9.3

You can check the Node.js current version with the following command:

node --version
v8.9.3

Some npm packages require building from source, so you will need to install the build-essential and python-dev packages:

sudo apt install build-essential python-dev

Now that all the necessary dependencies are in place, we can continue with installing Rocket.Chat.

3. Installing Rocket.Chat

We will use the curl command to download the latest Rocket.Chat version, and we will extract it into the /opt directory:

cd /opt
curl -L https://releases.rocket.chat/latest/download -o rocket.chat.tgz
tar zxvf rocket.chat.tgz
mv bundle Rocket.Chat
cd Rocket.Chat/programs/server
npm install
cd ../..

There are two ways to configure and start Rocket.Chat. The first is to manually set the necessary environment variables and start it:

export ROOT_URL=http://your_domain-or-IP_address:3000/
export MONGO_URL=mongodb://localhost:27017/rocketchat
export PORT=3000

Replace 'your_domain-or-IP_address' with your actual domain name or server’s IP address.

Run the Rocket.Chat server

node main.js

The second one is to create the Rocket.Chat systemd service unit:

nano /etc/systemd/system/rocketchat.service
[Unit]
Description=RocketChat Server
# Remove nginx.target or replace nginx with your proxy
After=network.target remote-fs.target nss-lookup.target mongod.target nginx.target

[Service]
# Adjust the paths if node or main.js live elsewhere
ExecStart=/usr/local/bin/node /opt/Rocket.Chat/main.js
# Always restart the service if it stops (timeout, crash, termination)
Restart=always
# Wait 15 seconds before restarting after a crash
RestartSec=15
# Send output to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rocketchat
#User=<alternate user>
#Group=<alternate group>
Environment=NODE_ENV=production PORT=3000 ROOT_URL=https://your_domain.com MONGO_URL=mongodb://localhost:27017/rocketchat

[Install]
WantedBy=multi-user.target

You can change ROOT_URL to the domain you want to use. If you do not have an available domain, you can enter your server’s IP address instead. You can also change the port number, currently set to 3000, to a port of your choice.
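Rather than editing by hand, sed can swap in those values in one pass. A sketch run against a local sample copy of the Environment line (the domain and port below are made-up values; on the real unit file you would target /etc/systemd/system/rocketchat.service with sudo):

```shell
#!/bin/sh
# Try the substitutions on a sample copy first.
printf 'Environment=NODE_ENV=production PORT=3000 ROOT_URL=https://your_domain.com\n' > unit.sample
sed -i 's|https://your_domain.com|https://chat.example.com|; s|PORT=3000|PORT=8080|' unit.sample
cat unit.sample
```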

To notify systemd that you just created a new unit file, execute the following command:

sudo systemctl daemon-reload

Start the MongoDB and Rocket.Chat services:

sudo systemctl start mongodb
sudo systemctl start rocketchat

You can check the status of the service by running the command:

sudo systemctl status rocketchat
Output:

● rocketchat.service – RocketChat Server
Loaded: loaded (/etc/systemd/system/rocketchat.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2018-08-25 08:35:50 CDT; 4s ago
Main PID: 894 (node)
CGroup: /system.slice/rocketchat.service
└─894 /usr/local/bin/node /opt/Rocket.Chat/main.js

If the rocketchat service is running without errors, you can also enable it to start automatically at boot time:

sudo systemctl enable rocketchat

4. Access Rocket.Chat in the web browser

Now, open http://your_domain-or-IP_address:3000 in your favorite web browser and you should see the Rocket.Chat login/register screen. The first user created will get administrative privileges by default.

That’s it. You have successfully installed Rocket.Chat on your Debian 9 VPS. For more information about how to manage your Rocket.Chat installation, please refer to the official Rocket.Chat documentation.


Of course, you don’t have to do any of this if you use one of our Debian Cloud VPS plans, in which case you can simply ask our expert Linux admins to set up Rocket.Chat web communication and collaboration software for you. They are available 24×7 and will take care of your request immediately.


Source

Mod_Expires Configuration In Apache – LinuxAdmin.io

How to setup mod_expires in Apache

mod_expires is a module which runs on the Apache web server. It allows manipulation of the cache-control headers to leverage browser caching. More specifically, it lets you set how long an asset will be stored by the browser after a client requests it, which can greatly improve page-load times on subsequent requests by that same client. If an asset rarely changes, longer cache times are better; if it changes frequently, use a shorter cache time so that returning visitors see the updated asset. You can read more about the granular configuration on Apache’s site.

This guide assumes you already have a working version of Apache web server. If you do not, please see How To Install Apache.

Verify mod_expires is loaded

First you will want to check to see if mod_expires is loaded by performing the following

$ httpd -M 2>&1|grep expires
expires_module (static)

If it returns nothing, you will need to verify mod_expires is loaded in the config:

cat httpd.conf|grep expires
#LoadModule expires_module modules/mod_expires.so

If it is commented out, you will want to uncomment it and restart Apache.

Configure mod_expires Rules

You will now want to set rules for your site. The configuration can be placed either in the .htaccess file or directly in the Apache vhost stanza. The expiration time you set largely depends on how long you plan on keeping the asset as it is. The ruleset below is fairly conservative; if you do not plan on updating those media types at all, you can set them to expire even a year out.

In this example we will just set the mod_expires values in the .htaccess file in the document root

nano .htaccess

Add the following, adjust any for longer or shorter times depending on your needs:

<IfModule mod_expires.c>
# Turn on the module.
ExpiresActive on
# Set the default expiry time.
ExpiresDefault "access plus 2 days"
ExpiresByType image/jpg "access plus 1 month"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css "access plus 1 month"
ExpiresByType text/javascript "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType application/x-shockwave-flash "access plus 1 month"
ExpiresByType image/ico "access plus 1 month"
ExpiresByType image/x-icon "access plus 1 month"
ExpiresByType text/html "access plus 600 seconds"
</IfModule>

Once you have set those values, subsequent requests should start receiving Expires headers. If you set the expires values directly in the Apache vhost stanza, you will need to restart Apache.

Testing mod_expires to ensure it’s working correctly

There are a couple of different ways to test. You can use the developer tools in a browser to verify the Expires value is being set correctly, or you can test with curl against the URL of the file you are checking:

$ curl -Is https://linuxadmin.io/wp-content/uploads/2017/04/linuxadmin_io_logo.png
HTTP/1.1 200 OK
Date: Mon, 09 Oct 2017 23:10:29 GMT
Content-Type: image/png
Content-Length: 6983
Connection: keep-alive
Set-Cookie: __cfduid=d7768a9a20888ada8e0cee831245051cc1507590629; expires=Tue, 09-Oct-18 23:10:29 GMT; path=/; domain=.linuxadmin.io; HttpOnly
Last-Modified: Fri, 28 Apr 2017 02:21:20 GMT
ETag: “5902a720-1b47”
Expires: Sun, 30 Sep 2018 21:49:53 GMT
Cache-Control: max-age=31536000
CF-Cache-Status: HIT
Accept-Ranges: bytes
Server: cloudflare-nginx
CF-RAY: 3ab5037b7ac291dc-EWR

The line you are checking for is the one that starts with Expires:

Expires: Sun, 30 Sep 2018 21:49:53 GMT

This should return a time based on the mod_expires value you set. That is it for setting mod_expires headers in Apache to leverage browser caching.
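To pull out just the caching headers programmatically, pipe the curl output through grep. A sketch shown here against a captured copy of the response headers above (on a live server you would use `curl -Is URL | grep -Ei '^(expires|cache-control):'`):

```shell
#!/bin/sh
# Filter a captured header block down to the caching-related lines.
headers='HTTP/1.1 200 OK
Content-Type: image/png
Expires: Sun, 30 Sep 2018 21:49:53 GMT
Cache-Control: max-age=31536000'
echo "$headers" | grep -Ei '^(expires|cache-control):'
```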

Oct 9, 2017LinuxAdmin.io

Source

Configure Graylog Server with Puppet | Lisenet.com :: Linux | Security

We’re going to use Puppet to install and configure a Graylog server.

This article is part of the Homelab Project with KVM, Katello and Puppet series. See here (CentOS 7) and here (CentOS 6) for blog posts on how to configure a Graylog server manually.

Homelab

We have a CentOS 7 VM installed which we want to configure as a Graylog server:

syslog.hl.local (10.11.1.14) – Graylog/Elasticsearch/MongoDB with Apache frontend

SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

Configuration with Puppet

Puppet master runs on the Katello server.

Puppet Modules

We use the graylog-graylog Puppet module to configure the server. The module only manages Graylog itself. We need other modules to install the required dependencies like Java, MongoDB, Elasticsearch and Apache (as a reverse proxy):

  1. puppetlabs-java
  2. elastic-elasticsearch
  3. puppet-mongodb
  4. puppetlabs-apache
  5. saz-rsyslog

Please see each module’s documentation for features supported and configuration options available.

Katello Repositories

Repositories for Graylog, Elasticsearch and MongoDB are provided by Katello (we configured them here).

Note that Graylog 2.4 does not work with Elasticsearch 6.x, we’ll therefore use Elasticsearch 5.x.

Install MongoDB

class { 'mongodb::globals':
  ## Use Katello repository
  manage_package_repo => false,
}->
class { 'mongodb::server':
  ensure     => 'present',
  restart    => true,
  bind_ip    => ['127.0.0.1'],
  port       => 27017,
  smallfiles => true,
}

Install Elasticsearch

class { 'elasticsearch':
  ensure            => 'present',
  status            => 'enabled',
  ## Use Katello repository
  manage_repo       => false,
  restart_on_change => true,
}->
elasticsearch::instance { 'graylog':
  config => {
    'cluster.name' => 'graylog',
    'network.host' => '127.0.0.1',
  },
  jvm_options => [
    '-Xms512m',
    '-Xmx512m',
  ],
}

Install Java and Graylog

include ::java

class { 'graylog::server':
  enable => true,
  ensure => 'running',
  config => {
    'is_master'          => true,
    'password_secret'    => '3jC93bTD…OS7F7H87O',
    'root_password_sha2' => '008e3a245354b…f0d9913325f26b',
    'web_enable'         => true,
    'web_listen_uri'     => 'http://syslog.hl.local:9000/',
    'rest_listen_uri'    => 'http://syslog.hl.local:9000/api/',
    'rest_transport_uri' => 'http://syslog.hl.local:9000/api/',
    'root_timezone'      => 'GMT',
  },
}->
##
## Use a script to automatically create
## UDP Syslog/GELF inputs via the Graylog API.
##
file { '/root/syslog_inputs.sh':
  ensure => file,
  source => 'puppet:///homelab_files/syslog_inputs.sh',
  owner  => '0',
  group  => '0',
  mode   => '0700',
  notify => Exec['create_syslog_inputs'],
}
exec { 'create_syslog_inputs':
  command     => '/root/syslog_inputs.sh',
  refreshonly => true,
}

The content of the file syslog_inputs.sh can be seen below.

We create two Graylog inputs, one for syslog to bind to UDP 1514, and one for GELF. See the section below for port redirection from UDP 514 to UDP 1514 as Graylog cannot bind to UDP 514 unless run as root.

#!/bin/bash
GRAYLOG_URL="http://admin:[email protected]:9000/api/system/inputs";

GRAYLOG_INPUT_SYSLOG_UDP='
{
  "global": "true",
  "title": "Syslog UDP",
  "configuration": {
    "port": 1514,
    "bind_address": "0.0.0.0"
  },
  "type": "org.graylog2.inputs.syslog.udp.SyslogUDPInput"
}';

GRAYLOG_INPUT_GELF_UDP='
{
  "global": "true",
  "title": "Gelf UDP",
  "configuration": {
    "port": 12201,
    "bind_address": "0.0.0.0"
  },
  "type": "org.graylog2.inputs.gelf.udp.GELFUDPInput"
}';

curl -s -X POST -H "Content-Type: application/json" -d "$GRAYLOG_INPUT_SYSLOG_UDP" "$GRAYLOG_URL" >/dev/null;
curl -s -X POST -H "Content-Type: application/json" -d "$GRAYLOG_INPUT_GELF_UDP" "$GRAYLOG_URL" >/dev/null;

exit 0;
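Because the curl calls above silence their output, a malformed payload would fail with an HTTP 400 that you never see. It is worth validating the JSON locally before posting it; a quick sketch, assuming python3 is available:

```shell
# Validate the input payload locally before posting it to the Graylog API.
PAYLOAD='{"global": "true", "title": "Syslog UDP",
  "configuration": {"port": 1514, "bind_address": "0.0.0.0"},
  "type": "org.graylog2.inputs.syslog.udp.SyslogUDPInput"}'
printf '%s' "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"
```

The same check works for the GELF payload. Alternatively, drop the -s from curl while debugging to see the API response.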

Configure Firewall

Configure firewall to allow WebUI, syslog and GELF traffic. Also configure port redirection as Graylog cannot bind to UDP 514 unless run as root.

firewall { '007 allow Graylog HTTP/S':
  dport  => [80, 443, 9000],
  source => '10.11.1.0/24',
  proto  => tcp,
  action => accept,
}->
firewall { '008 allow Syslog':
  dport  => ['514', '1514'],
  source => '10.11.1.0/24',
  proto  => udp,
  action => accept,
}->
firewall { '009 redirect Syslog 514 to Graylog 1514':
  chain   => 'PREROUTING',
  jump    => 'REDIRECT',
  proto   => 'udp',
  dport   => '514',
  toports => '1514',
  table   => 'nat',
}->
firewall { '010 allow Gelf':
  dport  => ['12201'],
  source => '10.11.1.0/24',
  proto  => udp,
  action => accept,
}
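For reference, rule 009 should end up as something roughly equivalent to the following iptables rule (shown here only for inspection; the puppetlabs-firewall module manages it for you):

```
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514
```

After a Puppet run you can confirm it with `sudo iptables -t nat -S PREROUTING`.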

Apache Reverse Proxy with TLS

Install Apache as a reverse proxy for Graylog.

class { 'apache':
  default_vhost     => false,
  default_ssl_vhost => false,
  default_mods      => false,
  mpm_module        => 'prefork',
  server_signature  => 'Off',
  server_tokens     => 'Prod',
  trace_enable      => 'Off',
}
include apache::mod::proxy
include apache::mod::proxy_http
include apache::mod::rewrite
include apache::mod::ssl
include apache::mod::headers

apache::vhost { 'graylog_http':
  port           => 80,
  servername     => 'syslog.hl.local',
  rewrites       => [
    { rewrite_rule => ['(.*) https://%{HTTP_HOST}%{REQUEST_URI}'],
      rewrite_cond => ['%{HTTPS} off'],
    },
  ],
  docroot        => false,
  manage_docroot => false,
  suphp_engine   => 'off',
}
apache::vhost { 'graylog_https':
  port                 => 443,
  servername           => 'syslog.hl.local',
  docroot              => false,
  manage_docroot       => false,
  suphp_engine         => 'off',
  ssl                  => true,
  ssl_cert             => '/etc/pki/tls/certs/hl.crt',
  ssl_key              => '/etc/pki/tls/private/hl.key',
  ssl_protocol         => ['all', '-SSLv2', '-SSLv3'],
  ssl_cipher           => 'HIGH:!aNULL:!MD5:!RC4',
  ssl_honorcipherorder => 'On',
  ## Pass a string of custom configuration directives
  custom_fragment      => '
    ProxyRequests Off
    <Proxy *>
      Require ip 10.11.1.0/24
    </Proxy>
    <Location />
      RequestHeader set X-Graylog-Server-URL "https://syslog.hl.local/api/"
      ProxyPass http://syslog.hl.local:9000/
      ProxyPassReverse http://syslog.hl.local:9000/
    </Location>
  ',
}
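A typo in an OpenSSL cipher string (a missing colon between exclusions, say) tends to fail silently until clients start negotiating, so it is worth expanding the string locally before deploying. The exact cipher list printed depends on your OpenSSL version:

```shell
# Expand the cipher string; openssl should exit non-zero on a malformed string.
openssl ciphers 'HIGH:!aNULL:!MD5:!RC4' > /dev/null && echo "cipher string OK"
```

Drop the `> /dev/null` to see the full list of ciphers the string enables.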

Configure Log Forwarding on All Servers

We want to configure all homelab servers to forward syslog to Graylog.

This needs to go into the main environment manifest file /etc/puppetlabs/code/environments/homelab/manifests/site.pp so that the configuration is applied to all servers.

class { 'rsyslog::client':
  log_remote            => true,
  log_local             => true,
  remote_servers        => false,
  server                => 'syslog.hl.local',
  port                  => '1514',
  remote_type           => 'udp',
  remote_forward_format => 'RSYSLOG_SyslogProtocol23Format',
}
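With those parameters, the module should render a forwarding rule in the client’s rsyslog configuration roughly like the one below (a single @ means UDP; @@ would mean TCP):

```
*.* @syslog.hl.local:1514;RSYSLOG_SyslogProtocol23Format
```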

The result should be that all servers forward their logs to Graylog.


Your next Linux computer? A Samsung mobile phone — The Ultimate Linux Newbie Guide

The Samsung DeX Dock with the Samsung S8 will shortly be able to run stock Linux distributions.

Yep, you can soon use a Samsung mobile phone as a fully fledged Linux PC running Ubuntu. Sit it on the Samsung DeX dock, attached to a monitor, keyboard and mouse, et voila!

Samsung says it’s aimed at developers, but there’s no reason why it shouldn’t be suitable for general use. Samsung released this information at its developer conference in San Francisco last week. The new app, called “Linux on Galaxy”, works best when paired with a DeX station. Apparently it can run on a range of smartphones compatible with the DeX station, including the Samsung Galaxy S8, S8 Plus, or the latest Note 8.

If you want to sign up to receive a notification from Samsung when the project goes live, visit this link: https://seap.samsung.com/linux-on-galaxy

NB: This isn’t the first time Linux has been available in a similar setup. Recently, we covered how to run Linux on your Android phone with or without root access. From there, you can mirror/cast your mobile screen to a TV or monitor with HDMI, and use a Bluetooth keyboard and mouse.

