Think Global: How to Overcome Cultural Communication Challenges | Linux.com

In today’s workplace, our colleagues may not be located in the same office, city, or even country. A growing number of tech companies have a global workforce comprised of employees with varied experiences and perspectives. This diversity allows companies to compete in the rapidly evolving technological environment.

But geographically dispersed teams can face challenges. Managing and maintaining high-performing development teams is difficult even when the members are co-located; when team members come from different backgrounds and locations, that makes it even harder. Communication can deteriorate, misunderstandings can happen, and teams may stop trusting each other—all of which can affect the success of the company.

What factors can cause confusion in global communication? In her book, “The Culture Map,” Erin Meyer presents eight scales into which all global cultures fit. We can use these scales to improve our relationships with international colleagues. She identifies the United States as a very low-context culture in the communication scale. In contrast, Japan is identified as a high-context culture.

Read more at OpenSource.com

Source

Love Microsoft Teams? Love Linux? Then you won’t love this

Learn to love the browser instead

Microsoft loves Linux. Unless you are a Linux user who happens to want to use Teams. In that case, you probably aren’t feeling the love quite so much.

Users of that other collaboration platform, Slack, have enjoyed a Linux client for some time. Teams users, on the other hand, have had to make do with a browser experience that is often less than ideal. Hence the fifth most requested Teams feature in Microsoft’s UserVoice forum is a Linux client.

The request was made nearly two years ago and, at time of writing, has attracted 5,376 votes and 37 pages of comments. Yesterday, however, Linux users hoping to become first class citizens were dealt a cruel blow. A Microsoft representative has admitted that no, there is no dedicated engineering resource working on a Linux client.

The omission is an odd one. Microsoft has a Skype client for Linux, so a similar client for Teams should not be beyond the imagination of the Windows giant. Particularly given its much-publicised love for the Linux platform.

Using Teams through a browser on Linux is a limiting experience. Video conferencing, calling and desktop sharing are problematic, if not impossible. In the current documentation for Teams, Microsoft states that Meetings is supported on Chrome 59 or later, but Firefox users are out of luck for Calling or Meetings and should download a desktop client. Oh, or use Edge.

Neither of the latter two options is really viable for Linux users.

One enterprising Teams enthusiast has published a method of coaxing video calls and presentations into life on Linux via Chrome or Chromium, but the process is a little convoluted and effectively has the browser pretend it is actually Edge in order to prevent Teams from ignoring it.
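
In practice the trick amounts to launching Chrome or Chromium with a spoofed user-agent string so Teams thinks it is talking to Edge. A rough sketch (the exact Edge UA string below is illustrative, not taken from the published guide):

chromium --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362" https://teams.microsoft.com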

It is all rather unsatisfactory, and we’ve contacted Microsoft to get more detail on its decision.

Linux has a vanishingly small share of the desktop market compared to Windows. However, this has not stopped Microsoft releasing developer tools such as Visual Studio Code on the platform. In the light of that, it seems an odd call to exclude those same developers from the full fat version of Redmond’s collaboration vision. ®

Source

Ubuntu 18.10 Is A Nice Upgrade For Radeon Gamers, Especially For Steam VR

Among the changes in Ubuntu 18.10 are the latest stable Linux kernel, a significant Mesa upgrade, and the latest X.Org Server. These component upgrades make for a better Linux gaming experience, particularly if you are using a modern AMD Radeon graphics card. Here are some benchmark results, along with a look at whether it's worthwhile switching to Linux 4.19 and Mesa 18.3-dev on Ubuntu 18.10.

 

 

The move from Linux 4.15 on Ubuntu 18.04 LTS to Linux 4.18 with Ubuntu 18.10 is significant due to the many AMDGPU DRM improvements landed during that time, as covered in numerous Phoronix articles over the summer. The transition from Mesa 18.0.5 to Mesa 18.2.2 is also very significant, especially for the RADV Vulkan driver, which picked up performance optimizations, new Vulkan extensions, and numerous fixes. The RadeonSI Gallium3D driver has received improvements this year too, though it was already quite mature in Mesa 18.0, so if you are solely using OpenGL applications/games the impact is likely much smaller.

The largest underlying upgrade is from X.Org Server 1.19.6 to X.Org Server 1.20.1. X.Org Server 1.20 is a very big update given its lengthy development cycle. There are DRI3 additions, server-side GLVND, many XWayland improvements, a lot of GLAMOR 2D optimizations, and more.
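
If you want to confirm which component versions are actually in use after upgrading, a quick check from a terminal could look like this (glxinfo comes from the mesa-utils package; the X.Org log path may differ if the server runs rootless):

uname -r
glxinfo | grep "OpenGL version"
grep "X.Org X Server" /var/log/Xorg.0.log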

 

 

For Radeon gamers, the X.Org Server 1.20 adoption is most significant if you have an HTC Vive headset and utilize Steam VR for virtual reality gaming… X.Org Server 1.20 paired with Linux 4.18 has the necessary bits around RandR leasing, non-desktop quirk handling for VR headsets, and other plumbing/infrastructure work done by Keith Packard over the past nearly two years to improve VR support on Linux. I'll have some fresh Linux VR tests using Ubuntu 18.10 coming up soon on Phoronix.

Given the numerous upgrades, I ran some benchmarks with Radeon RX 580 and RX Vega 64 graphics cards to show the impact of the upgrade. The configurations tested were:

– Ubuntu 18.04.1 LTS with its stock Linux 4.15 + X.Org Server 1.19.6 + Mesa 18.0.5 built against LLVM 6.0.

– Ubuntu 18.10 with its default Linux 4.18 kernel + X.Org Server 1.20.1 + Mesa 18.2.2 built against LLVM 7.0.

– Ubuntu 18.10 when upgrading to the Linux 4.19 Git kernel using the Ubuntu Mainline Kernel PPA and also using the Oibaf PPA to switch to Mesa 18.3-devel Git built against LLVM 7.0 for the latest open-source graphics experience.
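
For reference, moving an Ubuntu 18.10 install onto that third configuration roughly comes down to the following (a sketch; the mainline kernel .deb packages are downloaded manually from kernel.ubuntu.com/~kernel-ppa/mainline/ before the dpkg step):

sudo add-apt-repository ppa:oibaf/graphics-drivers
sudo apt update && sudo apt full-upgrade
sudo dpkg -i linux-*.deb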

Various OpenGL and Vulkan games were benchmarked using the Phoronix Test Suite.
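
For anyone wanting to run a comparable set of tests, the invocation takes the form below (the test profile names are just examples, not the exact selection used here):

phoronix-test-suite benchmark pts/xonotic pts/dota2
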
Source

Linux Today – ShieldX Integrates Intention Engine Into Elastic Security Platform

Oct 17, 2018, 09:00

(Other stories by Sean Michael Kerner)

ShieldX announced its new Elastic Security Platform on Oct. 17, providing organizations with Docker container-based data center security that uses advanced machine learning to determine intent.

At the core of the Elastic Security Platform is a technology that ShieldX calls the Adaptive Intention Engine, which automatically determines the right policy and approach for security controls across multicloud environments. The intent-based security model can provide network microsegmentation, firewall and malware detection capabilities, among other features.

Complete Story

Source

How to Prepare for (and Ace) Your Next Technical Interview

Typically, companies hiring Linux and other IT professionals aren't following any standardized interview process. The whole operation is hacked together like Frankenstein's monster. It's complicated and stressful for all involved – and it's what you've got to suffer through if you want a job in IT.

This article will help you get ready to master your next technical interview and keep your cool in the process. It’s kind of disappointing that an industry known for innovation is still mired in old-school, haphazard hiring practices, but it’s what we’re stuck with for now. So read on to learn how to prepare yourself mentally so you can show up as your best self, wow the hiring manager, and keep stress at bay – not just at the interview, but during the entire hiring process.

We’re jumping into this assuming that you’re at the beginning stages, but you’ve been in touch with the hiring manager at the company. You’ve sent in your resume, and now we’re moving on to your first direct interaction.

The Phone Screen Interview

Once the company receives your resume and sees that you meet the minimum qualifications, they’re likely going to request a phone screening. This isn’t a technical interview. Instead, they just want to make sure you’re not crazy. It’ll last about 15 to 20 minutes, and the questions will be along the lines of:

  • Why are you looking for a new job?
  • Why did you apply here?
  • What about the job description caught your eye?
  • What kind of work have you been doing?

Seriously, all they’re doing is seeing if you can speak without sounding like a psycho. Don’t ask stupid questions like “Do you have free beer on Fridays?” Don’t go super-deep, either. Just answer their questions in a coherent manner and you should be just fine.

There’s a chance they may ask you a few tech-oriented questions, but it’s going to be more along the lines of, “I see you’ve worked with Red Hat Enterprise Linux. Tell me about it.”

Really, they’re just weeding people out at this stage, so this is not where you have to super-wow anyone. Prove that you’re not totally crazy, prove that you’re more or less capable of talking to people and holding a job, and you’ll be fine.

If you want to prepare, just make sure you have good answers to the questions above, why you’re looking for a new job, and what you’ve been up to.

The First Technical Screen

After the phone interview, you might have a technical phone interview, or be assigned a take-home project to check your technical chops.

The goal for the hiring manager is to get the number of interviews down to a reasonable level. After reading through everyone's resumes and doing the first phone interview, they've gotten rid of about 90 percent of the clunkers, but there are still too many candidates to bring in for an interview. The tech screen will usually consist of pretty straightforward, simple questions to see whether or not you know what you said you know on your resume.

Typically, they’ll say, “I see on your resume you worked at X, Y, and Z.” Then you can say, “Yes, that’s where I did such-and-such and was in charge of this-and-that, and I did such-and-such as well.” Just give a basic overview. Don’t get too technical unless they ask for more detail.

If they do ask more questions, that’s a great sign. The deeper questions they ask, the better. At this point, it can be good to clarify the technical knowledge of the person you’re speaking to so you know if you’re talking to an actual technical person, or a receptionist who is just going down a checklist.

But be careful – it can be touchy. Do NOT assume that just because you’re talking to a woman, she isn’t technical. More than once, applicants have totally sunk their chances of getting hired because they were a total sexist jerk by assuming the person they were speaking to was a receptionist or administrative assistant, and not the platform engineer.

So if they don’t say, “Hey, this is Emily, I’m a senior platform engineer,” you can ask by phrasing it like, “I didn’t catch your position… what is your role?” Determine who you are talking to, no matter what they look or sound like. Be polite and you’ll be okay.

A take-home tech project will usually be a small programming project where you’ll need to do some scripting, or you’ll use some general purpose programming language of your choice to solve some problem. You might have to do some log analysis, for instance, and then you submit your code. (If you need help with shell scripting, check out my course on it here.)

To really show your best side, comment on your code and also provide a README, even if it’s just a couple of paragraphs long. It doesn’t have to be extensive or complicated. Just say something like, “This function does X. It takes this type of input and produces this type of output.”

If your project is timed and you don’t have time to write test cases or handle people putting in bad input, you can tell them how it will fail. You can say, “I didn’t have time to do this, but if your input is a string it’s going to fail this way.” Make sure to tell them what you weren’t able to test, otherwise they’re just going to assume that you didn’t know that you needed to test.
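
As a rough illustration of the kind of commenting and "known limitations" notes described above, a take-home log-analysis script might look something like this (the log path and format are invented for the example):

#!/bin/bash
# count_errors.sh - counts ERROR lines per day in an application log.
# Input:  a log file whose lines start with "YYYY-MM-DD HH:MM:SS LEVEL ...".
# Output: "<date> <count>" pairs, sorted by date.
# Known limitation (untested): lines that don't match the expected format are silently skipped.
logfile="${1:?Usage: $0 /path/to/app.log}"
awk '$3 == "ERROR" { count[$1]++ } END { for (d in count) print d, count[d] }' "$logfile" | sort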

You can do the minimum, or you can use these projects to show that you’re a professional and you care about the quality of what you’re working on. (Hint: Don’t do the bare minimum.)

Whatever you hand in, be ready to talk through it afterwards. Don’t just copy the answer off the Internet. Be prepared to explain why you approached it the way you did. If you just copy the answer, you’re just going to look like a bigger jerk later when you get caught. Even if it’s too hard, just do your best and be upfront about it. The biggest thing is to be honest and act like a professional. You never know when you are going to run into someone again, so you don’t want to burn any bridges by lying yourself into a job.

The First In-Person Interview

After you make it past the first tech review, you'll be invited in to meet people face-to-face. This is make-or-break time, and you need to be prepared. But don't try to learn a totally new programming language or cram for your in-person meetings. You just aren't going to be able to do it. You might think you're pulling it off, but people who are experienced in that area will know immediately that your knowledge is about an inch deep.

But while you don’t want to cram, you still can review things that you should know because of your previous roles.

Let's say you've been using Puppet for configuration management for a couple of years or you've worked with Puppet at home, but you know they use Ansible for configuration management in the role you are applying for. I would definitely read over some Ansible documentation and look at a few examples, but your wheelhouse should be Puppet, so you had better know Puppet. If they ask you to write an Ansible playbook, you can say, "I don't really know how I would do it in Ansible, but I can show you how I do it in Puppet…" The main thing is to know what you SHOULD know and sound confident that you can learn what you need to learn for the new role.

Confidence is really key. It's easy to stress out and get anxious. But remind yourself it's not black or white. It's rarely a case where you will totally pass or totally fail. If you don't know something, just talk through it. You can be totally upfront and say, "I'm kind of stressed and I'm drawing a blank," and then start breaking down the issue. Like, "OK, I don't know how that config works, but I know that this has to happen so the request can go from A to B, and I know it's going to talk to these three servers…" Then you're giving them information, and you're showing them that even under duress you have a high-level understanding, you've clearly worked with it before, and you're comfortable with it. That can be just as important as having gotten the answer "right." And it can be even more important to show them your thought process.

Even the more technical aspects like whiteboarding or live coding are still just as much about seeing your thought process as they are about seeing your familiarity with the tools and the language and your ability to get the right answer in a reasonable amount of time. After all, standing up in front of three or four people and working through a problem in real time on a whiteboard with a marker that probably doesn't work… that's not really a natural situation to put an engineer in. But even so, chances are you're going to be in that situation, so be ready. And remember to talk through it and keep communicating.

It’s better to get it slightly wrong and be communicating about your thought process the whole time than it is to stand there silently for five minutes and then arrive at the right answer. After all, they’re not hiring you to solve that one specific problem on the board; they’re hiring you to solve a whole general class of problems that may or may not be like that. They want to know how you think, and the only way they’re going to know that is if you tell them what you’re thinking as you work through it.

Taking Breaks During the Interview

At some point during your in-person interviews, you’re going to get a break or two. You might think, “They’re so nice, they’re letting me have some downtime,” or “Awesome, free lunch.” Don’t get fooled! You are STILL being evaluated during this time, often for personality or behavior.

There are plenty of ways for the interview to go wrong even when you’re not standing at the whiteboard. For instance, if you’re offered something to eat or drink, be polite about it. Say, “No thanks,” or “Yes please.” But don’t insult the company complaining about their drink choices or grumbling because they only have Diet Coke when you would much prefer Cherry Coke Zero.

As soon as you step out of your car and into the front door of the company, you’re being judged. You’re on stage, and you’re on until you walk out the door. If you’re sitting in the waiting room, don’t be doing anything out of the ordinary. People will notice if you stick your gum under your chair or eat all the good chocolates out of the bowl on the reception desk and leave the wrappers on the floor.

Don't give them any excuse to think you're anything but awesome. (Because you ARE awesome, right?)

Being Interviewed by an Executive

Depending on how big the company is, you'll likely meet with a director, VP, or even the COO, CEO, or CTO. This is the high-level interview where you're going to show them that you understand what the company does. If you don't already know, make sure you at least have a basic understanding of the company's purpose before you start interviewing.

This is also a great way to show your interest in them. You can say, “I read that you do XYZ. Can you tell me how that works?” That gives you a chance to interview them a little bit, too.

Technical Interviewing Tips

Now that you know what to expect during the process, here are some additional tips to make sure you’re the most prepared and successful you can be:

Don’t be afraid to ask about the process. Sure, this article has given you a great outline, but it will vary a bit from company to company. Asking how the individual company does things will help you be better prepared. You can also ask when you can expect to hear back at each stage. It shows you’re interested in moving forward.

At least a quarter of the interview process should be you interviewing the company. Don’t be afraid to ask questions like:

  • Can I see where I’m going to work?
  • Can I meet some of the people that I’m going to work with?
  • What is the standard vacation policy?

If something's important to you, ask!

Word of warning: You don’t want to start out your first interview by asking how many vacation days you get, or if it’s okay that you come in at noon three times a week. But at some point in the process, this will become a negotiation, and you want answers to the questions that are important to you.

Don’t fall for the ping-pong table. Having a cool office is nice and all, but getting unlimited Monster drinks and access to bean bag chairs doesn’t make up for getting paid $10K under market.

Make sure you meet your boss. And by “boss,” that means the person who’s going to determine how much you’re going to get paid. If you don’t meet that person at some point in the interview process, that’s a warning sign. It doesn’t mean you should jump ship, but it is something worth asking about. Sometimes it can indicate that they’re in so much turmoil they don’t know who you’re going to report to. Sometimes it means things are in such flux that the org chart changes from one day to the next. And sometimes it just means it’s an oversight. But it’s worth asking about.

Prepare your questions for the interviewer. The worst thing you can do when someone asks you if you have any questions is to say, “No.” Instead, create a written list of questions about the company. Even if you have five or six interviews in a day, you can ask similar versions of the same question to each of your interviewers. Contrary to what you might think, asking questions actually shows your intelligence level.

  • How much technical debt is there?
  • How much freedom do you have to choose the solutions you implement?
  • What's the worst part of your job? (You probably won't get a 100 percent truthful answer, but you'll get a glimpse!)

Confidently Tackle Your Next IT Interview!

Now that you know what to expect, hopefully you’ll remember that the most important part of the interview process is confidence. Knowing the “right” answer to any question is great, but it’s even more important to show that you’re an incredible, intelligent person who would be awesome to work with.

Source

Linux Scoop — Xubuntu 18.04 LTS

Xubuntu 18.04 LTS – See What’s New

Xubuntu 18.04 LTS is the latest release of Xubuntu. This release features the latest version of Xfce 4.12 as the default desktop, including updated Xfce components.

In Xubuntu 18.04, various GNOME apps have been swapped out with
corresponding MATE apps, including Evince with Atril, File Roller with
Engrampa, and GNOME Calculator with MATE Calculator. The new
xfce4-notifyd panel plugin is included, allowing you to easily toggle
“Do Not Disturb” mode for notifications as well as view missed
notifications.

Xubuntu 18.04 LTS also comes with an updated Greybird GTK+ theme that
includes a new dark style, better HiDPI support, greater consistency
between GTK+ 2 and GTK+ 3 apps, GTK+ 3 styles for Google Chrome and
Chromium web browsers, smaller switches, and improved scales. However,
the GTK Theme Configuration tool was removed and it’s no longer possible
to override colors in themes.

Download Xubuntu 18.04
Source

How to install Zabbix agent on Windows?

The Zabbix agent is installed on remote systems that need to be monitored through the Zabbix server. The agent collects resource utilization and application data on the client system and provides this information to the Zabbix server on request.

Install the Zabbix Agent Service on a Windows System

Step 1 – Download Agent Source Code

Download the latest Windows Zabbix agent from the official Zabbix site, or use the link below to download Zabbix agent 3.0.0.

After downloading the zipped archive of the Zabbix agent, extract its contents under the c:\zabbix directory.

Step 2 – Create Agent Configuration File

Now make a copy of the sample configuration file c:\zabbix\conf\zabbix_agentd.win.conf to create the Zabbix agent configuration file at c:\zabbix\zabbix_agentd.conf. Then edit the configuration and update the following values.

# Server=[Zabbix server IP]
# Hostname=[Hostname of client system]

Server=192.168.1.26
ServerActive=192.168.1.26
Hostname=linuxforfreshers.com

Step 3 – Install Zabbix Agent as a Windows Service

Let's install the Zabbix agent as a Windows service by executing the following command from the command line.

c:\zabbix\bin\win64> zabbix_agentd.exe -c c:\zabbix\zabbix_agentd.conf --install

zabbix_agentd.exe [9084]: service [Zabbix Agent] installed successfully

zabbix_agentd.exe [9084]: event source [Zabbix Agent] installed successfully

Step 4 – Start/Stop Agent Service

Use the following commands to start and stop the Zabbix agent service from the command line:

c:\zabbix\bin\win64> zabbix_agentd.exe --start

zabbix_agentd.exe [7048]: service [Zabbix Agent] started successfully
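
While the agent is running, you can optionally confirm from the Zabbix server that it is reachable (a quick sanity check, assuming the zabbix-get utility is installed on the server; 192.168.1.50 below is a stand-in for the Windows client's IP address):

zabbix_get -s 192.168.1.50 -k agent.ping
1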

c:\zabbix\bin\win64> zabbix_agentd.exe --stop

zabbix_agentd.exe [9608]: service [Zabbix Agent] stopped successfully

Uninstalling agent

c:\zabbix\bin\win64> zabbix_agentd.exe -c c:\zabbix\zabbix_agentd.conf --uninstall

Source

OWASP Security Shepherd – Cross Site Scripting One Solution – LSB – ls /blog

Welcome back to LSB my budding hackers. Today’s lesson is about Cross Site Scripting (Or XSS). Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site.

So our task today is to get an alert on the web page to show that it’s vulnerable to this type of attack. On the web page we are presented with a search box and that is all we have for this puzzle.

A common payload that hackers use to find out if a page is vulnerable to XSS is <script>alert("XSS")</script>. This small bit of code asks the web page to show us an alert prompt so that we know the page is vulnerable. Let's try it. Enter the code in the search box and click on the Get This User button.

This worked the first time! An alert box pops up, confirming that the JavaScript was injected into the page.

How to Protect Yourself

The primary defenses against XSS are described in the OWASP XSS Prevention Cheat Sheet.

Also, it's crucial that you turn off HTTP TRACE support on all web servers. An attacker can steal cookie data via JavaScript even when document.cookie is disabled or not supported by the client. This attack is mounted when a user posts a malicious script to a forum; when another user clicks the link, an asynchronous HTTP TRACE call is triggered that collects the user's cookie information from the server and sends it to another malicious server, where the attacker collects it to mount a session hijacking attack. This is easily mitigated by removing support for HTTP TRACE on all web servers.
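
A quick way to check whether a server still answers TRACE requests is to send one with curl (target.example.com is a placeholder); on Apache, TRACE is then switched off with the TraceEnable off directive:

curl -s -i -X TRACE http://target.example.com/ | head -n 1

A 200 response echoing the request back means TRACE is still enabled; a 405 Method Not Allowed (or similar) means it has been disabled.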

The OWASP ESAPI project has produced a set of reusable security components in several languages, including validation and escaping routines to prevent parameter tampering and the injection of XSS attacks. In addition, the OWASP WebGoat Project training application has lessons on Cross-Site Scripting and data encoding.

Thanks for reading, and don't forget to like, comment and, of course, follow our blog. Until next time.

QuBits 2018-09-12

Source

How To Reset A MySQL Root Password

MySQL has its own 'root' password, independent of the system root password; this is a guide on how to reset the MySQL root password. To reset it you will need root access on the server that hosts the MySQL instance. The same process applies to Percona and MariaDB servers as well; the only differences will be the stop and start commands (the service is mariadb for MariaDB).

If you already know the root password, you can also connect directly to MySQL and reset the password that way. This can be used for resetting any user's MySQL password as well.

Connect to MySQL:

mysql -uroot -p

Select the mysql database:

use mysql;

Update the root password:

update user set password=PASSWORD("newpass") where User='root';

Load the new privileges:

flush privileges;

Exit MySQL:

quit;

That's it for resetting a user password in MySQL.

This covers how to reset the mysql root password if you do not know the current password.

Stop MySQL

First you will need to stop the mysql service

On CentOS 6:

/etc/init.d/mysql stop

On Centos/RHEL 7:

systemctl stop mysql

Start mysqld_safe

You will then want to run mysqld_safe with the --skip-grant-tables option to bypass password checks in MySQL:

mysqld_safe --skip-grant-tables &

Reset MySQL Root Password

You will now want to connect to MySQL as root:

mysql -uroot

Then use the mysql database:

use mysql;

Set a new password:

update user set password=PASSWORD("newpass") where User='root';

You will want to replace newpass with the password you want to use.

Flush the privileges:

flush privileges;

Exit MySQL:

exit;

Restart MySQL

On CentOS 6:

/etc/init.d/mysql restart

On CentOS 7:

systemctl restart mysql

Test New Root MySQL Password:

mysql -u root -p

You should now be able to connect successfully to mysql as root using the new password you set.
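
One caveat: on MySQL 5.7.6 and newer, the mysql.user table no longer has a password column, so the UPDATE statements above will not work. Assuming such a version, a sketch of the equivalent reset (run while mysqld_safe --skip-grant-tables is still active) would be:

mysql -uroot -e "FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'newpass';"

FLUSH PRIVILEGES is needed first because account-management statements are disabled while the grant tables are skipped.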

Sep 4, 2017 LinuxAdmin.io

Source

Katello: Working with Puppet Modules and Creating the Main Manifest | Lisenet.com :: Linux | Security

Working with Katello – part 4. We're going to install Puppet modules, create a custom firewall module, define some rules, configure Puppet to serve files from a custom location, and declare the site manifest.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have Katello installed on a CentOS 7 server:

katello.hl.local (10.11.1.4) – see here for installation instructions

See the image below to identify the homelab part this article applies to.

Puppet Configuration

In the previous article we created a new environment called "homelab". What we haven't done yet is create a Puppet folder structure.

Folder Structure

Let us go ahead and create a folder structure:

# mkdir -p /etc/puppetlabs/code/environments/homelab/{manifests,modules}

Create the main manifest and set appropriate group permissions:

# touch /etc/puppetlabs/code/environments/homelab/manifests/site.pp
# chgrp puppet /etc/puppetlabs/code/environments/homelab/manifests/site.pp
# chmod 0640 /etc/puppetlabs/code/environments/homelab/manifests/site.pp

We can now go ahead and start installing Puppet modules.

Puppet Modules

Below is a list of Puppet modules that we have installed and are going to use. It may look like a long list at first, but it really isn’t. Some of the modules are installed as dependencies.

We can see modules for SELinux, Linux security limits, kernel tuning (sysctl), as well as OpenLDAP and sssd, Apache, WordPress and MySQL, Corosync and NFS, Zabbix and SNMP.

There is the whole Graylog stack with Java/MongoDB/Elasticsearch, also Keepalived and HAProxy.

# puppet module list --environment homelab
/etc/puppetlabs/code/environments/homelab/modules
├── arioch-keepalived (v1.2.5)
├── camptocamp-openldap (v1.16.1)
├── camptocamp-systemd (v1.1.1)
├── derdanne-nfs (v2.0.7)
├── elastic-elasticsearch (v6.2.1)
├── graylog-graylog (v0.6.0)
├── herculesteam-augeasproviders_core (v2.1.4)
├── herculesteam-augeasproviders_shellvar (v2.2.2)
├── hunner-wordpress (v1.0.0)
├── lisenet-lisenet_firewall (v1.0.0)
├── puppet-archive (v2.3.0)
├── puppet-corosync (v6.0.0)
├── puppet-mongodb (v2.1.0)
├── puppet-selinux (v1.5.2)
├── puppet-staging (v3.1.0)
├── puppet-zabbix (v6.2.0)
├── puppetlabs-accounts (v1.3.0)
├── puppetlabs-apache (v2.3.1)
├── puppetlabs-apt (v4.5.1)
├── puppetlabs-concat (v2.2.1)
├── puppetlabs-firewall (v1.12.0)
├── puppetlabs-haproxy (v2.1.0)
├── puppetlabs-java (v2.4.0)
├── puppetlabs-mysql (v5.3.0)
├── puppetlabs-ntp (v7.1.1)
├── puppetlabs-pe_gem (v0.2.0)
├── puppetlabs-postgresql (v5.3.0)
├── puppetlabs-ruby (v1.0.0)
├── puppetlabs-stdlib (v4.24.0)
├── puppetlabs-translate (v1.1.0)
├── razorsedge-snmp (v3.9.0)
├── richardc-datacat (v0.6.2)
├── saz-limits (v3.0.2)
├── saz-rsyslog (v5.0.0)
├── saz-ssh (v3.0.1)
├── saz-sudo (v5.0.0)
├── sgnl05-sssd (v2.7.0)
└── thias-sysctl (v1.0.6)
/etc/puppetlabs/code/environments/common (no modules installed)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)
/usr/share/puppet/modules (no modules installed)

The lisenet-lisenet_firewall module is the one we’ve generated ourselves. We’ll discuss it shortly.

Now, how do we actually install modules into our homelab environment? The default Puppet environment is production (see the previous article), that’s where all modules go by default. In order to install them into the homelab environment, we can define the installation command with the homelab environment specified:

# MY_CMD="puppet module install --environment homelab"

To install modules, we can now use something like this:

# $MY_CMD puppetlabs-firewall ;
$MY_CMD puppetlabs-accounts ;
$MY_CMD puppetlabs-ntp ;
$MY_CMD puppet-selinux ;
$MY_CMD saz-ssh ;
$MY_CMD saz-sudo ;
$MY_CMD saz-limits ;
$MY_CMD thias-sysctl

This isn’t a full list of modules, but rather the one required by our main manifest (see the main manifest paragraph below). We could also loop the module list if we wanted to install everything in one go.
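
For example, such a loop (a quick sketch using the $MY_CMD variable defined above) could look like:

# for m in puppetlabs-firewall puppetlabs-accounts puppetlabs-ntp puppet-selinux saz-ssh saz-sudo saz-limits thias-sysctl; do $MY_CMD "$m"; done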

Let us go back to the firewall module. We want to be able to pass custom firewall data through the Katello WebUI by using a smart class parameter. Create a new firewall module:

# cd /etc/puppetlabs/code/environments/homelab/modules
# puppet module generate lisenet-lisenet_firewall

Create manifests:

# touch ./lisenet_firewall/manifests/{pre.pp,post.pp}

All good, let us create the rules. Here is the content of the file pre.pp (only allow ICMP and SSH by default):

class lisenet_firewall::pre {
  Firewall {
    require => undef,
  }
  firewall { '000 drop all IPv6':
    proto    => 'all',
    action   => 'drop',
    provider => 'ip6tables',
  }->
  firewall { '001 allow all to lo interface':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }->
  firewall { '002 reject local traffic not on loopback interface':
    iniface     => '! lo',
    proto       => 'all',
    destination => '127.0.0.1/8',
    action      => 'reject',
  }->
  firewall { '003 allow all ICMP':
    proto  => 'icmp',
    action => 'accept',
  }->
  firewall { '004 allow related established rules':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }->
  firewall { '005 allow SSH':
    proto  => 'tcp',
    source => '10.0.0.0/8',
    state  => ['NEW'],
    dport  => '22',
    action => 'accept',
  }
}

Here is the content of the file post.pp:

class lisenet_firewall::post {
  firewall { '999 drop all':
    proto  => 'all',
    action => 'drop',
    before => undef,
  }
}

The main module manifest init.pp:

class lisenet_firewall($firewall_data = false) {
  include lisenet_firewall::pre
  include lisenet_firewall::post

  resources { "firewall":
    purge => true
  }

  Firewall {
    before  => Class['lisenet_firewall::post'],
    require => Class['lisenet_firewall::pre'],
  }

  if $firewall_data != false {
    create_resources('firewall', $firewall_data)
  }
}

One other thing we have to take care of after installing modules is SELinux context:

# restorecon -Rv /etc/puppetlabs/code/environments/homelab/

At this stage Katello has no knowledge of our newly installed Puppet modules. We have to go to the Katello WebUI, navigate to:

Configure > Puppet Environments > Import environments from katello.hl.local

This will import the modules into the homelab environment. Do the same for Puppet classes:

Configure > Puppet Classes > Import environments from katello.hl.local

This will import the lisenet_firewall class.

Strangely, I couldn’t find a Hammer command to perform the imports above, chances are I may have overlooked something. If you know how to do that with the Hammer, then let me know in the comments section.

Configure lisenet_firewall Smart Class Parameter

Open Katello WebUI, navigate to:

Configure > Puppet Classes

Find the class lisenet_firewall, edit the Smart Class Parameter, and set the $firewall_data key param type to yaml. This will allow us to pass in any additional firewall rules via YAML, e.g.:

"007 accept TCP Apache requests":
  dport:
    - "80"
    - "443"
  proto: tcp
  source: "10.0.0.0/8"
  action: accept
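
Once the Puppet agent has applied the class on a host, a quick way to confirm that such a rule made it into iptables (the puppetlabs-firewall module stores the rule name as an iptables comment) is:

# iptables -S | grep "007 accept TCP Apache requests"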

See the image below to get the idea.

The next step is to assign the lisenet_firewall class to the host group that we've created previously, which in turn will apply the default firewall rules defined in the manifests pre.pp and post.pp, as well as allow us to add new firewall rules directly to any host (which is a member of the group) via YAML.

We can view the host list to get a host ID:

# hammer host list

Then verify that the parameter has been applied, e.g.:

# hammer host sc-params --host-id "32"
---|---------------|---------------|----------|-----------------
ID | PARAMETER     | DEFAULT VALUE | OVERRIDE | PUPPET CLASS
---|---------------|---------------|----------|-----------------
58 | firewall_data |               | true     | lisenet_firewall
---|---------------|---------------|----------|-----------------

Serving Files from a Custom Location

Puppet automatically serves files from the files directory of every module. This does the job for the most part; however, when working in a homelab environment, we prefer to have a custom mount point where we can store all files.

The file fileserver.conf configures custom static mount points for Puppet’s file server. If custom mount points are present, file resources can access them with their source attributes.

Create a custom directory to serve files from:

# mkdir /etc/puppetlabs/code/environments/homelab/homelab_files

To create a custom mount point, open the file /etc/puppetlabs/puppet/fileserver.conf and add the following:

[homelab_files]
path /etc/puppetlabs/code/environments/homelab/homelab_files
allow *

As a result, files in the path directory will be served at puppet:///homelab_files/.

There are a couple of files that we want to create and put in the directory straight away, as these will be used by the main manifest.

We’ll strive to use encryption as much as possible, therefore we’ll need to have a TLS/SSL certificate. Let us go ahead and generate a self-signed one. We want to create a wildcard certificate so that we can use it with any homelab service, therefore when asked for a Common Name, type *.hl.local.

# cd /etc/puppetlabs/code/environments/homelab/homelab_files
# DOMAIN=hl
# openssl genrsa -out "$DOMAIN".key 2048 && chmod 0600 "$DOMAIN".key
# openssl req -new -sha256 -key "$DOMAIN".key -out "$DOMAIN".csr
# openssl x509 -req -days 1825 -sha256 -in "$DOMAIN".csr \
  -signkey "$DOMAIN".key -out "$DOMAIN".crt
# openssl pkcs8 -topk8 -inform pem -in "$DOMAIN".key \
  -outform pem -nocrypt -out "$DOMAIN".pem

Ensure that the files have been created:

# ls
hl.crt hl.csr hl.key hl.pem

Verify the certificate:

# openssl x509 -in hl.crt -text -noout|grep CN
Issuer: C=GB, L=Birmingham, O=HomeLab, CN=*.hl.local
Subject: C=GB, L=Birmingham, O=HomeLab, CN=*.hl.local

All looks good, we can proceed forward and declare the main manifest.

Define the Main Manifest for the Homelab Environment

Edit the main manifest file /etc/puppetlabs/code/environments/homelab/manifests/site.pp and define any global overrides for the homelab environment.

Note how the TLS certificate that we created previously is configured to be deployed on all servers.

##
## File: site.pp
## Author: Tomas at www.lisenet.com
## Date: March 2018
##
## This manifest defines services in the following order:
##
## 1. OpenSSH server config
## 2. Packages and services
## 3. Sudo and User config
## 4. SELinux config
## 5. Sysctl config
## 6. System security limits
##

##
## The name default (without quotes) is a special value for node names.
## If no node statement matching a given node can be found, the default
## node will be used.
##

node 'default' {}

##
## Note: the lisenet_firewall class should not be assigned here,
## but rather added to Katello Host Groups. This is to allow us
## to utilise Smart Class Parameters and add additional rules
## per host by using Katello WebUI.
##

#################################################
## OpenSSH server configuration for the env
#################################################

## CentOS 7 OpenSSH server configuration
if ($facts['os']['family'] == 'RedHat') and ($facts['os']['release']['major'] == '7') {
  class { 'ssh::server':
    validate_sshd_file => true,
    options => {
      'Port' => '22',
      'ListenAddress' => '0.0.0.0',
      'Protocol' => '2',
      'SyslogFacility' => 'AUTHPRIV',
      'LogLevel' => 'INFO',
      'MaxAuthTries' => '3',
      'MaxSessions' => '5',
      'AllowUsers' => ['root','tomas'],
      'PermitRootLogin' => 'without-password',
      'HostKey' => ['/etc/ssh/ssh_host_ed25519_key', '/etc/ssh/ssh_host_rsa_key'],
      'PasswordAuthentication' => 'yes',
      'PermitEmptyPasswords' => 'no',
      'PubkeyAuthentication' => 'yes',
      'AuthorizedKeysFile' => '.ssh/authorized_keys',
      'KerberosAuthentication' => 'no',
      'GSSAPIAuthentication' => 'yes',
      'GSSAPICleanupCredentials' => 'yes',
      'ChallengeResponseAuthentication' => 'no',
      'HostbasedAuthentication' => 'no',
      'IgnoreUserKnownHosts' => 'yes',
      'PermitUserEnvironment' => 'no',
      'UsePrivilegeSeparation' => 'yes',
      'StrictModes' => 'yes',
      'UsePAM' => 'yes',

      'LoginGraceTime' => '60',
      'TCPKeepAlive' => 'yes',
      'AllowAgentForwarding' => 'no',
      'AllowTcpForwarding' => 'no',
      'PermitTunnel' => 'no',
      'X11Forwarding' => 'no',
      'Compression' => 'delayed',
      'UseDNS' => 'no',
      'Banner' => 'none',
      'PrintMotd' => 'no',
      'PrintLastLog' => 'yes',
      'Subsystem' => 'sftp /usr/libexec/openssh/sftp-server',

      'Ciphers' => '[email protected],[email protected],[email protected],aes256-ctr,aes192-ctr,aes128-ctr',
      'MACs' => '[email protected],[email protected],[email protected]',
      'KexAlgorithms' => '[email protected],diffie-hellman-group18-sha512,diffie-hellman-group16-sha512,diffie-hellman-group14-sha256',
      'HostKeyAlgorithms' => 'ssh-ed25519,[email protected],ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,[email protected],[email protected],[email protected],[email protected],[email protected]',
    },
  }
}

#################################################
## Packages/services configuration for the env
#################################################
## We want these packages installed on all servers
$packages_to_install = [
  'bzip2',
  'deltarpm',
  'dos2unix',
  'gzip',
  'htop',
  'iotop',
  'lsof',
  'mailx',
  'net-tools',
  'nmap-ncat',
  'postfix',
  'rsync',
  'screen',
  'strace',
  'sudo',
  'sysstat',
  'unzip',
  'vim',
  'wget',
  'xz',
  'yum-cron',
  'yum-utils',
  'zip',
]
package { $packages_to_install: ensure => 'installed' }

## We do not want these packages on servers
$packages_to_purge = [
  'aic94xx-firmware',
  'alsa-firmware',
  'alsa-utils',
  'ivtv-firmware',
  'iw',
  'iwl1000-firmware',
  'iwl100-firmware',
  'iwl105-firmware',
  'iwl135-firmware',
  'iwl2000-firmware',
  'iwl2030-firmware',
  'iwl3160-firmware',
  'iwl3945-firmware',
  'iwl4965-firmware',
  'iwl5000-firmware',
  'iwl5150-firmware',
  'iwl6000-firmware',
  'iwl6000g2a-firmware',
  'iwl6000g2b-firmware',
  'iwl6050-firmware',
  'iwl7260-firmware',
  'iwl7265-firmware',
  'wireless-tools',
  'wpa_supplicant',
]
package { $packages_to_purge: ensure => 'purged' }

##
## Manage some specific services below
##
service { 'kdump':    enable => false, }
service { 'puppet':   enable => true, }
service { 'sysstat':  enable => false, }
service { 'yum-cron': enable => true, }

##
## Configure NTP
##
class { 'ntp':
  servers  => [ 'admin1.hl.local', 'admin2.hl.local' ],
  restrict => ['127.0.0.1'],
}

##
## Configure Postfix via postconf
## Note how we configure smtp_fallback_relay
##
service { 'postfix': enable => true, ensure => "running", }
exec { "configure_postfix":
  path     => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider => shell,
  command  => "postconf -e 'inet_interfaces = localhost' 'relayhost = admin1.hl.local' 'smtp_fallback_relay = admin2.hl.local' 'smtpd_banner = $hostname ESMTP'",
  unless   => "grep ^smtp_fallback_relay /etc/postfix/main.cf",
  notify   => Exec['restart_postfix']
}
exec { 'restart_postfix':
  path        => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider    => shell,
  ## Using service rather than systemctl to make it portable
  command     => "service postfix restart",
  refreshonly => true,
}

if ($facts['os']['release']['major'] == '7') {
  ## Disable firewalld and install iptables-services
  package { 'iptables-services': ensure => 'installed' }
  service { 'firewalld':  enable => "mask", ensure => "stopped", }
  service { 'iptables':   enable => true, ensure => "running", }
  service { 'ip6tables':  enable => true, ensure => "running", }
  service { 'tuned':      enable => true, }
  package { 'chrony': ensure => 'purged' }
}

## Wildcard *.hl.local TLS certificate for homelab
file { '/etc/pki/tls/certs/hl.crt':
  ensure => 'file',
  source => 'puppet:///homelab_files/hl.crt',
  path   => '/etc/pki/tls/certs/hl.crt',
  owner  => '0',
  group  => '0',
  mode   => '0644',
}
file { '/etc/pki/tls/private/hl.key':
  ensure => 'file',
  source => 'puppet:///homelab_files/hl.key',
  path   => '/etc/pki/tls/private/hl.key',
  owner  => '0',
  group  => '0',
  mode   => '0640',
}

#################################################
## Sudo and Users configuration for the env
#################################################

class { 'sudo':
  purge               => true,
  config_file_replace => true,
}
sudo::conf { 'wheel_group':
  content => "%wheel ALL=(ALL) ALL",
}

## These are necessary for passwordless SSH
file { '/root/.ssh':
  ensure => 'directory',
  owner  => '0',
  group  => '0',
  mode   => '0700',
}->
file { '/root/.ssh/authorized_keys':
  ensure  => 'file',
  owner   => '0',
  group   => '0',
  mode    => '0600',
  content => "# Managed by Puppet\n\n\nssh-rsa key-string\n",
}

#################################################
## SELinux configuration for the environment
#################################################

class { selinux:
  mode => 'enforcing',
  type => 'targeted',
}

#################################################
## Sysctl configuration for the environment
#################################################

sysctl { 'fs.suid_dumpable': value => '0' }
sysctl { 'kernel.dmesg_restrict': value => '1' }
sysctl { 'kernel.kptr_restrict': value => '2' }
sysctl { 'kernel.randomize_va_space': value => '2' }
sysctl { 'kernel.sysrq': value => '0' }
sysctl { 'net.ipv4.tcp_syncookies': value => '1' }
sysctl { 'net.ipv4.tcp_timestamps': value => '1' }
sysctl { 'net.ipv4.conf.default.accept_source_route': value => '0' }
sysctl { 'net.ipv4.conf.all.accept_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.accept_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.send_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.send_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.secure_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.secure_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.rp_filter': value => '1' }
sysctl { 'net.ipv4.conf.default.rp_filter': value => '1' }
sysctl { 'net.ipv4.conf.all.log_martians': value => '1' }
sysctl { 'net.ipv4.conf.default.log_martians': value => '1' }
sysctl { 'net.ipv6.conf.lo.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.all.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.default.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.all.accept_redirects': value => '0' }
sysctl { 'net.ipv6.conf.default.accept_redirects': value => '0' }
sysctl { 'vm.swappiness': value => '40' }

#################################################
## Security limits configuration for the env
#################################################

limits::limits { '*/core':      hard => 0; }
limits::limits { '*/fsize':     both => 67108864; }
limits::limits { '*/locks':     both => 65535; }
limits::limits { '*/nofile':    both => 65535; }
limits::limits { '*/nproc':     both => 16384; }
limits::limits { '*/stack':     both => 32768; }
limits::limits { 'root/locks':  both => 65535; }
limits::limits { 'root/nofile': both => 65535; }
limits::limits { 'root/nproc':  both => 16384; }
limits::limits { 'root/stack':  both => 32768; }

## Module does not manage the file /etc/security/limits.conf
## We might as well warn people from editing it.
file { '/etc/security/limits.conf':
  ensure  => 'file',
  owner   => '0',
  group   => '0',
  mode    => '0644',
  content => "# Managed by Puppet\n\n",
}

Any server that uses the Puppet homelab environment will get the configuration above applied.
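
To pull and apply this configuration on a client host, a standard Puppet agent run against the homelab environment (a sketch; add --noop first if you only want to see what would change) is:

# puppet agent --test --environment homelab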

What’s Next?

While using the Puppet homelab environment gives us flexibility to develop and test Puppet modules without having to publish them (Katello content views are published in order to lock their contents in place), once we hit production, we will need to be able to define a stable state of the modules so that anything that hasn’t been tested yet doesn’t get rolled into the environment.

Katello allows us to use a separate lifecycle for Puppet modules; we'll discuss this in the next article.

Source
