FOSS Project Spotlight: Tutanota, the First Encrypted Email Service with an App on F-Droid

Seven years ago, we started building Tutanota, an encrypted email service
with a strong focus on security, privacy and open source. Long before the
Snowden revelations, we felt there was a need for easy-to-use encryption that
would allow everyone to communicate online without being snooped upon.


Figure 1. The Tutanota team’s motto: “We fight for privacy with automatic
encryption.”

As developers, we know how easy it is to spy on email that travels through the
web. Email, with its federated setup, is great; that’s why it became, and
remains, the main form of online communication. From a security perspective,
however, the federated setup is troublesome, to say the least.

End-to-end encrypted email is difficult to handle on desktops (with key
generation, key sharing, secure storing of keys and so on), and it’s close to impossible on
mobile devices. For the average, not so tech-savvy internet user, there are a
lot of pitfalls, and the probability of doing something wrong is, unfortunately,
rather high.

That’s why we decided to build Tutanota: a secure email service that
is so easy to use, everyone can send confidential email, not only the
tech-savvy. The entire encryption process runs locally on users’
devices, and it’s fully automated. The automatic encryption also enabled us to build
fully encrypted email apps for Android and iOS.

Finally, end-to-end encrypted email is starting to become the standard:
58% of all email sent from Tutanota is already end-to-end encrypted, and
the percentage is constantly rising.


Figure 2. Easy email encryption on desktops and mobile devices is now possible for
everyone.

The Open-Source Email Service to Get Rid of Google

As open-source enthusiasts, we have kept our apps open source from the start,
but putting them on F-Droid was a challenge. Like most email services, we had
used Google’s FCM for push notifications. On top of that, our encrypted email
service was based on Cordova, which the F-Droid servers are not able to
build.

Not being able to publish our Android app on F-Droid was one of the main
reasons we started to re-build the entire Tutanota web client. We are privacy
and open-source enthusiasts; we ourselves use F-Droid. Consequently, we
thought that our app must be published there, no matter the effort.

When rebuilding our email client, we made sure not to use Cordova anymore and
to replace Google’s FCM for push notifications.

The Challenge to Replace Google’s FCM

GCM (or, as it’s now called, FCM, for Firebase Cloud Messaging) is a service
owned by Google. Unfortunately, FCM includes Google’s tracking code for
analytics purposes, which we didn’t want to use. Even more importantly, to
use FCM, you have to send all your notification data to Google, and you have
to use Google’s proprietary libraries.

Because of privacy and security concerns, we didn’t send any info in
the notification messages. Therefore, the push notification mentioned only
that you had received a new message, without a reference to the mailbox in
which that message had been placed.

We wanted our users to be able to use Tutanota on every ROM and every device,
without the control of a third party. That’s why we decided to take on the
challenge and build a push notification service ourselves.

When we started designing our push system, we set the following goals:

  • It must be secure.
  • It must be fast.
  • It must be power-efficient.

We’ve researched how others (Signal, Wire, Conversations, Riot,
Facebook and Mastodon) have been solving similar problems, and we had several
options in mind, including WebSockets, MQTT, Server Sent Events and HTTP/2
Server Push.

We settled on SSE (Server-Sent Events) because it seemed like a simple
solution; by that, I mean “easy to implement, easy to debug”.
Debugging these types of things can be a major headache, so one should not
underestimate that factor. Another argument in favor of SSE was its relative
power efficiency: we didn’t need upstream messages, and a constant connection
was not our goal.

So, What Is SSE?

SSE is a web API that allows a server to send events to connected
clients. It’s a relatively old API, which is, in my opinion, underused.
We’d never heard of SSE before encountering Mastodon, the federated social
network, which uses SSE for real-time timeline updates, and it works great.

The protocol itself is very simple and resembles good old polling: the client
opens a connection, and the server keeps it open. It differs from
classical polling in that the connection stays open for multiple events.
The server can send events and data messages; they’re just separated by
new lines. So the only thing the client needs to do is open a connection
with a big timeout and read the stream in a loop.
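As an illustration of how little client-side logic SSE needs (this is a sketch, not Tutanota’s actual client code), here a minimal SSE-style stream is simulated with printf and parsed with a plain line-oriented read loop:

```shell
# Simulate a minimal SSE stream: "data:" lines carry the payload,
# and a blank line terminates each event.
printf 'data: new mail\n\ndata: heartbeat\n\n' |
while IFS= read -r line; do
  case "$line" in
    data:*) payload="${line#data: }" ;;          # remember the event data
    "")     echo "event received: $payload" ;;   # blank line = end of event
  esac
done
```

Against a real server, the same loop would read from a long-lived HTTP connection (for example, curl -N on an event-stream URL) instead of printf.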

SSE fits our needs better than WebSocket would (it’s cheaper and converges
faster, because it’s not duplex). We’ve seen multiple chat apps
trying to use WebSocket for push notifications, and it didn’t seem power-efficient.

We had some experience with WebSocket already, and we knew that firewalls
don’t like keepalive connections. To solve this, we used the same
workaround for SSE that we did for WebSocket: we send empty “heartbeat”
messages every few minutes. We made this interval adjustable from the server
side and randomized it so as not to overwhelm the server.
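That jitter can be sketched in a few lines of shell (the numbers here are illustrative, not Tutanota’s actual values):

```shell
# Heartbeat interval = server-suggested base ± random jitter, so that
# thousands of clients don't all ping the server at the same moment.
base=120   # seconds, as suggested by the server
jitter=30  # maximum deviation in either direction
interval=$(( base - jitter + RANDOM % (2 * jitter + 1) ))  # 90..150
echo "sleeping ${interval}s before next heartbeat"
```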

In the end, we had to do some work. I could describe loads of challenges
we had to overcome to make this finally work, but maybe some other time. Yet,
it was totally worth it. Our new app is still in beta, but thanks to
non-blocking IO, we’ve been able to maintain thousands of simultaneous
connections without problems. Our users are no longer forced to use Google
Play Services, and we’ve been able to publish our app on F-Droid.

As a side-note: wouldn’t it be great if the user could just pick a
“push notifications provider” in the phone settings and the OS managed
all these hard details by itself, so every app that doesn’t want to be
policed by the platform owner didn’t have to invent the system anew? It
could be end-to-end encrypted between the app and the app server. There’s
no real technical difficulty in that, but as long as our systems are
controlled by big players, we as app developers have to solve this by
ourselves.

Tutanota Is the First App of an Email Service Available on F-Droid

Our app release on F-Droid really excites us, as it proves that it is possible
to build a secure email service that’s completely
Google-free, giving people a real open-source alternative to the data-hungry
market-leader Gmail.

This is a remarkable step, as so far no other email service has managed (or
cared) to publish its app on F-Droid. The reason for this is that, in
general, email services rely on Google’s FCM for push notifications, which
makes an F-Droid release impossible.

The F-Droid team also welcomed our move in the right direction:

We are
happy to see how enthusiastic Tutanota is about F-Droid and free software,
having rewritten their app from scratch so it could be included. Furthermore,
they take special measures to avoid tracking you, and the security looks
solid with support for end-to-end encryption and two-factor
authentication.

We are very excited about this release as well, and we are thankful for the
dedication and hard work of the numerous F-Droid volunteers helping us to
publish our app there. We are also proud that the new Android app finally
comes without any ties to Google services; as a secure email service, this is
very important to us. We encourage our users to leave Google behind, so
offering a Google-free Android app is a minimum requirement for us.


Figure 3. The new Tutanota client comes with a dark theme—a nice and minimalistic
design that lets you easily encrypt email messages to every email address in the
world.

A Privacy-Focused Email Service for Everyone

We’ve been using Tutanota ourselves for a couple of years now. The new
Tutanota client and apps are fast, come with a nice and minimalistic design,
enable search on encrypted data, and support 2FA and auto-sync. Since we’ve
added search, there’s no major feature missing for professional use any
longer, and we’ve noticed the number of new users rising constantly. We
recommend that everyone who wants to stop third parties from reading their
private email give it a try.

Source

Introducing CCVPN: A Project in Collaboration with China Mobile, Vodafone and Huawei

As operators continue to experience growing demands on their networks in the lead-up to 5G, the need for high-bandwidth, flat, and super high-speed Optical Transport Networks (OTNs) is greater than ever. Combined with an increasingly global market, there is a clear need for service providers to work across international boundaries and provide end-to-end services for their customers that are carrier- and geography-agnostic.

Enter the Cross-domain, Cross-layer VPN (CCVPN) use case, coming with the next ONAP release, Casablanca (due in late 2018). Piloted by Linux Foundation Platinum members China Mobile, Vodafone and Huawei, with contributions from a handful of other vendors, in response to evolving market needs, CCVPN delivers code that will allow ONAP to automate and orchestrate cloud-enabled, software-defined VPN services across network operator borders. This means that operators will be able to provision a VPN service that crosses international borders by accessing and orchestrating resources on other carriers’ networks.

The use case was demonstrated on stage at Open Networking Summit Europe and includes two ONAP instances: one deployed by China Mobile and one deployed by Vodafone. Both instances orchestrate the respective operator’s underlay OTN networks and overlay SD-WAN networks, and leverage each other’s networks for cross-operator VPN service delivery.

In addition to provisioning cross-domain, cross-layer VPN, this effort represents true collaboration to solve industry challenges. By combining forces, developers from different companies are continuing to work together and with the community to refine features to fully enable CCVPN as part of the Casablanca release. To learn more about ONAP, please visit www.onap.org; more details on the CCVPN project are available on the project Wiki page here. Blog posts from Huawei and Vodafone are also available for additional information.

Source

Linux/Unix desktop fun: sl – a mirror version of ls

One of the most common typing mistakes is entering sl instead of the ls command. I set up an alias, i.e., alias sl=ls; but then you miss out on the steam train with a whistle.

sl is joke software, a classic UNIX game: a steam locomotive runs across your screen if you type “sl” (Steam Locomotive) instead of “ls” by mistake. Now there is a twist on the older sl command.

sl – a mirror version of ls

From the blog post:

I didn’t like it and made another program of the same name. My sl just mirrors the output of ls. It accepts most ls(1) arguments and is best enjoyed with -l.

source code

The program is written in the bash shell. Here is the source code:

#!/bin/bash
# sl - prints a mirror image of ls. (C) 2017 Tobias Girstmair, https://gir.st/, GPLv3

LEN=$(ls "$@" | wc -L) # get the length of the longest line

ls "$@" | rev | while read -r line
do
    # pad each reversed line to the longest width, then swap the
    # leading spaces with the first field
    printf "%${LEN}.${LEN}s\n" "$line" | sed 's/^\(\s\+\)\(\S\+\)/\2\1/'
done

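The two building blocks are easy to try on their own: rev reverses each input line, and GNU wc -L reports the length of the longest line:

```shell
printf 'hello\nworld!\n' | rev     # reverses each line: "olleh", "!dlrow"
printf 'hello\nworld!\n' | wc -L   # longest line length: 6
```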

Run it as follows.

First, create the ~/bin/ directory using the mkdir command:

$ mkdir ~/bin/

Next, cd into ~/bin/ using the cd command and store the above source code in a file named sl:

$ cd ~/bin/
$ vi sl

Save and close the file. Set the executable permission on your shell script using the chmod command:

$ chmod +x sl

Test it:

$ ls -l
$ ./sl -l

Sample output from the sl command:

txt.qaf.detaeler.km >- txt.smc.detaeler.km 05:41 32 ceD 91 keviv keviv 1 xwrxwrxwrl
qaf.detaeler.km 72:41 11 ceD 709 keviv keviv 1 x-rx-rxwr-
etalpmet.qaf.detaeler.km 34:51 61 voN 121 keviv keviv 1 –r–r-wr-
txt.qaf.detaeler.km 85:00 01 beF 014 keviv keviv 1 –r–r-wr-
spit.detaeler.km 94:41 32 ceD 709 keviv keviv 1 x-rx-rxwr-
etalpmet.spit.detaeler.km 84:41 32 ceD 121 keviv keviv 1 –r–r-wr-
ssr.setadpu.km 95:00 7 naJ 618 keviv keviv 1 x-rx-rxwr-
etalpmet.ssr.setadpu.km 24:22 2 naJ 463 keviv keviv 1 –r–r-wr-
txt.ssr.setadpu.km 22:12 02 beF 4221 keviv keviv 1 –r–r-wr-
hs.014.xnign 43:11 6 naJ 684 keviv keviv 1 x-rx-rxwr-
hs.103.moc.tfarcxin 5102 52 rpA 631 keviv keviv 1 x-rx-rxwr-
etacsufbo 5102 91 luJ 9931 keviv keviv 1 –r–r-wr-
hs.lapyap 84:41 02 ceD 865 keviv keviv 1 x-rx-rxwr-
txt.lapyap 7102 03 naJ 4131 keviv keviv 1 –r–r-wr-
hs.daolputsop 3102 13 ceD 135 keviv keviv 1 x-rx-rxwr-
hs.daolpuerp 3102 13 ceD 734 keviv keviv 1 x-rx-rxwr-
hs.niamod.eralfduolc.lla.egrup 7102 81 yaM 6401 keviv keviv 1 x-rx-rxwr-
nohtyp 05:20 5 beF 6904 keviv keviv 2 x-rx-rxwrd
ls 92:61 13 raM 672 keviv keviv 1 x-rx-rxwr-
resu.tidder.ecruos 7102 42 naJ 911 keviv keviv 1 x-rx-rxwr-
014.deteled.sgat 95:32 02 raM 97732 keviv keviv 1 –r–r-wr-
hs.teewt 53:10 62 naJ 58653 keviv keviv 1 x-rx-rxwr-
tob-rettiwt 90:32 4 beF 6904 keviv keviv 2 x-rx-rxwrd
smc.elif.daolpu 7102 9 nuJ 907 keviv keviv 1 x-rx-rxwr-
qaf.elif.daolpu 7102 9 nuJ 807 keviv keviv 1 x-rx-rxwr-
pit.elif.daolpu 7102 9 nuJ 907 keviv keviv 1 x-rx-rxwr-
hs.egamidaolpu 3102 81 tcO 3911 keviv keviv 1 x-rx-rxwr-
nalnoekaw 00:41 21 tcO 1325 keviv keviv 1 x-rx-rxwr-
2x 7102 52 nuJ 017 keviv keviv 1 x-rx-rxwr-


How to set up a bash shell alias

The syntax is:

alias name=value

Add the following to the ~/.bashrc file:

echo 'alias sl="/home/$USER/bin/sl -l"' >> ~/.bashrc

Load it:

$ source ~/.bashrc

Test it:

$ sl

sl – a mirror version of the ls command

How to verify the sl command execution path

Use the type command or the command command as follows:

$ type -a sl
sl is aliased to `/home/vivek/bin/sl -l'
sl is /home/vivek/bin/sl
sl is /usr/games/sl

$ command -V sl
alias sl='/home/vivek/bin/sl -l'

You can temporarily disable an alias using any one of the following methods, i.e., prefix the command with a backslash or use the command builtin:

\sl
\ls
command ls
command sl

For more info, see this page.

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source

Around 62 Percent of All Internet Sites Will Run an Unsupported PHP Version in 10 Weeks | Linux.com

The highly popular PHP 5.x branch will stop receiving security updates at the end of the year.

According to statistics from W3Techs, roughly 78.9 percent of all Internet sites today run on PHP. But on December 31, 2018, security support for PHP 5.6.x will officially cease, marking the end of all support for any version of the ancient PHP 5.x branch.

This means that, starting next year, around 62 percent of all Internet sites, those still running a PHP 5.x version, will stop receiving security updates for their server and website’s underlying technology, exposing hundreds of millions of websites, if not more, to serious security risks.

Read more at ZDNet


Source

Cool, calm and collected | Linux Format

Buy it now!

Read a sample

Have GNU/Linux distributions fallen into a dull routine of refresh and release? It would be easy for a casual user to get the idea that Linux distros aren’t innovating. Indeed, years can pass between major releases for long-standing distros such as Debian and Slackware. As you’d expect, it’s behind the scenes where there’s constant work on improving, securing and bug squashing.

For many users, swan-like stability is key: keep everything on the surface calm and smooth, with frantic development activity well out of sight. There’s no doubt open source distros do offer this, but if you crave new horizons then there’s a continuous swarm of newly developed distros buzzing around the flowering core branches of the distro family tree.

In a way, this issue is a tale of two distro types. On the one hand, we have the newly released and refreshed Mint 19: we love it, and you can read the full review and then go give it a whirl. On the other, you have the ever-updated rolling-release distros in our Roundup. They’re all examples of how the open-source GNU/Linux ecosystem enables people to experiment and launch off in new directions.

If you’re happy with your distro, then we have plenty of projects for you to try. We explain how to get an email server up and running without incurring the wrath of your ISP; there’s a guide to video encoding with Handbrake; we cover simple steps you can take to secure your system; we enter the world of amateur radio; and we try our hand at coding some online bot spotters. As always, this issue of Linux Format is packed to the rafters, so enjoy!

Write in now, we want to hear from you!
lxf.letters@futurenet.com

Send your problems and solutions to:
lxf.answers@futurenet.com

Catch all the FLOSS news at our evil Facebook page or follow us on the Twitters.

Source

Managing Linux Users and Groups

In this video tutorial and cheat sheet you’ll learn:

  • User management commands in Linux with examples.
  • Adding, deleting, changing Linux user accounts.
  • Managing groups in Linux.

Video Transcript:

Each account consists of a username and a unique number called the UID which is short for user ID. Also, each account has a default group to which it belongs, some comments associated with that account, a shell to start when the user logs into the system, and a home directory. All of this information is stored in the /etc/passwd file. Note that /etc/passwd
is spelled “passwd.”
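As a quick illustration, a single record can be split into those colon-separated fields (the account shown here is made up, not from a real system):

```shell
# /etc/passwd format: name:password:UID:GID:comment:home:shell
record='alice:x:1001:1001:Alice Example:/home/alice:/bin/bash'
echo "$record" | awk -F: '{ printf "user=%s uid=%s home=%s shell=%s\n", $1, $3, $6, $7 }'
# prints: user=alice uid=1001 home=/home/alice shell=/bin/bash
```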

Historically, encrypted password information was also stored in the /etc/passwd file. However, the /etc/passwd file is actually readable by anyone on the system, so storing password information, even though it’s encrypted, is actually a security risk. So now, by default, the encrypted password information is stored in /etc/shadow. That file is only readable by the superuser or the root account on the system.

Managing users on a Linux system is fairly straightforward. If you want to create an account, use the useradd command; to delete accounts, use the userdel command. To modify existing accounts, just use the usermod command. These commands listed on your screen are the low-level Linux commands, and they’re available on all Linux distributions. However, some distros provide their own account-creation tools that you can use if you so choose.

Just like the /etc/passwd file contains account information, the /etc/group file contains group information. To create a group use the groupadd command. The groupdel command will delete a group. To modify a group, use the groupmod command.

To see what groups an account is in, use the groups command. If you specify an account after the groups command, it will show all the group memberships for that specified account. If you happen to execute the groups command without any arguments, it displays the groups that the current user is in.
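For example (the root account exists on every Linux system; the group lists you see will vary by machine):

```shell
groups          # group memberships of the current user
groups root     # group memberships of the root account
id -nG root     # equivalent listing via the id command
```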

If you want to switch to another account, use the su command, which stands for switch user. To verify what account you’re currently using, simply run the whoami command and it will return the account name.

The sudo command is used to allow one user to run commands as another user. This is most commonly used to allow a normal user to execute a program as the superuser, so you can think of sudo as “superuser do.” To start a shell as another user, run “sudo -s” or you can also run “sudo su”.

By the way, the file that stores the sudo configuration is /etc/sudoers. To modify the sudoers file, use the visudo command. It has syntax checking built in so you don’t accidentally break the sudo configuration.

If you found this video helpful then I know you’re going to learn so much more in my Learn Linux in 5 Days course available at LinuxTrainingAcademy.com. In it, you’ll learn exactly what you need to know about the Linux operating system in order to become a proficient and professional user in a very short period of time.

In the course, you’ll start at the very beginning by choosing a Linux distribution and installing it. From there, you’ll learn the most important Linux concepts and commands, plus you’ll be guided step-by-step through several practical and real-world examples.

By the way, this course also comes with a 30-day money-back guarantee which means you have everything to gain and absolutely nothing to lose by trying it out.

So, if you can spare just a few minutes a day and want to learn the ins and outs of the Linux operating system, join me and the other students in this course today. I look forward to seeing you in the course!

Source

Node.js Innovator Program – Women in Linux

Writing a mobile or web application in Node.js? Join the Joyent Node.js Innovator Program to get free cloud infrastructure and Node.js expertise.

Joyent offers a cloud environment optimized for designing, deploying and debugging Node.js applications. We run Node.js ourselves at large scale and offer direct access to the Joyent engineering team for operations best practices and production time debugging assistance.

Because we believe the Joyent Cloud is the best place to run Node.js, we’re putting our money where our mouth is and launching the Node.js Innovator Program. Members of the year-long incubator program receive:

  • Up to $25,000 in Joyent Cloud hosting credits*
  • Custom half-day kickoff & training session with Joyent Node.js experts
  • Co-marketing opportunities for your Node.js application
  • Eligibility for Joyent’s Node.js Innovator of the Year Award
  • Networking & idea sharing with fellow incubator members

APPLY NOW



Source

Configure Apache Server and Deploy WordPress with Puppet | Lisenet.com :: Linux | Security

We’re going to use Puppet to install Apache and WordPress.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have two CentOS 7 servers installed which we want to configure as follows:

web1.hl.local (10.11.1.21) – Apache server with WordPress and NFS mount
web2.hl.local (10.11.1.22) – Apache server with WordPress and NFS mount

SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

WordPress from a Custom Tarball

We won’t be downloading WordPress from the Internet; we’ll be using our own custom build (tarball) instead. Each build is source-controlled and tested locally prior to releasing it to the environment.

The name of the tarball is wordpress-latest.tar.gz, and it’s currently stored on the Katello server under /var/www/html/pub/. This allows us to pull the archive from https://katello.hl.local/pub/wordpress-latest.tar.gz.

Note that the file wp-config.php is never stored inside the tarball, but gets generated by Puppet.

Redundant Apache/MySQL Architecture

To increase redundancy, each Apache server will be configured to use a different MySQL database server.

Since our MySQL nodes are configured to use a Master/Master replication, there should, in theory, be no difference in terms of data that’s stored on each VM.

We’ll plug web1.hl.local in to db1.hl.local, and web2.hl.local in to db2.hl.local.

NFS Mount for Uploads

Both Apache servers will need an NFS client configured to mount shared storage.

While users can use either of the Apache servers to upload files, we need to ensure that, regardless of the VM they end up on, files across WordPress instances are the same. We could use rsync to solve the problem, but I’m not sure how well that would scale. Perhaps something to look at in the future.

Configuration with Puppet

Puppet master runs on the Katello server.

Puppet Modules

We use the following Puppet modules:

  1. derdanne-nfs – to mount an NFS share for WordPress /uploads folder
  2. puppetlabs-apache – to install and configure Apache
  3. puppet-selinux – to configure SELinux booleans (e.g. httpd_use_nfs)
  4. hunner-wordpress – to install WordPress

Please see each module’s documentation for features supported and configuration options available.

Firewall Configuration

Configure both Apache servers to allow HTTPS traffic:

firewall { '007 allow HTTPS':
  dport  => '443',
  source => '10.11.1.0/24',
  proto  => tcp,
  action => accept,
}

There may be an insignificant performance penalty incurred while using encryption between HAProxy and Apache compared to plaintext HTTP, but we want to ensure that traffic is secured. HAProxy can be re-configured to offload TLS if this becomes a problem.

SELinux Booleans

Configure SELinux on both Apache servers. These are required in order to allow Apache to use NFS and connect to a remote MySQL instance. We also want to allow Apache (WordPress) to send email notifications.

selinux::boolean { 'httpd_use_nfs':
  persistent => true, ensure => 'on'
}->
selinux::boolean { 'httpd_can_network_connect_db':
  persistent => true, ensure => 'on'
}->
selinux::boolean { 'httpd_can_sendmail':
  persistent => true, ensure => 'on'
}

Configure NFS Client

This needs to be applied for both Apache servers.

class { '::nfs':
  server_enabled => false,
  client_enabled => true,
}->
nfs::client::mount { '/var/www/html/wp-content/uploads':
  server => 'nfsvip.hl.local',
  share  => '/nfsshare/uploads',
}

The virtual IP (which NFS cluster runs on) is 10.11.1.31, and the DNS name is nfsvip.hl.local. The cluster is configured to export /nfsshare. See here for more info.

Install Apache

This needs to be applied for both Apache servers.

We deploy one Apache virtualhost, and configure log forwarding to Graylog (see the custom_fragment section). I wrote a separate post for how to send Apache logs to Graylog, take a look here if you need more info.

TLS certificates are taken care of by the main Puppet manifest for the environment. See here for more info.

package { 'php-mysql':
  ensure => 'installed'
}->
class { 'apache':
  default_vhost     => false,
  default_ssl_vhost => false,
  default_mods      => false,
  mpm_module        => 'prefork',
  server_signature  => 'Off',
  server_tokens     => 'Prod',
  trace_enable      => 'Off',
  log_formats       => { graylog_access => '{ "version": "1.1", "host": "%V", "short_message": "%r", "timestamp": %{%s}t, "level": 6, "_user_agent": "%{User-Agent}i", "_source_ip": "%h", "_duration_usec": %D, "_duration_sec": %T, "_request_size_byte": %O, "_http_status_orig": %s, "_http_status": %>s, "_http_request_path": "%U", "_http_request": "%U%q", "_http_method": "%m", "_http_referer": "%{Referer}i", "_from_apache": "true" }' },
}
include apache::mod::alias
include apache::mod::headers
include apache::mod::php
include apache::mod::rewrite
include apache::mod::ssl
::apache::mod { 'logio': }

## Configure VirtualHosts
apache::vhost { 'blog_https':
  port                 => 443,
  servername           => 'blog.hl.local',
  docroot              => '/var/www/html',
  manage_docroot       => false,
  options              => ['FollowSymLinks','MultiViews'],
  override             => 'All',
  suphp_engine         => 'off',
  ssl                  => true,
  ssl_cert             => '/etc/pki/tls/certs/hl.crt',
  ssl_key              => '/etc/pki/tls/private/hl.key',
  ssl_protocol         => ['all', '-SSLv2', '-SSLv3'],
  ssl_cipher           => 'HIGH:!aNULL:!MD5:!RC4',
  ssl_honorcipherorder => 'On',
  redirectmatch_status => ['301'],
  redirectmatch_regexp => ['(.*)\.gz'],
  redirectmatch_dest   => ['/'],
  custom_fragment      => 'CustomLog "|/usr/bin/nc -u syslog.hl.local 12201" graylog_access',
}

Install WordPress from Our Custom Tarball

We want to use our existing MySQL configuration, meaning that we don’t want WordPress creating any databases nor users. The details below are the ones we used when setting up MySQL servers.

The only thing that’s going to be different here is the db_host parameter: one Apache server uses db1.hl.local, the other uses db2.hl.local.

class { 'wordpress':
  db_user        => 'dbuser1',
  db_password    => 'PleaseChangeMe',
  db_name        => 'blog',
  db_host        => 'db1.hl.local',
  create_db      => false,
  create_db_user => false,
  install_dir    => '/var/www/html',
  install_url    => 'http://katello.hl.local/pub',
  version        => 'latest',
  wp_owner       => 'apache',
  wp_group       => 'root',
}

Note how we set the owner to apache. This is something we may want to harden further depending on security requirements.

If all goes well, we should end up with both servers using the NFS share:

[root@web1 ~]# df -h | egrep "File|uploads"
Filesystem                         Size  Used Avail Use% Mounted on
nfsvip.hl.local:/nfsshare/uploads  2.0G   53M  1.9G   3% /var/www/html/wp-content/uploads
[root@web2 ~]# df -h | egrep "File|uploads"
Filesystem                         Size  Used Avail Use% Mounted on
nfsvip.hl.local:/nfsshare/uploads  2.0G   53M  1.9G   3% /var/www/html/wp-content/uploads

We should also be able to create Graylog dashboard widgets using the Apache log data.

What’s Next?

We’ll look into putting a pair of HAProxy servers in front of Apache to perform load balancing.

Source

An easy to use gui to run and get docker containers — The Ultimate Linux Newbie Guide

Example of near single-click installation of containerised apps with Portainer

I met up with the team from Portainer.io in my home town of Wellington, New Zealand, when they paid a visit to a Linux user group there. Their product is an awesome graphical, browser-based Docker container management system. You can download containers that are pre-baked and ready to go, such as MongoDB and Apache, as well as many other popular tools of the trade. Not only can you roll out containers in seconds, you can also manage storage volumes and networking in just a few clicks of your mouse.

Want to know more?

If you use Docker at all, or if you’ve wanted to start looking into containerisation and DevOps, Portainer really is one of the best tools to get started with; it is currently compatible with Docker Engine and Docker Swarm. Portainer is completely free and open source. If you are interested in knowing more, you should check out this review over at 2daygeek.

Source

Arch Linux – News: Perl library path change

The perl package now uses a versioned path for compiled modules. This means
that modules built for a non-matching perl version will not be loaded any more
and must be rebuilt.

A pacman hook warns about affected modules during the upgrade by showing output
like this:

WARNING: '/usr/lib/perl5/vendor_perl' contains data from at least 143 packages which will NOT be used by the installed perl interpreter.
-> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/vendor_perl'

You must rebuild all affected packages against the new perl package before you
can use them again. The change also affects modules installed directly via
CPAN. Rebuilding will also be necessary again with future major perl updates
like 5.28 and 5.30.

Please note that rebuilding was already required for major updates prior to
this change, however now perl will no longer try to load the modules and then fail in strange ways.

If the build system of some software does not detect the change automatically,
you can use perl -V:vendorarch in your PKGBUILD to query perl for the
correct path. There is also sitearch for software that is not packaged with
pacman.

Source
