Kubernetes Container Control Comes To Power Systems

October 29, 2018, Timothy Prickett Morgan

The moment that Google created a clone of parts of its internal Borg cluster and container management system and open sourced it as the Kubernetes project, the jig was pretty much up.

Google had done a lot of the fundamental work to bring containers to the Linux platform starting way back in 2005, and had shared its techniques with the open source community, leading directly to the Docker container format and the engine that runs it atop the Linux kernel. While Docker, the company, got a jump start with its Docker Swarm container orchestrator and then its fuller Docker Enterprise container management system, the world quickly shifted from the Docker stack to Kubernetes in its raw and myriad commercialized formats. It was much as Linux users had shifted their loyalties away from the Eucalyptus cloud controller to OpenStack in a heartbeat, thanks to the completely open nature of OpenStack, the backing of Rackspace Hosting and NASA as founders, and the slew of open source developers and commercial entities that piled on afterward. Kubernetes has emerged as the de facto standard for container orchestration, and it is supported on Linux and Windows Server, the two dominant operating systems in the datacenter these days.

The Docker Enterprise stack can be loaded on bare metal Linux running on the Linux-only versions of Power Systems based on either Power8 or Power9 processors; it can also run on standard Power Systems machines that support Linux, AIX, and IBM i atop the PowerVM hypervisor – but only on Linux partitions. So IBM i and AIX shops can run containerized applications on those Linux partitions.

Had this been another, earlier era, and had the IBM i platform been generating the very high revenues that it enjoyed in its heyday two and three decades ago, IBM might have been talking about moving the IBM i operating system to a Linux kernel and containerizing the whole operating system and its related systems software into Docker containers. That has not happened yet, and we do not think it ever will, given the cost and the relatively low (compared to historical highs) return on that investment. But as we discussed a year and a half ago, IBM could create quasi-native Docker containers using a PASE runtime environment, though it would have to be based on a Linux kernel instead of the AIX kernel. Even if IBM did all of that, it is not clear how to containerize RPG or COBOL applications. Java, PHP, Node.js, and any other open source programming language could have its applications run in these quasi-native Docker containers. But RPG and COBOL present an interesting obstacle. IBM could create a clone runtime for RPG and COBOL that looks and smells like the Docker Engine but that runs on a baby IBM i kernel or passes directly through the microcode to the actual IBM i kernel.

Even if applications running on IBM i written in RPG and COBOL can’t be containerized, that doesn’t mean IBM i shops should not benefit from Linux and containerized applications. That Db2 for i database is the real asset, and there are definitely ways to use Node.js and Java to extract data from that database and pass it off to containerized applications running on Linux partitions that in turn support Docker containers that are orchestrated by Kubernetes. IBM i would be extended, much as integrating the OS/2 High Performance File System inside of the platform gave us the Integrated File System, for instance. That didn’t negate the value of native applications written in RPG and COBOL that had native access to the integrated database in OS/400 and IBM i. This is no different, in concept, even if it is quite different in implementation.

Even if the combination of Docker containers and Kubernetes orchestration for those containers is not native in IBM i, there are a number of ways to get containers running on Power Systems on whole Linux machines or Linux partitions on hybrid IBM i-Linux machines.

The first one, as we have discussed, is Docker Enterprise Edition for Power. IBM grabbed the open source GCC Go compiler and created a native Power-Linux Docker daemon and runtime and offers its own support contracts for Docker. Big Blue does Level 1 and Level 2 support, with backing from Docker itself for Level 3 support. The stack includes the core Docker Engine and the Docker Trusted Registry, a private version of the public Docker Hub container registry. Prices range from $750 to $2,000 per node per year for support.

The second method also comes from IBM. Last year, Big Blue launched IBM Cloud Private, an on-premises variant of its IBM Cloud public cloud platform and container orchestration frameworks, based on Cloud Foundry and Kubernetes. These container orchestration and platform cloud layer services are available on subscription-based virtual machines as well as dedicated hosts on the IBM Cloud, and as of nearly a year ago, were made available on Power, X86, and System z machines in on-premises datacenters.

IBM says that it has 400 customers so far for the private version of IBM Cloud, most of them very large enterprises of the kind that typically buy its System z and Power Systems machines. The cloud setups also include a hybrid on-premises/cloud and multicloud development environment called Microclimate, which brings together integrated development environments with hooks into the Jenkins continuous integration/continuous delivery tool as well as the Kubernetes container orchestrator.

The IBM Cloud Private stack also includes a tool called Transformation Advisor, which pulls information out of existing WebSphere environments, suggests ways to break those applications into microservices, and snaps into the tooling so developers can begin that process. (It is not clear what Transformation Advisor would suggest if it saw Java applications hitting WebSphere on the IBM i platform.) There is also another tool in the private cloud called Vulnerability Advisor, which examines access control and other aspects of security for potential vulnerabilities and suggests ways to fix them. In the past two weeks, IBM announced a new cross-platform management tool called Multicloud Manager, which weaves together the Helm application manager for Kubernetes with the Terraform public cloud provisioning tool, the Prometheus monitoring tool, and the Grafana visualization tool to create a whole new management layer for this open software container stack.

Again, all of this can run on the Linux partitions on any IBM i machine, but it remains to be seen how these tools can interface with the IBM i partitions, if at all.

The last way to bring commercially supported Kubernetes container control to Power Systems machines was just revealed last week, in announcement letter 218-391: Red Hat has ported its OpenShift Container Platform 3.10 to Power Systems machines – once again, of course, running atop Linux, in this case Power8 and Power9 systems running the new Red Hat Enterprise Linux 7.5 distribution. The offering is available from either IBM or Red Hat, and it is restricted to the L, LC, and AC class Power Systems machines – those are the Linux-only boxes – but that is nonsense. There should be a way to run it on the PowerVM hypervisor as well as the custom KVM hypervisor if customers really want to do it.

The OpenShift Container Platform is Red Hat's Kubernetes-based container management system, akin to Docker Swarm and Docker Enterprise or the IBM Multicloud Manager, and the stack also packages up a whole bunch of things into Kubernetes containers, including:

  • The Red Hat Enterprise Linux operating system itself
  • The Apache and Nginx Web servers
  • The MySQL, PostgreSQL, and MariaDB relational databases
  • The MongoDB document datastore
  • The Node.js, Ruby, PHP, Perl application development languages

OpenShift Container Platform does not include the continuous integration/continuous delivery tools or the virtualized networking and virtualized storage needed for a full container environment, but there are hooks to plug many different variations of these into the stack.

Having brought these different Kubernetes stacks to Power Systems, now IBM needs to go the extra step and tell IBM i shops how they can integrate them. Integration is, of course, the hallmark of the IBM i platform.

Source

There is no…


Paw Prints: Writings of the maddog

Oct 29, 2018 GMT, Jon maddog Hall

IBM bought Red Hat Software.

The world wide web is alive with the news, and many of the people who have worked and used Red Hat in the last 25 years are lamenting the “fall” of their beloved company and software.

I understand how they feel.

  • The first company I worked for, Aetna Life and Casualty, is much smaller than it used to be for various economic reasons.
  • The college I taught at, Hartford State Technical College, was merged with the state community colleges and is not even mentioned today.
  • Bell Laboratories was renamed Lucent and broken off from the world’s largest telephone company, then purchased by Alcatel, and later by Nokia.
  • Digital Equipment Corporation (DEC), once the second largest computer company in the world, was purchased by Compaq, then by HP.
  • SGI (who I worked for briefly) is gone.

Believe me, I know the pain.

Yet IBM has been a friend of Linux for a long time.

As early as 1998, IBM said they were going to support Linux, one of the first major companies to do so, while Microsoft was at its peak and calling Linux “a virus and a cancer”.

I still remember the IBM ads of the early 2000s touting Linux on TV and in magazines. I remember the little white-haired boy who represented Linux and how “on spot” the IBM advertisements were.

In 2001 we all cheered when IBM announced they had invested a billion USD in Linux (and made two billion from that investment).

I was invited to Austin, Texas by Daniel Frye, the VP of Open Source for IBM, when Lou Gerstner Jr. (IBM’s CEO) wrote the memo that made Open Source a focal point of IBM.
Lou wrote that in the past IBM had produced closed source products unless someone made a case for the product being Open Source. In the future IBM would produce Open Source products unless someone made a case for the product to be closed source.

Being from DEC, and knowing how engineers often were put through the legal and business gauntlet when they wanted to make a product Open Source, I understood the power of that memo from Lou.

I remember that day in Austin, when Dan asked me if the Open Source community would be afraid of IBM taking an active interest in Linux. I told him that some would, but the people I respected (Linus, Alan Cox, David Miller and others) would welcome IBM’s involvement in Linux, GNU, and Open Source.

I remember when people left the Linux project because “other people were making money on the work I do”. This was and is a wrong attitude. You write Free Software for whatever reason you write it. The fact that other people make money off of it is not a concern as long as they obey the license you wrote it under.
IBM has Open Source advocates all over the world. Their purchase of Red Hat should increase the exposure of Red Hat to even more people, to allow Red Hat to be used in even larger commercial-grade opportunities.

The statements I have read from both companies state that Red Hat will still be an autonomous division of IBM. We will see how true that is, but it is a good sign that Jim Whitehurst is to remain at the helm of Red Hat and will join IBM’s executive team.

Early on IBM hired many FOSS developers, even for projects not directly in their line of business. They gave support to Apache and many other Open Source projects. They were sponsors of many Open Source conferences.
IBM even has a server line called “LinuxONE” which touts security, scalability and lightning speed.
I can not predict the future, but if the past is any example of IBM’s respect and love for Linux, then Red Hat should be confident in their future.

Carpe Diem.

Source

Bryan Lunduke Is New LJ Deputy Editor

Portland, Oregon, October 29, 2018 — Today, Bryan Lunduke announced that he is officially joining the Linux Journal team as “Deputy Editor” of the illustrious — and long-running — Linux magazine. “I’ve been a fan of Linux Journal for almost as long as I’ve been using Linux,” beamed Lunduke. “To be joining a team that has been producing such an amazing magazine for nearly a quarter of a century? It’s a real honor.” In November of 2017, SUSE—the first Linux-focused company ever created—announced Lunduke’s departure to re-focus on journalism. Now, furthering that goal, Lunduke has joined the first Linux-focused magazine ever created. Lunduke’s popular online show, the aptly named “Lunduke Show”, will continue to operate as a completely independent entity with no planned changes to production schedules or show content. Sources say Lunduke is “feeling pretty fabulous right about now.” No confirmation, as yet, on if Lunduke is currently doing a “happy dance”. At least one source suggests this is likely.

Source

Virsh KVM Commands For Management

Virsh Commands for KVM

Virsh is a command-line tool for controlling existing virtual machines. It includes a number of commands to help manage KVM (Kernel-based Virtual Machine) instances, and it allows for easy command-line management of virtual machines, often remotely over SSH.

To enter the interactive command shell over SSH, just type the following:

virsh

From there you can type the help command to view all of the possible commands. We will cover some of the more common commands for managing virtual machines. Virsh commands can also be executed directly from the command line, as opposed to starting the virsh environment, by prefixing each command with virsh.
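
For example, you can run a single command directly from your shell, and the same form works against a remote KVM host over SSH; in the sketch below the hostname is a placeholder:

virsh list --all

virsh -c qemu+ssh://root@kvm-host.example.com/system list --all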

View all of the virtual machines on this system

# virsh list --all

and you will see a similar output:

 Id    Name        State
----------------------------------------------------
 5     centos7-1   running
 84    centos7-2   running
 85    centos7-3   running

Start a virtual machine

# virsh start server-name

Replace server-name with the name of the virtual machine you are attempting to start.

Stopping a virtual machine

Shutdown a virtual machine:

# virsh shutdown server-name

This will attempt to gracefully stop a virtual machine and power it off.

Power off a virtual machine

# virsh destroy server-name

This will immediately power off a virtual machine. This is the equivalent to removing the power cord on a running physical server.

Reboot a virtual machine

# virsh reboot server-name

Auto start configuration for virtual machines

To have a virtual machine start every time a server reboots, type the following:

# virsh autostart server-name

To prevent a virtual machine from starting every time a server reboots, type the following:

# virsh autostart --disable server-name
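
To confirm whether autostart is currently set, virsh dominfo reports it along with other basic details about the virtual machine:

# virsh dominfo server-name

Look for the Autostart line in the output.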

Attach a console to a virtual machine

To view the console output of a virtual machine, type the following:

# virsh console server-name

To detach from the console again, press Ctrl+].

There are many more things virsh can do; this is just the basics to get you through power management of a virtual machine on the command line. If you have not already, please check out our multi-part series on setting up KVM:

Part 1: KVM Installation On CentOS
Part 2: Bridged Networking Setup For KVM Virtualization
Part 3: Creating A New Virtual Machine With KVM

May 1, 2017, LinuxAdmin.io

Source

The D in Systemd stands for ‘Dammmmit!’ A nasty DHCPv6 packet can pwn a vulnerable Linux box – The Register

Hole opens up remote-code execution to miscreants – or a crash, if you’re lucky


A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.

The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.

The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.

This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.

Red Hat’s advisory sums it up as an out-of-bounds heap write in the systemd-networkd DHCPv6 client, reachable by a malicious DHCP server on the same network, with denial of service or code execution as the possible outcomes.

Felix Wilhelm, of the Google Security team, was credited with discovering the flaw, designated CVE-2018-15688. Wilhelm found that a specially crafted DHCPv6 network packet could trigger “a very powerful and largely controlled out-of-bounds heap write,” which could be used by a remote hacker to inject and execute code.

“The overflow can be triggered relatively easy by advertising a DHCPv6 server with a server-id >= 493 characters long,” Wilhelm noted.

In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We’re told RHEL 7, at least, does not use the vulnerable component by default.

Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

If you run a Systemd-based Linux system, and rely on systemd-networkd, update your operating system as soon as you can to pick up the fix when available and as necessary.
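
If you are not sure whether a given box is exposed, a quick first check (assuming a systemd-based distribution) is to see whether the networkd service is actually running and which systemd version is installed:

systemctl is-active systemd-networkd
systemctl --version

A host that does not run systemd-networkd (many desktop distributions use NetworkManager instead) is not reachable through this particular bug, though it should still be patched.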

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike. Though a number of major distributions have in recent years adopted and championed it as the replacement for the old init era, others within the Linux world seem to still be less than impressed with Systemd and Poettering’s occasionally controversial management of the tool. ®

Source

Download Live Voyager 18.04.1.1

Live Voyager is an open source distribution of Linux based on a special edition of the highly acclaimed Ubuntu operating system that uses Xfce as its main desktop environment. It includes the same stable and reliable base as Xubuntu and on top of that various tweaks and additional software.

It’s distributed as Live DVDs for 32-bit and 64-bit architectures.

The distribution is available for download as Live DVD ISO images, one for each of the supported hardware platforms (amd64 (64-bit) and i386 (32-bit)), that can be deployed to USB flash drives using Disks or UNetbootin apps, or burned to DVD discs using any CD/DVD burning software.
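
On an existing Linux machine, plain dd also works for writing the image to a USB stick; the sketch below assumes the ISO filename and uses /dev/sdX as a placeholder for the target device (double-check it, as the command overwrites the stick):

sudo dd if=Voyager-18.04.1.1-amd64.iso of=/dev/sdX bs=4M status=progress
sync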

Boot menu options

Except for the fact that it uses a different background image, the boot prompt of the Live DVDs is identical to the one used by the Xubuntu Linux operating system, allowing users to try the distribution without installing anything on their computers.

In addition, users can boot an existing operating system installed on the first disk drive, run a memory diagnostic test, check the disk for defects (only if using a DVD media), as well as to directly install the system, without testing it (not recommended).

Productive and modern desktop environment with a dock

The Xfce-powered desktop environment has been tweaked to offer users a futuristic graphical session that is composed of a top panel, from where users can access the unique main menu, launch applications, and interact with running programs, as well as a dock (app launcher) located on the bottom edge of the screen.

Contains great applications

Default applications include the Mozilla Firefox web browser, Pidgin multi-protocol instant messenger, AbiWord word processor, Gnumeric spreadsheet editor, Mozilla Thunderbird email and news client, Darktable and GIMP image editors, gThumb and Ristretto image viewers, and MComix comic book viewer.

Additionally, it comes with the Transmission torrent downloader, Hotot Twitter client, XChat IRC client, Clementine audio player, VLC Media Player, Parole video player, PiTiVi video editor, Cheese webcam viewer, Transmageddon video transcoder, FreetuxTV TV viewer, Kazam screen recording tool, and Ubuntu Software Center for installing extra apps.

A really great and lightweight distribution of Linux

Live Voyager is a really great, lightweight, and modern operating system that uses a stable and reliable Ubuntu/Xubuntu base, an astonishing icon theme, a different and handy main menu, and a great selection of applications. We recommend using it on low-end machines or computers that don’t support resource-hungry OSes.

Source

Secure Apache with Let’s Encrypt on Debian 9 – Linux.com

Let’s Encrypt is a certificate authority created by the Internet Security Research Group (ISRG). It provides free SSL certificates via a fully automated process designed to eliminate manual certificate creation, validation, installation, and renewal.

Certificates issued by Let’s Encrypt are valid for 90 days from the issue date and are trusted by all major browsers today.

This tutorial will guide you through the process of obtaining a free Let’s Encrypt SSL certificate using the certbot tool on Debian 9. We’ll also show how to configure Apache to use the new SSL certificate and enable HTTP/2.
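
The basic flow looks roughly like the sketch below, assuming certbot and its Apache plugin are available from the Debian 9 repositories and with example.com standing in for your own domain:

apt-get install certbot python-certbot-apache
certbot --apache -d example.com -d www.example.com
certbot renew --dry-run

The first command installs the client, the second obtains a certificate and updates the matching Apache virtual host, and the third verifies that automatic renewal will work. The linked tutorial walks through the full process, including the HTTP/2 configuration.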

Source

Compile Apache 2.4 From Source

Compile Apache From Source

Compiling Apache 2.4 from source is easy and allows for more customization later on. It also allows for control over where it is installed.

Install some required dependencies:

yum install -y wget pcre-devel openssl openssl-devel expat-devel

First, get the latest version; at the time of writing this it is 2.4.25. The latest release can be downloaded from the Apache httpd download page.

wget -O /usr/src/httpd-2.4.25.tar.gz http://mirror.nexcess.net/apache//httpd/httpd-2.4.25.tar.gz

Change directories to /usr/src

cd /usr/src

Uncompress the tar

tar xfvz httpd-2.4.25.tar.gz

Go to the directory:

cd httpd-2.4.25

The build we are creating requires apr and apr-util, so change to the srclib directory and download the following (from the APR download page):

cd ./srclib
wget http://mirror.stjschools.org/public/apache//apr/apr-1.5.2.tar.gz
wget http://mirrors.gigenet.com/apache//apr/apr-util-1.5.4.tar.gz

Uncompress them and rename them:

tar xfvz apr-1.5.2.tar.gz; mv apr-1.5.2 apr
tar xfvz apr-util-1.5.4.tar.gz; mv apr-util-1.5.4 apr-util

Configure Apache:

Go back to the main source directory:

cd ../

To view the configure options, type the following:

./configure --help

This is a sample config:

./configure \
  --enable-layout=RedHat \
  --prefix=/usr \
  --enable-expires \
  --enable-headers \
  --enable-rewrite \
  --enable-cache \
  --enable-mem-cache \
  --enable-speling \
  --enable-usertrack \
  --enable-module=so \
  --enable-unique_id \
  --enable-logio \
  --enable-ssl=shared \
  --with-ssl=/usr \
  --enable-proxy=shared \
  --with-included-apr

Make:

make

Install:

make install

Start Services:

CentOS 7

Create a systemd unit file at /etc/systemd/system/httpd.service and add the following:

[Unit]
Description=The Apache HTTP Server

[Service]
Type=forking
PIDFile=/var/apache/httpd.pid
ExecStart=/usr/sbin/apachectl start
ExecReload=/usr/sbin/apachectl graceful
ExecStop=/usr/sbin/apachectl stop
KillSignal=SIGCONT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Reload systemd so it picks up the new unit file, then configure it to start on boot:

systemctl daemon-reload
systemctl enable httpd

Start it:

systemctl start httpd
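
Optionally, confirm the unit came up cleanly:

systemctl status httpd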

CentOS 6

chkconfig --add httpd

chkconfig httpd on

service httpd start

That’s it for compiling Apache from source. If you visit the server’s IP address in a browser (http://ip-address), you should see the default Apache page.

You can also verify it’s running by typing the following:

# ps aux|grep httpd
root 1101 0.0 0.0 105376 836 pts/0 S+ 21:03 0:00 grep httpd
nobody 15834 0.0 0.4 270908 8288 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15835 0.0 0.4 270764 8224 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15836 0.0 0.4 270764 8224 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15837 0.0 0.4 270764 8224 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15838 0.0 2.1 321656 40916 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15839 0.0 1.6 292740 32276 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15840 0.0 1.6 291948 31052 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL
nobody 15841 0.0 1.6 291948 31048 ? S 03:46 0:00 /usr/sbin/httpd -DHAVE_PROXY_FTP -DHAVE_AUTH_TOKEN -DHAVE_PROXY_HTTP -DHAVE_RPAF -DHAVE_PHP5 -DHAVE_XSENDFILE -DHAVE_AUTH_MYSQL -DHAVE_PROXY -DHAVE_PROXY_AJP -DHAVE_PROXY_BALANCER -DHAVE_PROXY_SCGI -DHAVE_SSL -DHAVE_PROXY_CONNECT -DSSL -DSSL -DSSL

httpd -M will show the compiled modules:

# httpd -M
Loaded Modules:
core_module (static)
authn_file_module (static)
authn_default_module (static)
authz_host_module (static)
authz_groupfile_module (static)
authz_user_module (static)
authz_default_module (static)
auth_basic_module (static)
cache_module (static)
mem_cache_module (static)
include_module (static)
filter_module (static)
log_config_module (static)
logio_module (static)
env_module (static)
expires_module (static)
headers_module (static)
usertrack_module (static)
unique_id_module (static)
setenvif_module (static)
version_module (static)
mpm_prefork_module (static)
http_module (static)
mime_module (static)
status_module (static)
autoindex_module (static)
asis_module (static)
cgi_module (static)
negotiation_module (static)
dir_module (static)
actions_module (static)
speling_module (static)
userdir_module (static)
alias_module (static)
rewrite_module (static)
so_module (static)
ssl_module (shared)

Configuration Changes

The configuration file is typically located at /etc/httpd/conf/httpd.conf.

How to change default ports

Apache listens on the ports specified by the Listen directive. To change it from the default of port 80, edit the Listen line; for example:

Listen 8080
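
After restarting the service, a quick way to confirm Apache is listening on the new port (assuming curl is installed) is:

curl -I http://localhost:8080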

How to enable SSL in httpd.conf

Edit /etc/httpd/conf/httpd.conf and ensure the following line is uncommented:

LoadModule ssl_module modules/mod_ssl.so

And add the following line

Listen 443
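
With the module loaded and port 443 open, a minimal SSL virtual host looks roughly like the sketch below; the server name and certificate paths are placeholders for your own files:

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/example.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/example.com.key
</VirtualHost>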

You will need to restart the service if you make any changes to the configuration files.
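
Before restarting, it is worth checking the syntax of the edited configuration; for example:

apachectl configtest
systemctl restart httpd

(On CentOS 6, use service httpd restart instead.)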

Source

Robothorium, the sci-fi robotic turn-based dungeon crawler, adds gamepad support and multiple choice events

Goblinz Studio have taken on a lot of feedback from players of their sci-fi dungeon crawler Robothorium and it’s showing in their recent updates.

They’ve made tons of improvements to nearly all aspects of the game, including fixing plenty of bugs. They’ve been able to add in new robots, a new scenario, and a hardcore game mode, and with the latest update they’ve added gamepad support along with a new multiple choice event system.

When I checked out the initial Early Access release, I was reasonably impressed with the rather refined experience that it offers.

After trying it out for a good while again today, I came off even more impressed than I was before. It feels a lot better, especially since they revamped the previously lacklustre trap system to give you multiple choices. Previously, when you hit a room with a trap, it activated right away. Now you can destroy them, hack them, or deactivate them, so that’s a previous issue I had with it sorted out completely. They also added additional types of traps, so it’s more interesting overall.

They’ve also added new items, new sounds, and so much more recently that it’s worth another look. Very keen to see what else they improve, as it’s actually quite fun.

You can grab Robothorium on Humble Store and Steam.

Source
