How to Install Zabbix Agent on Windows

The Zabbix agent is installed on remote systems that need to be monitored through the Zabbix server. The agent collects resource utilization and application data on the client system and provides this information to the Zabbix server on request.

Install the Zabbix Agent Service on a Windows System

Step 1 – Download Agent Source Code

Download the latest Windows Zabbix agent source code from the official Zabbix site, or use the link below to download Zabbix agent 3.0.0.

After downloading the zipped archive of the Zabbix client, extract its contents under the c:\zabbix directory.

Step 2 – Create Agent Configuration File

Now make a copy of the sample configuration file c:\zabbix\conf\zabbix_agentd.win.conf to create the Zabbix agent configuration file at c:\zabbix\zabbix_agentd.conf. Then edit the configuration and update the following values.

#Server=[Zabbix server IP]
#Hostname=[hostname of client system]

Server=192.168.1.26

ServerActive=192.168.1.26
Hostname=linuxforfreshers.com
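For reference, here is what the resulting fragment of c:\zabbix\zabbix_agentd.conf means, annotated (a sketch based on the values above): Server controls which addresses may request passive checks from the agent, ServerActive tells the agent where to report active checks, and Hostname must match the host name configured in the Zabbix frontend.

```
# Allow passive checks only from this Zabbix server address
Server=192.168.1.26
# Report active checks to the same server
ServerActive=192.168.1.26
# Must match the host name configured in the Zabbix frontend
Hostname=linuxforfreshers.com
```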

Step 3: Install Zabbix Agent as Windows Service

Let's install the Zabbix agent as a Windows service by executing the following command from the command line.

c:\zabbix\bin\win64> zabbix_agentd.exe -c c:\zabbix\zabbix_agentd.conf --install

zabbix_agentd.exe [9084]: service [Zabbix Agent] installed successfully

zabbix_agentd.exe [9084]: event source [Zabbix Agent] installed successfully

Step 4 – Start/Stop Agent Service

Use the following commands to start and stop the Zabbix agent service from the command line.

c:\zabbix\bin\win64> zabbix_agentd.exe --start

zabbix_agentd.exe [7048]: service [Zabbix Agent] started successfully

c:\zabbix\bin\win64> zabbix_agentd.exe --stop

zabbix_agentd.exe [9608]: service [Zabbix Agent] stopped successfully

Uninstalling agent

c:\zabbix\bin\win64> zabbix_agentd.exe -c c:\zabbix\zabbix_agentd.conf --uninstall
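If the agent's own --start/--stop switches are not convenient, the standard Windows service tools can manage it as well (a sketch; "Zabbix Agent" is the default service name, as seen in the install output above):

```
c:\> sc query "Zabbix Agent"
c:\> net stop "Zabbix Agent"
c:\> net start "Zabbix Agent"
```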

Source

OWASP Security Shepherd – Cross Site Scripting One Solution

Welcome back to LSB, my budding hackers. Today’s lesson is about Cross-Site Scripting (or XSS). Cross-Site Scripting attacks are a type of injection in which malicious scripts are injected into otherwise benign and trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser-side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site.

xss1

So our task today is to get an alert on the web page to show that it’s vulnerable to this type of attack. On the web page we are presented with a search box and that is all we have for this puzzle.

xss2

A common piece of JavaScript that hackers use to find out whether a page is vulnerable to XSS is alert("XSS"). This small bit of code asks the web page to show us an alert prompt, so that we know the page is vulnerable. Let's try it: enter the code in the search box and click on the Get This User button.
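The flip side of this probe is the defense: any user input reflected into a page must be HTML-encoded before output. As a rough illustration of what the encoding step does to the probe above, here is a minimal escaper sketched in shell (sed here is just a stand-in for your framework's real encoding routine):

```shell
#!/bin/sh
# Minimal HTML escaper: neutralises the characters an XSS probe relies on.
# Order matters: '&' must be escaped first, or it would re-escape the others.
html_escape() {
  printf '%s' "$1" | sed -e 's/&/\&amp;/g' \
                         -e 's/</\&lt;/g' \
                         -e 's/>/\&gt;/g' \
                         -e 's/"/\&quot;/g'
}

# The classic probe, rendered harmless:
html_escape '<script>alert("XSS")</script>'
# → &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
```

Once encoded this way, the browser displays the payload as text instead of executing it.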

xss3

This worked the first time!

xss4

Above is the alert message from injecting the JavaScript into the page.

How to Protect Yourself

The primary defenses against XSS are described in the OWASP XSS Prevention Cheat Sheet.

Also, it’s crucial that you turn off HTTP TRACE support on all web servers. An attacker can steal cookie data via JavaScript even when document.cookie is disabled or not supported by the client. The attack works like this: a user posts a malicious script to a forum; when another user clicks the link, an asynchronous HTTP TRACE call is triggered, which collects the user’s cookie information from the server and sends it over to another malicious server, where the attacker gathers the cookie information to mount a session-hijacking attack. This is easily mitigated by removing support for HTTP TRACE on all web servers.

The OWASP ESAPI project has produced a set of reusable security components in several languages, including validation and escaping routines to prevent parameter tampering and the injection of XSS attacks. In addition, the OWASP WebGoat Project training application has lessons on Cross-Site Scripting and data encoding.

Thanks for reading, and don’t forget to like, comment and, of course, follow our blog. Until next time.

QuBits 2018-09-12

Source

How To Reset A MySQL Root Password

MySQL maintains its own ‘root’ password, independent of the system root password. This is a guide on how to reset the MySQL root password; to reset it you will need root access on the server that runs the MySQL instance. The same process applies to Percona and MariaDB servers as well; the only differences are the stop and start commands (e.g. mariadb instead of mysql for MariaDB).

If you already know the root password, you can also connect directly to MySQL and reset the password that way. The same method can be used to reset any user’s MySQL password.

Connect to MySQL:

mysql -uroot -p

Select the mysql database:

use mysql;

Update the root password:

update user set password=PASSWORD("newpass") where User='root';

Load the new privileges:

flush privileges;

Exit MySQL:

quit;

That's it for resetting a user password in MySQL.

This covers how to reset the mysql root password if you do not know the current password.

Stop MySQL

First, you will need to stop the MySQL service.

On CentOS 6:

/etc/init.d/mysql stop

On CentOS/RHEL 7:

systemctl stop mysql

Start mysqld_safe

You will then want to run mysqld_safe with the skip-grant-tables option to bypass password checks in MySQL:

mysqld_safe --skip-grant-tables &

Reset MySQL Root Password

You will now want to connect to MySQL as root:

mysql -uroot

Then use the mysql database:

use mysql;

Set a new password:

update user set password=PASSWORD("newpass") where User='root';

You will want to replace newpass with the password you want to use.

Flush the privileges:

flush privileges;

Exit MySQL:

exit;
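One caveat worth noting: the update statement above relies on the pre-5.7 mysql.user schema. On MySQL 5.7 and later the Password column was removed, so (assuming such a version) the equivalent reset while running with --skip-grant-tables looks roughly like this:

```sql
-- MySQL 5.7+: the Password column no longer exists in mysql.user.
-- Reload the grant tables first, then set the password with ALTER USER.
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'newpass';
```

As before, replace newpass with the password you actually want to use.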

Restart MySQL

On CentOS 6:

/etc/init.d/mysql restart

On CentOS 7:

systemctl restart mysql

Test New Root MySQL Password:

mysql -u root -p

You should now be able to connect successfully to mysql as root using the new password you set.

Sep 4, 2017 – LinuxAdmin.io

Source

Katello: Working with Puppet Modules and Creating the Main Manifest

Working with Katello – part 4. We’re going to install Puppet modules, we’re also going to create a custom firewall module, define some rules, configure Puppet to serve files from a custom location and declare the site manifest.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have Katello installed on a CentOS 7 server:

katello.hl.local (10.11.1.4) – see here for installation instructions

See the image below to identify the homelab part this article applies to.

Puppet Configuration

What we did in the previous article was create a new environment called “homelab”. What we haven’t done yet is create a Puppet folder structure.

Folder Structure

Let us go ahead and create a folder structure:

# mkdir -p /etc/puppetlabs/code/environments/homelab/manifests/

Create the main manifest and set appropriate group permissions:

# touch /etc/puppetlabs/code/environments/homelab/manifests/site.pp
# chgrp puppet /etc/puppetlabs/code/environments/homelab/manifests/site.pp
# chmod 0640 /etc/puppetlabs/code/environments/homelab/manifests/site.pp

We can now go ahead and start installing Puppet modules.

Puppet Modules

Below is a list of Puppet modules that we have installed and are going to use. It may look like a long list at first, but it really isn’t. Some of the modules are installed as dependencies.

We can see modules for SELinux, Linux security limits, kernel tuning (sysctl), as well as OpenLDAP and sssd, Apache, WordPress and MySQL, Corosync and NFS, Zabbix and SNMP.

There is the whole Graylog stack with Java/MongoDB/Elasticsearch, also Keepalived and HAProxy.

# puppet module list --environment homelab
/etc/puppetlabs/code/environments/homelab/modules
├── arioch-keepalived (v1.2.5)
├── camptocamp-openldap (v1.16.1)
├── camptocamp-systemd (v1.1.1)
├── derdanne-nfs (v2.0.7)
├── elastic-elasticsearch (v6.2.1)
├── graylog-graylog (v0.6.0)
├── herculesteam-augeasproviders_core (v2.1.4)
├── herculesteam-augeasproviders_shellvar (v2.2.2)
├── hunner-wordpress (v1.0.0)
├── lisenet-lisenet_firewall (v1.0.0)
├── puppet-archive (v2.3.0)
├── puppet-corosync (v6.0.0)
├── puppet-mongodb (v2.1.0)
├── puppet-selinux (v1.5.2)
├── puppet-staging (v3.1.0)
├── puppet-zabbix (v6.2.0)
├── puppetlabs-accounts (v1.3.0)
├── puppetlabs-apache (v2.3.1)
├── puppetlabs-apt (v4.5.1)
├── puppetlabs-concat (v2.2.1)
├── puppetlabs-firewall (v1.12.0)
├── puppetlabs-haproxy (v2.1.0)
├── puppetlabs-java (v2.4.0)
├── puppetlabs-mysql (v5.3.0)
├── puppetlabs-ntp (v7.1.1)
├── puppetlabs-pe_gem (v0.2.0)
├── puppetlabs-postgresql (v5.3.0)
├── puppetlabs-ruby (v1.0.0)
├── puppetlabs-stdlib (v4.24.0)
├── puppetlabs-translate (v1.1.0)
├── razorsedge-snmp (v3.9.0)
├── richardc-datacat (v0.6.2)
├── saz-limits (v3.0.2)
├── saz-rsyslog (v5.0.0)
├── saz-ssh (v3.0.1)
├── saz-sudo (v5.0.0)
├── sgnl05-sssd (v2.7.0)
└── thias-sysctl (v1.0.6)
/etc/puppetlabs/code/environments/common (no modules installed)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)
/usr/share/puppet/modules (no modules installed)

The lisenet-lisenet_firewall module is the one we’ve generated ourselves. We’ll discuss it shortly.

Now, how do we actually install modules into our homelab environment? The default Puppet environment is production (see the previous article), that’s where all modules go by default. In order to install them into the homelab environment, we can define the installation command with the homelab environment specified:

# MY_CMD="puppet module install --environment homelab"

To install modules, we can now use something like this:

# $MY_CMD puppetlabs-firewall ;
$MY_CMD puppetlabs-accounts ;
$MY_CMD puppetlabs-ntp ;
$MY_CMD puppet-selinux ;
$MY_CMD saz-ssh ;
$MY_CMD saz-sudo ;
$MY_CMD saz-limits ;
$MY_CMD thias-sysctl

This isn’t the full list of modules, but rather the ones required by our main manifest (see the main manifest section below). We could also loop over the module list if we wanted to install everything in one go.
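As mentioned, the per-module commands above can be collapsed into a loop. A sketch, with echo in place of the real install command so you can preview what would run before committing to it:

```shell
#!/bin/sh
# Preview the install commands for the whole module list.
# Replace `echo` with the real `puppet module install` once the list looks right.
modules="puppetlabs-firewall puppetlabs-accounts puppetlabs-ntp
puppet-selinux saz-ssh saz-sudo saz-limits thias-sysctl"
for m in $modules; do
  echo puppet module install --environment homelab "$m"
done
```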

Let us go back to the firewall module. We want to be able to pass custom firewall data through the Katello WebUI by using a smart class parameter. Create a new firewall module:

# cd /etc/puppetlabs/code/environments/homelab/modules
# puppet module generate lisenet-lisenet_firewall

Create manifests:

# touch ./lisenet_firewall/manifests/{pre.pp,post.pp}

All good, let us create the rules. Here is the content of the file pre.pp (it only allows ICMP and SSH by default):

class lisenet_firewall::pre {
  Firewall {
    require => undef,
  }
  firewall { '000 drop all IPv6':
    proto    => 'all',
    action   => 'drop',
    provider => 'ip6tables',
  }->
  firewall { '001 allow all to lo interface':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }->
  firewall { '002 reject local traffic not on loopback interface':
    iniface     => '! lo',
    proto       => 'all',
    destination => '127.0.0.1/8',
    action      => 'reject',
  }->
  firewall { '003 allow all ICMP':
    proto  => 'icmp',
    action => 'accept',
  }->
  firewall { '004 allow related established rules':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }->
  firewall { '005 allow SSH':
    proto  => 'tcp',
    source => '10.0.0.0/8',
    state  => ['NEW'],
    dport  => '22',
    action => 'accept',
  }
}

Here is the content of the file post.pp:

class lisenet_firewall::post {
  firewall { '999 drop all':
    proto  => 'all',
    action => 'drop',
    before => undef,
  }
}

The main module manifest init.pp:

class lisenet_firewall($firewall_data = false) {
  include lisenet_firewall::pre
  include lisenet_firewall::post

  resources { 'firewall':
    purge => true
  }

  Firewall {
    before  => Class['lisenet_firewall::post'],
    require => Class['lisenet_firewall::pre'],
  }

  if $firewall_data != false {
    create_resources('firewall', $firewall_data)
  }
}

One other thing we have to take care of after installing modules is the SELinux context:

# restorecon -Rv /etc/puppetlabs/code/environments/homelab/

At this stage Katello has no knowledge of our newly installed Puppet modules. We have to go to the Katello WebUI, navigate to:

Configure > Puppet Environments > Import environments from katello.hl.local

This will import the modules into the homelab environment. Do the same for Puppet classes:

Configure > Puppet Classes > Import environments from katello.hl.local

This will import the lisenet_firewall class.

Strangely, I couldn’t find a Hammer command to perform the imports above, chances are I may have overlooked something. If you know how to do that with the Hammer, then let me know in the comments section.

Configure lisenet_firewall Smart Class Parameter

Open Katello WebUI, navigate to:

Configure > Puppet Classes

Find the class lisenet_firewall, edit the Smart Class Parameter, and set the $firewall_data key's parameter type to yaml. This will allow us to pass in any additional firewall rules via YAML, e.g.:

"007 accept TCP Apache requests":
  dport:
    - "80"
    - "443"
  proto: tcp
  source: "10.0.0.0/8"
  action: accept

See the image below to get the idea.

The next step is to assign the lisenet_firewall class to the host group that we created previously. This will apply the default firewall rules defined in the manifests pre.pp and post.pp, and also allow us to add new firewall rules via YAML directly to any host that is a member of the group.

We can view the host list to get a host ID:

# hammer host list

Then verify that the parameter has been applied, e.g.:

# hammer host sc-params --host-id "32"
---|---------------|---------------|----------|-----------------
ID | PARAMETER     | DEFAULT VALUE | OVERRIDE | PUPPET CLASS
---|---------------|---------------|----------|-----------------
58 | firewall_data |               | true     | lisenet_firewall
---|---------------|---------------|----------|-----------------

Serving Files from a Custom Location

Puppet automatically serves files from the files directory of every module. This does the job for the most part; however, when working in a homelab environment, we prefer to have a custom mount point where we can store all files.

The file fileserver.conf configures custom static mount points for Puppet’s file server. If custom mount points are present, file resources can access them with their source attributes.

Create a custom directory to serve files from:

# mkdir /etc/puppetlabs/code/environments/homelab/homelab_files

To create a custom mount point, open the file /etc/puppetlabs/puppet/fileserver.conf and add the following:

[homelab_files]
path /etc/puppetlabs/code/environments/homelab/homelab_files
allow *

As a result, files in the path directory will be served at puppet:///homelab_files/.

There are a couple of files that we want to create and put in the directory straight away, as these will be used by the main manifest.

We’ll strive to use encryption as much as possible, therefore we’ll need to have a TLS/SSL certificate. Let us go ahead and generate a self-signed one. We want to create a wildcard certificate so that we can use it with any homelab service, therefore when asked for a Common Name, type *.hl.local.

# cd /etc/puppetlabs/code/environments/homelab/homelab_files
# DOMAIN=hl
# openssl genrsa -out "$DOMAIN".key 2048 && chmod 0600 "$DOMAIN".key
# openssl req -new -sha256 -key "$DOMAIN".key -out "$DOMAIN".csr
# openssl x509 -req -days 1825 -sha256 -in "$DOMAIN".csr \
    -signkey "$DOMAIN".key -out "$DOMAIN".crt
# openssl pkcs8 -topk8 -inform pem -in "$DOMAIN".key \
    -outform pem -nocrypt -out "$DOMAIN".pem

Ensure that the files have been created:

# ls
hl.crt hl.csr hl.key hl.pem

Verify the certificate:

# openssl x509 -in hl.crt -text -noout|grep CN
Issuer: C=GB, L=Birmingham, O=HomeLab, CN=*.hl.local
Subject: C=GB, L=Birmingham, O=HomeLab, CN=*.hl.local

All looks good, we can proceed forward and declare the main manifest.

Define the Main Manifest for the Homelab Environment

Edit the main manifest file /etc/puppetlabs/code/environments/homelab/manifests/site.pp and define any global overrides for the homelab environment.

Note how the TLS certificate that we created previously is configured to be deployed on all servers.

##
## File: site.pp
## Author: Tomas at www.lisenet.com
## Date: March 2018
##
## This manifest defines services in the following order:
##
## 1. OpenSSH server config
## 2. Packages and services
## 3. Sudo and User config
## 4. SELinux config
## 5. Sysctl config
## 6. System security limits
##

##
## The name default (without quotes) is a special value for node names.
## If no node statement matching a given node can be found, the default
## node will be used.
##

node default {}

##
## Note: the lisenet_firewall class should not be assigned here,
## but rather added to Katello Host Groups. This is to allow us
## to utilise Smart Class Parameters and add additional rules
## per host by using Katello WebUI.
##

#################################################
## OpenSSH server configuration for the env
#################################################

## CentOS 7 OpenSSH server configuration
if ($facts['os']['family'] == 'RedHat') and ($facts['os']['release']['major'] == '7') {
  class { 'ssh::server':
    validate_sshd_file => true,
    options => {
      'Port' => '22',
      'ListenAddress' => '0.0.0.0',
      'Protocol' => '2',
      'SyslogFacility' => 'AUTHPRIV',
      'LogLevel' => 'INFO',
      'MaxAuthTries' => '3',
      'MaxSessions' => '5',
      'AllowUsers' => ['root', 'tomas'],
      'PermitRootLogin' => 'without-password',
      'HostKey' => ['/etc/ssh/ssh_host_ed25519_key', '/etc/ssh/ssh_host_rsa_key'],
      'PasswordAuthentication' => 'yes',
      'PermitEmptyPasswords' => 'no',
      'PubkeyAuthentication' => 'yes',
      'AuthorizedKeysFile' => '.ssh/authorized_keys',
      'KerberosAuthentication' => 'no',
      'GSSAPIAuthentication' => 'yes',
      'GSSAPICleanupCredentials' => 'yes',
      'ChallengeResponseAuthentication' => 'no',
      'HostbasedAuthentication' => 'no',
      'IgnoreUserKnownHosts' => 'yes',
      'PermitUserEnvironment' => 'no',
      'UsePrivilegeSeparation' => 'yes',
      'StrictModes' => 'yes',
      'UsePAM' => 'yes',

      'LoginGraceTime' => '60',
      'TCPKeepAlive' => 'yes',
      'AllowAgentForwarding' => 'no',
      'AllowTcpForwarding' => 'no',
      'PermitTunnel' => 'no',
      'X11Forwarding' => 'no',
      'Compression' => 'delayed',
      'UseDNS' => 'no',
      'Banner' => 'none',
      'PrintMotd' => 'no',
      'PrintLastLog' => 'yes',
      'Subsystem' => 'sftp /usr/libexec/openssh/sftp-server',

      'Ciphers' => '[email protected],[email protected],[email protected],aes256-ctr,aes192-ctr,aes128-ctr',
      'MACs' => '[email protected],[email protected],[email protected]',
      'KexAlgorithms' => '[email protected],diffie-hellman-group18-sha512,diffie-hellman-group16-sha512,diffie-hellman-group14-sha256',
      'HostKeyAlgorithms' => 'ssh-ed25519,[email protected],ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,[email protected],[email protected],[email protected],[email protected],[email protected]',
    },
  }
}

#################################################
## Packages/services configuration for the env
#################################################
## We want these packages installed on all servers
$packages_to_install = [
  'bzip2',
  'deltarpm',
  'dos2unix',
  'gzip',
  'htop',
  'iotop',
  'lsof',
  'mailx',
  'net-tools',
  'nmap-ncat',
  'postfix',
  'rsync',
  'screen',
  'strace',
  'sudo',
  'sysstat',
  'unzip',
  'vim',
  'wget',
  'xz',
  'yum-cron',
  'yum-utils',
  'zip',
]
package { $packages_to_install: ensure => 'installed' }

## We do not want these packages on servers
$packages_to_purge = [
  'aic94xx-firmware',
  'alsa-firmware',
  'alsa-utils',
  'ivtv-firmware',
  'iw',
  'iwl1000-firmware',
  'iwl100-firmware',
  'iwl105-firmware',
  'iwl135-firmware',
  'iwl2000-firmware',
  'iwl2030-firmware',
  'iwl3160-firmware',
  'iwl3945-firmware',
  'iwl4965-firmware',
  'iwl5000-firmware',
  'iwl5150-firmware',
  'iwl6000-firmware',
  'iwl6000g2a-firmware',
  'iwl6000g2b-firmware',
  'iwl6050-firmware',
  'iwl7260-firmware',
  'iwl7265-firmware',
  'wireless-tools',
  'wpa_supplicant',
]
package { $packages_to_purge: ensure => 'purged' }

##
## Manage some specific services below
##
service { 'kdump': enable => false, }
service { 'puppet': enable => true, }
service { 'sysstat': enable => false, }
service { 'yum-cron': enable => true, }

##
## Configure NTP
##
class { 'ntp':
  servers  => [ 'admin1.hl.local', 'admin2.hl.local' ],
  restrict => ['127.0.0.1'],
}

##
## Configure Postfix via postconf
## Note how we configure smtp_fallback_relay
##
service { 'postfix': enable => true, ensure => 'running', }
exec { 'configure_postfix':
  path     => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider => shell,
  command  => "postconf -e 'inet_interfaces = localhost'
  'relayhost = admin1.hl.local'
  'smtp_fallback_relay = admin2.hl.local'
  'smtpd_banner = \$hostname ESMTP'",
  unless   => 'grep ^smtp_fallback_relay /etc/postfix/main.cf',
  notify   => Exec['restart_postfix'],
}
exec { 'restart_postfix':
  path        => '/usr/bin:/usr/sbin:/bin:/sbin',
  provider    => shell,
  ## Using service rather than systemctl to make it portable
  command     => 'service postfix restart',
  refreshonly => true,
}

if ($facts['os']['release']['major'] == '7') {
  ## Disable firewalld and install iptables-services
  package { 'iptables-services': ensure => 'installed' }
  service { 'firewalld': enable => 'mask', ensure => 'stopped', }
  service { 'iptables': enable => true, ensure => 'running', }
  service { 'ip6tables': enable => true, ensure => 'running', }
  service { 'tuned': enable => true, }
  package { 'chrony': ensure => 'purged' }
}

## Wildcard *.hl.local TLS certificate for homelab
file { '/etc/pki/tls/certs/hl.crt':
  ensure => 'file',
  source => 'puppet:///homelab_files/hl.crt',
  path   => '/etc/pki/tls/certs/hl.crt',
  owner  => '0',
  group  => '0',
  mode   => '0644',
}
file { '/etc/pki/tls/private/hl.key':
  ensure => 'file',
  source => 'puppet:///homelab_files/hl.key',
  path   => '/etc/pki/tls/private/hl.key',
  owner  => '0',
  group  => '0',
  mode   => '0640',
}

#################################################
## Sudo and Users configuration for the env
#################################################

class { 'sudo':
  purge               => true,
  config_file_replace => true,
}
sudo::conf { 'wheel_group':
  content => '%wheel ALL=(ALL) ALL',
}

## These are necessary for passwordless SSH
file { '/root/.ssh':
  ensure => 'directory',
  owner  => '0',
  group  => '0',
  mode   => '0700',
}->
file { '/root/.ssh/authorized_keys':
  ensure  => 'file',
  owner   => '0',
  group   => '0',
  mode    => '0600',
  content => "# Managed by Puppet\n\n\nssh-rsa key-string\n",
}

#################################################
## SELinux configuration for the environment
#################################################

class { 'selinux':
  mode => 'enforcing',
  type => 'targeted',
}

#################################################
## Sysctl configuration for the environment
#################################################

sysctl { 'fs.suid_dumpable': value => '0' }
sysctl { 'kernel.dmesg_restrict': value => '1' }
sysctl { 'kernel.kptr_restrict': value => '2' }
sysctl { 'kernel.randomize_va_space': value => '2' }
sysctl { 'kernel.sysrq': value => '0' }
sysctl { 'net.ipv4.tcp_syncookies': value => '1' }
sysctl { 'net.ipv4.tcp_timestamps': value => '1' }
sysctl { 'net.ipv4.conf.default.accept_source_route': value => '0' }
sysctl { 'net.ipv4.conf.all.accept_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.accept_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.send_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.send_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.secure_redirects': value => '0' }
sysctl { 'net.ipv4.conf.default.secure_redirects': value => '0' }
sysctl { 'net.ipv4.conf.all.rp_filter': value => '1' }
sysctl { 'net.ipv4.conf.default.rp_filter': value => '1' }
sysctl { 'net.ipv4.conf.all.log_martians': value => '1' }
sysctl { 'net.ipv4.conf.default.log_martians': value => '1' }
sysctl { 'net.ipv6.conf.lo.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.all.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.default.disable_ipv6': value => '0' }
sysctl { 'net.ipv6.conf.all.accept_redirects': value => '0' }
sysctl { 'net.ipv6.conf.default.accept_redirects': value => '0' }
sysctl { 'vm.swappiness': value => '40' }

#################################################
## Security limits configuration for the env
#################################################

limits::limits { '*/core': hard => 0; }
limits::limits { '*/fsize': both => 67108864; }
limits::limits { '*/locks': both => 65535; }
limits::limits { '*/nofile': both => 65535; }
limits::limits { '*/nproc': both => 16384; }
limits::limits { '*/stack': both => 32768; }
limits::limits { 'root/locks': both => 65535; }
limits::limits { 'root/nofile': both => 65535; }
limits::limits { 'root/nproc': both => 16384; }
limits::limits { 'root/stack': both => 32768; }

## The module does not manage the file /etc/security/limits.conf,
## so we might as well warn people against editing it by hand.
file { '/etc/security/limits.conf':
  ensure  => 'file',
  owner   => '0',
  group   => '0',
  mode    => '0644',
  content => "# Managed by Puppet\n\n",
}

Any server that uses the Puppet homelab environment will get the configuration above applied.

What’s Next?

Using the Puppet homelab environment gives us the flexibility to develop and test Puppet modules without having to publish them (Katello content views are published in order to lock their contents in place). Once we hit production, however, we will need a way to define a stable state of the modules, so that anything that hasn't been tested yet doesn't get rolled into the environment.

Katello allows us to use a separate lifecycle for Puppet modules; we'll discuss this in the next article.

Source

Ubuntu 18.10 (Cosmic Cuttlefish) released [LWN.net]

[Development] Posted Oct 18, 2018 18:33 UTC (Thu) by jake

Ubuntu has announced the release of its latest version, 18.10 (or “Cosmic Cuttlefish”). It has lots of updated packages and such, and is available in both a desktop and a server version; multiple flavors were released as well. More information can be found in the release notes. “The Ubuntu kernel has been updated to the 4.18 based Linux kernel, our default toolchain has moved to gcc 8.2 with glibc 2.28, and we’ve also updated to openssl 1.1.1 and gnutls 3.6.4 with TLS1.3 support.

Ubuntu Desktop 18.10 brings a fresh look with the community-driven Yaru theme replacing our long-serving Ambiance and Radiance themes. We are shipping the latest GNOME 3.30, Firefox 63, LibreOffice 6.1.2, and many others.

Ubuntu Server 18.10 includes the Rocky release of OpenStack including the clustering enabled LXD 3.0, new network configuration via netplan.io, and iteration on the next-generation fast server installer. Ubuntu Server brings major updates to industry standard packages available on private clouds, public clouds, containers or bare metal in your datacentre.”

Source

Debian 8.8 KDE Desktop Installation on Oracle VirtualBox

Debian GNU/Linux 8.8 KDE Desktop on Oracle VirtualBox

This video tutorial shows the Debian 8.8 KDE Desktop installation on Oracle VirtualBox step by step. It is also helpful for installing Debian 8.8 on physical computer or laptop hardware. We also install Guest Additions on Debian 8.8 KDE Desktop for better performance and usability features: Automatic Resizing Guest Display, Shared Folder, Seamless Mode, Shared Clipboard, Improved Performance, and Drag and Drop.

Debian GNU/Linux 8.8 KDE Desktop Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Debian 8.8 KDE Desktop Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Debian 8.8 KDE Desktop on Oracle VirtualBox
Debian 8.8 New Features and Improvements

Debian 8.8 mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available. Those who frequently install updates from security.debian.org won’t have to update many packages, and most updates from security.debian.org are included in this update.

Debian 8.8 is not a new version of Debian. It’s just a Debian 8 image with the latest updates of some of the packages. So, if you’re running a Debian 8 installation with all the latest updates installed, you don’t need to do anything.

Debian Website:

https://www.debian.org/

What is KDE Desktop?

The KDE Community is an international technology team dedicated to creating a free and user-friendly computing experience, offering an advanced graphical desktop, a wide variety of applications for communication, work, education and entertainment and a platform to easily build new applications upon. In this regard, the resources provided by KDE make it a central development hub and home for many popular applications and projects like Calligra Suite, Krita, digiKam, and many others.

KDE Website:

https://www.kde.org/

Hope you found this Debian 8.8 KDE Desktop installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

Install Anaconda Python and Jupyter Notebooks for Data Science

Getting started with Anaconda

To explain what Anaconda is, we will quote its definition from the official website:

Anaconda is a free, easy-to-install package manager, environment manager and Python distribution with a collection of 1,000+ open source packages with free community support. Anaconda is platform-agnostic, so you can use it whether you are on Windows, macOS or Linux.

It is easy to secure and scale any data science project with Anaconda, as it natively allows you to take a project from your laptop directly to a deployment cluster. A complete set of features can be seen in the official image as well:

Anaconda Enterprise

To show in brief what Anaconda is, here are some quick points:

  • It contains Python and hundreds of packages that are especially useful whether you are just getting started with or experienced in Data Science and Machine Learning
  • It comes with the conda package manager and virtual environments, which make development very easy
  • It allows you to get started with development very quickly without wasting time setting up tools for Data Science and Machine Learning

You can install Anaconda from here. It will automatically install Python on your machine so you don’t have to install it separately.

Anaconda vs Jupyter Notebooks

Whenever I try to discuss Anaconda with people who are beginners with Python and Data Science, they get confused between Anaconda and Jupyter Notebooks. We will quote the difference in one line:

Anaconda is a package manager. Jupyter is a presentation layer.

Anaconda tries to solve dependency hell in Python, where different projects require different versions of the same dependencies, by keeping each project's dependencies from interfering with one another.

Jupyter tries to solve the issue of reproducibility in the analysis by enabling an iterative and hands-on approach to explaining and visualizing code; by using rich text documentation combined with visual representations, in a single solution.

Anaconda is similar to pyenv, venv and miniconda; it's meant to achieve a Python environment that's 100% reproducible on another machine, independent of whatever other versions of a project's dependencies are available. It's a bit similar to Docker, but restricted to the Python ecosystem.

Jupyter is an amazing presentation tool for analytical work, where you can present code in "blocks", combined with rich text descriptions between them, formatted output from the blocks, and graphs generated in a well-designed manner by the code of other blocks.

Jupyter is incredibly good at ensuring reproducibility in analytical work, so anyone can come back many months later, visually understand what someone tried to explain, and see exactly which code drove which visualization and conclusion.

Often in analytical work, you will end up with tons of half-finished notebooks explaining Proof-of-Concept ideas, of which most will not lead anywhere initially. Some of these presentations might months later—or even years later—present a foundation to build from for a new problem.

Using Anaconda and Jupyter Notebook from Anaconda

Finally, we will have a look at some commands with which we will be able to use Anaconda, Python and Jupyter on our Ubuntu machine. First, we will download the installer script from the Anaconda website with this command:

curl -O https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh

We also need to ensure the data integrity of this script:

sha256sum Anaconda3-5.2.0-Linux-x86_64.sh
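The same verification can be automated: `sha256sum -c` compares a file against a recorded hash and exits non-zero on a mismatch. A small self-contained demo of the pattern (the file name here is just an example, not the Anaconda installer):

```shell
# Create a throwaway file and record its SHA-256 hash
echo "demo contents" > demo.txt
sha256sum demo.txt > demo.txt.sha256

# Later (or on another machine), verify the file against the recorded hash;
# prints "demo.txt: OK" and exits 0 on success, non-zero on mismatch
sha256sum -c demo.txt.sha256
```

For the Anaconda installer, you would check the printed hash against the official value published on the Anaconda hashes page for this release.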

We will get the following output, which should match the official hash published for this release:

Check Anaconda integrity

We can now run the Anaconda script:

bash Anaconda3-5.2.0-Linux-x86_64.sh

Once you accept the terms, provide a location for the installation or just hit Enter to take the default location. Once the installation is complete, we can activate it by reloading the shell profile (the installer adds Anaconda to your PATH in ~/.bashrc):

source ~/.bashrc

Finally, test the installation by listing the packages conda installed:

conda list

Making an Anaconda Environment

Once we have a complete installation in place, we can use the following command to create a new environment:

conda create --name my_env python=3

We can now activate the environment we made:

source activate my_env

(On newer conda versions, conda activate my_env also works.)

With this, our command prompt will change, reflecting an active Anaconda environment. To set up a Jupyter environment, continue with this excellent lesson on how to install Jupyter Notebooks on Ubuntu and start using them.
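As a sketch of what working inside the new environment looks like (assuming conda is installed; `my_env` is the environment created above), you might install and launch Jupyter like this:

```shell
# Activate the environment (older conda versions use: source activate my_env)
conda activate my_env

# Install packages into this environment only, without touching the base install
conda install --yes jupyter numpy

# Launch the notebook server; it opens in your default browser
jupyter notebook
```

Anything installed this way stays scoped to `my_env`; deactivating with `conda deactivate` returns you to the base environment.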

Conclusion: Install Anaconda Python and Jupyter Notebooks for Data Science

In this lesson, we studied how to install and start using the Anaconda environment on Ubuntu 18.04. It is an excellent environment manager to have, especially for beginners in Data Science and Machine Learning. This is just a brief introduction to the many lessons to come on Anaconda, Python, Data Science and Machine Learning. Share your feedback on the lesson with me or with the LinuxHint Twitter handle.

Source

TimelineJS: An interactive, JavaScript timeline building tool

TimelineJS 3 is an open source storytelling tool that anyone can use to create visually rich, interactive timelines to post on their websites. To get started, simply click “Make a Timeline” on the homepage and follow the easy step-by-step instructions.

TimelineJS was developed at Northwestern University’s KnightLab in Evanston, Illinois. KnightLab is a community of designers, developers, students, and educators who work on experiments designed to push journalism into new spaces. TimelineJS has been used by more than 250,000 people, according to its website, to tell stories viewed millions of times. And TimelineJS 3 is available in more than 60 languages.

Joe Germuska, the “chief nerd” who runs KnightLab’s technology, professional staff, and student fellows, explains, “TimelineJS was originally developed by Northwestern professor Zach Wise. He assigned his students a task to tell stories in a timeline format, only to find that none of the free available tools were as good as he thought they could be. KnightLab funded some of his time to develop the tool in 2012. Near the end of that year, I joined the lab, and among my early tasks was to bring TimelineJS in as a fully supported project of the lab. The next year, I helped Zach with a rewrite to address some issues. Along the way, many students have contributed. Interestingly, a group of students from Victoria University in Wellington, New Zealand, worked on TimelineJS (and some of our other tools) as part of a class project in 2016.”

“In general, we designed TimelineJS to make it easy for non-technical people to tell rich, dynamic stories on the web in the context of events in time.”

Users create timelines by adding content into a Google spreadsheet. KnightLab provides a downloadable template that can be edited to create custom timelines. Experts can use their JSON skills to create custom installations while keeping TimelineJS’s core functionality.

This easy-to-follow Vimeo video shows how to get started with TimelineJS, and I used it myself to create my first timeline.

Open sourcing the Adirondacks

Reid Larson, research and scholarly communication librarian at Hamilton College in Clinton, New York, began searching for ways to combine open data and visualization to chronicle the history of Essex County (a county in northern New York that makes up part of the Adirondacks), in the 1990s, when he was the director of the Essex County Historical Society/Adirondack History Center Museum.

“I wanted to take all the open data available on the history of Essex County and be able to present it to people visually. Most importantly, I wanted to make sure that the data would be available for use even if the applications used to present it are no longer available or supported,” Larson explains.

Now at Hamilton College, Larson has found TimelineJS to be the ideal open source program to do just what he wanted: Chronicle and present a visually appealing timeline of selected places.

“It was a professor who was working on a project that required a solution such as Timeline, and after researching the possibilities, I started using Timeline for that project and subsequent projects,” Larson adds.

TimelineJS can be used via a web browser, or the source code can be downloaded from GitHub for local use.

“I’ve been using the browser version, but I push it to the limits to see how far I can go with it, such as adding my own HTML tags. I want to fully understand it so that I can educate the students and faculty at Hamilton College on its uses,” Larson says.

An open source Eagle Scout project

Not only has Larson used TimelineJS for collegiate purposes, but his son, Erik, created an interactive historical website for his Eagle Scout project in 2017 using WordPress. The project is a chronicle of places in Waterville, New York, just south of Clinton, in Oneida County. Erik explains that he wants what he started to expand beyond the 36 places in Waterville. “The site is an experiment in online community building,” Erik’s website reads.

Larson says he did a lot of the “tech work” on the project so that Erik could concentrate on content. The site was created with Omeka, an open source web publishing platform for sharing digital collections and creating media-rich online exhibits, and Curatescape, a framework for the open source Omeka CMS.

Larson explains that a key feature of TimelineJS is that it uses Google Sheets to store and organize the data used in the timeline. “Google Sheets is a good structure for organizing data simply, and that data will be available even if TimelineJS becomes unavailable in the future.”

Larson says that he prefers using ArcGIS over KnightLab’s StoryMap because it uses spreadsheets to store content, whereas StoryMap does not. Larson is looking forward to integrating augmented reality into his projects in the future.

Create your own open source timeline

I plan on using TimelineJS to create interactive content for the Development and Alumni Relations department at Clarkson University, where I am the development communications specialist. To practice with working with it, I created a simple timeline of the articles I’ve written for Opensource.com:

As Reid Larson stated, it is very easy to use and the results are quite satisfactory. I was able to get a working timeline created and posted to my WordPress site in a matter of minutes. I used media that I had already uploaded to my Media Library in WordPress and simply copied the image address. I typed in the dates, locations, and information in the other cells and used “publish to web” under “file” in the Google spreadsheet. That produced a link and embed code. I created a new post in my WordPress site and pasted in the embed code, and the timeline was live and working.

Of course, there is more customization I need to do, but I was able to get it working quickly and easily, much as Reid said it would.

I will continue experimenting with TimelineJS on my own site, and when I get more comfortable with it, I’ll use it for my professional projects and try out the other apps that KnightLab has created for interactive, visually appealing storytelling.

What might you use TimelineJS for?

Source

Pinguy OS Puts On a Happier GNOME 3 Face | Reviews

By Jack M. Germain

Jul 17, 2018 11:06 AM PT

Pinguy OS Puts On a Happier GNOME 3 Face

Pinguy OS 18.04 is an Ubuntu-based distribution that offers a non-standard GNOME desktop environment intended to be friendlier for new Linux users.

This distro is a solid Linux OS with a focus on simple and straightforward usability for the non-geek desktop user. If you do not like tinkering with settings or having numerous power-grabbing fancy screen animations, Pinguy OS could be a good choice.

The GNOME desktop is the only user interface option, but Pinguy OS’ developer, Antoni Norman, tweaked the desktop environment with some different software options not usually packaged with GNOME.

His refusal to settle for the run-of-the-mill software typical of mainstream GNOME choices is one of this distro’s strongest features. The developer gives you better application options to create the best user experience within the modified GNOME environment.

Pinguy OS is a great pick for beginning Linux users because it is easy to use and offers a satisfying experience. It is also a no-nonsense computing platform for seasoned Linux users who want a GNOME environment that makes more sense.

Pinguy OS comes with user-friendly enhancements and out-of-the-box support for multimedia codecs and browser plugins. The modified GNOME user interface has enhanced menus, panels and dock bars. It includes a handpicked selection of popular desktop applications for many common computing tasks.

Sensible Modernizing

I last looked at Pinguy OS four years ago and found it both useful and easy to use. The developer offers a major upgrade about once yearly. This latest release, which arrived earlier this month, shows significant updating.

For instance, it includes GNOME 3.28. The developer tweaked many of the components to ensure a fast and modern OS. Gone is the gEdit text editor, in favor of Pluma. In addition to providing better performance, Pluma is a suitable drop-in replacement. The file manager app is Nemo 3.8.3.

No email client is bundled with this latest release, but Thunderbird is readily available from the repositories. The developer suggests using GNOME email notifications, a part of the GNOME desktop that works once you enter your account info into the GNOME online accounts panel.

One of the benefits of running Pinguy OS used to be its support for 32-bit systems. However, the latest tweaking done to Pinguy OS made 32-bit versions a bad user experience. This latest release does not run on very old hardware.

Changes That Work

Earlier versions of Pinguy OS ran Docky, an aging launch dock app. It did not mesh well with the latest Pinguy OS build, so gone it is. In its place are Simple Dock and Places Status Indicator.

Pinguy OS 18.04 panel bar

Pinguy OS 18.04 combines application listings, system panel bar tools and workspace switcher into one multifunction panel. Plus, it provides a panel bar for notifications and a Simple Dock for quick launch.

Simple Dock and Places Status Indicator are GNOME extensions. Like Docky, Simple Dock places a quick launch bar at the bottom of the screen. Places Status Indicator adds a menu for quickly navigating places in the system.

The Simple Dock at the bottom of the screen and the panel bar across the top provide easy access to all system tools. The menu button at the left of the top panel offers additional tweaks and improvements.

Some of the default GNOME apps have been replaced with MATE versions. This is another example of why Pinguy OS is not just another retread built on standard GNOME 3.

Earlier versions came with the Conky desktop applets, but all the adjusting done in the Pinguy OS 18.04 made it a distraction at best. The developer reasoned that the OS did not need Conky because it confused new users.

I could not agree more. I have found Conky to be clunky. Most of its displays focus on system readouts. Putting them on a desktop just adds to the clutter.

Under the Hood

Pinguy OS is basically Linux Mint infrastructure under the covers, but the GNOME 3 environment is redesigned with many nice usability features. The tweaking in this latest Pinguy OS goes well beyond the GNOME 3 you see in Linux Mint, however.

Pinguy OS has only one desktop flavor. It comes in two options, though: full version or the mini edition. This supports the developer’s goal of making an uncomplicated desktop environment.

The mini option gives you less prepackaged software, but you can add the software not included with a few mouse clicks.

This release uses Linux Kernel 4.15.0-23-generic. It also includes OpenGL version string 3.1 Mesa 18.1.1.

If you are a game player who fancies Windows games, you will like the inclusion of the Winepak repository. This makes it easy to install your favorites.

Pinguy OS 18.04 also ships with a new GDM and GTK Theme, which contributes greatly to giving the OS a more modern look.

Look and Feel

The desktop itself is clutter-free. You can not place icons there. That is a feature (or not) of the GNOME 3 desktop.

However, it also reinforces one of the distro’s driving principles. The goal of Pinguy OS is to give users a clean desktop with a fine-tuned interface that works without confusion. This distro does that.

Simplicity is not the only distinguishing trait. Pinguy OS is a thing of beauty. Pinguy OS comes with an eye-catching collection of artwork that randomly displays as a new background every five minutes or so.

Pinguy OS 18.04 desktop weather applet

Pinguy OS has a clutter-free desktop and a handy weather applet built into the top panel.

This process is controlled by the Variety application. You can change the timing interval and other options for the background images in the Variety Preferences panel.

Pinguy provides a reasonably solid out-of-the-box experience, but the GNOME 3 desktop limits functionality for the sake of simplicity. That is an important distinction.

A panel bar sits at the top of the screen. It holds the traditional menu button in the left corner and system notification icons on the right half of the bar. You can not add or remove any items from the bar.

A Matter of Taste

Do not get me wrong. Placing simplicity above functionality is a point of user perspective about the GNOME 3 desktop — I do not mean that as a criticism.

GNOME 3 is the foundation under several popular desktop environments. What you can see and do with it is a matter of what the developer does. This developer does a lot.

Pinguy OS is not your typical plain-Jane GNOME desktop. Pinguy OS is a solid, functional OS.

New Linux users will not be frustrated by it, but seasoned Linux users might want an advanced setting tool, which does not exist.

My personal preference is a bottom panel that puts notifications, quick launch icons, and a virtual workspace switcher applet a single mouse click at hand. I’d like to see a few icon launchers on the desktop for added convenience.

That is my comfort zone. Standard GNOME 3 dumbs down the process of navigating quickly. It unnecessarily hides access to moving around open applications on numerous virtual workspaces.

Pinguy OS has enough tweaking to build in a suitable workaround for such limitations. So in that regard, this distro gives you a better integration of the GNOME desktop.

Change for the Better

Earlier versions of Pinguy OS used the default full-screen display to show installed applications. The current release has a much better menu system. The far left corner of the panel bar has a Menu button and a Places button.

Click Places for a dropdown listing of folders such as downloads, documents, music, pictures and videos. Clicking on any of these opens a file manager with more options.

Click the Menu button to open a trilogy of functionality. This is a handy mechanism that pulls together what usually is done with several clicks in standard GNOME.

The Simple Dock provides quick access to a few essentials. The apps there include the Web browser, software store, terminal, trash and system monitoring tools.

Multipurpose Panel Bar

When you click the Main Menu button, a panel drops down from the top left corner of the screen. Across the top of this panel are buttons to restart the GNOME shell; suspend, shut down or log out; lock the screen; view Web bookmarks; view recent files; toggle the startup apps view; and view applications in list or grid view.

A search window makes finding applications fast. As you type letters, a list of icons for matching applications appears. Click the gear button in the far right of this top row to open a GNOME Menu settings panel. It is here that you can turn on/off numerous features such as activating hot corners.

Down the left edge of the main menu panel is a list of categories that includes Frequent Apps and Favorite Apps. You see that list in the large display area in the center of the dropdown panel. Depending on whether you set grid or list view, a vertical list of program titles fills either the display area or a mini version of the full-screen display that you see in standard GNOME 3.

Built-in Workspace Switcher

What I really dislike about the usual display for virtual workspaces is having to push the mouse pointer into the top left hot corner to slide out the panel from the right edge of the screen. Pinguy OS has a much better solution.

The right edge of the Main Menu panel automatically shows the virtual workspaces in thumbnail view. What a concept! It is simple and efficient.

This approach makes it very handy to navigate among different virtual desktops with a single mouse click. Other features let you use window actions to move an application to another workspace or jump to a new location using shortcuts.

Settings Supremacy

The top panel bar in GNOME (including Pinguy OS) does not dock open applications or provide any panel applets. That short-circuits many of the special features the panel provides in other Linux desktop environments.

However, Pinguy OS makes up for that by providing a consolidation of system settings. This is a very useful alternative.

Access the system settings from the Main Menu /System Tools /Settings. The list of settings and preferences resembles the dropdown top panel on an Android device. It is very straightforward and complete.

Pinguy OS 18.04 preference panels

A design based on simplicity puts nearly all of the system settings into an Android-style set of preference panels.

A second settings panel of sorts is available by clicking the Gear button at the far right top of the Main Menu. Click on a category to see a full panel view of preferences to turn on/off. This settings panel provides much of the functionality that would otherwise be provided in a fully functional panel bar at the top (or bottom) of the Linux screen.

Bottom Line

Pinguy OS may not satisfy power users who like to control navigation with keyboard shortcuts and advanced system settings. However, if you just want your system to work from the start, Pinguy OS has a lot going for it.

Do not let this distro’s self-avowed fervor for simplicity let you misjudge its power and usability. If you think it is too basic for serious users, your thinking might be skewed.

Even if you do not prefer the GNOME 3 desktop, give Pinguy OS a try. It is not your standard GNOME. This OS improves upon most of GNOME 3’s shortcomings. It offers a solid, better GNOME integration.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I’ll consider them for a future Linux Picks and Pans column.

And use the Reader Comments feature below to provide your input!

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.

Source

Cool-Retro-Term is a great Mimic of old Command Lines, Install in Ubuntu/Linux Mint – NoobsLab

Cool-retro-term is a free terminal emulator developed by Filippo Scognamiglio that mimics the look and feel of old cathode-tube screens. If you are tired of your current terminal, it comes in handy as eye candy; it is a customizable and reasonably lightweight terminal emulator. It uses the Konsole engine, which is powerful and mature, and requires Qt 5.2 or higher to run.

It has pre-configured templates, so you can use them with just one click. The profiles include: Amber, Green, Scanlines, Pixelated, Apple ][, Vintage, IBM Dos, IBM 3287, and Transparent Green. Furthermore, you can create your own profile and use it.

Its preferences offer a lot of customization: you can adjust brightness, contrast, and opacity; the font; font scaling and width; cool effects for the terminal; and you can control FPS, texture quality, scanlines quality, and bloom quality. Furthermore, you can dive into the settings to change colors, shadows, etc.

cool retro terminal

Preferences

Note: Make sure to use the right commands for your distribution version.

Available for Ubuntu 18.04 Bionic/Linux Mint 19/18.3/18.2/and other Ubuntu derivatives
To install Cool-Retro-Term in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
Available for Ubuntu 16.04 Xenial/14.04 Trusty/12.04 Precise/Linux Mint 18/17/13/ and other Ubuntu derivatives
To install Cool-Retro-Term in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
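The distribution-specific commands did not survive in this copy of the article. As a hedged alternative to the original PPA instructions, cool-retro-term is typically installable from the standard Ubuntu universe repository on recent releases, or as a snap:

```shell
# On recent Ubuntu/Linux Mint releases, from the standard repositories
sudo apt update
sudo apt install cool-retro-term

# Or, on systems with snap support (drop --classic if the snap is strictly confined)
sudo snap install cool-retro-term --classic
```

Pick whichever source your distribution version actually provides; the PPA route from the original article may also still work on older releases.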
What do you say about this great application? Let us know in the comments below.

Source
