How to Set Up an FTP Server with vsftpd on Ubuntu 18.04

FTP (File Transfer Protocol) is a standard network protocol used to transfer files to and from a remote server. For more secure and faster data transfers, use SCP.

There are many open source FTP servers available for Linux. The most popular and widely used are PureFTPd, ProFTPD and vsftpd. In this tutorial we’ll be installing vsftpd. It is a stable, secure and fast FTP server. We will also show you how to configure vsftpd to restrict users to their home directory and encrypt the entire transmission with SSL/TLS.

Although this tutorial is written for Ubuntu 18.04 the same instructions apply for Ubuntu 16.04 and any Debian based distribution, including Debian, Linux Mint and Elementary OS.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Installing vsftpd on Ubuntu 18.04

The vsftpd package is available in the Ubuntu repositories. To install it, simply run the following commands:

sudo apt update
sudo apt install vsftpd

The vsftpd service will start automatically after the installation process is complete. Verify it by printing the service status:

sudo systemctl status vsftpd

The output will look something like below, showing that the vsftpd service is active and running:

* vsftpd.service - vsftpd FTP server
   Loaded: loaded (/lib/systemd/system/vsftpd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-10-15 03:38:52 PDT; 10min ago
 Main PID: 2616 (vsftpd)
    Tasks: 1 (limit: 2319)
   CGroup: /system.slice/vsftpd.service
           └─2616 /usr/sbin/vsftpd /etc/vsftpd.conf

Configuring vsftpd

The vsftpd server can be configured by editing the /etc/vsftpd.conf file. Most of the settings are documented inside the configuration file. For all available options visit the official vsftpd page.

In the following sections we will go over some important settings needed to configure a secure vsftpd installation.

Start by opening the vsftpd config file:

sudo nano /etc/vsftpd.conf

1. FTP Access

We’ll allow access to the FTP server only to local users. Find the anonymous_enable and local_enable directives and verify that your configuration matches the lines below:

/etc/vsftpd.conf

anonymous_enable=NO
local_enable=YES

2. Enabling uploads

Uncomment the write_enable setting to allow changes to the filesystem, such as uploading and deleting files:

/etc/vsftpd.conf

write_enable=YES

3. Chroot Jail

To prevent FTP users from accessing any files outside of their home directories, uncomment the chroot setting:

/etc/vsftpd.conf

chroot_local_user=YES

By default, to prevent a security vulnerability, vsftpd will refuse to upload files when chroot is enabled and the directory the users are locked into is writable.

  • Method 1. – The recommended way to allow uploads is to keep chroot enabled and configure the FTP directories. In this tutorial we will create an ftp directory inside the user's home, which will serve as the chroot, and a writable uploads directory for uploading files.

    /etc/vsftpd.conf

    user_sub_token=$USER
    local_root=/home/$USER/ftp

  • Method 2. – Another option is to add the following directive to the vsftpd configuration file. Use this option only if you must grant your user write access to their home directory.

    /etc/vsftpd.conf

    allow_writeable_chroot=YES

4. Passive FTP Connections

vsftpd can use any port for passive FTP connections. We’ll specify the minimum and maximum range of ports and later open the range in our firewall.

Add the following lines to the configuration file:

/etc/vsftpd.conf

pasv_min_port=30000
pasv_max_port=31000

5. Limiting User Login

To allow only certain users to log in to the FTP server, add the following lines at the end of the file:

/etc/vsftpd.conf

userlist_enable=YES
userlist_file=/etc/vsftpd.user_list
userlist_deny=NO

When this option is enabled you need to explicitly specify which users are able to login by adding the user names to the /etc/vsftpd.user_list file (one user per line).
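
For example, to allow a hypothetical user named newftpuser (the same account created later in this tutorial) to log in, you would append the name to the list:

echo "newftpuser" | sudo tee -a /etc/vsftpd.user_list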

6. Securing Transmissions with SSL/TLS

In order to encrypt the FTP transmissions with SSL/TLS, you’ll need to have an SSL certificate and configure the FTP server to use it.

You can use an existing SSL certificate signed by a trusted Certificate Authority or create a self signed certificate.

If you have a domain or subdomain pointing to the FTP server’s IP address you can easily generate a free Let’s Encrypt SSL certificate.
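
For example, assuming certbot is installed and a hypothetical domain such as ftp.example.com already points to this server, obtaining a certificate could look roughly like this (a sketch only; port 80 must be reachable for the standalone challenge):

sudo apt install certbot
sudo certbot certonly --standalone -d ftp.example.com

The resulting fullchain.pem and privkey.pem under /etc/letsencrypt/live/ftp.example.com/ would then replace the self-signed certificate in the rsa_cert_file and rsa_private_key_file directives shown below.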

In this tutorial we will generate a self signed SSL certificate using the openssl command.

The following command will create a 2048-bit private key and a self-signed certificate valid for 10 years. Both the private key and the certificate will be saved in the same file:

sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem

Now that the SSL certificate is created, open the vsftpd configuration file:

sudo nano /etc/vsftpd.conf

Find the rsa_cert_file and rsa_private_key_file directives, change their values to the path of the .pem file and set the ssl_enable directive to YES:

/etc/vsftpd.conf

rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES

If not specified otherwise, the FTP server will use only TLS to make secure connections.
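
If you prefer to make that explicit, vsftpd also accepts directives that require TLS for local users and disable the older SSL protocols. The lines below are optional additions and are not part of the final configuration shown in the next section:

/etc/vsftpd.conf

ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES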

Restart the vsftpd Service

Once you are done editing, the vsftpd configuration file (excluding comments) should look something like this:

/etc/vsftpd.conf

listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
user_sub_token=$USER
local_root=/home/$USER/ftp
pasv_min_port=30000
pasv_max_port=31000
userlist_enable=YES
userlist_file=/etc/vsftpd.user_list
userlist_deny=NO

Save the file and restart the vsftpd service for changes to take effect:

sudo systemctl restart vsftpd

Opening the Firewall

If you are running a UFW firewall, you'll need to allow FTP traffic.

To open port 21 (FTP command port), port 20 (FTP data port) and 30000-31000 (Passive ports range), run the following commands:

sudo ufw allow 20:21/tcp
sudo ufw allow 30000:31000/tcp

To avoid being locked out we will also open the SSH port:

sudo ufw allow OpenSSH

Reload the UFW rules by disabling and re-enabling UFW:

sudo ufw disable
sudo ufw enable

To verify the changes run:

sudo ufw status

Status: active

To                         Action      From
--                         ------      ----
20:21/tcp                  ALLOW       Anywhere
30000:31000/tcp            ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
20:21/tcp (v6)             ALLOW       Anywhere (v6)
30000:31000/tcp (v6)       ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)

Creating FTP User

To test our FTP server we will create a new user.

  • If you already have a user which you want to grant FTP access skip the 1st step.
  • If you set allow_writeable_chroot=YES in your configuration file skip the 3rd step.
  1. Create a new user named newftpuser:

    sudo adduser newftpuser

  2. Add the user to the allowed FTP users list:

    echo "newftpuser" | sudo tee -a /etc/vsftpd.user_list

  3. Create the FTP directory tree and set the correct permissions:

    sudo mkdir -p /home/newftpuser/ftp/upload
    sudo chmod 550 /home/newftpuser/ftp
    sudo chmod 750 /home/newftpuser/ftp/upload
    sudo chown -R newftpuser: /home/newftpuser/ftp

    As discussed in the previous section, the user will be able to upload files to the ftp/upload directory.

At this point your FTP server is fully functional and you should be able to connect to your server with any FTP client that can be configured to use TLS encryption such as FileZilla.
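
To quickly verify the TLS setup from the command line you can also list the user's home directory with curl, replacing your_server_ip with your server's address; --ssl-reqd forces TLS and --insecure is only needed because the certificate is self-signed:

curl --ssl-reqd --insecure --user newftpuser ftp://your_server_ip/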

Disabling Shell Access

By default, unless explicitly specified otherwise, a newly created user has SSH access to the server.

To disable shell access, we will create a new shell which will simply print a message telling the user that their account is limited to FTP access only.

Create the /bin/ftponly shell and make it executable:

echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly

Append the new shell to the list of valid shells in the /etc/shells file:

echo "/bin/ftponly" | sudo tee -a /etc/shells

Change the user shell to /bin/ftponly:

sudo usermod newftpuser -s /bin/ftponly
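
To confirm the change, look up the user's entry in the password database; the last field should now read /bin/ftponly:

getent passwd newftpuser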

Conclusion

In this tutorial, you learned how to install and configure a secure and fast FTP server on your Ubuntu 18.04 system.

Source

OrientDB: How To Install the NoSQL DataBase on CentOS 7



OrientDB NoSQL DBMS

Introduction – NoSQL and OrientDB

When talking about databases, we generally refer to two major families: RDBMS (Relational Database Management Systems), which use Structured Query Language (SQL) as the interface for users and application programs, and non-relational database management systems, or NoSQL databases. OrientDB belongs to the second family.

Between the two models there is a huge difference in the way they consider (and store) data.

Relational Database Management Systems

In the relational model (used by MySQL, or its fork MariaDB), a database is a set of tables, each containing one or more data categories organized in columns. Each row of a table contains a unique instance of data for the categories defined by the columns.

As an example, consider a table containing customers. Each row corresponds to a customer, with columns for name, address and any other required information.
Another table could contain orders, with product, customer, date and everything else. A user of this DB can obtain a view that fits their needs, for example a report about customers that bought products in a specific price range.

NoSQL Database Management Systems

In NoSQL (or "Not only SQL") database management systems, databases are designed around different data formats, such as document, key-value, graph and others. Database systems built on this paradigm are designed especially for large-scale database clusters and huge web applications. Today, NoSQL databases are used by major companies like Google and Amazon.

Document databases

Document databases store data in document format. These databases are most often used with JavaScript and JSON, although XML and other formats are also supported. An example is MongoDB.

Key-value databases

This is a simple model pairing a unique key with a value. These systems are performant and highly scalable, and are often used for caching. Examples include BerkeleyDB and MemcacheDB.

Graph databases

As the name suggests, these databases store data using graph models, meaning that data is organized as nodes and the interconnections between them. This is a flexible model which can evolve over time and use. These systems are applied where there is a need to map relationships.
Examples are IBM Graph, Neo4j and OrientDB.

OrientDB

OrientDB, as stated by the company behind it, is a multi-model NoSQL Database Management System that "combines the power of graphs with documents, key/value, reactive, object-oriented and geospatial models into one scalable, high-performance operational database".

OrientDB has also support for SQL, with extensions to manipulate trees and graphs.

Prerequisites

  • One server running CentOS 7
  • OpenJDK or Oracle Java installed on the server

Goals

This tutorial explains how to install and configure OrientDB Community on a server powered by CentOS 7.

OrientDB Installation

Step 1 – Create a New User

First of all, create a new user to run OrientDB. Doing this lets the database run in an isolated environment. To create the new user, execute the following command:

# adduser orientdb -d /opt/orientdb

Step 2 – Download OrientDB Binary Archive

At this point, download the OrientDB archive in the /opt/orientdb directory:

# wget https://orientdb.com/download.php?file=orientdb-community-importers-2.2.29.tar.gz -O /opt/orientdb/orientdb.tar.gz

Note: at the time of writing, 2.2.29 is the latest stable version.

Step 3 – Install OrientDB

Extract the downloaded archive:

# cd /opt/orientdb
# tar -xf orientdb.tar.gz

tar will extract the files into a directory named orientdb-community-importers-2.2.29. Move everything into /opt/orientdb:

# mv orientdb-community*/* .

Make the orientdb user the owner of the extracted files:

# chown -R orientdb:orientdb /opt/orientdb

Start OrientDB Server

Starting the OrientDB server requires the execution of the shell script contained in orientdb/bin/:

# /opt/orientdb/bin/server.sh

During the first start, the server will display some information and ask for an OrientDB root password:

+---------------------------------------------------------------+
|                  WARNING: FIRST RUN CONFIGURATION              |
+---------------------------------------------------------------+
| This is the first time the server is running. Please type a    |
| password of your choice for the 'root' user or leave it blank  |
| to auto-generate it.                                            |
|                                                                 |
| To avoid this message set the environment variable or JVM      |
| setting ORIENTDB_ROOT_PASSWORD to the root password to use.    |
+---------------------------------------------------------------+

Root password [BLANK=auto generate it]: ********
Please confirm the root password: ********

To stop OrientDB, hit Ctrl+C.
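
As the first-run message notes, you can avoid the interactive prompt by setting the ORIENTDB_ROOT_PASSWORD environment variable before starting the server, for example (pick your own password):

# export ORIENTDB_ROOT_PASSWORD=your_root_password
# /opt/orientdb/bin/server.sh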

Create a systemd Service for OrientDB

Create a new systemd service to easily manage starting and stopping OrientDB. With a text editor, create a new file:

# $EDITOR /etc/systemd/system/orientdb.service

In this file, paste the following content:

[Unit]
Description=OrientDB service
After=network.target

[Service]
Type=simple
ExecStart=/opt/orientdb/bin/server.sh
User=orientdb
Group=orientdb
Restart=always
RestartSec=9
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=orientdb

[Install]
WantedBy=multi-user.target

Save the file and exit.

Reload the systemd daemon:

# systemctl daemon-reload

At this point, start OrientDB with the following command:

# systemctl start orientdb

Enable it to start at boot time:

# systemctl enable orientdb
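
As a quick sanity check, verify that the service is running and, optionally, connect to it with the console client shipped in the bin directory:

# systemctl status orientdb
# /opt/orientdb/bin/console.sh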

Conclusion

In this tutorial we have seen a brief comparison between RDBMS and NoSQL DBMS. We have also installed and completed a basic configuration of OrientDB Community server-side.

This is the first step for deploying a full OrientDB infrastructure, ready for managing large-scale systems data.

Source

AMD Dual EPYC 7601 Benchmarks – 9-Way AMD EPYC / Intel Xeon Tests On Ubuntu 18.10 Server Review

Arriving earlier this month at Phoronix was a Dell PowerEdge R7425 server equipped with two AMD EPYC 7601 processors, 512GB of RAM, and 20 Samsung 860 EVO SSDs, making for a very interesting test platform and our first based on a dual EPYC design, with our many other EPYC Linux benchmarks to date being 1P. Here is a look at the full performance capabilities of this 64-core / 128-thread server compared to a variety of other AMD EPYC and Intel Xeon processors, while also doubling as an initial look at the performance of these server CPUs on Ubuntu 18.10.

This Dell PowerEdge R7425 server with two EPYC 7601 processors has been absolutely dominating the benchmarks since its arrival. Last week I took an initial look at its performance capabilities by checking out Linux application scaling up to 128 threads, while this article contains many more benchmarks against the other current server CPUs I had available for benchmarking. This benchmark comparison with Ubuntu 18.10 consisted of:

– EPYC 7251

– EPYC 7351P

– EPYC 7401P

– EPYC 7551

– EPYC 7601

– 2 x EPYC 7601

– Xeon E5-2687W v3

– Xeon Silver 4108

– 2 x Xeon Gold 6138

This was based upon the processors/hardware I had available. While Ubuntu 18.10 is not a long-term support (LTS) release like Ubuntu 18.04, it makes for an interesting test target due to its newer software components: namely the Linux 4.18 kernel and GCC 8.2 compiler… This should be akin to Red Hat Enterprise Linux 8.0 / CentOS 8 and other upcoming Linux distribution releases. This up-to-date stable Linux kernel also provides all of the latest Spectre / Meltdown / Foreshadow protection and the systems had their up-to-date BIOS/firmware releases.

While the number of current-generation Xeon systems I have is limited, thanks to the open-source Phoronix Test Suite it’s very easy to compare any of your own Linux systems/servers to the benchmarks found in this article while doing so in a fully-automated and side-by-side manner. Simply install the Phoronix Test Suite and run phoronix-test-suite benchmark 1810150-SK-AMDEPYC1243 to kick off your own standardized benchmark comparison.
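
On Ubuntu, for example, installing the suite from the repositories and replaying this comparison should look roughly like this (the result identifier is the one quoted above):

sudo apt install phoronix-test-suite
phoronix-test-suite benchmark 1810150-SK-AMDEPYC1243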

Let’s have a look at how this dual EPYC Dell PowerEdge server performs against the assortment of other Intel/AMD processors on Ubuntu 18.10.



Source

Automotive Grade Linux dips into telematics with 6.0 release

Oct 16, 2018 — by Eric Brown

The Automotive Grade Linux project has released Unified Code Base 6.0, an in-vehicle infotainment stack with new software profiles for telematics and instrument clusters.

The Linux Foundation’s Automotive Grade Linux project has made version 6.0 (“Funky Flounder”) of its Unified Code Base distribution available for download. The new release for the first time expands the open source in-vehicle infotainment (IVI) stack to support telematics hooks and instrument cluster displays.

“The addition of the telematics and instrument cluster profiles opens up new deployment possibilities for AGL,” stated Dan Cauchy, Executive Director of Automotive Grade Linux at the Linux Foundation. “Motorcycles, fleet services, rental car tracking, basic economy cars with good old-fashioned radios, essentially any vehicle without a head unit or infotainment display can now leverage the AGL Unified Code Base as a starting point for their products.”

Key features for UCB 6.0 include:

  • Device profiles for telematics and instrument cluster
  • Core AGL Service layer can be built stand-alone
  • Reference applications including media player, tuner, navigation, web browser, Bluetooth, WiFi, HVAC control, audio mixer and vehicle controls
  • Integration with simultaneous display on IVI system and instrument cluster
  • Multiple display capability including rear seat entertainment
  • Wide range of hardware board support including Renesas, Qualcomm Technologies, Intel, Texas Instruments, NXP and Raspberry Pi
  • Software Development Kit (SDK) with application templates
  • SmartDeviceLink ready for easy integration and access to smartphone applications
  • Application Services APIs for navigation, voice recognition, Bluetooth, audio, tuner and CAN signaling
  • Near Field Communication (NFC) and identity management capabilities including multilingual support
  • Over-The-Air (OTA) upgrade capabilities
  • Security frameworks with role-based-access control

Mercedes-Benz Vans Sprinter

In June, AGL announced that Mercedes-Benz Vans was using UCB for upcoming vans equipped with next generation connectivity and robotics technology. The announcement followed Toyota’s larger commitment to AGL for its 2018 Toyota Camry cars, as well as some Prius models.

UCB 6.0 does not include the virtualization framework for UCB, which was announced in July. Destined for a future UCB release, the virtualization technology includes a novel bus architecture that can encompass both critical and non-critical systems and support a variety of open source and proprietary virtualization solutions running simultaneously.

In other recent automotive technology news, last month the Wall Street Journal reported that a fully featured version of Google’s Android Auto technology will be used for IVI systems built into Renault-Nissan-Mitsubishi Alliance cars starting in 2021. The carmaker collective is the world’s largest, with 10.6 million vehicles sold in 2017. Earlier this year, Google and Volvo demonstrated a 2020 XC40 model that runs Android Auto together with Volvo’s Sensus skin.

Further information

AGL’s Unified Code Base 6.0 is available now for download. More information may be found on this UCB 6.0 release notes page.

Source

New Manjaro Beta Builds a Better Arch | Reviews

By Jack M. Germain

Oct 10, 2018 5:00 AM PT


Manjaro Linux offers the best of two worlds. It puts a user-friendly face on an Arch-based distro, and it gives you a choice of sensible and productive desktop interfaces.

The Manjaro Linux team released its latest update, Manjaro Linux 18.0 Beta 7, with KDE, Xfce and GNOME desktop editions late last month. All three are solid performers and seem ready for final release. A key benefit of Manjaro Linux is its rolling release model, which pushes new versions to users without requiring the full reinstallation that most other Linux distributions demand.

Manjaro Linux is a fast, traditional desktop-oriented operating system based on Arch Linux. Arch itself is renowned for being an exceptionally fast, powerful and lightweight distribution that provides access to the very latest cutting-edge — and bleeding-edge — software. Manjaro exceeds that reputation and delivers more benefits.

Using an Arch-based distro such as Manjaro is not the same thing as using a pure Arch Linux descendant, however. Arch is geared toward more experienced or technically minded users.

Arch-based distros generally are beyond the reach of those who lack the technical expertise (or persistence) required to use them. If for no other reason, Arch Linux derivatives are monsters to install and configure. Those processes require more than a passing knowledge of the command line interface. Manjaro Linux is different.

Developed in Austria, France and Germany, Manjaro provides all the benefits of the Arch operating system with a focus on user-friendliness and accessibility. The prime directive for all things Arch is simplicity, modernity and pragmatism.

Manjaro Linux’s in-house system tools, easy installation application and better range of software packages make it a better Arch-based distro than Arch Linux itself. Manjaro offers much more.

Arch-Based or Real Arch?

Manjaro Linux is not Arch Linux — but yes, it is based on Arch underpinnings and Arch principles. That is a good thing.

Nor is using Manjaro Linux the same as using pure Arch or more direct derivatives, such as Antergos Linux, which I recently reviewed. Even if Arch is outside your comfort zone, though, you will be rewarded with a satisfying computing experience with Manjaro Linux.

I have not seriously looked at Manjaro Linux in several years. I had a take-it-or-leave-it reaction to the much earlier version I checked out back then. However, Manjaro has come a long way. The developers’ goal to be an independent Arch branch is the key to the distro’s success.

Manjaro is independent of Arch and has its own development team. Manjaro’s user base targets newcomers, not the more technically inclined, experienced Linux user.

Manjaro breaks away from the pure Arch mold to make a better Arch-based platform. It is easier to use. A few more distinct differences separate Manjaro and Arch.

A Better Way

Manjaro’s independence is one of its key distinguishing traits. That is clearly evident in its software packages. Manjaro has its own repositories that are not affiliated with Arch Linux. The benefit is that fewer things break, because the Manjaro team takes more time to make sure its software packages are compatible. In addition, the repositories contain software packages the Arch community does not provide.

Manjaro also includes its own distribution-specific tools, such as the Manjaro Hardware Detection utility and the Manjaro Settings Manager. Also, Manjaro has its own way of doing system functions, compared to Arch.

Manjaro’s developers built this Arch Linux derivative around a series of system apps that make using it much easier. For example, Arch distros usually require familiarity with terminal windows to carry out package installations and removals. Manjaro’s front-end assistance and improved system tools give less-experienced users considerable handholding.

One software feature Manjaro closely shares with Arch Linux is compatibility with the AUR, or Arch Users Repository. This added access expands your free software stores within the Arch community. It has more of the latest software additions that are not yet vetted for the official Arch repository. This is a community-driven repository for Arch Linux users.

An added benefit of using the AUR is a simplified package installation process in Manjaro. Community members port applications to the AUR and provide scripts to install applications not packaged for Arch or Manjaro.

Another invaluable tool is the console-based net-installer Manjaro-Architect. You can install any of Manjaro’s official or community-maintained editions, or you can configure your own custom-built Manjaro system.

If you have several desktop or laptop computers and want to create identical systems, this is the way to get that job done efficiently and painlessly. Manjaro-Architect downloads all packages in their latest versions during installation. Manjaro-Architect supports systemd, disk encryption, and a variety of file systems, including LVM and btrfs. You can view a tutorial for using the architect tool on the Manjaro forum.

Manjaro Kudos

I was particularly impressed with Manjaro’s hardware support, especially for Broadcom wireless cards. Several of my laptops are plagued with problems related to these quirky wireless devices. More times than not, they fail to be detected when I test a Linux distro. Manjaro eliminates that issue.

I like not having to configure PPAs (Personal Package Archives) in a package manager when I install less standard software packages in other distros. The large software repository Manjaro provides plus the Arch User Repository make PPAs unnecessary.

This year I’ve noticed increasing delays with other Linux distros in booting into the desktop screen. Some of these delays are caused by patches needed to work around vulnerability issues with Intel and AMD processor chips. That is less of a problem with Manjaro Linux. It has a fast bootup sequence.

Extra Editions

I downloaded the latest official Manjaro ISOs for the KDE, Xfce and GNOME desktop editions for testing. I was pleased to see that the only real difference among them was the look and feel of the desktop environments. The specialized in-house system apps and Manjaro-specific software provided a unified computing platform across all three desktop editions.

Numerous community-maintained editions provide some newer and experimental alternative desktop editions. These community releases still carry the release label of 17.1.12, indicating last year’s base releases. According to a note on the website, these community desktop alternatives are updated with the latest software, however.

Manjaro-GNOME desktop

The Manjaro-GNOME desktop includes the latest refinements that make GNOME easy to use.

These alternative desktop environments range from a few better-known choices to a couple of very obscure desktop projects: Awesome, Bspwm, Budgie, Cinnamon, Deepin, i3, LXDE/LXQT, Mate and Openbox.

The community editions might be more suitable for less-experienced Linux users. The only drawback is a delay in updating the current versions.

Tough Choices

Picking a preferred desktop from the three primary candidates — KDE, Xfce and GNOME — is largely a matter of personal choice. All three desktop environments worked as expected.

Some Linux distros use in-house modifications to the desktop to tweak performance or better align the desktop environment with the distro’s philosophy and design styles. All three releases appeared to be fairly standard versions.

The Xfce desktop is a lightweight environment that is fast and uses fewer system resources. It is visually appealing with little to no eye candy or animations. Despite its so-called lightweight structure, Xfce is a fully functional desktop with modern features and a fair amount of configurability.

Manjaro-Xfce edition user interface

The Manjaro-Xfce edition sports the welcome screen common to all Manjaro editions. The standard Xfce user interface offers a modern integration of classic Linux bottom panel and simplified main menu.

The GNOME Desktop Environment uses the Wayland display server by default. It has a simplified appearance with a less impressive feature set. Most of its customization potential is done via extensions.

The KDE desktop is the most feature-rich and versatile desktop environment of the three. It provides several different menu styles to access applications. Its built-in interface provides easy access for installing new themes.

Manjaro-KDE user interface

The Manjaro-KDE edition has an attractive collection of background images, along with a multipurpose main menu panel, plus lots of eye candy displays that make the user interface fun and productive.

One of the pluses in running the KDE edition is the desktop customization. You have access to a collection of snappy widgets you can add to your desktop. The result is a much more configurable resource-heavy desktop.

Bottom Line

Regardless of which desktop you select, the welcome screen introduces Manjaro tools and get-acquainted details such as documentation, support tips, and links to the project site.

You can get a full experience in using the live session ISOs without making any changes to the computer’s hard drive. That is another advantage to running Manjaro Linux over a true Arch distro. Arch distros usually do not provide live session environments. Most that do lack any automatic installation launcher from within the live session.

Caution: When you attempt to run the boot menu from the Manjaro DVD, pay attention to the startup menu. It is a bit confusing. To start the live session, go halfway down the list of loading choices to select the Boot Manjaro option. The other menu options let you configure non-default choices for keyboard, language, etc.

After the live medium loads the Manjaro live session, browse the categories in the welcome window. You can click the Launch Installer button in the welcome window or launch it after experiencing the live session by clicking on the desktop install icon or running the installation program from the main menu.

Installation is a simple and straightforward process. The Calamares installer allows newcomers to easily set up the distro. It gives advanced users lots of customization options.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I’ll consider them for a future Linux Picks and Pans column.

And use the Reader Comments feature below to provide your input!

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.

Source

A Cybersecurity Weak Link: Linux and IoT


Linux powers many of the IoT devices on which we’ve come to rely — something that enterprises must address.

When Linus Torvalds developed a free operating system back in 1991 in his spare time, nobody could have guessed where it would lead.

Linux is not only the backbone of the Internet and the Android operating system, it’s also in domestic appliances, motor vehicles, and pretty much anything else that requires a minimal operating system to run dedicated software. The Internet of Things (IoT) is very much powered by Linux.

But when Chrysler announced a recall of 1.4 million vehicles back in 2015 after a pair of hackers demonstrated a remote hijack of a Jeep’s digital systems, the risks involved with hacking IoT devices were dramatically illustrated.

So, what does the rise of Linux and IoT mean for cybersecurity in the enterprise? Let’s take a look.

Our Networks Have Changed
Today’s defense solutions and products mostly address Windows-based attacks. It’s the most prevalent operating system in the enterprise, and the majority of system administrators are tasked with solving the security problems it brings.

Over time, however, the popularity of Windows in enterprise IT has weakened. A growing number of DevOps and advanced users are choosing Linux for their workstations. In parallel, the internal and external services a typical enterprise offers have moved away from Windows-based systems to Linux: Ubuntu, SUSE, and Red Hat. Linux containers have broad appeal for enterprises because they make it easier to ensure consistency across environments and multiple deployment targets such as physical servers, virtual machines, and private or public clouds. However, many Linux container deployments are focused on performance, which often comes at the expense of security.

Beyond that, every device used in the network is now connected to the same networks where all the most valuable assets reside. What used to be a simple fax machine has now become a server. Our switches and routers are moving into the backbone of our most secure networks, bringing along the potential for cyber breaches as they do so.

Malware Authors’ Heaven
Let’s shift our attention from the defender to the attackers, whose strategy often is to use minimal effort for maximum impact. In many cases, keeping things simple proves to be enough.

If you look at your network from the attacker’s perspective, there are enough open doors to penetrate without the hassle of crossing the security mechanisms of the most common operating system. Of course, that doesn’t mean you can relax the effort to secure your Windows devices; there are still some severe weaknesses (social engineering anyone?).

Here are a few notable breaches involving IoT or, by extension, Linux-controlled devices:

1. Compromising a Network by Sending a Fax
Check Point researchers have revealed details of two critical remote code execution (RCE) vulnerabilities they discovered in the communication protocols used in tens of millions of fax machines globally. (A patch is available on HP’s support page.)

2. The Mirai Botnet
In October 2016, the largest distributed denial-of-service attack ever was launched on service provider Dyn using an IoT botnet, which led to huge portions of the Internet going down. The Mirai botnet caused infected computers to continually search the Internet for vulnerable IoT devices such as digital cameras and DVRs, and then used known default usernames and passwords to log in and infect them with malware.

3. 465,000 Abbott Pacemakers Vulnerable to Hacking
In the summer of 2016, the FDA and Homeland Security issued alerts about vulnerabilities in Abbott pacemakers that required a firmware update to close security holes. The unpatched firmware made it possible for an attacker to drain the pacemaker battery or exfiltrate user medical data. (The firmware was updated a year later.)

Regaining Control
As there are many different IoT devices and inherent vulnerabilities, patching can be overwhelming. That said, you can’t protect what you can’t see, so start with the basics: map out what you have and gain visibility into traffic, including the growing blind spot of encrypted traffic. This will allow you to introduce IoT security into your already existing security program.

The next step is to ensure no default authentication is set for any of your devices and to start patching. Patching can’t fix everything, but it can discourage any attackers probing your network.

On the Linux side, there are enterprise-grade solutions available, some of which are more intrusive than others: they’ll cover your assets at the cost of kernel intrusion. Other Linux-based solutions focus on visibility and monitoring “userland” behavior and processes. This allows you to keep more control, but also can result in easier bypasses for malware.

Conclusion
Although preparation is the key to addressing IoT and Linux cyberattacks, there is still much else that can be done. On the IoT side, device manufacturers need to develop a common set of security mechanisms and standards. Until that time, the best approach is to reduce the attack surface to a bare minimum: retire old devices, patch all devices that are a must, and use vendors that invest in security and enforce authentication wherever possible. On the Linux side, the situation is somewhat better, as software solutions and the main vendors continue to invest in securing the operating systems. However, there’s no doubt that malware authors will persist in exploring and exploiting weaknesses in the OS and software whenever and wherever they find them.

While defenders need to seal every gap and plug every hole, an attacker just needs one way in. In some cases, that could come from your Linux and IoT. An IoT revolution is occurring, and the speed of change is bringing with it multiple security implications, some of which may be as yet unknown. The enterprise needs to be ready, and it needs to be vigilant.



Migo Kedem is the Senior Director of Products and Marketing at SentinelOne. Before joining SentinelOne, Mr. Kedem spent a decade building cybersecurity products for Palo Alto Networks and Check Point.




Source

Celebrating KDE’s 22nd Birthday with Some Inspiring Facts from its Glorious Past!

Last updated October 14, 2018 By Avimanyu Bandyopadhyay

Wishing A Very Happy Birthday to KDE!

Let us celebrate this moment by looking back at its glorious history with some inspiring facts about this legendary and much-loved desktop environment!

Happy Birthday KDE

22 years ago, Matthias Ettrich (now a Computer Scientist and Software Engineer at Here), then a Computer Science student at the Eberhard Karls University of Tübingen, was not quite happy as a Common Desktop Environment (CDE) user.

He wanted an interface that was more comfortable, simpler and easy to use, with a better look and feel. Thus, the Kool Desktop Environment (KDE) project was born!

Note that the name KDE is clearly a pun on CDE!

Trivia: The official mascot of KDE is Konqi, who has a girlfriend named Katie. There previously was a wizard named Kandalf, but he was later replaced by Konqi because many people loved and preferred the mascot to be this charming and friendly dragon!

Screenshot of an earlier version of the KDE desktop
Konqi from the early days, who replaced Kandalf (right)

Some Interesting and Inspiring Facts on KDE

We’ve looked back at some interesting and inspiring events that took place over the last 22 years of the KDE project:

Development

15 developers met in Arnsberg, Germany, in 1997 to work on the KDE project and discuss its future. This event came to be known as KDE One, followed by KDE Two, KDE Three and so on in later years. They even held one for a beta version.

The KDE Free Qt Foundation Agreement

The foundation agreement for the KDE Free Qt Foundation was signed by KDE e.V. and Trolltech, then the owner of Qt; the agreement ensures the permanent availability of Qt as Free Software.

First Stable Version

The first stable version of KDE was released in 1998, highlighting an application development framework (KOM/OpenParts) and an office suite preview. KDE 1.x screenshots are available here.

The KDE Women Initiative

The community women’s group, KDE Women, was created and announced in March 2001 with the primary goal to increase the number of women in free software communities, particularly in KDE.

1 Million Commits

After passing 500,000 commits in January 2006 and 750,000 in December 2007 (around the time KDE 4 launched), the community reached 1 million commits only 19 months later.

Release Candidate of Development Platform Announced

A release candidate of KDE’s development platform, consisting of basic libraries and tools to develop KDE applications, was announced in October 2007.

First KDE & Qt event in India

The first conference of the KDE and Qt communities in India took place in Bengaluru in March 2011 and became an annual event henceforth.

GCompris and KDE

In December 2014, the educational software suite GCompris joined the project incubator of KDE community (We have previously discussed GCompris, which is bundled with Escuelas Linux, a comprehensive educational distro for teachers and students).

KDE Slimbooks

In 2016, the KDE community partnered with a Spanish laptop retailer to launch the KDE Slimbook, an ultrabook with KDE Plasma and KDE Applications pre-installed. The Slimbook ships with KDE Neon and can be purchased from their website.

Check out the entire timeline in detail here for a more comprehensive outline.

Today, KDE is powered by three great projects:

KDE Plasma

Previously called Plasma Workspaces, KDE Plasma facilitates a unified workspace environment for running and managing applications on various devices like desktops, netbooks, tablets or even smartphones.

Currently, KDE Plasma 5.14 is the most recent version, released just a few days ago. The KDE Plasma 5 project is the fifth generation of the desktop environment and the successor to KDE Plasma 4.

KDE Applications

KDE Applications are a bundled set of applications and libraries designed by the KDE community. Most of these applications are cross-platform, though primarily made for Linux.

A very recent project in this category is a music player called Elisa focused on an optimised integration with Plasma.

KDE Development Platform

The KDE Development Platform is what significantly empowers the above two initiatives, and is a collection of libraries and software frameworks released by KDE to promote better collaboration among the community to develop KDE software.

A Personal Note

It was an honour to write this article on KDE’s birthday, and I would like to take this opportunity to briefly mention some of my personal favourite KDE-based apps and distros that I have used extensively in the past and continue to use:

Favorite KDE Apps

Amarok

The best feature I like about this legendary music player is how it compiles your music collection and retrieves lyrics from an online database!

KolourPaint

There are so many ways in which this beautiful program is a lot better than MS Paint!

Kaffeine

A KDE-based multimedia player with simple and easy-to-use features, including support for digital TV.

Favorite KDE-based Distros

Kubuntu

Many of you might be already aware of it. Instead of GNOME, this Ubuntu-based distro uses KDE as its default desktop environment.

SimplyMepis

SimplyMepis is a Debian-based Linux distribution that was started by Warren Woodford in 2003. SimplyMepis 11 uses Plasma 4 as its default desktop environment.

Some more lesser-known but great apps are listed here. Many of these apps made it to our list of best applications for Ubuntu.

Hope you liked our favourite moments in KDE history on its 22nd anniversary! Please do write about any memorable experiences you might have had with KDE in the comments below.


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source

Download VyOS 1.1.1 / 1.2.0 RC3

VyOS is a freely distributed and open source Linux-based operating system that uses the latest upstream Vyatta release to provide system administrators with a network OS that includes only open source software for transforming any computer into a viable and reliable network router or firewall.

Distributed as 32-bit and 64-bit installable-only CDs

The distribution is available for download as installable-only CD ISO images, approximately 200MB in size each, designed to run on physical 32-bit and 64-bit platforms as well as on virtual platforms.

Text-mode boot loader with useful information

The CD ISO images can be burned onto blank CD discs or written to USB flash drives of at least 512MB capacity in order to boot them from the BIOS of the PC. From the boot prompt, press the F1 key to get more information about the available boot options, or boot the operating system, log in with "vyos" as both the username and password, and start the text-mode installation.
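
For example, on a Linux machine the image can be written to a USB stick roughly as follows, assuming the downloaded file is named vyos-1.1.1-amd64.iso and the stick shows up as /dev/sdX (double-check the device name, as dd overwrites it):

sudo dd if=vyos-1.1.1-amd64.iso of=/dev/sdX bs=4M status=progress && sync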

Text-mode installation for experienced users

As expected, the entire installation process is interactive and text based, requiring the user to partition the disk drive, copy the configuration files from the bootable medium to the local drive, set up users and passwords, and install the bootloader.
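
From the live session shell the installer is typically launched with the install image command (a sketch of the flow; the interactive prompts then walk through partitioning, copying the configuration files and setting the password):

install image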

After a reboot, the machine will boot directly into the newly installed VyOS operating system. You can log in using the “vyos” username (without quotes) and the password set during the installation process.

Bottom line

If you’ve always wanted to install the Vyatta operating system on your computer, but did not have the money to buy a fully supported edition, do not hesitate to grab the VyOS Linux distribution from Softpedia, as it is completely free and open source, supports paravirtual drivers, and runs perfectly on virtual platforms.


Source
