Seeing Further – 5 Things to Know About SUSE HPC

High performance computing (HPC) – the use of supercomputers and parallel processing techniques for solving complex computational problems – has traditionally been limited to the world of large research institutions, academia, governments and massive enterprises. But now, advanced analytics applications using artificial intelligence (AI), machine learning (ML), deep learning and cognitive computing are increasingly being used in the intelligence community, engineering and cognitive industries.

The need to analyze massive amounts of data and run transaction-intensive workloads is driving the use of HPC into the business arena and making these tools mainstream for a variety of industries. Commercial users are getting into high performance applications for fraud detection, personalized medicine, manufacturing, smart cities, autonomous vehicles and many other areas. In order to run these workloads effectively and efficiently, SUSE has built a comprehensive and cohesive OS platform. In this blog, I will illustrate five things you should know about our SUSE solutions for AI over HPC.

Stronger partnerships

The first thing to know is how vital SUSE partnerships are to our HPC business. While the SLE HPC product can be obtained through direct Sales, it historically has been made available via our IHV and ISV partners. But obtaining the OS and associated HPC tools is only half of the story. Our key partnerships provide opportunities to innovate and contribute to open source development in AI/ML/DL and leading-edge advanced analytics applications.

Hewlett Packard Enterprise’s HPC software includes open source, HPE-developed and commercial HPC software that’s validated, integrated and performance-optimized for their systems. SUSE is the preferred HPE partner for Linux, HPC, OpenStack and Cloud Foundry solutions. And SUSE technology is embedded in every HPE ProLiant Server to power the intelligent provisioning feature. We have several joint papers that describe how SUSE and HPE together deliver HPC power to enterprises.

ARM System on a Chip (SoC) partners are driving new HPC adoptions in the modern data center. And SUSE is helping transform the 64-bit ARM platform into an enterprise computing platform by being the first commercial Linux distributor to fully support ARM servers. In fact, SUSE provides ARM HPC functionality as part of SLE HPC. The increased server density of the latest 64-bit ARM processors helps to optimize overall infrastructure costs – making Arm-based supercomputers more affordable. ARM SoC partners include Marvell (formerly Cavium), AMD, HPE, Cray, MACOM, Huawei HiSilicon, Mellanox, XILINX, Gigabyte, Qualcomm and more.

Cray builds their own Cray Linux Environment (CLE) – an adaptive operating system, purpose-built for HPC and designed for performance, reliability and compatibility – which happens to be built on SUSE Linux Enterprise. Cray supercomputers continue to have a majority share of the Top500 sites around the world. And Cray is a key player in HPC, producing both Intel-based and ARM-powered supercomputers.

Lenovo’s strategy is to provide open access to clusters on their new highly efficient processors. SUSE and Lenovo jointly defined the scope of the Lenovo HPC stack using SUSE HPC componentry. In turn, Lenovo created the LiCO (Lenovo Intelligent Computing Orchestration) adaptation – a premier AI/HPC package tailored to power AI/ML/DL workloads.

Those are just a few highlights of key partnerships for SUSE and HPC. Others include NVIDIA, Microsoft Azure, Fujitsu, Intel, Univa, Dell Technologies, Altair, ANSYS, MathWorks, Supermicro and Bright Computing. Another aspect of partnering in open source is continuing to be a major contributor in communities that guide parallel computing – including OpenHPC (where SUSE is a founding member), OpenMP and many more involved in shaping HPC tools.

More differentiators

The second thing to know is the clear and concise set of HPC platform differentiators. This list encompasses what’s available in the SUSE OS as well as for HPC storage and HPC in the cloud:

  • SUSE Enterprise Storage is Ceph-based and software-defined, providing backup and archival storage for HPC environments that is very easy to manage
  • SLE HPC is enabled for Microsoft Azure and AWS Cloud
  • SLE HPC and associated HPC packages are fully supported for Aarch64 (Arm) and x86-64 architectures
  • Supported HPC packages, such as Slurm for cluster workload management, are included with SLE HPC subscriptions; a basic Slurm usage sketch follows this list. Also in the same HPC Module are Ganglia for cluster monitoring, OpenMPI, OpenBLAS, FFTW, HDF, Munge, MVAPICH and more.
  • SLE HPC is priced very competitively, and uses a simple, one price per cluster node model
  • SLE HPC provides ESPOS (Extended Service Pack Overlay Support) for longer service life for each service pack
  • SUSE Linux Enterprise is used in about half of the top 100 HPC systems around the world
  • SUSE Package Hub includes SUSE-curated and community-supported packages for HPC.
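
For example, once the HPC Module is enabled on a cluster, day-to-day work with Slurm looks something like the minimal sketch below (standard Slurm commands; the node count and job script name are placeholders):

sinfo                  # show partitions and node states
srun -N 2 hostname     # run a command across two nodes
sbatch job.sh          # submit a batch job script
squeue                 # list queued and running jobs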

AI/ML focus

The third thing to know is our increased focus on the AI/ML market space and how we are providing the most efficient and effective HPC platform for these new workloads in a parallel computing environment. Technologies like cognitive computing, the Internet of Things and smart cities are powered by high performance computing and fueled by advanced data analytics. Businesses around the world today are recognizing that a Linux-based HPC infrastructure is vital to supporting the analytics applications of tomorrow. And we are finding that HPC is no longer just for scientific research; it is being adopted across banking, healthcare, retail, utilities and manufacturing.

In healthcare, an HPC platform underlies applications such as AI for precision medicine, diagnoses and treatment plans, cancer research, genomics and drug research. In the automotive world, we see HPC being used in aerodynamic designs, engine performance and timing, fuel consumption, safety systems and AI driverless operations. In manufacturing, HPC is vital for computational fluid dynamics, heat dissipation system design, AI advanced robotics, automated systems and other high-performance designs. And in energy, we find HPC as the basis for air flow designs in renewable energy, wind turbines and heating/cooling efficiencies.

SUSE Linux Enterprise HPC is integral to a highly scalable parallel computing infrastructure for supporting AI/ML and analytics applications being used across industries.

Restructured product

The fourth thing to know is how we’ve restructured our SLE HPC product. As part of SUSE’s concerted effort to make HPC easier to adopt, implement and maintain, we have recently made the following changes:

  • Introduced a simple, one price per cluster node model with significantly reduced list prices, available through IHVs, ISVs and direct Sales.
  • SLE HPC is available for x86 and Arm HPC clusters
  • SLE HPC has a new “level 3 support” SKU specifically for partners
  • There are multiple service life options including Extended Service Pack Overlap Support and Long-Term Service Pack Support
  • There are revised terms and conditions for smaller cluster sizes and increased clarity on defining compute nodes
  • More frequent updates on demand for popular HPC packages, supported by SUSE

Growing market share

The fifth and final thing to know is that SUSE continues to grow its market share in the supercomputing arena, as evidenced by the latest Top500 report. The latest analysis of the Top500 supercomputer sites report shows that half of the top 30 run SUSE, expanding to 40% of the top 100. One of the most compelling statistics from the report comes when we look at the vendor share of paid OS, which represents 116 supercomputers in the top 500 list. Here we see that over half of the paid Linux OS in the top 500 are running SUSE.

From the same segment, we also calculated the paid OS “performance share”, which is based on the total number of cores across 116 supercomputers. Here again we see that over half of the paid-for Linux OS in the top 500 are SUSE.

I will be providing more specifics on all of the areas I talked about in this blog post over the next several months, but hopefully I’ve given you a decent “first look”.

With our open and highly collaborative approach through our strong partner ecosystem, we can help deliver the required knowledge, skills and capabilities that will shape the adoption of HPC and AI technologies today and power the new analytics applications of tomorrow.

For more information about SUSE’s solutions for HPC, please visit https://www.suse.com/programs/high-performance-computing/ and https://www.suse.com/products/server/hpc/ and https://www.suse.com/solutions/hpc-storage/ .

Thanks for reading!

Source

Steam Play for Linux now lets you play over 2,600 Windows games

ProtonDB has said users can play over 2,600 Windows games on Linux since the launch of the new Steam Play for Linux in August.

Valve launched Steam Play with Proton, making it easier for gamers to play Windows games on Linux that had not yet been ported to the operating system, including games such as The Witcher 3, Dark Souls 3, and Dishonored.

Not all games may run perfectly on Linux, but the number of available games is growing daily.

The same is often the case with Windows 10, which cannot play older games as well as the previous versions of Windows could – even under Compatibility Mode.

Since August, the database of games compatible with Proton has increased to over 2,600, which is more than half of the 5,000 Linux-native games that you can get through the Steam store.

Source

How To Install VirtualBox on CentOS 7

VirtualBox is an open source cross-platform virtualization software which allows you to run multiple guest operating systems (virtual machines) simultaneously.

In this tutorial we will show you how to install VirtualBox from the Oracle repositories on CentOS 7 systems.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Installing VirtualBox from Oracle repositories

Follow the steps below to install VirtualBox on your CentOS 7 machine:

  1. Start by installing the build tools necessary for compiling the vboxdrv kernel module:

    sudo yum install kernel-devel kernel-headers make patch gcc

  2. Download the Oracle Linux repo file to /etc/yum.repos.d directory using the following wget command:

    sudo wget https://download.virtualbox.org/virtualbox/rpm/el/virtualbox.repo -P /etc/yum.repos.d

  3. Install the latest version of VirtualBox 5.2.x by typing:

    sudo yum install VirtualBox-5.2

    During the installation you will be prompted to import the repository GPG key. Type y and hit Enter. Once the installation is complete you will see the following output:

    Creating group ‘vboxusers’. VM users must be member of that group!

    Verifying : VirtualBox-5.2-5.2.20_125813_el7-1.x86_64

    Installed:
    VirtualBox-5.2.x86_64 0:5.2.20_125813_el7-1

  4. To verify that your VirtualBox installation was successful, run the following command which will check the status of the vboxdrv service.
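
    sudo systemctl status vboxdrv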

    The output should look something like this, indicating that the service is enabled and active:

    ● vboxdrv.service – VirtualBox Linux kernel module
    Loaded: loaded (/usr/lib/virtualbox/vboxdrv.sh; enabled; vendor preset: disabled)
    Active: active (exited) since Thu 2018-10-25 21:31:52 UTC; 6s ago

Installing VirtualBox Extension Pack

The VirtualBox Extension Pack provides several useful features for guest machines, such as virtual USB 2.0 and 3.0 devices, support for RDP, disk image encryption and more.

At the time of writing this article, the latest version of VirtualBox is 5.2.20. Before downloading the extension pack using the command below, you should check the VirtualBox download page to see if a newer version is available.

Download the extension pack file by typing:

wget https://download.virtualbox.org/virtualbox/5.2.20/Oracle_VM_VirtualBox_Extension_Pack-5.2.20.vbox-extpack

When the download is complete, import the extension pack using the following command:

sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.20.vbox-extpack

You will be presented with the Oracle license and prompted to accept the terms and conditions.

Do you agree to these license terms and conditions (y/n)?

Type y and hit Enter. Once the installation is completed you will see the following output:

0%…10%…20%…30%…40%…50%…60%…70%…80%…90%…100%
Successfully installed “Oracle VM VirtualBox Extension Pack”.
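
To confirm that the extension pack has been registered, you can list the installed extension packs:

VBoxManage list extpacks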

Starting VirtualBox

Now that you have VirtualBox installed on your CentOS system you can start it either from the command line by typing VirtualBox or by clicking on the VirtualBox icon (Applications -> System Tools -> Oracle VM VirtualBox).
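
If you prefer to stay on the command line, a minimal sketch of creating and booting a guest with VBoxManage looks like this (the VM name, OS type and memory size are placeholders; attaching a disk and installation media with storagectl/storageattach is omitted for brevity):

VBoxManage createvm --name "centos-guest" --ostype RedHat_64 --register
VBoxManage modifyvm "centos-guest" --memory 2048 --cpus 2
VBoxManage startvm "centos-guest" --type headless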

When VirtualBox is started for the first time, the VirtualBox Manager window should appear.

Conclusion

You have learned how to install VirtualBox on your CentOS 7 machine. You can now install your first Windows or Linux guest machine. To find more information about VirtualBox visit the official VirtualBox documentation page.

If you have any questions, please leave a comment below.

Source

System76’s new ‘open-source computer’ will be available for preorder November 1

The hardware vendor specializing in Linux systems has set a date for its latest venture. Their new Thelio desktop systems will be available for preorder soon.

For those of you unaware of System76 [Official Site], they’ve been selling Linux-powered laptops, mini computers and servers for a few years now and have even created their own Ubuntu derivative named Pop!_OS. Last month they started teasing their newest project, Thelio, which aims to be an open hardware desktop system.

Details on what the hardware will entail specifically are still a little light, and we’ll likely only know for sure when the system goes up for preorder, but there’s a few things that we can say for sure. In reply to a tweet sent by Liam asking whether or not they’d have a custom motherboard, the CEO clarified that “we’re pulling proprietary functionality off the mainboard and onto a custom, open source (hardware and firmware) daughter board.”

This open firmware will be GPLv3-licensed and you can already check out the master repository for the Thelio on GitHub. Personally, I can’t really make heads or tails of the various bits of code, teaser blueprints and hardware schematics that System76 and its CEO have been posting in the last few weeks, but I can say that I am excited to see a hardware vendor work on their own custom solutions for the Linux desktop.

I suppose we’ll just have to see what the prices are like when preorders go live November 1. Systems are expected to be shipped in December of this year. You may also want to check out the animated saga that System76 have created around Thelio.

Source

Download Bitnami Ghost Stack Linux 2.3.0-0

Bitnami Ghost Stack is a free and multiplatform software project, a native installer that has been designed from the outset to allow you to deploy the Ghost application and its runtime dependencies on desktop computers or laptops. Cloud images, a virtual appliance and a Docker container are also available for the Ghost app.

What is Ghost?

Ghost is an open source, platform-independent and free web-based application, a beautifully designed and completely customizable software designed especially for publishing content on the web, allowing users to write and publish their own blogs.

Installing Bitnami Ghost Stack

The Bitnami Ghost Stack product is distributed as native installers for all mainstream operating systems, including all GNU/Linux distributions, as well as the Microsoft Windows and Mac OS X operating systems, supporting 32-bit and 64-bit (recommended) computers.

To install Ghost on your personal computer, simply download the package that corresponds to your computer’s operating system and hardware architecture, run it and follow the instructions displayed on the screen.
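
On Linux, this boils down to making the downloaded installer executable and running it; a rough sketch, assuming an illustrative 64-bit 2.3.0-0 installer filename (check your actual download for the exact name):

chmod +x bitnami-ghost-2.3.0-0-linux-x64-installer.run
sudo ./bitnami-ghost-2.3.0-0-linux-x64-installer.run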

Run Ghost in the cloud

Thanks to Bitnami, users are now able to run Ghost in the cloud with their hosting platform of choice. Pre-built cloud images for the Windows Azure and Amazon EC2 cloud hosting services are also available for download on the project’s homepage (see link below).

Virtualize Ghost on VMware and VirtualBox

In addition to deploying Ghost in the cloud or on personal computers, it is possible to virtualize it using Bitnami’s virtual appliance for the VMware ESX, ESXi and Oracle VirtualBox virtualization software.

The Ghost Docker container and LAMP/WAMP/MAMP module

A Ghost Docker container will also be available on the project’s website, but Bitnami does not provide a Ghost module for its LAMP, WAMP and MAMP stacks, which would have allowed users to deploy the application on a personal computer without having to deal with its runtime dependencies.

Source

SUSE Linux Enterprise Server 12 STIG is available at Defense Information Systems Agency (DISA)

SUSE Linux Enterprise Server 12 STIG has been approved by Defense Information Systems Agency (DISA) and posted on IASE. This assists with the adoption of SUSE Linux Enterprise Server 12 in the US Federal Government and with Government Contractors.

What is STIG? Where does it come from?

The Security Technical Implementation Guides (STIGs) define the configuration and settings of United States Department of Defense (DoD) IT systems that provide a standardization of the security profile for a particular technology. These cybersecurity guidelines are developed from the Security Requirements Guides (SRGs) that are produced by the Defense Information Systems Agency (DISA).

STIGs are widely used by the United States government and allies, government contractors, and various commercial entities to provide a cybersecurity methodology for securing and hardening operating systems to a DoD security standard.

The SUSE Linux Enterprise Server 12 STIG has several items to note for System Administrators and Security Auditors such as:

  • AppArmor

The SUSE Linux Enterprise Server (SLES) 12 STIG references AppArmor, a Linux Security Module that implements mandatory access control (MAC) and application whitelisting, in place of SELinux. (A quick way to check AppArmor status is sketched after this list.)

  • Common Access Card (CAC) Support

The SLES 12 STIG prescribes the use of two-factor authentication to access IT resources. Support for CAC smart cards was verified and detailed in a SUSE Blog Configuring Smart Card authentication on SUSE Linux Enterprise.
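
As a minimal sketch of verifying AppArmor on a SLES 12 system (assuming the standard AppArmor utilities are installed), you can check the service and list the loaded profiles:

sudo systemctl status apparmor
sudo aa-status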

The acceptance and approval of the SLES 12 STIG continues the commitment of SUSE Security to meet various federal and international security standards such as Common Criteria and Federal Information Processing Standards (FIPS) 140-2.

More information

You can access the SLES 12 STIG and latest SUSE security certifications information at

You can reach out to the SUSE security team at https://www.suse.com/support/security/contact/ or Adam Belmonte, Manager-Federal Programs (phone: 978-394-4780, email).

Source

MySQL Replication and MEMORY Tables – Lisenet.com

Memory tables do not play well with replication.

The Problem

After upgrading MySQL server from 5.6 to 5.7, we noticed that Master/Slave replication started to fail with the following error:

Could not execute Delete_rows event on table my_database.my_table; Can’t find record in ‘my_table’, Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event’s master log bin-log.003023, end_log_pos 552195868

If we restart the slave, we lose content of our MEMORY tables, and MySQL replication breaks.

Working Towards the Solution

MySQL Binary Logging: MySQL 5.6 vs MySQL 5.7

Prior to MySQL 5.7.7, the default binlog_format was STATEMENT. That’s what we used before the upgrade.

In MySQL 5.7.7 and later, the default is ROW. This is what we have after the upgrade.

On MySQL 5.6 with STATEMENT-based replication, replication will often continue to run, with the contents of the table simply diverging, because there are few checks on whether statements produce the same results on the slave.

ROW-based replication, however, will complain about a non-existent row when an UPDATE or DELETE operation is applied.
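
You can check which binary log format the master is using (and, if it suits your setup, switch new sessions back to statement-based logging) with:

SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'STATEMENT';  -- requires SUPER; affects new sessions only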

Workaround: use SQL_SLAVE_SKIP_COUNTER

When replication is broken because a row was not found and it cannot be deleted, we can do the following:

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
START SLAVE;

This will skip the offending statement and resume replication. Be careful with it! In our case it’s fine, because the application logic is such that the contents of MEMORY tables can be safely lost (the table in question is used for caching).
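
To confirm that replication has resumed after skipping the event, check the slave status and look at Slave_IO_Running, Slave_SQL_Running and Last_SQL_Error:

SHOW SLAVE STATUS\G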

Note that this approach is not a solution, because our replication will break again as soon as another UPDATE or DELETE statement affects the MEMORY tables.

Solution: do not replicate MEMORY tables

If we don’t need MEMORY tables on the slave, then we can stop replicating them.

We need to create a replication filter which keeps the slave thread from replicating a statement in which any table matches the given wildcard pattern.

In our case, we would use the following:

--replicate-wild-ignore-table="my_database.my_table"

If we have more than one database that has this problem, we can use a wildcard:

--replicate-wild-ignore-table="%.my_table"

The above will not replicate updates to any table named "my_table", regardless of which database it lives in.
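
To make the filter persistent, the option can also go into the [mysqld] section of the slave's configuration file (the /etc/my.cnf path is the usual default and may differ on your system):

# /etc/my.cnf (slave)
[mysqld]
replicate-wild-ignore-table = %.my_table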

This can be done on the fly as well:

STOP SLAVE;
CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('%.my_table');
START SLAVE;

References

https://www.percona.com/blog/2010/10/15/replication-of-memory-heap-tables/
https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-features-memory.html

Source

Container-based server platform for Linux device management goes open source

Resin.io changed its name to balena and released an open source version of its IoT fleet management platform for Linux devices called openBalena. Targets include the Intel NUC, Jetson TX2, Raspberry Pi, and a new RPi CM3 carrier called the balenaFin.

A lot has happened with Resin.io since we covered its Docker container focused Resin.io cloud IoT platform and open source ResinOS Linux distro two years ago. Resin.io started out with a goal to create a “git push for devices” and develop lightweight Docker containers for Linux devices to enable easy security updates and IoT device management. It has since expanded beyond that to provide a comprehensive, scalable platform for IoT fleet management. Now the company has announced a name-change to balena in conjunction with the release of an open source openBalena version of its software.

New names for Balena technologies (left) and new balenaFin carrier for RPi CM3

Resin.io changed its name due “to trademark issues, to cannabis references, and to people mishearing it as ‘raisin,’” explained founder and CEO Alexandros Marinos in a blog announcement. (We interviewed Marinos in a Nov. 2016 feature on the use of container technologies in embedded devices.) The non-smokable new branding is based on the company’s balena container engine, now called balenaEngine, which derives its name from the engine’s underlying Moby Project container technology.

openBalena is an open source version of the Resin.io server/cloud platform for managing fleets of Linux-based IoT devices, now referred to as balenaCloud. The open source ResinOS distro, meanwhile, is now called balenaOS. Resin.io’s Etcher software for fast image writes to flash drives is now called balenaEtcher, and the Project Fin carrier board for the Raspberry Pi Compute Module, now available starting at $129, is called balenaFin (see farther below).

While balenaOS is an open source spinoff of the container-based device software that works with balenaCloud, the new openBalena is an open version of the balenaCloud server software. Customers can now choose between letting balena manage their fleet of devices or building their own openBalena-based server platform that manages fleets of devices running balenaOS.

openBalena is a reduced feature version of the commercial product. However, the components shared by both commercial and open versions are closely aligned, which “will allow us to release updates for the project as we update our cloud product, while also allowing open source contributions to flow back into the cloud product,” writes Marinos. The new deployment workflows and tools to accomplish this coordination will be announced soon.

openBalena offers balenaCloud core features such as “the powerful API, the built-in device VPN, as well as our spectacular provisioning workflow,” writes Marinos. It can also similarly scale to large fleets of devices. However, openBalena is single-user rather than supporting multiple users. It’s controlled solely via the already open sourced balena CLI tool rather than balenaCloud’s web-based dashboard, and it lacks “updates with binary container deltas.”

On the device side, openBalena integrates the Yocto Project and Docker-based balenaOS. The client software has been updated to let devices more easily “join and leave a server” so you can set up your own openBalena server instead of being directed to balenaCloud.

openBalena’s CLI lets you provision and configure devices, push updates, check status, and view logs. Its backend services can securely store device information, allow remote management via a built-in VPN service, and distribute container images to devices.

On the server side, openBalena requires the following releases (or higher): Docker 18.05.0, Docker Compose 1.11, OpenSSL 1.0.0, and Python 2.7 or 3.4. The beta version of openBalena supports targets including Raspberry Pi boards, Intel NUC boards, the Nvidia Jetson TX2 module, and the new balenaFin. It appears it will eventually support the full list of devices supported by balenaCloud, most of which are detailed on LinuxGizmos in stories such as our catalog of 116 Linux hacker boards. These include Samsung’s Artik boards, Variscite’s DART-6UL, Aaeon’s UP board, the Banana Pi M1+, the BeagleBone and BeagleBone Green/Green Wireless, SolidRun’s HummingBoard i2, the Odroid-C1/C1+ and Odroid-XU4, the Orange Pi Plus2, Technologic’s TS-4900, and Siemens’ IOT2000 gateway.
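
A quick sanity check of those server-side prerequisites uses nothing openBalena-specific, just the tools’ own version flags:

docker --version
docker-compose --version
openssl version
python --version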

balenaFin

The balenaFin, which was announced back in March, is a carrier board for the Raspberry Pi Compute Module 3 Lite (CM3 Lite). The Lite has the same 1.2GHz quad-core, Cortex-A53 Broadcom SoC as the standard version, but has an unpopulated eMMC socket with traces exposed via SODIMM-200.

balenaFin (left) and block diagram

The 91 x 90mm balenaFin is optimized for balena duty but can also be used as a general-purpose hacker board. The board, which has been available to selected customers in a pre-release version, is now publicly available in 8GB eMMC 5.1 ($129), 16GB eMMC ($139), or 32GB ($159) versions. There’s also a $179 dev kit version with 8GB that bundles the CM3 Lite, cables, standoffs, screws, and a 12V PSU. A DIN-rail case adds $25 to the price.

balenaFin detail views

In addition to running on the CM3, the board integrates a Samsung Artik 020 MCU module. The balenaFin is further equipped with HDMI and 10/100 Ethernet ports, 2x USB 2.0 host ports, and a 40-pin RPi GPIO connector. Wireless support includes a WiFi/Bluetooth module and mini-PCIe and Nano-SIM slots for cellular. You also get a 6-24V DC input, an RTC, and extended temperature support. For a full spec list, see our earlier Fin report.

Further information

The beta version of openBalena is now available for free download, and the balenaFin is available for pre-order. More information may be found in the openBalena announcement, openBalena product page, and openBalena GitHub page. More on the balenaFin, including links to shopping pages, may be found here.

Source

Red Hat Enterprise Linux 7.6 Released


Fresh on the heels of the IBM purchase announcement, Red Hat released RHEL 7.6 today. Business press release is here and full release notes are here. It’s been a busy week for Red Hat, as Fedora 29 also released earlier this morning. No doubt CentOS and various other rebuilds will begin their build cycles shortly.

The release offers improved security, such as support for the Trusted Platform Module (TPM) 2.0 specification for security authentication. It also provides enhanced support for the open-source nftables firewall technology.

“TPM 2.0 support has been added incrementally over recent releases of Red Hat Enterprise Linux 7, as the technology has matured,” Steve Almy, principal product manager, Red Hat Enterprise Linux at Red Hat, told eWEEK. “The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware in addition to the network bound disk encryption (NBDE) capability, which operates across the hybrid cloud footprint from on-premise servers to public cloud deployments.”
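
As a rough illustration of what hands-off, TPM-bound decryption looks like in practice, RHEL’s NBDE tooling is built around Clevis; the device path and PCR selection below are examples only:

sudo clevis luks bind -d /dev/sda2 tpm2 '{"pcr_ids":"7"}'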


Source

Trying To Make Ubuntu 18.10 Run As Fast As Intel’s Clear Linux

With the recent six-way Linux OS tests on the Core i9 9900K, there were once again a number of users questioning the optimizations in Clear Linux from Intel’s Open-Source Technology Center, and asking whether changing the compiler flags, CPU frequency scaling governor, or other settings would allow other distributions to trivially replicate its performance. Here’s a look at some tweaked Ubuntu 18.10 Cosmic Cuttlefish benchmarks against the latest Intel Clear Linux rolling release on this i9-9900K 8-core / 16-thread desktop system.

 

 

In the forum comments and feedback elsewhere to that previous Linux distribution comparison, there were random comments questioning:

– Whether Clear Linux’s usage of the P-State “performance” governor by default explains its performance advantage with most other distributions opting for the “powersave” governor.

– Clear Linux is faster because it’s built with the Intel Compiler (ICC). This is not the case at all, with Clear being built by GCC and LLVM/Clang, but it seems to be a common misconception. So just tossing that out there…

– Clear Linux is faster because of its aggressive default CFLAGS/CXXFLAGS/FFLAGS. This certainly does help in some built-from-source benchmarks, but that’s not all. (A minimal example of the governor and flags tweaks follows below.)
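
For anyone wanting to try the governor and compiler-flag tweaks on Ubuntu, a minimal sketch looks like this; the flags shown are illustrative placeholders, not Clear Linux’s actual build flags:

sudo cpupower frequency-set -g performance
export CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native" FFLAGS="-O3 -march=native"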

About a year ago I ran similar tests, tweaking Ubuntu 17.10 to try to run like Clear Linux; this article is a fresh look. The OS configurations tested were:

  • Clear Linux – Clear Linux running on the i9-9900K with its Linux 4.18.16 kernel, Mesa 18.3-devel, GCC 8.2.1, EXT4 file-system, and other default components.
  • Ubuntu – The default Ubuntu 18.10 installation on the same system with its Linux 4.18 kernel, Mesa 18.2.2, GCC 8.2.0, EXT4, and other stock components/settings.
  • Ubuntu + Perf Gov – The same Ubuntu 18.10 stack, but switched over to the P-State performance governor rather than the default P-State powersave mode.
  • Ubuntu + Perf + Flags – The P-State performance mode from above on Ubuntu 18.10, but also setting the same CFLAGS/CXXFLAGS/FFLAGS as used by Clear Linux before re-building all of the source-based benchmarks, to compare the performance impact of the default tuning parameters.
  • Ubuntu + Perf + Flags + Kernel – The tweaked Ubuntu 18.10 state from above with the P-State performance governor and tuned compiler flags, while also building a Linux 4.18.16 kernel from source with the relevant patches and Kconfig configuration as shipped by Clear Linux. Their kernel configuration and carried patches can be found via clearlinux-pkgs/linux on GitHub.
  • Ubuntu + Clear Docker – Ubuntu 18.10 with the P-State performance governor, running on the Clear Linux optimized kernel, and using Docker CE to run the latest Clear Linux Docker image so that all of the Clear user-space components run within a container.

The same system was used for all of the testing: an Intel Core i9 9900K at stock speeds, ASUS PRIME Z390-A motherboard, 16GB DDR4-3200 memory, Samsung 970 EVO 250GB NVMe SSD, and Radeon RX Vega 64 8GB graphics card.
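
The container-based configuration is straightforward to reproduce; a rough sketch, assuming the official clearlinux image on Docker Hub:

sudo docker pull clearlinux:latest
sudo docker run -it clearlinux:latest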

 

 

I ran 92 different tests on Ubuntu 18.10 and Clear Linux for a wide look at the performance between these distributions ranging from scripting language benchmarks like PHP and Python to various scientific workloads, code compilation, and other tests. With the 92 test runs, here are the key findings from this large round of testing of Clear Linux compared to Ubuntu 18.10 in five different tuned states:

– When comparing the out-of-the-box Clear Linux to Ubuntu 18.10, the Intel distribution was the fastest in 66 of the benchmarks (72%) with Ubuntu only taking the lead in 26 of these different benchmarks.

– Switching to the P-State “performance” governor on Ubuntu 18.10 only allowed it to win over Clear Linux in an extra 5 benchmarks… Clear Linux still came out ahead 66% of the time against Ubuntu either out-of-the-box or with the performance governor.

– In the third state, using the P-State performance governor and copying Clear’s compiler flags allowed Ubuntu 18.10 to improve its performance relative to the default Ubuntu configuration, but Clear Linux was still leading ~66% of the time.

– When pulling in the Clear Linux kernel modifications to Ubuntu 18.10 and keeping the optimized compiler flags and performance governor, Ubuntu 18.10 just picked up one more win while Clear Linux was still running the fastest in 59 of the 92 benchmarks.

– Lastly, when running the Clear Linux Docker container on Ubuntu 18.10 while keeping the tweaked kernel and P-State performance governor, Clear Linux won now in “just” 54 of the 92 benchmarks, or about 59% of the time it was the fastest distribution.

Going to these varying efforts to tweak Ubuntu for faster performance resulted in Clear Linux’s lead shrinking from 72% to 58%… Or about 64% if not counting the run of using the Clear Linux Docker container itself on Ubuntu 18.10 for the optimized Clear user-space.

This data shows that Clear Linux does much more than adjust a few tunables to reach its leading performance – it’s not as trivial as adjusting CFLAGS/CXXFLAGS, opting for the performance governor, etc. Clear additionally makes use of GCC Function Multi-Versioning (FMV) to optimize its binaries to use the fastest code path depending upon the CPU detected at run-time, among other compiler/tooling optimizations. It also often patches its Glibc and other key components, beyond just Linux kernel patches not yet ready to be mainlined. Other misconceptions to clear up about this open-source operating system: it does not use the Intel ICC compiler, it does run on AMD hardware (and does so in a speedy manner as well), and it runs on Intel hardware going back to around Sandy Bridge, not just the very latest and greatest generations.

While the prominent performance numbers are already shared, the following pages take a look at some of the interesting benchmark results from this comparison.
Source
