Linux Hardware Reviews, Open-Source Benchmarks & Linux Performance

Updated Proton 3.16 Beta For Steam Play Has DXVK 0.90, D3D11 Fixes

Valve in cooperation with CodeWeavers and other developers continues making rapid progress on Steam Play and their “Proton” downstream flavor of Wine.

3 Hours Ago –

Valve

– Proton 3.16 Beta

Google Engineer Proposes KUnit As New Linux Kernel Unit Testing Framework

Google engineer Brendan Higgins sent out an experimental set of 31 patches today introducing KUnit as a new Linux kernel unit testing framework to help preserve and improve the quality of the kernel’s code.
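
For those curious what a kernel unit test would look like under the proposal, here is a minimal sketch in C modeled on the KUnit API as it later landed in mainline; the macro names in the original RFC patches may differ slightly, and the add() helper is purely hypothetical.

    #include <kunit/test.h>

    /* Hypothetical function under test, used here only for illustration. */
    static int add(int a, int b)
    {
        return a + b;
    }

    /* A test case receives a struct kunit context; the EXPECT macros record
     * failures without aborting the rest of the case. */
    static void add_test_basic(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 3, add(1, 2));
        KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
    }

    /* Cases are grouped into a suite that the framework runs at boot or module load. */
    static struct kunit_case add_test_cases[] = {
        KUNIT_CASE(add_test_basic),
        {}
    };

    static struct kunit_suite add_test_suite = {
        .name = "add-example",
        .test_cases = add_test_cases,
    };
    kunit_test_suite(add_test_suite);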

Chrome 70 Now Officially Available With AV1 Video Decode, Opus In MP4 & Much More

Google’s Chrome/Chromium 70 web-browser made it out today for Linux users as well as all other key supported platforms.

3 Hours Ago –

Google

– Chrome 70

Ubuntu Server Is Making It Easier To Deploy Let’s Encrypt SSL Certificates

The Ubuntu Server developers are looking to make it easier to deploy free SSL/TLS certificates from Let’s Encrypt.

5 Hours Ago –

Ubuntu

– Ubuntu Server + Certbot

AMD Dual EPYC 7601 Benchmarks – 9-Way AMD EPYC / Intel Xeon Tests On Ubuntu 18.10 Server

Arriving at Phoronix earlier this month was a Dell PowerEdge R7425 server equipped with two AMD EPYC 7601 processors, 512GB of RAM, and 20 Samsung 860 EVO SSDs, making for a very interesting test platform and our first based on a dual-EPYC design; our many other EPYC Linux benchmarks to date have been 1P. Here is a look at the full performance capabilities of this 64-core / 128-thread server compared to a variety of other AMD EPYC and Intel Xeon processors, while also doubling as an initial look at the performance of these server CPUs on Ubuntu 18.10.

Elementary OS 5.0 “Juno” Released For A Pleasant Linux Desktop Experience

Just ahead of Ubuntu 18.10, Solus 4, and Fedora 29 among other forthcoming Linux distribution releases, Elementary OS 5 “Juno” has been released for a polished desktop experience that aims to compete with macOS and Windows for desktop usability.

The Next Linux Kernel Will Bring More Drivers Converted To Use BLK-MQ I/O

More Linux storage drivers have been converted to the “blk-mq” interfaces for the multi-queue block I/O queuing mechanism for the 4.20~5.0 kernel cycle.

NVIDIA 410.66 Linux Driver Released With RTX 2070 Support, Vulkan Ray-Tracing, Etc

NVIDIA has released the 410.66 Linux graphics driver today as their first stable release in the 410 series, and it comes with support for the new GeForce RTX 2070 graphics card.

12 Hours Ago –

NVIDIA

– NVIDIA 410.66

CodeWeavers CrossOver Linux 18 Released With DXVK/VKD3D Support

While CodeWeavers’ developers have been busy with improvements to Wine and Valve’s downstream “Proton” for allowing a great Windows-on-Linux gaming experience, they haven’t parted ways with their core business and today they announced the availability of CrossOver 18.

13 Hours Ago –

WINE

– CrossOver 18

NVIDIA GeForce RTX 2070 Linux Benchmarks Will Be Coming

NVIDIA’s embargo for reviews on the GeForce RTX 2070 graphics cards has now expired ahead of the expected retail availability on Wednesday.

15 Hours Ago –

NVIDIA

– GeForce RTX 2070

Linux’s LoRa Is Ready To Deliver Long-Range, Low-Power Wireless

Adding to the long list of new features for what will be Linux 4.20, or likely Linux 5.0 per Linus Torvalds’ numbering preferences, is a new wireless networking subsystem within the kernel’s networking code… Meet LoRa.

The Biggest Features Of Linux 4.19: Intel/AMD, CoC, 802.11ax, EROFS, GPS & GASKET

With the Linux 4.19 kernel set to be released next weekend, here’s a recap of the most prominent features to be found in this next kernel release.

Qt 5.12 Beta 2 Brings Many Fixes

Just two weeks after the Qt 5.12 beta release, a second beta is now available for testing of this forthcoming tool-kit update.

17 Hours Ago –

Qt

– Qt 5.12 Beta 2

The Expected Feature We Didn’t See Yet For Ubuntu 18.10

While Ubuntu 18.10 is set to roll out this week with its new theme and an assortment of package updates and other enhancements, there is one feature Canonical previously talked about for the Ubuntu 18.10 “Cosmic Cuttlefish” cycle that we have yet to see made public.

19 Hours Ago –

Ubuntu

– Survey….

Intel DRM Linux Driver Working On DisplayPort Forward Error Correction

Going hand in hand with their work on display stream compression for dealing with next-generation displays, the Intel Direct Rendering Manager driver developers are working on “FEC” support to deal with any errors that come up in the stream.

19 Hours Ago –

Intel

– DP FEC

GCC Is Preparing To End Support For Solaris 10

Solaris 10, what many will argue was the last “good” Solaris operating system release before Sun Microsystems fell under the control of Oracle, may soon see its support deprecated by the GCC compiler stack.

24 Hours Ago –

GNU

– Fond Solaris 10 Memories

15 October

Mesa Vulkan Drivers Move Ahead With PCI Bus Info, Calibrated Timestamps

With this weekend’s release of Vulkan 1.1.88, stealing the show was the Vulkan transform feedback capability that allows projects like DXVK to support Direct3D’s Stream Output functionality. But besides VK_EXT_transform_feedback, there are other extensions also being worked on for Mesa ANV / RADV Vulkan driver coverage.

15 October 08:55 PM EDT –

Mesa

– New Vulkan Features

Purism Shares The Latest Librem 5 Smartphone Progress – Dev Kits Going Out Soon

Purism has shared the latest details on their efforts to deliver the open-source Linux Librem 5 smartphone to market in 2019.

15 October 06:18 PM EDT –

Hardware

– Librem 5

Linux’s Qualcomm Ath10k Driver Getting WoWLAN, WCN3990 Support

The Qualcomm/Atheros “Ath10k” Linux driver coming up in the Linux 4.20~5.0 kernel merge window is picking up two prominent features.

15 October 05:55 PM EDT –

Hardware

– Atheros Ath10k

AMD CodeXL 2.6 Advances GPU Profiling, Static Analysis & GPU Debugging

AMD’s GPUOpen group today released CodeXL 2.6 as the newest version of their GPU developer suite.

15 October 05:42 PM EDT –

AMD

– CodeXL 2.6

FUSE File-Systems Pick Up Another Performance Boost With Symlink Caching

FUSE file-systems in user-space are set to be running faster with the upcoming Linux 4.20~5.0 kernel thanks to several performance optimizations.

X.Org Server 1.20.2 Released With A Bunch Of Bug Fixes

It has already been almost half a year since the release of the long-delayed X.Org Server 1.20, but with no signs of X.Org Server 1.21 releasing soon, xorg-server 1.20.2 was announced today as the latest stable point release.

15 October 12:56 PM EDT –

X.Org

– xorg-server 1.20.2

FreeDesktop.org Might Formally Join Forces With The X.Org Foundation

FreeDesktop.org is already effectively part of X.Org given the loose structure of FreeDesktop.org, the key members/administrators being part of both projects, and FreeDesktop.org long being the de facto hosting platform for everything from the X.Org Server to Mesa and much more. But now they may be officially joining forces.

15 October 12:32 PM EDT –

X.Org

– FreeDesktop.org + X.Org

Windows 10 October 2018 Update Performance Against Ubuntu 18.10, Fedora 29

As the latest of our benchmarks using the newly re-released Microsoft Windows 10 October 2018 Update, here are benchmarks of this latest Windows 10 build against seven different Linux distributions on the same hardware for checking out the current performance of these operating systems.

Experimental Patches For Using SIMD32 Fragment Shaders With Intel’s Linux Driver

Existing Intel graphics hardware already supports SIMD32 fragment shaders and the Intel open-source Linux graphics driver has supported this mode for months, but it hasn’t been enabled. That though is in the process of changing.

15 October 10:08 AM EDT –

Intel

– Intel SIMD32 Linux Mesa

Xfce4-Screensaver Has Its First Release – Fork Of MATE Screensaver, Forked From GNOME

As a new alternative to XScreenSaver or using other desktop environments’ screensaver functionality, xfce4-screensaver has put out its first release, albeit of alpha quality.

15 October 07:12 AM EDT –

Desktop

– Xfce4-Screensaver 0.1

Another Change Proposed For Linux’s Code of Conduct

With the Linux 4.19-rc8 kernel release overnight, one change not to be found in this latest Linux 4.19 release candidate is any alteration to the new Code of Conduct. The latest proposal forbids discussing off-topic matters while protecting any sentient being in the universe.

DragonFlyBSD Lands Another NUMA Optimization Helping AMD Threadripper 2 CPUs

DragonFlyBSD lead developer Matthew Dillon has been quite impressed with AMD’s Threadripper 2 processors particularly the Threadripper 2990WX with 32-cores / 64-threads. Dillon has made various optimizations to DragonFly for helping out this processor in past months and overnight he made another significant improvement.

15 October 05:30 AM EDT –

BSD

– Threadripper 2990WX

KDE Frameworks 5.51 Released

KDE Frameworks 5.51 is out today as the latest monthly update to this collection of KDE libraries complementing Qt5.

15 October 05:17 AM EDT –

KDE

– KDE Frameworks 5.51

Linux 4.19-rc8 Released With A Lot Of “Tiny Things”

Greg Kroah-Hartman went ahead and released Linux 4.19-RC8 as the last test release of the upcoming Linux 4.19 kernel.

14 October

Fedora Workstation 29 Is Shaping Up To Be Another Impressive Release

In addition to Ubuntu 18.10 releasing soon, Fedora 29 is set to be released by month’s end if all goes well.

14 October 08:45 PM EDT –

Fedora

– Fedora 29

GNOME’s Geoclue 2.5 Brings Vala Support, WiFi Geolocation For City-Level Accuracy

GNOME’s Geoclue library that provides a D-Bus service for location information based on GPS receivers, 3G modems, GeoIP, or even WiFi-based geolocation has been baking a lot of changes.

14 October 02:50 PM EDT –

GNOME

– Geoclue 2.5

HUANUO HNDSK2 Dual Monitor Arms Work Out Great

If you are in the market for a dual monitor desk arm/mount that clamps to your desk, the HUANUO HNDSK2 is a surprisingly suitable contender.

14 October 01:05 PM EDT –

Hardware

– HUANUO HNDSK2

KaOS 2018.10 Released With KDE Plasma 5.14 Desktop, Wayland 1.16

A new ISO spin is available of the KaOS Linux distribution that is closely aligned with shipping the upstream KDE desktop experience.

14 October 11:44 AM EDT –

KDE

– KaOS 2018.10

Vulkan Cracks 2,500 Projects On GitHub

After cracking 2,000 projects referencing Vulkan on GitHub earlier this year, this week it passed the milestone of having more than 2,500 projects.

14 October 08:55 AM EDT –

Vulkan

– Vulkan 2500

Wine-Staging 3.18 Released With Some New Patches While Other Code Got Upstreamed

It has been a very exciting weekend for Linux gamers relying upon Wine for running Windows titles under Linux… There was the routine bi-weekly Wine 3.18 development release on Friday but yesterday brought transform feedback to Vulkan and in turn Stream Output to DXVK to fix up a number of D3D11 games. Today is now the Wine-Staging 3.18 release.

14 October 07:08 AM EDT –

WINE

– Wine Staging

You Can Help Ubuntu This Weekend Test The Near-Final Cosmic Cuttlefish

If all goes well, the Ubuntu 18.10 “Cosmic Cuttlefish” release will happen on 18 October, but for that to go smoothly they could use your help this weekend testing their release candidate spins.

14 October 06:55 AM EDT –

Ubuntu

– Ubuntu 18.10

KDE Will Now Safely Spin Down External Hard Drives When Unmounting

Fixing a seven-year-old bug dating back to the KDE4 days, KDE will now spin down external hard drives when unmounting them to help stave off possible data loss / corruption.

14 October 06:40 AM EDT –

KDE

– Spin Them Down

13 October

DXVK 0.90 Released With Stream Output, Several Game Fixes

Hot off merging transform feedback into DXVK for supporting Direct3D 11 Stream Output, Philip Rebohle released DXVK 0.90.

NVIDIA 396.54.09 Vulkan Driver Released With Transform Feedback, Intel ANV Gets TF Too

Today is certainly a very exciting day in the Vulkan space.

13 October 10:15 AM EDT –

Vulkan

– NVIDIA Vulkan Beta

GCC9 Lands Initial C++ Networking TS Implementation

The GCC9 compiler code as of Friday has an initial implementation of the C++ networking technical specification.

13 October 09:22 AM EDT –

GNU

– Networking TS

DXVK Already Lands Vulkan Transform Feedback Support, RADV Posts Patches

With the newly-announced Vulkan 1.1.88 that brings VK_EXT_transform_feedback, the DXVK Direct3D-on-Vulkan layer has already implemented transform feedback support (a rough sketch of the new command interface appears after this entry).

13 October 09:03 AM EDT –

Vulkan

– DXVK Transform Feedback
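
For a rough idea of what the new extension looks like from the application (or translation-layer) side, below is a minimal sketch in C. The handles, function-pointer loading, and pipeline setup are assumed to exist elsewhere and the names are hypothetical, so treat it as an illustration rather than DXVK’s actual code.

    #include <vulkan/vulkan.h>

    /* Sketch only: assumes VK_EXT_transform_feedback was enabled at device creation
     * and that the handles and function pointers were created/loaded elsewhere
     * (e.g. via vkGetDeviceProcAddr). */
    static void record_stream_output_draw(VkCommandBuffer cmd,
                                          VkBuffer xfb_buffer,
                                          VkBuffer counter_buffer,
                                          uint32_t vertex_count,
                                          PFN_vkCmdBindTransformFeedbackBuffersEXT bind_xfb,
                                          PFN_vkCmdBeginTransformFeedbackEXT begin_xfb,
                                          PFN_vkCmdEndTransformFeedbackEXT end_xfb)
    {
        VkDeviceSize offset = 0;
        VkDeviceSize size = VK_WHOLE_SIZE;

        /* Bind the buffer that captured vertex output will be written into. */
        bind_xfb(cmd, 0, 1, &xfb_buffer, &offset, &size);

        /* The counter buffer records how much was written, letting a later draw
         * resume where this one stopped -- the piece D3D11 Stream Output needs. */
        begin_xfb(cmd, 0, 1, &counter_buffer, &offset);
        vkCmdDraw(cmd, vertex_count, 1, 0, 0);
        end_xfb(cmd, 0, 1, &counter_buffer, &offset);
    }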

Vulkan 1.1.88 Released With Transform Feedback As A Big Win For VKD3D / DXVK

Vulkan 1.1.88 is out this morning and it’s an exciting Vulkan update. Say hello to Vulkan transform feedback!

13 October 08:36 AM EDT –

Vulkan

– Vulkan 1.1.88

LibreOffice Lands More Qt5 Integration Improvements, LXQt Support

Recently there have been more improvements to LibreOffice’s Qt5 integration, allowing this open-source office suite to mesh better with Qt5-based desktops like KDE Plasma and now LXQt.

13 October 08:25 AM EDT –

LibreOffice

– LibreOffice Bits

Intel’s Vulkan Driver Is Working On A NIR Cache

As a possible performance win, Jason Ekstrand as the lead developer of the Intel ANV open-source Vulkan driver has been developing a NIR cache.

13 October 06:16 AM EDT –

Intel

– NIR Cache

12 October

Wine 3.18 Brings FreeType Subpixel Font Rendering, Wine Console DPI Scaling

A new bi-weekly Wine development release is out for those wanting to try the latest Windows gaming on Linux experience (outside of Steam Play / Proton) or running other Windows applications on Linux and other operating systems.

12 October 08:41 PM EDT –

WINE

– Wine 3.18

Ubuntu Touch OTA-5 Is Being Prepped With New Browser, Qt Auto Scaling

The UBports community that continues to maintain Ubuntu Touch for a range of mobile devices will soon be rolling out Ubuntu Touch OTA-5.

12 October 03:26 PM EDT –

Ubuntu

– UBports Ubuntu Touch

A Look At The Windows 10 October 2018 Update Performance With WSL

As the first of our Linux vs. Windows benchmarks coming around Microsoft’s Windows 10 October 2018 Update, today we are exploring the Windows Subsystem for Linux (WSL) performance to see if they have finally managed to improve the I/O performance for this Linux binary compatibility layer and how the WSL performs compared to Ubuntu and Clear Linux.

MidnightBSD 1.0 Is Ready To Shine With ZFS Support, Ryzen Compatibility

Especially with TrueOS once again taking a new direction, one of the few current BSDs focused on a great desktop experience is MidnightBSD, which is about to mark its 1.0 release.

12 October 09:51 AM EDT –

BSD

– MidnightBSD 1.0

Intel Whiskey Lake Support Formally Added To Mesa 18.3

The recently posted patch for Intel Whiskey Lake support in Mesa has now been merged for Mesa 18.3.

12 October 08:06 AM EDT –

Intel

– Mesa 18.3

GCC 6.5 Is Being Prepared As The Last GCC6 Compiler Release

Version 6.5 of the GNU Compiler Collection will soon be released to end out the GCC6 series.

12 October 07:07 AM EDT –

GNU

– GCC 6.5

La Frite: A Libre ARM SBC For $5, 10x Faster Than The Raspberry Pi Zero

The folks at the Libre Computer Project who have successfully released the Tritium, Le Potato, and other ARM SBCs while being as open-source friendly as possible have now announced La Frite.

12 October 06:21 AM EDT –

Hardware

– Libre Computer Board

Ubuntu’s Bring-Up Of NVIDIA’s Driver With Mir Continues

The Ubuntu developers continuing to work on the Mir display server stack have made headway in their NVIDIA driver enablement effort.

12 October 05:19 AM EDT –

Ubuntu

– Experimental NVIDIA + Mir

GNOME 3.31.1 Released As The First Step Towards GNOME 3.32

GNOME 3.31.1 was released on Thursday as the first step towards the GNOME 3.32 desktop update due out in March.

12 October 05:00 AM EDT –

GNOME

– GNOME 3.32 Desktop

Source

Linux-Focused Penguin Computing Banking On AI Infrastructure

Custom Linux-based system builder Penguin Computing on Tuesday formed a new practice focused on building data center infrastructures for artificial intelligence.

The new Penguin Computing Artificial Intelligence practice is a full-service consultancy focused on providing clients a base on which to build their artificial intelligence technologies, said Philip Pokorny, chief technology officer for the Fremont, Calif.-based system builder.

Penguin Computing was founded in 1998 to build solutions based exclusively on the Linux operating system, Pokorny told CRN.


“We’re proud to continue doing so,” he said. “As we grew, we developed the skills to do the complete integration of compute, storage, network, racks, and so on. It’s great for customers doing racks of computers at a time, or virtualization companies looking for pre-configured infrastructures. Now it’s AI. As customers look to scale up and scale out their AI, they need the rack-scale capabilities we provide.”

Penguin Computing is not positioning itself to put together the artificial intelligence itself, but is instead focusing on the underlying infrastructure to help those with artificial intelligence expertise, Pokorny said.

“We won’t tell you how to do AI,” he said. “We make it possible for you to do it at rack scale.”

Penguin Computing’s value-add in this business is the skill set it brings to assembling all the different components into a complete infrastructure, Pokorny said.

“A server company will sell you compute,” he said. “A storage company will sell you storage. They’re not going to provide the full rack. They’re not going to connect it for you. We provide it all under one roof.”

Artificial intelligence workloads have unique challenges that vary according to the size of specific workloads or the batch tests that lead to traffic issues, Pokorny said. Penguin addresses this by working with multiple storage partners who offer options ranging from 100-percent flash file systems to the company’s own cost-optimized offerings, he said.

The company also has strong partnerships with Nvidia for graphics and imaging-intense analysis, as well as with other providers of acceleration technology for various types of artificial intelligence workloads, Pokorny said.

He cited the case of one customer, whose name he declined to specify, which based its artificial intelligence offering on a world-class system built by Penguin. “If we put together all the hardware that we brought together for that client, it would rank in the top ten computers in the world, if they would let us benchmark it,” he said.

Source

Nuclear Reactor Startup Transatomic Power going Open Source after Closure

Last updated October 15, 2018 By Avimanyu Bandyopadhyay

Sometimes circumstances simply do not allow an idea to prosper as planned. But open source can solve that issue: once the idea is shared with the world, others can take on the work, build upon it, and keep improving it.

This recently happened with Transatomic Power (founded by Mark Massie and Dr. Leslie Dewan in April 2011), a nuclear startup that introduced a brand-new nuclear reactor design of its own that is a lot more efficient than conventional ones.

As they were not able to build it within their targeted timeframe, they announced the suspension of operations on September 25, 2018. But declaring their designs open source is certainly going to help change things for the better.

“We’re saddened to announce that Transatomic is ceasing operations. But we’re still optimistic and enthusiastic about the future of nuclear power. To that end we’re therefore open-sourcing our technology, making it freely available to all researchers and developers. We’re immensely grateful to the advanced reactor community, and we hope you build on our tech to make great things!”

Via the Twitter account of Transatomic Power

Things looked really promising in the early days.

Surely, the startup had some very noble goals, as described in its introduction video from 2016. But what went wrong? What are the good and bad takeaways from this news? Let’s discuss.

How different is Transatomic’s Design compared to conventional Nuclear Reactors?

Light Water Reactor vs Molten Salt Reactor (Image Credit: Transatomic)

Conventional nuclear reactors are most commonly built as light-water reactors, which are the most common type of thermal reactor. Transatomic’s reactor, on the other hand, is an improved version of the molten-salt reactor. Let’s briefly point out the differences:

Advantages of Transatomic Nuclear Reactors

  • Light-water reactors use fuel in solid form, while Transatomic’s molten-salt reactor uses liquid fuel, which makes maintenance easier.
  • Nuclear waste production in this molten-salt design is considerably lower (4.8 tons per year) than in light-water reactors (10 tons per year).
  • It is significantly safer than light-water reactors, even in worst-case accident scenarios.
  • It operates at atmospheric pressure, whereas light-water reactors run at roughly 100 times atmospheric pressure, which raises costs for the latter.

You can check out their Science (or should we now say, “Open Science”) page, where all the above points are discussed in detail, along with the white paper that highlights significant improvements over the original molten-salt reactor model.

From their assessment paper, we learned about SCALE, a comprehensive modeling and simulation suite for nuclear safety analysis and design, whose homepage is hosted by Oak Ridge National Laboratory. That lab is where the first molten-salt reactor was designed.

Why is making an Open Source Nuclear Reactor Design a better step for Humanity?

  • Broader scope for the scientific community to consistently improve the models.
  • An open model is always good news for our environment.
  • Similar and other industries will also be encouraged to adopt such open measures.

When Transatomic Wasn’t Open Source

Looking back, there were some claims that had to be revalidated in 2015 and were only endorsed early this year by Oak Ridge National Laboratory. We found this much earlier quote from co-founder Dr. Leslie Dewan:

“In early 2016, we realized there was a problem with our initial analysis and started working to correct the error,” cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review.

“In retrospect, that was a mistake of mine,” she said during the phone interview. “We should have open-published more of our information at a far earlier stage.”

Would Transatomic have had to go through all this had they been open source from day one? Clearly not. Even so, their initial intention was most definitely a noble one!

Following are the thoughts of Dr. Kord Smith, a professor of nuclear science and engineering at MIT and an expert in the physics of nuclear reactors, who analyzed the Transatomic reactor design in late 2015.

Smith stresses that the founders weren’t acting in bad faith, but he did note they didn’t subject their claims to the peer-review process early on.

“They didn’t do any of this intentionally,” Smith says. “It was just a lack of experience and perhaps an overconfidence in their own ability. And then not listening carefully enough when people were questioning the conclusions they were coming to.”

More importantly, Transatomic now realizes two very noteworthy principles highlighted on their Open Source page:

(1) Climate change is real, and unless massive action to de-carbonize the grid is taken soon, it will threaten much of humanity’s way of life.

(2) Novel nuclear technologies present the best way to address the issue, by rapidly expanding carbon-free energy at scale and making fossil fuels a thing of the past.

One critical point regarding the above two principles is that the newly available open resources from Transatomic will help address the issue of nuclear waste production and spur new ways to reduce it.

Though it is sad that the company is shutting down, a new addition to the open science community is certainly great news for open research practices, and we are glad about the latter development.

Their Open Source page is now titled “Open-sourcing our reactor design, and the future of Transatomic”. Considering the latter part of this title, can we expect more open designs from them in the future? We have a feeling that we haven’t yet seen the last of Transatomic Power!

Do you agree they should have followed an open source approach from the beginning? Do you like their new approach and improved design? Feel free to share your thoughts in the comments below.


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source

Turn Your Old PC into a Retrogaming Console with Lakka Linux

Last updated October 16, 2018 By Abhishek Prakash

If you have an old computer gathering dust, you can turn it into a PlayStation like retrogaming console with Lakka Linux distribution.

You probably already know that there are Linux distributions specially crafted for reviving older computers. But did you know about a Linux distribution that is created for the sole purpose of turning your old computer into a retro-gaming console?

Lakka is a Linux distribution specially for retrogaming

Meet Lakka, a lightweight Linux distribution that will transform your old or low-end computer (like a Raspberry Pi) into a complete retrogaming console.

When I say retrogaming console, I am serious about the console part. If you have ever used a PlayStation or Xbox, you know what a typical console interface looks like.

Lakka provides a similar interface and a similar experience. I’ll talk about the ‘experience’ later. Have a look at the interface first.

Lakka Retrogaming interface

Lakka: The Linux distribution for retrogaming

Lakka is the official Linux distribution of RetroArch and the Libretro ecosystem.

RetroArch is a frontend for retro game emulators and game engines. The interface you saw above is nothing but RetroArch. If you just want to play retro games, you can simply install RetroArch in your current Linux distribution.

Lakka provides the Libretro cores along with RetroArch. So you get a preconfigured operating system that you can install, or boot from a live USB, and start playing games.

Lakka is lightweight and you can install it on most old systems or single board computers like Raspberry Pi.

It supports a huge number of emulators. You just need to download the ROMs onto your system and Lakka will play the games from these ROMs. You can find the list of supported emulators and hardware here.

It enables you to run classic games on a wide range of computers and consoles through its slick graphical interface. Settings are also unified so configuration is done once and for all.

Let me summarize the main features of Lakka:

  • PlayStation like interface with RetroArch
  • Support for a number of retro game emulators
  • Supports up to 5 players gaming on the same system
  • Savestates allow you to save your progress at any moment in the game
  • You can improve the look of your old games with various graphical filters
  • You can join multiplayer games over the network
  • Out of the box support for a number of joypads like XBOX360, Dualshock 3, and 8bitdo
  • Unlike trophies and badges by connecting to RetroAchievements

Getting Lakka

Before you go on to install Lakka, you should know that it is still under development, so expect a few bugs here and there.

Keep in mind that Lakka only supports MBR partitioning. So if it doesn’t read your hard drive while installing, this could be the reason.

The FAQ section of the project answers the common doubts, so please refer to it for any further questions.

Do you like playing retro games? What emulators do you use? Have you ever used Lakka before? Share your views with us in the comments section.


About Abhishek Prakash

I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work.

Source

Download Oracle VM VirtualBox Linux 5.2.20

Oracle VirtualBox (formerly Sun VirtualBox, innotek VirtualBox and Sun xVM VirtualBox) is a free and cross-platform virtualization application that provides a family of powerful x86 virtualization tools designed for desktop, server and embedded use. VirtualBox can be used on Linux, Solaris, Mac OS X and Microsoft Windows platforms to run virtual machines of any of the aforementioned operating systems, as well as any BSD distribution, IBM OS/2 flavors, DOS, Netware, L4, QNX, and JRockitVE.

It’s portable

Oracle VirtualBox is portable, requires no hardware virtualization, includes guest additions and great hardware support. It also features USB device support, full ACPI support, multiscreen resolutions, and built-in iSCSI support. Support for PXE network boot, multi-generation branched snapshots, remote machine display, extensible RDP authentication, and USB over RDP (Remote Desktop Protocol) is also integrated in Oracle VirtualBox.

Supports 32-bit and 64-bit architectures

At the moment, the program is capable of running only AMD64/Intel64 and x86 architectures. By default, when creating a new virtual machine, you will be able to select the operating system that you plan on virtualizing. Ever since Oracle acquired Sun Microsystems, VirtualBox has been actively developed by a team of professional engineers who implement new features and functionality with every release.

Virtual machines can be highly customized

Once a new virtual machine has been created in VirtualBox, users will be able to change its type, version, boot order, chipset, pointing device, base memory (RAM), processors, video memory, monitor count, audio driver and controller, network adapters, serial and USB ports, and storage devices. When talking about storage devices supported by VirtualBox, we can mention that you will be able to use a virtual CD/DVD image file (also known as an ISO image) or use the host CD/DVD drive for running the virtualized OS.

The most sophisticated and powerful virtualization software

Support for USB devices is also a controversial feature of this application, because you will need to do some tweaking before it works as intended. But all in all, this is one of the most sophisticated and powerful virtualization applications for Linux operating systems.


Source

Download NetworkManager-libreswan Linux 1.2.10

NetworkManager-openswan is an open source package that offers Openswan support for the NetworkManager application.

NetworkManager is open-source software designed as a network connection manager for most Debian- and RPM-based distributions.

Openswan is a complete IPsec implementation designed especially for the Linux 2.0, 2.2, 2.4 and 2.6 kernel branches. It began as a fork of the FreeS/WAN project, which has been discontinued.

Openswan is also an open source project, distributed on many operating systems, including Linux.

NetworkManager offers great networking powered by DBus. It is used in many GNOME-based distributions.


Source

Ceph for the rest of us: Rethinking your storage


    The steady crush of data growth is at your doorstep, your storage arrays are showing their age, and it just doesn’t seem like you have either the budget, staff or the resources to keep up. Whether you recognize it or not, that’s the siren call for Ceph, the open-source distributed storage system designed for performance, reliability and scalability.

    The only rub is that, as an IT practitioner familiar with RAID, SANs and proprietary storage solutions of all shapes and sizes, there’s not much about Ceph that feels, well, comfortable. After all, Ceph uses something called “replication” or “erasure coding” instead of RAID; it provides block, object and file storage services all in one; and it scatters data across drives, servers and even geographical locations.

    Still, you have that gnawing sense that you need to get on board — even if it feels like the expertise you and your team need is just out of reach.

    The real challenge isn’t the technology itself; that commodity stuff — servers, fast networks and loads of drives — is familiar enough. Ceph expertise is really about getting accustomed to abstracting that familiar hardware and willingly handing off routine aspects of the cluster to more automated, DevOps-style approaches. It also helps to get your hands on a cluster of your very own to see just how it works.

    SUSE Enterprise Storage can help.

    SUSE is a primary contributor to the open-source Ceph project, and we’ve added a lot of upstream features and capabilities that have gone a long way toward shaping the technology for the enterprise. With SUSE Enterprise Storage, we’ve made the technology even more attainable by automating the deployment with Salt and DeepSea, a collection of Salt files for deploying Ceph.

    With the latest releases of SUSE Enterprise Storage, you can use DeepSea to deploy Ceph in hours, not days or weeks. With the openATTIC graphical dashboard, newcomers can get a feel for how a Ceph storage array works while the slightly more expert can use it to manage, maintain and use the cluster. For example, the Dashboard makes adding iSCSI, NFS or other shared storage services straightforward and familiar, which gives you and your team more confidence with the technology.

    In the image below, you can see how the dashboard offers a real-time view of the cluster, including available storage capacity, health and availability:

    The SUSE Enterprise Storage dashboard with openATTIC.

    The visual information on the Dashboard is just the start. Under the covers it’s a SUSE Linux Enterprise Server 12 SP3 running as a Salt master that controls any number of Salt minions, which provide monitor, manager, storage, RADOS, iSCSI, NFS and other services to your storage cluster.

    Instead of wrestling with resources or manually figuring out how to set up an iSCSI gateway, for example, SUSE Enterprise Storage starts by automating the deployment in a predictable, reliable way, then gives you a graphical way to interact with all the components. It also gives you the flexibility to create storage pools and make them available through the gateways you want. Adding other services to your Ceph cluster requires only minor modifications to a straightforward policy.cfg file, which you apply with Salt to add even more capabilities and capacity:

    The policy.cfg defines your various nodes, including all your Ceph minions.

    In this policy.cfg example, you can see the iSCSI gateway (role-igw) service role applied to any node you’ve assigned a hostname that begins with “igw”. Other Ceph cluster roles are assigned to other nodes, which work together to replicate data, set up storage pools and make it all accessible through familiar APIs, the command line and the dashboard.
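
    As a rough illustration only (not taken from the article's screenshot), a minimal policy.cfg along these lines might look like the sketch below; the hostnames (admin*, mon*, igw*, data*) are hypothetical, and the exact role and profile lines vary between SUSE Enterprise Storage releases, so check the DeepSea documentation for your version.

        # All matching minions join the Ceph cluster.
        cluster-ceph/cluster/*.sls

        # Salt master / admin roles on the admin* host.
        role-master/cluster/admin*.sls
        role-admin/cluster/admin*.sls

        # Monitor and manager roles on the mon* hosts.
        role-mon/cluster/mon*.sls
        role-mgr/cluster/mon*.sls

        # iSCSI gateway role for any host whose name begins with "igw".
        role-igw/cluster/igw*.sls

        # Storage (OSD) profiles proposed by DeepSea for the data* hosts.
        profile-default/cluster/data*.sls
        profile-default/stack/default/ceph/minions/data*.yml

        # Default configuration stacks.
        config/stack/default/global.yml
        config/stack/default/ceph/cluster.yml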

    Adding the role-igw role to your Ceph cluster from the example above provides the iSCSI service to your cluster, which enables you to add new iSCSI shares from the dashboard at will:

    The GUI makes adding iSCSI and other gateways straightforward.

    Next steps

    Of course, the key to any storage deployment is good planning, and regardless of the tools you use, you need to figure out how your Ceph storage cluster will be used — today and into the future. There’s no shortcut to good planning, but that part should be familiar to anyone who’s managed enterprise storage.

    In Part 2 of this SUSE Enterprise Storage series, I’ll show you how to sketch out a small-scale proof-of-concept Ceph plan and deploy SUSE Enterprise Storage in a purely virtual environment. This lab environment won’t be suitable for production purposes, but it will give you a working storage cluster that looks, feels and acts just like a full-blown deployment.


      Source

ARaymond Migrates to SAP HANA with SUSE


Since its founding more than 150 years ago, ARaymond has become a market leader in the manufacture of assembly and fastening solutions. The company supplies specialized parts to the automotive, industrial, agricultural, energy, and pharmaceutical sectors. ARaymond has an international presence, with 6,500 employees spread across 26 manufacturing sites on four continents.

THE CHALLENGE
To optimize productivity while minimizing waste, manufacturers in the automotive, industrial, and pharmaceutical sectors generally use the just-in-time production model. For assembly lines to run correctly and economically, these manufacturers' suppliers must meet deadlines scrupulously in order to avoid heavy financial penalties and to limit damage to the company's reputation.

"ARaymond's success depends on our ability to keep producing and distributing fastening devices and assembly solutions quickly and efficiently. Any delivery delay could cause a very costly production stoppage for our customers, potentially leading to penalties and harming our reputation as an industry leader." Jérôme Rézé, Infrastructure Director at ARaymond

ARaymond IT Technology, the in-house IT department with 170 employees, provides an essential centralized service. It deployed SAP so it could drop support for all other databases and opted for an early move to the SAP HANA database, allowing a smooth migration without haste and without interrupting key business systems. The migration also made it possible to address certain performance issues.

"We knew we would need to migrate to SAP HANA in the near future. So we decided to limit the risks, as well as disruption to the business, by starting the migration without delay. This approach allowed us to take our time, beginning with the migration of the least critical systems." Marc Coste, SAP Technical Leader

THE SOLUTION
To support its new SAP HANA database servers, ARaymond chose SUSE Linux Enterprise Server for SAP Applications, paired with the SUSE Linux Enterprise High Availability Extension.

"We initially considered rolling out Red Hat Enterprise Linux on a larger scale, since we had used it for non-SAP applications for several years. However, when we worked with our partner TeamWork to weigh the pros and cons of each distribution, we quickly found that the vast majority of the SAP market uses SUSE Linux Enterprise, which means a very solid ecosystem for SAP software around the SUSE operating system. The commercial offering of SUSE Linux Enterprise Server for SAP Applications was also far more attractive, and we liked the idea of having a version specifically tailored to the requirements of SAP solutions." Jérôme Rézé

All the details of this migration are available in the case study.


          Source

          The newest intelligent supercomputer – Red Hat Enterprise Linux Blog

          Summit, the world’s fastest supercomputer running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads – it is described as “the world’s first AI supercomputer”. The use of standard components and software makes it easy to port existing applications to Summit as well as develop new applications. As pointed out by Buddy Bland, Project Director for the ORNL Leadership Computing Facility, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

           Summit’s system is built using a “fat node” building-block concept, where each identically configured node is a powerful IBM Power System AC922 server interconnected with the others via a high-bandwidth dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node in the system has:

          SUMMIT SUPERCOMPUTER NODE COMPOSITION

           The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, prominent use of graphics processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

           So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.

          The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in the early science program. To help drive optimal use of the full system as soon as it was available, ORNL identified a set of research projects that were given access to small subsets of the full Summit system while Summit was being built. This enabled the applications to be ported to the Summit architecture, optimized for Summit, and be ready to scale out to the full system as soon as it was available. These early applications include astrophysics, materials science, systems biology, cancer research, and AI/ML.

          Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data. It needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power to run. The Summit system is not only flexible and versatile in the way it can handle workloads, it also withstands one of the biggest challenges of today’s supercomputers – excessive power consumption. Besides being the fastest supercomputer on the planet, it is equally significant that Summit performs well on the Green500 list – a supercomputer measurement of speed and efficiency which puts a premium on energy-efficient performance for sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong performance.

          In summary, the fastest supercomputer in the world supports diverse application requirements, driven by simulation, big data, and AI/ML, employs the latest processor, acceleration and interconnect technologies from IBM, Nvidia and Mellanox, respectively, and shows unprecedented power efficiency for that scale of machines. Critical to the success of this truly versatile system is Linux, in Red Hat Enterprise Linux, as the glue that brings everything together and allows us to interact with this modern marvel.

          Source

          Software Freedom Conservancy Shares Thoughts on Microsoft Joining Open Invention Network’s Patent Non-Aggression Pact


           Posted by msmash on Sunday October 14, 2018 @06:10PM from the minute-details dept.

           Earlier this week, Microsoft announced that it was joining the open-source patent consortium Open Invention Network (OIN).

           The press release the two shared this week was short on details on how the two organizations intend to work together and what the move means for, for instance, the billions of dollars Microsoft earns each year from its Android patents (since Google is a member of OIN, too). Software Freedom Conservancy (SFC), a non-profit organization that promotes open-source software, has weighed in on the subject:
          While [this week’s] announcement is a step forward, we call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community. The OIN patent non-aggression pact is governed by something called the Linux System Definition. This is the most important component of the OIN non-aggression pact, because it’s often surprising what is not included in that Definition especially when compared with Microsoft’s patent aggression activities. Most importantly, the non-aggression pact only applies to the upstream versions of software, including Linux itself.
          We know that Microsoft has done patent troll shakedowns in the past on Linux products related to the exfat filesystem. While we at Conservancy were successful in getting the code that implements exfat for Linux released under GPL (by Samsung), that code has not been upstreamed into Linux. So, Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.
          We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later. This would provide two important protections to Linux users regarding exfat: (a) it would include any patents that read on exfat as part of OIN’s non-aggression pact while Microsoft participates in OIN, and (b) it would provide the various benefits that GPLv2-or-later provides regarding patents, including an implied patent license and those protections provided by GPLv2 (and possibly other GPL protections and assurances as well).

           


          Source
