Linux-Focused Penguin Computing Banking On AI Infrastructure

Custom Linux-based system builder Penguin Computing on Tuesday formed a new practice focused on building data center infrastructures for artificial intelligence.

The new Penguin Computing Artificial Intelligence Practice is a full-service consultancy focused on providing clients a base on which to build their artificial intelligence technologies, said Philip Pokorny, chief technology officer for the Fremont, Calif.-based system builder.

Penguin Computing was founded in 1998 to build solutions based exclusively on the Linux operating system, Pokorny told CRN.

[Related: CRN Exclusive: Pure Storage CEO Giancarlo On A.I., Innovation, And M&A Strategy]

“We’re proud to continue doing so,” he said. “As we grew, we developed the skills to do the complete integration of compute, storage, network, racks, and so on. It’s great for customers doing racks of computers at a time, or virtualization companies looking for pre-configured infrastructures. Now it’s AI. As customers look to scale up and scale out their AI, they need the rack-scale capabilities we provide.”

Penguin Computing is not positioning itself as a provider of artificial intelligence expertise, but is instead focusing on helping those who have that expertise with the underlying infrastructure, Pokorny said.

“We won’t tell you how to do AI,” he said. “We make it possible for you to do it at rack scale.”

Penguin Computing’s value-add in this business is the skill set it brings to assembling all the different components into a complete infrastructure, Pokorny said.

“A server company will sell you compute,” he said. “A storage company will sell you storage. They’re not going to provide the full rack. They’re not going to connect it for you. We provide it all under one roof.”

Artificial intelligence workloads have unique challenges that vary according to the size of specific workloads or the batch tests that can lead to traffic issues, Pokorny said. Penguin addresses this by working with multiple storage partners whose options range from all-flash file systems to the company’s own cost-optimized offerings, he said.

The company also has strong partnerships with Nvidia for graphics- and imaging-intensive analysis, as well as with other providers of acceleration technology for various types of artificial intelligence workloads, Pokorny said.

He cited the case of one customer, whose name he declined to specify, which based its artificial intelligence offering on a world-class system built by Penguin. “If we put together all the hardware that we brought together for that client, it would rank among the top ten computers in the world, if they would let us benchmark it,” he said.

Source

Nuclear Reactor Startup Transatomic Power going Open Source after Closure

Last updated October 15, 2018 by Avimanyu Bandyopadhyay

Sometimes circumstances do not allow an idea to prosper as planned. But open source can solve that problem: once the idea is shared with the world, others can take on the work, build upon it, and keep improving it.

This recently happened with Transatomic Power (founded by Mark Massie and Dr. Leslie Dewan in April 2011), a nuclear startup that introduced a brand-new design for its own nuclear reactor, which it described as far more efficient than conventional ones.

Because they were not able to build it within their targeted timeframe, they announced on September 25, 2018, that they were suspending operations. But declaring their designs open source may well help change things for the better.

“We’re saddened to announce that Transatomic is ceasing operations. But we’re still optimistic and enthusiastic about the future of nuclear power. To that end we’re therefore open-sourcing our technology, making it freely available to all researchers and developers. We’re immensely grateful to the advanced reactor community, and we hope you build on our tech to make great things!”

Via the Twitter account of Transatomic Power

Things looked really promising in the early days. The startup had some very noble goals, as described in its introduction video from 2016. But what went wrong? What are the good and the bad takeaways from this news? Let’s discuss.

How different is Transatomic’s Design compared to conventional Nuclear Reactors?

Light-water reactor vs. molten-salt reactor (image credit: Transatomic)

Conventional nuclear reactors are most often built as light-water reactors, the most common type of thermal reactor. Transatomic’s reactors, on the other hand, are improved versions of molten-salt reactors. Let’s briefly point out the differences:

Advantages of Transatomic Nuclear Reactors

  • Light-water reactors use fuel in solid form, while Transatomic’s molten-salt reactors use liquid fuel, which makes maintenance easier.
  • Nuclear waste production in this molten-salt design is considerably lower (4.8 tons per year) than in light-water reactors (10 tons per year).
  • It is significantly safer than light-water reactors, even in the worst-case accident scenarios.
  • It operates at atmospheric pressure, whereas light-water reactors run at roughly 100 times that pressure, which raises costs for the latter.

You can check out their Science (or should we now say “Open Science”) page, where all the above points are discussed in detail, along with the white paper that highlights significant improvements over the original molten-salt reactor model.

From their assessment paper, we learned about SCALE, a comprehensive modeling and simulation suite for nuclear safety analysis and design, hosted by Oak Ridge National Laboratory, the lab where the first molten-salt reactor was designed.

Why is making an Open Source Nuclear Reactor Design a better step for Humanity?

  • The scientific community gets broader scope to consistently improve the models.
  • An open model is good news for the environment.
  • Other industries may be encouraged to adopt similar open approaches.

When Transatomic Wasn’t Open Source

Looking back, some of the company’s claims had to be re-examined after its analysis was questioned in late 2015, and the corrected design was endorsed by Oak Ridge National Laboratory early this year. We found this much earlier quote from co-founder Dr. Leslie Dewan:

“In early 2016, we realized there was a problem with our initial analysis and started working to correct the error,” cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review.

“In retrospect, that was a mistake of mine,” she said during the phone interview. “We should have open-published more of our information at a far earlier stage.”

Would Transatomic have had to go through all this had they been open source from day one? Clearly not. And their initial intention was most definitely a noble one!

Following are the thoughts of Dr. Kord Smith, a professor of nuclear science and engineering at MIT and an expert in the physics of nuclear reactors, who analyzed the Transatomic reactor design in late 2015.

Smith stresses that the founders weren’t acting in bad faith, but he did note they didn’t subject their claims to the peer-review process early on.

“They didn’t do any of this intentionally,” Smith says. “It was just a lack of experience and perhaps an overconfidence in their own ability. And then not listening carefully enough when people were questioning the conclusions they were coming to.”

More importantly, Transatomic now recognizes two very noteworthy principles, highlighted on their open source page:

(1) Climate change is real, and unless massive action to de-carbonize the grid is taken soon, it will threaten much of humanity’s way of life.

(2) Novel nuclear technologies present the best way to address the issue, by rapidly expanding carbon-free energy at scale and making fossil fuels a thing of the past.

Considering these two principles, one critical point is that the newly available open resources from Transatomic can help address the issue of nuclear waste production and spur new ways to reduce it.

Though it is sad that the company is shutting down, this new addition to the open science community is certainly great news for open research practices, and we are glad about this latest development.

Their open source page is now titled “Open-sourcing our reactor design, and the future of Transatomic”. Considering the latter part of that title, can we expect more open designs from them in the future? We have a feeling that we haven’t yet seen the last of Transatomic Power!

Do you agree that they should have followed an open source approach from the very beginning? Do you like their new approach and improved design? Feel free to share your thoughts in the comments below.


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source

Turn Your Old PC into a Retrogaming Console with Lakka Linux

Last updated October 16, 2018 by Abhishek Prakash

If you have an old computer gathering dust, you can turn it into a PlayStation-like retrogaming console with the Lakka Linux distribution.

You probably already know that there are Linux distributions specially crafted for reviving older computers. But did you know about a Linux distribution that is created for the sole purpose of turning your old computer into a retro-gaming console?

Lakka is a Linux distribution built specially for retrogaming

Meet Lakka, a lightweight Linux distribution that will transform your old or low-end computer (like a Raspberry Pi) into a complete retrogaming console.

When I say retrogaming console, I am serious about the console part. If you have ever used a PlayStation or Xbox, you know what a typical console interface looks like.

Lakka provides a similar interface and a similar experience. I’ll talk about the ‘experience’ later. Have a look at the interface first.

[Video: the Lakka retrogaming interface]

Lakka: the Linux distribution for retrogaming

Lakka is the official Linux distribution of RetroArch and the Libretro ecosystem.

RetroArch is a frontend for retro game emulators and game engines. The interface you saw in the video above is nothing but RetroArch. If you just want to play retro games, you can simply install RetroArch in your current Linux distribution.

Lakka bundles RetroArch with the Libretro cores, so you get a preconfigured operating system that you can install, or run from a live USB, and start playing games.

Lakka is lightweight, and you can install it on most old systems or single-board computers like the Raspberry Pi.

It supports a huge number of emulators. You just need to download the ROMs onto your system and Lakka will play the games from these ROMs. You can find the list of supported emulators and hardware here.
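As a rough illustration of that workflow, ROMs can be copied to a Lakka machine over the network once SSH has been enabled in its settings; the IP address and file name below are assumptions for the example, and Lakka keeps its ROMs under /storage/roms:

    # Copy a ROM to the Lakka box (default SSH user is root; replace the address with yours)
    scp ~/Downloads/some-game.sfc root@192.168.1.50:/storage/roms/

After a rescan of the ROM directory (or a reboot), the game should show up under the matching system in the Lakka menu.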

It enables you to run classic games on a wide range of computers and consoles through its slick graphical interface. Settings are also unified so configuration is done once and for all.

Let me summarize the main features of Lakka:

  • PlayStation-like interface with RetroArch
  • Support for a number of retro game emulators
  • Supports up to 5 players gaming on the same system
  • Savestates allow you to save your progress at any moment in the game
  • You can improve the look of your old games with various graphical filters
  • You can join multiplayer games over the network
  • Out-of-the-box support for a number of joypads like Xbox 360, DualShock 3, and 8Bitdo
  • Unlock trophies and badges by connecting to RetroAchievements

Getting Lakka

Before you go on installing Lakka, you should know that it is still under development, so expect a few bugs here and there.

Keep in mind that Lakka only supports MBR partitioning, so if it doesn’t read your hard drive while installing, this could be the reason.

The FAQ section of the project answers common questions, so please refer to it if you have any further doubts.

Do you like playing retro games? What emulators do you use? Have you ever used Lakka before? Share your views with us in the comments section.


About Abhishek Prakash

I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work.

Source

Download Oracle VM VirtualBox Linux 5.2.20

Oracle VirtualBox (formerly Sun VirtualBox, innotek VirtualBox and Sun xVM VirtualBox) is a free and cross-platform virtualization application that provides a family of powerful x86 virtualization tools designed for desktop, server and embedded use. VirtualBox can be used on Linux, Solaris, Mac OS X and Microsoft Windows platforms to run virtual machines of any of the aforementioned operating systems, as well as any BSD distribution, IBM OS/2 flavors, DOS, NetWare, L4, QNX, and JRockitVE.

It’s portable

Oracle VirtualBox is portable, does not require hardware virtualization, includes Guest Additions, and offers great hardware support. It also features USB device support, full ACPI support, multiscreen resolutions, and built-in iSCSI support. Support for PXE network boot, multi-generation branched snapshots, remote machine display, extensible RDP authentication, and USB over RDP (Remote Desktop Protocol) is also integrated in Oracle VirtualBox.

Supports 32-bit and 64-bit architectures

At the moment, the program supports only the x86 (32-bit) and AMD64/Intel64 (64-bit) architectures. By default, when creating a new virtual machine, you will be able to select the operating system that you plan on virtualizing. Ever since Oracle acquired Sun Microsystems, VirtualBox has been actively developed by a team of professional engineers who implement new features and functionality with every release.

Virtual machines can be highly customized

Once a new virtual machine has been created in VirtualBox, users will be able to change its type, version, boot order, chipset, pointing device, base memory (RAM), processors, video memory, monitor count, audio driver and controller, network adapters, serial and USB ports, and storage devices. As far as storage devices are concerned, you can either use a virtual CD/DVD image file (also known as an ISO image) or use the host CD/DVD drive for running the virtualized OS.
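If you prefer the command line, the same kind of settings can also be adjusted with the VBoxManage tool that ships with VirtualBox. The sketch below is illustrative only; the VM name, ISO path and values are assumptions:

    # Create and register a new virtual machine (name and OS type are examples)
    VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register

    # Adjust base memory, CPU count, video memory and boot order
    VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2 --vram 64 --boot1 dvd --boot2 disk

    # Add a SATA controller and attach an ISO image as a virtual DVD drive
    VBoxManage storagectl "demo-vm" --name "SATA" --add sata --controller IntelAhci
    VBoxManage storageattach "demo-vm" --storagectl "SATA" --port 0 --device 0 --type dvddrive --medium ~/Downloads/ubuntu.iso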

The most sophisticated and powerful virtualization software

Support for USB devices may require some tweaking before it works as intended. But all in all, this is one of the most sophisticated and powerful virtualization packages for Linux operating systems.


Source

Download NetworkManager-libreswan Linux 1.2.10

NetworkManager-libreswan (formerly NetworkManager-openswan) is an open source package that offers Openswan/Libreswan support for the NetworkManager application.

NetworkManager is open source software designed as a network connection manager for most Debian- and RPM-based distributions.

Openswan is a complete IPsec implementation designed especially for the Linux 2.0, 2.2, 2.4 and 2.6 kernel branches. It began as a fork of the FreeS/WAN project, which has been discontinued.

Openswan is also an open source project, distributed on many operating systems, including Linux.

NetworkManager offers great networking powered by D-Bus. It is used in many GNOME-based distributions.


Source

Ceph for the rest of us: Rethinking your storage


The steady crush of data growth is at your doorstep, your storage arrays are showing their age, and it just doesn’t seem like you have the budget, the staff or the resources to keep up. Whether you recognize it or not, that’s the siren call for Ceph, the open-source distributed storage system designed for performance, reliability and scalability.

The only rub is that, as an IT practitioner familiar with RAID, SANs and proprietary storage solutions of all shapes and sizes, there’s not much about Ceph that feels, well, comfortable. After all, Ceph uses something called “replication” or “erasure coding” instead of RAID; it provides block, object and file storage services all in one; and it scatters data across drives, servers and even geographical locations.
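To make those terms a little more concrete, here is a rough sketch of how the two data-protection schemes are expressed with the standard ceph command-line tool; the pool names, placement-group counts and profile values are illustrative assumptions rather than recommendations:

    # Replicated pool: three full copies of every object, so two copies can be lost,
    # at the cost of roughly 3x raw capacity
    ceph osd pool create rep-pool 64 64 replicated
    ceph osd pool set rep-pool size 3

    # Erasure-coded pool: objects are split into k=4 data chunks plus m=2 coding chunks,
    # tolerating two failures while using only about 1.5x raw capacity
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2
    ceph osd pool create ec-pool 64 64 erasure ec-4-2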

Still, you have that gnawing sense that you need to get on board — even if it feels like the expertise you and your team need is just out of reach.

The real challenge isn’t the technology itself; that commodity stuff — servers, fast networks and loads of drives — is familiar enough. Ceph expertise is really about getting accustomed to abstracting that familiar hardware and willingly handing off routine aspects of the cluster to more automated, DevOps-style approaches. It also helps to get your hands on a cluster of your very own to see just how it works.

SUSE Enterprise Storage can help.

SUSE is a primary contributor to the open-source Ceph project, and we’ve added a lot of upstream features and capabilities that have gone a long way toward shaping the technology for the enterprise. With SUSE Enterprise Storage, we’ve made the technology even more attainable by automating the deployment with Salt and DeepSea, a collection of Salt files for deploying Ceph.

With the latest releases of SUSE Enterprise Storage, you can use DeepSea to deploy Ceph in hours, not days or weeks. With the openATTIC graphical dashboard, newcomers can get a feel for how a Ceph storage array works while the slightly more expert can use it to manage, maintain and use the cluster. For example, the Dashboard makes adding iSCSI, NFS or other shared storage services straightforward and familiar, which gives you and your team more confidence with the technology.

In the image below, you can see how the dashboard offers a real-time view of the cluster, including available storage capacity, health and availability:

[Image: The SUSE Enterprise Storage dashboard with openATTIC.]

The visual information on the Dashboard is just the start. Under the covers is a SUSE Linux Enterprise Server 12 SP3 host running as a Salt master that controls any number of Salt minions, which provide monitor, manager, storage, RADOS, iSCSI, NFS and other services to your storage cluster.

Instead of wrestling with resources or manually figuring out how to set up an iSCSI gateway, for example, SUSE Enterprise Storage starts by automating the deployment in a predictable, reliable way, then gives you a graphical way to interact with all the components. It also gives you the flexibility to create storage pools and make them available through the gateways you want. Adding other services to your Ceph cluster requires only minor modifications to a straightforward policy.cfg file, which you apply with Salt to add even more capabilities and capacity:

[Image: The policy.cfg defines your various nodes, including all your Ceph minions.]

In this policy.cfg example, you can see the iSCSI gateway (role-igw) service role applied to any node you’ve assigned a hostname that begins with “igw”. Other Ceph cluster roles are assigned to other nodes, which work together to replicate data, set up storage pools and make it all accessible through familiar APIs, the command line and the dashboard.
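For readers following along without the screenshot, the sketch below shows roughly what such a policy.cfg can look like. The hostname patterns and profile lines are illustrative assumptions; the SUSE Enterprise Storage documentation remains the authoritative reference for the exact format:

    # Every minion joins the Ceph cluster
    cluster-ceph/cluster/*.sls
    # Master and admin roles on the admin node
    role-master/cluster/admin*.sls
    role-admin/cluster/admin*.sls
    # Monitor and manager roles on the mon nodes
    role-mon/cluster/mon*.sls
    role-mgr/cluster/mon*.sls
    # iSCSI gateway role on any node whose hostname begins with "igw"
    role-igw/cluster/igw*.sls
    # Common configuration and the generated storage profiles
    config/stack/default/global.yml
    config/stack/default/ceph/cluster.yml
    profile-default/cluster/*.sls
    profile-default/stack/default/ceph/minions/*.yml

After editing the file, the change is applied with Salt by re-running the relevant DeepSea deployment stages.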

Adding the role-igw role to your Ceph cluster from the example above provides the iSCSI service to your cluster, which enables you to add new iSCSI shares from the dashboard at will:

[Image: The GUI makes adding iSCSI and other gateways straightforward.]

Next steps

Of course, the key to any storage deployment is good planning, and regardless of the tools you use, you need to figure out how your Ceph storage cluster will be used — today and into the future. There’s no shortcut to good planning, but that part should be familiar to anyone who’s managed enterprise storage.

In Part 2 of this SUSE Enterprise Storage series, I’ll show you how to sketch out a small-scale proof-of-concept Ceph plan and deploy SUSE Enterprise Storage in a purely virtual environment. This lab environment won’t be suitable for production purposes, but it will give you a working storage cluster that looks, feels and acts just like a full-blown deployment.


Source

ARaymond Migrates to SAP HANA with SUSE


Since its founding more than 150 years ago, ARaymond has become a market leader in the manufacture of assembly and fastening solutions. The company supplies specialized parts to the automotive, industrial, agricultural, energy and pharmaceutical sectors. ARaymond has an international presence, with 6,500 employees spread across 26 manufacturing sites on four continents.

THE CHALLENGE
To optimize productivity while minimizing waste, manufacturers in the automotive, industrial and pharmaceutical sectors generally rely on the just-in-time production model. For assembly lines to run correctly and economically, these manufacturers’ suppliers must meet deadlines scrupulously in order to avoid heavy financial penalties and limit the damage to the company’s reputation.

“ARaymond’s success depends on our ability to keep producing and distributing fastening and assembly solutions quickly and efficiently. Any delivery delay could cause a very costly production stoppage for our customers, which could potentially lead to penalties and harm our reputation as an industry leader.” Jérôme Rézé, Infrastructure Director at ARaymond

ARaymond IT Technology, the company’s 170-strong internal IT department, provides an essential centralized service. It standardized on SAP in order to drop support for all other databases, and opted for an early move to the SAP HANA database so the migration could proceed smoothly, without rushing and without interrupting key business systems. The migration also made it possible to address certain performance issues.

“We knew we would need to migrate to SAP HANA in the near future. So we decided to limit the risks, as well as the disruption to the business, by starting the migration without delay. This approach allowed us to take our time, beginning with the migration of the least critical systems.” Marc Coste, SAP Technical Leader

THE SOLUTION
To support its new SAP HANA database servers, ARaymond chose SUSE Linux Enterprise Server for SAP Applications, combined with the SUSE Linux Enterprise High Availability Extension.

“We initially considered implementing Red Hat Enterprise Linux on a larger scale, since we had been using it for non-SAP applications for several years. However, when we worked with our partner TeamWork to weigh the pros and cons of each distribution, we quickly found that the vast majority of the SAP market runs on SUSE Linux Enterprise, which means a very strong ecosystem for SAP software around the SUSE operating system. The commercial offering of SUSE Linux Enterprise Server for SAP Applications was also far more attractive, and we liked the idea of having a version specifically tailored to the requirements of SAP solutions.” Jérôme Rézé

All the details of this migration are available in the case study.


Source

The newest intelligent supercomputer – Red Hat Enterprise Linux Blog

Summit, the world’s fastest supercomputer running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads – it is described as “the world’s first AI supercomputer”. The use of standard components and software makes it easy to port existing applications to Summit as well as develop new applications. As pointed out by Buddy Bland, Project Director for the ORNL Leadership Computing Facility, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

Summit’s system is built using a “fat node” building-block concept, where each identically configured node is a powerful IBM Power System AC922 server interconnected with the others via a high-bandwidth, dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node in the system has:

[Figure: Summit supercomputer node composition]

The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, prominent use of graphics processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.

The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in the early science program. To help drive optimal use of the full system as soon as it was available, ORNL identified a set of research projects that were given access to small subsets of the full Summit system while Summit was being built. This enabled the applications to be ported to the Summit architecture, optimized for Summit, and be ready to scale out to the full system as soon as it was available. These early applications include astrophysics, materials science, systems biology, cancer research, and AI/ML.

Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data. It needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power to run. The Summit system is not only flexible and versatile in the way it can handle workloads, it also withstands one of the biggest challenges of today’s supercomputers – excessive power consumption. Besides being the fastest supercomputer on the planet, it is equally significant that Summit performs well on the Green500 list – a supercomputer measurement of speed and efficiency which puts a premium on energy-efficient performance for sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong performance.

In summary, the fastest supercomputer in the world supports diverse application requirements, driven by simulation, big data, and AI/ML; employs the latest processor, acceleration and interconnect technologies from IBM, Nvidia and Mellanox, respectively; and shows unprecedented power efficiency for machines of that scale. Critical to the success of this truly versatile system is Linux, in the form of Red Hat Enterprise Linux, the glue that brings everything together and allows us to interact with this modern marvel.

Source

Software Freedom Conservancy Shares Thoughts on Microsoft Joining Open Invention Network’s Patent Non-Aggression Pact (sfconservancy.org)


Posted by msmash on Sunday October 14, 2018 @06:10PM from the minute-details dept.

Earlier this week, Microsoft announced that it was joining the open-source patent consortium Open Invention Network (OIN). The press release the two shared this week was short on details about how the two organizations intend to work together and what the move means for, for instance, the billions of dollars Microsoft earns each year from its Android patents (since Google is a member of OIN, too). Software Freedom Conservancy (SFC), a non-profit organization that promotes open-source software, has weighed in on the subject:

While [this week’s] announcement is a step forward, we call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community. The OIN patent non-aggression pact is governed by something called the Linux System Definition. This is the most important component of the OIN non-aggression pact, because it’s often surprising what is not included in that Definition especially when compared with Microsoft’s patent aggression activities. Most importantly, the non-aggression pact only applies to the upstream versions of software, including Linux itself.

We know that Microsoft has done patent troll shakedowns in the past on Linux products related to the exfat filesystem. While we at Conservancy were successful in getting the code that implements exfat for Linux released under GPL (by Samsung), that code has not been upstreamed into Linux. So, Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.

We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later. This would provide two important protections to Linux users regarding exfat: (a) it would include any patents that read on exfat as part of OIN’s non-aggression pact while Microsoft participates in OIN, and (b) it would provide the various benefits that GPLv2-or-later provides regarding patents, including an implied patent license and those protections provided by GPLv2 (and possibly other GPL protections and assurances as well).

           



Source

Open Hardware – Challenges » Linux Magazine

Changes in funding, manufacturing, and technology have helped move open hardware from an idea to reality.

Like free software, open hardware was an idea before it was a reality. Until developments in the tech industry caught up with the idea, open hardware was impractical. Even now, in 2018, open hardware is at the stage where free software was in about 1999: ready to make its mark, but not being developed by major hardware manufacturers.

As late as 1999, Richard M. Stallman of the Free Software Foundation (FSF) downplayed the practicality of what he called free hardware. In “On ‘Free Hardware’” [1], Stallman suggested that working on free hardware was “a fine thing to do” and said that the FSF would put enthusiasts in touch with each other. However, while firmware is just software, and specifications could be made freely available, he did not think that either would do much good because of the difficulties of manufacturing, writing:

We don’t have automatic copiers for hardware. (Maybe nanotechnology will provide that capability.) So you must expect that making fresh a copy of some hardware will cost you, even if the hardware or design is free. The parts will cost money, and only a very good friend is likely to make circuit boards or solder wires and chips for you as a favor.

[…]


Source
