Dell XPS 13: The best Linux laptop of 2018

Usually, when I get review hardware in, it’s not a big deal. It’s like working in a candy shop. At first, it seems great (“All the candy I can eat!”). Then, you quickly get sick of dealing with the extra equipment.

But, every now and again, I get a really fine machine, like Dell’s latest XPS 13 Developer Edition laptop. And I get excited again.

There’s this persistent fake news story that you can’t buy a computer with Linux pre-installed on it. It’s nonsense. Dell has been selling Ubuntu Linux-powered computers since 2007. Dell, like Linux-specific desktop companies such as System76, also sells high-end systems, such as its Precision mobile workstations. At the top end of Dell’s Ubuntu Linux line, you’ll find the Dell XPS 13 Developer Edition laptops.


What makes them a “Developer Edition,” besides the top-of-the-line hardware, is their software configuration. Canonical, Ubuntu‘s parent company, and Dell worked together to certify Ubuntu 18.04 LTS on the XPS 13 9370. It worked flawlessly on my review system.

Now, Ubuntu runs without a hitch on almost any PC, but the XPS 13 was the first one I’d seen that comes with the option to automatically install the Canonical Livepatch Service. This Ubuntu Advantage support package applies critical kernel patches in such a way that you won’t need to reboot your system. With new Spectre and Meltdown bugs still appearing, you can count on more critical updates coming down the road.
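
If you want to set this up by hand, it is a short job at the command line. The following is a minimal sketch, assuming a current Ubuntu release with snap support; the token placeholder is hypothetical, and the real one comes from Canonical’s Livepatch page:

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-livepatch-token>
canonical-livepatch status --verbose

The last command reports which kernel fixes have been applied live, with no reboot required.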

The XPS 13’s hardware is, in a word, impressive. My best-of-breed laptop came with an 8th-generation Intel Kaby Lake R Core i7-8550U processor. This four-core, eight-thread CPU turbos up to 4GHz.

The system comes with 16GB of RAM. This isn’t plain-Jane RAM. It’s fast 2133MHz LPDDR3 RAM. It’s backed by a 512GB PCIe solid state drive (SSD).


To see how all this hardware would really work for a developer, I ran the Phoronix Test Suite, a benchmark suite that focuses primarily on Linux. This system averaged 461.5 seconds to compile the 4.18 Linux kernel. For a laptop, those are darn good numbers.
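
If you want to reproduce that test on your own hardware, the kernel-compile benchmark is one of the standard Phoronix Test Suite profiles. A minimal sketch, assuming the phoronix-test-suite package is installed from your distribution’s repositories:

phoronix-test-suite benchmark build-linux-kernel

The suite downloads the kernel sources it needs, runs several compile passes, and reports the average time, so the numbers are directly comparable between machines.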

When it comes to graphics, the XPS 13 uses an Intel UHD Graphics 620 chipset. This powers up a 13.3-inch 4K Ultra HD 3840 x 2160 InfinityEdge touch display. This is a lovely screen, but it has two annoyances.

First, when you boot up, the font is tiny. This quickly changes, but it can still lead to a few seconds of screen squinting. The terminal font can also be on the small side. My solution was to upscale the display by using the Settings > Devices > Displays menu and moving the Scale field from its default 200 percent to a more reasonable (for me) 220 percent. Your eyesight may vary.


The other problem is that, while the thin bezels make the screen attractive, putting the webcam at the bottom of the screen can lead to some rather unattractive, up-the-nose video-conferencing moments until you get used to this atypical camera position.

The keyboard with its large, responsive keys is a pleasure to use. When you’re a programmer, that’s always important. The trackpad is wide and responsive.

As for battery life, when you’re not working on the XPS 13, it’s very aggressive about shutting things down. Even when I gave it a workout, I saw a real-world battery life of about nine hours.

One neat feature the XPS 13 includes, which I wish all laptops had, is a battery power indicator on the chassis’s left edge. You press a tiny button with your fingernail, and up to five lights let you know how much juice you have left.

For ports, the XPS 13 has a trio of USB-C ports. If you, like me, have a host of older USB sticks and other devices, Dell kindly provides a USB-A to USB-C adapter. It also has an audio jack and a microSD card reader. Two of the USB-C ports support Thunderbolt, while the other one supports PowerShare, which lets you charge devices from your laptop. In my case, I could charge up my Google Pixel 2 phone.


All this comes in a package that weighs in at a smidge over two-and-a-half pounds. This is a full-powered laptop that is comparable in size to a small Chromebook.

While my all-time favorite laptop remains my maxed-out Linux-enabled Pixelbook, the new Dell XPS 13 comes in a close second. If you want a great Linux laptop, this one demands your attention.

But it does have one problem: It’s pricey. The model I tried out lists for $1,779.99. If that’s too rich for your blood, the Dell XPS 13 line starts at $889.99. And even that model is pretty sweet.

Besides, don’t you owe yourself a holiday present for next year’s development work? Sure you do!


Linux Today – Acumos Project’s 1st Software, Athena, Helps Ease AI Deployment

Acumos is part of a Linux Foundation umbrella organization, the LF Deep Learning Foundation, that supports and sustains open source innovation in artificial intelligence, machine learning and deep learning. Acumos is based in Shanghai.

Acumos AI is a platform and open source framework that makes it easy to build, share and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment, freeing data scientists and model trainers to focus on their core competencies, and accelerating innovation.



Download Void GNU/Linux 20181111

Void GNU/Linux is an open source and completely free operating system written from scratch. It offers over 3,000 optimized packages and supports cross-building packages, real-time package building, building binary packages from your favourite Linux distribution, and UEFI 64-bit systems.

Distributed as 64-bit and 32-bit minimal Live CDs

The distribution is available for download as two minimal Live CD ISO images designed for the 64-bit (x86_64) and 32-bit (i386) instruction set architectures.

In order to provide an always-bootable system, the kernel images and modules are never removed from the system. Void Linux uses dracut to handle the initial ramdisk images.

Boot options

From the boot menu, the user can boot Void GNU/Linux in live mode, as well as boot the first disk drive found by the BIOS. You will be automatically logged into the live session, which is actually a basic shell prompt.

Just like Arch Linux and other similar operating systems, Void Linux installs only a base system that provides the required utilities for console usage. From there, users will have to adapt the system to their needs.

Text-mode installer

To permanently install the distribution, you must run the “sudo void-installer” command and follow the on-screen instructions. Basically, you’ll have to select a keyboard layout, configure the network, and choose an installation source.

Furthermore, you must set the system hostname, locale, timezone and root password, choose where to install the bootloader, partition the disk, and configure filesystems and mount points.
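
Once the installed system boots, day-to-day package management is handled by XBPS. As a rough sketch of the usual first steps (the package name is only an example):

sudo xbps-install -Su
xbps-query -Rs vim
sudo xbps-install -S vim

The first command syncs the repositories and updates the whole system, the second searches the repositories, and the third installs a package.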

Bottom line

Summing up, Void GNU/Linux is an interesting Linux distribution in the style of Arch Linux, allowing the user to build up an installation from a minimal base using a text-mode installer. We strongly believe that it is aimed at advanced Linux users who want to try something new.



Introducing the Non-Code Contributor’s Guide | Linux.com

It was May 2018 in Copenhagen, and the Kubernetes community was enjoying the contributor summit at KubeCon/CloudNativeCon, complete with the first run of the New Contributor Workshop. As a time of tremendous collaboration between contributors, the topics covered ranged from signing the CLA to deep technical conversations. Along with the vast exchange of information and ideas, however, came continued scrutiny of the topics at hand to ensure that the community was being as inclusive and accommodating as possible. Over that spring week, some of the pieces under the microscope included the many themes being covered, and how they were being presented, but also the overarching characteristics of the people contributing and the skill sets involved. From the discussions and analysis that followed grew the idea that the community was not benefiting as much as it could from the many people who wanted to contribute, but whose strengths were in areas other than writing code.

This all led to an effort called the Non-Code Contributor’s Guide.

Now, it’s important to note that Kubernetes is rare, if not unique, in the open source world, in that it was defined very early on as both a project and a community. While the project itself is focused on the codebase, it is the community of people driving it forward that makes the project successful. The community works together with an explicit set of community values, guiding the day-to-day behavior of contributors whether on GitHub, Slack, Discourse, or sitting together over tea or coffee.

By having a community that values people first, and explicitly values a diversity of people, the Kubernetes project is building a product to serve people with diverse needs. The different backgrounds of the contributors bring different approaches to the problem solving, with different methods of collaboration, and all those different viewpoints ultimately create a better project.

The Non-Code Contributor’s Guide aims to make it easy for anyone to contribute to the Kubernetes project in a way that makes sense for them. This can be in many forms, technical and non-technical, based on the person’s knowledge of the project and their available time. Most individuals are not developers, and most of the world’s developers are not paid to fully work on open source projects. Based on this we have started an ever-growing list of possible ways to contribute to the Kubernetes project in a Non-Code way!

Get Involved

There are many ways you can contribute to the Kubernetes community without writing a single line of code.

The guide to getting started with Kubernetes project contribution is documented on GitHub, and the Non-Code Contributor’s Guide is part of that Kubernetes Contributor Guide. As stated earlier, this list is not exhaustive and will continue to be a work in progress.

To date, the typical Non-Code contributions fall into the following categories:

  • Roles that are based on skill sets other than “software developer”
  • Non-Code contributions in primarily code-based roles
  • “Post-Code” roles, that are not code-based, but require knowledge of either the code base or management of the code base

If you, dear reader, have any additional ideas for a Non-Code way to contribute, whether or not it fits in an existing category, the team would always appreciate your help expanding the list.

If a contribution of the Non-Code nature appeals to you, please read the Non-Code Contributions document, and then check the Contributor Role Board to see if there are any open positions where your expertise could be best used! If there are no listed open positions that match your skill set, drop on by the #sig-contribex channel on Slack, and we’ll point you in the right direction.

We hope to see you contributing to the Kubernetes community soon!

This article originally appeared on the Kubernetes Blog.


How To Secure Website HTTP Response Headers with .htaccess (Snippets)

When building a new website, I always make sure it goes through a series of checklists before deploying.

One of the things on my checklist is securing the HTTP response headers.

By setting the right HTTP security headers on your website, you’ll help prevent common attacks such as:

  • Framing or clickjacking
  • Cross-site scripting (XSS)
  • Drive-by downloads
  • SSL stripping

Before we get started, go ahead and test the security of your website headers right now using: securityheaders.com

NOTE: The following HTTP Security Header snippets are placed in the .htaccess file.

1. The X-Frame-Options Header

This snippet will prevent browsers from displaying your site inside an iframe on another site. Essentially, it’ll prevent attackers from clickjacking, or showing your content on their site in the form of an iframe.

The disadvantage, however, is that your pages can no longer be framed by other sites at all. You won’t be able to view your site from StumbleUpon or use tools such as mobiletest.me.

<IfModule mod_headers.c>
Header always append X-Frame-Options "sameorigin"
</IfModule>

2. The X-XSS-Protection Header

This snippet will activate the cross-site scripting (XSS) filters built into most modern browsers (e.g., Chrome, IE), which helps protect your site from certain cross-site scripting attacks.

<IfModule mod_headers.c>
Header set X-XSS-Protection "1; mode=block"
</IfModule>

3. The X-Content-Type-Options Header

This snippet will reduce the risk of drive-by downloads on your site by stopping the browser from MIME-sniffing the response and forcing it to stick with the declared content type.

<IfModule mod_headers.c>
Header set X-Content-Type-Options "nosniff"
</IfModule>

4. The Strict Transport Security Header

This snippet will enforce strict transport security, which forces the browser to access your website only through a secure HTTPS connection. The max-age is set to 31536000 seconds, which is one year.

<IfModule mod_headers.c>
Header set Strict-Transport-Security "max-age=31536000"
</IfModule>

After adding the snippets via .htaccess, go ahead and run your site through securityheaders.com again to see if you did everything correctly.
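
If you prefer to check from a terminal instead, the response headers can be inspected directly. A quick sketch using curl (example.com stands in for your own domain):

curl -sI https://example.com | grep -iE 'x-frame-options|x-xss-protection|x-content-type-options|strict-transport-security'

Each of the four headers configured above should show up in the output once the .htaccess changes are live.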


Amazon ECS Now Allows Two Additional Docker Flags

You can now specify two new docker flags as parameters in your Amazon Elastic Container Service (ECS) Task Definition. These flags are pidMode and ipcMode.

The pidMode parameter allows you to configure your containers to share their process ID (PID) namespace with other containers in the task, or with the host. Sharing the PID namespace enables, for example, monitoring applications deployed as containers to access information about other applications running in the same task or host.

The ipcMode parameter allows you to configure your containers to share their inter-process communication (IPC) namespace with the other containers in the task, or with the host. The IPC namespace allows containers to communicate directly through shared memory with other containers running in the same task or host.
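
As a rough sketch of how these parameters look in practice (the family name, images and resource values below are made up for illustration), a task definition using them can be registered with the AWS CLI:

cat > taskdef.json <<'EOF'
{
  "family": "shared-namespace-demo",
  "pidMode": "task",
  "ipcMode": "task",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    { "name": "app", "image": "nginx:latest", "memory": 256, "essential": true },
    { "name": "monitor", "image": "busybox:latest", "memory": 128, "essential": false,
      "command": ["sh", "-c", "while true; do ps; sleep 30; done"] }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json

With pidMode set to task, the ps output in the monitor container includes the app container's processes; setting either parameter to host shares the corresponding namespace with the EC2 instance instead.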

This feature is currently supported with the EC2 launch type. For more information about using Docker parameters in task definitions, visit the Amazon ECS documentation.

To view where Amazon ECS is available, please visit our region table.


Tool to Securely Transfer Files Between Linux Computers


Transferring files remotely has long been the preserve of rsync and SCP. In this article, we will take a look at how you can transfer files between Linux computers using the dcp tool. dcp is a handy tool that copies files between host machines over the peer-to-peer Dat network. In this guide, we will try to remotely copy files between two Ubuntu/Debian systems.

System Setup

We are going to demonstrate the remote copying of files using two Debian hosts:

  1. Host A – IP 10.200.50.50 ( This system will host files to be sent remotely to another host system)
  2. Host B – IP 10.200.50.51 (This will be the System where files will be transferred/copied to)

How dcp works

The dcp tool creates a Dat archive for a specified group of files or directories. Using a generated public key, dcp allows you to download that archive on the second host system. Data shared over the network is encrypted using the archive’s public key, so you don’t need to worry about the security of your data. Access to the data is limited to those who have the key.

Software Prerequisites

To successfully install the dcp tool, the following software packages are required on both host systems:

  • NodeJS
  • NPM

Installing NodeJS

To install NodeJS, we are going to add the Node.js PPA to our host machines. The PPA is provided by the official Node.js website. In addition, we are going to install the software-properties-common package.

Log in to each of the systems and follow the steps below

Install software-properties-common package

Run the command below

# sudo apt-get install curl software-properties-common


Next, add the required PPA to allow you to install Node.js.

Run the command below to add the PPA:

# curl -sL https://deb.nodesource.com/setup_11.x | sudo bash -

Finally, let’s install the Node.js package, which also comes with NPM.

Install Node.JS package

# sudo apt-get install -y nodejs

Verifying installation of Node.JS and NPM

To verify installation of Node.JS

# node -v

To verify installation of NPM

# npm -v

Now that we have our software prerequisites, let’s proceed and install dcp

Install dcp tool

To install the dcp tool run

npm i -g dat-cp

To verify that all went OK, let’s check out the version of the dcp tool

dcp --version

OR

dcp -V

Output

0.6.2

Great! Let’s create a few files on our source system and try and send them over the network to the second host.

# touch file1.txt file2.txt file3.txt

How to remotely transfer/copy files

To remotely copy the files to another host, run the following command:

dcp file1.txt file2.txt file3.txt

This will generate a public key at the bottom of the output.

Copy the key and use it on the remote host to receive the files, as shown below.
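
On the receiving host, my understanding of dat-cp is that you simply run dcp again with the generated key as its argument; the key below is shortened and purely illustrative:

dcp 778f8d955575617de4b2cc9e17791fe09d21b...

dcp then pulls the files over the Dat network and writes them into the current directory on Host B.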

Congratulations! You have successfully copied files from one host to another using the dcp tool.

To find more information about the tool’s usage run

dcp --help

OR

dcp -h

That’s it for today guys. I hope you found this guide helpful. Feel free to comment and share. Thanks!



Download Baobab Linux 3.31.1

Baobab (also known as Disk Usage Analyzer) is an open source application that allows users to analyse the disk usage under the GNOME desktop environment. It can scan multiple device drives, as well as local, remote and external user-requested folders and devices.

The application’s main goal is to help users view how they are using the storage space of an HDD (hard disk drive), SSD (solid state drive), USB flash drive, memory card, or digital camera.

Its user interface is extremely easy to use, especially because it follows the GNOME HIG (Human Interface Guidelines). It provides users with a list of the devices and specific directories that can be scanned.

Getting started with Baobab

After a scan, users can decide if a specific directory should be deleted, moved or archived, in order to free up space. The results are also very useful if you want to estimate the total required space for a backup.

If a device or a folder is locked or encrypted, the application will automatically detect it and will ask the user for a password. When a scan is complete, it will display a graphical chart with each scanned folder.

Users can choose to scan their Home directory, the entire filesystem, or an attached disk drive (if available). In addition, they can also scan a specific directory (including subdirectories) or a remote folder, simply by clicking the gear button on the main toolbar.

In order to scan a remote directory, the application will require the IP address of the remote machine. It supports SMB (Samba shared folders from other OSes, such as Microsoft Windows) and keeps a history of previous connections.

Bottom line

All in all, Baobab or Disk Usage Analyzer is a great and easy-to-use application for the GNOME desktop environment. It helps you quickly analyze the usage of specific folders and devices.



Yes We Can: Technical Documentation with DAPS for DocBook and AsciiDoc

DocBook Authoring and Publishing Suite: A Fully-Fledged Authoring and Content Management System for Documentation Projects

If you are working in technical communication, banking on DocBook for your documentation projects comes with many advantages. However, over the past few years, software documentation projects started to move from DocBook to AsciiDoc, a lightweight markup language, as the document format. This is partly due to the ever-growing complexity of IT solutions and the need to involve external experts (who do not have a technical writing background) in documentation efforts.

Such a move usually not only requires converting the DocBook sources to AsciiDoc, but also changing the project setup, the toolchain and writing new stylesheets.

But we have good news: The new AsciiDoc support in the DocBook Authoring and Publishing Suite (DAPS) saves you from switching to a new toolchain and new stylesheets. Whether you convert an existing DAPS project from DocBook to AsciiDoc, or whether you have used DAPS before and are starting an AsciiDoc project from scratch, DAPS lets you use the:

  • existing XSLT stylesheets (for converting DocBook into PDF, HTML, ePUB, etc.)
  • same DAPS commands as with DocBook projects
  • same project setup as with DocBook projects

Advantages of DocBook for Large Documentation Projects

DocBook is the ideal framework when it comes to publishing large documentation projects in different formats. The DocBook project consists of a language (DocBook XML) and a set of stylesheets to translate this language into different output formats such as HTML, PDF, and EPUB.

The stylesheets define the layout you want to apply when transforming the XML sources into output formats. You can use the stylesheets included with DocBook, or you can write your own XSLT stylesheets to ensure your corporate design is properly reflected.

The DocBook XML language is based on the eXtensible Markup Language (XML) and defines the content in a semantic way through elements, as in HTML. DocBook itself is written as a schema that defines the element names, their content, and where they can appear. The DocBook schema is used to fulfill two tasks: guided editing and validation.

Guided editing is done via an XML editor (and there are many choices, from XML-focused editors such as oXygen to general programming editors such as Emacs). The editor reads in the DocBook schema and suggests which elements are allowed in the current context. This is similar to sorting objects into drawers according to their function: For example, you place screwdriver and hammer into a drawer labeled Tools, whereas you place teddy bears and building blocks into a drawer labeled Toys. Similarly, when writing documents with DocBook, you would “sort” the author’s name into an XML tag called author, whereas you would “sort” a table into an XML element called table. Validation gives hints about structural errors in an XML document; this could, for example, be a missing element.
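
As a small illustration of the validation side, a DocBook 5 document can be checked against the RELAX NG schema from the command line. The file name and schema path here are only examples and will differ on your system:

xmllint --noout --relaxng /usr/share/xml/docbook/schema/rng/5.1/docbook.rng mybook.xml

xmllint reports that the file validates when the structure is correct, and otherwise points to the offending element and line number.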

Similar products often share a considerable number of features and differ only in details. If you want to generate multiple documentation variants from your XML files, you can do so with the help of conditional text, or profiling, as it is called in DocBook. For example, you can profile certain parts of your XML texts for different (processor) architectures, operating systems, vendors or target groups.

While learning DocBook XML might seem cumbersome at first sight, it comes with many unique advantages. Among others, it is ideal for the modular structures of complex documentation, it provides profiling, and you can generate many different output formats from the same XML sources.

Contribute to Documentation: AsciiDoc as Convenient Alternative

However, in the age of Cloud, “X as a Service” and “Y as a Platform”, technical projects become more and more complex. In consequence, documentation projects are reliant on contributions from external experts, such as engineers working on new technologies, consultants implementing product and solution stacks onsite at a customer’s, and many others. They don’t have a technical writing background, but they have to deliver specific content. And they don’t have any time at all to deep-dive into a writing language just to provide some documents.

For those projects and contributors, AsciiDoc offers a serious alternative. AsciiDoc belongs to the lightweight markup languages and provides a plain text documentation syntax and processor. It is not as modular and extensive as DocBook, but it is easy to understand and to use.

One of the biggest advantages of using AsciiDoc as a source for documentation is its seamless integration with GitHub. GitHub not only renders AsciiDoc sources, but also allows you to edit them directly in the Web interface. This fits nicely with GitHub‘s Web-based pull request workflow: You edit the document online, click a button, and someone else (usually the repository owner) can review and integrate the change. All you need is a free GitHub account (which many developers and technical experts already have). This improves the contribution flow for external contributors.

DAPS Adds AsciiDoc Support

Transforming the XML sources to output formats such as PDF requires several steps such as validating, filtering (profiling), converting images, and generating a .fo file. As the DocBook project does not provide a standard toolchain, custom solutions (written with make, ant or a scripting language) are necessary for publishing your DocBook documentation projects. That is a major hurdle for writers who would like to use DocBook. The DocBook Authoring and Publishing Suite, originally developed by me, with lots of contributions by Thomas Schraitle, fills this gap by providing a tool set for easy creation and publication of DocBook sources on Linux.

DAPS helps technical writers in the editing, translation and publishing process for documentation written in DocBook XML. DAPS is command-line-based software for Linux and lets you create HTML, PDF, EPUB, man pages, and other formats with a single command. It automatically takes care of validating and profiling your sources and of converting the images into the format best suited for the selected output format. DAPS also lets you manage the key tasks related to writing and editing, and create profiled source tarballs for translation or review. DAPS supports authors by providing a link checker, a validator, a spellchecker, and editor macros. Thus it is perfectly suited to managing large documentation projects with multiple authors.

Starting with version 3.0, DAPS also supports AsciiDoc sources. AsciiDoc sources are converted to DocBook and then processed the same way as DocBook sources. Projects with AsciiDoc sources are handled the same way as regular DocBook projects. Therefore, the full range of output formats supported by DAPS (HTML, single HTML, PDF, EPUB, plain text, etc.) is also available for AsciiDoc sources.
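
In day-to-day use, that means the commands for an AsciiDoc-based project look just like those for a DocBook one. A brief sketch, assuming a project with a Doc Config file named DC-mydoc (the name is only an example):

daps -d DC-mydoc validate
daps -d DC-mydoc html
daps -d DC-mydoc pdf

Each command validates and profiles the sources, converts images as needed, and places the result in the project's build directory.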

DAPS is released as open source under a dual license (GPL 2.0 or GPL 3.0, at your choice) and can be installed and used on any modern Linux system. DAPS packages are available for SUSE Linux Enterprise and for openSUSE, and previous versions of DAPS have successfully been tested and used on Fedora, Ubuntu, Xubuntu, and Debian.

Together with a text editor and a version management system such as Git, DAPS can be used as a fully-fledged authoring and content management system for documentation projects based on DocBook and AsciiDoc.

Curious? Want to try it yourself now? Check out the latest DAPS version, the most recent changes and the documentation and share your feedback with us – just send an email to doc-team@suse.com.



LF Networking Approaches Inaugural Year With Addition of New Members

Globo.com, Packet, PANTHEON.tech and RIFT Inc. Join More Than 100 Leading Technology Organizations to Further Open Source, Open Standards-Based Networking Technologies

San Francisco – November 20, 2018 — The LF Networking Fund (LFN), which facilitates collaboration and operational excellence across open networking projects, today announced the addition of four new members, continuing its rapid global growth. Welcoming new members Globo.com, Packet, PANTHEON.tech and RIFT, Inc. extends LFN’s first-year momentum and sets the stage for accelerated development and adoption of open source and open standards-based networking technologies in the coming year.

“Industry acceptance and participation this year have been tremendous and validate that businesses see an open source, open standards-based future for networking technologies,” said Arpit Joshipura, general manager, Networking and Orchestration, The Linux Foundation. “In 2019, the combined efforts of our growing community will continue to accelerate harmonization of open source and open standards-based networking technologies that will define tomorrow’s networks.”

The newest members will work with more than 100 other technology leaders to drive greater harmonization and development of LFN’s networking projects, including FD.io, ONAP, OpenDaylight, OPNFV, PNDA, and Tungsten Fabric. LFN enables open source networking technologies by integrating the governance of participating projects in order to enhance operational excellence, simplify member engagement, and increase collaboration. Globo.com, Packet, PANTHEON.tech and RIFT, Inc. join as Silver members.

The LFN Community will come together at KubeCon + CloudNativeCon North America on December 10-13 in Seattle, Washington. LFN’s onsite presence includes an FD.io Mini Summit and an all-LFN member evening reception on December 10 featuring ONAP, OpenDaylight, OPNFV, FD.io, PNDA, and Tungsten Fabric. Heather Kirksey will also give a presentation on Thursday, December 13 titled “The Telco Networking Journey to Cloud Native: The Good, Bad, and Ugly.” Find additional details on LFN’s presence at KubeCon here.

Member Supporting Quotes:

“Becoming a member of ONF, CNC and the Linux Foundation is a recognition of the passion and commitment of Globo.com with the OSS community. We firmly believe that the ability to handle infrastructure as code, software-defined networks, and scalable applications is vital to the dynamic environment and will be instrumental in supporting our business and driving innovation in the digital media market in which we are included.”

“Networking is going through tremendous transformation, driven by the need for automation and the disaggregation of hardware and software,” said Ihab Tarazi, CTO of Packet. “We see great opportunities in collaborating with the LF Community to accelerate our innovation and solving the hard problems of extending cloud automation to the edge.”

“PANTHEON.tech is committed to accelerate networking innovation by joining LFN as a Silver Member,” said Tomáš Jančo, CEO, PANTHEON.tech. “As an organization with 17 years of experience, we are eager to share our advancements in developing cohesive software solutions across the LF Networking projects at the bleeding edge.”

“Proprietary management and automation approaches have impeded NFV deployments,” said Matt Harper, RIFT’s CTO. “Open source communities (such as ONAP) play a key role in validating interoperability and creating best-of-breed de-facto standards. We are excited to be working with the Linux Foundation and ONAP to evolve NFV technology to cultivate a robust and interoperable commercial NFV ecosystem.”

About the Newest LFN Members:

Globo.com provides digital platforms and services for Brazil’s Grupo Globo, connecting online audiences to the diverse array of publishing, video and data properties Grupo Globo operates, including Globo TV’s broadcast networks and over-the-top television service Globo Play. As part of its services, Globo.com provides LIVE stream and VOD services for programs produced by more than 100 affiliates across Brazil.

Packet is the leading bare metal automation platform for developers. Its proprietary technology automates physical servers and networks without the use of virtualization or multi-tenancy, powering over 60k deploys each month across its 18 global datacenters. Founded in 2014, Packet has quickly become a provider of choice for leading enterprises, SaaS companies, and software innovators. In addition to its bare metal public cloud, Packet offers a custom “Private Deployment” model that automates infrastructure in customer-owned locations.

PANTHEON.tech is a research & development software company, delivering managed projects to its clients, primarily oriented at network technologies and prototype software development with a focus on SDN, NFV, Automotive and Smart Cities. The company has deep expertise in many open-source networking technologies, including OpenDaylight, ONAP, VPP, FD.io, PNDA, Sysrepo, Honeycomb, networking for CNCF and others. This includes PANTHEON.tech’s OpenDaylight-based open source SDN SDK toolkit, lighty.io, designed to support, ease and accelerate development of Software-defined Networking solutions in Java, Python and Go.

RIFT, Inc provides an open source and standards-based platform designed to automate the deployment and operation of virtualized network services and functions. RIFT’s technology, RIFT.ware™, and RIFT services accelerate service providers’ efforts to virtualize cloud-based communication services. RIFT.ware empowers enterprises to successfully deploy virtualized network services on private and hybrid clouds and NFV-enabled virtualized networks. Any network application built with RIFT technology can intelligently take advantage of any cloud’s unique capabilities and operate at any scale. RIFT, Inc is a privately held, global company with offices in the United States and India.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

