Facebook’s GraphQL Gets Its Own Open-Source Foundation

Facebook began developing GraphQL internally back in 2012 and open sourced it in 2015. Today, it’s being used by companies that range from Airbnb to Audi, GitHub, Netflix, Shopify, Twitter and The New York Times. At Facebook itself, the GraphQL API powers billions of API calls every day. At its core, GraphQL is a language for querying APIs from client-side applications and a set of specifications for how the API on the backend should present this data to the client. It presents an alternative to REST-based APIs and promises developers more flexibility and the ability to write faster and more secure applications. Virtually every major programming language now supports it through a variety of libraries.

“GraphQL has redefined how developers work with APIs and client-server interactions. We look forward to working with the GraphQL community to become an independent foundation, draft their governance and continue to foster the growth and adoption of GraphQL,” said Chris Aniszczyk, vice president of Developer Relations at the Linux Foundation. As Aniszczyk noted, the new foundation will have an open governance model, similar to that of other Linux Foundation projects. The exact details are still a work in progress, though. The list of founding members is also still in flux, but for now, it includes Airbnb, Apollo, Coursera, Elementl, Facebook, GitHub, Hasura, Prisma, Shopify and Twitter.

 

Money can’t buy love, but it improves your bargaining position.
— Christopher Marlowe


Source

How to Install Nginx with Virtual Hosts and SSL Certificate

Nginx (short for Engine-x) is a free, open source, powerful, high-performance and scalable HTTP and reverse proxy server, as well as a mail and generic TCP/UDP proxy server. It is easy to use and configure, with a simple configuration language. Nginx is now the preferred web server software for powering heavily loaded sites, due to its scalability and performance.

In this article, we will discuss how to use Nginx as an HTTP server, configure it to serve web content, set up name-based virtual hosts, and create and install SSL certificates for secure data transmission, including a self-signed certificate, on Ubuntu and CentOS.

How to Install Nginx Web Server

First start by installing the Nginx package from the official repositories using your package manager as shown.

———— On Ubuntu ————
$ sudo apt update
$ sudo apt install nginx

———— On CentOS ————
$ sudo yum update
$ sudo yum install epel-release
$ sudo yum install nginx

After the Nginx package is installed, start the service, enable it to auto-start at boot time and view its status using the following commands. Note that on Ubuntu, the service should be started and enabled automatically when the package is installed.

$ sudo systemctl start nginx
$ sudo systemctl enable nginx
$ sudo systemctl status nginx

Start and Check Nginx Status

At this point, the Nginx web server should be up and running. You can verify its status with the netstat command.

$ sudo netstat -tlpn | grep nginx

Check Nginx Port Status

If your system has a firewall enabled, you need to open ports 80 and 443 to allow HTTP and HTTPS traffic, respectively, through it by running:

———— On CentOS ————
$ sudo firewall-cmd --permanent --add-port=80/tcp
$ sudo firewall-cmd --permanent --add-port=443/tcp
$ sudo firewall-cmd --reload

———— On Ubuntu ————
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw reload

The ideal way to test the Nginx installation and check whether it’s running and able to serve web pages is to open a web browser and point it to the server’s IP address or domain name.

http://Your-IP-Address
OR
http://Your-Domain.com

A working installation should be indicated by the following screen.

Check Nginx Web Page

How to Configure Nginx Web Server

Nginx’s configuration files are located in the directory /etc/nginx and the global configuration file is located at /etc/nginx/nginx.conf on both CentOS and Ubuntu.

Nginx is made up of modules that are controlled by various configuration options, known as directives. A directive can either be simple (a name and values terminated by a semicolon) or a block (extra instructions enclosed in braces {}). A block directive that contains other directives is called a context.

All the directives are comprehensively explained in the Nginx documentation on the project website. You can refer to it for more information.
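For instance, the following minimal fragment (the values here are illustrative, not a recommended configuration) shows the difference between simple directives, block directives and contexts:

```nginx
# Simple directives: a name and parameters, terminated by a semicolon.
worker_processes 1;
error_log /var/log/nginx/error.log warn;

# A block directive encloses extra instructions in braces.
events {
    worker_connections 768;
}

# A block directive that contains other directives is called a context.
http {
    server {
        listen 80;
    }
}
```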

How to Serve Static Content Using Nginx in Standalone Mode

At a foundational level, Nginx can be used to serve static content such as HTML and media files, in standalone mode, where only the default server block is used (analogous to Apache where no virtual hosts have been configured).

We will start by briefly explaining the configuration structure in the main configuration file.

$ sudo vim /etc/nginx/nginx.conf

If you look into this Nginx configuration file, the configuration structure should appear as follows. This is referred to as the main context, which contains many other simple and block directives. All web traffic is handled in the http context.

user nginx;
worker_processes 1;
.....

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
.....

events {
    .....
}

http {
    server {
        .....
    }
    .....
}

The following is a sample Nginx main configuration (/etc/nginx/nginx.conf) file, where the http block above contains an include directive which tells Nginx where to find website configuration files (virtual host configurations).

Nginx Configuration File

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

Note that on Ubuntu, you will also find an additional include directive (include /etc/nginx/sites-enabled/*;), where the directory /etc/nginx/sites-enabled/ stores symlinks to the website configuration files created in /etc/nginx/sites-available/, to enable the sites. Deleting a symlink disables that particular site.
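For example, assuming a site configuration named mysite.conf exists in /etc/nginx/sites-available/ (the file name is illustrative), enabling and later disabling it on Ubuntu would look like this:

```
$ sudo ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/
$ sudo nginx -t
$ sudo systemctl reload nginx

$ sudo rm /etc/nginx/sites-enabled/mysite.conf
$ sudo systemctl reload nginx
```

Running nginx -t before reloading validates the configuration so that a typo in the new file does not take the server down.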

Based on your installation source, you’ll find the default website configuration file at /etc/nginx/conf.d/default.conf (if you installed from official NGINX repository and EPEL) or /etc/nginx/sites-enabled/default (if you installed from Ubuntu repositories).

This is our sample default nginx server block located at /etc/nginx/conf.d/default.conf on the test system.

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /var/www/html/;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

A brief explanation of the directives in the above configuration:

  • listen: specifies the port the server listens on.
  • server_name: defines the server name which can be exact names, wildcard names, or regular expressions.
  • root: specifies the directory out of which Nginx will serve web pages and other documents.
  • index: specifies the type(s) of index file(s) to be served.
  • location: used to process requests for specific files and folders.

From a web browser, when you point to the server using the hostname localhost or its IP address, it processes the request and serves the file /var/www/html/index.html, and immediately saves the event to its access log (/var/log/nginx/access.log) with a 200 (OK) response. In case of an error (failed event), it records the message in the error log (/var/log/nginx/error.log).

Test Nginx Default Site

To learn more about logging in Nginx, you may refer to How to Configure Custom Access or Error Log Formats in Nginx.

Instead of using the default log files, you can define custom log files for different web sites, as we shall look at later on, under the section “setting up name-based virtual hosts (server blocks)”.
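As a preview, a server block can override the global logs with its own files using the access_log and error_log directives. A minimal sketch (domain and file names here are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html/example.com;

    # Per-site logs override the global ones defined in nginx.conf.
    access_log /var/log/nginx/example.com_access.log;
    error_log /var/log/nginx/example.com_error.log warn;
}
```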

How to Restrict Access to a Web Page with Nginx

In order to restrict access to your website/application or some parts of it, you can set up basic HTTP authentication. This can be used to restrict access to the whole HTTP server, or to individual server blocks or location blocks.

Start by creating a file that will store your access credentials (username/password) by using the htpasswd utility.

$ sudo yum install httpd-tools #RHEL/CentOS
$ sudo apt install apache2-utils #Debian/Ubuntu

As an example, let’s add the user admin to this list (you can add as many users as you need), where the -c option creates the specified password file and -B encrypts the password with bcrypt. Once you hit [Enter], you will be asked to enter the user’s password:

$ sudo htpasswd -Bc /etc/nginx/conf.d/.htpasswd admin

Then, let’s assign the proper permissions and ownership to the password file (replace the user and group nginx with www-data on Ubuntu).

$ sudo chmod 640 /etc/nginx/conf.d/.htpasswd
$ sudo chown nginx:nginx /etc/nginx/conf.d/.htpasswd

As we mentioned earlier, you can restrict access to your web server, a single website (using its server block), or a specific directory or file. Two useful directives can be used to achieve this:

  • auth_basic – turns on validation of user name and password using the “HTTP Basic Authentication” protocol.
  • auth_basic_user_file – specifies the credentials file.

As an example, we will show how to password-protect the directory /var/www/html/protected.

server {
    listen 80 default_server;
    server_name localhost;
    root /var/www/html/;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /protected/ {
        auth_basic "Restricted Access!";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    }
}

Now, save changes and restart Nginx service.

$ sudo systemctl restart nginx

The next time you point your browser to the above directory (http://localhost/protected) you will be asked to enter your login credentials (username admin and the chosen password).

A successful login allows you to access the directory’s contents; otherwise you will get a “401 Authorization Required” error.
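You can also verify the behavior from the command line with curl, assuming the setup above and an index.html inside the protected directory. Without credentials Nginx returns a 401; with the -u flag it serves the content:

```
$ curl -I http://localhost/protected/
HTTP/1.1 401 Unauthorized

$ curl -I -u admin:yourpassword http://localhost/protected/
HTTP/1.1 200 OK
```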

How to Setup Name-based Virtual hosts (Server Blocks) in Nginx

The server context allows multiple domains/sites to be stored in and served from the same physical machine or virtual private server (VPS). Multiple server blocks (representing virtual hosts) can be declared within the http context for each site/domain. Nginx decides which server block processes a request based on the Host header of the request it receives.

We will demonstrate this concept using the following dummy domains, each located in the specified directory:

  • wearetecmint.com – /var/www/html/wearetecmint.com/public_html/
  • welovelinux.com – /var/www/html/welovelinux.com/public_html/
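First, create a public_html directory for each site to hold its web files (the -p flag creates the parent directories as needed):

```
$ sudo mkdir -p /var/www/html/wearetecmint.com/public_html
$ sudo mkdir -p /var/www/html/welovelinux.com/public_html
```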

Next, assign the appropriate permissions on the directory for each site.

$ sudo chmod -R 755 /var/www/html/wearetecmint.com/public_html
$ sudo chmod -R 755 /var/www/html/welovelinux.com/public_html

Now, create a sample index.html file inside each public_html directory.

<html>
<head>
<title>www.wearetecmint.com</title>
</head>
<body>
<h1>This is the index page of www.wearetecmint.com</h1>
</body>
</html>

Next, create the server block configuration files for each site inside the /etc/nginx/conf.d directory.

$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf
$ sudo vi /etc/nginx/conf.d/welovelinux.com.conf

Add the following server block declaration in the wearetecmint.com.conf file.

wearetecmint.com.conf

server {
    listen 80;
    server_name wearetecmint.com;
    root /var/www/html/wearetecmint.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Next, add the following server block declaration in the welovelinux.com.conf file.

welovelinux.com.conf

server {
    listen 80;
    server_name welovelinux.com;
    root /var/www/html/welovelinux.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

To apply the recent changes, restart the Nginx web server.

$ sudo systemctl restart nginx

Pointing your web browser to the following addresses should then display the index pages of the dummy domains.

http://wearetecmint.com
http://welovelinux.com
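Since these are dummy domains, they won’t resolve through public DNS. For a quick test from the server itself, you can map them to the loopback address in /etc/hosts, or ask curl to resolve them explicitly:

```
$ echo "127.0.0.1 wearetecmint.com welovelinux.com" | sudo tee -a /etc/hosts
$ curl http://wearetecmint.com
$ curl --resolve welovelinux.com:80:127.0.0.1 http://welovelinux.com
```

Either way, the Host header sent by curl is what lets Nginx pick the matching server block.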

Test Nginx Virtual Hosts Websites

Important: If you have SELinux enabled, its default configuration does not allow Nginx to access files outside of well-known authorized locations (such as /etc/nginx for configurations, /var/log/nginx for logs, /var/www/html for web files, etc.).

You can handle this by either disabling SELinux, or setting the correct security context. For more information, refer to this guide: using Nginx and Nginx Plus with SELinux on the Nginx Plus website.
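For example, if you keep web files in a non-default location, you can label them with the httpd_sys_content_t context instead of disabling SELinux (the path below is illustrative; semanage is provided by the policycoreutils-python package on CentOS):

```
$ sudo semanage fcontext -a -t httpd_sys_content_t "/srv/mysite(/.*)?"
$ sudo restorecon -Rv /srv/mysite
```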

How to Install and Configure SSL with Nginx

SSL certificates enable secure HTTP (HTTPS) on your site, which is essential for establishing a trusted/secure connection between end users and your server by encrypting the information that is transmitted to, from, or within your site.

We will cover how to create and install a self-signed certificate, and generate a certificate signing request (CSR) to acquire an SSL certificate from a certificate authority (CA), to use with Nginx.

Self-signed certificates are free to create and are perfectly fine for testing purposes and internal LAN-only services. For public-facing servers, it is highly recommended to use a certificate issued by a trusted CA (for example, Let’s Encrypt) so that its authenticity can be verified.

To create a self-signed certificate, first create a directory where your certificates will be stored.

$ sudo mkdir /etc/nginx/ssl-certs/

Then generate your self-signed certificate and the key using the openssl command line tool.

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl-certs/nginx.key -out /etc/nginx/ssl-certs/nginx.crt

Let’s briefly describe the options used in the above command:

  • req -x509 – indicates we are creating an X.509 certificate.
  • -nodes (NO DES) – means “don’t encrypt the key”.
  • -days 365 – specifies the number of days the certificate will be valid for.
  • -newkey rsa:2048 – specifies that the key generated using the RSA algorithm should be 2048-bit.
  • -keyout /etc/nginx/ssl-certs/nginx.key – specifies the full path of the RSA key.
  • -out /etc/nginx/ssl-certs/nginx.crt – specifies the full path of the certificate.
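The command above prompts interactively for the certificate fields. As a non-interactive sketch, the -subj option can fill those fields in directly; the variant below runs in the current directory so nothing needs root, and the file names and subject values are placeholders:

```shell
# Generate a throwaway self-signed certificate non-interactively.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout nginx-test.key -out nginx-test.crt \
    -subj "/C=US/ST=Test/L=Test/O=Example/CN=www.example.com"

# Inspect the subject and validity period of the generated certificate.
openssl x509 -in nginx-test.crt -noout -subject -dates
```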

Create SSL Certificate and Key for Nginx

Next, open your virtual host configuration file and add the following lines to a server block declaration listening on port 443. We will test with the virtual host file /etc/nginx/conf.d/wearetecmint.com.conf.

$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf

Then add the SSL directives to the configuration file; it should look similar to the following.

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/ssl-certs/nginx.crt;
    ssl_trusted_certificate /etc/nginx/ssl-certs/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl-certs/nginx.key;

    server_name wearetecmint.com;
    root /var/www/html/wearetecmint.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Now restart Nginx and point your browser to the following address.

https://wearetecmint.com

Check Nginx SSL Website

If you would like to purchase an SSL certificate from a CA, you need to generate a certificate signing request (CSR) as shown.

$ sudo openssl req -newkey rsa:2048 -nodes -keyout /etc/nginx/ssl-certs/example.com.key -out /etc/nginx/ssl-certs/example.com.csr

You can also create a CSR from an existing private key.

$ sudo openssl req -key /etc/nginx/ssl-certs/example.com.key -new -out /etc/nginx/ssl-certs/example.com.csr

Then, you need to send the CSR that is generated to a CA to request the issuance of a CA-signed SSL certificate. Once you receive your certificate from the CA, you can configure it as shown above.
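It is worth inspecting a CSR before sending it off, to confirm the subject and that it verifies against its key. A self-contained sketch run in the current directory (file names and the CN value are hypothetical; the real commands above write under /etc/nginx/ssl-certs/):

```shell
# Generate a key and a CSR non-interactively, then inspect the CSR.
openssl genrsa -out example.com.key 2048
openssl req -new -key example.com.key -out example.com.csr -subj "/CN=example.com"
openssl req -in example.com.csr -noout -subject -verify
```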

Read Also: The Ultimate Guide to Secure, Harden and Improve Performance of Nginx Web Server

Summary

In this article, we have explained how to install and configure Nginx, and covered how to set up name-based virtual hosts with SSL to secure data transmission between the web server and a client.

If you experienced any setbacks during your Nginx installation or configuration, or have any questions or comments, use the feedback form below to reach us.

Source

Amazon Inspector Adds Amazon EC2 Instance Details to Security Findings

Amazon Inspector security findings now include the Amazon Machine Image (AMI) ID, instance tags, auto scaling group, hostname, IP addresses, DNS names, and subnet ID of the Amazon EC2 instance that has the vulnerability or insecure configuration. You can view these fields by clicking the ‘Show Details’ button while reviewing a finding in the management console. These fields are also available when you describe findings through the AWS API and CLI.

Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These additional fields help you filter, group, and prioritize your security findings based on the image, network location, tags, or other attributes of vulnerable EC2 instances.

Amazon Inspector is available in the following eleven regions: US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), EU (Frankfurt), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), and AWS GovCloud (US).

To learn more about Amazon Inspector or to start your free trial, please visit Amazon Inspector.

Source

Download Manjaro Linux KDE 18

Manjaro Linux KDE is an open source Linux operating system that uses all the powerful features found on other Manjaro editions, but on top of a highly customized KDE desktop environment. It is based on the Arch Linux distribution, which means that it is a very stable, reliable and virus-free operating system.

Follows a rolling-release model

It follows a rolling-release model keeping your installation up-to-date forever (or at least until a complete reinstall is required because of unforeseen circumstances). It is available for download as an ISO image that you will need to burn on a blank DVD disc or use Unetbootin to deploy it on a USB flash drive.

Live DVD boot menu options

It uses exactly the same boot menu found on all official Manjaro releases, allowing users to install the entire operating system on their computer, use the distribution directly from the live media, or boot the operating system that is already installed on the respective computer.

Uses the KDE Plasma desktop environment

Because it uses the KDE Plasma desktop environment, this edition is much bigger in size than any other Manjaro flavor. Besides all the amazing applications that are part of a default KDE installation, this edition includes the entire LibreOffice office suite, the GIMP image editor, and the VLC media player.

You can install the OS with or without proprietary drivers

An interesting feature of the Manjaro Live CD is the ability to use or install the operating system with or without proprietary drivers. This means that if you have an AMD Radeon or Nvidia graphics card, choosing the second option (Start or install Manjaro (non-free drivers)) is the best method to enjoy a complete Manjaro experience. On the other hand, if you have an Intel video card, we suggest using the first option when installing or using the live environment.

Bottom line

We highly recommend the Manjaro Linux KDE operating system if you own a high-end computer and you want to transform it into a modern, beautiful, clean and powerful workstation for office, multimedia and gaming tasks.

Source

GNU Linux-Libre 4.19 Kernel Is Now Available for Those Seeking 100% Freedom | Linux.com

With Linux kernel 4.19 hitting the streets, a new version of the GNU Linux-libre kernel is now available, version 4.19, based on the upstream kernel but without any proprietary drivers. Based on the recently released Linux 4.19 kernel series, the GNU Linux-libre 4.19-gnu kernel borrows all the new features, including the experimental EROFS (Enhanced Read-Only File System) file system, initial support for the Wi-Fi 6 (802.11ax) wireless protocol, and mitigations for the L1TF and SpectreRSB security flaws. While the GNU Linux-libre 4.19 kernel comes with all these goodies found in the upstream Linux 4.19 kernel, it doesn’t ship with proprietary code. Deblobbed drivers include Aspeed ColdFire FSI Master, MT76x0U and MT76x2U Wi-Fi, MTK Bluetooth UART, as well as Keystone and Qualcomm Hexagon Remoteproc.

Source

Linux IoT Landscape: Distributions – IoT For All


Linux has traditionally suffered an embarrassment of riches when it comes to selecting the distribution that is used to deploy it.


What Is a Linux Distribution?

Linux is an operating system: the program at the heart of controlling a computer. It decides how to partition the available resources (CPU, memory, disk, network) between all of the other programs vying for them. The operating system, while very important, isn’t useful on its own. Its purpose is to manage the compute resources for other programs; without those programs, it doesn’t serve much of a purpose.

That’s where the distribution comes in. A distribution provides a large number of other programs that, together with Linux, can be assembled into working sets for a vast number of purposes. These programs can range from basic program writing tools such as compilers and linkers to communications libraries to spreadsheets and editors to pretty much everything in between. A distribution tends to have a superset of what’s actually used for each individual computer or solution. It also provides many choices for each category of software components that users or companies can assemble into what they consider a working set. A rough analogy can be made to a supermarket in which there are many options for many items on the shelves, and each user picks and chooses what makes sense to them in their cart.

Binary-Based or Source-Based Distribution?

Distributions can largely be split into two categories: binary-based and source-based.

Binary-based distributions provide all of the software components already pre-compiled and ready to be installed. These components are compiled with “good-enough” build options that work fine for the majority of users. They also do provide sources for these components for the minority of users that need or want to compile their own components. Following our supermarket analogy, this supermarket contains all of the food pre-packaged and pre-cooked, but with clear instructions on how to get the ingredients and repeat the process for those that want to tweak a recipe or two. This kind of distribution is exemplified by Debian, Fedora Core, OpenSUSE, Ubuntu, and many others. And while they provide the same type of system, they all do so using different—and unfortunately, incompatible—methods. They’re the primary kind of distribution used in general purpose computers such as servers, desktops, and laptops.

Source-based distributions, on the other hand, focus on providing a framework in which the end users can build all of the components themselves from source code. These distributions also provide tools for easily choosing a sensible starting collection of components and tweaking each component’s build as necessary. These tweaks can be as simple as adding a compile flag to using a different version of the sources or modifying the sources in some way. A user will assemble a menu of what they want to build and then start the build. After minutes or hours, depending on the case, they will have a resulting image which they can use for their computer. Examples of this kind of distribution are Gentoo, Android, and Yocto. In our supermarket analogy, this is closer to a bulk foods store, where you can get pre-measured foods with detailed machine-readable cooking instructions, and you’d have a fancy cooker that can read those instructions and cook the meals for you. And handle tweaks to a range of recipes such as adjusting for brown rice over white rice. Sort of — the analogy gets a bit weak on this one.

These source-based distributions are generally preferred for embedded Linux-based devices in general and IoT devices in particular. While they are harder to set up and maintain, source-based distributions have the unique advantage of being able to tailor the installed image to the exact target hardware in order to maximize resource usage—or minimize resource wastage—and for embedded devices, resources tend to be a strong constraint. In addition, source-based distributions are better suited for cross-building—where the machine on which you build your platform isn’t the same as the one on which you run it—while binary-based distributions are better for self-hosted building—where you build and run on the same machine (or same architecture).

Given today’s prevalence of having Intel architecture machines as build machines—and using ARM architecture for IoT products—cross-building support is important for IoT devices.

New Kid On The Block: Container-Centered Distributions

The traditional Linux method—shipping a single unified userspace that contains all of the platform outside of the kernel—is changing. The new model is about having a collection of “containers” that componentize the userspace. The containerized model transforms a portion of the userspace into a federated collection of components with a high degree of independence between each component.

Containerized distribution brings many benefits, from allowing teams to work more independently to making it feasible to do granular platform upgrades. The downside is that they have a larger footprint than non-containerized solutions. If the evolution of technology has shown us anything, however, it’s that when the only downside of a new technology is the footprint, the resourcing available to it tends to expand to make that a smaller and smaller problem at every new generation.

Some of the early options are described below to compare to existing distributions.

The Contenders: Linux Distributions for IoT

Now we must delve into contentious territory. Many people have their favorite Linux distribution, and even if their requirements change wildly (for example going from a server setup to an embedded IoT device), they cling onto that distribution—sometimes to the point of fitting a square peg into a round hole.

I’ll preface the list below: this is a sampling of some well established Linux distributions and some up and comers. Many others exist and might be more suitable for some use cases.

Now with that out of the way…

Yocto

Yocto is a source-based distribution that’s used in many embedded and IoT devices. It tries to unite the benefits of binary-based distributions, such as clear separation of the packages and their dependencies, with the benefits of source-based distributions that allow you to alter your target binaries in significant ways as you make smaller changes.

Diagram demonstrating how Yocto works as a Linux distribution for IoT

Yocto is composed of a series of recipes, each of which describes how to build one module of the system (e.g. a library, daemon, application, etc.). These recipes are then collected into layers, which group a series of recipes and configure various aspects of how they are supposed to be used together, from compile flags to recipe features to details on how they show up on the target. Each target build will be composed of a few of these layers, each one adding or removing packages from the lower layers, or modifying their default behavior. This allows multiple parties to tweak their own layer to affect the final image. So if the base layer uses a conservative set of compiler flags (which it usually does), a chip vendor can add compiler flags that are beneficial to their specific chip model, and a board vendor can remove chip functionality that their board might not support.

What this means in practice for your IoT product is that your effort to build a solution using a board that already supports Yocto will be to add or modify recipes that provide your value-add over the base functionality. You will also need to have a build and configuration management infrastructure set up that allows creating images for your target, though in today’s world of containers that is not too difficult to do.

For more information on Yocto, you can start here. It’s also worth checking how well supported Yocto is on any dev boards that you’re considering for your IoT solution.

Debian

Debian is a venerable open source binary-based distribution. It’s both a distribution onto itself and also the baseline for other well-known derived distributions, the most famous of which is Ubuntu.

Debian has a sizeable collection of packages that are already pre-built for ARM (the architecture of choice for IoT), but the level of support and maintenance for the ARM binaries of these packages tends to be significantly less than the Intel counterparts given Debian’s strength in Intel ecosystems. So metrics such as “10,000+ packages built” aren’t all that meaningful. You’ll need to understand the packages that are important to you and how well-supported they are.

A shortcoming of many distributions used in self-hosted setups (e.g. Debian) is that developers don’t understand or remember that package installation might not be done on the machine that will ultimately be running the package, and thus they can’t rely on any functionality from the target being available. Given that this nuisance is also a headache for docker environments, distributions have spent good effort in cleaning up these dependencies, so it’s a smaller problem than it used to be.

The effort to set up a build environment for a small set of packages is fairly trivial, but the infrastructure to build all the packages for a system can become significant.

Because of this, Debian for IoT is a good option as long as the board you are considering has already gone through the effort of supporting Debian, in which case you just need to add or create a few packages to complete your platform.

EdgeX Foundry

EdgeX Foundry is not exactly a distribution in the strict sense, in that it has no opinion on the Board Support Package (BSP) component of distributions. The BSP is the portion that contains the Linux kernel itself, plus the device drivers and libraries that enable the hardware platform. EdgeX starts from a level above that, requiring a working Linux system with Docker support as the underlying substrate. From there it provides a wide variety of containers that supply a rich set of middleware and verticals for IoT devices, in particular edge devices. (In Docker parlance, a container is a self-contained module that usually provides a vertical function such as a database or a web service, with little or no dependency on the host operating system, libraries, etc.)

The concepts behind EdgeX Foundry point the way forward for larger IoT devices, particularly edge devices, but work remains to be done to define a more constrained version that provides a good set of baseline services. Progress has been made in this regard with a move of some services from JVM-based to Go-based implementations, but the footprint will remain out of reach for low- and mid-end Linux-based IoT for the immediate future.

Foundries.io Microplatform

Foundries.io has created a Linux platform using a Yocto-based approach to build the board support layer, and then layers a set of containerized microservices on top of it. Their set of containers is smaller and more modest than the EdgeX Foundry approach, with a smaller footprint.

While full access to the Foundries.io product with automated updates and management is available via subscription, the underlying platform is open source and publicly available.

Conclusion

Linux-based IoT is starting to migrate from a traditional embedded model, where the complete vertical solution is created by a single team/worldview/toolchain, to a more flexible model with greater separation between the firmware, board, middleware, and application components. This migration is not without cost, however, and places higher demands on CPU, memory, and disk. To choose a Linux baseline for your next IoT project, you'll need to take into account what footprint you can afford and what lifespan you plan for your product. Smaller and more quickly replaced products are better off staying close to today's tried-and-true solutions such as Yocto. Products that can afford more resources, and that must roll new features out to deployed units, should look at the more mainstream Linux distributions and the new container-focused solutions as a path forward.

Source

Steam Play thoughts: A Valve game streaming service

With the talk of some big players moving into cloud gaming, along with a number of people thinking Valve will also be doing it, here are a few thoughts from me.

Firstly, for those that didn’t know already, Google are testing the waters with their own cloud gaming service called Project Stream. For this, they teamed up with Ubisoft to offer Assassin’s Creed Odyssey on the service. I actually had numerous emails about this, from a bunch of Linux gamers who managed to try it out and apparently it worked quite well on Linux.

EA are pushing pretty heavily into this too with what they're calling Project Atlas; their Chief Technology Officer said in a Medium post that one thousand EA employees are now working on it. That sounds incredibly serious to me!

There’s more cloud services offering hardware for a subscription all the time, although a lot of them are quite expensive and use Windows.

So this does beg the question: what is Valve going to do? Cloud gaming services that let people with lower-end devices play a bunch of AAA games relatively easily could end up cutting into Valve's wallet.

Enter Valve’s Cloud Gaming Service

Pure speculation of course, but with the number of big players now moving into the market, I'm sure Valve will be researching it themselves. Perhaps this is what Steam Play is actually progressing towards? With Steam Play, Valve would be able to give users access to a large library of games running on Linux without paying Microsoft any Windows licensing fees, and obviously, being Linux, it would allow them to heavily customise the platform to their liking.

On top of that, what about the improvements this could further bring for native desktop Linux gaming? Stop and think about it for a moment: how can Valve tell developers they will get the best experience on this cloud gaming platform? Have a native Linux version they support with updates and fixes. Valve are already encouraging developers to use Vulkan, so it's not such a stretch I think.

Think about how many games, even single-player games are connected to the net now in some way with various features. Looking to the future, having it so your games can be accessed from any device with the content stored in the cloud somewhere does seem like the way things are heading. As much as some (including me) aren’t sold on the idea, clearly this is where a lot of major players are heading and Valve won’t want to be left behind.

For Valve, it might not even need to be a subscription service, since they already host the data for the developers. Perhaps, you buy a game and get access to both a desktop and cloud copy? That would be a very interesting and tempting idea. Might not be feasible of course, since the upkeep on the cloud machines might require a subscription if Valve wanted to keep healthy profits, but it’s another way they could possibly trump the already heavy competition.

Think the whole idea is incredibly farfetched? Fair enough, I do a little too. However, they might already have a good amount of the legwork done on this, thanks to their efforts with the Steam Link. Did anyone think a year or two ago you would be able to stream Steam games to your phone and tablet?

Valve also offer movies, TV series and more on Steam so they have quite a lot to offer.

It might not happen at all of course; these are just some basic thoughts of mine on what Valve's moves might be in future. It's likely not going to happen for VR titles, since they need so much power and any latency hiccup could make people quite sick. Highly competitive games would also be difficult, but as always, once it gets going the technology behind it will constantly improve. There's got to be some sort of endgame for all their Linux gaming work beyond just helping us: they are a business, and they will keep moving along with all the other major players.

Source

Download Manjaro Linux Xfce 18

Manjaro Linux Xfce is an open source and completely free Linux operating system based on the powerful Arch Linux distribution. It uses a tool called BoxIt, which is designed to work like a Git repository. Besides the fact that the Manjaro distribution uses Arch Linux as its base, it aims to be user-friendly, especially because of the easy-to-configure installation. Additionally, it is fully compatible with the Arch User Repository (AUR) and uses the Pacman package manager.

Distributed in multiple editions

Manjaro is distributed with the Xfce, KDE and Openbox desktop environments, supporting both 32-bit and 64-bit architectures. A Netboot edition of Manjaro also exists for advanced users who want to install the distro over the Internet. In addition to the official editions listed above, the talented Manjaro community provides many other flavors, built with the LXDE, Cinnamon, MATE, Enlightenment and awesome window managers.

A rolling-release distribution

Keep in mind that, just like Arch Linux, Manjaro Linux is a rolling release. This means that users don’t need to download a new ISO image in order to upgrade the system to the latest stable version.
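On a rolling release the whole upgrade is driven by the package manager. A minimal sketch of the routine on Manjaro (as on Arch, via Pacman):

```shell
# Sync the package databases and upgrade every installed package in one
# step -- on a rolling release this IS the "upgrade to the latest version".
sudo pacman -Syu

# Packages built from the AUR are not covered by pacman's sync; they are
# rebuilt separately, e.g. manually with makepkg from a cloned PKGBUILD.
```

Running a partial sync (`-Sy` without `-u`) is discouraged on rolling releases, since mixing old and new library versions can break installed software.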

Boot options

The boot menu is exactly the same as on the other Manjaro flavors, allowing users to boot the live environment with or without proprietary drivers, check if the hardware components are correctly recognized, test the system memory (RAM), and boot the operating system that is currently installed. “Start Manjaro Linux” is the recommended option for all new users, as it will start the graphical environment powered by the lightweight Xfce window manager.

Default applications

Thunar is the default file manager, Mozilla Firefox can be used for all your web browsing needs, Mozilla Thunderbird is the default email client, and the Pidgin multi-protocol instant messenger application is there for any type of communication. The Steam client for Linux is also installed by default in this edition of Manjaro Linux, along with the HexChat IRC client, Viewnior image viewer, the powerful GIMP image editor, VLC Media Player, Xnoise music organizer and player, and the entire LibreOffice office suite.

Xfce is in charge of the graphical session

Besides the common utilities such as calculator, terminal emulator, text editor, clipboard manager, dictionary, document viewer and archive manager, the Manjaro Xfce edition also includes the GParted utility for disk partitioning tasks, and the Xfburn application for burning CD/DVD discs. We strongly recommend the Xfce edition of Manjaro Linux for new Linux users, old computers, and for all who want to discover the true power of the Arch Linux operating system.

Source

How to Manage Storage on Linux with LVM | Linux.com

Logical Volume Manager (LVM) is a software-based RAID-like system that lets you create “pools” of storage and add hard drive space to those pools as needed. There are lots of reasons to use it, especially in a data center or any place where storage requirements change over time. Many Linux distributions use it by default for desktop installations, though, because users find the flexibility convenient and there are some built-in encryption features that the LVM structure simplifies.
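The "pools" workflow described above can be sketched in a few commands. This is a non-authoritative outline; the device names (`/dev/sdb` etc.) and the volume group/logical volume names are placeholders to substitute with your own:

```shell
# 1. Mark raw disks (or partitions) as LVM physical volumes.
sudo pvcreate /dev/sdb /dev/sdc

# 2. Pool them into a volume group -- this is the "pool" of storage.
sudo vgcreate data_vg /dev/sdb /dev/sdc

# 3. Carve a logical volume out of the pool and put a filesystem on it.
sudo lvcreate --name data_lv --size 100G data_vg
sudo mkfs.ext4 /dev/data_vg/data_lv

# Later, when the pool runs low, add another disk and grow the volume
# (with --resizefs the filesystem is grown along with the LV):
sudo pvcreate /dev/sdd
sudo vgextend data_vg /dev/sdd
sudo lvextend --resizefs -L +50G /dev/data_vg/data_lv
```

This add-a-disk-and-extend step is precisely the flexibility that makes LVM attractive when storage requirements change over time.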

However, if you aren’t used to seeing an LVM volume when booting off of a Live CD for data rescue or migration purposes, LVM can be confusing, because the mount command can’t mount LVM volumes directly. For that, you need the LVM tools installed. The chances are great that your distribution has the LVM utilities available, if they aren’t already installed.
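From a live environment, the usual sequence is to install the LVM tools, activate the volume groups, and only then mount. A hedged sketch, where the `data_vg`/`data_lv` names are examples (`lvs` will show you the real ones on your system):

```shell
# Debian/Ubuntu live media; the package name varies by distribution.
sudo apt install lvm2

sudo vgscan          # scan block devices for LVM volume groups
sudo vgchange -ay    # activate every volume group that was found
sudo lvs             # list the logical volumes and their groups

# Activated LVs appear under /dev/mapper (and /dev/<vg>/<lv>),
# and from there plain mount works as usual.
sudo mount /dev/mapper/data_vg-data_lv /mnt
```

Until `vgchange -ay` runs, the logical volumes simply don't exist as block devices, which is why a bare `mount` from a rescue disk fails.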

This tutorial explains how to create and deal with LVM volumes.

Source
