VMware Acquires Heptio, Mining Bitcoin Requires More Energy Than Mining Gold, Fedora Turns 15, Microsoft’s New Linux Distros and ReactOS 0.4.10 Released

News briefs for November 6, 2018.

VMware has acquired Heptio, which was founded by Joe Beda and Craig
McLuckie, two of the creators of Kubernetes. TechCrunch
reports
that the terms of the deal aren’t being disclosed and
that “this is a signal of the big bet that VMware is taking on
Kubernetes, and the belief that it will become an increasing
cornerstone in how enterprises run their businesses.” The post also
notes that this acquisition is “also another endorsement of the ongoing
rise of open source and its role in cloud architectures”.

The energy needed to mine one dollar’s worth of bitcoin is reported to
be more than double the energy required to mine the same amount of
gold, copper or platinum. The Guardian reports on recent research from
the Oak Ridge Institute in Cincinnati, Ohio, that “one dollar’s worth
of bitcoin takes about 17 megajoules of energy to mine…compared with
four, five and seven megajoules for copper, gold and platinum”.

Happy 15th birthday to Fedora! Fifteen years ago today, November 6,
2003, Fedora Core 1 was released. See Fedora
Magazine’s post
for a look back at the Fedora Project’s beginnings.

Microsoft announced the availability of two new Linux distros for
Windows Subsystem for Linux, which will coincide with the Windows 10
1809 release. ZDNet
reports
that the Debian-based Linux distribution WLinux is currently
available from the Microsoft Store for $9.99 (normally $19.99).
OpenSUSE Leap 15 and SLES 15 are now available from the
Microsoft Store as well.

ReactOS 0.4.10 was released today. The main new feature is
“ReactOS’ ability to now boot from a BTRFS formatted drive”. See the
official ChangeLog for more details.

Source

The Linux Foundation Announces Intent to Form New Foundation to Support GraphQL

SAN FRANCISCO, Nov. 6, 2018 /PRNewswire/ — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announces a broad coalition of industry leaders and users have joined forces to create a new open source foundation for the GraphQL project, which will be dedicated to growing and sustaining a neutral GraphQL ecosystem. Hosted under the Linux Foundation, the GraphQL Foundation’s mission will be to enable widespread adoption and help accelerate development of GraphQL and the surrounding ecosystem.


“As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL.

GraphQL is a next-generation API technology developed internally by Facebook in 2012 before being publicly open sourced in 2015. As application development shifts towards microservices architectures with an emphasis on flexibility and speed to market, tools like GraphQL are redefining API design and client-server interaction to improve the developer experience, increasing developer productivity and minimizing the amount of data transferred. GraphQL makes cross-platform and mobile development simpler with availability in multiple programming languages, allowing developers to create seamless user experiences for their customers.

GraphQL is being used in production by a variety of high scale companies such as Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest and Yelp. GraphQL also powers hundreds of billions of API calls a day at Facebook.

“We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language,” said Jim Zemlin, Executive Director, the Linux Foundation.

Unlike REST-based APIs, which take advantage of HTTP and existing protocols, GraphQL APIs provide developers with the flexibility to query the exact data they need from a diverse set of cloud data sources, with less code, greater performance and security, and a faster development cycle. Not only does this enable developers to rapidly build top-quality apps, it also helps them achieve consistency and feature parity across multiple platforms such as web, iOS, Android, and embedded and IoT applications.

The GraphQL Foundation will have an open governance model that encourages participation and technical contribution and will provide a framework for long-term stewardship by an ecosystem invested in GraphQL’s success.

“At Facebook, our mission is to give people the power to build community and bring the world closer together. We believe open source projects and the communities built around them help accelerate the pace of innovation and bring many minds to bear to solve large-scale challenges. GraphQL is one such project and community and the GraphQL Foundation will help ensure GraphQL continues to solve the real data fetching challenges that developers will face in building the products of tomorrow,” said Killian Murphy, Director, Facebook Open Source.

“GraphQL has redefined how developers work with APIs and client-server interactions. We look forward to working with the GraphQL community to become an independent foundation, draft their governance and continue to foster the growth and adoption of GraphQL,” said Chris Aniszczyk, Vice President of Developer Relations, the Linux Foundation.

Supporting Quotes

“Airbnb is making a massive investment in GraphQL, putting it at the center of our API strategy across both our product and internal tools. We are excited to see the Foundation play a key role in cultivating the community around GraphQL and continue to evolve GraphQL as a technology, paving the way for continued innovation of Airbnb’s API.” – Adam Neary, Tech Lead, Airbnb

“Given GraphQL’s centrality in the modern app development stack, the foundation we’re announcing today is not just necessary, but overdue. As the creators of Apollo, the most widely used implementation of GraphQL, we’re looking forward to working together with the Linux Foundation to define appropriate governance processes for this critical Internet standard.” – Geoff Schmidt, co-founder and CEO of Apollo GraphQL

“GraphQL, and the strong ecosystem behind it, is leading to a fundamental change in how we build products, and it helps bring together teams and organizations of every size. At Coursera, GraphQL assists us in understanding the massive breadth of our APIs and helps us create transformative educational experiences for everyone, everywhere. We’re excited to see the impact of the GraphQL Foundation in making both the technology and the community stronger.” – Jon Wong, Staff Software Engineer, Coursera

“GraphQL has come a long way since its creation in 2012. It’s been an honor seeing the technology grow from a prototype, to powering Facebook’s core applications, to an open source technology on the way to becoming a ubiquitous standard across the entire industry. The GraphQL Foundation is an exciting step forward. This new governance model is a major milestone in that maturation process that will ensure a neutral venue and structure for the entire community to drive the technology forward.” – Nick Schrock, Founder, Elementl, GraphQL Co-Creator

“We created GraphQL at Facebook six years ago to help us build high-performance mobile experiences, so to see it grow and gain broad industry adoption has been amazing. Since Facebook open-sourced GraphQL in 2015, the community has grown to include developers around the world, newly-founded startups, and well-established companies. The creation of the GraphQL Foundation is a new chapter that will create a governance structure we believe will empower the community and provide GraphQL long-term technical success. I’m excited to see its continued growth under the Foundation’s guidance.” – Dan Schafer, Facebook Software Engineer, GraphQL Co-Creator

“GraphQL has proven to be a valuable, extensible tool for GitHub, our customers, and our integrators over the past two years. The GraphQL Foundation embodies openness, transparency, and community — all of which we believe in at GitHub.” – Kyle Daigle, Director, Ecosystem Engineering, GitHub

“This is a very welcome announcement, and we believe that this is a necessary step. The GraphQL community has grown rapidly over the last few years, and has reached the point where transparent, neutral governance policies are necessary for future growth. At Hasura, we look forward to helping the Foundation in its work.” – Tanmai Gopal, CEO, Hasura

“GraphQL has become one of the most important technologies in the modern application development stack and sees rapid adoption by developers and companies across all industries. At Prisma, we’re very excited to support the GraphQL Foundation to enable a healthy community and sustain the continuous development of GraphQL.” – Johannes Schickling, Founder and CEO, Prisma

“At Shopify, GraphQL powers our core APIs and all our mobile and web clients. We strongly believe in open development and look to the Foundation to help expand the community and nurture its evolution.” – Jean-Michel Lemieux, SVP Engineering, Shopify

“GraphQL is gaining tremendous adoption as one of the best protocols for remote retrieval of large object graphs. At Twitter, we are looking forward to what’s to come in the GraphQL ecosystem and are very excited to support the GraphQL Foundation.” – Anna Sulkina, Sr. Engineering Manager, Core Services Group, Twitter

About the Linux Foundation
The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact:
Emily Olin
The Linux Foundation
eolin@linuxfoundation.org

View original content to download multimedia: http://www.prnewswire.com/news-releases/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-graphql-300744847.html

SOURCE The Linux Foundation

Source

Amazon SageMaker Now Supports Pipe Mode for Datasets in CSV Format

Posted On: Nov 5, 2018

The built-in algorithms that come with Amazon SageMaker now support Pipe Mode for datasets in CSV format. This accelerates the speed at which data can be streamed from Amazon Simple Storage Service (S3) into SageMaker by up to 40%, while training machine learning (ML) models. With this new enhancement, the performance benefits of Pipe Mode are extended to training datasets in CSV format in addition to the protobuf recordIO format that we released earlier this year.

Amazon SageMaker supports two methods of transferring training data: File Mode and Pipe Mode. With File Mode, the training data is first downloaded to an encrypted EBS volume attached to the training instance before the model is trained. With Pipe Mode, the data is streamed directly to the training algorithm while it is running. This results in faster training jobs that use less disk space, reducing the overall cost to train ML models on Amazon SageMaker.

Support for CSV format with Pipe Mode is available in all AWS regions where Amazon SageMaker is available today. You can read additional details in this blog post.

Source

PostgreSQL to Manage JSON | Linux Hint

One of the many data types that PostgreSQL supports is JSON. Since most web API communication uses JSON payloads immensely, this feature is rather important. Rather than using the plain text data type to store JSON objects, Postgres has a dedicated data type that is optimized for JSON payloads and verifies that data stored in these fields conforms to the JSON RFC specification. Also, in a classic Postgres manner, it allows you to fine-tune your JSON fields for maximum performance.

While creating a table, you have two options for your JSON column: the plain json data type and the jsonb data type. Both have their own advantages and disadvantages. We shall go through each of them by creating a simple table with just two columns: an ID and a JSON value. Following this, we will query data from the table and get a feel for how to manage JSON-formatted data inside Postgres.

JSON Data Type

1. Creating a Table with JSON Data Type

Let’s create a simple two column table named users:

CREATE TABLE users (
id serial NOT NULL PRIMARY KEY,
info json NOT NULL
);

Here the column id acts as the primary key, and it will increase in an incremental fashion thanks to the pseudotype serial so we won’t have to worry about manually entering values for id as we go along.

The second column is of json type and is forced to be NOT NULL. Let’s enter a few rows of data to this table, consisting of JSON values.

INSERT INTO users (info) VALUES (
'{
  "name": "John Doe",
  "email": "johndoe@example.com",
  "personalDetails": {"age": 31, "gender": "M"}
}');

INSERT INTO users (info) VALUES (
'{
  "name": "Jane Doe",
  "email": "janedoe@example.com",
  "personalDetails": {"age": 33, "gender": "F"}
}');

You can use your preferred JSON beautifier/minifier to convert the JSON payloads above into a single line, so you can paste each one into your psql prompt in one go.

SELECT * FROM users;

 id |                           info
----+----------------------------------------------------------
  1 | {"name": "John Doe", "email": "johndoe@example.com", ...}
  2 | {"name": "Jane Doe", "email": "janedoe@example.com", ...}
(2 rows)

The SELECT command at the end showed us that the rows were successfully inserted into the users table.

2. Querying JSON Data Type

Postgres allows you to dig into the JSON payload itself and retrieve a particular value out of it if you reference it using the corresponding key. We can use the -> operator after the json column’s name, followed by the key inside the JSON object.

For example, in the table we created above:

SELECT id, info -> 'email' FROM users;

 id |        ?column?
----+-----------------------
  1 | "johndoe@example.com"
  2 | "janedoe@example.com"
(2 rows)

You may have noticed the double quotes in the column containing emails. This is because the -> operator returns a JSON object, as present in the value of key “email”. Of course, you can return just text, but you will have to use the ->> operator instead.

SELECT id, info ->> 'email' FROM users;

 id |      ?column?
----+---------------------
  1 | johndoe@example.com
  2 | janedoe@example.com
(2 rows)

The difference between returning a JSON object and a string becomes clear once we start working with JSON objects nested inside other JSON objects. For example, I chose the “personalDetails” key to intentionally hold another JSON object. We can dig into this object too, if we want:

SELECT info -> 'personalDetails' -> 'gender' FROM users;

 ?column?
----------
 "M"
 "F"
(2 rows)

This can let you go as deep into the JSON object as you would want to. Let’s drop this table and create a new one (with the same name) but with JSONB type.
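The drop itself is a single statement:

```sql
DROP TABLE users;
```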

JSONB Data Type

Except for the fact that we specify the jsonb data type instead of json during creation of the table, all else looks the same.

CREATE TABLE users (
id serial NOT NULL PRIMARY KEY,
info jsonb NOT NULL
);

Even the insertion of data and retrieval using the -> operator behave the same way. What has changed is all under the hood, and it is noticeable in the table’s performance. When converting JSON text into jsonb, Postgres actually turns the various JSON value types into native Postgres types, so not all valid json objects can be saved as valid jsonb values.

Moreover, jsonb doesn’t preserve whitespace or the order of JSON keys as supplied by the INSERT statement. jsonb actually converts the payload into a native Postgres binary format, hence the name jsonb.

Of course, inserting a jsonb datum has a performance overhead because of all this additional work that Postgres needs to do. However, the advantage you gain is faster processing of the already stored data, since your application would not need to parse a JSON payload every time it retrieves one from the database.

JSON vs JSONB

The decision between json and jsonb depends solely on your use case. When in doubt, use jsonb, since most applications tend to have more frequent read operations than write operations. On the other hand, if you are sure that your application is expected to do more write operations than reads, then you may want to consider json as an alternative.

Conclusion

People working with JSON payloads and designing interfaces for Postgres storage will benefit immensely from the JSON types section of the official PostgreSQL documentation. The developers were kind enough to furnish us with jsonb indexing and other cool features that can be leveraged to improve the performance and simplicity of your application. I implore you to investigate these as well.
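As a quick sketch of the indexing mentioned above (assuming the jsonb users table from this article), a GIN index supports fast containment queries:

```sql
-- Index the whole jsonb column for containment queries
CREATE INDEX idx_users_info ON users USING GIN (info);

-- The @> containment operator can now use the index
SELECT info ->> 'name' FROM users
WHERE info @> '{"personalDetails": {"gender": "F"}}';
```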

Hopefully, you found this brief introduction of the matter helpful and inspiring.

Source

Open-Source OPNids Enhances Suricata With Machine Learning

A new open-source intrusion detection system (IDS) effort is officially getting underway on Nov. 5 with the launch of the OPNids project.

The OPNids effort is being led by threat hunting firm CounterFlow AI and security appliance provider Deciso, which also leads the OPNsense security platform project. OPNids is built on top of the open-source Suricata IDS, providing a new layer of machine learning-based intelligence to help improve incident response and threat hunting activities.

“We created a pipeline that will actually take the Suricata logs and analyze the packets to provide context around any alerts,” Randy Caldejon, CEO and co-founder of CounterFlow AI, told eWEEK. “We like to call this alert triage. It’s like taking it to the last mile of what the analysts would do anyhow because typically when there’s an alert, they want some context.”

The Suricata project was started in 2009 by the Open Information Security Foundation as an open-source alternative to the Snort IDS that was already on the market.

“What we’re doing with the machine learning is taking advantage of the data that is flowing at line rates from Suricata and rather than just storing the data to disk we figured we should analyze the data while it’s in motion,” he said. “What we have is not one single model; we built an agent so analysts can write their own scripts.”

How It Works

At the core of OPNids is the DragonFly Machine Learning Engine (MLE), which uses a streaming data analytics model to ingest line rate network data from Suricata.

“Most machine learning techniques are what are traditionally known as batch techniques, where you get a big pool of data offline and you apply a sophisticated algorithm onto that,” Andrew Fast, chief data scientist at CounterFlow AI, told eWEEK. “We are taking a different approach, with streaming analytics, which is a newer branch of machine learning that is not as widely used.”

Fast explained that with streaming analytics rather than looking at a collected pool of data, the DragonFly MLE is collecting statistics and making decisions as the data flows through the IDS. The DragonFly MLE can also support offline training and then online scoring at wire speed, he said. With machine learning, training is used to help tune a model, while scoring is the output of the model.

Caldejon added that OPNids can also enable post-processing of data for additional analysis via Filebeat, a log shipper commonly used in the Elastic Stack. As such, OPNids data can be forwarded to an Elasticsearch engine for additional analysis. Apache Kafka streaming is also supported, as is syslog for general log capabilities.

Moving forward, Caldejon said the plan is to also integrate OPNids with threat intelligence gateway capabilities, taking in third-party threat data as a factor that helps the MLE make decisions.

OPNids Pro

Alongside the open-source effort, there is a commercially supported version of OPNids in the works. The commercial version has a hardware edition where software is all preloaded and integrated. Additionally, Caldejon said the Pro edition of OPNids will include a packet caching capability. The packet caching capability will take network packets and write them to disk, saving the PCAP (packet capture) for additional offline analysis, he said.

Both the Pro and standard editions of OPNids are fully bundled application images including the MLE and Suricata. Caldejon said users can download and get the system running within 10 minutes.

“It’s got Suricata, it’s got the MLE, and it has a nice UI [user interface], so even unsophisticated users can manage the technology,” he said. “Typically, a lot of open-source projects are all CLI [command line interface] and expect people to be experts at the command line, but we wanted to make this accessible to data scientists as well, and they usually don’t have those skills, so yeah, OPNids is a one-stop shop, everything is bundled.”

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Download Pop!_OS Linux 18.10

Pop!_OS Linux is a free GNU/Linux distribution based on the popular Ubuntu operating system, built around the widely used GNOME desktop environment, and optimized for the desktop and laptop computers created by Linux computer retailer System76.

Supports Intel, AMD, and Nvidia GPUs

Pop!_OS Linux comes in two flavors for System76 computers, with either hybrid Intel and AMD or Intel and Nvidia graphics cards. Only 64-bit systems are supported, and you can install the operating system on computers not manufactured by System76 too, provided you download the right ISO image.

Speaking of the ISO image, Pop!_OS Linux offers a live and installable image that’s about 2GB in size, which means you can write it to a USB flash drive and use it to install the operating system whenever you want. There’s no boot menu on the live images, and the installer prompts the user immediately after the live session loads.

Forced installer, but there’s a live session too

The installer is forced on the user when running the live session, but you can always minimize it in a corner if you only want to give Pop!_OS Linux a test drive on your personal computer and see what the fuss is all about. You’ll have unrestricted access to the entire graphical environment and pre-installed applications.

Customized GNOME desktop with a minimal set of apps

Pop!_OS Linux features a highly customized GNOME desktop environment with beautiful artwork, yet a traditional experience with a single panel located on the top of the screen, from where you can access the pre-installed apps, running apps, and the system tray area.

There’s only a minimal set of apps available in the Pop!_OS Linux live image, including the Mozilla Firefox web browser, Geary email client, LibreOffice office suite, and a few standard GNOME utilities, such as a calculator, system monitor, calendar, contacts, terminal emulator, text editor, weather client, and others.

A Linux distro for System76 customers

Bottom line, Pop!_OS Linux is a GNU/Linux distribution designed and optimized by System76 for its own customers, in case they need to reinstall the operating system if something goes wrong. However, nothing stops you from using System76’s in-house built operating system on a personal computer from a different manufacturer.

Source

How to play Windows games on Linux

Linux provides a kind of security and stability that Windows somewhat fails to deliver. However, most gamers gravitate toward Windows because of the misconception that Linux cannot support Windows games. Users who care about gaming will rarely pick Linux, and users comfortable with Linux will rarely switch to Windows.

The question you are asked most often as a Linux user is: how do you play games? Well, when it comes to gaming, we know that Windows leads Linux to some extent. That doesn’t mean that we can never play our favorite Windows games again. It’s just a misconception that Ubuntu does not support Windows gameplay. What people don’t usually know is that game developers are increasingly taking advantage of Linux’s growing market. Not only are there companies making Linux-based games, but companies like Valve are developing tools that can support Windows games on your Linux system — not just Ubuntu, but Linux in general.

A big advantage of using Linux over other gaming platforms is the stability it has to offer. Other systems are often loaded with bugs or freezing issues, and gamers can easily get frustrated by untimely interruptions while completing their missions. To save you from this trouble, Linux gives you better stability in the gaming domain as well.

So now coming to the juicy part of our topic, how exactly can you play Windows games on Linux? Below are tools that support your favorite Windows games on Linux.

Wine

The most common way to run Windows games is to get Wine installed on your system. When WineHQ released its first stable version, 1.0, it already supported the 200 most popular Windows games. The latest version of Wine also offers rankings of games, which help in determining how much configuration they require. A Platinum ranking means that the game has a 99% chance of working. A Gold ranking means that you’ll need to configure it a little, but in the end it will work fine; such games are labeled Gold because they haven’t been tested against the newest version of Wine. Silver and Bronze labels mean that there may be some issues in the game. Of course, if a game shows a Garbage ranking, the chances of it working are about as rare as seeing a penguin talk. Check out the huge Wine application database before installing it.

Steam Play

A new beta version of Steam Play was released this year: a way that allows users to access Windows, Mac and Linux versions of Steam games. Steam already had more than 3,000 games for Linux users and has been adding more each day. To increase compatibility with Windows games, Valve decided to include in the beta version of Steam Play a modified version of Wine called Proton.
The official site lists some of the benefits the new version provides:

  • Windows games with no current Linux versions can be installed and run directly from Linux Steam client.
  • It will include complete built-in support for Steamworks and OpenVR.
  • Improved game compatibility and reduced performance impact, thanks to DirectX 11 and 12 implementations based on Vulkan.
  • Games will recognize all controllers automatically.
  • Multi-thread games will have improved performance as compared to vanilla Wine.

Check out the list of games that new Steam beta version supports.

PlayOnLinux (POL)

Not only does it provide an interactive and user-friendly front end, but it also includes a series of pre-built scripts to help users install specific games quickly. PlayOnLinux is an effective and more user-friendly interface to Wine, allowing you to configure and access it outside of the command line. The advantage you have is that if you cannot find your game listed in PlayOnLinux, or if its script fails, you can just visit the Wine Application Database and enter the name of the game you want in the search box. The downside of POL is that Wine is hardware-dependent, meaning its performance will depend on the kind of hardware you’re using, and unfortunately POL cannot work without Wine.

Lutris

Lutris is a tool that allows you to install and manage games on Linux. It works for native and Windows games as well as emulators. Proton, the Wine-based compatibility layer, allows playing Windows-only games but is strictly for Steam games; Lutris can be used to enhance your experience of playing other games, like Blizzard games. It provides a large database of games and has install scripts available for download.

Conclusion

People are moving toward Linux due to the greater degree of stability it offers. Gamers have the idea that Linux won’t be able to support their favorite games, so they are hesitant. However, this is just a misconception, and companies worldwide are putting in effort to provide comfort to gamers who might want to shift to Linux.

Source

Go Tutorial – Introduction to Go

Introductory Go Programming Tutorial

An Overview of the Core Language

Go Programming

Maybe you’ve heard of Go. It was first introduced in 2009, but like any new programming language, it took a while for it to mature and stabilize to the point where it became useful for production applications. Nowadays, Go is a well-established language that is used for network and database programming, web development, and writing DevOps tools. It was used to write Docker, Kubernetes, Terraform and Ethereum. Go is accelerating in popularity, with adoption increasing by 76% in 2017, and now there are Go user groups and Go conferences. Whether you want to add to your professional skills, or are just interested in learning a new programming language, you may want to check it out.

Why Go?

Go was created at Google by a team of three programmers: Robert Griesemer, Rob Pike, and Ken Thompson. The team decided to create Go because they were frustrated with C++ and Java, which over the years had become cumbersome and clumsy to work with. They wanted to bring enjoyment and productivity back to programming.

The three have impressive accomplishments. Griesemer worked on Google’s ultra-fast V8 JavaScript engine, used in the Chrome web browser, the Node.js JavaScript runtime environment, and elsewhere. Pike and Thompson were part of the original Bell Labs team that created Unix, the C language, and Unix utilities, which led to the development of the GNU utilities and Linux. Thompson wrote the very first version of Unix and created the B programming language, upon which C was based. Later, Thompson and Pike worked on the Plan 9 operating system team, and they also worked together to define the UTF-8 character encoding.

Go has the safety of static typing and garbage collection along with the speed of a compiled language. With other languages, “compiled” and “garbage collection” are associated with waiting around for the compiler to finish, and then getting slowly-running programs. But Go has a lightning-fast compiler that makes compile times barely noticeable, and a modern, ultra-efficient garbage collector. You get fast compile times along with fast programs. Go has concise syntax and grammar with few keywords, giving Go the simplicity and fun of dynamically-typed interpreted languages like Python, Ruby, and JavaScript.

If you know C, C++, Java, Python, JavaScript, or a similar language, many parts of Go, including identifiers, operators, and flow control statements, will look familiar. A simple example is that comments can be included as in ANSI C:

// This comment continues only to the end of the line.

/* This type
of comment
can span
multiple lines. */

In other ways, Go is very different from C. One obvious difference is its declaration syntax, which is more like that of Pascal and Modula-2. Here is how an integer variable is declared in Go:

var i int

This syntax may look “backwards” to a C or Java programmer, but it is much more natural and easier to work with. Declarations in Go can usually be directly translated to or from English or another human language. In the above example, the declaration can be read as, “Variable i is an integer.” Here are some more simple examples:

var x float32 // "Variable x is a 32-bit floating point number."
var c byte    // "Variable c is a byte."
var s string  // "Variable s is a string."

And it is also possible to do this with more complex data types:

var a [32]int // "Variable a is an array of 32 integers."

var s struct { // "Variable s is a structure composed of
    i int      // i, an integer,
    b byte     // b, a byte,
    s []int    // and s, a slice of integers."
}

The idea of Go’s design is to have the best parts of many languages. At first, Go looks a lot like a hybrid of C and Pascal (both of which are successors to Algol 60), but looking closer, you will find ideas taken from many other languages as well.

Go is designed to be a simple compiled language that is easy to use, while allowing concisely-written programs that run efficiently. Go lacks extraneous features, so it’s easy to program fluently, without needing to refer to language documentation while programming. Programming in Go is fast, fun, and productive.

Let’s Go

First, make sure you have Go installed. The Go team provides easy-to-install distributions for Linux (including Raspberry Pi), macOS, FreeBSD, and Windows on the Go website at https://golang.org/dl/.

You will also need to make sure that your PATH environment variable is set properly to allow your shell or command line processor to find the programs that make up the Go system. On Linux, macOS, or FreeBSD, add /usr/local/go/bin to your PATH, somewhat like this:

$ PATH=$PATH:/usr/local/go/bin

On Windows, setting a PATH environment variable is a little more complicated. In either case, you can find directions for setting PATH in the installation instructions for your system, which appear in a web page after you click to download the installation package.

On Linux, you may be able to install Go using your distribution’s package management system. To find the Go package, try looking for “golang”, which is a synonym for Go. Even if you can do that, it may be better to download a distribution from the Go website to make sure you get the most recent version.

When you have Go installed, try this command:

$ go version
go version go1.11.1 linux/amd64

The output shows that I have Go version 1.11.1 installed on my 64-bit Linux machine.

Hopefully, by now you’ve become interested and want to see what a complete Go program looks like. Here’s a very simple program in Go that prints “hello, world”.

package main

import "fmt"

func main() {
    fmt.Printf("hello, world\n")
}

The line package main defines the package this file is part of. Using main as the name of both the package and the function tells Go that this is where the program’s execution should start. A main package and a main function must be defined even when the entire program consists of a single package with a single function.

At the top level, Go source code is organized into packages. Every source file is part of a package. Importing packages and exporting functions are child’s play.

The next line, import “fmt”, imports the fmt package. It is part of the Go standard library, and contains the Printf() function. Often you will need to import more than one package. To import the fmt, os, and strings packages, you can either type

import "fmt"
import "os"
import "strings"

or

import (
    "fmt"
    "os"
    "strings"
)

Using parentheses, import is applied to everything listed inside the parentheses, saving some typing. You will see parentheses used like this again elsewhere in Go, and Go has other kinds of typing shortcuts, too.
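The same parenthesized grouping works for var and const declarations. Here is a small sketch (the variable names are invented for illustration) that declares three variables in one block:

```go
package main

import "fmt"

// One var block declares several variables at once, just as a
// parenthesized import lists several packages.
var (
	name    = "gopher"
	count   = 3
	enabled = true
)

func main() {
	fmt.Println(name, count, enabled)
}
```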

Packages may export constants, types, variables, and functions. To export something, just capitalize the name of the constant, type, variable or function you want to export. It’s that simple.
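A minimal sketch of the rule (the names here are invented for illustration): Greet and MaxRetries, being capitalized, would be visible to any package that imports this one, while retryDelay stays private:

```go
package main

import "fmt"

// MaxRetries and Greet are exported (capitalized): other packages
// importing this one can use them. In a real program they would
// live in their own package rather than in main.
const MaxRetries = 5

func Greet(name string) string {
	return "hello, " + name
}

// retryDelay is unexported (lowercase): private to this package.
var retryDelay = 2

func main() {
	fmt.Println(Greet("gopher"), MaxRetries, retryDelay)
}
```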

Notice that there are no semicolons in the “hello, world” program. Semicolons at the ends of lines are optional. Although this is convenient, it leads to something to be careful about when you are first learning Go. This part of Go’s syntax is implemented using a method taken from the BCPL language. The compiler uses a simple set of rules to “guess” when there should be a semicolon at the end of the line, and it inserts one automatically. In this case, if the right parenthesis in main() were at the end of the line, it would trigger the rule, so it’s necessary to place the open curly bracket after main() on the same line.

This formatting is a common practice that’s allowed in other languages, but in Go, it is required. If we put the open curly bracket on the next line, we will get an error message.

Go is unusual in that it either requires or favors a specific style of whitespace formatting. Rather than allowing all sorts of formatting styles, the language comes with a single formatting style as part of its design. The programmer has a lot of freedom to violate it, but only up to a point. This is either a straitjacket or godsend, depending on your preferences! Free-form formatting, allowed by many other languages, can lead to a mini Tower of Babel, making code difficult to read by other programmers. Go avoids that by making a single formatting style the preferred one. Since it’s fairly easy to adopt a standard formatting style and get used to using it habitually, that’s all you have to do to be writing universally-readable code. Fair enough? Go even comes with a tool for reformatting your code to make it fit the standard:

$ go fmt hello.go

Just two caveats: Your code must be free of syntax errors for it to work, so it won’t fix problems such as failing to put an open brace on the same line. Also, it overwrites the original file, so if you want to keep the original, make a backup before running go fmt.

The main() function has just one line of code to print the message. In this example, the Printf() function from the fmt package was used to make it similar to writing a “hello, world” program in C. If you prefer, you can also use

fmt.Println("hello, world")

to save typing the \n newline character at the end of the string. This is another example of Go being similar to C and Pascal. You get formatted printing and scanning functions closely resembling the ones C programmers are used to, plus simple and convenient functions like Println() that are similar to writeln() and other functions in Pascal.

So let’s compile and run the program. First, copy the “hello, world” source code to a file named hello.go. Then compile it using this command:

$ go build hello.go

And to run it, use the resulting executable, named hello, as a command:

$ ./hello
hello, world

As a shortcut, you can do both steps in just one command:

$ go run hello.go
hello, world

That will compile and run the program without creating an executable file. It’s great for when you are actively developing a project and you are just checking for errors before doing more edits.

Next, let’s look at a few of Go’s main features.

Concurrency

Go’s built-in support for concurrency, in the form of goroutines, is one of the language’s best features. A goroutine is like a process or thread, but much more lightweight. It is normal for a Go program to have thousands (or even millions) of active goroutines. Starting a goroutine is as simple as

go f()

The function f() will then run concurrently with the main program and other goroutines. Go has a means of allowing the concurrent pieces of the program to synchronize and communicate using channels. A channel is somewhat like a Unix pipe; it can be written to at one end, and read from at the other. A common use of channels is for goroutines to indicate when they have finished.

The goroutines and their resources are managed automatically by the Go runtime system. With Go’s concurrency support, it’s easy to get all the cores and threads of a multi-core CPU working efficiently.
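A common pattern for waiting on a batch of goroutines is sync.WaitGroup from the standard library. Here is a small sketch (the squares function is invented for illustration) that computes four squares concurrently and waits for them all:

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently, one goroutine each,
// and waits for all of them to finish before returning.
func squares(n int) []int {
	var wg sync.WaitGroup
	results := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1) // register one more goroutine
		go func(k int) {
			defer wg.Done() // signal completion on return
			results[k] = k * k
		}(i)
	}
	wg.Wait() // block until every goroutine has called Done
	return results
}

func main() {
	fmt.Println(squares(4)) // [0 1 4 9]
}
```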

Synchronization with Channels

Any language that supports multi-threaded or concurrent processing needs to have a mechanism that allows simultaneously or concurrently-running code to coordinate modifying and accessing data. For this purpose, Go has channels, which are first-in-first-out (FIFO) queues that function in a manner that allows them to be used to synchronize goroutines.

In the simplest case, a channel is unbuffered and allows only one pending operation at a time. Therefore, it is either empty or has a read or write that is waiting for another goroutine to perform the corresponding action.

Synchronization is performed as in this example: Suppose a goroutine attempts to receive (read) from an empty channel. The goroutine is put to sleep by the Go runtime system until another goroutine sends on (writes to) the channel. Then there is something there for the first goroutine to receive, so it’s awakened and the receive operation succeeds. This mechanism allows a channel to be used to make a goroutine wait for another to signal it by performing an operation on their shared channel. Go’s channels can be used to implement mutually-exclusive locks (mutexes) that are found in other languages, and the sync Go package does exactly that.
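As a sketch of that idea, a buffered channel of capacity one can serve as a lock: sending acquires it and receiving releases it. (In practice you would usually reach for the sync package’s Mutex; the count function here is invented for illustration.)

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from n goroutines, using a
// buffered channel of capacity one as a lock: sending a token
// acquires the lock, receiving it back releases the lock.
func count(n int) int {
	lock := make(chan struct{}, 1)
	counter := 0

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lock <- struct{}{} // acquire: blocks while another goroutine holds the token
			counter++
			<-lock // release
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(count(100)) // 100
}
```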

Using channels is very simple and concise. Before they are used, channels are allocated using Go’s built-in make() function. For example,

var intch chan int
intch = make(chan int)

declares and then creates a new channel named intch for sending and receiving integers. Sending on a channel is as simple as

intch <- 1 // send the integer 1 on the channel

and receiving from a channel (in another goroutine) is just as simple:

var num int
num = <-intch // receive an integer from the channel

Here’s a complete example that shows a channel in use:

package main

import "fmt"

var intch chan int

func main() {
    intch = make(chan int)
    go print_number()
    intch <- 37 // send 37 on the channel
    <-intch     // wait for a response before exiting
}

func print_number() {
    var number int

    number = <-intch // receive an integer from the channel
    fmt.Printf("The number is %d\n", number)
    intch <- 0 // send a response
}

Types, Methods, and Interfaces

You might wonder why types and methods are together in the same heading. It’s because Go has a simplified object-oriented programming model that works along with its expressive, lightweight type system. It completely avoids classes and type hierarchies, so it’s possible to do complicated things with datatypes without creating a mess. In Go, methods are attached to user-defined types, not to classes, objects, or other data structures. Here’s a simple example:

package main

import “fmt”

type MyInt int

func (n MyInt) sqr() MyInt {
    return n * n
}

func main() {

    var number MyInt = 5

    var square = number.sqr()

    fmt.Printf("The square of %d is %d\n", number, square)
}

Along with this, Go has a facility called interfaces that allows types to be mixed. Operations can be performed on a mix of types as long as each type has the method or methods, named in the interface definition, that the operations need.

Suppose we’ve created types called cat, dog, and bird, and each has a method called age() that returns the age of the animal. If we want to add the ages of all the animals in one operation, we can define an interface like this:

type animal interface {
    age() int
}

The animal interface then can be used like a type, allowing the cat, dog, and bird types to all be handled collectively when calculating ages.
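Putting the pieces together, here is a sketch of how the animal interface might be used (the concrete types and ages are invented for illustration):

```go
package main

import "fmt"

type animal interface {
	age() int
}

// Concrete types; each stores its age directly.
type cat int
type dog int
type bird int

func (c cat) age() int  { return int(c) }
func (d dog) age() int  { return int(d) }
func (b bird) age() int { return int(b) }

// totalAge accepts any mix of types that satisfy the animal interface.
func totalAge(animals []animal) int {
	total := 0
	for _, a := range animals {
		total += a.age() // each type supplies its own age() method
	}
	return total
}

func main() {
	fmt.Printf("total age: %d\n", totalAge([]animal{cat(3), dog(5), bird(1)})) // total age: 9
}
```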

Unicode Support

Considering that Ken Thompson and Rob Pike defined the Unicode UTF-8 encoding that is now dominant worldwide, it may not surprise you that Go has good support for UTF-8. If you’ve never used Unicode and don’t want to bother with it, don’t worry; UTF-8 is a superset of ASCII. That means you can continue programming in ASCII and ignore Go’s Unicode support and everything will work nicely.

In reality, all source code is treated as UTF-8 by the Go compiler and tools. If your system is properly-configured to allow you to enter and display UTF-8 characters, you can use them in Go source file names, command-line arguments, and in Go source code for literal strings and names of variables, functions, types, and constants.

Just below, you can see a “hello, world” program in Portuguese, as it might be written by a Brazilian programmer.

package main

import "fmt"

func faça_uma_ação_em_português() {
    fmt.Printf("Olá mundo!\n")
}

func main() {
    faça_uma_ação_em_português()
}

In addition to supporting Unicode in these ways, Go has three packages in its standard library for handling more complicated issues involving Unicode.

Summing it Up

By now, maybe you understand why Go programmers are enthusiastic about the language. It’s not just that Go has so many good features, but that they are all included in one language that was designed to avoid overcomplication. It’s a really good example of the whole being greater than the sum of its parts.

To learn more about Go, visit the Go website and take A Tour of Go.

Contact the author

Source

Linux Today – Apple’s New Hardware With The T2 Security Chip Will Currently Block Linux From Booting

Nov 06, 2018, 04:00

Apple’s MacBook Pro laptops have become increasingly unfriendly toward Linux in recent years, while its Mac Mini computers have generally continued working okay with most Linux distributions, since there are no multiple GPUs, keyboards/touchpads or other problematic Apple hardware for the Linux kernel to worry about. But now that the latest Mac Mini systems employ Apple’s T2 security chip, they too are likely to crush any Linux dreams.

Source

Download MorpheusArch Linux 2018.4

MorpheusArch Linux is an open-source, Linux-based operating system derived from Arch Linux and featuring the lightweight LXQt desktop environment. It’s designed as a live recovery ISO that comes preloaded with numerous data recovery tools that let you recover over 400 file types.

Standard Arch Linux boot menu, 64-bit only

Being based on Arch Linux, the MorpheusArch Linux distribution supports only 64-bit (x86_64) systems and features a boot menu identical to the one on Arch Linux’s ISO images, allowing you to boot the live OS or an installed one, run a memory or hardware test, and reboot or power off the PC.

The live ISO image can be written to a USB flash drive or DVD disc so you can use it on the go when you need to recover lost data from broken disks, including memory sticks. When using the MorpheusArch Linux live image, keep in mind that you need to use the “arch” password (without quotes) when prompted.

Standard LXQt desktop environment, includes PhotoRec and TestDisk

MorpheusArch Linux features a standard LXQt desktop environment that’s not customized in any way. You will get the vanilla LXQt desktop experience because the point of this GNU/Linux distribution is to offer users a set of data recovery tools that you’ll have to run from a terminal emulator.

Among the included data recovery tools, we can mention the popular PhotoRec digital picture and file recovery utility, as well as the TestDisk partition recovery and file undelete software from CGSecurity. Additionally, MorpheusArch Linux comes with the GNU ddrescue data recovery tool and the Zenmap security scanner.

Helps you recover more than 400 file types

According to the developers, MorpheusArch Linux is a nifty live GNU/Linux distribution that helps you recover more than 400 file types, and they chose to base it on the lightweight and highly efficient Arch Linux operating system and LXQt desktop environment.

However, MorpheusArch Linux is not a distro you can use as your main operating system. It doesn’t come with an installer as it’s intended to be used only as a live image, from a USB flash drive, to recover lost data from damaged or broken disk drives and memory sticks.

Source
