16-Way Graphics Card Comparison With Valve’s Steam Play For Windows Games On Linux

While Steam Play is still of beta quality for running Windows games on Linux via Valve's Wine-based Proton compatibility layer, it has been maturing quickly since being rolled out to the public in late August. The game list continues to grow, and with regular updates to Steam Play / Proton / DXVK (Direct3D 10/11 over Vulkan), more games are becoming playable on Linux with decent performance and correct rendering. Given that the most recent Steam Play beta update vastly improved the experience in our tests, here are the first of our Steam Play Proton benchmarks on Ubuntu Linux using sixteen different NVIDIA GeForce / AMD Radeon graphics cards.

The wonderful database at ProtonDb.com is the de facto source for tracking which Windows games work on Linux. As of writing, more than 2,800 titles are reported to work, though that number can vary depending upon your Linux distribution and graphics drivers / hardware. The vast majority of games that run well tend to be older and/or indie titles. Among the “platinum” rated games at this point are Tomb Raider: Anniversary, Final Fantasy VII, the original Company of Heroes, Unreal Gold, Far Cry, and also some more interesting games like Call of Duty 4: Modern Warfare and The Witcher 3. The selection of games, though, is improving almost daily thanks to the Proton/DXVK advancements being open-source and Valve regularly releasing updates as well as the occasional workaround for the Mesa graphics driver code.

Finding Steam Play games to use as benchmarks is still a bit of a mixed bag, as the games need to be new enough to stress modern graphics cards and make for an interesting comparison. The games also need to meet our benchmark/test requirements for integration with the Phoronix Test Suite and OpenBenchmarking.org. Since last week's Steam Play beta update improved things, I've been running tests using Batman: Arkham Origins and F1 2018. The Batman title is one of the older entries in the franchise but at least works well on Steam Play, while F1 2018 is quite interesting since it is a modern Windows game that runs well on Linux thanks to Proton and DXVK remapping Direct3D 11 to Vulkan.

There are also some other game titles I’m still working on benchmarking like Grand Theft Auto V and Shadow of the Tomb Raider but there are still issues there in my most recent checks. Benchmarks on other games will come as more benchmark-friendly, modern games are brought up to run properly with Steam Play.

For this benchmarking I tested sixteen different graphics cards. On the Radeon side were the R9 285, R9 290, RX 560, RX 580, RX Vega 56, and RX Vega 64; all of the Radeon tests were done with the fresh driver stack of Linux 4.19 paired with Mesa 18.3-devel for the newest RADV driver code as of testing. On the NVIDIA side were the GeForce GTX 970, GTX 980, GTX 980 Ti, GTX 1060, GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti, RTX 2070, and RTX 2080 Ti. The cards tested on both sides were limited to the newer GPUs I had available for testing. The NVIDIA driver in use was the 410.73 release, and all of these benchmarks were run from the same Ubuntu 18.10 system with an Intel Core i9 9900K processor.

These benchmarks were run via the Phoronix Test Suite open-source benchmarking framework.

Source

JingDong (JD.com), China Mobile Cloud, Qing Cloud, and Whale Cloud Join the OpenMessaging Project

OpenMessaging

The goal of the OpenMessaging Project is to build an industry-wide, cloud-oriented, vendor-neutral open standard for distributed messaging.

Today, the OpenMessaging Project — a collaborative project focused on creating a vendor-neutral open standard for distributed messaging — announced four new members: JD.com, China Mobile Cloud, Qing Cloud, and Whale Cloud. Current members include Alibaba, DataPipeline, DiDi, Streamlio, WeBank, and Yahoo!.

The acceleration of microservice-based and cloud-based applications has put a growing focus on how data is connected to services, applications, and users. This focus has led to a number of new innovations and products that support messaging and queuing needs. It has also contributed to increased demands on messaging and queuing solutions, making performance and scalability critical to success and an open standard a must.

More on this project and how to participate can be found at http://openmessaging.cloud

New Member Supporting Quotes:

“At China Mobile and CMsoft, we have built a MQ proxy system of Apache RocketMQ to provide a set of producer APIs and consumer APIs. The redundancy of having to hide the differences among the MQs takes so much time and energy out of our team. Given our knowledge in this field, we understand first hand the importance of a messaging communication standard. Having a vendor-neutral and language-independent MQ standard guideline is a big win for many applications. We believe this standard can help and promote the MQ technology that we rely on.” – Henry Hu, Architect at China Mobile and CMsoft.

“As a cloud provider, we offer various messaging services including Apache Kafka, RabbitMQ, and RocketMQ to our customers. More and more people keep asking us what software to use for their messaging requirements as the market is saturated with various open source solutions. This market saturation causes not only a high learning curve, but also a high maintenance cost. An industry open standard, vendor-neutral and language-independent specification for distributed messaging is increasingly important, especially in a cloud era. We look forward to collaborating with the OpenMessaging project to help drive messaging service towards a unified, open standard interface.” – Ray Zhou, Development Director at QingCloud

“At the JD Group, JingDong Message Queue (JMQ) has been widely used. However, despite our efforts to be compatible with all kinds of message protocols, we still can’t meet all the requirements. We are planning to open source JMQ, so it can be implemented for OpenMessaging. We see OpenMessaging as a de-facto international open standard for distributed messaging that aims at satisfying the need of modern cloud-native messaging and streaming applications. We sincerely believe that a unified and widely-accepted messaging standard can benefit MQ technology and applications relied on it.” – DeQiang Lin, Messaging Leader at the JingDong Middleware Department

“Currently, message queuing uses proprietary, closed protocols, restricting the ability for different operating systems or programming languages to interact in a heterogeneous set of environments. At Whale Cloud, in order to make it easy for developers to use messaging and streaming services, we’ve worked to eliminate the differences between the different protocols. Giving us insight and knowledge to know that a vendor-neutral and language-independent open specification is badly needed.” – Zheng Tao, Technical Director of Distributed Messaging and Streaming Data Platform at Whale Cloud

Source

RK3399 based Raspberry Pi clone will launch at $49 — or even lower

Radxa has posted specs for a $49 and up, community backed “Rock Pi” Raspberry Pi lookalike with a Rockchip RK3399, USB 3.0, M.2, HDMI 2.0, and native GbE, plus optional WiFi, BT, and PoE.

Radxa is prepping a Rockchip RK3399-based Raspberry Pi pseudo clone called the Rock Pi. It joins the RK3399-based NanoPi M4 in closely matching the RPi 3 layout, and it appears it may be the most affordable RK3399 based SBC yet, starting at $49 with 2GB RAM, and possibly lower for the unpriced 1GB model.

Rock Pi, front and back

Many other RK3399 based SBCs have the same size and 40-pin connector as the Pi, but with different layouts. These include the new Khadas Edge-V, the Renegade Elite, and several other boards found in our 2018 open-spec SBC roundup.

Tom Cubie, who started Cubieboards.org before moving to Radxa, informed me of the upcoming Rock Pi a month ago. However, I first saw the specs today on a revised version of the Single Board Computer Database (“board-DB”), now hosted on Hackerboards. As some of you may recall, LinuxGizmos switched to the Hackerboards.com domain for a year before switching back.

Rick Lehrbaum, who created LinuxDevices and LinuxGizmos, not to mention the PC/104 SBC standard, has been transitioning away from LinuxGizmos in 2018. He decided to revive Hackerboards.com when board-db creator Raffaele Tranquillini asked him to take over the database. Currently, Hackerboards is devoted to a revised version of board-db, which Lehrbaum is in the process of updating.

In his October email, Cubie informed me that Radxa was acquired by a Shenzhen based OEM/ODM called Emdoor Group in 2016. This temporarily put a halt to the Radxa community, which once brought us open-spec boards like the Rockchip RK3188 based Radxa Rock and the RK3288 equipped Radxa Rock 2 Square. This year, Cubie signed an agreement with Emdoor, enabling them to revive the Radxa community. “Rock Pi is the beginning of the rebuilding of Radxa,” wrote Cubie.

We based our spec list below primarily on the Radxa product page but added a few items from the board-db listings such as the extended temperature range. Unlike the product page, the board-db listings also include pricing on all but one model.

The Rock Pi Model A will sell for $49 (2GB) and $65 (4GB). The Model B, which adds PoE and a WiFi-ac/Bluetooth 5.0 wireless module, sells for $49 (1GB), $59 (2GB), or $75 (4GB). There’s no price yet for the 1GB Model A, which could end up in the low to mid $40 range, if not $39. The only other difference between the Model A and B, according to board-db, is that the Model B lacks Android support (7.1 or 9.0). Both models support “some Linux distributions,” says Radxa.

Inside the Rock Pi

The ports on the 85 x 54mm Rock Pi are just where a Pi lover would expect them to be. Unlike the RPi 3B or 3B+, the GbE port is native, giving you at least 939Mbps, at least three times the bandwidth of the 3B+. Like the 3B+, it supports Power-over-Ethernet using the same official Raspberry Pi PoE HAT.

Rock Pi (left) and pinout diagram

Specs are almost identical to those of the $75 (2GB) NanoPi M4. The major difference is that the Rock Pi adds an M.2 storage slot for NVMe SSDs but lacks the M4’s 24-pin GPIO interface, which augments the 40-pin connector found on both boards. The NanoPi M4 also has standard wireless (but no PoE) and has 4x USB 3.0 host ports instead of the 2x USB 3.0 and 2x USB 2.0 ports on the Rock Pi.

If the Rock Pi pricing holds, it looks like the better deal based on specs alone. That means it could be the most affordable RK3399 SBC yet, even besting the smaller, more limited (1GB only) $50 NanoPi Neo4.

The Rock Pi has a microSD slot and an empty eMMC socket in addition to the M.2. You get the same, 4K-ready HDMI 2.0 port, which is one of the main selling points of the RK3399.

The board also provides MIPI-DSI and -CSI interfaces for dual displays and camera attachments, respectively, although they are only 2-lane each. Other features include an audio jack with mic, an RTC, and a USB Type-C port for wide-range power.

Preliminary specifications listed for the Rock Pi include:

  • Processor — Rockchip RK3399 (2x Cortex-A72 at up to 2.0GHz, 4x Cortex-A53 @ up to 1.5GHz); Mali-T860 MP4 GPU
  • Memory/storage:
    • 1GB, 2GB, or 4GB LPDDR4 RAM (dual-channel)
    • eMMC socket for 8GB to 128GB (bootable)
    • MicroSD slot for up to 128GB (bootable)
    • M.2 socket with support for up to 2TB NVMe SSD
  • Wireless — 802.11b/g/n/ac (2.4GHz/5GHz) with Bluetooth 5.0 with antenna (Model B only)
  • Networking — Gigabit Ethernet port; PoE support on Model B only (requires RPi PoE HAT)
  • Media I/O:
    • HDMI 2.0a port (with audio) for up to 4K at 60Hz
    • MIPI-DSI (2-lane) via FPC; dual display mirror or extend with HDMI
    • MIPI-CSI (2-lane) via FPC for up to 8MP camera
    • 3.5mm audio I/O jack (24-bit/96KHz)
    • Mic interface
  • Other I/O:
    • 2x USB 3.0 host ports
    • 2x USB 2.0 host ports
    • USB 3.0 Type-C OTG with power support and HW switch for host/device
  • Expansion — 40-pin GPIO header (see pinout diagram); M.2 slot for SSD (see mem/storage)
  • Other features — RTC with optional battery connector
  • Power:
    • 5.5-20V input
    • USB Type-C PD 2.0, 9V/2A, 12V/2A, 15V/2A, 20V/2A
    • Qualcomm Quick Charge support for QC 3.0/2.0 adapter, 9V/2A, 12V/1.5A
    • 8mA to 20mA consumption
  • Operating temperature — 0 to 80°C
  • Dimensions — 85 x 54mm
  • Operating system — Android 9.0; “some” Linux distros

Further information

The Rock Pi is looking like it’s heading for pre-order or live orders soon, starting at below $49 if you can get by with only 1GB RAM. More information may be found on Radxa’s Rock Pi product page.

Source

VMware Acquires Heptio, Mining Bitcoin Requires More Energy Than Mining Gold, Fedora Turns 15, Microsoft’s New Linux Distros and ReactOS 0.4.10 Released

News briefs for November 6, 2018.

VMware has acquired Heptio, which was founded by Joe Beda and Craig McLuckie, two of the creators of Kubernetes. TechCrunch reports that the terms of the deal aren’t being disclosed and that “this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses.” The post also notes that this acquisition is “also another endorsement of the ongoing rise of open source and its role in cloud architectures”.

The energy needed to mine one dollar’s worth of bitcoin is reported to be more than double the energy required to mine the same amount of gold, copper or platinum. The Guardian reports on recent research from the Oak Ridge Institute in Cincinnati, Ohio, that “one dollar’s worth of bitcoin takes about 17 megajoules of energy to mine…compared with four, five and seven megajoules for copper, gold and platinum”.

Happy 15th birthday to Fedora! Fifteen years ago today, November 6, 2003, Fedora Core 1 was released. See Fedora Magazine’s post for a look back at the Fedora Project’s beginnings.

Microsoft announced the availability of two new Linux distros for Windows Subsystem for Linux, which will coincide with the Windows 10 1809 release. ZDNet reports that the Debian-based Linux distribution WLinux is available from the Microsoft Store for $9.99 currently (normally it’s $19.99). Also, OpenSUSE 15 and SLES 15 are now available from the Microsoft Store as well.

ReactOS 0.4.10 was released today. The main new feature is “ReactOS’ ability to now boot from a BTRFS formatted drive”. See the official ChangeLog for more details.

Source

The Linux Foundation Announces Intent to Form New Foundation to Support GraphQL

SAN FRANCISCO, Nov. 6, 2018 /PRNewswire/ — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announces a broad coalition of industry leaders and users have joined forces to create a new open source foundation for the GraphQL project, which will be dedicated to growing and sustaining a neutral GraphQL ecosystem. Hosted under the Linux Foundation, the GraphQL Foundation’s mission will be to enable widespread adoption and help accelerate development of GraphQL and the surrounding ecosystem.

The Linux Foundation logo

“As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL.

GraphQL is a next-generation API technology developed internally by Facebook in 2012 before being publicly open sourced in 2015. As application development shifts towards microservices architectures with an emphasis on flexibility and speed to market, tools like GraphQL are redefining API design and client-server interaction to improve the developer experience, increasing developer productivity and minimizing the amount of data transferred. GraphQL makes cross-platform and mobile development simpler with availability in multiple programming languages, allowing developers to create seamless user experiences for their customers.

GraphQL is being used in production by a variety of high scale companies such as Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest and Yelp. GraphQL also powers hundreds of billions of API calls a day at Facebook.

“We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language,” said Jim Zemlin, Executive Director, the Linux Foundation.

Unlike REST-based APIs, which take advantage of HTTP and existing protocols, GraphQL APIs provide developers with the flexibility to query the exact data they need from a diverse set of cloud data sources, with less code, greater performance and security, and a faster development cycle. Not only does this enable developers to rapidly build top-quality apps, it also helps them achieve consistency and feature parity across multiple platforms such as web, iOS, Android, and embedded and IoT applications.

The GraphQL Foundation will have an open governance model that encourages participation and technical contribution and will provide a framework for long-term stewardship by an ecosystem invested in GraphQL’s success.

“At Facebook, our mission is to give people the power to build community and bring the world closer together. We believe open source projects and the communities built around them help accelerate the pace of innovation and bring many minds to bear to solve large-scale challenges. GraphQL is one such project and community and the GraphQL Foundation will help ensure GraphQL continues to solve the real data fetching challenges that developers will face in building the products of tomorrow,” said Killian Murphy, Director, Facebook Open Source.

“GraphQL has redefined how developers work with APIs and client-server interactions. We look forward to working with the GraphQL community to become an independent foundation, draft their governance and continue to foster the growth and adoption of GraphQL,” said Chris Aniszczyk, Vice President of Developer Relations, the Linux Foundation.

Supporting Quotes

“Airbnb is making a massive investment in GraphQL, putting it at the center of our API strategy across both our product and internal tools. We are excited to see the Foundation play a key role in cultivating the community around GraphQL and continue to evolve GraphQL as a technology, paving the way for continued innovation of Airbnb’s API.” – Adam Neary, Tech Lead, Airbnb

“Given GraphQL’s centrality in the modern app development stack, the foundation we’re announcing today is not just necessary, but overdue. As the creators of Apollo, the most widely used implementation of GraphQL, we’re looking forward to working together with the Linux Foundation to define appropriate governance processes for this critical Internet standard.” – Geoff Schmidt, co-founder and CEO of Apollo GraphQL

“GraphQL, and the strong ecosystem behind it, is leading to a fundamental change in how we build products, and it helps bring together teams and organizations of every size. At Coursera, GraphQL assists us in understanding the massive breadth of our APIs and helps us create transformative educational experiences for everyone, everywhere. We’re excited to see the impact of the GraphQL Foundation in making both the technology and the community stronger.” – Jon Wong, Staff Software Engineer, Coursera

“GraphQL has come a long way since its creation in 2012. It’s been an honor seeing the technology grow from a prototype, to powering Facebook’s core applications, to an open source technology on the way to becoming a ubiquitous standard across the entire industry. The GraphQL Foundation is an exciting step forward. This new governance model is a major milestone in that maturation process that will ensure a neutral venue and structure for the entire community to drive the technology forward.” – Nick Schrock, Founder, Elementl, GraphQL Co-Creator

“We created GraphQL at Facebook six years ago to help us build high-performance mobile experiences, so to see it grow and gain broad industry adoption has been amazing. Since Facebook open-sourced GraphQL in 2015, the community has grown to include developers around the world, newly-founded startups, and well-established companies. The creation of the GraphQL Foundation is a new chapter that will create a governance structure we believe will empower the community and provide GraphQL long-term technical success. I’m excited to see its continued growth under the Foundation’s guidance.” – Dan Schafer, Facebook Software Engineer, GraphQL Co-Creator

“GraphQL has proven to be a valuable, extensible tool for GitHub, our customers, and our integrators over the past two years. The GraphQL Foundation embodies openness, transparency, and community — all of which we believe in at GitHub.” – Kyle Daigle, Director, Ecosystem Engineering, GitHub

“This is a very welcome announcement, and we believe that this is a necessary step. The GraphQL community has grown rapidly over the last few years, and has reached the point where transparent, neutral governance policies are necessary for future growth. At Hasura, we look forward to helping the Foundation in its work.” – Tanmai Gopal, CEO, Hasura

“GraphQL has become one of the most important technologies in the modern application development stack and sees rapid adoption by developers and companies across all industries. At Prisma, we’re very excited to support the GraphQL Foundation to enable a healthy community and sustain the continuous development of GraphQL.” Johannes Schickling, Founder and CEO, Prisma

“At Shopify, GraphQL powers our core APIs and all our mobile and web clients. We strongly believe in open development and look to the Foundation to help expand the community and nurture its evolution.” – Jean-Michel Lemieux, SVP Engineering, Shopify

“GraphQL is gaining tremendous adoption as one of the best protocols for remote retrieval of large object graphs. At Twitter, we are looking forward to what’s to come in the GraphQL ecosystem and are very excited to support the GraphQL Foundation.” – Anna Sulkina Sr. Engineering Manager, Core Services Group, Twitter

About the Linux Foundation
The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact:
Emily Olin
The Linux Foundation
eolin@linuxfoundation.org

View original content to download multimedia: http://www.prnewswire.com/news-releases/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-graphql-300744847.html

SOURCE The Linux Foundation

Source

Amazon SageMaker Now Supports Pipe Mode for Datasets in CSV Format

Posted On: Nov 5, 2018

The built-in algorithms that come with Amazon SageMaker now support Pipe Mode for datasets in CSV format. This accelerates the speed at which data can be streamed from Amazon Simple Storage Service (S3) into SageMaker by up to 40%, while training machine learning (ML) models. With this new enhancement, the performance benefits of Pipe Mode are extended to training datasets in CSV format in addition to the protobuf recordIO format that we released earlier this year.

Amazon SageMaker supports two methods of transferring training data: File Mode and Pipe Mode. With File Mode, the training data is first downloaded to an encrypted EBS volume attached to the training instance before the model is trained. With Pipe Mode, the data is streamed directly to the training algorithm while it is running. This results in faster training jobs that use less disk space, reducing the overall cost of training ML models on Amazon SageMaker.

Support for CSV format with Pipe Mode is available in all AWS regions where Amazon SageMaker is available today. You can read additional details in this blog post.

Source

PostgreSQL to Manage JSON | Linux Hint

One of the many data types that PostgreSQL supports is JSON. Since most web API communication uses JSON payloads heavily, this feature is rather important. Rather than using a plain-text data type to store JSON objects, Postgres offers data types optimized for JSON payloads, which verify that the data stored in these fields conforms to the RFC specification. Also, in classic Postgres manner, it allows you to fine-tune your JSON fields for maximum performance.

While creating a table, you will have two options for your JSON column: the plain json data type and the jsonb data type. Both have their own advantages and disadvantages. We shall go through each of them by creating a simple table with just two columns, an ID and a JSON value. Following this, we will query data from the table and get a feel for how to manage JSON formatted data inside Postgres.

JSON Data Type

1. Creating a Table with JSON Data Type

Let’s create a simple two column table named users:

CREATE TABLE users (
id serial NOT NULL PRIMARY KEY,
info json NOT NULL
);

Here the column id acts as the primary key, and it will increase incrementally thanks to the serial pseudo-type, so we won’t have to worry about manually entering values for id as we go along.

The second column is of json type and is forced to be NOT NULL. Let’s enter a few rows of data into this table, consisting of JSON values.

INSERT INTO users (info) VALUES (
'{
"name": "John Doe",
"email": "johndoe@example.com",
"personalDetails": {"age": 33, "gender": "M"}
}');

INSERT INTO users (info) VALUES (
'{
"name": "Jane Doe",
"email": "janedoe@example.com",
"personalDetails": {"age": 33, "gender": "F"}
}');

You can use your preferred JSON beautifier/minifier to convert the JSON payloads above into a single line, so you can paste each one into your psql prompt in one go.

SELECT * FROM users;

 id |                           info
----+----------------------------------------------------------
  1 | {"name": "John Doe", "email": "johndoe@example.com", ...}
  2 | {"name": "Jane Doe", "email": "janedoe@example.com", ...}
(2 rows)

The SELECT command at the end showed us that the rows were successfully inserted into the users table.

2. Querying JSON Data Type

Postgres allows you to dig into the JSON payload itself and retrieve a particular value out of it, if you reference it using the corresponding key. We can use the -> operator after the json column’s name, followed by the key inside the JSON object. Doing so returns the JSON value stored at that key.

For example, in the table we created above:

SELECT info -> 'email' FROM users;

        ?column?
------------------------
 "johndoe@example.com"
 "janedoe@example.com"
(2 rows)

You may have noticed the double quotes in the column containing emails. This is because the -> operator returns a JSON value, exactly as present under the key “email”. Of course, you can return just text, but you will have to use the ->> operator instead.

SELECT info ->> 'email' FROM users;

      ?column?
---------------------
 johndoe@example.com
 janedoe@example.com
(2 rows)

The difference between returning a JSON object and a string becomes clear once we start working with JSON objects nested inside other JSON objects. For example, I chose the “personalDetails” key to intentionally hold another JSON object. We can dig into this object too, if we want:

SELECT info -> 'personalDetails' -> 'gender' FROM users;

 ?column?
----------
 "M"
 "F"
(2 rows)

This can let you go as deep into the JSON object as you would want to. Let’s drop this table and create a new one (with the same name) but with JSONB type.
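
For completeness, the drop mentioned above is the standard one-liner, so you can follow along verbatim before recreating the table:

DROP TABLE users;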

JSONB Data Type

Except for the fact that we mention the jsonb data type instead of json when creating the table, all else looks the same.

CREATE TABLE users (
id serial NOT NULL PRIMARY KEY,
info jsonb NOT NULL
);

Even the insertion of data and retrieval using the -> operator behaves the same way. What has changed is all under the hood, and it is noticeable in the table’s performance. When converting JSON text into jsonb, Postgres turns the various JSON value types into native Postgres types, so not every valid json document can be saved as a valid jsonb value.

Moreover, jsonb doesn’t preserve the whitespace or the order of json keys as supplied by the INSERT statement. Jsonb actually converts the payload into a native Postgres binary representation, hence the term jsonb.
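
You can see this normalization directly in psql by casting the same literal to both types (a quick illustrative check; the exact output formatting may vary slightly between psql versions):

SELECT '{"b": 2,   "a": 1}'::json;
-- returned verbatim: {"b": 2,   "a": 1}

SELECT '{"b": 2,   "a": 1}'::jsonb;
-- returned in normalized form: {"a": 1, "b": 2}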

Of course, inserting jsonb data has a performance overhead because of all this additional work that Postgres needs to do. However, the advantage you gain is faster processing of the already stored data, since your application will not need to parse a JSON payload every time it retrieves one from the database.

JSON vs JSONB

The decision between json and jsonb solely depends on your use case. When in doubt, use jsonb, since most applications tend to perform more frequent read operations than write operations. On the other hand, if you are sure that your application is expected to do more write operations than reads, then you may want to consider json as an alternative.

Conclusion

People working with JSON payloads and designing interfaces for Postgres storage will benefit immensely from this particular section of their official documentation. The developers were kind enough to furnish us with jsonb indexing and other cool features which can be leveraged to improve the performance and simplicity of your application. I implore you to investigate these as well.
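
As a small taste of that indexing support, here is a minimal sketch against the users table from earlier (the GIN index and the @> containment operator are standard jsonb features; the index name here is arbitrary):

-- A GIN index lets containment queries on the jsonb column use an index scan
CREATE INDEX idx_users_info ON users USING GIN (info);

-- Find rows whose info document contains the given key/value pair
SELECT id FROM users WHERE info @> '{"name": "Jane Doe"}';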

Hopefully, you found this brief introduction of the matter helpful and inspiring.

Source

Open-Source OPNids Enhances Suricata With Machine Learning

A new open-source intrusion detection system (IDS) effort is officially getting underway on Nov. 5 with the launch of the OPNids project.

The OPNids effort is being led by threat hunting firm CounterFlow AI and security appliance provider Deciso, which also leads the OPNsense security platform project. OPNids is built on top of the open-source Suricata IDS, providing a new layer of machine learning-based intelligence to help improve incident response and threat hunting activities.

“We created a pipeline that will actually take the Suricata logs and analyze the packets to provide context around any alerts,” Randy Caldejon, CEO and co-founder of CounterFlow AI, told eWEEK. “We like to call this alert triage. It’s like taking it to the last mile of what the analysts would do anyhow because typically when there’s an alert, they want some context.”

The Suricata project was started in 2009 by the Open Information Security Foundation as an open-source alternative to the Snort IDS that was already on the market.

“What we’re doing with the machine learning is taking advantage of the data that is flowing at line rates from Suricata and rather than just storing the data to disk we figured we should analyze the data while it’s in motion,” he said. “What we have is not one single model; we built an agent so analysts can write their own scripts.”

How It Works

At the core of OPNids is the DragonFly Machine Learning Engine (MLE), which uses a streaming data analytics model to ingest line rate network data from Suricata.

“Most machine learning techniques are what are traditionally known as batch techniques, where you get a big pool of data offline and you apply a sophisticated algorithm onto that,” Andrew Fast, chief data scientist at CounterFlow AI, told eWEEK. “We are taking a different approach, with streaming analytics, which is a newer branch of machine learning that is not as widely used.”

Fast explained that with streaming analytics rather than looking at a collected pool of data, the DragonFly MLE is collecting statistics and making decisions as the data flows through the IDS. The DragonFly MLE can also support offline training and then online scoring at wire speed, he said. With machine learning, training is used to help tune a model, while scoring is the output of the model.

Caldejon added that OPNids can also enable post-processing of data for additional analysis using Filebeat, a log shipper commonly used with the Elastic Stack. As such, OPNids data can be forwarded to Elasticsearch for additional analysis. Apache Kafka streaming is also supported, as is syslog for general log capabilities.

Moving forward, Caldejon said the plan is to also integrate OPNids with threat intelligence gateway capabilities, taking in third-party threat data as a factor that helps the MLE make decisions.

OPNids Pro

Alongside the open-source effort, there is a commercially supported version of OPNids in the works. The commercial version has a hardware edition with the software all preloaded and integrated. Additionally, Caldejon said the Pro edition of OPNids will include a packet caching capability that takes network packets and writes them to disk, saving the PCAP (packet capture) for additional offline analysis.

Both the Pro and standard editions of OPNids are fully bundled application images including the MLE and Suricata. Caldejon said users can download and get the system running within 10 minutes.

“It’s got Suricata, it’s got the MLE, and it has a nice UI [user interface], so even unsophisticated users can manage the technology,” he said. “Typically, a lot of open-source projects are all CLI [command line interface] and expect people to be experts at the command line, but we wanted to make this accessible to data scientists as well, and they usually don’t have those skills, so yeah, OPNids is a one-stop shop, everything is bundled.”

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Download Pop!_OS Linux 18.10

Pop!_OS Linux is a free, Ubuntu-based GNU/Linux distribution built around the widely used GNOME desktop environment and optimized for the desktop and laptop computers created by Linux computer retailer System76.

Supports Intel, AMD, and Nvidia GPUs

Pop!_OS Linux comes in two flavors for System76 computers: one for hybrid Intel and AMD graphics and one for Intel and Nvidia graphics. Only 64-bit systems are supported, and you can install the operating system on computers not manufactured by System76 too, provided you download the right ISO image.

Speaking of the ISO image, Pop!_OS Linux offers a live and installable image that’s about 2GB in size, which means you can write it to a USB flash drive and use it to install the operating system whenever you want. There’s no boot menu on the live images, and the installer prompts the user immediately after the live session loads.

Forced installer, but there’s a live session too

The installer is forced on the user when running the live session, but you can always minimize it in a corner if you only want to give Pop!_OS Linux a test drive on your personal computer and see what the fuss is all about. You’ll have unrestricted access to the entire graphical environment and pre-installed applications.

Customized GNOME desktop with a minimal set of apps

Pop!_OS Linux features a highly customized GNOME desktop environment with beautiful artwork, yet a traditional experience with a single panel located on the top of the screen, from where you can access the pre-installed apps, running apps, and the system tray area.

There’s only a minimal set of apps available in the Pop!_OS Linux live image, including the Mozilla Firefox web browser, Geary email client, LibreOffice office suite, and a few standard GNOME utilities, such as a calculator, system monitor, calendar, contacts, terminal emulator, text editor, weather client, and others.

A Linux distro for System76 customers

Bottom line, Pop!_OS Linux is a GNU/Linux distribution designed and optimized by System76 for its own customers, in case they need to reinstall the operating system if something goes wrong. However, nothing stops you from using System76’s in-house operating system on a personal computer from a different manufacturer.

Source

How to play Windows games on Linux

Linux provides a kind of security and stability that Windows somewhat fails to deliver. However, most gamers gravitate towards Windows because of the misconception that Linux cannot support Windows games. Users who want to game will rarely pick Linux, and users comfortable with Linux will rarely switch to Windows.

The question you’re most often asked as a Linux user is: how do you play games? Well, when it comes to gaming, we know that Windows leads Linux to some extent. That doesn’t mean that we can never play our favorite Windows games again. It’s just a misconception that Ubuntu does not support Windows gameplay. What people don’t usually know is that game developers are increasingly taking advantage of Linux’s growing market. Not only are there companies making Linux-based games, but companies like Valve are developing tools that can support Windows games on your Linux system, not just on Ubuntu but on Linux in general.

A big advantage of using Linux over other gaming platforms is the stability it has to offer. Other systems are often plagued by bugs or freezing issues, and gamers can easily get frustrated by untimely interruptions while completing their missions. Linux saves you from this trouble by offering better stability in the gaming domain as well.

So now coming to the juicy part of our topic, how exactly can you play Windows games on Linux? Below are tools that support your favorite Windows games on Linux.

Wine

The most common way to run Windows games is to get Wine installed on your system. When WineHQ released its first stable version, 1.0, it already supported 200 of the most popular Windows games. The latest version of Wine also offers rankings of games, which help in determining how much configuration they require. If you see a Platinum ranking, it means that the game has a 99% chance of working. A Gold ranking means that you’d need to configure the game a little, but in the end it will work fine; such games are labeled Gold because they haven’t been integrated with the newest version of Wine. Silver and Bronze labels mean that there may be some issues in the game. Of course, if a game shows a Garbage ranking, the chances of it working are as rare as seeing a penguin talk. Check out their huge database before installing.

Steam Play

A new beta version of Steam Play was released this year as a way to allow users to access the Windows, Mac, and Linux versions of Steam games. Steam already had more than 3,000 games for Linux users and has been adding more every day. In order to increase compatibility with Windows games, Valve decided to include in the Steam Play beta a modified version of Wine called Proton.
Their official site has listed some of the benefits that the new version provides:

  • Windows games with no current Linux versions can be installed and run directly from Linux Steam client.
  • It will include complete built-in support for Steamworks and OpenVR.
  • Improved game compatibility and reduced performance impact thanks to DirectX 11 and 12 implementations based on Vulkan.
  • Games will recognize all controllers automatically.
  • Multi-threaded games will have improved performance as compared to vanilla Wine.

Check out the list of games that new Steam beta version supports.

PlayOnLinux (POL)

Not only does it provide an interactive and user-friendly front-end, but it also includes a series of pre-built scripts to help users install specific games quickly. It is an effective and more user-friendly interface to the Wine emulator, allowing you to configure and use it outside of the command line. The advantage is that if you cannot find your game listed in PlayOnLinux, or if the script fails, you can just visit the Wine Application Database and enter the name of the game you want in the search box. The downside of POL is that Wine’s performance is hardware-dependent, meaning it will vary with the kind of hardware you’re using, and unfortunately POL cannot work without Wine.

Lutris

Lutris is a tool that allows you to install and manage games on Linux. It works for native and Windows games as well as emulators. While Proton, the Wine-based compatibility layer that allows playing Windows-only games, is strictly for Steam games, Lutris can be used to enhance your experience of playing other games, like Blizzard titles. It provides a large database of games and has install scripts available for download.

Conclusion

People are trying to move towards Linux due to the greater degree of stability it offers. Gamers have the idea that Linux won’t be able to support their favorite games, hence they are hesitant. However, this is just a hoax, and companies worldwide are putting in the effort to provide comfort to gamers who might want to shift to Linux.

Source
