Intel 6th and 7th Gen box PCs offer PCIe graphics expansion

Nov 6, 2018

Aaeon launched a rugged, Linux-friendly line of “Boxer-6841M” industrial computers based on 6th or 7th Gen Core CPUs with either a PCIe x16 slot for Nvidia GPU cards or 2x PCIe x8 slots for frame grabbers.

The Boxer-6841M line of six industrial box PCs is designed for edge AI and machine vision applications. Like last year’s Boxer-6839, the rugged, wall-mountable computers run Linux (Ubuntu 16.04) or Windows on Intel’s 6th Generation “Skylake” and 7th Generation “Kaby Lake” Core and Xeon processors with 35W to 73W TDPs. The systems use T and TE branded Core CPUs and Intel H110 PCH or C236 PCH chipsets.

Boxer-6841M-A2 (left) and smaller, fanless -A5

Aaeon refers to the Boxer-6841M as “compact,” but even the two smaller A5 and A6 models with dual PCIe x8 slots are considerably larger than the Boxer-6839, measuring 260 x 300 x 155mm. The PCIe x8 slots on the fanless A5 and A6 are intended primarily for loading video frame grabber cards. The only difference between the A5 and A6 is that the A6 provides 4x RS232 ports via a side-mounted extension module.

The fan-cooled A1-A4 models, meanwhile, measure 400 x 200 x 155mm. This gives them room to fit a single PCIe x16 slot for AI-enabled Nvidia Tesla graphics cards.

Boxer-6841M A1 through A4 models (left) and BOXER-6841M power specs

The A1 and A3 models can drive 180W of power to their PCIe x16 slot while the A2 and A4 offer dual 12V inputs to support up to a 250W PCIe x16 card. The dual input design “makes the system more stable by reducing the level of wasted heat that would be produced by a single 24V input,” says Aaeon. (See power specs above.)

Boxer-6841M A5 and A6

The difference between the A1 and A3 models and between the A2 and A4 models is that only the A1 and A2 support the 73W TDP Intel Xeon parts. All six systems also offer a single PCIe x1 slot and dual mini-PCIe slots with USB support. There’s also a single SIM slot and 4G and WiFi options with antennas.

Boxer-6841M-A1 front view, which is the same as A2, A3, and A4 (left) and A4 rear view with extra 12V input, a feature it shares with the A2

The Boxer-6841M systems support up to 32GB of DDR4 (including ECC) RAM. They offer dual 2.5-inch SATA bays with removable drive support and an option to expand to 4x bays. The systems provide 5x GbE ports that support machine vision cameras.

A VGA port and 2x HDMI 1.4b ports handle video duty, backed up by audio in and out jacks. You also get 4x USB 3.0 ports and an RS-232/422/485 port.

Boxer-6841M-A5 (left) and A6 model with quad-serial port extension

All six models provide 12-24V inputs on the front, as well as a power switch, a remote power connector, and LEDs. The systems support -20 to 55°C temperatures with 35W TDPs and -20 to 45°C with the 73W Xeons, both with 0.5m/s airflow. Vibration resistance is listed as random, 1 Grms, 5~500Hz.

Further information

No pricing or availability information was provided for the Boxer-6841M computers. More information may be found at Aaeon’s Boxer-6841M product page.

Source

A kinder, gentler Linus Torvalds and Linux 4.20

After apologizing for his behavior in the Linux developer community last September, Linus Torvalds came back to Linux in October. And now, in November, with the first release candidate of the 4.20 Linux kernel out, it’s time to look at what’s what with Torvalds and the controversial Linux Code of Conduct (CoC).

The answer is: We do have a kinder, gentler Torvalds.

Torvalds told me that, besides seeking professional help, he had “an email filter in place (that might be expanded upon or modified as needed or as I come up with more esoteric swearing — the current filter is really pretty basic).” In addition, Torvalds has asked the other senior Linux maintainers “to just send me email if they feel I’ve been unnecessarily abrupt.”

The results? I’ve been going through the Linux Kernel Mailing List (LKML) archives, and I’ve seen hardly a trace of the blue language that made Torvalds infamous.

Michael Larabel, founder and principal author of the Linux news site Phoronix, went further. He compared and contrasted how Torvalds reacted to the same kind of major Linux coding no-no this year and last year.

In 2018, a developer enabled a gaming device driver by default in the kernel. Torvalds replied:

We do *not* enable new random drivers by default. And we most *definitely* don’t do it when they are odd-ball ones that most people have never heard of.

Yet the new “BigBen Interactive” driver that was added this merge window did exactly that.

Just don’t do it.

In 2017, another developer made the same kind of blunder with the Dell SMBIOS driver. Then, Torvalds fired back:

As a developer, you think _your_ driver or feature is the most important thing ever, and you have the hardware.

AND ALMOST NOBODY ELSE CARES.

Read it and weep. Unless your hardware is completely ubiquitous, it damn well should not default to being enabled in everybody else’s config.

Notice the change in tone? I did, and I’m sure the wet-behind-the-ears developers on the receiving end did as well.

The new Linux Code of Conduct has now been in effect for several weeks. Before the CoC took effect, people — largely outside the Linux kernel community — were having fits about it. Developers would be kicked out for not being politically correct. Programmers would leave and take their code with them. Dogs and cats would start living together.

Well, maybe not the last, but you get the idea.

The results in the real world? All’s quiet on the LKML. The last substantive talk about the CoC was two weeks ago, and that conversation was more about the process of editing the CoC than its substance.

In short, things are peaceful in the Linux community, and they’re working hard on the next release.

Speaking of which, 4.20 will be a large release, with about 300,000 added and changed lines of code. But there’s nothing Earth-shattering in it. Torvalds wrote that 70 percent of it is driver updates, with the bulk of the rest being architecture updates and tooling.

That said, Torvalds is considering a change in how he works with the patches:

One thing I _would_ like to point out as the merge window closes: I tend to delay some pull requests that I want to take a closer look at until the second week of the merge window when things are calming down, and that _really_ means that I’d like to get all the normal pull requests in the first week of the two-week merge window. And most people really followed that, but by Wednesday this week I had gotten a bit frustrated that I kept getting new pull requests when I wanted to really just spend most of the day looking through the ones that deserved a bit of extra attention. And yes, people generally kind of know about this and I really do get *most* pull requests early. But I’m considering trying to make that a more explicit rule that I will literally stop taking new pull requests some time during the second week unless you have a good reason for why it was delayed.

Perhaps the most significant news coming out of this release is that the Linux kernel is free of variable-length arrays (VLAs). While part of standard C, VLAs, Torvalds noted, are “actively bad not just for security worries, but simply because VLA’s are a really horribly bad idea in general in the kernel.”

Finally, WireGuard, a proposed built-in Linux virtual private network (VPN), won’t be making it into the kernel this go-around. This is due to some unresolved questions about how it handles encryption.

Perhaps with a kinder, gentler Torvalds in charge, one who really likes WireGuard, it will finally make it in for the 2019 Linux 5.0 kernel release.

Source

How to manage storage on Linux with LVM

Logical Volume Manager (LVM) is a software-based RAID-like system that lets you create “pools” of storage and add hard drive space to those pools as needed. There are lots of reasons to use it, especially in a data center or any place where storage requirements change over time. Many Linux distributions use it by default for desktop installations, though, because users find the flexibility convenient and there are some built-in encryption features that the LVM structure simplifies.

However, if you aren’t used to seeing an LVM volume when booting off of a Live CD for data rescue or migration purposes, LVM can be confusing because the mount command can’t mount LVM volumes. For that, you need the LVM tools installed. The chances are good that your distribution already ships the LVM utilities, or at least has them available in its repositories.
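
If the tools are missing, they are usually easy to add. As a minimal sketch (assuming your distribution packages them under the usual lvm2 name, as Debian, Ubuntu, and Fedora do), you can install them with your package manager:

# apt install lvm2

or, on Fedora-based systems:

# dnf install lvm2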

This tutorial explains how to create and deal with LVM volumes.

Create an LVM pool

This article assumes you have a working knowledge of how to interact with hard drives on Linux. If you need more information on the basics before continuing, read my introduction to hard drives on Linux.

Usually, you don’t have to set up LVM at all. When you install Linux, it often defaults to creating a virtual “pool” of storage and adding your machine’s hard drive(s) to that pool. However, manually creating an LVM storage pool is a great way to learn what happens behind the scenes.

You can practice with two spare thumb drives of any size, or two hard drives, or a virtual machine with two imaginary drives defined.

First, format the imaginary drive /dev/sdx so that you have a fresh drive ready to use for this demo.

# echo "warning, this ERASES everything on this drive."
warning, this ERASES everything on this drive.
# dd if=/dev/zero of=/dev/sdx count=8196
# parted /dev/sdx print | grep Disk
Disk /dev/sdx: 100GB
# parted /dev/sdx mklabel gpt
# parted /dev/sdx mkpart primary 1s 100%

This LVM command creates a storage pool, known in LVM terms as a volume group. A pool can consist of one or more drives, and right now it consists of one. This example storage pool is named billiards, but you can call it anything.

# vgcreate billiards /dev/sdx1
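
Note that vgcreate should initialize the partition as an LVM physical volume for you. If you prefer to do that step explicitly, or your version of the tools asks for it, you can run pvcreate on the partition first (same example device as above):

# pvcreate /dev/sdx1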

Now you have a big, nebulous pool of storage space. Time to hand it out. To create two logical volumes (you can think of them as virtual drives), one called vol0 and the other called vol1, enter the following:

# lvcreate -L 49G --name vol0 billiards
# lvcreate -L 49G --name vol1 billiards

Now you have two volumes carved out of one storage pool, but neither of them has a filesystem yet. To create a filesystem on each volume, you must bring the billiards volume group online.

# vgchange --activate y billiards

Now make the file systems. The -L option provides a label for the drive, which is displayed when the drive is mounted on your desktop. The path to the volume is a little different than the usual device paths you’re used to because these are virtual devices in an LVM storage pool.

# mkfs.ext4 -L finance /dev/billiards/vol0
# mkfs.ext4 -L production /dev/billiards/vol1

You can mount these new volumes on your desktop or from a terminal.

# mkdir -p /mnt/vol0 /mnt/vol1
# mount /dev/billiards/vol0 /mnt/vol0
# mount /dev/billiards/vol1 /mnt/vol1

Add space to your pool

So far, LVM has provided nothing more than partitioning a drive normally provides: two distinct sections of drive space on a single physical drive (in this example, 49GB and 49GB on a 100GB drive). Imagine now that the finance department needs more space. Traditionally, you’d have to restructure. Maybe you’d move the finance department data to a new, dedicated physical drive, or maybe you’d add a drive and then use an ugly symlink hack to provide users easy access to their additional storage space. With LVM, however, all you have to do is expand the storage pool.

You can add space to your pool by formatting another drive and adding it to the pool.

First, create a partition on the new drive you’re adding to the pool.

# parted /dev/sdy mklabel gpt
# parted /dev/sdy mkpart primary 1s 100%

Then use the vgextend command to mark the new drive as part of the pool.

# vgextend billiards /dev/sdy1

Finally, dedicate some portion of the newly available storage pool to the appropriate logical volume.

# lvextend -L +49G /dev/billiards/vol0

Of course, the expansion doesn’t have to be so linear. Imagine that the production department suddenly needs 100TB of additional space. With LVM, you can add as many physical drives as needed, adding each one and using vgextend to create a 100TB storage pool, then using lvextend to “stretch” the production department’s storage space across 100TB of available space.
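
As a rough sketch of that workflow (the device names here are hypothetical), you would partition and vgextend each new drive, then grow the logical volume. Remember that the filesystem has to grow too; the -r flag tells lvextend to resize the ext4 filesystem in the same step, or you can run resize2fs on the volume afterwards.

# vgextend billiards /dev/sdz1
# vgextend billiards /dev/sdaa1
# lvextend -r -L +100T /dev/billiards/vol1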

Use utils to understand your storage structure

Once you start using LVM in earnest, the landscape of storage can get overwhelming. There are two commands to gather information about the structure of your storage infrastructure.

First, there is vgdisplay, which displays information about your volume groups (you can think of these as LVM’s big, high-level virtual drives).

# vgdisplay
--- Volume group ---
VG Name billiards
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size <237.47 GiB
PE Size 4.00 MiB
Total PE 60792
Alloc PE / Size 60792 / <237.47 GiB
Free PE / Size 0 / 0
VG UUID j5RlhN-Co4Q-7d99-eM3K-G77R-eDJO-nMR9Yg

The second is lvdisplay, which displays information about your logical volumes (you can think of these as user-facing drives).

# lvdisplay

--- Logical volume ---
LV Path /dev/billiards/finance
LV Name finance
VG Name billiards
LV UUID qPgRhr-s0rS-YJHK-0Cl3-5MME-87OJ-vjjYRT
LV Write Access read/write
LV Creation host, time localhost, 2018-12-16 07:31:01 +1300
LV Status available
# open 1
LV Size 149.68 GiB
Current LE 46511
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

[…]

Use LVM in a rescue environment

The “problem” with LVM is that it wraps partitions in a way that is unfamiliar to many administrative users who are used to traditional drive partitioning. Under normal circumstances, LVM drives are activated and mounted fairly invisibly during the boot process or desktop LVM integration. It’s not something you typically have to think about. It only becomes problematic when you find yourself in recovery mode after something goes wrong with your system.

If you need to mount a volume that’s “hidden” within the structure of LVM, you must make sure that the LVM toolchain is installed. If you have access to your /usr/sbin directory, you probably have access to all of your usual LVM commands. But if you’ve booted into a minimal shell or a rescue environment, you may not have those tools. A good rescue environment has LVM installed; if the one you’re using doesn’t, either install it manually or find a rescue disc that already includes it.

For the sake of repetition and clarity, here’s how to mount an LVM volume.

# vgchange --activate y
2 logical volume(s) in volume group “billiards” now active
# mkdir /mnt/finance
# mount /dev/billiards/finance /mnt/finance
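
If you don’t know the volume group or logical volume names on an unfamiliar system, the standard LVM reporting commands will list them (a minimal sketch; these ship with the lvm2 tools):

# vgscan
# vgs
# lvs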

Integrate LVM with LUKS encryption

Many Linux distributions use LVM by default when installing the operating system. This permits storage extension later, but it also integrates nicely with disk encryption provided by the Linux Unified Key Setup (LUKS) encryption toolchain.

Encryption is pretty important, and there are two ways to encrypt things: you can encrypt on a per-file basis with a tool like GnuPG, or you can encrypt an entire partition. On Linux, encrypting a partition is easy with LUKS, which, being completely integrated into Linux by way of kernel modules, permits drives to be mounted for seamless reading and writing.
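
Per-file encryption with GnuPG, for instance, can be as simple as symmetric encryption with a passphrase (the filename here is just an example). The first command writes an encrypted copy with a .gpg suffix, and running gpg on that file decrypts it again:

$ gpg -c secrets.txt
$ gpg secrets.txt.gpg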

Encrypting your entire main drive usually happens as an option during installation. You select to encrypt your entire drive or just your home partition when prompted, and from that point on you’re using LUKS. It’s mostly invisible to you, aside from a password prompt during boot.

If your distribution doesn’t offer this option during installation, or if you just want to encrypt a drive or partition manually, you can do that.

You can follow this example by using a spare drive; I used a small 4GB thumb drive.

First, plug the drive into your computer. Make sure it’s safe to erase the drive and use lsblk to locate the drive on your system.

If the drive isn’t already partitioned, partition it now. If you don’t know how to partition a drive, check out the link above for instructions.
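
If you need a refresher, a minimal partitioning sketch with parted looks like this, assuming the thumb drive shows up as /dev/sdx (double-check with lsblk first, because this erases the drive):

# parted /dev/sdx mklabel gpt
# parted /dev/sdx mkpart primary 1s 100%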

Now you can set up the encryption. First, format the partition with the cryptsetup command.

# cryptsetup luksFormat /dev/sdx1

Note that you’re encrypting the partition, not the physical drive itself. You’ll see a warning that LUKS is going to erase your drive; you must accept it to continue. You’ll be prompted to create a passphrase, so do that. Don’t forget that passphrase. Without it, you will never be able to get into that drive again!

You’ve encrypted the thumb drive’s partition, but there’s no filesystem on the drive yet. Of course, you can’t write a filesystem to the drive while you’re locked out of it, so open the drive with LUKS first. You can provide a human-friendly name for your drive; for this example, I used mySafeDrive.

# cryptsetup luksOpen /dev/sdx1 mySafeDrive

Enter your passphrase to open the drive.

Look in /dev/mapper and you’ll see that the volume is now mapped along with any other LVM volumes you might have, meaning you have access to that drive. The custom name (e.g., mySafeDrive) is a symlink to an auto-generated designator in /dev/mapper. You can use either path when operating on this drive.

# ls -l /dev/mapper/mySafeDrive
lrwxrwxrwx. 1 root root 7 Oct 24 03:58 /dev/mapper/mySafeDrive -> ../dm-4

Create your filesystem.

# mkfs.ext4 -o Linux -L mySafeExt4Drive /dev/mapper/mySafeDrive

Now do an ls -lh on /dev/mapper and you’ll see that mySafeDrive is actually a symlink to some other device node, probably /dev/dm-0 or similar. That’s the device holding the filesystem you can mount:

# mount /dev/mapper/mySafeDrive /mnt/hd

Now the filesystem on the encrypted drive is mounted. You can read and write files as you’d expect with any drive.
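
When you’re finished, unmount the filesystem and close the LUKS mapping so the drive is locked again (a quick sketch using the names from this example):

# umount /mnt/hd
# cryptsetup luksClose mySafeDrive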

Use encrypted drives with the desktop

LUKS is built into the kernel, so your Linux system is fully aware of how to handle it. Detach the drive, plug it back in, and mount it from your desktop. In KDE’s Dolphin file manager, you’ll be prompted for a password before the drive is decrypted and mounted.

Using LVM and LUKS is easy, and it provides flexibility for you as a user and an admin. Being tightly integrated into Linux itself, it’s well-supported and a great way to add a layer of security to your data. Try it today!

Source

Nimbatus – The Space Drone Constructor is going to add drone racing, weather effects and more goodies

Nimbatus – The Space Drone Constructor is an excellent Early Access game where you snap blocks together to make some truly ridiculous creations. Stray Fawn Studio have now outlined their future plans and it sounds fun.

It’s an addictive game, one where you can easily get lost in how configurable you can make your drones. Do you make them small and sneaky? Do you make them as big as the entire screen? Do you give them some automation with AI to do things for you or go fully manual? So many options, so little time.

To give another example of what you can make with it, I perused the Steam Workshop today and found this amusing little number called “Deus Mecanicus”:

As for their current plans, the Drone Racing sounds awesome. They say it’s going to take a while, but even so it’s hard not to be excited. A game mode which has “drone vs. drone racing for fully autonomous drones and also races against the clock with multiple tracks and leaderboards”, sign me up!

Additionally, they’re going to be adding in wheels, weather effects, ice planets, boss fights, improved single-player campaign progress, sandbox planets and more.

See their roadmap here for their full plans. Find the game on Humble Store and Steam, well worth a look.

Trailer below for those who haven’t seen yet, it’s brilliant:

Source

Download Manjaro Linux GNOME 18

Manjaro Linux GNOME is an open source Linux operating system, a community edition of Manjaro Linux built around the well-known GNOME desktop environment.

Distributed as 32-bit and 64-bit Live DVDs

It is distributed, like all the other Manjaro derivatives, as Live DVD ISO images that support both 64-bit and 32-bit architectures. In addition, the distribution inherits all the unique features of the original Manjaro operating system.

Boot options

The boot medium will allow users to try the operating system without installing anything on their computers. It provides two modes, one for users of Intel graphics cards, and another one for owners of Nvidia or AMD video cards. Additionally, the live medium can be used to boot the currently installed operating system, check whether your hardware components are correctly recognized, or test the system memory for errors.

GNOME is in charge of the graphical session

As mentioned before, the live session is powered by the GNOME desktop environment, which includes some of the main packages from the official GNOME Project. Nothing has been changed in the graphical user interface, providing users with a pure GNOME experience.

Default applications

Default applications include the VLC Media Player, Mozilla Firefox web browser, Evolution email client, Viewnior image viewer, LibreOffice office suite, Banshee music player, and GIMP image editor.

Many other core GNOME components are installed in this edition of Manjaro Linux, including the GNOME Photos, GNOME Weather, GNOME Clocks, GNOME Chess, GNOME Logs, and many, if not all, GNOME games.

Follows a rolling-release model

Manjaro Linux GNOME Community Edition is a rolling release operating system, which means that users don’t have to download a new ISO image to upgrade their system each time a new version is available.

Bottom line

You should really consider using this Linux distribution as your main operating system for common day to day tasks, especially because it will offer you an uncluttered GNOME desktop environment and a reliable, rolling-release Arch Linux base.

Source

Advance Your Open Source Skills with These Essential Articles, Videos, and More | Linux.com

Recent industry events have underscored the strength of open source in today’s computing landscape. With billions of dollars being spent, the power of open source development, collaboration, and organization seems unstoppable.

Toward that end, we recently provided an array of articles, videos, and other resources to meet you where you are on your open source journey and help you master the basics, improve your skills, or explore the broader ecosystem. Let’s take a look.

To start, we provided some Linux basics in our two-part series exploring Linux links:

Then, we covered some basic tools for open source logging and monitoring:

We also took an in-depth look at the Introduction to Open Source, Git, and Linux training course from The Linux Foundation. This course presents a comprehensive learning path focused on development, Linux systems, and the Git revision control system. The $299 course is self-paced and comes with extensive and easily referenced learning materials. Get a preview of the course curriculum in this four-part series by Sam Dean:

As the default compiler for the Linux kernel, the GNU Compiler Collection (GCC) delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. We took a closer look at this vital tool in this whitepaper:

Security is another vital component of Linux. In this video interview, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities.

Along with all these articles, we also recently published videos from some of our October events. Follow the links below to watch complete keynote and technical session presentations from Open Source Summit, Linux Security Summit, and Open FinTech Forum.

  • Check out 90+ sessions from Open Source Summit Europe & ELC + OpenIoT Summit Europe.
  • These 21 videos from Linux Security Summit Europe provide an overview of recent kernel development.
  • The 9 keynote videos from Open FinTech Forum cover cutting-edge open source technologies including AI, blockchain, and Kubernetes.

Stay tuned for more event coverage and essential open source resources.

Source

16-Way Graphics Card Comparison With Valve’s Steam Play For Windows Games On Linux

While Steam Play for running Windows games on Linux via Valve’s Wine-based Proton compatibility layer is still beta quality, it has been maturing fast since it was rolled out to the public in late August. The game list continues growing, and with regular updates to Steam Play / Proton / DXVK (Direct3D 10/11 over Vulkan), more games are coming online for Linux and running with decent performance and correct rendering. Given that the most recent Steam Play beta update vastly improved the experience in our tests, here are the first of our Steam Play Proton benchmarks with Ubuntu Linux, using sixteen different NVIDIA GeForce / AMD Radeon graphics cards.

The wonderful database at ProtonDB.com is the de facto source for tracking which Windows games work on Linux. As of writing, more than 2,800 titles are reported to work, though that number can vary depending upon your Linux distribution and graphics drivers / hardware. The vast majority of games running well tend to be older and/or indie titles. Among the “platinum” rated games at this point are Tomb Raider: Anniversary, Final Fantasy VII, the original Company of Heroes, Unreal Gold, Far Cry, and also some more interesting games like Call of Duty 4: Modern Warfare and The Witcher 3. The selection of games is improving almost daily, though, thanks to Proton/DXVK advancements being open source, Valve regularly releasing updates, and the occasional workaround landing in the Mesa graphics driver code.

Finding Steam Play games to use as benchmarks is still a bit of a challenge, as the games need to be new enough to stress modern graphics cards and make for an interesting comparison. The games also need to meet our benchmark/test requirements for integration with the Phoronix Test Suite and OpenBenchmarking.org. Since the Steam Play beta update last week improved things, I’ve been running tests using Batman: Arkham Origins and F1 2018. The Batman title is one of the older ones in the franchise but at least works well on Steam Play, while F1 2018 is quite interesting in that it is a modern Windows game that works well on Linux thanks to Proton and DXVK remapping D3D11 to Vulkan.

There are also some other game titles I’m still working on benchmarking, like Grand Theft Auto V and Shadow of the Tomb Raider, but there are still issues with them in my most recent checks. Benchmarks on other games will come as more benchmark-friendly, modern games are brought up to run properly with Steam Play.

For this benchmarking I tested 16 different graphics cards that on the Radeon side included the R9 285, R9 290, RX 560, RX 580, RX Vega 56, and RX Vega 64. All of the Radeon tests were done with the fresh driver stack of Linux 4.19 paired with Mesa 18.3-devel for the newest RADV driver code as of testing. On the NVIDIA side were the GeForce GTX 970, GTX 980, GTX 980 Ti, GTX 1060, GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti, RTX 2070, and RTX 2080 Ti. The cards tested on both sides were limited to the newer GPUs I had available for testing. The NVIDIA driver in use was the 410.73 release and all of these benchmarks were run from the same Ubuntu 18.10 system with an Intel Core i9 9900K processor.

These benchmarks were run via the Phoronix Test Suite open-source benchmarking framework.
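
For readers wanting to reproduce this kind of run, the general Phoronix Test Suite workflow looks roughly like the sketch below. The test profile name is left as a placeholder; use list-available-tests to find the current profiles for the games you have installed:

$ phoronix-test-suite list-available-tests
$ phoronix-test-suite benchmark <test-profile-name>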

Source

JingDong (JD.com), China Mobile Cloud, Qing Cloud, and Whale Cloud Join the OpenMessaging Project

Today, the OpenMessaging Project — a collaborative project focused on creating a vendor-neutral open standard for distributed messaging — announced four new members: JD.com, China Mobile Cloud, Qing Cloud, and Whale Cloud. Current members include Alibaba, DataPipeline, Di Di, Streamlio, WeBank, and Yahoo!.

The acceleration of microservice-based and cloud-based applications has put a growing focus on how data is connected to services, applications, and users. This focus has led to a number of new innovations and new products that support messaging and queuing needs. It has also contributed to increased demands on messaging and queuing solutions, making performance and scalability critical to success and an open standard a must.

The goal of the OpenMessaging Project is to build out an industry standard, cloud oriented, and vendor neutral open standard for distributed messaging. More on this project and how to participate here: http://openmessaging.cloud

New Member Supporting Quotes:

“At China Mobile and CMsoft, we have built a MQ proxy system of Apache RocketMQ to provide a set of producer APIs and consumer APIs. The redundancy of having to hide the differences among the MQs takes so much time and energy out of our team. Given our knowledge in this field, we understand first hand the importance of a messaging communication standard. Having a vendor-neutral and language-independent MQ standard guideline is a big win for many applications. We believe this standard can help and promote the MQ technology that we rely on.” – Henry Hu, Architect at China Mobile and CMsoft.

“As a cloud provider, we offer various messaging services including Apache Kafka, RabbitMQ, and RocketMQ to our customers. More and more people keep asking us what software to use for their messaging requirements as the market is saturated with various open source solutions. This market saturation causes not only a high learning curve, but also a high maintenance cost. An industry open standard, vendor-neutral and language-independent specification for distributed messaging is increasingly important, especially in a cloud era. We look forward to collaborating with the OpenMessaging project to help drive messaging service towards a unified, open standard interface.” – Ray Zhou, Development Director at QingCloud

“At the JD Group, JingDong Message Queue (JMQ) has been widely used. However, despite our efforts to be compatible with all kinds of message protocols, we still can’t meet all the requirements. We are planning to open source JMQ, so it can be implemented for OpenMessaging. We see OpenMessaging as a de-facto international open standard for distributed messaging that aims at satisfying the need of modern cloud-native messaging and streaming applications. We sincerely believe that a unified and widely-accepted messaging standard can benefit MQ technology and applications relied on it.” – DeQiang Lin, Messaging Leader at the JingDong Middleware Department

“Currently, message queuing uses proprietary, closed protocols, restricting the ability for different operating systems or programming languages to interact in a heterogeneous set of environments. At Whale Cloud, in order to make it easy for developers to use messaging and streaming services, we’ve worked to eliminate the differences between the different protocols. Giving us insight and knowledge to know that a vendor-neutral and language-independent open specification is badly needed.” – Zheng Tao, Technical Director of Distributed Messaging and Streaming Data Platform at Whale Cloud

Source

RK3399 based Raspberry Pi clone will launch at $49 — or even lower

Radxa has posted specs for a $49 and up, community backed “Rock Pi” Raspberry Pi lookalike with a Rockchip RK3399, USB 3.0, M.2, HDMI 2.0, and native GbE, plus optional WiFi, BT, and PoE.

Radxa is prepping a Rockchip RK3399-based Raspberry Pi pseudo clone called the Rock Pi. It joins the RK3399-based NanoPi M4 in closely matching the RPi 3 layout, and it appears it may be the most affordable RK3399 based SBC yet, starting at $49 with 2GB RAM, and possibly lower for the unpriced 1GB model.

Rock Pi, front and back

Many other RK3399 based SBCs have the same size and 40-pin connector as the Pi, but with different layouts. These include the new Khadas Edge-V, the Renegade Elite, and several other boards found in our 2018 open-spec SBC roundup.

Tom Cubie, who started Cubieboards.org before moving to Radxa, informed me of the upcoming Rock Pi a month ago. However, I first saw the specs today on a revised version of the Single Board Computer Database (“board-DB”), now hosted on Hackerboards. As some of you may recall, LinuxGizmos switched to the Hackerboards.com domain for a year before switching back.

Rick Lehrbaum, who created LinuxDevices and LinuxGizmos, not to mention the PC/104 SBC standard, has been transitioning away from LinuxGizmos in 2018. He decided to revive Hackerboards.com when board-DB creator Raffaele Tranquillini asked if he would take over the database. Currently, Hackerboards is devoted to a revised version of board-DB, which Lehrbaum is in the process of updating.

In his October email, Cubie informed me that Radxa was acquired by a Shenzhen based OEM/ODM called Emdoor Group in 2016. This temporarily put a halt to the Radxa community, which once brought us open-spec boards like the Rockchip RK3188 based Radxa Rock and RK3288 equipped Radxa Rock 2 Square. This year, Cubie signed an agreement with Emdoor, enabling them to revive the Radxa community. “Rock Pi is the beginning of the rebuilding of Radxa,” wrote Cubie.

We based our spec list below primarily on the Radxa product page but added a few items from the board-db listings such as the extended temperature range. Unlike the product page, the board-db listings also include pricing on all but one model.

The Rock Pi Model A will sell for $49 (2GB) and $65 (4GB). The Model B, which adds PoE and a WiFi-ac/Bluetooth 5.0 wireless module, sells for $49 (1GB), $59 (2GB) or $75 (4GB). There’s no price yet for the 1GB Model A, which could end up in the low to mid $40 range, if not $39. The only other difference between the Model A and B, according to board-DB, is that the Model B lacks Android support (7.1 or 9.0). Both models support “some Linux distributions,” says Radxa.

Inside the Rock Pi

The ports on the 85 x 54mm Rock Pi are just where a Pi lover would expect them to be. Unlike the RPi 3B or 3B+, the GbE port is native, giving you at least 939Mbps — at least three times the bandwidth of the USB-attached GbE on the 3B+. Like the 3B+, it supports Power-over-Ethernet using the same official Raspberry Pi PoE HAT.

Rock Pi (left) and pinout diagram

Specs are almost identical to those of the $75 (2GB) NanoPi M4. The major difference is that the Rock Pi adds an M.2 storage slot for NVMe SSDs but lacks the M4’s 24-pin GPIO interface, which augments the 40-pin connector found on both boards. The NanoPi M4 also has standard wireless (but no PoE) and has 4x USB 3.0 host ports instead of the 2x 3.0 and 2x 2.0 on the Rock Pi.

If the Rock Pi pricing holds, it looks like the better deal based on specs alone. That means it could be the most affordable RK3399 SBC yet, even besting the smaller, more limited (1GB only) $50 NanoPi Neo4.

The Rock Pi has a microSD slot and an empty eMMC socket in addition to the M.2. You get the same, 4K-ready HDMI 2.0 port, which is one of the main selling points of the RK3399.

The board also provides MIPI-DSI and -CSI interfaces for dual displays and camera attachments, respectively, although they are only 2-lane each. Other features include an audio jack with mic, an RTC, and a USB Type-C port for wide-range power.

Preliminary specifications listed for the Rock Pi include:

  • Processor — Rockchip RK3399 (2x Cortex-A72 at up to 2.0GHz, 4x Cortex-A53 at up to 1.5GHz); Mali-T860 MP4 GPU
  • Memory/storage:
    • 1GB, 2GB, or 4GB LPDDR4 RAM (dual-channel)
    • eMMC socket for 8GB to 128GB (bootable)
    • MicroSD slot for up to 128GB (bootable)
    • M.2 socket with support for up to 2TB NVMe SSD
  • Wireless — 802.11b/g/n/ac (2.4GHz/5GHz) with Bluetooth 5.0 with antenna (Model B only)
  • Networking — Gigabit Ethernet port; PoE support on Model B only (requires RPi PoE HAT)
  • Media I/O:
    • HDMI 2.0a port (with audio) for up to 4K at 60Hz
    • MIPI-DSI (2-lane) via FPC; dual display mirror or extend with HDMI
    • MIPI-CSI (2-lane) via FPC for up to 8MP camera
    • 3.5mm audio I/O jack (24-bit/96KHz)
    • Mic interface
  • Other I/O:
    • 2x USB 3.0 host ports
    • 2x USB 2.0 host ports
    • USB 3.0 Type-C OTG with power support and HW switch for host/device
  • Expansion — 40-pin GPIO header (see pinout diagram); M.2 slot for SSD (see mem/storage)
  • Other features — RTC with optional battery connector
  • Power:
    • 5.5-20V input
    • USB Type-C PD 2.0, 9V/2A, 12V/2A, 15V/2A, 20V/2A
    • Qualcomm Quick Charge support for QC 3.0/2.0 adapter, 9V/2A, 12V/1.5A
    • 8mA to 20mA consumption
  • Operating temperature — 0 to 80°C
  • Dimensions — 85 x 54mm
  • Operating system — Android 9.0; “some” Linux distros

Further information

The Rock Pi is looking like it’s heading for pre-order or live orders soon, starting at below $49 if you can get by with only 1GB RAM. More information may be found on Radxa’s Rock Pi product page.

Source
