AI in the Real World

Hilary Mason, general manager for machine learning at Cloudera, discussed AI in the real world in her keynote at the recent Open FinTech Forum.

We are living in the future – it is just unevenly distributed with “an outstanding amount of hype and this anthropomorphization of what [AI] technology can actually provide for us,” observed Hilary Mason, general manager for machine learning at Cloudera, who led a keynote on “AI in the Real World: Today and Tomorrow,” at the recent Open FinTech Forum.

AI has existed as an academic field of research since the mid-1950s, and if the forum had been held 10 years ago, we would have been talking about big data, she said. But, today, we have machine learning and feedback loops that allow systems to continue improving with the introduction of more data.

Machine learning provides a set of techniques that fall under the broad umbrella of data science. AI has returned, from a terminology perspective, Mason said, because of the rise of deep learning, a subset of machine learning techniques based around neural networks that has provided not just more efficient capabilities but the ability to do things we couldn’t do at all five years ago.

Imagine the future

All of this “creates a technical foundation on which we can start to imagine the future,” she said. Her favorite machine learning application is Google Maps. Google is getting real-time data from people’s smartphones and integrating it with public data sets, so the app can make predictions based on historical data, she noted.

Getting this right, however, is really hard. Mason shared an anecdote about how her name is a “machine learning edge case.” She shares her name with a British actress who passed away around 2005 after a very successful career.

Late in her career, the actress played the role of an ugly witch, and a search engine from 2009 combined photos with text results. At the time, Mason was working as a professor, and her bio was paired with the actress’s picture in that role. “Here she is, the ugly hag… and the implication here is obvious,” Mason said. “This named entity disambiguation problem is still a problem for us in machine learning in every domain.”

This example illustrates that “this technology has a tremendous amount of potential to make our lives more efficient, to build new products. But it also has limitations, and when we have conferences like this, we tend to talk about the potential, but not about the limitations, and not about where things tend to go a bit wrong.”

Machine learning in FinTech

Large companies operating complex businesses have a huge amount of human and technical expertise about where the ROI from machine learning would be, she said. That’s because they also have huge amounts of data, generally created as a result of operating those businesses for some time. Mason’s rule of thumb when she works with companies is to find some clear ROI from a cost savings or process improvement using machine learning.

“Lots of people, in FinTech especially, want to start in security, anti-money laundering, and fraud detection. These are really fruitful areas because a small percentage improvement is very high impact.”

Other areas where machine learning can be useful are understanding your customers, churn analysis, and marketing techniques, all of which are pretty easy to get started in, she said.

“But if you only think about the ROI in the terms of cost reduction, you put a boundary on the amount of potential your use of AI will have. Think also about new revenue opportunities, new growth opportunities that can come out of the same technologies. That’s where the real potential is.”

Getting started

The first thing to do, she said, is to “drink coffee, have ideas.” Mason said she visits lots of companies, and when she sees their lists of projects, they’re always good ideas. “I get very worried, because you are missing out on a huge amount of opportunity that would likely look like bad ideas on the surface.”

It’s important to “validate against robust criteria” and create a broad sweep of ideas. Then, go through and validate capabilities. Some of the questions to ask include: is there research activity relevant to what you’re doing? Is there work in one domain you can transfer to another domain? Has somebody done something in another industry that you can use or in an academic context that you can use?

Organizations also need to figure out whether systems are becoming commoditized in open source, meaning “you have a robust software and infrastructure you can build on without having to own and create it yourself.” Then, the organization must figure out if data is available — either within the company or available to purchase.

Then it’s time to “progressively explore the risky capabilities. That means have a phased investment plan,’’ Mason explained. In machine learning, this is done in three phases, starting with validation and exploration: Does the data exist? Can you build a very simple model in a week?

“At each [phase], you have a cost gate to make sure you’re not investing in things that aren’t ready and to make sure that your people are happy, making progress, and not going down little rabbit holes that are technically interesting, but ultimately not tied to the application.”

That said, Mason acknowledged that predicting the future is, of course, very hard, so people write reports on different technologies that are designed to be six months to two years ahead of what they would put in production.

Looking ahead

As progress is made in the development of AI, machine learning and deep learning, there are still things we need to keep in mind, Mason said. “One of the biggest topics in our field right now is how we incorporate ethics, how we comply with expectations of privacy in the practice of data science.”

She gave a plug to a short, free ebook called “Data Driven: Creating a Data Culture,” that she co-authored with DJ Patil, who worked as chief data scientist for President Barack Obama. Their goal, she said, is “to try and get folks who are practicing out in the world of machine learning and data science to think about their tools [and] for them to practice ethics in the context of their work.”

Mason ended her presentation on an optimistic note, observing that “AI will find its way into many fundamental processes of the businesses that we all run. So when I say, ‘Let’s make it boring,’ I actually think that’s what makes it more exciting.”

Source

Linux Today – What’s New In Red Hat OpenStack Platform 14?

Nov 15, 2018

The Red Hat OpenStack Platform 14 release is based on the upstream OpenStack Rocky milestone, which first became publicly available on Aug. 30. Among the new features in OSP 14 are improved networking capabilities, including enhanced load balancing capabilities for container workloads. Red Hat is also continuing to push forward on the integration of its OpenShift Kubernetes container orchestration platform with OpenStack.

In a video interview with eWEEK, Mark McLoughlin, Senior Director of Engineering, OpenStack at Red Hat, outlined some of the new features in OSP 14 and the direction for the road ahead.

Source

Odd Realm is a sandbox settlement builder inspired by Dwarf Fortress and Rimworld with Linux support

Inspired by the likes of Dwarf Fortress and Rimworld, Odd Realm is a sandbox settlement builder currently in Early Access on itch.

A sweet find while browsing around for new games today; I came across it and was instantly pulled in by the style. It looks like it could be an interesting choice if you find Dwarf Fortress too complex or you fancy trying out something new.

You will face off against passing seasons, roaming bandits, underground horrors, and gods from legend, which makes it sound really quite fun.

Features:

  • 4 procedurally generated biomes (Desert, Taiga, Voidland, and Tropical)
  • 24+ Creatures
  • 100+ items, weapons, and gear
  • 100+ buildable blueprints for props, blocks, plants, trees, and platforms
  • 9+ Settler professions
  • Unique scenarios and encounters based on player decisions

Currently only available on itch.io for $10. The developer is planning to eventually release it on Steam, once it is developed enough and enough feedback has been gathered on how it’s going on the itch store.

Source

New Linux-Targeting Crypto-Mining Malware Combines Hiding and Upgrading Capabilities

Japanese multinational cybersecurity firm Trend Micro has detected a new strain of crypto-mining malware that targets PCs running Linux, according to a report published Nov. 8.

The new strain is reportedly able to hide the malicious process of unauthorized cryptocurrency mining on users’ CPUs by implementing a rootkit component. The malware itself, detected by Trend Micro as Coinminer.Linux.KORKERDS.AB, is also reportedly capable of updating itself.

According to the report, the combination of hiding and self-upgrading capabilities gives the malware a great advantage. While the rootkit fails to hide the increased CPU usage and the presence of the running crypto-mining process, the malware is also improved by updates, which can completely repurpose the existing code or tools by editing a few lines of code, the report notes.

The new crypto-mining malware strain infects Linux PCs via third-party or compromised plugins. Once installed, the compromised plugin reportedly gets admin rights, and the malware can then run with the privileges granted to that application. In this regard, Trend Micro mentioned another case of Linux-targeting crypto malware that used the same entry point and took place in September this year.

Based on web server statistics, the estimated market share of Linux on personal computers amounted to around 1.8 percent in 2016. The share of Microsoft Windows systems in 2016 was around 89.7 percent, while Mac OS served around 8.5 percent of users.

Recently, Cointelegraph reported that a group of South Korean hackers will face trial for a cryptojacking case that allegedly infected more than 6,000 computers with crypto-mining malware.

In September, a report revealed that leaked code targeting Microsoft systems, which hackers allegedly stole from the U.S. National Security Agency (NSA), sparked a fivefold increase in cryptocurrency mining malware infections.

Source

6 Best Practices for High-Performance Serverless Engineering | Linux.com

When you write your first few lambdas, performance is the last thing on your mind. Permissions, security, identity and access management (IAM) roles, and triggers all conspire to make even the first couple of lambdas, even after a “hello world” trial, a struggle just to get your first serverless deployments up and working. But once your users begin to rely on the services your lambdas provide, it’s time to focus on high-performance serverless.

Here are some key things to remember when you’re trying to produce high-performance serverless applications.

1. Observability
Serverless handles scaling really well. But as scale interacts with complexity, slowdowns and bugs are inevitable. I’ll be frank: these can be a bear if you don’t plan for observability from the start.

Read more at The New Stack

Source

How to use systemd-nspawn for Linux system recovery

For as long as GNU/Linux systems have existed, system administrators have needed to recover from root filesystem corruption, accidental configuration changes, or other situations that kept the system from booting into a “normal” state.

Linux distributions typically offer one or more menu options at boot time (for example, in the GRUB menu) that can be used for rescuing a broken system; typically they boot the system into a single-user mode with most system services disabled. In the worst case, the user could modify the kernel command line in the bootloader to use the standard shell as the init (PID 1) process. This method is the most complex and fraught with complications, which can lead to frustration and lost time when a system needs rescuing.
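
As a rough sketch of that last-resort approach (the kernel image name and root device below are placeholders, not taken from this article), the "linux" line of the GRUB menu entry would be edited to end with init=/bin/sh:

linux /vmlinuz-<version> root=<root-device> ro init=/bin/sh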

Most importantly, these methods all assume that the damaged system has a physical console of some sort, but this is no longer a given in the age of cloud computing. Without a physical console, there are few (if any) options to influence the boot process this way. Even physical machines may be small, embedded devices that don’t offer an easy-to-use console, and finding the proper serial port cables and adapters and setting up a serial terminal emulator, all to use a serial console port while dealing with an emergency, is often complicated.

When another system (of the same architecture and generally similar configuration) is available, a common technique to simplify the repair process is to extract the storage device(s) from the damaged system and connect them to the working system as secondary devices. With physical systems, this is usually straightforward, but most cloud computing platforms can also support this since they allow the root storage volume of the damaged instance to be mounted on another instance.
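
As one hedged, cloud-specific illustration (AWS is assumed here, and the volume and instance IDs are placeholders), moving the root volume of a stopped, damaged instance over to a working instance looks roughly like this:

$ aws ec2 detach-volume --volume-id vol-0123456789abcdef0
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0fedcba9876543210 --device /dev/sdf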

Once the root filesystem is attached to another system, addressing filesystem corruption is straightforward using fsck and other tools. Addressing configuration mistakes, broken packages, or other issues can be more complex since they require mounting the filesystem and locating and changing the correct configuration files or databases.

Using systemd

Before systemd, editing configuration files with a text editor was a practical way to correct a configuration. Locating the necessary files and understanding their contents may be a separate challenge, which is beyond the scope of this article.

When the GNU/Linux system uses systemd though, many configuration changes are best made using the tools it provides—enabling and disabling services, for example, requires the creation or removal of symbolic links in various locations. The systemctl tool is used to make these changes, but using it requires a systemd instance to be running and listening (on D-Bus) for requests. When the root filesystem is mounted as an additional filesystem on another machine, the running systemd instance can’t be used to make these changes.
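
For context, enabling or disabling a unit mostly amounts to symlink management; a minimal sketch of what "systemctl enable" does for a hypothetical foo.service wanted by multi-user.target (unit file paths vary by distribution) is:

$ ln -s /lib/systemd/system/foo.service \
      /etc/systemd/system/multi-user.target.wants/foo.service

Recreating such links by hand against a filesystem mounted elsewhere is possible, but it is error-prone compared with letting the target's own systemd do the work.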

Manually launching the target system’s systemd is not practical either, since it is designed to be the PID 1 process on a system and manage all other processes, which would conflict with the already-running instance on the system used for the repairs.

Thankfully, systemd has the ability to launch containers, fully encapsulated GNU/Linux systems with their own PID 1 and environment that utilize various namespace features offered by the Linux kernel. Unlike tools like Docker and Rocket, systemd doesn’t require a container image to launch a container; it can launch one rooted at any point in the existing filesystem. This is done using the systemd-nspawn tool, which will create the necessary system namespaces and launch the initial process in the container, then provide a console in the container. In contrast to chroot, which only changes the apparent root of the filesystem, this type of container will have a separate filesystem namespace, suitable filesystems mounted on /dev, /run, and /proc, and separate process and IPC namespaces. Consult the systemd-nspawn man page to learn more about its capabilities.
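
As a quick contrast with chroot, pointing systemd-nspawn at a mounted root filesystem without --boot simply drops you into a shell inside fresh namespaces (a minimal sketch; the mount point is a placeholder, and the full boot-based procedure used in this article follows below):

$ systemd-nspawn --directory /mnt/some-rootfs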

An example to show how it works

In this example, the storage device containing the damaged system’s root filesystem has been attached to a running system, where it appears as /dev/vdc. The device name will vary based on the number of existing storage devices, the type of device, and the method used to connect it to the system. The root filesystem could use the entire storage device or be in a partition within the device; since the most common (simple) configuration places the root filesystem in the device’s first partition, this example will use /dev/vdc1. Make sure to replace the device name in the commands below with your system’s correct device name.
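
If you are unsure which name the attached device received, listing the block devices first is a safe check (output will differ from system to system):

$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT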

The damaged root filesystem may also be more complex than a single filesystem on a device; it may be a volume in an LVM volume set or on a set of devices combined into a software RAID device. In these cases, the necessary steps to compose and activate the logical device holding the filesystem must be performed before it will be available for mounting. Again, those steps are beyond the scope of this article.
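
For reference only, and not part of the example that follows, activating such devices typically involves scanning and activating LVM volume groups, or assembling a software RAID array:

$ vgscan
$ vgchange -ay
$ lvs
$ mdadm --assemble --scan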

Prerequisites

First, ensure the systemd-nspawn tool is installed—most GNU/Linux distributions don’t install it by default. It’s provided by the systemd-container package on most distributions, so use your distribution’s package manager to install that package. The instructions in this example were tested using Debian 9 but should work similarly on any modern GNU/Linux distribution.

Using the commands below will almost certainly require root permissions, so you’ll either need to log in as root, use sudo to obtain a shell with root permissions, or prefix each of the commands with sudo.

Verify and mount the filesystem

First, use fsck to verify the target filesystem’s structures and content:

$ fsck /dev/vdc1

If it finds any problems with the filesystem, answer the questions appropriately to correct them. If the filesystem is sufficiently damaged, it may not be repairable, in which case you’ll have to find other ways to extract its contents.

Now, create a temporary directory and mount the target filesystem onto that directory:

$ mkdir /tmp/target-rescue
$ mount /dev/vdc1 /tmp/target-rescue

With the filesystem mounted, launch a container with that filesystem as its root filesystem:

$ systemd-nspawn --directory /tmp/target-rescue --boot -- --unit rescue.target

The command-line arguments for launching the container are:

  • --directory /tmp/target-rescue provides the path of the container’s root filesystem.
  • --boot searches for a suitable init program in the container’s root filesystem and launches it, passing parameters from the command line to it. In this example, the target system also uses systemd as its PID 1 process, so the remaining parameters are intended for it. If the target system you are repairing uses any other tool as its PID 1 process, you’ll need to adjust the parameters accordingly.
  • -- separates parameters for systemd-nspawn from those intended for the container’s PID 1 process.
  • --unit rescue.target tells systemd in the container the name of the target it should try to reach during the boot process. In order to simplify the rescue operations in the target system, boot it into “rescue” mode rather than into its normal multi-user mode.

If all goes well, you should see output that looks similar to this:

Spawning container target-rescue on /tmp/target-rescue.

Press ^] three times within 1s to kill container.

systemd 232 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)

Detected virtualization systemd-nspawn.

Detected architecture arm.

Welcome to Debian GNU/Linux 9 (Stretch)!

Set hostname to <test>.

Failed to install release agent, ignoring: No such file or directory

[ OK ] Reached target Swap.

[ OK ] Listening on Journal Socket (/dev/log).

[ OK ] Started Dispatch Password Requests to Console Directory Watch.

[ OK ] Reached target Encrypted Volumes.

[ OK ] Created slice System Slice.

Mounting POSIX Message Queue File System…

[ OK ] Listening on Journal Socket.

Starting Set the console keyboard layout…

Starting Restore / save the current clock…

Starting Journal Service…

Starting Remount Root and Kernel File Systems…

[ OK ] Mounted POSIX Message Queue File System.

[ OK ] Started Journal Service.

[ OK ] Started Remount Root and Kernel File Systems.

Starting Flush Journal to Persistent Storage…

[ OK ] Started Restore / save the current clock.

[ OK ] Started Flush Journal to Persistent Storage.

[ OK ] Started Set the console keyboard layout.

[ OK ] Reached target Local File Systems (Pre).

[ OK ] Reached target Local File Systems.

Starting Create Volatile Files and Directories…

[ OK ] Started Create Volatile Files and Directories.

[ OK ] Reached target System Time Synchronized.

Starting Update UTMP about System Boot/Shutdown…

[ OK ] Started Update UTMP about System Boot/Shutdown.

[ OK ] Reached target System Initialization.

[ OK ] Started Rescue Shell.

[ OK ] Reached target Rescue Mode.

Starting Update UTMP about System Runlevel Changes…

[ OK ] Started Update UTMP about System Runlevel Changes.

You are in rescue mode. After logging in, type “journalctl -xb” to view

system logs, “systemctl reboot” to reboot, “systemctl default” or ^D to

boot into default mode.

Give root password for maintenance

(or press Control-D to continue):

In this output, you can see systemd launching as the init process in the container and detecting that it is being run inside a container so it can adjust its behavior appropriately. Various unit files are started to bring the container to a usable state, then the target system’s root password is requested. You can enter the root password here if you want a shell prompt with root permissions, or you can press Ctrl+D to allow the startup process to continue, which will display a normal console login prompt.

When you have completed the necessary changes to the target system, press Ctrl+] three times in rapid succession; this will terminate the container and return you to your original shell. From there, you can clean up by unmounting the target system’s filesystem and removing the temporary directory:

$ umount /tmp/target-rescue
$ rmdir /tmp/target-rescue

That’s it! You can now remove the target system’s storage device(s) and return them to the target system.

The idea to use systemd-nspawn this way, especially the --boot parameter, came from a question posted on StackExchange. Thanks to Shibumi and kirbyfan64sos for providing useful answers to this question!

Source

Snort Open Source IDS – ls /blog

Snort is an open source Intrusion Detection System that you can use on your Linux systems. This tutorial will go over basic configuration of Snort IDS and teach you how to create rules to detect different types of activities on the system.

For this tutorial, the network we will use is 10.0.0.0/24. Edit your /etc/snort/snort.conf file and replace the “any” next to $HOME_NET with your network information.

Alternatively, you can define specific IP addresses to monitor, separated by commas between [ ], as shown below:
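
Since the original screenshots are not reproduced here, the corresponding snort.conf lines look roughly like this (the addresses in the list form are placeholders, and older Snort versions use “var” instead of “ipvar”):

ipvar HOME_NET 10.0.0.0/24
# or a list of specific addresses:
ipvar HOME_NET [10.0.0.3,10.0.0.5]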

Now let’s get started and run this command on the command line:

# snort -d -l /var/log/snort/ -h 10.0.0.0/24 -A console -c /etc/snort/snort.conf

Where:
  • -d tells Snort to show the packet data
  • -l sets the logging directory
  • -h specifies the network to monitor
  • -A console instructs Snort to print alerts to the console
  • -c specifies the Snort configuration file

Let’s launch a fast scan from a different device using nmap:
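
The scan itself is not reproduced in the text; as a sketch, assuming the Snort host is 10.0.0.3 as in the hping3 example further down, a fast scan would be:

# nmap -F 10.0.0.3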

And let’s see what happens in the Snort console:

Snort detected the scan. Now, also from a different device, let’s launch a DoS attack using hping3:

# hping3 -c 10000 -d 120 -S -w 64 -p 21 --flood --rand-source 10.0.0.3

The device running the Snort console detects the bad traffic, as shown here:

Since we instructed Snort to save logs, we can read them by running:
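
The exact command is not reproduced here; as a sketch, the packet logs Snort wrote to /var/log/snort can typically be read back with Snort itself in read mode (the timestamp suffix is a placeholder for whatever file name appears in your log directory):

# ls /var/log/snort/
# snort -r /var/log/snort/snort.log.<timestamp>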

Introduction to Snort Rules

Snort’s NIDS mode works based on rules specified in the /etc/snort/snort.conf file.

Within the snort.conf file we can find commented and uncommented rules as you can see below:

The rules path is normally /etc/snort/rules; there we can find the rules files:
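
For example, listing that directory shows the per-category rules files (the exact list depends on which ruleset is installed):

# ls /etc/snort/rules/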

Let’s look at the rules against backdoors:
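
One way to find the NetBus rule discussed next, without assuming which file it lives in, is to grep the rules directory:

# grep -ri "netbus" /etc/snort/rules/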

There are several rules to detect backdoor attacks. Surprisingly, there is a rule against NetBus, a trojan horse that became popular a couple of decades ago. Let’s look at it, and I will explain its parts and how it works:

alert tcp $HOME_NET 20034 -> $EXTERNAL_NET any (msg:"BACKDOOR NetBus Pro 2.0 connection established";
flow:from_server,established; flowbits:isset,backdoor.netbus_2.connect;
content:"BN|10 00 02 00|"; depth:6; content:"|05 00|"; depth:2; offset:8;
classtype:misc-activity; sid:115; rev:9;)

This rule instructs Snort to alert about TCP connections from port 20034 on our protected network to any host and port on an external network. The parts of the rule are:

  • -> specifies the traffic direction, in this case from our protected network to an external one
  • msg instructs the alert to include a specific message when displayed
  • content searches for specific content within the packet; it can include text if placed between “ ” or binary data if placed between | |
  • depth specifies how far into the packet Snort searches for that content; in the rule above there are two different depth parameters for the two different contents
  • offset tells Snort at which byte of the packet to start searching for the content
  • classtype tells what kind of attack Snort is alerting about
  • sid:115 is the rule identifier

Creating our own rule

Now we’ll create a new rule to notify about incoming SSH connections. Open /etc/snort/rules/yourrule.rules, and inside paste the following text:

alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"SSH incoming";
flow:stateless; flags:S+; sid:100006927; rev:1;)

We are telling Snort to alert about any TCP connection from any external source to our SSH port (in this case the default port, 22), including the text message “SSH incoming” in the alert; stateless instructs Snort to ignore the connection’s state.

Now, we need to add the rule we created to our /etc/snort/snort.conf file. Open the config file in an editor, search for section #7 (the section with rules), and add an uncommented include line for the new rules file:

include $RULE_PATH/yourrule.rules

Instead of “yourrule.rules”, use your own file name; in my case it was test3.rules.

Once that is done, run Snort again and see what happens:

# snort -d -l /var/log/snort/ -h 10.0.0.0/24 -A console -c /etc/snort/snort.conf

SSH to your device from another device and see what happens:

You can see that the “SSH incoming” alert was triggered.

With this lesson, I hope you now know how to make basic rules and use them to detect activity on a system.

Full article:

https://linuxhint.com/configure-snort-ids-create-rules/

Source

SBC showcases Qualcomm’s 10nm, octa-core QCS605 IoT SoC

Intrinsyc’s compact “Open-Q 605” SBC for computer vision and edge AI applications runs Android 8.1 and Qualcomm’s Vision Intelligence Platform on Qualcomm’s IoT-focused, octa-core QCS605.

In April, Qualcomm announced its QCS605 SoC, calling it “the first 10nm FinFET fabricated SoC purpose built for the Internet of Things.” The octa-core Arm SoC is now available on Intrinsyc’s Open-Q 605 SBC; the full development kit, which includes a 12V power supply, is open for pre-orders at $429. The products will ship in early December.

Open-Q 605, front and back

The fact that Qualcomm is billing the high-end QCS605 as an IoT SoC reveals how demand for vision and AI processing on the edge is broadening the IoT definition to encompass a much higher range of embedded technology. The IoT focus is also reinforced by the lack of the usual Snapdragon branding. The QCS605 is accompanied by the Qualcomm Vision Intelligence Platform, a set of mostly software components that includes the Qualcomm Neural Processing SDK and camera processing software, as well as the company’s 802.11ac WiFi and Bluetooth connectivity and security technologies.

The QCS605 supports Linux and Android, but Intrinsyc supports its Open-Q 605 board only with Android 8.1.

Qualcomm QCS605 and Vision Intelligence Platform

The QCS605 SoC features 8x Kryo 300 CPU cores, two of which are 2.5GHz “gold” cores that are equivalent to Cortex-A75. The other six are 1.7GHz “silver” cores like the Cortex-A55 — Arm’s more powerful follow-on to Cortex-A53.

The QCS605 also integrates an Adreno 615 GPU, a Hexagon 685 DSP with Hexagon vector extensions (“HVX”), and a Spectra 270 ISP that supports dual 16-megapixel image sensors. Qualcomm also sells a QCS603 model that is identical except that it offers only 2x of the 1.7GHz “Silver” cores instead of six.

Qualcomm sells the QCS605 as part of a Vision Intelligence Platform — a combination of software and hardware starting with a Qualcomm AI Engine built around the Qualcomm Snapdragon Neural Processing Engine (NPE) software framework. The NPE provides analysis, optimization, and debugging tools for developing with Tensorflow, Caffe, and Caffe2 frameworks. The AI Engine also includes the Open Neural Network Exchange interchange format, the Android Neural Networks API, and the Qualcomm Hexagon Neural Network library, which together enable the porting of trained networks.

The Vision Intelligence Platform running on the QCS605 delivers up to 2.1 TOPS of compute performance for deep neural network inferences, claims Qualcomm. The platform also supports up to 4K60 resolution or 5.7K at 30fps and supports multiple concurrent video streams at lower resolutions.

Other features include “staggered” HDR to prevent ghost effects in high-dynamic range video. You also get advanced electronic image stabilization, de-warp, de-noise, chromatic aberration correction, and motion compensated temporal filters in hardware.

Inside the Open-Q 605 SBC

Along with the Snapdragon 600-based Open-Q 600, the Open-Q 605 is the only Open-Q development board that Intrinsyc refers to as an SBC. Most Open-Q kits are compute modules or sandwich-style carrier board starter kits based on Intrinsyc modules equipped with Snapdragon SoCs, such as the recent Snapdragon 670-based Open-Q 670 HDK.

Open-Q 605

The 68 x 50mm Open-Q 605 ships with an eMCP package with 4GB LPDDR4x RAM and 32GB eMMC flash, and additional storage is available via a microSD slot. Networking depends on the 802.11ac (WiFi 5) and Bluetooth 5.x radios. There’s also a Qualcomm GNSS receiver for location and 3x U.FL connectors.

The only real-world coastline port is a USB Type-C port that supports DisplayPort 1.4 with 4K at 60Hz support. If you’d rather use the Type-C port for USB or charging a user-supplied Li-Ion battery, you can turn to an HD-ready MIPI DSI interface with touch support. You also get 2x MIPI-CSI for dual cameras, as well as 2x analog audio.

The Open-Q 605 has a 76-pin expansion header for other interfaces, including an I2S/SLIMBus digital audio interface. The board runs on a 5-15V DC input and offers an extended -25 to 60°C operating range.

Specifications listed for the Open-Q 605 SBC include:

  • Processor — Qualcomm QCS605 with Vision Intelligence Platform (2x up to 2.5GHz and 6x up to 1.7GHz Kryo 300 cores); Adreno 615 GPU; Hexagon 685 DSP; Spectra 270 ISP; Qualcomm AI Engine and other VIP components
  • Memory/storage — 4GB LPDDR4X and 32GB eMMC flash in combo eMCP package; microSD slot.
  • Wireless:
    • 802.11b/g/n/ac 2×2 dual-band WiFi (Qualcomm WCN3990) with planned FCC/IC/CE certification
    • Bluetooth 5.x
    • Qualcomm GNSS (SDR660G) receiver with Qualcomm Location Suite Gen9 VT
    • U.FL antenna connectors for WiFi, BT, GNSS
  • Media I/O:
    • DisplayPort 1.4 via USB Type-C (up to 4K at 60Hz) with USB data concurrency (USB and power)
    • MIPI DSI (4-lane) with I2C touch interface on flex cable connector for up to 1080p30
    • 2x MIPI-CSI (4-lane) with micro-camera module connectors
    • 2x analog mic I/Ps, speaker O/P, headset I/O
    • I2S/SLIMBus digital audio interface with 2x DMIC ports (via 76-pin expansion header)
  • Expansion — 76-pin header (multiple SPI, I2C, UART, GPIO, and sensor I/O; digital and analog audio I/O, LED flash O/P, haptic O/P, power output rails)
  • Other features — 3x LEDs; 4x mounting holes; optional dev kit with quick start guide, docs, SW updates
  • Operating temperature — -25 to 60°C
  • Power — 5-15V DC jack and support for user-supplied Li-Ion battery with USB Type-C charging; PM670 + PM670L PMIC; 12V supply with dev kit
  • Dimensions — 68 x 50 x 13mm
  • Operating system — Android 8.1 Oreo

Further information

The Open-Q 605 SBC is available for pre-order in the full Development Kit version, which costs $429 and ships in early December. The SBC will also be sold on its own at an undisclosed price. More information may be found in Intrinsyc’s Open-Q 605 announcement, as well as the product page and shopping page.

Source

The Growing Significance Of DevOps For Data Science | Linux.com

DevOps involves infrastructure provisioning, configuration management, continuous integration and deployment, testing and monitoring. DevOps teams have been closely working with the development teams to manage the lifecycle of applications effectively.

Data science brings additional responsibilities to DevOps. Data engineering, a niche domain that deals with complex pipelines that transform the data, demands close collaboration of data science teams with DevOps. Operators are expected to provision highly available clusters of Apache Hadoop, Apache Kafka, Apache Spark and Apache Airflow that tackle data extraction and transformation. Data engineers acquire data from a variety of sources before leveraging Big Data clusters and complex pipelines for transforming it.

Source
