Linux Today – What’s New In Red Hat OpenStack Platform 14?

Nov 15, 2018

The Red Hat OpenStack Platform 14 release is based on the upstream OpenStack Rocky milestone, which first became publicly available on Aug. 30. Among the new features in OSP 14 are improved networking capabilities, including enhanced load balancing for container workloads. Red Hat is also continuing to push forward on the integration of its OpenShift Kubernetes container orchestration platform with OpenStack.

In a video interview with eWEEK, Mark McLoughlin, Senior Director of Engineering, OpenStack at Red Hat, outlined some of the new features in OSP 14 and the direction for the road ahead.

Source

Odd Realm is a sandbox settlement builder inspired by Dwarf Fortress and Rimworld with Linux support

Inspired by the likes of Dwarf Fortress and Rimworld, Odd Realm is a sandbox settlement builder currently in Early Access on itch.

A sweet find while browsing around for new games today; I came across it and was instantly pulled in by the style. It looks like it could be an interesting choice if you find Dwarf Fortress too complex or you fancy trying out something new.

You will face off against passing seasons, roaming bandits, underground horrors, and gods from legend, which makes it sound really quite fun.

Features:

  • 4 procedurally generated biomes (Desert, Taiga, Voidland, and Tropical)
  • 24+ Creatures
  • 100+ items, weapons, and gear
  • 100+ buildable blueprints for props, blocks, plants, trees, and platforms
  • 9+ Settler professions
  • Unique scenarios and encounters based on player decisions

Currently it's only available on itch.io for $10. The developer is planning to eventually release it on Steam, once it's developed enough and enough feedback has been gathered on how it's going on the itch store.

Source

New Linux-Targeting Crypto-Mining Malware Combines Hiding and Upgrading Capabilities

Japanese multinational cybersecurity firm Trend Micro has detected a new strain of crypto-mining malware that targets PCs running Linux, according to a report published Nov. 8.

The new strain is reportedly able to hide the malicious process of unauthorized cryptocurrency-mining through users’ CPU by implementing a rootkit component. The malware itself, detected by Trend Micro as Coinminer.Linux.KORKERDS.AB, is also reportedly capable of updating itself.

According to the report, the combination of hiding and self-upgrading capabilities gives the malware a great advantage. While the rootkit fails to hide the increased CPU usage and the presence of the running crypto-mining malware, the malware is also improved by updates, which can completely repurpose the existing code or tools by editing a few lines of code, the report notes.

The new crypto-mining malware strain infects Linux PCs via third-party or compromised plugins. Once installed, the plugin reportedly gets admin rights, allowing the malware to run with the privileges granted to the application. In this regard, Trend Micro mentioned another case of Linux-targeting crypto malware that used the same entry point, which took place in September this year.

Based on web server statistics, the estimated market share of Linux on personal computers was around 1.8 percent in 2016. The share of Microsoft Windows systems in 2016 was around 89.7 percent, while Mac OS served around 8.5 percent of users.

Recently, Cointelegraph reported that a group of South Korean hackers will face trial for a cryptojacking case in which they allegedly infected more than 6,000 computers with crypto-mining malware.

In September, a report revealed that leaked code targeting Microsoft systems, which hackers allegedly stole from the U.S. National Security Agency (NSA), sparked a fivefold increase in cryptocurrency mining malware infections.

Source

6 Best Practices for High-Performance Serverless Engineering | Linux.com

When you write your first few lambdas, performance is the last thing on your mind. Permissions, security, identity and access management (IAM) roles, and triggers all conspire to make the first couple of lambdas a struggle, even after a “hello world” trial just to get your first serverless deployments up and working. But once your users begin to rely on the services your lambdas provide, it’s time to focus on high-performance serverless.

Here are some key things to remember when you’re trying to produce high-performance serverless applications.

1. Observability
Serverless handles scaling really well. But as scale interacts with complexity, slowdowns and bugs are inevitable. I’ll be frank: these can be a bear if you don’t plan for observability from the start.

Read more at The New Stack

Source

How to use systemd-nspawn for Linux system recovery

For as long as GNU/Linux systems have existed, system administrators have needed to recover from root filesystem corruption, accidental configuration changes, or other situations that kept the system from booting into a “normal” state.

Linux distributions typically offer one or more menu options at boot time (for example, in the GRUB menu) that can be used for rescuing a broken system; typically these boot the system into single-user mode with most system services disabled. In the worst case, the user can modify the kernel command line in the bootloader to use a standard shell as the init (PID 1) process. This method is the most complex and fraught with complications, which can lead to frustration and lost time when a system needs rescuing.

Most importantly, these methods all assume that the damaged system has a physical console of some sort, but this is no longer a given in the age of cloud computing. Without a physical console, there are few (if any) options to influence the boot process this way. Even physical machines may be small, embedded devices that don’t offer an easy-to-use console, and finding the proper serial port cables and adapters and setting up a serial terminal emulator, all to use a serial console port while dealing with an emergency, is often complicated.

When another system (of the same architecture and generally similar configuration) is available, a common technique to simplify the repair process is to extract the storage device(s) from the damaged system and connect them to the working system as secondary devices. With physical systems, this is usually straightforward, but most cloud computing platforms can also support this since they allow the root storage volume of the damaged instance to be mounted on another instance.

Once the root filesystem is attached to another system, addressing filesystem corruption is straightforward using fsck and other tools. Addressing configuration mistakes, broken packages, or other issues can be more complex since they require mounting the filesystem and locating and changing the correct configuration files or databases.

Using systemd

Before systemd, editing configuration files with a text editor was a practical way to correct a configuration. Locating the necessary files and understanding their contents may be a separate challenge, which is beyond the scope of this article.

When the GNU/Linux system uses systemd though, many configuration changes are best made using the tools it provides—enabling and disabling services, for example, requires the creation or removal of symbolic links in various locations. The systemctl tool is used to make these changes, but using it requires a systemd instance to be running and listening (on D-Bus) for requests. When the root filesystem is mounted as an additional filesystem on another machine, the running systemd instance can’t be used to make these changes.

Manually launching the target system’s systemd is not practical either, since it is designed to be the PID 1 process on a system and manage all other processes, which would conflict with the already-running instance on the system used for the repairs.

Thankfully, systemd has the ability to launch containers: fully encapsulated GNU/Linux systems with their own PID 1 and environment that utilize various namespace features offered by the Linux kernel. Unlike tools like Docker and Rocket, systemd doesn’t require a container image to launch a container; it can launch one rooted at any point in the existing filesystem. This is done using the systemd-nspawn tool, which creates the necessary system namespaces, launches the initial process in the container, and then provides a console in the container. In contrast to chroot, which only changes the apparent root of the filesystem, this type of container will have a separate filesystem namespace, suitable filesystems mounted on /dev, /run, and /proc, and separate process and IPC namespaces. Consult the systemd-nspawn man page to learn more about its capabilities.

An example to show how it works

In this example, the storage device containing the damaged system’s root filesystem has been attached to a running system, where it appears as /dev/vdc. The device name will vary based on the number of existing storage devices, the type of device, and the method used to connect it to the system. The root filesystem could use the entire storage device or be in a partition within the device; since the most common (simple) configuration places the root filesystem in the device’s first partition, this example will use /dev/vdc1. Make sure to replace the device name in the commands below with your system’s correct device name.

The damaged root filesystem may also be more complex than a single filesystem on a device; it may be a volume in an LVM volume set or on a set of devices combined into a software RAID device. In these cases, the necessary steps to compose and activate the logical device holding the filesystem must be performed before it will be available for mounting. Again, those steps are beyond the scope of this article.
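For the common LVM case, the sketch below shows the general shape of those steps; the volume group and logical volume names ("vg0" and "root") are assumptions and must be replaced with the names that vgscan and lvs actually report on your system.

```shell
# Hedged sketch (run as root): activate an LVM-hosted root filesystem
# on the attached disk before mounting it. "vg0" and "root" are
# assumed names, not values taken from this example system.
activate_lvm_root() {
    vgscan            # discover volume groups on the attached disks
    vgchange -ay vg0  # activate the assumed volume group
    lvs               # list its logical volumes, e.g. vg0/root
}
```

With the volume group active, the filesystem is then mounted from a device such as /dev/vg0/root instead of a plain partition like /dev/vdc1.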

Prerequisites

First, ensure the systemd-nspawn tool is installed—most GNU/Linux distributions don’t install it by default. It’s provided by the systemd-container package on most distributions, so use your distribution’s package manager to install that package. The instructions in this example were tested using Debian 9 but should work similarly on any modern GNU/Linux distribution.

Using the commands below will almost certainly require root permissions, so you’ll either need to log in as root, use sudo to obtain a shell with root permissions, or prefix each of the commands with sudo.

Verify and mount the filesystem

First, use fsck to verify the target filesystem’s structures and content:

$ fsck /dev/vdc1

If it finds any problems with the filesystem, answer the questions appropriately to correct them. If the filesystem is sufficiently damaged, it may not be repairable, in which case you’ll have to find other ways to extract its contents.

Now, create a temporary directory and mount the target filesystem onto that directory:

$ mkdir /tmp/target-rescue
$ mount /dev/vdc1 /tmp/target-rescue

With the filesystem mounted, launch a container with that filesystem as its root filesystem:

$ systemd-nspawn --directory /tmp/target-rescue --boot -- --unit rescue.target

The command-line arguments for launching the container are:

  • --directory /tmp/target-rescue provides the path of the container’s root filesystem.
  • --boot searches for a suitable init program in the container’s root filesystem and launches it, passing parameters from the command line to it. In this example, the target system also uses systemd as its PID 1 process, so the remaining parameters are intended for it. If the target system you are repairing uses any other tool as its PID 1 process, you’ll need to adjust the parameters accordingly.
  • -- separates parameters for systemd-nspawn from those intended for the container’s PID 1 process.
  • --unit rescue.target tells systemd in the container the name of the target it should try to reach during the boot process. In order to simplify the rescue operations in the target system, boot it into “rescue” mode rather than into its normal multi-user mode.

If all goes well, you should see output that looks similar to this:

Spawning container target-rescue on /tmp/target-rescue.

Press ^] three times within 1s to kill container.

systemd 232 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)

Detected virtualization systemd-nspawn.

Detected architecture arm.

Welcome to Debian GNU/Linux 9 (Stretch)!

Set hostname to <test>.

Failed to install release agent, ignoring: No such file or directory

[ OK ] Reached target Swap.

[ OK ] Listening on Journal Socket (/dev/log).

[ OK ] Started Dispatch Password Requests to Console Directory Watch.

[ OK ] Reached target Encrypted Volumes.

[ OK ] Created slice System Slice.

Mounting POSIX Message Queue File System…

[ OK ] Listening on Journal Socket.

Starting Set the console keyboard layout…

Starting Restore / save the current clock…

Starting Journal Service…

Starting Remount Root and Kernel File Systems…

[ OK ] Mounted POSIX Message Queue File System.

[ OK ] Started Journal Service.

[ OK ] Started Remount Root and Kernel File Systems.

Starting Flush Journal to Persistent Storage…

[ OK ] Started Restore / save the current clock.

[ OK ] Started Flush Journal to Persistent Storage.

[ OK ] Started Set the console keyboard layout.

[ OK ] Reached target Local File Systems (Pre).

[ OK ] Reached target Local File Systems.

Starting Create Volatile Files and Directories…

[ OK ] Started Create Volatile Files and Directories.

[ OK ] Reached target System Time Synchronized.

Starting Update UTMP about System Boot/Shutdown…

[ OK ] Started Update UTMP about System Boot/Shutdown.

[ OK ] Reached target System Initialization.

[ OK ] Started Rescue Shell.

[ OK ] Reached target Rescue Mode.

Starting Update UTMP about System Runlevel Changes…

[ OK ] Started Update UTMP about System Runlevel Changes.

You are in rescue mode. After logging in, type "journalctl -xb" to view

system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to

boot into default mode.

Give root password for maintenance

(or press Control-D to continue):

In this output, you can see systemd launching as the init process in the container and detecting that it is being run inside a container so it can adjust its behavior appropriately. Various unit files are started to bring the container to a usable state, then the target system’s root password is requested. You can enter the root password here if you want a shell prompt with root permissions, or you can press Ctrl+D to allow the startup process to continue, which will display a normal console login prompt.

When you have completed the necessary changes to the target system, press Ctrl+] three times in rapid succession; this will terminate the container and return you to your original shell. From there, you can clean up by unmounting the target system’s filesystem and removing the temporary directory:

$ umount /tmp/target-rescue
$ rmdir /tmp/target-rescue

That’s it! You can now remove the target system’s storage device(s) and return them to the target system.
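For reference, the whole procedure can be collected into one small script. This is a hedged sketch, assuming the same device and mount point used in this example; adjust DEV and MNT for your system and run it as root.

```shell
#!/bin/sh
# End-to-end sketch of the rescue procedure: check the filesystem,
# mount it, boot a rescue container, then clean up.
# DEV and MNT are assumptions; substitute your own values.
DEV=${DEV:-/dev/vdc1}
MNT=${MNT:-/tmp/target-rescue}

rescue() {
    set -e
    fsck "$DEV"         # verify the target filesystem first
    mkdir -p "$MNT"
    mount "$DEV" "$MNT"
    systemd-nspawn --directory "$MNT" --boot -- --unit rescue.target
    umount "$MNT"       # clean up once the container exits
    rmdir "$MNT"
}
```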

The idea to use systemd-nspawn this way, especially the --boot parameter, came from a question posted on StackExchange. Thanks to Shibumi and kirbyfan64sos for providing useful answers to this question!

Source

Snort Open Source IDS – ls /blog

Snort is an open source Intrusion Detection System that you can use on your Linux systems. This tutorial will go over basic configuration of Snort IDS and teach you how to create rules to detect different types of activities on the system.

For this tutorial the network we will use is: 10.0.0.0/24. Edit your /etc/snort/snort.conf file and replace the “any” next to $HOME_NET with your network information, as shown in the example screenshot below:

Alternatively, you can also define specific IP addresses to monitor, separated by commas, between [ ] as shown in this screenshot:

Now let’s get started and run this command on the command line:

# snort -d -l /var/log/snort/ -h 10.0.0.0/24 -A console -c /etc/snort/snort.conf

Where:
  • -d tells Snort to show packet data
  • -l sets the logs directory
  • -h specifies the network to monitor
  • -A console instructs Snort to print alerts to the console
  • -c specifies the Snort configuration file

Let’s launch a fast scan from a different device using nmap:

And let’s see what happens in the Snort console:

Snort detected the scan. Now, also from a different device, let’s launch a DoS attack using hping3:

# hping3 -c 10000 -d 120 -S -w 64 -p 21 --flood --rand-source 10.0.0.3

The device running Snort is detecting the bad traffic, as shown here:

Since we instructed Snort to save logs, we can read them by running:
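A typical invocation replays Snort’s binary log with the -r flag; the timestamped filename below is only an example (the suffix differs on every system), so list the directory first.

```shell
# Hedged sketch: list the Snort log directory, then replay a capture.
# The snort.log timestamp suffix here is an example, not a real file.
read_snort_logs() {
    ls /var/log/snort/
    snort -r /var/log/snort/snort.log.1543276800
}
```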

Introduction to Snort Rules

Snort’s NIDS mode works based on rules specified in the /etc/snort/snort.conf file.

Within the snort.conf file we can find commented and uncommented rules as you can see below:

The rules path is normally /etc/snort/rules, where we can find the rules files:

Let’s see the rules against backdoors:

There are several rules to prevent backdoor attacks. Surprisingly, there is a rule against NetBus, a trojan horse that became popular a couple of decades ago. Let’s look at it, and I will explain its parts and how it works:

alert tcp $HOME_NET 20034 -> $EXTERNAL_NET any (msg:"BACKDOOR NetBus Pro 2.0 connection
established"; flow:from_server,established;
flowbits:isset,backdoor.netbus_2.connect; content:"BN|10 00 02 00|"; depth:6;
content:"|05 00|"; depth:2; offset:8; classtype:misc-activity; sid:115; rev:9;)

This rule instructs Snort to alert about TCP connections from port 20034 on our protected network to any port on an external network.

-> = specifies the traffic direction, in this case from our protected network to an external one

msg = instructs Snort to include a specific message in the alert when it is displayed

content = searches for specific content within the packet; it can be text between " " or binary data between | |
depth = how far into the packet payload Snort should search for the content; in the rule above there are two different depth parameters for the two different contents
offset = tells Snort at which byte of the payload to start searching for the content
classtype = tells what kind of attack Snort is alerting about

sid:115 = the rule identifier

Creating our own rule

Now we’ll create a new rule to notify about incoming SSH connections. Open /etc/snort/rules/yourrule.rules, and paste the following text into it:

alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"SSH incoming";
flow:stateless; flags:S+; sid:100006927; rev:1;)

We are telling Snort to alert about any TCP connection from any external source to our SSH port (in this case, the default port 22), including the text message "SSH incoming" in the alert; flow:stateless instructs Snort to ignore the connection’s state.
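Following the same pattern, a rule to flag inbound ICMP echo requests (pings) might look like the sketch below; the sid here is an arbitrary example value that simply must not collide with existing rules.

```
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP ping detected"; itype:8; sid:100006928; rev:1;)
```

Like the SSH rule, it would go in a .rules file that snort.conf includes.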

Now, we need to add the rule we created to our /etc/snort/snort.conf file. Open the config file in an editor and search for #7, which is the section with rules. Add an uncommented rule like in the image above by adding:

include $RULE_PATH/yourrule.rules

Instead of “yourrule.rules”, use your own file name; in my case it was test3.rules.

Once it is done run Snort again and see what happens.

# snort -d -l /var/log/snort/ -h 10.0.0.0/24 -A console -c /etc/snort/snort.conf

ssh to your device from another device and see what happens:

You can see that SSH incoming was detected.

With this lesson, I hope you now know how to make basic rules and use them to detect activity on a system.

Full article:

https://linuxhint.com/configure-snort-ids-create-rules/

Source

SBC showcases Qualcomm’s 10nm, octa-core QCS605 IoT SoC

Intrinsyc’s compact “Open-Q 605” SBC for computer vision and edge AI applications runs Android 8.1 and Qualcomm’s Vision Intelligence Platform on Qualcomm’s IoT-focused, octa-core QCS605.

In April, Qualcomm announced its QCS605 SoC, calling it “the first 10nm FinFET fabricated SoC purpose built for the Internet of Things.” The octa-core Arm SoC is now available on Intrinsyc’s Open-Q 605 SBC, whose full development kit, including a 12V power supply, is open for pre-orders at $429. The products will ship in early December.

Open-Q 605, front and back

The fact that Qualcomm is billing the high-end QCS605 as an IoT SoC reveals how demand for vision and AI processing on the edge is broadening the IoT definition to encompass a much higher range of embedded technology. The IoT focus is also reinforced by the lack of the usual Snapdragon branding. The QCS605 is accompanied by the Qualcomm Vision Intelligence Platform, a set of mostly software components that includes the Qualcomm Neural Processing SDK and camera processing software, as well as the company’s 802.11ac WiFi and Bluetooth connectivity and security technologies.

The QCS605 supports Linux and Android, but Intrinsyc supports its Open-Q 605 board only with Android 8.1.

Qualcomm QCS605 and Vision Intelligence Platform

The QCS605 SoC features 8x Kryo 300 CPU cores, two of which are 2.5GHz “gold” cores that are equivalent to Cortex-A75. The other six are 1.7GHz “silver” cores like the Cortex-A55 — Arm’s more powerful follow-on to Cortex-A53.

The QCS605 also integrates an Adreno 615 GPU, a Hexagon 685 DSP with Hexagon vector extensions (“HVX”), and a Spectra 270 ISP that supports dual 16-megapixel image sensors. Qualcomm also sells a QCS603 model that is identical except that it offers only two of the 1.7GHz “silver” cores instead of six.

Qualcomm sells the QCS605 as part of a Vision Intelligence Platform — a combination of software and hardware starting with a Qualcomm AI Engine built around the Qualcomm Snapdragon Neural Processing Engine (NPE) software framework. The NPE provides analysis, optimization, and debugging tools for developing with Tensorflow, Caffe, and Caffe2 frameworks. The AI Engine also includes the Open Neural Network Exchange interchange format, the Android Neural Networks API, and the Qualcomm Hexagon Neural Network library, which together enable the porting of trained networks.

The Vision Intelligence Platform running on the QCS605 delivers up to 2.1 TOPS of compute performance for deep neural network inferences, claims Qualcomm. The platform also supports up to 4K60 resolution or 5.7K at 30fps and supports multiple concurrent video streams at lower resolutions.

Other features include “staggered” HDR to prevent ghost effects in high-dynamic range video. You also get advanced electronic image stabilization, de-warp, de-noise, chromatic aberration correction, and motion compensated temporal filters in hardware.

Inside the Open-Q 605 SBC

Along with the Snapdragon 600 based Open-Q 600, the Open-Q 605 is the only Open-Q development board that Intrinsyc refers to as an SBC. Most Open-Q kits are compute modules or sandwich-style carrier board starter kits based on Intrinsyc modules equipped with Snapdragon SoCs, such as the recent Snapdragon 670 based Open-Q 670 HDK.

Open-Q 605

The 68 x 50mm Open-Q 605 ships with an eMCP package with 4GB LPDDR4x RAM and 32GB eMMC flash, and additional storage is available via a microSD slot. Networking depends on the 802.11ac (WiFi 5) and Bluetooth 5.x radios. There’s also a Qualcomm GNSS receiver for location and 3x U.FL connectors.

The only real-world coastline port is a USB Type-C port that supports DisplayPort 1.4. If you’d rather use the Type-C port for USB or for charging a user-supplied Li-Ion battery, you can turn to an HD-ready MIPI DSI interface with touch support. You also get 2x MIPI-CSI for dual cameras, as well as 2x analog audio.

The Open-Q 605 has a 76-pin expansion header for other interfaces, including an I2S/SLIMBus digital audio interface. The board runs on a 5-15V DC input and offers an extended -25 to 60°C operating range.

Specifications listed for the Open-Q 605 SBC include:

  • Processor — Qualcomm QCS605 with Vision Intelligence Platform (2x up to 2.5GHz and 6x up to 1.7GHz Kryo 300 cores); Adreno 615 GPU; Hexagon 685 DSP; Spectra 270 ISP; Qualcomm AI Engine and other VIP components
  • Memory/storage — 4GB LPDDR4X and 32GB eMMC flash in combo eMCP package; microSD slot.
  • Wireless:
    • 802.11b/g/n/ac 2×2 dual-band WiFi (Qualcomm WCN3990) with planned FCC/IC/CE certification
    • Bluetooth 5.x
    • Qualcomm GNSS (SDR660G) receiver with Qualcomm Location Suite Gen9 VT
    • U.FL antenna connectors for WiFi, BT, GNSS
  • Media I/O:
    • DisplayPort 1.4 via USB Type-C with USB data concurrency (USB and power)
    • MIPI DSI (4-lane) with I2C touch interface on flex cable connector for up to 1080p30
    • 2x MIPI-CSI (4-lane) with micro-camera module connectors
    • 2x analog mic I/Ps, speaker O/P, headset I/O
    • I2S/SLIMBus digital audio interface with 2x DMIC ports (via 76-pin expansion header)
  • Expansion — 76-pin header (multiple SPI, I2C, UART, GPIO, and sensor I/O; digital and analog audio I/O, LED flash O/P, haptic O/P, power output rails)
  • Other features — 3x LEDs; 4x mounting holes; optional dev kit with quick start guide, docs, SW updates
  • Operating temperature — -25 to 60°C
  • Power — 5-15V DC jack and support for user-supplied Li-Ion battery with USB Type-C charging; PM670 + PM670L PMIC; 12V supply with dev kit
  • Dimensions — 68 x 50 x 13mm
  • Operating system — Android 8.1 Oreo

Further information

The Open-Q 605 SBC is available for pre-order in the full Development Kit version, which costs $429 and ships in early December. The SBC will also be sold on its own at an undisclosed price. More information may be found in Intrinsyc’s Open-Q 605 announcement, as well as the product page and shopping page.

Source

The Growing Significance Of DevOps For Data Science | Linux.com

DevOps involves infrastructure provisioning, configuration management, continuous integration and deployment, testing and monitoring. DevOps teams have been closely working with the development teams to manage the lifecycle of applications effectively.

Data science brings additional responsibilities to DevOps. Data engineering, a niche domain that deals with complex pipelines that transform the data, demands close collaboration of data science teams with DevOps. Operators are expected to provision highly available clusters of Apache Hadoop, Apache Kafka, Apache Spark and Apache Airflow that tackle data extraction and transformation. Data engineers acquire data from a variety of sources before leveraging Big Data clusters and complex pipelines for transforming it.

Source

Is your startup built on open source? 9 tips for getting started

When I started Gluu in 2009, I had no idea how difficult it would be to start an open source software company. Using the open source development methodology seemed like a good idea, especially for infrastructure software based on protocols defined by open standards. By nature, entrepreneurs are optimistic—we underestimate the difficulty of starting a business. However, Gluu was my fourth business, so I thought I knew what I was in for. But I was in for a surprise!

Every business is unique. One of the challenges of serial entrepreneurship is that a truth that was core to the success of a previous business may be incorrect in your next business. Building a business around open source forced me to change my plan. How to find the right team members, how to price our offering, how to market our product—all of these aspects of starting a business (and more) were impacted by the open source mission and required an adjustment from my previous experience.

A few years ago, we started to question whether Gluu was pursuing the right business model. The business was growing, but not as fast as we would have liked.

One of the things we did at Gluu was to prepare a “business model canvas,” an approach detailed in the book Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers by Yves Pigneur and Alexander Osterwalder. This is a thought-provoking exercise for any business at any stage. It helped us consider our business more holistically. A business is more than a stream of revenue. You need to think about how you segment the market, how to interact with customers, what are your sales channels, what are your key activities, what is your value proposition, what are your expenses, partnerships, and key resources. We’ve done this a few times over the years because a business model naturally evolves over time.

In 2016, I started to wonder how other open source businesses were structuring their business models. Business Model Generation talks about three types of companies: product innovation, customer relationship, and infrastructure.

  • Product innovation companies are first to market with new products and can get a lot of market share because they are first.
  • Customer relationship companies have a wider offering and need to get “wallet share” not market share.
  • Infrastructure companies are very scalable but need established operating procedures and lots of capital.

It’s hard to figure out what models and types of business other open source software companies are pursuing by just looking at their website. And most open source companies are private—so there are no SEC filings to examine.

To find out more, I went to the web. I found a great talk from Mike Olson, Founder and Chief Strategy Officer at Cloudera, about open source business models. It was recorded as part of a Stanford business lecture series. I wanted more of these kinds of talks! But I couldn’t find any. That’s when I got the idea to start a podcast where I interview founders of open source companies and ask them to describe what business model they are pursuing.

In 2018, this idea became a reality when we started a podcast called Open Source Underdogs. So far, we have recorded nine episodes. There is a lot of great content in all the episodes, but I thought it would be fun to share one piece of advice from each.

Advice from 9 open source businesses

Peter Wang, CTO of Anaconda: “Investors coming in to help put more gas in your gas tank want to understand what road you’re on and how far you want to go. If you can’t communicate to investors on a basis that they understand about your business model and revenue model, then you have no business asking them for their money. Don’t get mad at them!”

Jim Thompson, Founder of Netgate: “Businesses survive at the whim of their customers. Solving customer problems and providing value to the business is literally why you have a business!”

Michael Howard, CEO of MariaDB: “My advice to open source software startups? It depends what part of the stack you’re in. If you’re infrastructure, you have no choice but to be open source.”

Ian Tien, CEO of Mattermost: “You want to build something that people love. So start with roles that open source can play in your vision for the product, the distribution model, the community you want to build, and the business you want to build.”

Mike Olson, Founder and Chief Strategy Officer at Cloudera: “A business model is a complex construct. Open source is a really important component of strategic thinking. It’s a great distributed development model. It’s a genius, low-cost distribution model—and those have a bunch of advantages. But you need to think about how you’re going to get paid.”

Eliot Horowitz, Founder of MongoDB: “The most important thing, whether it’s open source or not open source, is to get incredibly close to your users.”

Tom Hatch, CEO of SaltStack: “Being able to build an internal culture and a management mindset that deals with open source, and profits from open source, and functions in a stable and responsible way with regard to open source is one of the big challenges you’re going to face. It’s one thing to make a piece of open source software and get people to use it. It’s another to build a company on top of that open source.”

Matt Mullenweg, CEO of Automattic: “Open source businesses aren’t that different from normal businesses. A mistake that we made, that others can avoid, is not incorporating the best leaders and team members in functions like marketing and sales.”

Gabriel Engel, CEO of RocketChat: “Moving from a five-person company, where you are the center of the company, and it’s easy to know what everyone is doing, and everyone relies on you for decisions, to a 40-person company—that transition is harder than expected.”

What we’ve learned

After recording these podcasts, we’ve tweaked Gluu’s business model a little. It’s become clearer that we need to embrace open core—we’ve been over-reliant on support revenue. It’s a direction we had been going, but listening to our podcast’s guests supported our decision.

We have many new episodes lined up for 2018 and 2019, including conversations with the founders of Liferay, Couchbase, TimescaleDB, Canonical, Redis, and more, who are sure to offer even more great insights about the open source software business. You can find all the podcast episodes by searching for “Open Source Underdogs” on iTunes and Google podcasts or by visiting our website. We want to hear your opinions and any ideas you have to help us improve the podcast, so after you listen, please leave us a review.

Source
