What Metrics Matter: A Guide for Open Source Projects


“Without data, you’re just a person with an opinion.”

Those are the words of W. Edwards Deming, the champion of statistical process control, who was credited as one of the inspirations for what became known as the Japanese post-war economic miracle of 1950 to 1960. Ironically, Japanese manufacturers like Toyota were far more receptive to Deming’s ideas than General Motors and Ford were.

Community management is certainly an art. It’s about mentoring. It’s about having difficult conversations with people who are hurting the community. It’s about negotiation and compromise. It’s about interacting with other communities. It’s about making connections. In the words of Red Hat’s Diane Mueller, it’s about “nurturing conversations.”

However, it’s also about metrics and data.

Some of these metrics have much in common with those of software development projects more broadly. Others are more specific to the management of the community itself. I think of deciding what to measure, and how, as adhering to five principles.

1. Recognize that behaviors aren’t independent of the measurements you choose to highlight.

In 2008, Dan Ariely published Predictably Irrational, one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: “Human beings adjust behavior based on the metrics they’re held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you’ll get. Period.”

This shouldn’t be surprising. It’s a finding that’s been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It’s certainly not news to anyone in sales management, for example. Base sales reps’ (or their managers’) bonuses solely on revenue, and they’ll try to discount whatever it takes to maximize revenue, even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the spiffs? Probably not happening.

And lest you think I’m unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in a 2010 Harvard Business Review article: “CEOs care about stock value because that’s how we measure them. If we want to change what they care about, we should change what we measure.”

Developers and other community members are not immune.

2. You need to choose relevant metrics.

There’s a lot of folk wisdom floating around about what’s relevant and important that’s not necessarily true. My colleague Dave Neary offers an example from baseball: “In the late ’90s, the key measurements that were used to measure batter skill were RBI (runs batted in) and batting average (how often a player got on base with a hit, divided by the number of at-bats). The Oakland A’s were the first major league team to recruit based on a different measurement of player performance: on-base percentage. This measures how often they get to first base, regardless of how it happens.”

Indeed, the whole revolution of sabermetrics in baseball and elsewhere, which was popularized in Michael Lewis’ Moneyball, often gets talked about in terms of introducing data in a field that historically was more about gut feel and personal experience. But it was also about taking a game that had actually always been fairly numbers-obsessed and coming up with new metrics based on mostly existing data to better measure player value. (The data revolution going on in sports today is more about collecting much more data through video and other means than was previously available.)

3. Quantity may not lead to quality.

As a corollary, collecting lots of tangential but easy-to-capture data isn’t better than just selecting a few measurements you’ve determined are genuinely useful. In a world where online behavior can be tracked with great granularity and displayed in colorful dashboards, it’s tempting to be distracted by sheer data volume, even when it doesn’t deliver any great insight into community health and trajectory.

This may seem like an obvious point: Why measure something that isn’t relevant? In practice, metrics often get chosen because they’re easy to measure, not because they’re particularly useful. They tend to be more about inputs than outputs: The number of developers. The number of forum posts. The number of commits. Collectively, measures like this often get called vanity metrics. They’re ubiquitous, but most people involved with community management don’t think much of them.

Number of downloads may be the worst of the bunch. It’s true that, at some level, they’re an indication of interest in a project. That’s something. But it’s sufficiently distant from actively using the project, much less engaging with the project deeply, that it’s hard to view downloads as a very useful number.

Is there any harm in these vanity metrics? Yes, to the degree that you start thinking that they’re something to base action on. Probably more seriously, stakeholders like company management or industry observers can come to see them as meaningful indicators of project health.

4. Understand what measurements really mean and how they relate to each other.

Neary makes this point to caution against myopia. “In one project I worked on,” he says, “some people were concerned about a recent spike in the number of bug reports coming in because it seemed like the project must have serious quality issues to resolve. However, when we looked at the numbers, it turned out that many of the bugs were coming in because a large company had recently started using the project. The increase in bug reports was actually a proxy for a big influx of new users, which was a good thing.”

In practice, you often have to measure through proxies. This isn’t an inherent problem, but the further you get between what you want to measure and what you’re actually measuring, the harder it is to connect the dots. It’s fine to track progress in closing bugs, writing code, and adding new features. However, those don’t necessarily correlate with how happy users are or whether the project is doing a good job of working towards its long-term objectives, whatever those may be.

5. Different measurements serve different purposes.

Some measurements may be non-obvious but useful for tracking the success of a project and community relative to internal goals. Others may be better suited for a press release or other external consumption. For example, as a community manager, you may really care about the number of meetups, mentoring sessions, and virtual briefings your community has held over the past three months. But it’s the number of contributions and contributors that are more likely to grab the headlines. You probably care about those too. But maybe not as much, depending upon your current priorities.

Still, other measurements may relate to the goals of any sponsoring organizations. The measurements most relevant for projects tied to commercial products are likely to be different from pure community efforts.

Because communities differ and goals differ, it’s not possible to simply compile a metrics checklist, but here are some ideas to think about:

Consider qualitative metrics in addition to quantitative ones. Conducting surveys and other studies can be time-consuming, especially if they’re rigorous enough to yield better-than-anecdotal data. It also requires rigor to construct studies so that they can be used to track changes over time. In other words, it’s a lot easier to measure quantitative contributor activity than it is to suss out if the community members are happier about their participation today than they were a year ago. However, given the importance of culture to the health of a community, measuring it in a systematic way can be a worthwhile exercise.

Breadth of community, including how many are unaffiliated with commercial entities, is important for many projects. The greater the breadth, the greater the potential leverage of the open source development process. It can also be instructive to see how companies and individuals are contributing. Projects can be explicitly designed to better accommodate casual contributors.

Are new contributors able to have an impact, or are they ignored? How long does it take for code contributions to get committed? How long does it take for a reported bug to be fixed or otherwise responded to? If they asked a question in a forum, did anyone answer them? In other words, are you letting contributors contribute?

Advancement within the project is also an important metric. Mikeal Rogers of the Node.js community explains: “The shift that we made was to create a support system and an education system to take a user and turn them into a contributor, first at a very low level, and educate them to bring them into the committer pool and eventually into the maintainer pool. The end result of this is that we have a wide range of skill sets. Rather than trying to attract phenomenal developers, we’re creating new phenomenal developers.”

Whatever metrics you choose, don’t forget why you made them metrics in the first place. I find a helpful question to ask is: “What am I going to do with this number?” If the answer is to just put it in a report or in a press release, that’s not a great answer. Metrics should be measurements that tell you either that you’re on the right path or that you need to take specific actions to course-correct.

For this reason, Stormy Peters, who handles community leads at Red Hat, argues for keeping it simple. She writes, “It’s much better to have one or two key metrics than to worry about all the possible metrics. You can capture all the possible metrics, but as a project, you should focus on moving one. It’s also better to have a simple metric that correlates directly to something in the real world than a metric that is a complicated formula or ratio between multiple things. As project members make decisions, you want them to be able to intuitively feel whether or not it will affect the project’s key metric in the right direction.”

Source

A Personal Streaming Server to Stream Music from Anywhere

mStream is a free, open source, and cross-platform personal music streaming server that lets you sync and stream music between all your devices. It consists of a lightweight music streaming server written in Node.js; you can use it to stream your music from your home computer to any device, anywhere.

Server Features

  • Works on Linux, Windows, OSX and Raspbian
  • Dependency Free Installation
  • Light on memory and CPU usage
  • Tested on multi-terabyte libraries

WebApp Features

  • Gapless Playback
  • Milkdrop Visualizer
  • Playlist Sharing
  • Upload Files through the file explorer
  • AutoDJ – Queues up random songs

Importantly, mStream Express is a special version of the server that comes with all the dependencies pre-packaged. In this article, we will explain how to install and use mStream Express to stream your home music from anywhere on Linux.

Before you install mStream, check out the demo: https://demo.mstream.io/

How to Install mStream Express in Linux

The easiest way to install mStream, without facing any dependency issues, is to download the latest version of mStream Express from the release page and run it.

The package comes with an additional set of UI tools and features, including a tray icon for easy server management, automatic server start on boot, and GUI tools for server configuration.

You can use the wget command to download it directly from the command line, unzip the archive file, move into the extracted folder and run the mstreamExpress file as follows.

$ wget -c https://github.com/IrosTheBeggar/mStream/releases/download/3.9.1/mstreamExpress-linux-x64.zip
$ unzip mstreamExpress-linux-x64.zip 
$ cd mstreamExpress-linux-x64/
$ ./mstreamExpress
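
If you start the server from an interactive shell like this, it stops when you close the terminal. As a generic workaround (nothing mStream-specific; the log file name here is just an example), you can keep it running in the background with nohup:

$ nohup ./mstreamExpress > mstream.log 2>&1 &
$ tail -f mstream.log    # follow the server output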

After starting mstreamExpress, the server configuration interface will show up as shown in the following screenshot. Enter the config options and click on Boot Server.

Configure mStream Express Server

Once the server has booted, you will see the following messages.

mStream Express Server Started

To access the webapp, go to the address: http://localhost:3000 or http://server_ip:3000.

Access mStream Webapp

You can easily manage the server via the tray icon; it has options to disable auto-boot, restart and reconfigure the server, manage DDNS and SSL, and access advanced options, among others.

mStream GitHub repository: https://github.com/IrosTheBeggar/mStream

That’s all! mStream is easy-to-install personal music streaming software. In this article, we showed how to install and use mStream Express in Linux. If you have any queries, reach us via the feedback form below.

Source

Protect Your Websites with Let’s Encrypt

Learn how to use Let’s Encrypt in this tutorial from our archives.

Back in the bad old days, setting up basic HTTPS with a certificate authority cost as much as several hundred dollars per year, and the process was difficult and error-prone to set up. Now we have Let’s Encrypt for free, and the whole thing takes just a few minutes.

Why Encrypt?

Why encrypt your sites? Because unencrypted HTTP sessions are wide open to multiple abuses, from eavesdropping to content injection.

Internet service providers lead the code-injecting offenders. How to foil their nefarious desires? Your best defense is HTTPS. Let’s review how HTTPS works.

Chain of Trust

You could set up asymmetric encryption between your site and everyone who is allowed to access it. This is very strong protection: GPG (GNU Privacy Guard; see How to Encrypt Email in Linux) and OpenSSH are common tools for asymmetric encryption. These rely on public-private key pairs. You can freely share public keys, while your private keys must be protected and never shared. The public key encrypts, and the private key decrypts.

This is a multi-step process that does not scale for random web-surfing, however, because it requires exchanging public keys before establishing a session, and you have to generate and manage key pairs. An HTTPS session automates public key distribution, and sensitive sites, such as shopping and banking, are verified by a third-party certificate authority (CA) such as Comodo, Verisign, or Thawte.

When you visit an HTTPS site, it provides a digital certificate to your web browser. This certificate verifies that your session is strongly encrypted and supplies information about the site, such as the organization’s name, the organization that issued the certificate, and the name of the certificate authority. You can see all of this information, and the digital certificate, by clicking on the little padlock in your web browser’s address bar (Figure 1).
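
If you prefer the command line, you can inspect the same certificate details with OpenSSL. This is a minimal sketch using example.com as a stand-in for whatever site you want to check:

$ openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

The output shows the certificate’s subject (the site), its issuer (the certificate authority), and its validity dates, which is essentially what the padlock dialog displays.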

The major web browsers, including Opera, Firefox, Chromium, and Chrome, all rely on the certificate authority to verify the authenticity of the site’s digital certificate. The little padlock gives the status at a glance; green = strong SSL encryption and verified identity. Web browsers also warn you about malicious sites, sites with incorrectly configured SSL certificates, and they treat self-signed certificates as untrusted.

So how do web browsers know who to trust? Browsers include a root store, a batch of root certificates, which are stored in /usr/share/ca-certificates/mozilla/. Site certificates are verified against your root store. Your root store is maintained by your package manager, just like any other software on your Linux system. On Ubuntu, they are supplied by the ca-certificates package. The root store itself is maintained by Mozilla for Linux.

As you can see, it takes a complex infrastructure to make all of this work. If you perform any sensitive online transactions, such as shopping or banking, you are trusting a whole lot of unknown people to protect you.

Encryption Everywhere

Let’s Encrypt is a global certificate authority, similar to the commercial CAs. Let’s Encrypt was founded by the non-profit Internet Security Research Group (ISRG) to make it easier to secure Websites. I don’t consider it sufficient for shopping and banking sites, for reasons which I will get to shortly, but it’s great for securing blogs, news, and informational sites that don’t have financial transactions.

There are at least three ways to use Let’s Encrypt. The best way is with the Certbot client, which is maintained by the Electronic Frontier Foundation (EFF). This requires shell access to your site.
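
On a typical Ubuntu server running Apache, obtaining and installing a certificate with Certbot looks roughly like the following. Treat it as a sketch: the package names and the web-server plugin (--apache here) vary by distribution and web server, and example.com stands in for your own domain.

$ sudo apt install certbot python3-certbot-apache
$ sudo certbot --apache -d example.com -d www.example.com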

If you are on shared hosting then you probably don’t have shell access. The easiest method in this case is using a host that supports Let’s Encrypt.

If your host does not support Let’s Encrypt, but supports custom certificates, then you can create and upload your certificate manually with Certbot. It’s a complex process, so you’ll want to study the documentation thoroughly.
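
Certbot’s manual mode is the usual route in this situation: it obtains the certificate files on a machine you control, and you then upload them through your host’s control panel. A rough sketch, again with example.com as a placeholder:

$ sudo certbot certonly --manual --preferred-challenges dns -d example.com

The resulting certificate and private key are written under /etc/letsencrypt/live/example.com/.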

When you have installed your certificate, use SSL Server Test to test your site.

Let’s Encrypt digital certificates are good for 90 days. When you install Certbot it should also install a cron job for automatic renewal, and it includes a command to test that the automatic renewal works. You may use your existing private key or certificate signing request (CSR), and it supports wildcard certificates.
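
You can exercise that renewal machinery by hand. A dry run verifies that automatic renewal will work without counting against your rate limits:

$ sudo certbot renew --dry-run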

Limitations

Let’s Encrypt has some limitations: it performs only domain validation, that is, it issues a certificate to whoever controls the domain. This is basic SSL. It does not support Organization Validation (OV) or Extended Validation (EV) because it is not possible to automate identity validation. I would not trust a banking or shopping site that uses Let’s Encrypt; let ’em spend the bucks for a complete package that includes identity validation.

As a free-of-cost service run by a non-profit organization, Let’s Encrypt offers no commercial support, only documentation and community support, both of which are quite good.

The Internet is full of malice. Everything should be encrypted. Start with Let’s Encrypt to protect your site visitors.

Source

How to Find CPU Minimum, Current, and Maximum Frequency in Linux

CPU manufacturers programmatically reduce the frequency of the processor to save power. You can find out the current and possible frequencies with the commands below.


How to find available frequencies?


cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies


Sample output:


cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies
2933000 2128000 1596000
2933000 2128000 1596000


Finding each core’s minimum, current, and maximum frequency


grep "" /sys/devices/system/cpu/cpu0/cpufreq/scaling_{min,cur,max}_freq


Sample output:


/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:2933000


Here we can see that the current processor frequency is 1596 MHz and the maximum is 2933 MHz.


The above example is for core 0. If you have N cores, use * to query all of them:


grep "" /sys/devices/system/cpu/cpu*/cpufreq/scaling_{min,cur,max}_freq


Sample output :


/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:1596000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_cur_freq:2128000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:2933000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq:2933000


How to find the CPU count?


grep -c 'model name' /proc/cpuinfo


Sample output :


2
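
The grep above counts the logical CPUs the kernel sees. The nproc utility from coreutils, available on virtually every distribution, normally reports the same number directly:

$ nproc
2
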
Using the various CPUfreq governors

ondemand: The CPUfreq governor “ondemand” sets the CPU frequency depending on the current usage. To do this, the CPU must have the capability to switch frequency very quickly.

conservative: The CPUfreq governor “conservative”, much like the “ondemand” governor, sets the CPU frequency depending on the current usage. It differs in behavior in that it gracefully increases and decreases the CPU speed rather than jumping to max speed the moment there is any load on the CPU. This behavior is more suitable in a battery-powered environment.

userspace: The CPUfreq governor “userspace” allows the user, or any userspace program running with UID “root”, to set the CPU to a specific frequency by making a sysfs file “scaling_setspeed” available in the CPU-device directory.

powersave: The CPUfreq governor “powersave” sets the CPU statically to the lowest frequency within the borders of scaling_min_freq and scaling_max_freq.

performance: The CPUfreq governor “performance” sets the CPU statically to the highest frequency within the borders of scaling_min_freq and scaling_max_freq.

How to find available_governors?


cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors


Sample output:


conservative ondemand userspace powersave performance


To set a governor on a core (e.g. performance), write to its scaling_governor file. Note that sudo echo "performance" > file fails for a regular user because the redirection is performed by the unprivileged shell, so pipe through tee instead:

$ echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor


cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor


performance
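
To change the governor on all cores at once, the cpupower utility (packaged as linux-tools or kernel-tools, depending on the distribution) is a convenient alternative to writing each sysfs file by hand; by default it applies to all CPUs unless restricted with -c:

$ sudo cpupower frequency-set -g performance
$ cpupower frequency-info --policy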

Source

A Command line Dictionary And Vocabulary Building Tool

Howdy! I have good news for non-native English speakers. Now, you can improve your English vocabulary and find the meanings of English words right from your terminal. Say hello to Pyvoc, a cross-platform, open source, command-line dictionary and vocabulary-building tool written in the Python programming language. Using this tool, you can brush up on the meanings of English words, test or improve your vocabulary, or simply use it as a CLI dictionary on Unix-like operating systems.

Installing Pyvoc

Since Pyvoc is written in Python, you can install it using the pip3 package manager.

$ pip3 install pyvoc

Once installed, run the following command to automatically create necessary configuration files in your $HOME directory.

$ pyvoc word

Sample output:

|Creating necessary config files
/getting api keys. please handle with care!
|

word 
Noun: single meaningful element of speech or writing
example: I don't like the word ‘unofficial’

Verb: express something spoken or written
example: he words his request in a particularly ironic way

Interjection: used to express agreement or affirmation
example: Word, that's a good record, man

Done! Let us go ahead and brush up our English skills.

Use Pyvoc as a command line Dictionary tool

Pyvoc fetches the word meaning from Oxford Dictionary API.

Let us say you want to find the meaning of the word ‘digression’. To do so, run:

$ pyvoc digression

Find a word meaning using Pyvoc

See? Pyvoc not only displays the meaning of the word ‘digression’, but also an example sentence that shows how to use the word in practice.

Let us see another example.

$ pyvoc subterfuge
|

subterfuge 
Noun: deceit used in order to achieve one's goal
example: he had to use subterfuge and bluff on many occasions

It shows the word classes as well. As you already know, English has four major word classes:

  1. Nouns,
  2. Verbs,
  3. Adjectives,
  4. Adverbs.

Take a look at the following example.

$ pyvoc welcome
 /

welcome 
Noun:            instance or manner of greeting someone
example:         you will receive a warm welcome

Interjection:    used to greet someone in polite or friendly way
example:         welcome to the Wildlife Park

Verb:            greet someone arriving in polite or friendly way
example:         hotels should welcome guests in their own language

Adjective:       gladly received
example:         I'm pleased to see you, lad—you're welcome

As you see in the above output, the word ‘welcome’ can be used as a verb, noun, adjective, and interjection. Pyvoc has given an example for each class.

If you misspell a word, it will inform you to check the spelling of the given word.

$ pyvoc wlecome
\
No definition found. Please check the spelling!!

Useful, isn’t it?

Create vocabulary groups

A vocabulary group is nothing but a collection of words added by the user. You can later revise words or take quizzes from these groups. 100 groups of 60 words each are reserved for the user.

To add a word (e.g. sporadic) to a group, just run:

$ pyvoc sporadic -a
-

sporadic 
Adjective: occurring at irregular intervals or only in few places
example: sporadic fighting broke out


writing to vocabulary group...
word added to group number 51

As you can see, I didn’t provide any group number, so Pyvoc displayed the meaning of the given word and automatically added it to group number 51. If you don’t provide a group number, Pyvoc will incrementally add words to groups 51-100.

Pyvoc also allows you to specify a group number if you want to. You can specify a group from 1-50 using the -g option. For example, I am going to add a word to vocabulary group 20 using the following command.

$ pyvoc discrete -a -g 20
 /

discrete 
Adjective:       individually separate and distinct
example:         speech sounds are produced as a continuous sound signal rather
               than discrete units

creating group Number 20...
writing to vocabulary group...
word added to group number 20

See? The above command displays the meaning of the word ‘discrete’ and adds it to vocabulary group 20. If the group doesn’t exist, Pyvoc will create it and add the word.

By default, Pyvoc includes three predefined vocabulary groups (101, 102, and 103). These predefined groups contain 800 words each. All words in these groups are taken from GRE and SAT preparation websites.

To view the user-generated groups, simply run:

$ pyvoc word -l
 -

word 
Noun:            single meaningful element of speech or writing
example:         I don't like the word ‘unofficial’

Verb:            express something spoken or written
example:         he words his request in a particularly ironic way

Interjection:    used to express agreement or affirmation
example:         Word, that's a good record, man


USER GROUPS
Group no.      No. of words
20             1

DEFAULT GROUP
Group no.      No. of words
51             1

As you can see, I have created one group (20) in addition to the default group (51).

Test and improve English vocabulary

As I already said, you can use the vocabulary groups for revision or quizzes.

For instance, to revise the group no. 101, use -r option like below.

$ pyvoc 101 -r

You can now revise the meaning of all words in vocabulary group 101 in random order. Just hit ENTER to go through the entries one by one. Once done, hit CTRL+C to exit.


Revise Vocabulary group

Also, you can take a quiz from the existing groups to brush up your vocabulary. To do so, use the -q option like below.

$ pyvoc 103 -q 50

This command gives you a quiz of 50 questions from vocabulary group 103. Choose the correct answer from the list by entering the appropriate number. You will get 1 point for every correct answer. The higher you score, the stronger your vocabulary.


Take quiz using Pyvoc

Pyvoc is in the early-development stage. I hope the developer will improve it and add more features in the days to come.

As a non-native English speaker, I personally find it useful to test and learn new word meanings in my free time. If you’re a heavy command-line user and want to quickly check the meaning of a word, Pyvoc is the right tool. You can also test your English vocabulary in your free time to memorize new words and improve your English language skills. Give it a try. You won’t be disappointed.

And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! Cheers!


Source

How to Mount Windows Partitions in Ubuntu

If you are running a dual-boot of Ubuntu and Windows, you might sometimes fail to access a Windows partition (formatted with the NTFS or FAT32 filesystem type) from Ubuntu after hibernating Windows (or when it is not fully shut down).

This is because Linux cannot mount and open hibernated Windows partitions (the full discussion of this is beyond the scope of this article).

In this article, we will simply show how to mount a Windows partition in Ubuntu. We will explain a few useful methods of solving the above issue.

Mount Windows Using the File Manager

The first and safest way is to boot into Windows and fully shutdown the system. Once you have done that, power on the machine and select Ubuntu kernel from the grub menu to boot into Ubuntu.

After a successful logon, open your file manager, and from the left pane, find the partition you wish to mount (under Devices) and click on it. It should be automatically mounted and its contents will show up in the main pane.

Mounted Windows Partition

Mount Windows Partition in Read Only Mode From Terminal

The second method is to manually mount the filesystem in read-only mode. Usually, all mounted filesystems are located under the directory /media/$USERNAME/.

Ensure that you have a mount point in that directory for the Windows partition (in this example, $USERNAME=aaronkilik and the Windows partition is mounted to a directory called WIN_PART, a name which corresponds to the device label):

$ cd /media/aaronkilik/
$ ls -l
List Mounted Partitions

In case the mount point is missing, create it using the mkdir command as shown (if you get “permission denied” errors, use sudo command to gain root privileges):

$ sudo mkdir /media/aaronkilik/WIN_PART

To find the device name, list all block devices attached to the system using the lsblk utility.

$ lsblk
List Block Devices

Then mount the partition (/dev/sdb1 in this case) in read-only mode to the above directory as shown.

$ sudo mount -t vfat -o ro /dev/sdb1 /media/aaronkilik/WIN_PART		#fat32
OR
$ sudo mount -t ntfs-3g -o ro /dev/sdb1 /media/aaronkilik/WIN_PART	#ntfs

Now, to get the mount details (mount point, options, etc.) of the device, run the mount command without any options and pipe its output to the grep command.

$ mount | grep "sdb1" 
List Windows Partition

After successfully mounting the device, you can access files on your Windows partition using any application in Ubuntu. But remember that, because the device is mounted read-only, you will not be able to write to the partition or modify any files.
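
When you are done working with the files, you can detach the partition with umount, pointing it at the mount point (or at the device):

$ sudo umount /media/aaronkilik/WIN_PART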

Also note that if Windows is in a hibernated state and you write to or modify files in the Windows partition from Ubuntu, all your changes will be lost after a reboot.

For more information, refer to the Ubuntu community help wiki: Mounting Windows Partitions.

That’s all! In this article, we have shown how to mount a Windows partition in Ubuntu. If you face any unique challenges or have any questions or comments, use the feedback form below to reach us.

Source

An Introduction to the Machine Learning Platform as a Service | Linux.com

Machine-Learning-Platform-as-a-Service (ML PaaS) is one of the fastest growing services in the public cloud. It delivers efficient lifecycle management of machine learning models.

At a high level, there are three phases involved in training and deploying a machine learning model. These phases remain the same from classic ML models to advanced models built using sophisticated neural network architecture.

Provision and Configure Environment

Before the actual training takes place, developers and data scientists need a fully configured environment with the right hardware and software configuration.

Hardware configuration may include high-end CPUs, GPUs, or FPGAs that accelerate the training process. Configuring the software stack deals with installing a diverse set of frameworks and tools that are specific to the model.

These fully configured environments need to run as a cluster where training jobs may run in parallel. Large datasets need to be made locally available to each of the machines in the cluster to speed up access. Provisioning, configuring, orchestrating, and terminating the compute resources is a complex task.

The development and data science teams rely on internal DevOps teams to tackle this problem. DevOps teams automate the steps through traditional provisioning and configuration tools such as Chef, Puppet, and Ansible. ML training jobs cannot start until the DevOps teams hand off the environment to the data science team.

Training & Tuning an ML Model

Once the testbed is ready, data scientists perform the steps of data preparation, training, hyperparameter tuning, and evaluation of the model. This is an iterative process where each step may be repeated multiple times until the results are satisfactory.

During the training and tuning phase, data scientists record multiple metrics such as the number of nodes in a layer, the number of layers of deep learning neural network, the learning rate used by an optimizer, the scoring technique along with the actual score. These metrics are useful in choosing the right combination of parameters that deliver the most accurate results.

The available frameworks and tools don’t include the mechanism for logging and recording the metrics critical to the collaborative and iterative training process. Data science teams build their own logging engine for recording and tracking critical metrics. Since this engine is external to the environment, they need to maintain the logging infrastructure and visualization tools.

Serving and Scaling an ML Model

Once the data science team evolves a fully trained model, it is made available for developers to use in production. The model, which is typically a serialized object, needs to be wrapped in a REST web service that can be consumed through standard HTTP client libraries and SDKs.
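
For example, consuming such a service from a client usually amounts to a single authenticated HTTP call. The endpoint, the TOKEN environment variable, and the payload below are purely illustrative and do not belong to any particular provider:

$ curl -X POST https://ml.example.com/v1/models/churn/predict \
       -H "Authorization: Bearer $TOKEN" \
       -H "Content-Type: application/json" \
       -d '{"instances": [{"tenure": 12, "monthly_charges": 70.5}]}'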

Since models are continuously trained and tuned, there will be new versions published often by the data science teams. DevOps is expected to implement a CI/CD pipeline to deploy the ML artifacts in production. They may have to perform blue/green deployments to find the best model for production usage.

The web service exposing the ML model has to scale to meet the demand of the consumers. It also needs to be highly secure aligning with the rest of the policies defined by central IT.

To meet these requirements, DevOps teams are turning to containers and Kubernetes to manage the CI/CD pipelines, security, and scalability of ML models. They are using tools such as Jenkins or Spinnaker to integrate the data processing pipeline with software delivery pipelines.

The Challenge for Developers and Data Scientists

In the above three phases, development and data science teams find it extremely challenging to deal with the first and last phases. Their strength is training, tuning, and evolving the most accurate model, not dealing with infrastructure and software configuration. The high reliance on DevOps teams introduces an additional layer of dependency for these teams.

Developers are productive when they can use APIs for automating repetitive tasks. Unfortunately, there are no standard, portable, well-defined APIs for the first and the last phases of ML model development and deployment.

The rise of ML PaaS

ML PaaS delivers the best of both worlds — iterative software development and model management — to developers and data scientists. It removes the friction involved in configuring and provisioning environments for training and serving machine learning models.

The best thing about an ML PaaS is the availability of APIs that abstract the underlying hardware and software stack. Developers can call a couple of APIs to spin up a large cluster of GPU-based machines fully configured with data preparation tools, training frameworks, and monitoring tools to kick off a complex training job. They will also be able to take advantage of data processing pipelines for automating ETL jobs. When the model is ready, they will publish the latest version as a developer-facing web service without worrying about packaging and deploying the artifacts and dependencies.

Public cloud providers have all the required building blocks to deliver ML PaaS. They are now exposing an abstract service that connects the dots between compute, storage, networks, and databases to bring a unified service to developers. Even though the service can be accessed through the console, the real value of the platform is exploited through the CLI and SDK. DevOps teams can integrate the CLI into automation while developers consume the SDK from IDEs such as Jupyter Notebooks, VS Code, or PyCharm.

The SDK simplifies the creation of data processing and software delivery pipelines for developers. By changing a single parameter, they would be able to switch from a CPU-based training cluster to a powerful GPU cluster running the latest NVIDIA K80 or P100 accelerators.

Cloud providers such as Amazon, Google, IBM, and Microsoft have built robust ML PaaS offerings.

Source

Zipping files on Linux: the many variations and how to use them

There are quite a few interesting things that you can do with “zip” commands other than compress and uncompress files. Here are some other zip options and how they can help.

Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we’re going to look at standard zipping and unzipping as well as some other interesting zipping options.

The basic zip command

First, let’s look at the basic zip command. It uses what is essentially the same compression algorithm as gzip, but there are a couple of important differences. For one thing, the gzip command is used only for compressing a single file, whereas zip can both compress files and join them together into an archive. For another, the gzip command zips “in place”. In other words, it leaves only the compressed file; the original is not kept alongside the compressed copy. Here’s an example of gzip at work:

$ gzip onefile
$ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz

And here’s zip. Notice how this command requires that a name be provided for the zipped archive, whereas gzip simply uses the original file name and adds the .gz extension.

$ zip twofiles.zip file*
  adding: file1 (deflated 82%)
  adding: file2 (deflated 82%)
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip

Notice also that the original files are still sitting there.

The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.

$ zip mybin.zip ~/bin/*
  adding: bin/1 (deflated 26%)
  adding: bin/append (deflated 64%)
  adding: bin/BoD_meeting (deflated 18%)
  adding: bin/cpuhog1 (deflated 14%)
  adding: bin/cpuhog2 (stored 0%)
  adding: bin/ff (deflated 32%)
  adding: bin/file.0 (deflated 1%)
  adding: bin/loop (deflated 14%)
  adding: bin/notes (deflated 23%)
  adding: bin/patterns (stored 0%)
  adding: bin/runme (stored 0%)
  adding: bin/tryme (deflated 13%)
  adding: bin/tt (deflated 6%)
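
One thing to keep in mind: when you hand zip a directory name (rather than a glob of files, as above), it stores only the directory entry itself unless you tell it to recurse. The -r option makes zip descend into subdirectories and add everything beneath them:

$ zip -r mybin-all.zip ~/bin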

The unzip command

The unzip command will recover the contents from a zip file and, as you’d likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.

$ unzip twofiles.zip
Archive:  twofiles.zip
  inflating: file1
  inflating: file2
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip

The zipcloak command

The zipcloak command encrypts a zip file, prompting you to enter a password twice (to help ensure you don’t “fat finger” it) and leaves the file in place. You can expect the file size to vary a little from the original.

$ zipcloak twofiles.zip
Enter password:
Verify password:
encrypting: file1
encrypting: file2
$ ls -l
total 204
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21313 Jan 15 13:46 twofiles.zip   <== slightly larger than
                                                           unencrypted version

Keep in mind that the original files are still sitting there unencrypted.
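
zipcloak encrypts an archive after the fact. If you would rather create the archive encrypted in the first place, zip’s -e option prompts for a password (twice, like zipcloak) as the archive is built:

$ zip -e twosecrets.zip file1 file2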

The zipdetails command

The zipdetails command is going to show you details — a lot of details about a zipped file, likely a lot more than you care to absorb. Even though we’re looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all “metadata.” We don’t see the contents of the files.

$ zipdetails twofiles.zip

0000 LOCAL HEADER #1       04034B50
0004 Extract Zip Spec      14 '2.0'
0005 Extract OS            00 'MS-DOS'
0006 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
0008 Compression Method    0008 'Deflated'
000A Last Mod Time         4E2F6B24 'Tue Jan 15 13:25:08 2019'
000E CRC                   F1B115BD
0012 Compressed Length     00002904
0016 Uncompressed Length   0000E2A5
001A Filename Length       0005
001C Extra Length          001C
001E Filename              'file1'
0023 Extra ID #0001        5455 'UT: Extended Timestamp'
0025   Length              0009
0027   Flags               '03 mod access'
0028   Mod Time            5C3E2584 'Tue Jan 15 13:25:08 2019'
002C   Access Time         5C3E27BB 'Tue Jan 15 13:34:35 2019'
0030 Extra ID #0002        7875 'ux: Unix Extra Type 3'
0032   Length              000B
0034   Version             01
0035   UID Size            04
0036   UID                 000003E8
003A   GID Size            04
003B   GID                 000003E8
003F PAYLOAD

2943 LOCAL HEADER #2       04034B50
2947 Extract Zip Spec      14 '2.0'
2948 Extract OS            00 'MS-DOS'
2949 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
294B Compression Method    0008 'Deflated'
294D Last Mod Time         4E2F6C56 'Tue Jan 15 13:34:44 2019'
2951 CRC                   EC214569
2955 Compressed Length     00002913
2959 Uncompressed Length   0000E635
295D Filename Length       0005
295F Extra Length          001C
2961 Filename              'file2'
2966 Extra ID #0001        5455 'UT: Extended Timestamp'
2968   Length              0009
296A   Flags               '03 mod access'
296B   Mod Time            5C3E27C4 'Tue Jan 15 13:34:44 2019'
296F   Access Time         5C3E27BD 'Tue Jan 15 13:34:37 2019'
2973 Extra ID #0002        7875 'ux: Unix Extra Type 3'
2975   Length              000B
2977   Version             01
2978   UID Size            04
2979   UID                 000003E8
297D   GID Size            04
297E   GID                 000003E8
2982 PAYLOAD

5295 CENTRAL HEADER #1     02014B50
5299 Created Zip Spec      1E '3.0'
529A Created OS            03 'Unix'
529B Extract Zip Spec      14 '2.0'
529C Extract OS            00 'MS-DOS'
529D General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
529F Compression Method    0008 'Deflated'
52A1 Last Mod Time         4E2F6B24 'Tue Jan 15 13:25:08 2019'
52A5 CRC                   F1B115BD
52A9 Compressed Length     00002904
52AD Uncompressed Length   0000E2A5
52B1 Filename Length       0005
52B3 Extra Length          0018
52B5 Comment Length        0000
52B7 Disk Start            0000
52B9 Int File Attributes   0001
     [Bit 0]               1 Text Data
52BB Ext File Attributes   81B40000
52BF Local Header Offset   00000000
52C3 Filename              'file1'
52C8 Extra ID #0001        5455 'UT: Extended Timestamp'
52CA   Length              0005
52CC   Flags               '03 mod access'
52CD   Mod Time            5C3E2584 'Tue Jan 15 13:25:08 2019'
52D1 Extra ID #0002        7875 'ux: Unix Extra Type 3'
52D3   Length              000B
52D5   Version             01
52D6   UID Size            04
52D7   UID                 000003E8
52DB   GID Size            04
52DC   GID                 000003E8

52E0 CENTRAL HEADER #2     02014B50
52E4 Created Zip Spec      1E '3.0'
52E5 Created OS            03 'Unix'
52E6 Extract Zip Spec      14 '2.0'
52E7 Extract OS            00 'MS-DOS'
52E8 General Purpose Flag  0001
     [Bit  0]              1 'Encryption'
     [Bits 1-2]            1 'Maximum Compression'
52EA Compression Method    0008 'Deflated'
52EC Last Mod Time         4E2F6C56 'Tue Jan 15 13:34:44 2019'
52F0 CRC                   EC214569
52F4 Compressed Length     00002913
52F8 Uncompressed Length   0000E635
52FC Filename Length       0005
52FE Extra Length          0018
5300 Comment Length        0000
5302 Disk Start            0000
5304 Int File Attributes   0001
     [Bit 0]               1 Text Data
5306 Ext File Attributes   81B40000
530A Local Header Offset   00002943
530E Filename              'file2'
5313 Extra ID #0001        5455 'UT: Extended Timestamp'
5315   Length              0005
5317   Flags               '03 mod access'
5318   Mod Time            5C3E27C4 'Tue Jan 15 13:34:44 2019'
531C Extra ID #0002        7875 'ux: Unix Extra Type 3'
531E   Length              000B
5320   Version             01
5321   UID Size            04
5322   UID                 000003E8
5326   GID Size            04
5327   GID                 000003E8

532B END CENTRAL HEADER    06054B50
532F Number of this disk   0000
5331 Central Dir Disk no   0000
5333 Entries in this disk  0002
5335 Total Entries         0002
5337 Size of Central Dir   00000096
533B Offset to Central Dir 00005295
533F Comment Length        0000
Done

The zipgrep command

The zipgrep command uses a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the encryption password for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.

$ zipgrep hazard twofiles.zip file1
[twofiles.zip] file1 password:
Certain pesticides should be banned since they are hazardous to the environment.

The zipinfo command

The zipinfo command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions.

$ zipinfo twofiles.zip
Archive:  twofiles.zip
Zip file size: 21313 bytes, number of entries: 2
-rw-rw-r--  3.0 unx    58021 Tx defN 19-Jan-15 13:25 file1
-rw-rw-r--  3.0 unx    58933 Tx defN 19-Jan-15 13:34 file2
2 files, 116954 bytes uncompressed, 20991 bytes compressed:  82.1%

The zipnote command

The zipnote command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this:

$ zipnote twofiles.zip
@ file1
@ (comment above this line)
@ file2
@ (comment above this line)
@ (zip file comment below this line)

If you want to add comments, write the output from the zipnote command to a file:

$ zipnote twofiles.zip > comments

Next, edit the file you’ve just created, inserting your comments above the (comment above this line) lines. Then add the comments using a zipnote command like this one:

$ zipnote -w twofiles.zip < comments

The zipsplit command

The zipsplit command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you’re trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accommodate the largest included file.

$ zipsplit -n 12000 twofiles.zip
2 zip files will be made (100% efficiency)
creating: twofile1.zip
creating: twofile2.zip
$ ls twofile*.zip
-rw-rw-r-- 1 shs shs  10697 Jan 15 14:52 twofile1.zip
-rw-rw-r-- 1 shs shs  10702 Jan 15 14:52 twofile2.zip
-rw-rw-r-- 1 shs shs  21377 Jan 15 14:27 twofiles.zip

Notice how the resulting archives are sequentially named “twofile1.zip” and “twofile2.zip”.

Wrap-up

The zip command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives.

Source

Get started with Cypht, an open source email client

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year’s resolutions, the itch to start the year off right, and of course, an “out with the old, in with the new” attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn’t have to be that way.

Here’s the fourth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

Cypht

We spend a lot of time dealing with email, and effectively managing your email can make a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

It uses a simplified version of HTML messages to display mail, or you can set it to view a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. With most rich-text mail you’ll get a plain-text view, which means lots of raw links and harder reading. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Users can use a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will disappear after that session. If you log out and back in without saving, all your changes will be lost and you’ll end up with the settings you started with. This does make it easy to experiment, and if you need to reset things, simply logging out without saving will bring back the previous setup when you log back in.

Installing Cypht locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.


Source

Professional Audio Closer to Linux – OSnews

Browsing Freshmeat tonight, the premier online Linux software repository, I came across these two great (and brand new) applications, ReBorn and ReZound. ReBorn, a ReBirth clone that will soon become open source according to the developer, provides a software emulation of three of Roland’s most famous electronic musical instruments. It got me thinking about how much more viable Linux is today as a professional (or semi-professional) audio platform than it was two years ago. Update: On a related multimedia note, WinAMP 3.0 for Windows was released yesterday.

While ALSA, and especially OSS, still have some limitations, it seems that a number of great audio apps are emerging. Unfortunately, with only 4-5 exceptions, the same does not apply for professional 3D/rendering/video/vector-imaging applications. Linux still does not have something similar to Apple’s iMovie or personalStudio for simple users, or Adobe Premiere, or Cinema4D/Bryce/etc., or a really professional DTP system, or something with the power of Illustrator/FireWorks/Freehand.

However, let’s browse together these great audio apps that are available today. Some of them might actually need a helping hand to get further developed.

    • ReBorn – A Linux version of the Windows/Mac program ReBirth, providing a software emulation of three of Roland Corporation’s most famous electronic musical instruments: the TB303 Bassline, the TR808 Rhythm Composer and the TR909 Rhythm Composer. Also thrown in are four audio effects, individual mixers and a programmable sequencer. ReBorn is fully compatible with the ReBirth .rbs song file format. (UPDATE: The project is now dead due to legal issues.)
    • ReZound – Aims to be a stable, open source, and graphical audio file editor primarily for, but not limited to, the Linux operating system.
    • Anthem – An advanced open source MIDI sequencer which allows you to record, edit, and playback music using a sophisticated and acclaimed object oriented song technology.
    • Ardour – A professional multitrack/multichannel audio recorder and DAW for Linux, using ALSA-supported audio interfaces. It supports up to 32 bit samples, 24+ channels at up to 96kHz, full MMC control, a non-destructive, non-linear editor, and LADSPA plugins.
    • DAP – A comprehensive audio sample editing and processing suite. It currently supports AIFF and AIFF-C audio files, 8 or 16 bit resolution, and 1, 2 or 4 channels of audio data. The package offers comprehensive editing, playback, and recording facilities including full time stretch resampling, manual data editing, and a reasonably complete DSP processing suite.
    • GNUsound – A sound editor for Linux/x86. It supports multiple tracks, multiple outputs, and 8, 16, or 24/32 bit samples. It can read a number of audio formats through libaudiofile, and saves them as WAV.
    • Bristol – A synthesizer emulation package. It includes a Moog Mini, Moog Voyager, Hammond B3, Prophet 5, Juno 6, DX 7, and others.
    • Audacity – A cross-platform multitrack audio editor. It allows you to record sounds directly or to import Ogg, WAV, AIFF, AU, IRCAM, or MP3 files. It features a few simple effects, all of the editing features you should need, and unlimited undo. The GUI was built with wxWindows and the audio I/O currently uses OSS under Linux. We recently reviewed its version 1.0.
    • TerminatorX – A realtime audio synthesizer that allows you to “scratch” on digitally sampled audio data (*.wav, *.au, *.mp3, etc.) the way hiphop-DJs scratch on vinyl records. It features multiple turntables, realtime effects (built-in as well as LADSPA plugin effects), a sequencer, and an easy-to-use GTK+ GUI.
    • LAoE – A graphical audiosample-editor, based on multi-layers, floating-point samples, volume-masks, variable selection-intensity, and many plugins suitable to manipulate sound, such as filtering, retouching, resampling, graphical spectrogram editing by brushes and rectangles, sample-curve editing by freehand-pen and spline and other interpolation curves, effects like reverb, echo, compress, expand, pitch-shift, time-stretch, and much more.
    • MidiMountain – A sequencer to edit standard MIDI files. Its easy-to-use interface should help beginners to edit and create MIDI songs (sequences), and it is designed to edit every definition known to standard MIDI files and the MIDI transfer protocol, from easy piano roll editing to changing binary system exclusive messages.
    • GNoise – A GTK+ based wave file editor. It uses a display cache and a double-buffered display for maximum speed with large files. It supports common editing functions such as cut, copy, paste, fade in/out, normalize, and more, with unlimited undo.
    • MusE – A Qt 2.1-based MIDI sequencer for Linux with editing and recording capabilities. While the sequencer is playing you can edit events in realtime with the pianoroll editor or the score editor. Recorded MIDI events can be grouped as parts and arranged in the arrange editor.
    • Rosegarden – An integrated MIDI sequencer and musical notation editor. The stable version (2.1) is a simple application for any Unix/X system. The development branch (Rosegarden-4) is an entirely new KDE application.
    • KGuitar – A guitarist suite for KDE. It’s based on MIDI concepts and includes tabulature editor, chord construction helpers, and importing and exporting song formats.
    • Swami – An instrument patch file editor using SoundFont files that allows you to create and distribute instruments from audio samples used for composing music. It uses iiwusynth, a software synthesizer, which has real time effect control, support for modulators, and routable audio via Jack.
    • SoundTracker – A pattern-oriented music editor (similar to the DOS program ‘FastTracker’). Samples are lined up on tracks and patterns which are then arranged to a song. Supported module formats are XM and MOD; the player code is the one from OpenCP. A basic sample recorder and editor is also included.
    • Tutka – A tracker style MIDI sequencer for Linux (and other systems; only Linux is supported at this time though). It is similar to programs like SoundTracker, ProTracker and FastTracker except that it does not support samples and is meant for MIDI use only.
    • amSynth – A realtime polyphonic analogue modeling synthesizer. It provides a virtual analogue synthesizer in the style of the classic Moog Minimoog/Roland Junos. It offers an easy-to-use interface and synth engine, while still creating varied sounds. It runs as a standalone application, using either the ALSA audio and MIDI sequencer system or the plain OSS devices.
    • Cheese Tracker – A program to create module music that aims to have an interface and feature set similar to that of Impulse Tracker. It also has some advantages such as oscilloscopes over each pattern track, more detailed sample info, a more detailed envelope editor, improved filters, and effect buffers (chorus/reverb) with individual send levels per channel.
    • SpiralSynth Modular – An object orientated modular softsynth / sequencer / sampler. Audio or control data can be freely passed between the plugins, and is all treated the same.
    • gAlan – An audio-processing tool for X windows and Win32. It allows you to build synthesizers, effects chains, mixers, sequencers, drum-machines, etc. in a modular fashion by linking together icons representing primitive audio-processing components.
    • Xsox – An X interface for sox. Record or play many types of sound files. Cut, copy, paste, add effects, convert file types etc.
    • Voodoo Tracker – A project that aims to harness and extend the power of conventional trackers. Imagine self contained digital studio; complete and ready for your modern music needs. Additionally Voodoo will provide an interface that is designed for live performances.
    • SLab – Direct to Disk Audio Recording Studio is a free HDD audio recording system for linux operating systems, written using Tcl/Tk. SLab can record up to 64 tracks.
    • BeatForce – A computer DJing system, with two players with independent playlist, song database, mixer, sampler etc. It was planned as a feature enhanced Linux replacement for BPM-Studio from Alcatech.

Do you know any more professional or simply fully working audio applications for Linux? Share your knowledge with us (but do not mention plain audio players, please). Dave Phillips has a web page with many projects mentioned, too.

Source
