How to Change Screen Resolution on Ubuntu

Screen resolution plays a big part in how pleasant your system is to use. The monitor is one of the most important pieces of the I/O chain, and every monitor has a native resolution. When your system sends output to the screen, the monitor scales the image to fit the panel. If the system sends frames at the right resolution, the monitor gives you the sharpest possible picture; otherwise you’ll see blurring, stretching or other artifacts that make the desktop unpleasant to use. Let’s check out changing your screen resolution on Ubuntu – one of the most popular Linux distros of all!

Before changing the resolution, make sure your system has the latest graphics drivers installed. Get the latest driver for your NVIDIA, AMD or Intel GPU.
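On Ubuntu, one convenient way to check for and install the recommended driver is the ubuntu-drivers tool. A minimal sketch, assuming the tool is available on your release:

# List detected hardware and the recommended driver packages
ubuntu-drivers devices

# Install the recommended driver (mainly useful for NVIDIA GPUs)
sudo ubuntu-drivers autoinstall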

Open the GNOME Activities overview and search for “resolution”. Open “Displays” from the “Settings” section. Here, you’ll have the option of changing the resolution. Click on the “Resolution” drop-down.

There are a number of available resolutions. By default, the selected one should be your monitor’s native resolution. Here are some of the most popular screen resolutions along with their common names.

  • HD (720p) – 1280 x 720 px
  • Full HD (1080p) – 1920 x 1080 px
  • Quad HD (1440p) – 2560 x 1440 px
  • 4K UHD (2160p) – 3840 x 2160 px

Once you’ve selected an option, you’ll notice the “Apply” button in the top-right corner of the window. After you apply it, the system waits 15 seconds for you to confirm the change. If you don’t confirm, it reverts to the previous resolution automatically. Sometimes you may pick a wrong resolution that pushes the confirmation dialog off the visible screen; in that case, the countdown can save you a lot of trouble.

After applying the resolution, it’s a good idea to restart your system so that all applications adjust to the new resolution.
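If you prefer the terminal, the same change can usually be made with xrandr. A minimal sketch; the output name HDMI-1 and the mode below are hypothetical examples and will differ on your machine:

# List connected outputs and the modes they advertise
xrandr

# Set a specific mode on a specific output (output name is an example)
xrandr --output HDMI-1 --mode 1920x1080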

Source

Finally! The Venerable RISC OS is Now Open Source

November 1, 2018

It was recently announced that RISC OS was going to be released as open-source. RISC OS has been around for over 30 years. It was the first operating system to run on ARM technology and is still available on modern ARM-powered single-board computers, like the Raspberry Pi.

What is RISC OS?

RISC OS is open source

To give you the history of RISC OS, we need to go back to the late 1970s. UK entrepreneurs Clive Sinclair and Chris Curry founded Science of Cambridge (which later became Sinclair Research) to sell electronics. One of their early products was a kit computer. Curry wanted to develop it into a full computer, but could not convince Sinclair to agree. As a result, Curry left to found a new company with his friend Hermann Hauser. The new company was eventually named Acorn Computers. (The name was chosen because it would come before Apple Computer in the phone book.)

Over the next decade, Sinclair and Acorn competed for the growing UK PC market. In the early 1980s, a project was started at Acorn to create a new computer system based on RISC technology. They had seen how popular the IBM PC was among businesses and they wanted to capture some of that market. At the same time, Acorn engineers were working on an operating system for the new line of computers. RISC OS was originally launched in 1987 as Arthur 1.20 on the new Acorn Archimedes.

Acorn suffered financially during the late 80s and 90s. In 1999, the company changed its name to Element 14 and shifted its focus to designing silicon. Development of RISC OS was halted at version 3.60. In the years that followed, the RISC OS license bounced from company to company, leaving the ownership of RISC OS in a very messy state. RISC OS Developments Ltd has attempted to fix this by purchasing the most recent owner of the license, Castle Technology Ltd.

RISC OS 5

Welcome to the Open Source Community

RISC OS Open announced on October 23rd that RISC OS would be open-sourced under the Apache 2.0 License. Responsibilities will be shared by two organizations: RISC OS Open Limited will “offer professional services to customers wishing to deploy RISC OS commercially” and RISC OS Developments Ltd will handle development and investment in the operating system.

RISC OS 5.26 has been released to reflect the operating system’s new open-source nature. It even says in the announcement that “This is actually functionally identical to 5.24, so we don’t have to retest everything as actually being stable.”

Why RISC OS?

I’m sure a few of you in the audience are wondering why you should care about an operating system that is over 30 years old. I will give you two reasons.

First, it is an important part of computer history, specifically UK computer history. After all, it ran on ARM before ARM ran everything. Many of us know about the early days of Apple and IBM, which can mislead us into thinking that the US has always been the center of the PC world. In some ways that might be true, but other countries have made amazing contributions to technology that we take for granted. We mustn’t forget that.

Second, it is one of the few operating systems written from the start to take advantage of ARM. Most operating systems and software available for ARM were written for something else first and are therefore not optimized for ARM. According to RISC OS Developments Ltd, “A high performance and low footprint system, RISC OS provides a modern desktop interface coupled with easy access to programming, hardware and connectivity. It continues to incorporate the world-renowned programming language, BBC BASIC, and remains amazingly compact, fitting onto a tiny 16MB SD card.”

Final Thoughts

I would like to welcome RISC OS to the open-source community. I have never used RISC OS, mainly because I don’t have any hardware to run it on. However, now I’m starting to eye a Raspberry Pi. Maybe that’ll be a future article. We’ll have to see.

Have you ever used RISC OS? If so, what are your favorite features?

Source

Download Bitnami Discourse Stack Linux 2.1.2-0

Bitnami Discourse Stack is a multiplatform and free software project that aims to deliver an all-in-one, easy-to-install and easy-to-use native installer for the Discourse discussion application, along with all of its required dependencies. The Discourse stack is also distributed as cloud images, a virtual appliance, and a Docker container.

What is Discourse?

Discourse is an open source and freely distributed discussion platform that features built-in governance and moderation systems, which let discussion communities protect themselves from spambots, bad actors and trolls. It offers a wide variety of attractive functionality.

Installing Bitnami Discourse Stack

Bitnami Discourse Stack is available for download on the GNU/Linux and Mac OS X operating systems, supporting both 32-bit and 64-bit (recommended) computers. To install Discourse on your desktop computer or laptop, you must download the package that corresponds to your computer’s hardware architecture, run it and follow the instructions displayed on the screen. Please note that the Discourse stack is not available for the Microsoft Windows platform.

Run Discourse in the cloud

Thanks to Bitnami, users are now able to run their own Discourse stack server in the cloud with their hosting platform or by using a pre-built cloud image for the Amazon EC2 and Windows Azure cloud hosting providers.

Bitnami’s Discourse virtual appliance

Bitnami also offers a virtual appliance for virtualizing the Discourse application on the Oracle VirtualBox and VMware ESX, ESXi virtualization software, based on the latest stable version of the Ubuntu Linux distribution.

The Discourse Docker container and LAMP and MAMP modules

Besides installing Discourse on your personal computer, running it in the cloud or virtualizing it, you can use the Docker container, which is available for download on the project’s homepage (see link below for details). Unfortunately, there is no module that lets you deploy Discourse on top of the Bitnami LAMP (Linux, Apache, MySQL and PHP) or MAMP (Mac, Apache, MySQL and PHP) Stack products, so you would have to handle its runtime dependencies yourself.
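For reference, pulling the container might look like the sketch below. The bitnami/discourse image name is an assumption based on Bitnami’s usual naming, and a working deployment typically also needs companion database and cache containers, usually wired together with docker-compose:

# Pull the Discourse container image (image name assumed; check Bitnami's Docker Hub page)
docker pull bitnami/discourse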

Source

System76 Announces American-Made Desktop PC with Open-Source Parts

Early in 2017—nearly two years ago—System76 invited me, and a handful of others, out to its Denver headquarters for a sneak peek at something new they’d been working on.

We were ushered into a windowless, underground meeting room. Our phones and cameras confiscated. Seriously. Every word of that is true. We were sworn to total and complete secrecy. Assumedly under penalty of extreme death…though that part was, technically, never stated.

Once the head honcho of System76, Carl Richell, was satisfied that the room was secure and free from bugs, the presentation began.

System76 told us the company was building its own desktop computers. Ones it had designed itself. From-scratch cases. With wood. And inlaid metal. What’s more, these designs would be open. All built right there in Denver, Colorado.

We were intrigued.

Then they showed them to us, and we darn near lost our minds. They were gorgeous. We all wanted them.

But they were not ready yet. This was early on in the design and engineering, and they were looking for feedback—to make sure System76 was on the right track.

They were.

Flash-forward to today (November 1, 2018), and these Linux-powered, made in America desktop machines are finally being unveiled to the world as the Thelio line (which they’ve been teasing for several weeks with a series of sci-fi themed stories).

The Thelio comes in three sizes:

  • Thelio (aka “small”) — max 32GB RAM, 24TB storage.
  • Thelio Major (aka “medium”) — max 128GB RAM, 46TB storage.
  • Thelio Massive (aka “large”) — max 768GB RAM, 86TB storage.

""

All three sport the same basic look: part black metal, part wood (with either maple or walnut options) with rounded side edges. The cases open with a single slide up of the outer housing, with easy swapping of components. Lots of nice little touches, like a spot for in-case storage of screws that can be used in securing drives.

In an awesomely nerdy touch, the rear exhaust grill shows the alignment of planets in the solar system…at UNIX Epoch time. Also known as January 1, 1970. A Thursday.

""

They come in both Intel and AMD CPU varieties, so you get to pick between an Intel chip (ranging from i5 to i9 to Xeon) or an AMD chip (Ryzen 5, Ryzen 7 or Threadripper), with a bunch of GPU options available, including the AMD RX Vega 11 and RX 580, and the NVIDIA GeForce RTX 2080 and Titan V, plus quite a few others (both beefier and less so).

Temperature control is assisted by a custom daughterboard that controls airflow (along with power and LED), dubbed “Thelio Io”. This daughterboard has open firmware and is certified by the Open Source Hardware Association (OSHWA).

That last little bit is what I find most interesting about this new endeavor from System76. The more open a design is, the better for all. It makes maintenance and customization easier and helps others learn from the designs for their own projects.

Thelio hardware is not completely open. But the company says that’s what it’s working toward. As System76 puts it, the company is “chipping away at the proprietary bits until it’s 100% open source.” This is a big move in a wonderfully open direction.

Also…wood. The case is partially made out of wood. A computer. Made with wood.

A wooden computer.

There need to be more things like that in this world.

Source

How to Search for Files from the Linux Command Line | Linux.com

Learn how to use the find command in this tutorial from our archives.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. To make the understanding of this command even more enticing, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point…] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. The most simplistic structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!”. Well, the path for the file could be the root of your drive; so / would be a legitimate path. Entering that as your path would take find longer to process — because it has to start from scratch — but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

  • -name – case sensitive
  • -iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common file descriptors are:

  • f – regular file
  • d – directory
  • l – symbolic link
  • c – character devices
  • b – block devices

Now, suppose you want to locate all character devices (files that refer to devices) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc.

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or if you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and redirect the results into a file called conf_search. This new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.
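Searches that start from / tend to bury useful results under “Permission denied” errors (as noted earlier), so a common refinement, sketched here, is to silence stderr while still saving the matches:

# Save matches to a file and discard permission-denied noise on stderr
find / -type f -name "*.conf" > conf_search 2>/dev/null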

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.

You can search with the following size descriptions:

  • c – bytes
  • k – Kilobytes
  • M – Megabytes
  • G – Gigabytes
  • b – 512-byte blocks
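Putting these pieces together, a quick sketch for hunting down space hogs might look like the following; the starting directory and the threshold are just examples:

# Regular files over 100 megabytes under /var, with errors silenced
find /var -type f -size +100M 2>/dev/null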

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete, knowledge of how to make this powerful tool work for you.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source

Top 5 Tools for Taking and Editing Screenshots on Linux

Many times you feel the need to capture the screen, or a part of it, to show someone or to keep for yourself. While on Android, iOS, and even Windows you can do so with the press of a button, there is no single, universal screenshot tool that works the same way across every Linux distribution and desktop.

However, this is no reason for Linux users to be deprived of the ability to take screenshots. There are plenty of free applications and tools for taking and editing screenshots that you can download onto your system. While the built-in screenshot programs in other operating systems usually only take the shot, the tools available for Linux often offer a much greater range of features beyond simply capturing your screen, which makes it far easier to go from capture to finished image.

Let’s see what the top 5 tools for taking and editing screenshots on Linux have in store for you:

1. Shutter

The screenshot tool you will probably find easiest to use is Shutter, and it is loaded with features. It lets you capture the whole screen or just a particular part of it. Unlike many other screen capture tools, which require separate programs to edit the captured images, Shutter gives you the power to edit the picture without any outside help. Whether you want to highlight something in the screenshot, hide a specific part of the image by pixelating it, or write a note on it, it is all a piece of cake with Shutter. It will even let you share the screenshot on an image-hosting website.

Get Shutter on Linux simply by typing in the following command on the terminal:

$ sudo apt-get install shutter
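Once installed, Shutter can also be driven from the terminal. A small sketch; option names may vary slightly between versions:

# Capture the full screen
shutter -f

# Capture a selected region of the screen
shutter -s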

2. ImageMagick

ImageMagick is another popular screen capture tool available to Linux users for free under the Apache 2.0 license. This tool not only captures and edits a screenshot but can also convert it to other formats; it can open, convert, and edit over 200 different image formats. It also allows you to take screenshots using a set of commands on the Linux terminal.

There are various editing options that you have with ImageMagick for the captured image including transformations, adjustments of transparency on specific portions of the image, combining multiple images, drawing shapes on the image, writing notes, and much more.

Install ImageMagick on your Linux operating system using the following command:

$ sudo apt-get install imagemagick
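After installation, ImageMagick’s import utility can grab screenshots straight from the terminal under X11, for example:

# Capture the entire screen (the root window) to a PNG file
import -window root fullscreen.png

# Click and drag to capture a region interactively
import region.png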

3. Kazam

Kazam is another popular tool for Linux-based operating systems and offers screencasting as well. What is most notable about Kazam is that it allows you to add a specified delay before it captures a screenshot or screencast of a specific part, or the whole, of the screen. Moreover, you will find its interface more user-friendly than most applications on Linux.

This versatile application also allows you to record audio along with video provided the format is compatible with the software. You can take a screenshot or capture a video whenever you want without even opening the app since it includes a tray icon with a menu to assist you right away. Even with all the features that Kazam offers, it isn’t a heavy file at all and won’t take up much space on your system.

To download and install Kazam on your system, you simply need to type in the following command on the terminal:

sudo add-apt-repository ppa:kazam-team/unstable-series
sudo apt-get update
sudo apt-get install kazam

4. Gnome-screenshot

Gnome-screenshot is a tool built into the GNOME desktop on Ubuntu that lets you take screenshots, or record a screencast, of the entire screen, a single window or a particular portion, and then save the result to a file. The app’s interface makes both screencasting and taking screenshots very simple, and you can even adjust its settings to add a delay before the screenshot is taken.

You also have the option of taking the screenshot with or without the window border, depending on what’s most suitable at the moment. Furthermore, it lets you add a border of your choice to the screenshot you take with the app. You will find it the easiest of these tools to reach, since it sits in your system panel and can be summoned any time you want to take a screenshot or record screen activity.
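gnome-screenshot can also be scripted from the terminal. A hedged sketch; the flags below belong to the traditional gnome-screenshot tool and may differ on very recent GNOME releases:

# Capture the current window after a 5-second delay, without its border
gnome-screenshot -w -B -d 5 -f ~/Pictures/window.png

# Interactively select an area to capture
gnome-screenshot -a -f ~/Pictures/area.png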

  • Installing the related GNOME Shell extension is simple on a GNOME desktop: just go to extensions.gnome.org and look for ‘Screenshot tool’.

5. Gimp

Gimp is an open source application for Linux that is far more than just a screen capture tool. Although it actually falls under the category of image editors, it can also take screenshots (which is the reason we have it on our list).

Since it is an image editor, it offers a far greater number of features for editing the screenshots you take, including paint tools, retouching facilities, color management, transparency adjustments, transformation tools, and much more. Also, like any other basic screen capture app, it allows you to capture the complete screen or just a part of it.

Install it on Ubuntu using this simple set of commands:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo apt-get update
sudo apt-get install gimp

Conclusion

These were the top 5 apps we found in 2018 for capturing and editing screenshots on Linux. We found them to be the richest in features and the easiest to download and use. Whichever app you choose from the list, you should have no problem capturing your screen and editing the result to suit whatever purpose you have in mind.

Source

Cross-platform development library SDL2 2.0.9 is out

SDL 2.0.9 has been released today, featuring some rather interesting new stuff. It’s been a while, with 2.0.8 having been released back in March of this year.

What is SDL 2? Well, in their own words “Simple DirectMedia Layer is a cross-platform development library designed to provide low level access to audio, keyboard, mouse, joystick, and graphics hardware via OpenGL and Direct3D.” (and Vulkan since 2.0.6). It’s used by many game developers including Valve, Unity, Feral Interactive and no doubt a great many more.
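If you want to check which SDL2 build your distribution currently ships before the update lands, the development package can usually tell you. A minimal sketch, assuming the SDL2 development files are installed:

# Either of these reports the installed SDL2 version
sdl2-config --version
pkg-config --modversion sdl2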

A few random highlights of what’s new:

  • SDL_GetDisplayOrientation() to get the current display orientation along with SDL_DISPLAYEVENT for changes to it.
  • Added HIDAPI joystick drivers for more consistent Xbox, PS4 and Nintendo Switch Pro controller support across platforms. Valve contributed the PS4 and Nintendo Switch Pro controller support.
  • Support for many other popular game controllers
  • Multiple extras to deal with gamepads like SDL_GameControllerMappingForDeviceIndex() to get the mapping for a controller before it’s opened
  • Specifically for Linux, there’s now an SDL_LinuxSetThreadPriority() function to allow adjusting the priority of native threads.

The SDL render batching and caching work, which Ryan Gordon wrote about in this Patreon post, didn’t make it into this release, but it should be in the next one. Ryan told me he didn’t want to risk breakage late in the development cycle. That should make the next version of SDL after this one really quite exciting.

On top of that, Wayland server-side window decorations on KDE should also land in the following release. Ryan Gordon has shared a video showing that in action.

A human-readable changelog for 2.0.9 is available, and the full commit log is there for anyone who wants to get down and dirty with more details.

Source

Ubuntu 19.04 Has Been Codenamed Disco Dingo

November 1, 2018

This is a continually updated article about Ubuntu 19.04 Disco Dingo release date, new features and everything important associated with it.

Ubuntu 18.10 is released and it’s time to start looking for the upcoming Ubuntu 19.04.

As spotted by OMG Ubuntu, Ubuntu 19.04 will be called Disco Dingo. Since there is not much known about Ubuntu 19.04 features yet, let’s talk about this cheesy codename.

Ubuntu 19.04 Codename

Ubuntu 19.04 Disco Dingo

If you have read my earlier article about the Linux distributions’ naming trivia, you probably already know that each release of Ubuntu is codenamed with two words starting with the same letter, and those letters follow alphabetical order. So after Ubuntu 18.04 Bionic Beaver came Ubuntu 18.10 Cosmic Cuttlefish.

The first word is usually an adjective and the second word is a (usually endangered) species.

At least that is how it was for years. The pattern for the second word was broken with the release of Ubuntu 14.10 Utopic Unicorn. Instead of an endangered species, it was a fictional animal. Yes, unicorns are fictional. Stop believing in that rainbow-farting animal.

The pattern was broken again a year later with Ubuntu 15.10 Wily Werewolf. No matter how much you want to believe, werewolves are neither endangered nor real. Stop watching Twilight, for Bella’s sake.

With Ubuntu 19.04, the pattern has been broken again but this time, it’s the first word of the codename.

The first word used to be an adjective, but ‘disco’ is a noun and a verb, not an adjective. I wonder how the Ubuntu team ran out of ideas for an adjective starting with the letter D. I guess they just wanted to party.

The dingo is a type of wild dog native to Australia. It’s not an endangered species, but at least it’s a real animal. If Ubuntu were going for a fancy name with fictional animals, Disco Dragon would have been a lot more fun in its own way. And yes, dragons are not real either. Sorry to break your heart.

Ubuntu 19.04 Release Date

There is no official release schedule for Ubuntu 19.04 Disco Dingo yet. However, you can easily make a few guesses.

You probably already know the logic behind Ubuntu’s version number. 19.04 will be released in the month ’04’ of the year ’19’. In other words, it will be released in April 2019.

But that’s just the month. What about the exact release date? Considering that a non-LTS release of Ubuntu follows a 26-week schedule, it is safe to predict that Ubuntu 19.04 will be released on 18th April 2019.

Have you ever noticed that a new Ubuntu version is released on Thursdays only?

What new features are coming to Ubuntu 19.04?

It’s difficult to say at this moment because development of Ubuntu 19.04 has hardly begun. You may expect better power management, better boot times thanks to a new compression algorithm, and Android integration, among other things.

Source

Download Bitnami Django Stack Linux 2.1.2-1

Bitnami Django Stack is a free and cross-platform software project that provides users with an all-in-one installer designed to greatly simplify the installation of the Django framework and its runtime dependencies on desktop computers and laptops. It includes ready-to-run versions of the Python, Django, MySQL and Apache web technologies.

What is Django?

Django is a high-level, free, platform-independent and widely-used Python web framework that encourages rapid development and clean, pragmatic design. It lets users build elegant and high-performing web applications quickly.

Installing Bitnami Django Stack

The Bitnami Django Stack product is distributed as native installers, which have been built using BitRock’s cross-platform installer tool, designed for the GNU/Linux, Microsoft Windows and Mac OS X operating systems.

To install the Django application and all of its server-related requirements, you will have to download the file that corresponds to your computer’s hardware architecture (32-bit or 64-bit), run it and follow the instructions displayed on the screen.

Run Django in the cloud

Thanks to Bitnami’s pre-built cloud images, users can run the Django application in the cloud on their own hosting platform or by using a pre-built cloud image for the Windows Azure and Amazon EC2 cloud hosting providers.

Virtualize Django or use the Docker container

In addition to running Django in the cloud or installing it on personal computers, it is possible to virtualize it, thanks to Bitnami’s virtual appliance, which is based on the latest LTS (Long Term Support) release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami Django Module

Unfortunately, Bitnami does not offer a Django module for its LAMP (Linux, Apache, MySQL and PHP), WAMP (Windows, Apache, MySQL and PHP) or MAMP (Mac, Apache, MySQL and PHP) stacks, which would have allowed users to deploy Django on personal computers without dealing with its runtime dependencies.

Source

Why Your Server Monitoring (Still) Sucks

Five observations about why your server monitoring still
stinks, by a monitoring specialist-turned-consultant.

Early in my career, I was responsible for managing a large fleet of
printers across a large campus. We’re talking several hundred networked
printers. It often required a 10- or 15-minute walk to get to
some of those printers physically, and many were used only sporadically. I
didn’t
always know what was happening until I arrived, so it was anyone’s
guess as to the problem. Simple paper jam? Driver issue? Printer currently
on fire? I found out only after the long walk. Making this even more
frustrating for everyone was that, thanks to the infrequent use of some of
them, a printer with a problem might go unnoticed for weeks, making itself
known only when someone tried to print with it.

Finally, it occurred to me: wouldn’t it be nice if I knew about the problem
and the cause before someone called me? I found my first monitoring tool
that day, and I was absolutely hooked.

Since then, I’ve helped numerous people overhaul their monitoring
systems. In doing so, I noticed the same challenges repeat themselves regularly. If
you’re responsible for managing the systems at your organization, read
on; I have much advice to dispense.

So, without further ado, here are my top five reasons why your monitoring
is crap and what you can do about it.

1. You’re Using Antiquated Tools

By far, the most common reason for monitoring being screwed up is a
reliance on antiquated tools. You know that’s your issue when you spend
more time working around the warts of your monitoring tools or when
you’ve got a bunch of custom code to get around some major missing
functionality. But the bottom line is that you spend more time trying to
fix the almost-working tools than just getting on with your job.

The problem with using antiquated tools and methodologies is that
you’re just making it harder for yourself. I suppose it’s certainly
possible to dig a hole with a rusty spoon, but wouldn’t you prefer to use a
shovel?

Great tools are invisible. They make you more effective, and the job is
easier to accomplish. When you have great tools, you don’t even notice
them.

Maybe you don’t describe your monitoring tools as “easy to use”
or “invisible”. The words you might opt to use would make my editor
break out a red pen.

This checklist can help you determine if you’re screwing yourself.

  • Are you using Nagios or a Nagios derivative to monitor
    elastic/ephemeral infrastructure?
  • Is there a manual step in your deployment process for a human to “Add
    $thing to monitoring”?
  • How many post-mortems contained an action item such as, “We
    weren’t monitoring $thing”?
  • Do you have a cron job that tails a log file and sends an email via
    sendmail?
  • Do you have a syslog server to which all your systems forward their
    logs…never to be seen again?
  • Do you collect system metrics only every five minutes (or even less
    often)?

If you answered yes to any of those, you are relying on bad, old-school
tooling. My condolences.

The good news is your situation isn’t permanent. With a little work, you
can fix it.

If you’re ready to change, that is.

It is somewhat amusing (or depressing?) that we in Ops so readily replace
entire stacks, redesign deployments over a week, replace configuration
management tools and introduce modern technologies, such as Docker and
serverless—all without any significant vetting period.

Yet, changing a monitoring platform is verboten. What gives?

I think the answer lies in the reality of the state of monitoring at many
companies. Things are pretty bad. They’re messy, inconsistent in
configuration, lack a coherent strategy, have inadequate automation…but
it’s all built on the tools we know. We know their failure modes; we know
their warts.

For example, the industry has spent years and a staggering amount of
development hours bolting things onto Nagios to make it more palatable
(such as
nagios-herald, NagiosQL, OMD), instead of asking, “Are we throwing
good money after bad?”

The answer is yes. Yes we are.

Not to pick on Nagios—okay, yes, I’m going to pick on Nagios. Every change
to the Nagios config, such as adding or removing a host, requires a config
reload. In an infrastructure relying on ephemeral systems, such as
containers, the entire infrastructure may turn over every few minutes. If
you have two dozen containers churning every 15 minutes, it’s possible that
Nagios is reloading its config more than once a minute. That’s insane.

And what about your metrics? The old way to decide whether something was broken
was to check the current value of a check output against a threshold. That
clearly results in some false alarms, so we added the ability to fire
an alert only if N number of consecutive checks violated the threshold. That has
a pretty glaring problem too. If you get your data every minute, you may
not know of a problem until 3–5 minutes after it’s happened. If you’re
getting your data every five minutes, it’s even worse.

And while I’m on my soapbox, let’s talk about automation. I remember back
when I was responsible for a dozen servers. It was a big day when I spun up
server #13. These sorts of things happened only every few months. Adding my
new server to my monitoring tools was, of course, on my checklist, and it
certainly took more than a few minutes to do.

But the world of tech isn’t like that anymore. Just this morning, a
client’s infrastructure spun up a dozen new instances and spun down
half of them an hour later. I knew it happened only after the fact. The
monitoring systems knew about the events within seconds, and they adjusted
accordingly.

The tech world has changed dramatically in the past five years. Our beloved
tools of choice haven’t quite kept pace. Monitoring must be 100% automated,
both in registering new instances and services, and in de-registering them
all when they go away. Gone are the days when you can deal with a 5 (or
15!) minute delay in knowing something went wrong; many of the top
companies know within seconds that something isn’t right.

Continuing to rely on methodologies and tools from the old days, no matter
how much you enjoy them and know their travails, is holding you back from
giant leaps forward in your monitoring.

The bad old days of trying to pick between three equally terrible
monitoring tools are long over. You owe it to yourself and your company to
at least consider modern tooling—whether it’s SaaS or self-hosted
solutions.

2. You’re Chasing “the New Hotness”

At the other end of the spectrum is an affinity for new-and-exciting tools.
Companies like Netflix and Facebook publish some really cool stuff, sure.
But that doesn’t necessarily mean you should be using it.

Here’s the problem: you are (probably) not Facebook, Netflix, Google or
any of the other huge tech companies everyone looks up to. Cargo culting
never made anything better.

Adopting someone else’s tools or strategy because they’re successful with
them misses the crucial reasons why those tools work for them.

The tools don’t make an organization successful. The organization is
successful because of how its members think. Its approaches, beliefs,
people and strategy led the organization to create those tools. Its
success stems from something much deeper than, “We wrote our own monitoring
platform.”

To approach the same sort of success the industry titans are having, you
have to go deeper. What do they know that you don’t? What are they
doing, thinking, saying, believing that you aren’t?

Having been on the inside of many of those companies, I’ll let you in on
the secret: they’re good at the fundamentals. Really good. Mind-blowingly
good.

At first glance, this seems unrelated, but allow me to quote John Gall,
famed systems theorist:

A complex system that works is invariably found to have evolved
from a simple system that worked. A complex system designed from scratch
never works and cannot be patched up to make it work. You have to start
over, beginning with a working simple system.

Dr. Gall quite astutely points out the futility of adopting other people’s
tools wholesale. Those tools evolved from simple systems to suit the needs
of that organization and culture. Dropping such a complex system into
another organization or culture may not yield favorable results, simply
because you’re attempting to shortcut the hard work of evolving a simple
system.

So, you want the same success as the veritable titans of industry? The
answer is straightforward: start simple. Improve over time. Be patient.

3. You’re Unnecessarily Afraid of “Vendor Lock-in”

If there’s one argument I wish would die, it’s the one where people opine
about wanting to “avoid vendor lock-in”. That argument is utter hogwash.

What is “vendor lock-in”, anyway? It’s the notion that if you were to go
all-in on a particular vendor’s product, it would become prohibitively
difficult or expensive to change. Keurig’s K-cups are a famous example of
vendor lock-in. They can be used only with a Keurig coffee machine, and
a Keurig coffee machine accepts only the proprietary Keurig K-cups. By
buying a Keurig, you’re locked into the Keurig ecosystem.

Thus, if I were worried about being locked in to the Keurig ecosystem, I’d
just avoid buying a Keurig machine. Easy.

If I’m worried about vendor lock-in with, say, my server infrastructure,
what do I do? Roll out both Dell and HP servers together? That seems like a
really dumb idea. It makes my job way more difficult. I’d have to build to
the lowest common denominator of each product and ignore any
product-specific features, including the innovations that make a product
appealing. This ostensibly would allow me to avoid being locked in to one
vendor and keep any switching costs low, but it also means I’ve got a
solution that only half works and is a nightmare to manage at any sort of
scale. (Have you ever tried to build tools to manage and automate both
iDRAC and IPMI? You really don’t want to.)

In particular, you don’t get to take advantage of a product’s
unique features. By trying to avoid vendor lock-in, you end up with a
“solution” that ignores any advanced functionality.

When it comes to monitoring products, this is even worse. Composability and
interoperability are core tenets of most products available to you. The
state of monitoring solutions today favors a high degree of interoperability
and open APIs. Yes, a single vendor may have all of your data, but it’s
often trivial to move that same data to another vendor without a major loss
of functionality.

One particular problem with this whole vendor lock-in argument is that it’s
often used as an excuse to not buy SaaS or commercial, proprietary
applications. The perception is that by using only self-hosted, open-source
products, you gain more freedom.

That assumption is wrong. You haven’t gained more freedom or avoided vendor
lock-in at all. You’ve traded one vendor for another.

By opting to do it all yourself (usually poorly), you effectively become
your own vendor—a less experienced, more overworked vendor. The chances
you would design, build, maintain and improve a monitoring platform
better—on top of your regular duties—than a monitoring vendor? They round to
zero. Is tool-building really the business you want to be in?

In addition, switching costs from in-house solutions are astronomically
higher than from one commercial solution to another, because of the
interoperability that commercial vendors have these days. Can the same be
said of your in-house solution?

4. You’re Monitoring the Wrong Stuff

Many years ago, at one of my first jobs, I checked out a database server
and noticed it had high CPU utilization. I figured I would let my boss
know.

“Who complained about it?”, my boss asked.

“Well, no one”, I replied.

My boss’ response has stuck with me. It taught me a valuable lesson:
“if it’s not impacting anyone, is there really a problem?”

My lesson is this: data without context isn’t useful. In monitoring, a
metric matters only in the context of users. If low free memory is a
condition you notice but it’s not impacting users, it’s not worth
firing an alert.

In all my years of operations and system administration, I’ve not once seen
an OS metric directly indicate active user impact. A metric sometimes
can be an indirect indicator, but I’ve never seen it directly indicate an
issue.

Which brings me to the next point. With all of these metrics and logs from
the infrastructure, why is your monitoring not better off? The reason is
because Ops can solve only half the problem. While nginx worker counts,
Tomcat garbage collection and Redis key evictions are all important
metrics for understanding infrastructure performance, none of them help
you understand the software your business runs. The biggest value
of monitoring comes from instrumenting the applications on which your users
rely.
(Unless, of course, your business provides infrastructure as a
service—then, by all means, carry on.)

Nowhere is this more clear than in a SaaS company, so let’s consider
that as an example.

Let’s say you have an application that is a standard three-tier web app:
nginx on the front end, Rails application servers and PostgreSQL on the
back end. Every action on the site hits the PostgreSQL database.

You have all the standard data: access and error logs, nginx metrics, Rails
logs, Postgres metrics. All of that is great.

You know what’s even better? Knowing how long it takes for a user to log in.
Or how many logins occur per minute. Or even better: how many login
failures occur per minute.

The reason this information is so valuable is that it tells you about the
user experience directly. If login failures rose during the past five
minutes, you know you have a problem on your hands.

But, you can’t see this sort of information from the infrastructure
perspective alone. If I were to pay attention only to the
nginx/Rails/Postgres performance, I would miss this incident entirely. I
would miss something like a recent code deployment that changed some
login-related code, which caused logins to fail.

To solve this, become closer friends with your engineering team. Help them
identify useful instrumentation points in the code and implement more
metrics and logging. I’m a big fan of the statsd protocol for this sort of
thing; most every monitoring vendor supports it (or their own
implementation of it).
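As a rough illustration of how simple the statsd wire format is, a one-off metric can be pushed with nothing more than a shell one-liner. This is a sketch only: it assumes a statsd listener on the conventional UDP port 8125, and the metric name is made up:

# Increment a hypothetical "failed logins" counter with a single UDP datagram
echo -n "myapp.logins.failed:1|c" | nc -u -w1 127.0.0.1 8125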

5. You Are the Only One Who Cares

If you’re the only one who cares about monitoring, system performance and
useful metrics will never meaningfully improve. You can’t do this alone.
You can’t even do this if only your team cares. I can’t begin to count how
many times I’ve seen Ops teams put in the effort to make improvements, only
to realize no one outside the team paid attention or thought it mattered.

Improving monitoring requires company-wide buy-in. Everyone from the
receptionist to the CEO has to believe in the value of what you’re doing.
Everyone in the company knows the business needs to make a profit.
Similarly, it requires a company-wide understanding that improving
monitoring improves the bottom line and protects the company’s profit.

Ask yourself: why do you care about monitoring?

Is it because it helps you catch and resolve incidents faster? Why is that
important to you?

Why should that be important to your manager? To your manager’s
manager? Why should the CEO care?

You need to answer those questions. When you do so, you can start making
compelling business arguments for the investments required (including in
the best new tools).

Need a starting point? Here are a few ideas why the business might care
about improving monitoring:

  • The business can manage and mitigate the risk of incidents and
    failures.
  • The business can spot areas for performance improvements, leading to a
    better customer experience and increased revenue.
  • The business can resolve incidents faster (often before they become
    critical), leading to more user goodwill and enhanced reputation.
  • The business avoids incidents going from bad to worse, which protects
    against loss of revenue and potential SLA penalty payments.
  • The business better controls infrastructure costs through capacity
    planning and forecasting, leading to improved profits and lower
    expenses.

I recommend having a candid conversation with your team on why they care
about monitoring. Be sure to involve management as well. Once you’ve had
those conversations, repeat them again with your engineering team. And your
product management team. And marketing. And sales. And customer support.

Monitoring impacts the entire company, and often in different ways. By the
time you find yourself in a conversation with executives to request an
investment in monitoring, you will be able to speak their language. Go
forth and fix your monitoring.

I hope you found at least a few ideas to improve your monitoring. Becoming
world-class in this is a long, hard, expensive road, but the good news is
that you don’t really need to be among the best to see massive benefits. A
few straightforward changes, added over time, can radically improve your
company’s monitoring.

To recap:

  1. Use better tools. Replace them as better tools become available.
  2. But, don’t fixate on the tools. The tools are there to help you solve
    a problem—they aren’t the end goal.
  3. Don’t worry about vendor lock-in. Pick products you like and go all-in
    on them.
  4. Be careful about what you collect and on what you issue alerts. The
    best data tells you about things that have a direct user impact.
  5. Learn why your company cares about monitoring and express it in
    business outcomes. Only then can you really get the investment you
    want.

Good luck, and happy monitoring.

Source
