Amazon RDS Performance Insights is Generally Available on RDS for Oracle

Posted On: Oct 31, 2018

Amazon RDS Performance Insights, an advanced database performance monitoring feature that makes it easy to diagnose and solve performance challenges on Amazon RDS databases, is now generally available on Amazon RDS for Oracle. It offers a free tier with 7 days of data retention and a paid long-term data retention option.

Performance Insights allows non-experts to detect performance problems with an easy-to-understand dashboard that visualizes database load. It also supports Amazon Aurora with PostgreSQL compatibility, Amazon Aurora with MySQL compatibility, Amazon RDS for PostgreSQL, and Amazon RDS for MySQL, with additional database engines available in preview.

You can get started by logging into the Amazon RDS Management Console, enabling Performance Insights for a new or existing database instance, and viewing the Performance Insights dashboard. The Amazon Web Services API and SDK make it easy to integrate Performance Insights data into on-premises and third-party monitoring tools. For more information, please visit the documentation.
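
As a quick example, Performance Insights can also be switched on from the AWS CLI when modifying an existing instance; a minimal sketch, using a hypothetical instance identifier:

aws rds modify-db-instance --db-instance-identifier my-oracle-db --enable-performance-insights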

Amazon RDS makes it easy to set up, operate, and scale database deployments in the cloud. For more information about Amazon RDS Performance Insights, please visit the product page. For regional availability, see the Performance Insights pricing page.

Source

How to Multi-Task in Linux with the Command Line

One of the most jarring changes when moving from a Windows-based environment to the command line is the loss of easy multi-tasking. Even on Linux, if you use an X Window System desktop, you can simply click with the mouse to open another program. On the command line, however, you’re pretty much stuck with what’s on your screen at any given time. In this tutorial, we will show you how to multi-task in Linux with the command line.

Background and Foreground Process Management

There are still ways to multi-task in Linux, though, and some of them are more comprehensive than others. One built-in approach that doesn’t require any additional software is simply moving processes between the background and the foreground. We wrote a tutorial on that a short while back. However, it has some disadvantages.
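
As a quick refresher, that built-in workflow looks roughly like this (a sketch; the copy job is just a stand-in for any long-running command):

cp -r /var/www /backup/www &   # start a job in the background right away with "&"
# for a job already running in the foreground, press ctrl+z to suspend it, then:
bg       # resume the suspended job in the background
jobs     # list background jobs and their job numbers
fg %1    # bring job number 1 back to the foreground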

Disadvantages

First, to send a process into the background, you have to suspend it. There’s no way to push an already running program into the background and keep it running in a single step.

Second, you need to break your workflow to start a new command. You have to exit what you’re currently doing and type more commands into the shell. It works, but it’s inconvenient.

Third, you have to watch out for output from the background processes. Any output from them will appear on the command line and interfere with whatever you’re working on at that moment. So background tasks need to either redirect their output to a separate file or be silenced altogether.

These disadvantages make background and foreground process management awkward for anything more than quick jobs. A better solution is to use the “screen” utility, as shown below.

But First – You Can Always Open a new SSH Session

Don’t forget that you can always just open a new SSH session. Here’s a screenshot of us doing just that:

Open Two Separate SSH Shells

Opening new sessions all the time can get inconvenient, though, and that’s when you need “screen”.

Using “Screen” Instead

The “screen” utility allows you to have multiple workflows open at the same time – the closest analog to “windows”. It’s available in the standard repositories of most Linux distributions. Install it on CentOS/RHEL like this:

sudo yum install screen

install screen linux
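
On Debian or Ubuntu, the same package can be installed with apt:

sudo apt install screen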

Opening a New Screen

Now start your session by typing “screen”.

This will create a blank window within your existing SSH session and give it a number that’s shown in the title bar like this:

Waiting for Input

My screen here has the number “0” as shown. In this screenshot, I’m using a dummy “read” command to block the terminal and make it wait for input. Now let’s say we want to do something else while we wait.

To open a new screen and do something else, we type:

ctrl+a c

“ctrl+a” is the default key combination for managing screens within the screen program. What you type after it determines the action. For example:

  • ctrl+a c – Creates a new screen
  • ctrl+a [number] – Goes to a specific screen number
  • ctrl+a k – Kills the current screen
  • ctrl+a n – Goes to the next screen
  • ctrl+a " – Lists all active screens in the session

So if we press “ctrl+a c”, we get a new screen with a new number as shown here:

Second Screen Linux

You can use the cursor keys to navigate the list and jump to whichever screen you want.

Screens are the closest thing you’ll get to a “windows”-like system on the Linux command line. Sure, it’s not as easy as clicking with a mouse, but a graphical subsystem is far more resource-intensive in the first place. With screens, you get almost the same functionality and full multi-tasking!
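
To tie it all together, a typical session might look something like this (a rough sketch; the session name “work” is arbitrary):

screen -S work    # start a new session named "work"
# run a long job, press ctrl+a c for a new window, ctrl+a " to pick a window from the list
# press ctrl+a d to detach while everything keeps running
screen -ls        # list sessions that are still running
screen -r work    # reattach to the "work" session later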

If you are one of our managed VPS hosting customers, you can always ask our system administrators to set this up for you. They are available 24/7 and can take care of your request immediately.

If you liked this post on how to multi-task in Linux with the command line, please share it with your friends on social media. If you have any questions about the post, please leave a comment below and one of our system administrators will reply to it.

Source

GOG adds a Linux version of the RPG ‘Silver’, still has a graphical glitch during combat

After THQ Nordic released the RPG ‘Silver’ on Steam with Linux support back in June last year, GOG now has the Linux version too.

About the game:

Silver is the European answer to JRPGs, which were very popular at the time. With eye-candy in the form of pre-rendered 2D backgrounds, manga-inspired character design, and console-style gameplay, it introduced PC gamers to a new genre. This game is very unique on PC as there are only a few such titles on this platform, so if you feel like trying a different approach to role-playing, then this is a title for you!

Features:

  • Form your party from a cast of interesting and diverse characters, each with their own unique traits.
  • Gripping story and polished gameplay combine to make a “simple” RPG that’s still a lot of fun.
  • Unique combat system with real-time, fast-paced action–you assume direct control over one of the characters, and the rest of your team fights on their own.

While it’s an interesting game, the Linux release suffers from a troublesome graphical glitch. Whenever you hit someone, a square appears that I assume should show blood or some other effect. Instead, it basically makes them go transparent so you see whatever is behind them. It’s weird, it doesn’t exactly look good and it’s been an issue now for well over a year since the re-release.

With that issue in mind, I hesitate to say it’s playable. It doesn’t break the game, but it doesn’t look good either.

Find it on GOG now.

Source

Download Bitnami Diaspora Stack Linux 0.7.7.0-1

Bitnami Diaspora Stack is a free and multiplatform software project, an all-in-one, one-click install solution that includes ready-to-run versions of the Diaspora application, the Apache web server, the MySQL database server and the PHP server-side scripting language. It is the perfect product for those who want to deploy Diaspora on a personal computer without knowing anything about installing a database or web server.

What is Diaspora?

Diaspora is a free and open source personal web server engineered to implement a distributed social networking service, which is composed of nodes called “pods.” The Bitnami Diaspora Stack product can be deployed via native installers, as a virtual appliance, or in the cloud on your own server. It supports both Linux and Mac OS X operating systems.

Installing Bitnami Diaspora Stack

To install the Diaspora application and its server-related requirements, you will have to download the package that corresponds to your computer’s hardware architecture (32-bit or 64-bit), run it and follow the on-screen instructions.
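
As a rough sketch on a 64-bit Linux machine, that would look something like this (the installer filename here is a placeholder; use the exact name of the file you downloaded):

chmod +x bitnami-diaspora-linux-x64-installer.run   # make the downloaded installer executable
./bitnami-diaspora-linux-x64-installer.run          # launch the guided installer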

Run Diaspora in the cloud

Thanks to Bitnami, users will be able to run their own Diaspora stack server in the cloud using one of the pre-built cloud images for the Windows Azure and Amazon EC2 cloud hosting providers or with their own hosting platform.

Virtualize Diaspora or use the Docker container

Besides installing Diaspora on your personal computer or deploying it in the cloud, it is possible to virtualize it on the VMware ESX/ESXi and Oracle VirtualBox virtualization software, thanks to Bitnami’s virtual appliance, which is based on the latest stable version of the Ubuntu Linux distribution.

The Bitnami Diaspora Module

Unfortunately, Bitnami does not provide a Diaspora module for its LAMP (Linux, Apache, MySQL and PHP) or MAMP (Mac, Apache, MySQL and PHP) stacks, which would have let users deploy the application on desktop computers or laptops without having to install its runtime dependencies separately.

Source

The Monitoring Issue | Linux Journal

In 1935, Austrian physicist Erwin Schrödinger, still flying high after his Nobel Prize win from two years earlier, created a simple thought experiment.

It ran something like this:

If you have a file server, you cannot know if that server is up or down…until you check on it. Thus, until you use it, a file server is—in a sense—both up and down. At the same time.

This little brain teaser became known as Schrödinger’s File Server, and it’s regarded as the first known critical research on the intersection of Systems Administration and Quantum Superposition. (Though, why Erwin chose, specifically, to use a “file server” as an example remains a bit of a mystery—as the experiment works equally well with any type of server. It’s like, we get it, Erwin. You have a nice NAS. Get over it.)

… Okay, perhaps it didn’t go exactly like that. But I’m confident it would have…you know…had good old Erwin had a nice Network Attached Storage server instead of a cat.

Regardless, the lessons from that experiment certainly hold true for servers. If you haven’t checked on your server recently, how can you be truly sure it’s running properly? Heck, it might not even be running at all!

Monitoring a server—to be notified when problems occur or, even better, when problems look like they are about to occur—seems, at first blush, to be a simple task. Write a script to ping a server, then email me when the ping times out. Run that script every few minutes and, shazam, we’ve got a server monitoring solution! Easy-peasy, time for lunch!
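
For the record, that naive approach is only a handful of lines of shell (a sketch; the host name and email address are placeholders, and the mail command assumes a working local mail setup), dropped into cron to run every few minutes:

if ! ping -c 3 -W 2 fileserver > /dev/null 2>&1; then
    echo "fileserver did not answer a ping at $(date)" | mail -s "fileserver appears down" admin@example.com
fi
# crontab entry to run the check every five minutes:
# */5 * * * * /usr/local/bin/check_fileserver.sh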

Whoah, there! Not so fast!

That server monitoring solution right there? It stinks. It’s fragile. It gives you very little information (other than the results of a ping). Even for administering your own home server, that’s barely enough information and monitoring to keep things running smoothly.

Even if you have a more robust solution in place, odds are there are significant shortcomings and problems with it. Luckily, Linux Journal has your back—this issue is chock full of advice, tips and tricks for how to keep your servers effectively monitored.

You know, so you’re not just guessing if the cat is still alive in there.

Mike Julian (author of O’Reilly’s Practical Monitoring) goes into detail on a bunch of the ways your monitoring solution needs serious work in his adorably titled “Why Your Server Monitoring (Still) Sucks” article.

We continue “telling it like it is” with Corey Quinn’s treatise on Amazon’s CloudWatch, “CloudWatch Is of the Devil, but I Must Use It”. Seriously, Corey, tell us how you really feel.

With our cathartic venting session behind us, we’ve got a detailed, hands-on walk-through of how to use Monit (an open-source process supervisor for Linux) coupled with RRDtool (a GPL’d tool for capturing data over long periods of time, such as from shell scripts, and graphing it) to monitor your server in a fairly simple, and very open-source, way.

Round that out with an interview with Steve Newman (one of the folks who created Writely, which you might know as Google Docs, following Google’s acquisition in 2006) on his company, Scalyr, which handles server monitoring and log management—and you’ve got more server monitoring information than you can shake a stick at. Or, you can go back to guessing if the cat is still alive. That’s fun too.

Source

How to Change Screen Resolution on Ubuntu

Screen resolution is an important factor in enjoying your system. To get anything done, we humans need a way to interact with the machine, and the monitor is one of the most important parts of that I/O chain. Each monitor has a specific native resolution. When your system sends output to the screen, the monitor stretches the image to fit. If the system sends frames at the right resolution, the monitor will give you the best possible picture; otherwise, you’ll see blurring and other artifacts, and the system will be unpleasant to use. Let’s check out how to change your screen resolution on Ubuntu – one of the most popular Linux distros of all!

Before changing the resolution, make sure that your system has the latest graphics drivers installed. Get the latest driver for your NVIDIA, AMD or Intel graphics.

Go to the GNOME menu and search for “resolution”. Open “Displays” from the “Settings” section. Here, you’ll have the option of changing the resolution. Click on the “Resolution” setting.

There are a number of available resolutions. By default, the currently selected one should be your monitor’s native resolution. Here are some of the most popular screen resolutions and their common names.

  • Standard HD (720p) – 1280 × 720 px
  • Full HD (1080p) – 1920 × 1080 px
  • Quad HD (1440p) – 2560 × 1440 px
  • 4K UHD (2160p) – 3840 × 2160 px

Once you’ve selected an option, you’ll notice the “Apply” button in the top-right corner of the window. After applying it, the system waits 15 seconds for you to confirm the change. If you don’t confirm, it reverts to the previous resolution automatically. Sometimes you may pick a wrong resolution that pushes the confirmation dialog off the screen; in that case, the countdown can save you a lot of trouble.
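
If you’re on an X11 session and prefer the terminal, the same change can also be made with xrandr; a minimal sketch, assuming a hypothetical output name of HDMI-1 (run xrandr with no arguments first to see your real output names and supported modes):

xrandr                                    # list connected outputs and the modes they support
xrandr --output HDMI-1 --mode 1920x1080   # switch the HDMI-1 output to 1920x1080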

After applying the resolution, it’s better to restart your system so that all apps can adjust to the new resolution.

Source

Finally! The Venerable RISC OS is Now Open Source

November 1, 2018

It was recently announced that RISC OS was going to be released as open-source. RISC OS has been around for over 30 years. It was the first operating system to run on ARM technology and is still available on modern ARM-powered single-board computers, like the Raspberry Pi.

What is RISC OS?

RISC OS is open source

To give you the history of RISC OS, we need to go back to the early 1970s. UK entrepreneurs Clive Sinclair and Chris Curry founded Science of Cambridge (which later became Sinclair Research) to sell electronics. One of their early products was a kit computer. Curry wanted to develop it into a full computer, but could not convince Sinclair to agree. As a result, Curry left Sinclair Research to found a new company with friend Hermann Hauser. The new company was eventually named Acorn Computer. (This name was chosen because it would come before Apple Computer in the phone book.)

Over the next decade, Sinclair and Acorn competed for the growing UK PC market. In the early 1980s, a project was started at Acorn to create a new computer system based on RISC technology. They had seen how popular the IBM PC was among businesses and they wanted to capture some of that market. At the same time, Acorn engineers were working on an operating system for the new line of computers. RISC OS was originally launched in 1987 as Arthur 1.20 on the new Acorn Archimedes.

Acorn suffered financially during the late 80s and 90s. In 1999, the company changed its name to Element 14 and shifted its focus to designing silicon. Development of RISC OS was halted at 3.60. In the years that followed, the RISC OS license bounced from company to company, which left the ownership of RISC OS in a very messy state. RISC OS Developments Ltd has attempted to fix this by purchasing the most recent owner of the license, Castle Technology Ltd.

RISC OS 5

Welcome to the Open Source Community

RISC OS Open announced on October 23rd that RISC OS would be open-sourced under the Apache 2.0 License. Responsibilities will be shared by two organizations: RISC OS Open Limited will “offer professional services to customers wishing to deploy RISC OS commercially” and RISC OS Developments Ltd will handle development and investment in the operating system.

RISC OS 5.26 has been released to reflect the operating system’s new open-source nature. It even says in the announcement that “This is actually functionally identical to 5.24, so we don’t have to retest everything as actually being stable.”

Why RISC OS?

I’m sure a few of you in the audience are wondering why you should care about an operating system that is over 30 years old. I will give you two reasons.

First, it is an important part of computer history, specifically UK computer history. After all, it ran on ARM before ARM ran everything. Many of us know about the early days of Apple and IBM, which can mislead us into thinking that the US has always been the center of the PC world. In some ways that might be true, but other countries have made amazing contributions to technology that we take for granted. We mustn’t forget that.

Second, it is one of the few operating systems written to take advantage of ARM. Most of the operating systems and software available for ARM were written for something else first and are therefore not optimized for ARM. According to RISC OS Developments Ltd, “A high performance and low footprint system, RISC OS provides a modern desktop interface coupled with easy access to programming, hardware and connectivity. It continues to incorporate the world-renowned programming language, BBC BASIC, and remains amazingly compact, fitting onto a tiny 16MB SD card.”

Final Thoughts

I would like to welcome RISC OS to the open-source community. I have never used RISC OS. Mainly because I don’t have any hardware to run it on. However, now I’m starting to eye a Raspberry Pi. Maybe that’ll be a future article. We’ll have to see.

Have you ever used RISC OS? If so, what are your favorite features?

Source

Download Bitnami Discourse Stack Linux 2.1.2-0

Bitnami Discourse Stack is a multiplatform and free software project that aims to deliver all-in-one, easy-to-install and easy-to-use native installers for the Discourse discussion application, as well as all of its required dependencies. The Discourse stack is also distributed as cloud images, a virtual appliance, and a Docker container.

What is Discourse?

Discourse is an open source and freely distributed discussion platform that features built-in governance and moderation systems, which let discussion communities protect themselves from spambots, bad actors and trolls. It offers a wide variety of attractive functionality.

Installing Bitnami Discourse Stack

Bitnami Discourse Stack is available for download on the GNU/Linux and Mac OS X operating systems, supporting both 32-bit and 64-bit (recommended) computers. To install Discourse on your desktop computer or laptop, you must download the package that corresponds to your computer’s hardware architecture, run it and follow the instructions displayed on the screen. Please note that the Discourse stack is not available for the Microsoft Windows platform.

Run Discourse in the cloud

Thanks to Bitnami, users are now able to run their own Discourse stack server in the cloud with their hosting platform or by using a pre-built cloud image for the Amazon EC2 and Windows Azure cloud hosting providers.

Bitnami’s Discourse virtual appliance

Bitnami also offers a virtual appliance for virtualizing the Discourse application on the Oracle VirtualBox and VMware ESX, ESXi virtualization software, based on the latest stable version of the Ubuntu Linux distribution.

The Discourse Docker container and LAMP and MAMP modules

Besides installing Discourse on your personal computer, running it in the cloud or virtualizing it, you can use the Docker container, which is available for download on the project’s homepage (see the link below for details). Unfortunately, you won’t be able to deploy Discourse on top of the Bitnami LAMP (Linux, Apache, MySQL and PHP) or MAMP (Mac, Apache, MySQL and PHP) Stack products, so that route cannot spare you from dealing with its runtime dependencies.
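
As a sketch, grabbing the container follows the usual Docker workflow (the bitnami/discourse image name is an assumption here; check the project’s homepage for the exact image and the compose file it expects, since Discourse also needs a database and cache running alongside it):

docker pull bitnami/discourse   # fetch the Discourse container image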

Source

System76 Announces American-Made Desktop PC with Open-Source Parts

Early in 2017—nearly two years ago—System76 invited me, and a handful of others, out to its Denver headquarters for a sneak peek at something new they’d been working on.

We were ushered into a windowless, underground meeting room. Our phones and cameras confiscated. Seriously. Every word of that is true. We were sworn to total and complete secrecy. Assumedly under penalty of extreme death…though that part was, technically, never stated.

Once the head honcho of System76, Carl Richell, was satisfied that the room was secure and free from bugs, the presentation began.

System76 told us the company was building its own desktop computers. Ones it designed itself. From-scratch cases. With wood. And inlaid metal. What’s more, these designs would be open. All built right there in Denver, Colorado.

We were intrigued.

Then they showed them to us, and we darn near lost our minds. They were gorgeous. We all wanted them.

But they were not ready yet. This was early on in the design and engineering, and they were looking for feedback—to make sure System76 was on the right track.

They were.

Flash-forward to today (November 1, 2018), and these Linux-powered, made in America desktop machines are finally being unveiled to the world as the Thelio line (which they’ve been teasing for several weeks with a series of sci-fi themed stories).

The Thelio comes in three sizes:

  • Thelio (aka “small”) — max 32GB RAM, 24TB storage.
  • Thelio Major (aka “medium”) — max 128GB RAM, 46TB storage.
  • Thelio Massive (aka “large”) — max 768GB RAM, 86TB storage.

""

All three sport the same basic look: part black metal, part wood (with either maple or walnut options) with rounded side edges. The cases open with a single slide up of the outer housing, with easy swapping of components. Lots of nice little touches, like a spot for in-case storage of screws that can be used in securing drives.

In an awesomely nerdy touch, the rear exhaust grill shows the alignment of planets in the solar system…at UNIX Epoch time. Also known as January 1, 1970. A Thursday.

""

They come in both Intel and AMD CPU varieties, so you get to pick between an Intel chip (ranging from i5 to i9 to Xeon) and an AMD chip (Ryzen 5 or 7, or Threadripper), with a bunch of GPU options available, including an AMD RX Vega 11, RX 580, NVIDIA GeForce RTX 2080, Titan V and quite a few others (both beefier and less so).

Temperature control is assisted by a custom daughterboard that controls airflow (along with power and LED), dubbed “Thelio Io”. This daughterboard has open firmware and is certified by the Open Source Hardware Association (OSHWA).

That last little bit is what I find most interesting about this new endeavor from System76. The more open a design is, the better for all. Makes maintenance and customization easier and helps others to learn from the designs for their own projects.

Thelio hardware is not completely open. But the company says that’s what it’s working toward. As System76 puts it, the company is “chipping away at the proprietary bits until it’s 100% open source.” This is a big move in a wonderfully open direction.

Also…wood. The case is partially made out of wood. A computer. Made with wood.

A wooden computer.

There need to be more things like that in this world.

Source

How to Search for Files from the Linux Command Line | Linux.com

Learn how to use the find command in this tutorial from our archives.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. To make learning this command even more enticing, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point…] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. The most simplistic structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!”. Well, the path for the file could be the root of your drive; so / would be a legitimate path. Entering that as your path would take find longer to process — because it has to start from scratch — but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

  • -name – case sensitive
  • -iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common type descriptors are:

  • f – regular file
  • d – directory
  • l – symbolic link
  • c – character devices
  • b – block devices

Now, suppose you want to locate all character devices (files that provide unbuffered access to a device) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.
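
If it’s actual block devices you’re after (disks and partitions rather than character devices), the same option takes b instead:

find /dev -type b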

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc (Figure 1).

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or if you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and redirect the results into a file called conf_search. This new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.
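
For instance, to also see how large each match is, the results can be handed straight to ls via -exec (a sketch; adjust the starting path and the size threshold to suit):

find /home -type f -size +1G -exec ls -lh {} \;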

You can search with the following size descriptions:

  • c – bytes
  • k – Kilobytes
  • M – Megabytes
  • G – Gigabytes
  • b – 512-byte blocks

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete, knowledge of how to make this powerful tool work for you.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Source
