55 Percent of Cloud Developers Contribute to Open Source, Says Survey | Linux.com

In presenting the results of its survey of 4,300 cloud developers, DigitalOcean seemed surprised that only 55 percent of respondents contribute to open source projects. Yet to tech outsiders — or old-timers — it may be more surprising that more than half of developers do contribute. There are relatively few professions in which companies and their employees regularly offer pro bono services for the greater good of the industry.

DigitalOcean, which provides cloud infrastructure software and services, timed its “Currents” survey release to coincide with the conclusion of its fifth annual Hacktoberfest program. Co-hosted with GitHub and Twilio, Hacktoberfest invites developers to collaborate during the month of October on a smorgasbord of open source projects.

Corporate leaders appear to be sending mixed messages to their developers about open source. Although 71 percent of respondents to the DigitalOcean survey said that their employers “expect them to use open source software as part of their day-to-day development work,” employers are less supportive of their developers contributing to software that doesn’t directly benefit the company. Only 34 percent of respondents said they were given time to work on open source projects not related to work.

Younger developers more willing to contribute

The report reveals some encouraging signs, as well. Some 37 percent of the developers said they would contribute more to open source if their companies gave them the time to do so. In addition, despite some 44 percent of respondents saying they don’t contribute because they feel they lack the right skills and 45 percent saying they don’t know how to get started, the less experienced, and presumably younger, developers appear more open to contributing. A total of 60 percent of developers with five or fewer years of experience contribute to open source, while the number is “significantly less” for developers with more experience, says DigitalOcean. This bodes well for future contribution levels.

Developers in India were more likely to contribute to open source projects (68 percent) than any other nationality. Although DigitalOcean did not speculate, this may be due in part to the younger average age of Indian developers.

Motivations to contribute include the opportunity to improve coding skills, learn new technologies, and advance one’s career. Also noted was the less tangible benefit of being part of a community.

Among the many other findings in the survey, the leading programming language for open source projects was JavaScript (62 percent), followed by Python at 52 percent. The only other languages over 20 percent were PHP (29 percent), Java (28 percent), and CSS (25 percent). When asked which open source projects have “excited you the most” over the last three years, the React.js JavaScript library for building UIs took the top spot with 468 mentions, followed by Kubernetes (335), Docker (252), Linux (240), and TensorFlow (226).

Companies are failing to lead the open source charge by example. Only 18 percent of employees said their companies actively participated in open source organizations such as the Apache Foundation, the Node.js Foundation, and the Cloud Native Computing Foundation. Three out of four respondents said their companies have donated $1,000 or less to such organizations over the last year.

Not surprisingly, high cost was the leading reason (38 percent) why companies skimp on open source donations and labor contributions. This was followed by a preference for in-house development (33 percent) and lack of knowledge of the listed organizations (27 percent). More promisingly, 29 percent said their companies plan to contribute to such organizations in the future.

When asked which of the five leading tech companies were doing the most to support open source, 53 percent listed Google, and Microsoft came in second at 23 percent. Next came Facebook (10%), Amazon (4%), and Apple (1%). Although IBM does not appear on this list, its $34 billion acquisition of Red Hat this weekend — the second largest software acquisition in history — should boost its already extensive open source contributions in cloud software.

For more survey results, check out ActiveState’s survey of 1,407 open source developers, which focuses on open source runtimes, and the open source programs survey from The New Stack and The Linux Foundation, which looks at the role of open source programs within organizations.

Source

Use Linux SFTP Command to Transfer Files on Remote Servers


SFTP is a protocol that offers a secure, private channel for transferring files between systems using encryption. A common misconception is that the acronym stands for Secure File Transfer Protocol; it actually stands for SSH File Transfer Protocol. FTPS is a different protocol: an FTPS client first checks whether the FTPS server’s certificate is trusted and then secures plain FTP with SSL/TLS.

One may be forgiven for thinking that SFTP and FTP are similar in terms of functionality, but the two employ different protocols. You therefore cannot use a standard FTP client to connect to an SFTP server. In this guide, we will focus on the most commonly used SFTP commands.

Read Also: 12 lftp Commands to Manage Files with Examples

SFTP normally runs as an interactive command interpreter in its own environment, which is why the program prompt changes to sftp> once a session starts. From that point on, the usual system commands are not executed directly: you either use SFTP’s own commands (local variants are prefixed with l, such as lls and lcd) or prefix a command with ! to run it in the local shell.
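
For example, prefixing a command with ! runs it on the local system without leaving the session (the command shown here is just an illustration):

sftp> !df -h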

Not every system ships with an SFTP client. Depending on your operating system, you can use either a graphical SFTP utility, which must be installed separately, or the command-line client.

In this article, we will take you through some SFTP command examples that you can use from the Unix/Linux command line.

How to Connect With SFTP

SFTP establishes its connection over SSH, so the same authentication methods apply. Many people rely on saved passwords, but I would recommend setting up SSH keys, which let you log in to any system you need to access without typing a password each time.
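
As a quick sketch (user and remote-host are placeholders for your own credentials), a key pair can be generated and copied to the remote machine with the standard OpenSSH tools, after which SFTP sessions stop prompting for a password:

~ # ssh-keygen -t rsa
~ # ssh-copy-id user@remote-host
~ # sftp user@remote-host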

To start an SFTP session, you need a username and the remote hostname or IP address. The example below uses the placeholders user and remote-host; substitute your own values:

~ # sftp user@remote-host
user@remote-host's password:
Connected to remote-host.

As shown above, once the connection is accepted you are prompted for the password before gaining access to the remote host.

1) How to Get Help at the Prompt

If you are unsure which commands are available at the SFTP command line, type “?” or “help” at the prompt as follows:

sftp> ?

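The exact listing depends on your OpenSSH version, but it looks roughly like this (abbreviated):

Available commands:
bye                                Quit sftp
cd path                            Change remote directory to 'path'
get remote [local]                 Download file
lcd path                           Change local directory to 'path'
ls [path]                          Display remote directory listing
put local [remote]                 Upload file
...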

2) Confirm the Working Directory

The lpwd command prints the local working directory, while pwd prints the remote working directory.

sftp> lpwd

Output

Local working directory: /root
sftp> pwd

Output

Remote working directory: /upload

3) Listing Files

At the SFTP command prompt, you list both remote and local files using different commands.

Remote listing

sftp> ls

Local listing

sftp> lls

4) Uploading Files

You can upload either a single file or multiple files to the remote host.

To upload a single file:

sftp> put Hello-World.txt

Output

Uploading Hello-World.txt to /upload/Hello-World.txt
Hello-World.txt

To upload multiple files:

sftp> mput *.txt

Output

Uploading Hello-World.txt to /upload/Hello-World.txt
Hello-World.txt 100% 0 0.0KB/s 00:00
Uploading file1.txt to /upload/file1.txt
file1.txt 100% 0 0.0KB/s 00:00
Uploading file2.txt to /upload/file2.txt
file2.txt 100% 0 0.0KB/s 00:00
Uploading file3.txt to /upload/file3.txt
file3.txt 100% 0 0.0KB/s 00:00
Uploading file4.txt to /upload/file4.txt
file4.txt 100% 0 0.0KB/s 00:00

5) Downloading Files

You can download a single file or multiple files to the local system. To download a single file:

sftp> get file1.pdf

Output

Fetching /upload/file1.pdf to file1.pdf

To download multiple files:

sftp> mget *.pdf

Output

Fetching /upload/file1.pdf to file1.pdf
Fetching /upload/file2.pdf to file2.pdf
Fetching /upload/file3.pdf to file3.pdf
Fetching /upload/file4.pdf to file4.pdf
Fetching /upload/file5.pdf to file5.pdf

Note that a downloaded file keeps its remote name on the local system by default. To save it under a different name, specify the new name at the end of the command, as shown below.
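
For instance, to save the remote file1.pdf locally as renamed.pdf (the new name here is purely illustrative):

sftp> get file1.pdf renamed.pdf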

6) Switching Directories

To change the working directory on the remote server:

sftp> cd test

To change the working directory on the local machine:

sftp> lcd Documents

7) Creating directories

You can create directories on both the remote host and the local machine.

To create a new directory on the remote host:

sftp> mkdir test

To create a new directory on the local machine:

sftp> lmkdir Documents

8) Removing Files and Directories

You can also remove files and directories on the remote host.

To remove a file on the remote host:

sftp> rm Report.xls

To remove a directory on the remote host:

sftp> rmdir Department

Note: This command will only work if the target directory is empty.

9) Exiting the Command Shell

The exclamation mark (!) drops you from the SFTP prompt into a local shell, where you can run ordinary system commands; typing exit returns you to the SFTP session, as shown in the following example.

sftp> !

[root@remote-host ~]# exit
Shell exited with status 1
sftp>
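
To end the SFTP session itself rather than just the local shell, use bye (quit or exit also work):

sftp> bye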

As simple as it may look, SFTP is a very powerful tool for administering servers and managing file transfers between hosts. The commands above work with files and directories on both the remote and the local system.


Source

Download Bitnami Drupal Module Linux 8.6.2-0

Bitnami Drupal Module is a free, multi-platform software project designed from the outset as an add-on module for the Bitnami LAMP, WAMP and MAMP stacks. It simplifies the deployment of the Drupal application on desktop computers or laptops where one of those stacks is already installed.

What is Drupal?

In general, Drupal is used for community web portals, discussion websites, corporate websites, Intranet applications, personal websites or blogs, aficionado websites, e-commerce applications, resource directories, as well as social networking websites.

Installing Bitnami Drupal Module

Bitnami’s modules and stacks automate the setup of a web-based application on GNU/Linux, Microsoft Windows or Mac OS X operating systems. It is available for download as native installers for the 32-bit and 64-bit hardware platforms.

To install the Drupal application on top of a Bitnami LAMP (Linux, Apache, MySQL and PHP) Stack, Bitnami MAMP (Mac, Apache, MySQL and PHP) Stack or Bitnami WAMP (Windows, Apache, MySQL and PHP) Stack, you must download the package that corresponds to your computer’s architecture, run it and follow the on-screen instructions.

Run Drupal in the cloud or virtualize it

Thanks to Bitnami, users will be able to run Drupal in the cloud with their hosting provider or by using a pre-built cloud image for the Windows Azure and Amazon EC2 cloud hosting providers. Virtual appliances for Drupal are also available on Bitnami’s website, based on the latest stable release of Ubuntu and designed to support the Oracle VirtualBox and VMware ESX, ESXi virtualization software.

The Bitnami Drupal Stack and Docker container

Besides the Bitnami Drupal Module reviewed here, Bitnami offers all-in-one native installers that greatly simplify the installation and hosting of the Drupal application on desktop computers and laptops. Bitnami Drupal Stack is available for download on Softpedia, free of charge. A Drupal Docker container will also be distributed on the project’s website.

Source

Set Timezone Ubuntu | Linux Hint

Time is a very important part of everyday computing. We humans can tolerate hours of time mismatch, but for a computer even a millisecond of drift can cause real trouble. To keep your system’s clock on track, it’s necessary to set the right time zone. You can choose the correct time zone when you first install Ubuntu; if you need to change it later, this guide will help you out.

There are two different approaches to changing the time zone: using the system settings and using commands.

Change time zone from system settings

Open the GNOME menu.

Search for “time zone” and select “Date and Time” from the “Settings” section.

Uncheck the option “Automatic Time Zone”. Click on “Time Zone”.

Change to the time zone you want, then close the window.

It’s recommended to restart your system to make sure that all your software is using the updated time zone.

Changing the time zone using the commands

Open up a terminal and run the following commands:

sudo -s
dpkg-reconfigure tzdata

Follow the on-screen steps to select your target time zone. Once the change is complete, a confirmation message is printed.
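
The exact values depend on the zone you pick and on your system clock; assuming Europe/Berlin was selected, the confirmation looks roughly like this:

Current default time zone: 'Europe/Berlin'
Local time is now:      Tue Oct 30 18:45:12 CET 2018.
Universal Time is now:  Tue Oct 30 17:45:12 UTC 2018.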

Enjoy!

Source

Download Bitnami Drupal Stack Linux 8.6.2-0

Bitnami Drupal Stack is a free and multi-platform software project, an all-in-one installer that greatly simplifies the deployment of the Drupal web-based application and its runtime dependencies (MySQL, PHP and Apache) on desktop computers or laptops. It can be deployed using native installers, virtual machines, cloud images, a Docker container or *AMP modules.

What is Drupal?

Drupal is an open source, free and cross-platform content management system that allows an individual or a group of users to easily publish, manage, and organize a wide variety of content on a website.

Installing Bitnami Drupal Stack

Bitnami Drupal Stack is mainly distributed as native installers for the GNU/Linux, Microsoft Windows and Mac OS X operating systems, supporting 64-bit (recommended) and 32-bit hardware platforms.

To install the Drupal application on your personal computer, just download the package that corresponds to your PC’s hardware architecture, run it and follow the instructions displayed on the screen.

Run Drupal in the cloud

Thanks to Bitnami, users are now able to run their own Drupal stack server in the cloud with their hosting platform or by using a pre-built cloud image for the Windows Azure or Amazon EC2 cloud hosting providers.

Virtualize Drupal or use the Docker container

In addition to installing Drupal on your PC or running it in the cloud, you can also use a virtual appliance, designed by Bitnami for the VMware ESX/ESXi and Oracle VirtualBox virtualization software, and based on the latest stable release of the Ubuntu Linux distribution. A Drupal Docker container will also be available on the project’s homepage.

The Bitnami Drupal Module

Besides the Bitnami Drupal Stack product reviewed here, Bitnami also offers a module for its LAMP, WAMP and MAMP stacks, which allows users to deploy the Drupal application on personal computers, without having to install its runtime dependencies.

Source

Fedora 29 and Ubuntu 18.10 Released » Linux Magazine

New releases focus on boot time, new hardware, and modular design.

October is the time of the year when users get to play with new versions of Ubuntu and Fedora.

Canonical announced Ubuntu 18.10, and the Fedora community announced Fedora 29. Both are GNOME-based distributions. Ubuntu focused on faster boot times and improved support for new hardware; Fedora focused on improving its modular design.

“Modularity helps make it easier to include alternative versions of software and updates than those shipped with the default release, designed to enable some users to use tried-and-true versions of software while enabling others to work with just-released innovation without impacting the overall stability of the Fedora operating system,” according to the Fedora press release.

Fedora comes in three editions: Workstation, Cloud, and Atomic Host. The latest version of Fedora’s desktop-focused edition provides new tools and features for general users as well as developers with the inclusion of GNOME 3.30. Fedora is putting its weight behind Flatpak.

Ubuntu also comes in different editions: Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu for IoT. There are different flavors of Ubuntu that support various desktop environments, including KDE Plasma, LXQt, etc.

Snap is the default app packaging and delivery mechanism of Ubuntu that competes with Flatpak. Canonical said that Ubuntu’s Linux app store includes 4,100 snaps from over 1,700 developers, with support across 24 Linux distributions. Ubuntu 18.10 enables native desktop controls to access files via the host system.

While Fedora remains a distribution for developers (Linus Torvalds himself uses Fedora), Ubuntu still appeals to a wider audience, from gamers to enterprise customers.

Download Ubuntu: https://www.ubuntu.com/download

Download Fedora: https://getfedora.org/en/workstation/download/

Source

Open Source Leadership Summit | Linux.com

The Linux Foundation Open Source Leadership Summit is the premier forum where open source leaders convene to drive digital transformation with open source technologies and learn how to collaboratively manage the largest shared technology investment of our time.

An intimate, by invitation only event, Open Source Leadership Summit fosters innovation, growth and partnerships among the leading projects and corporations working in open technology development. It is a must-attend for business and technical leaders looking to advance open source strategy, implementation and investment.


Source

RedHat: RHSA-2018-3400:01 Important: libvirt security update

Posted by Anthony Pell

RedHat Linux
An update for libvirt is now available for Red Hat Enterprise Linux 6.6 Advanced Update Support and Red Hat Enterprise Linux 6.6 Telco Extended Update Support. Red Hat Product Security has rated this update as having a security impact of Important.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Important: libvirt security update
Advisory ID: RHSA-2018:3400-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2018:3400
Issue date: 2018-10-30
CVE Names: CVE-2018-3639
=====================================================================

1. Summary:

An update for libvirt is now available for Red Hat Enterprise Linux 6.6
Advanced Update Support and Red Hat Enterprise Linux 6.6 Telco Extended
Update Support.

Red Hat Product Security has rated this update as having a security impact
of Important. A Common Vulnerability Scoring System (CVSS) base score,
which gives a detailed severity rating, is available for each vulnerability
from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux Server AUS (v. 6.6) – x86_64
Red Hat Enterprise Linux Server Optional AUS (v. 6.6) – x86_64
Red Hat Enterprise Linux Server Optional TUS (v. 6.6) – x86_64
Red Hat Enterprise Linux Server TUS (v. 6.6) – x86_64

3. Description:

The libvirt library contains a C API for managing and interacting with the
virtualization capabilities of Linux and other operating systems. In
addition, libvirt provides tools for remote management of virtualized
systems.

Security Fix(es):

* An industry-wide issue was found in the way many modern microprocessor
designs have implemented speculative execution of Load & Store instructions
(a commonly used performance optimization). It relies on the presence of a
precisely-defined instruction sequence in the privileged code as well as
the fact that memory read from address to which a recent memory write has
occurred may see an older value and subsequently cause an update into the
microprocessor’s data cache even for speculatively executed instructions
that never actually commit (retire). As a result, an unprivileged attacker
could use this flaw to read privileged memory by conducting targeted cache
side-channel attacks. (CVE-2018-3639 virt-ssbd AMD)

Note: This is the libvirt side of the CVE-2018-3639 mitigation.

Red Hat would like to thank Ken Johnson (Microsoft Security Response
Center) and Jann Horn (Google Project Zero) for reporting this issue.

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258

After installing the updated packages, libvirtd will be restarted
automatically.

5. Bugs fixed (https://bugzilla.redhat.com/):

1566890 – CVE-2018-3639 hw: cpu: speculative store bypass

6. Package List:

Red Hat Enterprise Linux Server AUS (v. 6.6):

Source:
libvirt-0.10.2-46.el6_6.9.src.rpm

x86_64:
libvirt-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-client-0.10.2-46.el6_6.9.i686.rpm
libvirt-client-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-debuginfo-0.10.2-46.el6_6.9.i686.rpm
libvirt-debuginfo-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-devel-0.10.2-46.el6_6.9.i686.rpm
libvirt-devel-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-python-0.10.2-46.el6_6.9.x86_64.rpm

Red Hat Enterprise Linux Server TUS (v. 6.6):

Source:
libvirt-0.10.2-46.el6_6.9.src.rpm

x86_64:
libvirt-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-client-0.10.2-46.el6_6.9.i686.rpm
libvirt-client-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-debuginfo-0.10.2-46.el6_6.9.i686.rpm
libvirt-debuginfo-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-devel-0.10.2-46.el6_6.9.i686.rpm
libvirt-devel-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-python-0.10.2-46.el6_6.9.x86_64.rpm

Red Hat Enterprise Linux Server Optional AUS (v. 6.6):

x86_64:
libvirt-debuginfo-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-lock-sanlock-0.10.2-46.el6_6.9.x86_64.rpm

Red Hat Enterprise Linux Server Optional TUS (v. 6.6):

x86_64:
libvirt-debuginfo-0.10.2-46.el6_6.9.x86_64.rpm
libvirt-lock-sanlock-0.10.2-46.el6_6.9.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2018-3639
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/security/vulnerabilities/ssbd

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2018 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBW9hmt9zjgjWX9erEAQjL3w//REIrMBGNTNbKYC8OHWDjEloO/PxqCPQr
e0wxa/67xN+8i/DFZGd0scd8UghTIoZqj4IK7ZVjxxq1Vf5YlNhC3ot4uAFZ5zi2
d+HxmA5X5901w7bOIbkQNBak6IP6KQbZW1VcucBC5uMdklzogEwAyhYkZOnzXPNd
ix9Ul1IcrTmM+hr8qzJ/KZuTkweXIuSZ+B+cKa2cGc5ZlGp2a+jrnndVO2qyILmo
9KjpfN2BuAc+bK+NveFIJYXFXTbbTqIjA3Ax5t01k+Q7Kz4nhA3qdUsmXdgsL5hz
mUnmsagQrnPhsLw7VetbD4/R65HRxR/W/Vskudt2rYo1Qm9PnLOYK1VrTTgv4Ee/
UTf3utrlGXmX7vHgMUqOlZviN4Izy8qFW/iLas5XuLHtVb2rNyt5qVeAcOmnW6x2
oMvMVIg0znfwpdK07SO3SDhGoRKnqAVGeHY1laZS/j14NdFcP1UjyZr2gxtcsj2W
Crhj6qbnk+5FvjreXRyaoWWOVAWqcq3LIU0t1LHhBk336R06S1y/zZAuYeCW4gFV
uKqnJaMVZfaWQeWKU+1JrTXjy2Sd7gwDvPIwWXIakhBfSM6vY3VEqE/3sAE4tpfV
snb/ASJ3g3sOUasw8t+sMI0g+eqShcOKoGLOOj655HlhNkFCRb5m31+EYYkg/jmD
0gzfMxuSQf4=
=znp/
-----END PGP SIGNATURE-----


RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

Source

CloudWatch Is of the Devil, but I Must Use It

Let’s talk about Amazon CloudWatch.

For those fortunate enough to not be stuck in the weeds of Amazon Web
Services (AWS), CloudWatch is, and I quote from the official
AWS description, “a monitoring and
management service built for developers, system operators, site reliability
engineers (SRE), and IT managers.” This is all well and good, except for the
part where there isn’t a single named constituency who enjoys working with
the product. Allow me to dispense some monitoring heresy.

Better, let me describe this in the context of the 14 Amazon
Leadership Principles
that reportedly guide every decision Amazon makes.
When you take a hard look at CloudWatch’s complete failure across all
14 Leadership Principles, you wonder how this product ever made it out
the door in its current state.

“Frugality”

I’ll start with billing. Normally left for the tail end of articles like
this, the CloudWatch billing paradigm is so terrible, I’m leading with
it instead. You get billed per metric, per month. You get billed per
thousand metrics you request to view via the API. You get billed per
dashboard per month. You get billed per alarm per month. You get charged for
logs based upon data volume ingested, data volume stored and “vended logs”
that get published natively by AWS services on behalf of the customer. And,
you get billed per custom event. All of this can be summed up best as
“nobody on the planet understands how your CloudWatch metrics and logs get
billed”, and it leads to scenarios where monitoring vendors can inadvertently
cost you thousands of dollars by polling CloudWatch too frequently. When the
AWS charges are larger than what you’re paying your monitoring vendor, it’s
not a wonderful feeling.

“Invent and Simplify”

CloudWatch Logs, CloudWatch Events, Custom Metrics, Vended Logs and Custom
Dashboards all mean different things internally to CloudWatch from what you’d
expect, compared to metrics solutions that actually make some fathomable
level of sense. There are, thus, multiple services that do very different
things, all operating under the “CloudWatch” moniker. For example, it’s not
particularly intuitive to most people that scheduling a Lambda function to
invoke once an hour requires a custom CloudWatch Event. It feels overly
complicated, incredibly confusing, and very quickly, you find yourself in a
situation where you’re having to build complex relationships to monitor
things that are themselves far simpler.
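
For the curious, a rough sketch of that dance from the AWS CLI looks like
the following; the function name, region and account ID are placeholders:

aws events put-rule \
    --name hourly-invoke \
    --schedule-expression "rate(1 hour)"
aws lambda add-permission \
    --function-name my-function \
    --statement-id hourly-invoke \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/hourly-invoke
aws events put-targets \
    --rule hourly-invoke \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"

Three calls across two services, all to say “run this function once an hour.”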

“Think Big”

All business people, when asked what they want from a monitoring platform,
will respond with something that resembles “a dashboard” or “a
single pane of glass view”. CloudWatch offers minutiae up the wazoo, but
it categorically offers no global view, no green/yellow/red status
indicator that gives you even a glimmer of the overall health of your site.
Want a graph of each core in your instance’s CPU for the past 30
seconds? Easy! Want to know if your entire company should be putting out the
burning fire that is the current production state of your website? Keep
looking—CloudWatch has nothing to offer you.

“Insist on the Highest Standards”

By its very nature, CloudWatch feels like small thinking. The entire
experience, start to finish, smacks of “what’s the absolute least we
could do and get away with it?” They built their MVP, and then just
sorta…stopped, frozen in amber. They created a set of building blocks,
except they didn’t solve the problem of “how do I monitor my AWS resources?”
Instead, it feels like the entire team phoned it in and let a large market
of monitoring vendors develop as a result. None of those vendors have the
level of access to the raw data that CloudWatch does; all of them have built
better products. You’d think the CloudWatch team would take a clue from
the innovation that’s rapidly happening in this space, but that’d
require someone to Learn and Be Curious.

“Are Right, a Lot”

Recent data is “eventually consistent”, so you always get graphs like the
one shown in Figure 1.


Figure 1. Example CloudWatch Graph

Here in reality, that would be a terrifying thing to see on an accurate
dashboard—something is obviously very wrong with your site! For better or
worse, the “accurate” description doesn’t apply to CloudWatch, and that’s
just how your graphs always look. “Your metrics will be eventually
consistent” is very nearly the last thing you want to hear about your
monitoring platform, second only to “what metrics?” This ties directly
to…

“Earn Trust”

Let me be very clear here—the real issue isn’t the ingestion problem.
Absolutely every vendor on the planet has the same issue—you can’t
display data you don’t have. Where CloudWatch drops the ball is in
exposing this behavior to the end user without explanation as to what’s
going on. Thus, until you grow accustomed to it, you have a heart-stopping
moment of “what the hell just happened to the site” whenever you
glance at a dashboard. This conditions you to be entirely too calm when
looking at sensible dashboards when a disaster just happened. If you trust
what the CloudWatch dashboards show you, you’re making a terrible
mistake.

“Dive Deep”

If you’re using Lambda or Fargate, you have no choice but to use CloudWatch
Logs, wherein searching for everything is absolutely terrible. If you’re
using CloudWatch Logs to diagnose anything, congratulations: you’re
diving so deep, you may drown before making it back to the surface.
For example, if I have a Lambda function that throws an error, in order to
diagnose the problem, I must:

  • Find the fact that it encountered an error in the first place by looking at
    the invocation error CloudWatch dashboard. I also could set up a filter to
    run a continuous query on the logs and alert when something shows up, except
    that isn’t natively supported—I need a third-party tool for that (such
    as
    PagerDuty).
  • Go diving into a variety of CloudWatch log groups and find the one named
    after the specific erroring function.
  • Scroll manually through the many, many, many pages of log groups to find the
    specific invocation that threw an error.
  • Realize that the JSON object that’s retained isn’t enough to troubleshoot
    with, cry in despair, and go write an article just like this one.
  • Do some quick math and realize I’m paying an uncomfortable percentage of my
    AWS bill for a service that’s only of somewhat marginal utility at best.
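
From the command line, the closest thing to a shortcut is filtering the
function’s log group directly; the function name below is a placeholder,
and Lambda log groups follow the /aws/lambda/<function-name> convention:

aws logs filter-log-events \
    --log-group-name /aws/lambda/my-function \
    --filter-pattern "ERROR"

Even then, you still have to work out which log stream and which invocation
you actually care about.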

“Deliver Results”

All of your metrics, all of your logs—they’re locked away inside
CloudWatch’s various components. You’re not going to find a
“page me when this threshold is exceeded” option in CloudWatch; your
options are relegated to “design an alert delivery pipeline with baling
wire and SNS” or pay a non-AWS vendor for another monitoring product.
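
The “baling wire” version, sketched here with placeholder names and ARNs, is
a metric alarm whose only action is to publish to an SNS topic that you then
have to wire up to email, SMS or a paging service yourself:

aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 80 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:page-the-oncall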

“Customer Obsession”

CloudWatch keeps all of your metrics. It keeps your logs. It lets you build
custom dashboards to view your metrics all in one place. The building blocks
of a great service are already here—it’s the expression of that utility
that falls short, sometimes drastically. The fact that large monitoring
vendors are premier sponsors of AWS events would be laughable if CloudWatch
ever were to get its act together. You’d not need a third party to make
sense of a pure AWS environment, and many of them would starve to death as
they grow too weak to interrupt your conversation to ask if they can scan
your badge. Choosing to use CloudWatch vs. literally anything else is like
buying a car. “Why yes, I would like to buy the Yugo instead of the Honda.
After all, it checks all the boxes of technically being a car, so it’s fine,
right?”

“Disagree and Commit”

It may very well be that the root cause of many of CloudWatch’s failings
comes from the product engineers who built it misunderstanding this
(admittedly slippery!) Leadership Principle. It’s envisioned as
passionately expressing your reservations about a decision but, once
the decision is reached, committing to it.
Unfortunately, it appears that the engineering teams responsible for
CloudWatch decided to “Disagree in Commits” and inflict their
arguments upon the world in the form of the product.

“Ownership”

If I were to go on the internet and post about how terrible virtually any
other AWS service was, people would rally to that service’s defense.
It’s the internet; people will do that. But when these and many more
similar comments about CloudWatch appear, and nobody from AWS pipes in to
say “wow, I’m sorry, why do you feel that way?”, it’s
abundantly clear that if any people on the CloudWatch team really care about
the product, they’ve been locked in a malfunctioning bathroom stall for
the better part of a decade. These comments go back at least that far, but
Amazon is totally on it, rocking the company’s “Bias for Action” principle.

“Hire and Develop the Best”

The people who build CloudWatch aren’t terrible at their jobs; I
genuinely believe they don’t quite grasp how their product is perceived.
Given that it’s poor form to write a rant like this and not offer
suggestions for positive improvement, here are some product enhancements I’d
like to see:

  • Give me the option to rate-limit API calls at arbitrary levels rather than
    being surprised at month end by a bill that’s approximately Zanzibar’s
    GDP.
  • “Here’s an error that your Lambda function threw, here’s the log output from
    that specific function” should be at most two clicks away—not 30.
  • If your dog has a litter of 14 puppies, perhaps you don’t need to name
    all of them subtle variations of the term “CloudWatch”. The proliferation of
    services and companies that all start with the word “Cloud” is the subject
    of a completely separate rant.

Please don’t misunderstand me. I use, enjoy and promote AWS services,
and I’m considered to be “an authentic voice” largely because in
addition to praising things that are wonderful, I’ll call out things
that aren’t, as I’ve just done. I’ve built my career and
business on working within that ecosystem. I find AWS employees to be
intelligent and well-intentioned, and most of their services quite good.
CloudWatch could get there with some work, but it’s got a number of very
painful usability issues that keep it from being good, let alone great.

Source
