Ubuntu 19.04 Has Been Codenamed Disco Dingo

November 1, 2018

This is a continually updated article about the Ubuntu 19.04 Disco Dingo release date, its new features and everything else important associated with it.

Ubuntu 18.10 has been released, and it's time to start looking ahead to the upcoming Ubuntu 19.04.

As spotted by OMG Ubuntu, Ubuntu 19.04 will be called Disco Dingo. Since there is not much known about Ubuntu 19.04 features yet, let’s talk about this cheesy codename.

Ubuntu 19.04 Codename

Ubuntu 19.04 Disco Dingo

If you have read my earlier article about the Linux distributions’ naming trivia, you probably already know that each release of Ubuntu is codenamed with two words starting with the same letter. These letters follow alphabetical order. So after Ubuntu 18.04 Bionic Beaver, you had Ubuntu 18.10 Cosmic Cuttlefish.

The first word is usually an adjective and the second word is a (usually endangered) species.

At least this is what it used to be for years. The pattern in the second word was broken with the release of Ubuntu 14.10 Utopic Unicorn. Instead of an endangered species, it was a fictional animal. Yes, unicorns are fictional. Stop believing in that rainbow-farting animal.

The pattern was broken again a year later with Ubuntu 15.10 Wily Werewolf. No matter how much you want to believe, werewolves are neither endangered nor real. Stop watching Twilight for Bella's sake.

With Ubuntu 19.04, the pattern has been broken again but this time, it’s the first word of the codename.

The first word used to be an adjective; however, 'disco' is a noun and a verb, not an adjective. I wonder how the Ubuntu team ran out of ideas for an adjective starting with the letter D. I guess they just wanted to party.

Dingo is a type of dog native to Australia. It's not an endangered species, but at least it's a real animal. If Ubuntu were going for a fancy name with a fictional animal, Disco Dragon would have been a lot more fun in its own way. And yes, dragons aren't real either. Sorry to break your heart.

Ubuntu 19.04 Release Date

There is no official release schedule for Ubuntu 19.04 Disco Dingo yet. However, you can easily make a few educated guesses.

You probably already know the logic behind Ubuntu’s version number. 19.04 will be released in the month ’04’ of the year ’19’. In other words, it will be released in April 2019.

But that's just the month. What about the exact release date? Ubuntu 18.10 was released on 18th October 2018, and a non-LTS Ubuntu release follows a 26-week schedule, so it's safe to predict that Ubuntu 19.04 will be released on 18th April 2019.

Have you ever noticed that a new Ubuntu version is released on Thursdays only?

What new features are coming to Ubuntu 19.04?

It's difficult to say at this moment because development on Ubuntu 19.04 has hardly begun. You may expect better power management, faster boot times thanks to a new compression algorithm, and Android integration, among other things.

Source

Download Bitnami Django Stack Linux 2.1.2-1

Bitnami Django Stack is a free and cross-platform software project that provides users with an all-in-one installer designed to greatly simplify the installation of the Django application and its runtime dependencies on desktop computers and laptops. It includes ready-to-run versions of Python, Django, MySQL and Apache web technologies.

What is Django?

Django is a high-level, free, platform-independent and widely-used Python web framework that encourages rapid development and clean, pragmatic design. It lets users build elegant and high-performing web applications quickly.

Installing Bitnami Django Stack

The Bitnami Django Stack product is distributed as native installers, which have been built using BitRock’s cross-platform installer tool, designed for the GNU/Linux, Microsoft Windows and Mac OS X operating systems.

To install the Django application and all of its server-related requirements, you will have to download the file that corresponds to your computer’s hardware architecture (32-bit or 64-bit), run it and follow the instructions displayed on the screen.
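
For example, on a 64-bit machine the whole process boils down to making the installer executable and launching it. This is a minimal sketch; the filename below is illustrative, so substitute the exact name of the installer you downloaded:

# Make the downloaded installer executable, then run it
# (the filename is a hypothetical example -- use the file you actually downloaded)
chmod +x bitnami-djangostack-2.1.2-1-linux-x64-installer.run
./bitnami-djangostack-2.1.2-1-linux-x64-installer.run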

Run Django in the cloud

Thanks to Bitnami's pre-built cloud images, users can run the Django application in the cloud with their own hosting platform or by using a pre-built cloud image for the Windows Azure and Amazon EC2 cloud hosting providers.

Virtualize Django or use the Docker container

In addition to running Django in the cloud or installing it on personal computers, it is possible to virtualize it, thanks to Bitnami's virtual appliance, which is based on the latest LTS (Long Term Support) release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami Django Module

Unfortunately, Bitnami does not offer a Django module for its LAMP (Linux, Apache, MySQL and PHP), WAMP (Windows, Apache, MySQL and PHP) or MAMP (Mac, Apache, MySQL and PHP) stacks, which would have allowed users to deploy Django on personal computers without installing its runtime dependencies.

Source

Why Your Server Monitoring (Still) Sucks

Five observations about why your server monitoring still stinks, by a
monitoring specialist-turned-consultant.

Early in my career, I was responsible for managing a large fleet of
printers across a large campus. We’re talking several hundred networked
printers. It often required a 10- or 15-minute walk to get to
some of those printers physically, and many were used only sporadically. I
didn't always know what was happening until I arrived, so it was anyone's
guess as to the problem. Simple paper jam? Driver issue? Printer currently
on fire? I found out only after the long walk. Making this even more
frustrating for everyone was that, thanks to the infrequent use of some of
them, a printer with a problem might go unnoticed for weeks, making itself
known only when someone tried to print with it.

Finally, it occurred to me: wouldn’t it be nice if I knew about the problem
and the cause before someone called me? I found my first monitoring tool
that day, and I was absolutely hooked.

Since then, I’ve helped numerous people overhaul their monitoring
systems. In doing so, I noticed the same challenges repeat themselves regularly. If
you’re responsible for managing the systems at your organization, read
on; I have much advice to dispense.

So, without further ado, here are my top five reasons why your monitoring
is crap and what you can do about it.

1. You’re Using Antiquated Tools

By far, the most common reason for monitoring being screwed up is a
reliance on antiquated tools. You know that’s your issue when you spend
more time working around the warts of your monitoring tools or when
you’ve got a bunch of custom code to get around some major missing
functionality. But the bottom line is that you spend more time trying to
fix the almost-working tools than just getting on with your job.

The problem with using antiquated tools and methodologies is that
you’re just making it harder for yourself. I suppose it’s certainly
possible to dig a hole with a rusty spoon, but wouldn’t you prefer to use a
shovel?

Great tools are invisible. They make you more effective, and the job is
easier to accomplish. When you have great tools, you don’t even notice
them.

Maybe you don’t describe your monitoring tools as “easy to use”
or “invisible”. The words you might opt to use would make my editor
break out a red pen.

This checklist can help you determine if you’re screwing yourself.

  • Are you using Nagios or a Nagios derivative to monitor
    elastic/ephemeral infrastructure?
  • Is there a manual step in your deployment process for a human to “Add
    $thing to monitoring”?
  • How many post-mortems contained an action item such as, “We
    weren’t monitoring $thing”?
  • Do you have a cron job that tails a log file and sends an email via
    sendmail?
  • Do you have a syslog server to which all your systems forward their
    logs…never to be seen again?
  • Do you collect system metrics only every five minutes (or even less
    often)?

If you answered yes to any of those, you are relying on bad, old-school
tooling. My condolences.

The good news is your situation isn’t permanent. With a little work, you
can fix it.

If you’re ready to change, that is.

It is somewhat amusing (or depressing?) that we in Ops so readily replace
entire stacks, redesign deployments over a week, replace configuration
management tools and introduce modern technologies, such as Docker and
serverless—all without any significant vetting period.

Yet, changing a monitoring platform is verboten. What gives?

I think the answer lies in the reality of the state of monitoring at many
companies. Things are pretty bad. They’re messy, inconsistent in
configuration, lack a coherent strategy, have inadequate automation…but
it’s all built on the tools we know. We know their failure modes; we know
their warts.

For example, the industry has spent years and a staggering amount of
development hours bolting things onto Nagios to make it more palatable
(such as
nagios-herald, NagiosQL, OMD), instead of asking, “Are we throwing
good money after bad?”

The answer is yes. Yes we are.

Not to pick on Nagios—okay, yes, I’m going to pick on Nagios. Every change
to the Nagios config, such as adding or removing a host, requires a config
reload. In an infrastructure relying on ephemeral systems, such as
containers, the entire infrastructure may turn over every few minutes. If
you have two dozen containers churning every 15 minutes, it’s possible that
Nagios is reloading its config more than once a minute. That’s insane.

And what about your metrics? The old way to decide whether something was broken
was to check the current value of a check output against a threshold. That
clearly results in some false alarms, so we added the ability to fire
an alert only if N number of consecutive checks violated the threshold. That has
a pretty glaring problem too. If you get your data every minute, you may
not know of a problem until 3–5 minutes after it’s happened. If you’re
getting your data every five minutes, it’s even worse.
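
To make the arithmetic concrete, here is a toy shell sketch of that old-school "N consecutive violations" logic; check_cpu is a hypothetical command that prints a utilization percentage. With a 60-second interval and three required violations, the alert cannot fire until roughly three minutes after the problem actually starts:

#!/bin/bash
# Toy "alert only after N consecutive threshold violations" loop.
# check_cpu is a hypothetical command that prints an integer percentage.
THRESHOLD=90   # fire when utilization exceeds this value
REQUIRED=3     # consecutive violations needed before alerting
INTERVAL=60    # seconds between checks; worst-case detection delay
               # is REQUIRED * INTERVAL = 180 seconds
violations=0
while true; do
    value=$(check_cpu)
    if [ "$value" -gt "$THRESHOLD" ]; then
        violations=$((violations + 1))
    else
        violations=0
    fi
    if [ "$violations" -ge "$REQUIRED" ]; then
        echo "ALERT: CPU at ${value}% for ${REQUIRED} consecutive checks"
        violations=0
    fi
    sleep "$INTERVAL"
done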

And while I’m on my soapbox, let’s talk about automation. I remember back
when I was responsible for a dozen servers. It was a big day when I spun up
server #13. These sorts of things happened only every few months. Adding my
new server to my monitoring tools was, of course, on my checklist, and it
certainly took more than a few minutes to do.

But the world of tech isn’t like that anymore. Just this morning, a
client’s infrastructure spun up a dozen new instances and spun down
half of them an hour later. I knew it happened only after the fact. The
monitoring systems knew about the events within seconds, and they adjusted
accordingly.

The tech world has changed dramatically in the past five years. Our beloved
tools of choice haven’t quite kept pace. Monitoring must be 100% automated,
both in registering new instances and services, and in de-registering them
all when they go away. Gone are the days when you can deal with a 5 (or
15!) minute delay in knowing something went wrong; many of the top
companies know within seconds that something isn't right.

Continuing to rely on methodologies and tools from the old days, no matter
how much you enjoy them and know their travails, is holding you back from
giant leaps forward in your monitoring.

The bad old days of trying to pick between three equally terrible
monitoring tools are long over. You owe it to yourself and your company to
at least consider modern tooling—whether it’s SaaS or self-hosted
solutions.

2. You’re Chasing “the New Hotness”

At the other end of the spectrum is an affinity for new-and-exciting tools.
Companies like Netflix and Facebook publish some really cool stuff, sure.
But that doesn’t necessarily mean you should be using it.

Here’s the problem: you are (probably) not Facebook, Netflix, Google or
any of the other huge tech companies everyone looks up to. Cargo culting
never made anything better.

Adopting someone else’s tools or strategy because they’re successful with
them misses the crucial reasons why it works for them.

The tools don’t make an organization successful. The organization is
successful because of how its members think. Its approaches, beliefs,
people and strategy led the organization to create those tools. Its
success stems from something much deeper than, “We wrote our own monitoring
platform.”

To approach the same sort of success the industry titans are having, you
have to go deeper. What do they know that you don't? What are they
doing, thinking, saying, believing that you aren’t?

Having been on the inside of many of those companies, I’ll let you in on
the secret: they’re good at the fundamentals. Really good. Mind-blowingly
good.

At first glance, this seems unrelated, but allow me to quote John Gall,
famed systems theorist:

A complex system that works is invariably found to have evolved
from a simple system that worked. A complex system designed from scratch
never works and cannot be patched up to make it work. You have to start
over, beginning with a working simple system.

Dr. Gall quite astutely points out the futility of adopting other people’s
tools wholesale. Those tools evolved from simple systems to suit the needs
of that organization and culture. Dropping such a complex system into
another organization or culture may not yield favorable results, simply
because you’re attempting to shortcut the hard work of evolving a simple
system.

So, you want the same success as the veritable titans of industry? The
answer is straightforward: start simple. Improve over time. Be patient.

3. You’re Unnecessarily Afraid of “Vendor Lock-in”

If there’s one argument I wish would die, it’s the one where people opine
about wanting to “avoid vendor lock-in”. That argument is utter hogwash.

What is “vendor lock-in”, anyway? It’s the notion that if you were to go
all-in on a particular vendor’s product, it would become prohibitively
difficult or expensive to change. Keurig’s K-cups are a famous example of
vendor lock-in. They can be used only with a Keurig coffee machine, and
a Keurig coffee machine accepts only the proprietary Keurig K-cups. By
buying a Keurig, you’re locked into the Keurig ecosystem.

Thus, if I were worried about being locked in to the Keurig ecosystem, I’d
just avoid buying a Keurig machine. Easy.

If I’m worried about vendor lock-in with, say, my server infrastructure,
what do I do? Roll out both Dell and HP servers together? That seems like a
really dumb idea. It makes my job way more difficult. I’d have to build to
the lowest common denominator of each product and ignore any
product-specific features, including the innovations that make a product
appealing. This ostensibly would allow me to avoid being locked in to one
vendor and keep any switching costs low, but it also means I’ve got a
solution that only half works and is a nightmare to manage at any sort of
scale. (Have you ever tried to build tools to manage and automate both
iDRAC and IPMI? You really don’t want to.)

In particular, you don’t get to take advantage of a product’s
unique features. By trying to avoid vendor lock-in, you end up with a
“solution” that ignores any advanced functionality.

When it comes to monitoring products, this is even worse. Composability and
interoperability are core tenets of most products available to you. The
state of monitoring solutions today favors a high degree of interoperability
and open APIs. Yes, a single vendor may have all of your data, but it’s
often trivial to move that same data to another vendor without a major loss
of functionality.

One particular problem with this whole vendor lock-in argument is that it’s
often used as an excuse to not buy SaaS or commercial, proprietary
applications. The perception is that by using only self-hosted, open-source
products, you gain more freedom.

That assumption is wrong. You haven’t gained more freedom or avoided vendor
lock-in at all. You’ve traded one vendor for another.

By opting to do it all yourself (usually poorly), you effectively become
your own vendor—a less experienced, more overworked vendor. The chances
you would design, build, maintain and improve a monitoring platform
better—on top of your regular duties—than a monitoring vendor? They round to
zero. Is tool-building really the business you want to be in?

In addition, switching costs from in-house solutions are astronomically
higher than from one commercial solution to another, because of the
interoperability that commercial vendors have these days. Can the same be
said of your in-house solution?

4. You’re Monitoring the Wrong Stuff

Many years ago, at one of my first jobs, I checked out a database server
and noticed it had high CPU utilization. I figured I would let my boss
know.

“Who complained about it?”, my boss asked.

“Well, no one”, I replied.

My boss’ response has stuck with me. It taught me a valuable lesson:
“if it’s not impacting anyone, is there really a problem?”

My lesson is this: data without context isn’t useful. In monitoring, a
metric matters only in the context of users. If low free memory is a
condition you notice but it’s not impacting users, it’s not worth
firing an alert.

In all my years of operations and system administration, I’ve not once seen
an OS metric directly indicate active user impact. A metric sometimes
can be an indirect indicator, but I’ve never seen it directly indicate an
issue.

Which brings me to the next point. With all of these metrics and logs from
the infrastructure, why is your monitoring not better off? The reason is
because Ops can solve only half the problem. While monitoring nginx
workers, Tomcat garbage collection or Redis key evictions are all
important metrics for understanding infrastructure performance, none of
them help you understand the software your business runs. The biggest value
of monitoring comes from instrumenting the applications on which your users
rely.
(Unless, of course, your business provides infrastructure as a
service—then, by all means, carry on.)

Nowhere is this more clear than in a SaaS company, so let’s consider
that as an example.

Let’s say you have an application that is a standard three-tier web app:
nginx on the front end, Rails application servers and PostgreSQL on the
back end. Every action on the site hits the PostgreSQL database.

You have all the standard data: access and error logs, nginx metrics, Rails
logs, Postgres metrics. All of that is great.

You know what’s even better? Knowing how long it takes for a user to log in.
Or how many logins occur per minute. Or even better: how many login
failures occur per minute.

The reason this information is so valuable is that it tells you about the
user experience directly. If login failures rose during the past five
minutes, you know you have a problem on your hands.

But, you can’t see this sort of information from the infrastructure
perspective alone. If I were to pay attention only to the
nginx/Rails/Postgres performance, I would miss this incident entirely. I
would miss something like a recent code deployment that changed some
login-related code, which caused logins to fail.

To solve this, become closer friends with your engineering team. Help them
identify useful instrumentation points in the code and implement more
metrics and logging. I’m a big fan of the statsd protocol for this sort of
thing; most every monitoring vendor supports it (or their own
implementation of it).
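
As a rough illustration of how little code that instrumentation takes, here is what emitting statsd metrics for the login example looks like from a shell; the metric names and the statsd host are placeholders, and 8125 is the conventional statsd UDP port:

# Count a successful and a failed login (counters), then record how long
# a login took in milliseconds (a timer). Names and host are placeholders.
echo "myapp.login.success:1|c" | nc -u -w1 statsd.example.com 8125
echo "myapp.login.failure:1|c" | nc -u -w1 statsd.example.com 8125
echo "myapp.login.duration:250|ms" | nc -u -w1 statsd.example.com 8125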

5. You Are the Only One Who Cares

If you’re the only one who cares about monitoring, system performance and
useful metrics will never meaningfully improve. You can’t do this alone.
You can’t even do this if only your team cares. I can’t begin to count how
many times I’ve seen Ops teams put in the effort to make improvements, only
to realize no one outside the team paid attention or thought it mattered.

Improving monitoring requires company-wide buy-in. Everyone from the
receptionist to the CEO has to believe in the value of what you’re doing.
Everyone in the company knows the business needs to make a profit.
Similarly, it requires a company-wide understanding that improving
monitoring improves the bottom line and protects the company’s profit.

Ask yourself: why do you care about monitoring?

Is it because it helps you catch and resolve incidents faster? Why is that
important to you?

Why should that be important to your manager? To your manager’s
manager? Why should the CEO care?

You need to answer those questions. When you do so, you can start making
compelling business arguments for the investments required (including in
the best new tools).

Need a starting point? Here are a few ideas why the business might care
about improving monitoring:

  • The business can manage and mitigate the risk of incidents and
    failures.
  • The business can spot areas for performance improvements, leading to a
    better customer experience and increased revenue.
  • The business can resolve incidents faster (often before they become
    critical), leading to more user goodwill and enhanced reputation.
  • The business avoids incidents going from bad to worse, which protects
    against loss of revenue and potential SLA penalty payments.
  • The business better controls infrastructure costs through capacity
    planning and forecasting, leading to improved profits and lower
    expenses.

I recommend having a candid conversation with your team on why they care
about monitoring. Be sure to involve management as well. Once you’ve had
those conversations, repeat them again with your engineering team. And your
product management team. And marketing. And sales. And customer support.

Monitoring impacts the entire company, and often in different ways. By the
time you find yourself in a conversation with executives to request an
investment in monitoring, you will be able to speak their language.

Go forth and fix your monitoring. I hope you found at least a few ideas
here. Becoming world-class in this is a long, hard, expensive road, but
the good news is that you don't really need to be among the best to see
massive benefits. A few straightforward changes, added over time, can
radically improve your company's monitoring.

To recap:

  1. Use better tools. Replace them as better tools become available.
  2. But, don’t fixate on the tools. The tools are there to help you solve
    a problem—they aren’t the end goal.
  3. Don’t worry about vendor lock-in. Pick products you like and go all-in
    on them.
  4. Be careful about what you collect and on what you issue alerts. The
    best data tells you about things that have a direct user impact.
  5. Learn why your company cares about monitoring and express it in
    business outcomes. Only then can you really get the investment you
    want.

Good luck, and happy monitoring.

Source

Internal IT – the enabler and protector of enterprise data

A recent report from McAfee has revealed that while most organisations believe they use a very modest-sounding 30 cloud services, in reality they use approximately 1,935 services, a frankly terrifying amount. This massive disconnect has been mainly caused by the advent of Shadow IT and an ever-escalating need for business agility. With cloud services so easy to procure and use these days, business units within companies are taking IT into their own hands and creating the resources that they need, as opposed to following internal procurement processes.

Shadow IT isn’t necessarily a bad thing – during this digital revolution, everyone needs to operate in a more agile fashion, and Shadow IT is just one way of doing things. However, not only does this lead to businesses not knowing where their data and apps are, it also leads to potential accounting nightmares and the threat of sensitive data being unprotected.

Sensitive data in the cloud

According to the report, 21% of all files in the cloud contain sensitive data, which is an increase of 17% over the past two years. If properly secured and stored, this shouldn’t be a problem. However, the report also reveals that businesses have an average of at least 14 misconfigured IaaS instances running at any one time, leaving sensitive data unprotected. With 65% of organisations around the world using some form of IaaS, that is a large number of potential data breaches waiting to happen.

Additionally, the report reveals that 5.5% of all AWS S3 buckets in use aren’t configured correctly – this means that anyone with the link can access the contents of the bucket through the public internet. 5.5% doesn’t sound like much, but that’s around one in every twenty S3 buckets that have not been secured.
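
If you want to check your own exposure, inspecting a bucket's access control list with the AWS CLI is a reasonable first step (my-bucket is a placeholder for your bucket name); a grant to the 'AllUsers' group means anyone on the internet can read it:

# List the grants on a bucket; a grantee URI ending in "global/AllUsers"
# indicates the bucket contents are exposed to the public internet.
aws s3api get-bucket-acl --bucket my-bucket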

Is the public cloud unsafe?

Some of the public cloud naysayers out there will be predicting that the end of the (public cloud) world is nigh, and that public cloud has no place within an enterprise. However, that’s not really true. Just as a hammer, a knife or even a car can be inherently unsafe in the wrong hands with inadequate protection, training or expertise, so too can the public cloud.

The public cloud has had a phenomenal impact on businesses of all sizes – it has enabled startups to get off the ground without having to invest substantial amounts of capital expenditure in hardware and software. It’s enabled business units within enterprises to react quickly to changes in customer and market demand. It’s generated millions of dollars of revenue for enterprises (and governments) of all sizes around the world. But is it right for everything?

Internal IT as a service provider

In the turbulent world we live in, increases in regulatory compliance and changes in the global political arena mean that we need to be more careful about where we store our data and how we protect it. While in the past some business units may have seen internal IT teams, procurement and provisioning as an inhibitor to getting their job done, internal IT teams are a necessity within businesses, particularly enterprises.

This is particularly highlighted by the statistic in the report that most organisations think they use 30 cloud services but actually use approximately 1,935. Allowing internal IT teams to take this role within the business means that line-of-business departments are able to get back to focusing on their roles, as opposed to trying to be a mini-service provider for themselves.

The case for multi-cloud and internal IT

Internal IT should be viewed as an enabler for enterprises, offering a simpler and easier-to-audit route to approved (and correctly configured) public cloud services, in addition to private cloud services for the most business-critical data. As enterprises around the world explore what it means to have a bi- or multi-modal IT stack, this kind of multi-cloud setup would seem to be an obvious choice. Most importantly, it should be managed centrally by an internal IT team – this shouldn't mean a reduction in business agility or a slow-down in procurement times; it just means that enterprise data can be appropriately stored and protected so that the business can continue to grow.


Source

Worst Windows 10 version ever? Microsoft’s terrible, horrible, no good, very bad October


The Windows 10 October 2018 Update has been plagued by trouble.

In September 2017, Microsoft boasted that it had just released the “best version of Windows 10 ever.” A year later, as Windows engineers struggle with the most recent release of the company’s flagship operating system, there’s a compelling case that the October 2018 Update is the worst version of Windows 10 ever.

The month began almost triumphantly for Microsoft, with the announcement on October 2 that its second Windows 10 release of the year, version 1809, was ready for delivery to the public, right on schedule. Then, just days later, the company took the unprecedented action of pulling the October 2018 Update from its servers while it investigated a serious, data-destroying bug.


An embarrassing drip-drip-drip of additional high-profile bug reports has continued all month long. Built-in support for Zip files is not working properly. A keyboard driver caused some HP devices to crash with a Blue Screen of Death. Some system fonts are broken. Intel pushed the wrong audio driver through Windows Update, rendering some systems suddenly silent. Your laptop’s display brightness might be arbitrarily reset.

And with November fast approaching, the feature update still hasn’t been re-released.

What went wrong? My ZDNet colleague Mary Jo Foley suggests Microsoft is so focused on new features that it’s losing track of reliability and fundamentals. At Ars Technica, Peter Bright argues that the Windows development process is fundamentally flawed.

Or maybe there’s an even simpler explanation.

I suspect a large part of the blame comes down to Microsoft’s overreliance on one of the greatest management principles of the last half-century or so: “What gets measured gets done.” That’s certainly a good guiding principle for any organization, but it also leads to a trap for any manager who doesn’t also consider what’s not being measured.


For Windows 10, a tremendous number of performance and reliability events are measured constantly on every Windows 10 PC. Those streams of diagnostic data come from the Connected User Experience and Telemetry component, aka the Universal Telemetry Client. And there’s no doubt that Microsoft is using that telemetry data to improve the fundamentals of Windows 10.

In that September 2017 blog post, for example, Microsoft brags that it improved battery life by 17 percent in Microsoft Edge, made boot times 13 percent faster, and saw an 18 percent reduction in users hitting “certain system stability issues.” All that data translated into greater reliability, as measured by a dramatically reduced volume of calls to Microsoft’s support lines:

Our internal customer support teams are reporting significant reductions in call and online support request volumes since the Anniversary Update. During this time, we’ve seen a healthy decline in monthly support volumes, most notably with installation and troubleshooting update inquiries taking the biggest dip.

Microsoft has been focusing intently on stuff it can see in its telemetry dashboard, monitoring metrics like installation success rates, boot times, and number of crashes. On those measures of reliability and performance, Windows 10 is unquestionably better than any of its predecessors.

Unfortunately, that focus has been so intense that the company missed what I call “soft errors,” where everything looks perfectly fine on the telemetry dashboard and every action returns a success event even when the result is anything but successful.

Telemetry is most effective at gathering data to diagnose crashes and hangs. It provides great feedback for developers looking to fine-tune performance of Windows apps and features. It can do a superb job of pinpointing third-party drivers that aren’t behaving properly.


But telemetry fails miserably at detecting anything that isn’t a crash or an unambiguous failure. In theory, those low-volume, high-impact issues should be flagged by members of the Windows Insider Program in the Feedback Hub. And indeed, there were multiple bug reports from members of the Windows Insider Program, over a period of several months, flagging the issue that caused data to be lost during some upgrades. There were also multiple reports that should have caught the Zip file issue before it was released.

So why were those reports missed? If you’ve spent any time in the Feedback Hub, you know that the quality of reporting varies wildly. As one Windows engineer complained to me, “We have so many issues reported daily that are variations of ‘dark theme sucks, you guys should die’ that it’s hard to spot the six upvotes on a real problem that we can’t repro in-house.”

In response to those missed alarms, Microsoft has added a new field to its problem reporting tool in Feedback Hub, to provide an indication of the severity of an issue.

Windows users can now flag problem reports by severity.

Time will tell if that addition helps or if testers will automatically overrate every bug report out of frustration. Even with that change, the recent problems highlight a fundamental flaw in the Windows Insider Program: Its members aren’t trained in the art of software testing.

The real value of Insider Preview builds is, not surprisingly, capturing telemetry data from a much wider population of hardware than Microsoft can test in-house. As for those manual feedback reports, I’m skeptical that even an extra layer of filtering will be sufficient to turn them into actionable data.

Ultimately, if Microsoft is going to require most of its non-Enterprise customers to install feature updates twice a year, the responsibility to test changes in those features starts in Redmond. The two most serious bugs in this cycle, both of which wound up in a released product, were caused by a change in the fundamental workings of a feature.


An experienced software tester could have and should have caught those issues. A good tester knows that testing edge cases matters. A developer rushing to check in code to meet a semi-annual ship deadline is almost certainly not going to test every one of those cases and might not even consider the possibility that customers will use that feature in an unintended way.

Sometime in the next few days, Microsoft will re-release the October 2018 Update, and everything in the Windows-as-a-service world will return to normal. But come next April, when the 19H1 version is approaching public release, a lot of people will be holding their breath.

Related links

Windows 10 telemetry secrets: Where, when, and why Microsoft collects your data

How does Windows 10 telemetry really work? It’s not a state secret. I’ve gone through the documentation and sorted out the where, when, and why. If you’re concerned about private documents accidentally leaving your network, you might want to turn the telemetry setting down.

Two Windows 10 feature updates a year is too many

Opinion: The idea of delivering two full Windows 10 upgrades every year sounds great on paper. In practice, the Windows 10 upgrade cycle has been unnecessarily disruptive, especially to home users who don’t have the technical skills to deal with those updates.

Windows 10 1809 bungle: We won’t miss early problem reports again, says Microsoft

Microsoft makes changes to its Feedback Hub after failing to notice early reports flagging up data losses caused by the Windows 10 October 2018 Update.

Microsoft halts rollout of Windows 10 October 2018 Update: What happens next?

Only days after releasing its latest feature update to Windows 10, Microsoft abruptly stopped the rollout and pulled the new version from its download servers as it investigates “isolated reports” of a data-destroying bug. What should you do now?

Source

Top 5 Video Players for Ubuntu

You will find a bunch of video players online which you can download for free on your Linux operating system and start watching your favorite movies and videos right away. While every video player will have the capability of playing a video file, the ones which will interest you more are those which will offer additional features to make the software convenient to use and your experience more enjoyable. Are you looking for a good video player for Ubuntu but do not know where to start? You can start right here! After plenty of research, we have sieved out all the ordinary ones and handpicked for you only the best video players which are guaranteed to offer the experience you deserve. Without further ado, here are the top 5 video players for Ubuntu:

1. VLC Media Player

Released back in 2001, VLC Media Player is one of the oldest and most popular video players available on the internet. The reasons for its popularity are many. Not only is it available for Ubuntu, but for countless other operating systems as well, including Windows, Android and iOS. This open source media player can support almost any media file you throw at it without the hassle of any additional plugins. Besides the flexibility to play most kinds of audio and video files on VLC, viewing subtitles is also a breeze.

What also makes it stand out is the support for DVDs and for videos saved on a USB flash drive, which is not very common in media players available for Linux. The list of features is endless: streaming and downloading videos from websites like YouTube, add-ons for browsers such as Mozilla Firefox and Google Chrome, support for high-definition video formats such as MPEG and HEVC, the ability to download subtitles without any additional plugin, and so on.

Installation of VLC on Ubuntu is as simple as typing:

sudo apt install vlc

on the command line.

2. SMPlayer

Another favorite of Ubuntu users, SMPlayer, is actually a graphical front-end for the older MPlayer with a user-friendly interface. Released in 2006 under the GNU GPLv2, this media player is just as capable as the first one on our list of playing most audio and video files without requiring any additional plugins.

Without any additional codecs, you can use the software to play and download YouTube videos, search for and download subtitles from the internet and load them into the video via the player, and resume videos from the point where you stopped watching. Other features include countless skins which can easily be downloaded from the internet, the ability to adjust playback speed, effective audio and video equalizers, and a customizable toolbar.

All the attractive features apart, a good media player should offer a great playback performance and SMPlayer maintains a reputation for delivering exactly that. Download SMPlayer simply by running the commands:

sudo add-apt-repository ppa:rvm/smplayer
sudo apt-get update
sudo apt-get install smplayer smplayer-themes smplayer-skins

3. MPV Player

MPV Player is another free media player, released in October 2016 under the GPLv2 license. Similar to SMPlayer, this media player is also a descendant of the older MPlayer. The primary reason for this advancement was to make the media player easier to use by including a graphical interface. Certain other features were introduced as well to make the overall experience better for the user. Another improvement over the original MPlayer worth mentioning is the higher-quality client API that MPV offers: other programs can embed it through a library interface called libmpv.

Although you will not find an option to open a media file from within the player, you can drag and drop a video or audio file onto the video player to play it. What's distinguishing about MPV is its ability to decode 4K videos, which is better than what you would find in most other video players available for Ubuntu. With the help of youtube-dl, you can play high-definition videos from YouTube and hundreds of other websites in the video player. Besides supporting almost all the different video and audio file extensions, MPV also offers media encoding, smooth transitions between frames (interpolation), color management, and more.

Here are the commands using which you can get MPV Player on your system:

sudo add-apt-repository ppa:mc3man/mpv-tests
sudo apt-get update
sudo apt-get install -y mpv
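
Once MPV is installed, streaming from YouTube is as simple as passing a video URL straight to the player, provided youtube-dl is installed as well (the URL below is just a placeholder):

sudo apt-get install -y youtube-dl
# mpv hands the URL to youtube-dl and streams the video
mpv "https://www.youtube.com/watch?v=VIDEO_ID"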

4. XBMC – Kodi Media Center

Kodi, which was originally known as Xbox Media Center or XBMC, is a cross-platform media player licensed under the GNU GPL and quite a popular piece of software for playing audio and video files on Ubuntu. It offers support for most formats of audio and video files that are available online or saved on your system. In the form of add-ons, it offers loads of attractive features, including screensavers and themes for a customized interface, subtitle syncing and downloading, video streaming, and visualizations. As Kodi was originally designed for a gaming console, the Xbox, it also offers support for joysticks and other gaming controllers.

Downloading Kodi on Ubuntu is very simple using the commands:

sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt-get update
sudo apt-get install -y kodi

5. Miro

Miro, which was formerly called Democracy Player or DTV, is a free audio and video player and also an internet television application which you may use on Ubuntu, as well as on all the other major operating systems, including Windows. It is released under the GNU General Public License and offers support for almost all media formats, including HD-quality ones. It also features a user-friendly video converter based on FFmpeg which can convert almost any video/audio file into MP4 or H.264. The media player is easy to use and allows you to download and watch videos from various websites on the internet using RSS.

Type the following commands on the command line to get Miro on Ubuntu:

sudo add-apt-repository ppa:joyard-nicolas/ffmpeg
sudo apt-get update
sudo apt-get install ffmpeg miro

So, now that you have the top 5 video players for Ubuntu at your fingertips, it shouldn't be hard to pick the one that suits your requirements best. Download your pick right away and start enjoying your favorite movies and songs without a hitch.

Source

Two Point Hospital now has a sandbox mode allowing you to be a little more creative

To keep you entertained a little longer, Two Point Studios have updated Two Point Hospital with a new sandbox game mode.

While it's a nice addition, it does require you to have played the game for a while first. You're not actually able to access it unless you've gained at least one star in the third hospital. If you've already done that, you need to load your campaign once for it to show up.

It's not exactly a sandbox in the way you might think; you don't just get given a blank lot to build up your hospital. Instead, you can pick from one of the existing locations and do basically whatever you want there. There's a number of options to tweak, and eventually you can start building and keep going forever.

You also now have the ability to specify diagnosis or treatment for rooms that can handle both, and they've added a visual effect for the projector in the Training Room, along with various bug fixes.

I’m keen to see what else they add to the game. I do like it rather a lot, with some especially good humour. You can find it on Humble Store and Steam.

Source

Fedora 29 Released For Bleeding-edge Linux Desktop Experience

Fedora is known to offer a bleeding-edge Linux desktop experience; other distributions often adopt new technologies that were first implemented by Fedora. It's also known as RHEL's testing lab, as Red Hat provides the newest features to Fedora users before shipping them in RHEL.

Following the same trend, the Red Hat-supported and community-driven Fedora has just received its latest update in the form of Fedora 29. The next week also marks 15 years since the initial release of Fedora Core 1, so it’s kind of special.

What makes Fedora 29 more exciting is the fact that it's the first release to include the Fedora Modularity feature across all its different editions and spins. With Modularity, developers can ship different versions of a package on the same base, and you can choose the version of the software that matches your needs.
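
As a quick illustration of what Modularity looks like in practice, you can list the available streams of a modular package and install the one you want; Node.js is one of the modules that ships multiple streams in Fedora 29, and the exact streams on offer depend on the repositories:

# Show which streams (versions) of the Node.js module are available
sudo dnf module list nodejs
# Install a specific stream instead of the default, e.g. Node.js 10
sudo dnf module install nodejs:10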

Another big change that’ll surely be noticeable to users is GNOME 3.30 that comes with its own set of features and changes. It goes without saying that a large number of open source packages are now updated.

Fedora 29 also marks the first release of the Silverblue variant. It's the new face of Fedora Atomic Workstation from Project Atomic. With its focus on container-based workflows, this Workstation variant targets developers. You can read more about it here.

If you're already using Fedora, simply update the system to get the latest features. In case you want to install a fresh image, go ahead and download it here.
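
If you're upgrading an existing Fedora 28 installation, the usual route is the DNF system-upgrade plugin; here is a sketch of the standard procedure (back up your data first):

# Bring the current release fully up to date
sudo dnf upgrade --refresh
# Install the system-upgrade plugin if it isn't already present
sudo dnf install dnf-plugin-system-upgrade
# Download the Fedora 29 packages and reboot into the upgrade
sudo dnf system-upgrade download --releasever=29
sudo dnf system-upgrade reboot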


Source

Download Bitnami DreamFactory Stack Linux 2.14.1-1

Bitnami DreamFactory Stack is a free and cross-platform software project that provides users with a one-click install solution for easy deployment of the DreamFactory web-based application and its runtime dependencies on real hardware.

What is DreamFactory?

DreamFactory is an open source mobile services platform heavily used by enterprise developers to build native and HTML5 mobile apps. It is designed to allow programmers to begin coding immediately, without the need to manage backend infrastructure.

Installing Bitnami DreamFactory Stack

Bitnami DreamFactory Stack is distributed as native installers for the GNU/Linux, Mac OS X and Microsoft Windows operating systems, which have been designed to support both 32-bit and 64-bit (recommended) hardware platforms.

To install DreamFactory on a desktop computer or laptop, download the package that corresponds to your PC's hardware architecture, run it and follow the instructions displayed on the screen.

Run DreamFactory in the cloud

Thanks to Bitnami, users can run DreamFactory in the cloud by using a pre-built cloud image for the Windows Azure and Amazon EC2 cloud hosting platforms, or run their own DreamFactory stack server with a private hosting provider.

The DreamFactory virtual appliance and Docker container

Bitnami also offers a virtual appliance, based on the latest LTS (Long Term Support) release of the world’s most popular free operating system, Ubuntu, and designed for the Oracle VirtualBox and VMware ESX, ESXi virtualization software. A DreamFactory Docker container is also available for download on the project’s homepage.
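
If you prefer the container route, running it can be as simple as pulling the published image; the image name below follows Bitnami's usual naming scheme on Docker Hub, but it is an assumption, so check the project's homepage for the exact name, ports and environment variables:

# Pull and run the DreamFactory container (image name assumed; consult the
# official documentation for the required port mappings and variables)
docker pull bitnami/dreamfactory:latest
docker run --name dreamfactory bitnami/dreamfactory:latest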

The Bitnami DreamFactory Module

Besides the Bitnami DreamFactory Stack product reviewed here, you can also download the Bitnami DreamFactory Module for the Bitnami LAMP, WAMP and MAMP stacks, which allows users to deploy the DreamFactory application on top of the aforementioned stacks without having to install its runtime dependencies. The Bitnami DreamFactory Module is available for download on Softpedia, free of charge.

Source

Install Tomcat on Ubuntu | Linux Hint

Tomcat, also known as Apache Tomcat, is a web server just like the Apache 2 HTTP server that we mostly use to serve PHP web applications; Apache Tomcat, however, is used to serve Java-based web applications. Tomcat has support for many Java web technologies such as Java Servlet, JavaServer Pages, Java Expression Language and Java WebSocket. In this article, I will show you how to install the Tomcat web server on Ubuntu 18.04 LTS. So, let's get started.

Tomcat 8.5.x is available in the universe section of the official package repository of Ubuntu 18.04 LTS. So, it is really easy to install. First, make sure that the universe section of the official Ubuntu 18.04 LTS package repository is enabled.

To do that, run the following command:

$ egrep '^deb http.*universe.*$' /etc/apt/sources.list

As you can see, I have the universe section of the official package repository enabled.

If it’s not enabled in your case, you can easily enable it. If you’re using a desktop environment on your Ubuntu 18.04 LTS machine, then open Software & Updates app and make sure the Community-maintained free and open-source software (universe) repository is checked on the Ubuntu Software tab as marked in the screenshot below. The universe section of the package repository should be enabled.

If you’re using Ubuntu 18.04 LTS server in headless mode, then run the following command to enable the universe section of the package repository:

$ sudo apt-add-repository "deb http://us.archive.ubuntu.com/ubuntu/ bionic universe"

Now, update the APT package repository cache with the following command:

$ sudo apt update

The APT package repository cache should be updated.

Finally, install Tomcat 8.5.x with the following command:

$ sudo apt install tomcat8

Now, press y and then press <Enter> to continue.

Tomcat 8.5.x should be installed.

Starting and Stopping Tomcat Service:

In this section, I am going to show you how to manage Tomcat service on Ubuntu 18.04 LTS. You can check whether the Tomcat service is running on your Ubuntu 18.04 LTS machine with the following command:

$ sudo systemctl status tomcat8

As you can see, Tomcat service is running.

If you want to stop Tomcat service, then run the following command:

$ sudo systemctl stop tomcat8

As you can see, Tomcat service is not running anymore.

If you want to start the Tomcat service again, then run the following command:

$ sudo systemctl start tomcat8

As you can see, the Tomcat service is running again.

Starting Tomcat at System Boot:

If you want the Apache Tomcat server to start when your Ubuntu 18.04 LTS machine boots, then you have to add the Tomcat service to the system startup of your Ubuntu 18.04 LTS machine. To do that, run the following command:

$ sudo systemctl enable tomcat8

Tomcat service should be added to system startup of your Ubuntu 18.04 LTS machine. The next time you boot, it should automatically start.

Removing Tomcat from System Startup:

If you don’t want to start Apache Tomcat web server when your Ubuntu 18.04 LTS machine boots anymore, all you have to do is remove the Tomcat service from the system startup of your Ubuntu 18.04 LTS machine.

To do that, run the following command:

$ sudo systemctl disable tomcat8

Tomcat service should be removed from the system startup of your Ubuntu 18.04 LTS machine. Apache Tomcat web server won’t start when your Ubuntu machine boots anymore.

Accessing Tomcat Web Server:

By default, Apache Tomcat web server runs on port 8080. If you’re using Ubuntu 18.04 LTS desktop, just open your web browser and visit http://localhost:8080

As you can see, the welcome screen of Apache Tomcat web server showed up.

If you're using Ubuntu 18.04 LTS headless server, then run the following command to get the IP address of your Ubuntu machine which is running the Tomcat web server:

$ ip a

As you can see, the IP address is 192.168.163.134 in my case.

Now from the web browser of any computer connected to the same network as your Ubuntu server machine, visit http://IP_ADDR:8080, in my case http://192.168.163.134:8080

As you can see, I can still access the Tomcat web server running on my Ubuntu machine.

Managing Tomcat Web Server Using Web Based Management Interfaces:

Apache Tomcat has graphical management interfaces which you can use to manage your Tomcat web server from the web browser. In this section, I will show you how to configure them. To get the Tomcat management interfaces on Ubuntu 18.04 LTS, you have to install two additional software packages: tomcat8-admin and tomcat8-user.

To do that, run the following command:

$ sudo apt install tomcat8-admin tomcat8-user

tomcat8-admin and tomcat8-user packages should be installed.

Now you have to configure a username and password that you want to use to log in to the Tomcat Web based management interfaces. To do that you have to edit the configuration file /etc/tomcat8/tomcat-users.xml and add the required roles and users there.

To edit the configuration file /etc/tomcat8/tomcat-users.xml, run the following command:

$ sudo nano /etc/tomcat8/tomcat-users.xml

The configuration file should be opened.

Now navigate to the end of the file and add the following lines just before the </tomcat-users> line.

<role rolename="admin-gui"/>
<role rolename="manager-gui"/>
<user username="YOUR_USERNAME" password="YOUR_PASSWORD" roles="admin-gui,manager-gui"/>

Here, change YOUR_USERNAME and YOUR_PASSWORD to the username and password that you want to use to log in to the Tomcat web based management interfaces. I am going to set it to tomcat and tomcat for the demonstration.

Finally, it should look something like this. Now, press <Ctrl> + x and then press y and then press <Enter> to save the changes.

Now, restart Tomcat service with the following command:

$ sudo systemctl restart tomcat8

Now you can access the Tomcat Web Application Manager interface. Just visit http://localhost:8080/manager/html from your web browser and you should be prompted for the username and the password. Type in the username and password you just set and click on OK.

You should be logged in to the Tomcat Web Application Manager interface. From here, you can manage (start, stop and reload) the running web applications and many more.
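
Deploying your own application is just as straightforward. With the default layout of the tomcat8 package, dropping a WAR file into the webapps directory is enough for Tomcat to pick it up; myapp.war below is a placeholder for your own archive:

$ sudo cp myapp.war /var/lib/tomcat8/webapps/

After a few seconds the application should show up in the Web Application Manager list and be reachable at http://localhost:8080/myapp.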

There is also another web app for managing Tomcat web server called Virtual Host Manager which you can access at http://localhost:8080/host-manager/html

As you can see, the Virtual Host Manager interface is displayed in my web browser.

So, that’s how you install and use Tomcat web server on Ubuntu 18.04 LTS. Thanks for reading this article.

Source
