How to Install Django Web Framework on CentOS 7


Django is a popular web framework designed for building fully featured Python web applications. With Django you can build secure, scalable and maintainable dynamic web applications. In this tutorial, you are going to install Django on CentOS 7 using a Python virtual environment. The advantage of a virtual environment is that you can keep multiple Django environments on a single computer without affecting other Django projects, and it also becomes easier to install a specific set of modules for each project.

Prerequisites

Before you start installing Django on CentOS 7, you must have a non-root user account with sudo privileges on your system.

Also install the tree command; it is used later in this tutorial to display directory structures.

sudo yum install tree

Install Python 3

In CentOS, Python 2.7 is installed by default because it is a critical part of the CentOS base system. For Django we need Python 3.6. To install it, we enable the Software Collections (SCL) repository, which allows installing Python 3.6 alongside Python 2.7 so the base system keeps working properly.

Enable the SCL repository by installing the CentOS 7 SCL release file with the following command:

sudo yum install centos-release-scl

Now that the repository is enabled, install Python 3.6 by running the following command:

sudo yum install rh-python36

Python 3.6 ships with the venv module in its standard library, so no additional package is required on CentOS to create virtual environments.

Now we can create a virtual environment for our Django application.

Create Virtual Environment

Create a new directory for your Django application and change into it.

mkdir new_django_app && cd new_django_app

Now launch a new shell with the scl tool so that Python 3.6 is available on your path.

scl enable rh-python36 bash
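To confirm that this shell now picks up the newer interpreter, you can run a quick version check (the exact patch release may differ on your system):

python --version

It should report Python 3.6.x.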

Now create the virtual environment by running the following command. It will create a directory named venv containing the Python binaries, the pip package manager, the standard library and other supporting files.

python3 -m venv venv

To start using the virtual environment we need to activate it. Run the following command.

source venv/bin/activate

Your shell prompt will change to show the name of the virtual environment, (venv).
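As an optional sanity check, you can confirm that the shell now resolves the Python interpreter from inside the environment:

which python

The path should end in new_django_app/venv/bin/python.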

Install Django

Now install Django using pip (the Python package manager).

pip install Django

Confirm the installation and check the installed version by typing the following command.

python -m django --version

The output should be as given below. NOTE: your version number may differ slightly.

Output:
2.1.4
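Equivalently, because the virtual environment is active, the django-admin utility is on your PATH and reports the same version:

django-admin --version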

Creating Django Project

Create a Django project named newdjangoapp using the django-admin utility. Enter the following command to create the new project.

django-admin startproject newdjangoapp

A directory named newdjangoapp will now be created. Check its structure with the following command. The directory contains the manage.py file, used to manage the project, along with other Django-specific files for database configuration, routes and settings.

tree newdjangoapp/

Output should be

newdjangoapp/
|-- manage.py
`-- newdjangoapp
    |-- __init__.py
    |-- settings.py
    |-- urls.py
    `-- wsgi.py

Now change into the newdjangoapp directory.

cd newdjangoapp

Now run the initial database migrations (by default Django uses a local SQLite database).

python manage.py migrate

Output should be:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying sessions.0001_initial... OK

Create an administrative user by running the following command.

python manage.py createsuperuser

NOTE: The above command will prompt you for a username, email address and password for the new user.
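The interaction looks roughly like the following; the username and email shown here are placeholders, so choose your own values:

Username (leave blank to use 'centos'): admin
Email address: admin@example.com
Password:
Password (again):
Superuser created successfully.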

Testing the development server

Run the development server using the following command.

python manage.py runserver

The output should be:

Performing system checks...

System check identified no issues (0 silenced).
December 29, 2018 - 08:55:33
Django version 2.1.4, using settings 'newdjangoapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

NOTE: If you are running the server on a virtual machine or remote host, you need to add your server's IP address to the ALLOWED_HOSTS list inside the settings.py file.
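For example, assuming your server's address is 203.0.113.10 (a placeholder; substitute your own), open newdjangoapp/settings.py and set ALLOWED_HOSTS = ['203.0.113.10']. Then bind the development server to all interfaces so it is reachable from outside:

python manage.py runserver 0.0.0.0:8000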

Go to http://127.0.0.1:8000/ in your browser.

You can open the admin interface by visiting http://127.0.0.1:8000/admin/.

Enter the username and password you created earlier; after successful authentication you will be redirected to the administration page.

Stop the development server by pressing Ctrl+C in the terminal.

Deactivate The Virtual Environment

When you have finished working, deactivate the virtual environment by running the following command.

deactivate

Conclusion

You have successfully installed the Django web framework on CentOS 7. If you have any queries, please don’t forget to comment below.

NOTE: You can create multiple development environments by repeating the steps above.
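For example, a second, fully independent project (all names below are arbitrary) follows exactly the same pattern:

mkdir second_django_app && cd second_django_app
scl enable rh-python36 bash
python3 -m venv venv
source venv/bin/activate
pip install Django

Each project keeps its own venv directory, so packages installed for one project never leak into another.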


The State of Desktop Linux 2019

A snapshot of the current state of Desktop Linux at the start of
2019—with comparison charts and a roundtable Q&A with the leaders of three top
Linux distributions.

I’ve never been able to stay in one place for long—at least in terms of which Linux distribution I call home.
In my time as a self-identified “Linux Person”, I’ve bounced around between a
number of truly excellent ones. In my early days, I picked up boxed copies of
S.u.S.E. (back before they made the U uppercase and dropped the dots
entirely) and Red Hat Linux (before Fedora was a thing) from store shelves at
various software outlets.

Side note: remember when we used to buy Operating Systems—and even most
software—in actual boxes, with actual physical media and actual printed
manuals? I still have big printed manuals for a few early Linux versions, which, back then, were necessary for getting just about everything working
(from X11 to networking and sound). Heck, sometimes simply getting
a successful boot required a few trips through those heavy manuals. Ah, those
were the days.

Debian, Ubuntu, Fedora, openSUSE—I spent a good amount of time living in
the biggest distributions around (and many others). All of them were
fantastic. Truly stellar. Yet, each had their own quirks and peculiarities.

As I bounced from distro to distro, I developed a strong attachment to just
about all of them, learning, as I went, to appreciate each for what it
was. Just the same, when asked which distribution I recommend to others,
my brain begins to melt down. Offering any single recommendation feels
simply inadequate.

Choosing which one to call home, even if simply on a secondary PC, is a
deeply personal choice.

Maybe you have an aging desktop computer with limited RAM and an older, but
still absolutely functional, CPU. You’re going to need something light on
system resources that runs on 32-bit processors.

Or, perhaps you work with a wide variety of hardware architectures and need a
single operating system that works well on all of them—and standardizing
on a single Linux distribution would make it easier for you to administer
and update all of them. But what options even are available?

To help make this process a bit easier, I’ve put together a handy set of
charts and graphs to let you quickly glance and find the one that fits your
needs (Figures 1 and 2).

""

Figure 1. Distribution Comparison Chart I

""

Figure 2. Distribution Comparison Chart II

But, let’s be honest, knowing that a particular system meets your hardware
needs (and preferences) simply is not enough. What is the community like?
What’s in store for the future of this new system you are investing in? Do
the ideals of its leadership match up with your own?

In the interests of helping to answer those questions, I sat down with the
leaders of three of the most prominent Linux distros of the day:

  • Chris Lamb: Debian Project Leader
  • Daniel Fore: elementary Founder
  • Matthew Miller: Fedora Project Leader

Each of these systems is unique, respected and brings
something truly valuable to the world.

I asked all three leaders the exact same questions—and gave each the chance to
respond to each other. The topics are all over the place and designed to
help show the similarities and differences between the distributions, both in terms of
goals and culture.

Note that the Fedora project leader, Matthew Miller, was having an unusually
busy time (both for work and personally), but he still made time to answer as
many questions as he could. That, right there, is what I call dedication.

Bryan (LJ):

Introduce your Linux distribution (the short, elevator-pitch version—just
a few sentences) and your role within it.

Daniel (elementary):

elementary is focused on growing the market for open-source software and
chipping away at the share of our closed-source competitors. We believe in
providing a great user experience for both new users and pro users, and
putting a strong emphasis on security and privacy. We build elementary OS: a
consumer-focused operating system for desktops and notebooks.

My role at elementary is as Founder and CEO. I work with our various teams
(like design, development, web and translation teams) to put together a
cohesive vision, product roadmap and ensure that we’re following an
ethical path to sustainable funding.

Chris (Debian):

The Debian Project, which celebrated its 25th birthday this year, is
one of the oldest and largest GNU/Linux distributions and is run on an
entirely volunteer basis.

Not only does it have a stellar reputation for stability and technical
excellence, it has an unwavering philosophical stance on free software (i.e., it
comes with no proprietary software pre-installed, and the main repository contains
only free software). As it underpins countless derivative distributions,
such as Ubuntu, et al., it is uniquely poised and able to improve the Free
Software world as a whole.

The Debian Project Leader (DPL) is a curious beast. Far from being a BDFL—the
DPL has no authoritative or deciding say in technical matters—the
project leader is elected every year to a heady mix of figurehead, spokesperson
and focus/contact point, but the DPL is also responsible for the quotidian business
of keeping the project moving with respect to reducing bureaucracy and
smoothing any and all roadblocks to Debian Developers’ productivity.

Matthew (Fedora):

The Fedora distribution brings all of the innovation of thousands of upstream
projects and hundreds of thousands of upstream developers together into a
polished operating system for users, with releases on a six-month cadence.
We’re a community project tied together through the shared project mission
and through the “four Fs” of our foundations: Freedom, Friends, Features
and First. Something like 3,000 people contribute directly to Fedora in any
given year, with a core active group of around 400 people participating in
any given week.

We just celebrated the 15th anniversary of our first release, but our history
goes back even further than that to Red Hat Linux. I’m the Fedora Project
Leader, a role funded by Red Hat—paying people to work on the project is
the largest way Red Hat acts as a sponsor. It’s not a dictatorial role;
mostly, I collect good ideas and write short persuasive essays about them.
Leadership responsibility is shared with the Fedora Council, which includes
both funded roles, members selected by parts of the community and at-large
elected representatives.

Bryan (LJ):

With introductions out of the way, let’s start with this (perhaps
deceptively) simple question:

How many Linux distributions should there be? And why?

Daniel (elementary):

As long as there are a set of users who aren’t getting their needs met by
existing options, there’s a purpose for any number of distros to exist.
Some come and some go, and many are very very niche, but that’s okay. I
think there’s a lot of people who are obsessed with trying to have some
dominant player take a total monopoly, but in every other market category,
it’s immediately apparent how silly that idea is. You wouldn’t want a
single clothing manufacturer or a single restaurant chain or a single
internet provider (wink hint nudge) to have total market dominance. Diversity
and choice in the marketplace is good for customers, and I think it’s no
different when it comes to operating systems.

Matthew (Fedora):

[Responding to Daniel]
Yes, I agree exactly. That said, creating an entirely from scratch distro is
a lot of work, and a lot of it not very interesting work. If you’ve
got something innovative at the how-we-put-the-OS-together level (like
CoreOS), there’s room for that, but if you’re focused higher up the stack,
like a new desktop environment or something else around user experience, it
makes the most sense to make a derivative of one of the big community-powered
distros. There’s a lot of boring hard work, and it makes sense to reuse
rather than carry those same rocks to the top of a slightly different hill.

In Fedora, we’re aiming to make custom distro creation as easy as possible.
We have “spins”, which are basically mini custom distros. This is stuff like
the Python Classroom Lab or Fedora Jam (which is focused on musicians). We
have a framework for making those within the Fedora project—I’m all
about encouraging bigger, broader sharing and collaboration in Fedora. But if
you want to work outside the project—say, you really have different ideas
on free and open-source vs. proprietary software—we have Fedora Remixes
that let you do that.

Chris (Debian):

The competing choice of distributions is often cited as a reason preventing
Linux from becoming mainstream as it robs the movement of a consistent and focused
marketing push.

However, philosophical objections against monopolistic behaviour granted, the
diversity and freedom that this bazaar of distributions affords is, in my
view, paradoxically exactly why
it has succeeded.

That people are free—but more important, feel free—to create a
new distribution as a means to try experimental or outlandish approaches to
perceived problems is surely sufficient justification
for some degree of proliferation or even duplication of effort.

In this capacity, Debian’s technical excellence, flexibility and deliberate
lack of a top-down direction has resulted in it becoming the base
underpinning countless derivatives, clearly and evidently able to provide the
ingredients to build one’s “own” distribution, often without overt credit.

Matthew wrote: “if you want to work outside the project—say, you really have different
ideas on free and open source vs. proprietary software—we have Fedora
Remixes that let you do that.”

Given that, I would be curious to learn how you protect your reputation if
you encourage, or people otherwise use your infrastructure, tools and possibly
even your name to create and
distribute works that are antithetical to the cause of software and user
freedom?

Bryan (LJ):

Thinking about it from a slightly different angle—how many distros would
be TOO many distros?

Daniel (elementary):

More than the market can sustain I guess? The thing about
Linux is that it powers all kinds of stuff. So even for one non-technical
person, they could still end up running a handful of distros for their
notebook, their router, their phone someday, IoT devices, etc. So the number
of distros that could exist sustainably could easily be in the hundreds or
thousands, I think.

Chris (Debian):

If I may be so bold as to interpret this more widely, whilst it might look
like we have “too many” distributions, I fear this might be misunderstanding
the reasons why people are creating these newer offerings in the first place.

Apart from the aforementioned distros created for technical experimentation,
someone spinning up their own distribution might be (subconsciously!) doing
it for the delight and satisfaction in
building something themselves and having their name attached to it—something entirely reasonable and justifiable IMHO.

To then read this creation through a lens of not being ideal for new users or
even some silly “Linux worldwide domination” metric could therefore even be
missing the point and some of the sheer delight of free software to begin
with.

Besides, the “market” for distributions seems to be doing a pretty good job
of correcting itself.

Bryan (LJ):

Okay, since you guys brought it up, let’s talk about world domination.

How much of what you do (and what your teams do) is influenced by a desire
to increase marketshare (either of your distribution specifically or
desktop Linux in general)?

Daniel (elementary):

When we first started out, elementary OS was something we made for fun out of
a desire to see something exist that we felt didn’t yet. But as the
company, and our user base, has grown, it’s become more clear that our
mission must be about getting open-source software in the hands of more
people. As of now, our estimated userbase is somewhere in the hundreds of
thousands with more than 75% of downloads coming from users of closed-source
operating systems, so I think we’re making good progress toward that
goal. Making the company mission about reaching out to people directly has
shaped the way we monetize, develop products, market and more, by ensuring
we always put users’ needs and experiences first.

Chris (Debian):

I think it would be fair to say that “increasing market share” is not an
overt nor overly explicit priority for Debian.

In our 25-year history, Debian has found that if we just continue to do good
work, then good things will follow.

That is not to say that other approaches can’t work or are harmful, but
chasing potentially chimeric concepts such as “market share” can very easily
lead to negative outcomes in the long run.

Matthew (Fedora):

A project’s user base is directly tied to its ability to have an effect in
the world. If we were just doing cool stuff but no one used it, it really
wouldn’t matter much. And, no one really comes into working on a distro
without having been a user first. So I guess to answer the question directly
for me at least, it’s pretty much all of it—even things that are not
immediately related are about helping keep our community healthy and growing
in the long term.

Bryan (LJ):

The three of you represent distros that are “funded” in very different ways.
Fedora being sponsored (more or less) by Red Hat, elementary being its own
company and Debian being, well, Debian.

I would love to hear your thoughts around funding the work that goes into
building a distribution. Is there a “right” or “ideal” way to fund that work
(either from an ethical perspective or a purely practical one)?

Chris (Debian):

Clearly, melding “corporate interests” with the interests of a community
distribution can be fraught with issues.

I am always interested to hear how other distros separate influence and power
particularly in terms of increasing transparency using tools such as Councils
with community representation, etc. Indeed, this question of “optics” is
often highly under-appreciated; it is simply not enough to be honest, you
must be seen to be honest too.

Unfortunately, whilst I would love to be able to say that Debian is
by-definition free (!) of all such problems by not having a “big sister”
company sitting next to it, we have a long history of conversations regarding
the role of money in funding contributors.

For example, is it appropriate to fund developers to do work that might not
be done otherwise? And if it is paid for, isn’t this simply a feedback
loop that effectively ensures that this work will cease to be within the remit
of volunteers? There are no easy answers and we have no firm consensus, alas.

Daniel (elementary):

I’m not sure that there’s a single right way, but I think we have the
opinion that there are some wrong ways. The biggest questions we’re
always trying to ask about funding are where it’s coming from and what
it’s incentivizing. We’ve taken a hard stance that advertising income is
not in the interest of our users. When companies make their income from
advertising, they tend to have to make compromises to display advertising
content instead of the things their users actually want to see, and
oftentimes they are incentivized to invade their users’ privacy in order
to target ads more effectively. We’ve also chosen to avoid big enterprise
markets like server and IoT, because we believe that, since companies are
naturally incentivized to work on products that turn a profit, making
that our business model would result in things like the recent Red Hat
acquisition or in killing products that users love, like Ubuntu’s Unity.

Instead, we focus on things like individual sales of software directly to our
users, bug bounties, Patreon, etc. We believe that doing business directly
with our users incentivizes the company to focus on features and products
that are in the benefit of those paying customers. Whenever a discussion
comes up about how elementary is funded, we always make a point to evaluate
if that funding incentivizes outcomes that are ethical and in the favor of
our users.

Regarding paying developers, I think elementary is a little different here.
We believe that people writing open-source software should be able to make a
living doing it. We owe a lot to our volunteer community, and the current
product could not be possible without their hard work, but we also have to
recognize that there’s a significant portion of work that would never get
done unless someone is being paid to do it. There are important tasks that
are difficult or menial, and expecting someone to volunteer their time to them
after their full work day is a big ask, especially if the people
knowledgeable in these domains would have to take time away from their
families or personal lives to do so. Many tasks are also just more suited to
sustained work and require the dedicated attention of a single person for
several weeks or months instead of some attention from multiple people over
the span of years. So I think we’re pretty firmly in the camp that not
only is it important for some work to be paid, but the eventual goal should
be that anyone writing open-source code should be able to get paid for their
contributions.

Chris (Debian):

Daniel wrote: “So I think we’re pretty firmly in the camp that not only is it
important for some work to be paid, but the eventual goal should be that anyone
writing open-source code should be able to get paid.”

Do you worry that you could be creating a two-tier community with this
approach?

Not only in terms of hard influence (e.g., if I’m paid, I’m likely to be able
to simply spend longer on my approach) but moreover in terms of “soft”
influence during discussions or by putting off
so-called “drive-thru” contributions? Do you do anything to prevent the
appearance of this?

Matthew (Fedora):

Chris wrote:
“Do you worry that you could be creating a two-tier community with this
approach?”

Yeah, this is a big challenge for us. We have many people who are paid by Red
Hat to work on Fedora either full time or as part of their job, and that
gives a freedom to just be around a lot more, which pretty much directly
translates to influence. Right now, many of the community-elected positions
in Fedora leadership are filled by Red Hatters, because they’re people the
community knows and trusts. It takes a lot of time and effort to build up
that visibility when you have a different day job. But there’s some important
nuances here too, because many of these Red Hatters aren’t actually paid to
work on Fedora at all—they’re doing it just like anyone else who loves the
project.

Daniel (elementary):

Chris wrote:
“Do you worry that you could be creating a two-tier community with this
approach?”

It’s possible, but I’m not sure that we’ve measured anything to
this effect. I think you might be right that employees at elementary can have
more influence just as a byproduct of having more time to participate in more
discussions, but I wouldn’t say that volunteers’ opinions are
discounted in any way or that they’re underrepresented when it comes to
major technical decisions. I think it’s more that we can direct labor
after design and architecture decisions have been discussed. As an example,
we recently had decided to make the switch from CMake to Meson. This was a
group discussion primarily led by volunteers, but the actual implementation
was then largely carried out by employees.

Chris (Debian):

Daniel wrote:
“Do you worry that you could be creating a two-tier community with
this approach? … It’s possible, but I’m not sure that we’ve measured anything to
this effect.”

I think it might be another one of those situations where the optics in play
are perhaps as important as the reality. Do you do anything to prevent the
appearance of any bias?

Not sure how best to frame it hypothetically, but if I turned up to your
project tomorrow and learned that some developers were paid for their work
(however fairly integrated in practice), that would perhaps put me off
investing my energy.

Bryan (LJ):

What do you see as the single biggest challenge currently facing both your
specific project—and desktop Linux in general?

Daniel (elementary):

Third-party apps! Our operating systems are valuable to people only if they can
use them to complete the tasks that they care about. Today, that increasingly
means using proprietary services that tie in to closed-source and non-native
apps that often have major usability and accessibility problems. Even major
open-source apps like Firefox don’t adhere to free desktop standards like
shipping a .desktop file or take advantage of new cross-desktop metadata
standards like AppStream. If we want to stay relevant for desktop users, we
need to encourage the development of native open-source apps and invest in
non-proprietary cloud services and social networks. The next set of
industry-disrupting apps (like DropBox, Sketch, Slack, etc.) need to be open source and
Linux-first.

Chris (Debian):

Third-party apps/stores are perhaps the biggest challenge facing all
distributions within the medium- to long-term, but whilst I would concede
there are cultural issues in play here, I believe they have some element of
being technical challenges or at least having some technical ameliorations.

More difficult, however, is that our current paradigms of what constitutes
software freedom are becoming difficult to square with the increased usage of
cloud services. In the years ahead we may need to revise our perspectives,
ideas and possibly even our definitions of what constitutes free software.

There will be a time when the FLOSS community will have to cease the casual
mocking of “cloud” and acknowledge the reality that it is, regardless of
one’s view of it, here to stay.

Matthew (Fedora):

For desktop Linux, on the technical side, I’m worried about hardware
enablement—not just the work dealing with driver compatibility and
proprietary hardware, but more fundamentally, just being locked out. We’ve
just seen Apple come out with hardware locked so Linux won’t even boot—even with signed kernels. We’re going to see more of that, and more tablets
and tablet-keyboard combos with similar locked, proprietary operating
systems.

A bigger worry I have is with bringing the next generation to open
source—a lot of Fedora core contributors have been with the project since it started
15 years ago, which on the one hand is awesome, but also, we need to
make sure that we’re not going to end up with no new energy. When I was a
kid, I got into computers through programming BASIC on an Apple ][. I could
see commercial software and easily imagine myself making the same kind of
thing. Even the fanciest games on offer—I could see the pixels and could
use PEEK and POKE to make those beeps and boops. But now, with kids getting
into computers via Fortnite or whatever, that’s not something one can just
sit down and make an approximation of as a middle-school kid. That’s
discouraging and makes a bigger hill to climb.

This is one reason I’m excited about Fedora IoT—you can use Linux and open
source at a tinkerer’s level to make something that actually has an effect on
the world around you, and actually probably a lot better than a lot of
off-the-shelf IoT stuff.

Bryan (LJ):

Where do you see your distribution in five years? What will its place be in
the broader Linux and computing world?

Chris (Debian):

Debian naturally faces some challenges in the years ahead, but I sincerely
believe that the Project remains as healthy as ever.

We are remarkably cherished and uniquely poised to improve the free software
ecosystem as a whole. Moreover, our stellar reputation for technical
excellence, stability and software freedom remains highly respected, and
losing this would surely be the beginning of the end for Debian.

Daniel (elementary):

Our short-term goals are mostly about growing our third-party app ecosystem and
improving our platform. We’re investing a lot of time into online
accounts integration and working with other organizations, like GNOME, to
make our libraries and tooling more compelling. Sandboxed packaging and
Wayland will give us the tools to help keep our users’ data private and
to keep their operating system stable and secure. We’re also working with
OEMs to make elementary OS more shippable and to give users a way to get an
open-source operating system when they buy a new computer. Part of that work
is the new installer that we’re collaborating with System76 to develop.
Overall, I’d say that we’re going to continue to make it easier to
switch away from closed-source operating systems, and we’re working on
increasing collaborative efforts to do that.

Bryan (LJ):

When you go to a FOSS or Linux conference and see folks using Mac and Windows
PCs, what’s your reaction? Is it a good thing or a bad thing when
developers of Linux software primarily use another platform?

Chris (Debian):

Rushing to label this as a “good” or “bad” thing can make it easy to miss the
underlying and more interesting lessons we can learn here.

Clearly, if everyone was using a Linux-based operating system, that would be a
better state of affairs, but if we are overly quick to dismiss the usage of
Mac systems as “bad”, then we can often fail to understand why people have
chosen to adopt the trade-offs of these platforms in the first place.

By not demonstrating sufficient empathy for such users as well as newcomers
or those without our experience, we alienate potential users and contributors
and tragically fail to communicate our true
message. Basically, we can be our own worst enemy sometimes.

Daniel (elementary):

Within elementary, we strongly believe in dogfood, but I think when we see
someone at a conference using a closed-source operating system, it’s a
learning opportunity. Instead of being upset about it or blaming them, we
should be asking why we haven’t been able to make a conversion. We need
to identify whether the problem is a missing product or feature, or just with
outreach and then address that.

Bryan (LJ):

How often do you interact with the leaders of other distributions? And is
that the right amount?

Chris (Debian):

Whilst there are a few meta-community discussion groups around, they tend to
have a wider focus, so yes, I think we could probably talk a little more, even
just as a support group or a place to rant!

More seriously though, this conversation itself has been fairly insightful,
and I’ve learned a few things that I think I “should” have known already,
hinting that we could be doing a better job here.

Daniel (elementary):

With other distros, not too often. I think we’re a bit more active with
our partners, upstreams and downstreams. It’s always interesting to hear
about how someone else tackles a problem, so I would be interested in
interacting more with others, but in a lot of cases, I think there are
philosophical or technical differences that mean our solutions might not be
relevant for other distros.

Bryan (LJ):

Is there value in the major distributions standardizing on package management
systems? Should that be done? Can that be done?

Chris (Debian):

I think I would prefer to see effort go toward consistent philosophical
outlooks and messaging on third-party apps and related issues before I saw
energy being invested into having a single package management format.

I mean, is this really the thing that is holding us all back? I would grant
there is some duplication of effort, but I’m not sure it is the most egregious
example and—as you suggest—it is not even really technically
feasible or is at least subject to severe diminishing returns.

Daniel (elementary):

For users, there’s a lot of value in being able to sideload
cross-platform, closed-source apps that they rely on. But outside of this use
case, I’m not sure that packaging is much more than an implementation
detail as far as our users are concerned. I do think though that developers
can benefit from having more examples and more documentation available, and
the packaging formats can benefit from having a diverse set of
implementations. Having something like Flatpak or Snap become as well
accepted as systemd would probably be good in the long run, but our users
probably never noticed when we switched from Upstart, and they probably
won’t notice when we switch from Debian packages.

Bryan (LJ):

Big thanks to Daniel, Matthew and Chris for taking time out to answer
questions and engage in this discussion with each other. Seeing the
leadership of such excellent projects talking together about the things they
differ on—and the things they align on completely—warms my little
heart.


5 Best Android Emulators for Linux

An emulator is software that makes one computer system behave like another. When I talk about Android emulators for Linux, I mean a program for Linux that provides an Android environment. Developers and testers use such emulators to test their Android apps on a Linux system, and gamers use them to run Android games. You can run Android apps and games on your Linux system this way. I have already listed the best Android emulators for PC, but that list basically covered Android emulators for Windows and Mac, so I decided to make a dedicated list of Android emulators for Linux.


1. Android-x86

If you are looking to run Android apps on your Ubuntu environment, you can boot the Android-x86 ISO file inside any virtual environment. Just download the ISO file from the link given below, then use virtualization software such as VirtualBox to attach the ISO and run the Android environment on your Linux-based system. It is very simple to set up and use.

Android-x86 is an open source project. You can contribute or donate to help the development.
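As a rough sketch of that setup, here is how a VM for the ISO could be created from the command line with VirtualBox's VBoxManage tool; the VM name, memory size and ISO path below are placeholders, and the graphical VirtualBox interface works just as well:

# Create and register the VM, then give it RAM and video memory
VBoxManage createvm --name android-x86 --ostype Linux_64 --register
VBoxManage modifyvm android-x86 --memory 2048 --vram 64
# Attach the downloaded ISO as a virtual DVD drive and boot the VM
VBoxManage storagectl android-x86 --name IDE --add ide
VBoxManage storageattach android-x86 --storagectl IDE --port 0 --device 0 --type dvddrive --medium ~/Downloads/android-x86.iso
VBoxManage startvm android-x86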

Download

2. Andy OS

Andy OS is another excellent Android emulator that runs on several Linux-based environments, and it is also available for Mac. So you can now play Android games on your Linux-based system easily. While setting up Andy OS on your Linux system, make sure to allot enough resources to the virtual environment if you want a smooth experience; heavy games and apps in particular need plenty of RAM.

Download

3. Android SDK

If you are a developer and want an Android emulator for testing your apps, you can use the Android SDK, which ships with an emulator. Download the Android SDK and Android Studio on your Linux system. The whole setup is heavy, so it takes a while to download and install. After that you can run Android apps and games on your system without needing an Android phone.

Download

4. GenyMotion

GenyMotion is another good Android emulator for Linux. It works similarly to the Android SDK emulator and lets developers test their apps on different virtual Android devices. Others can also use it to see how a game or app works on Android without needing an Android phone. You can download the GenyMotion Android emulator for Linux from GenyMotion's website after creating a free account. Setting up GenyMotion on a Linux PC is easy.

Download

5. Anbox

Anbox is another good open-source Android emulator you can install on your Linux system. It runs a full Android system, so you can run any Android app. It also puts Android apps into a tightly sealed box, so they have no direct access to your hardware or data, which makes the platform more secure. The only issue with this Android emulator for Linux is that it doesn't ship with the Google Play Store, but you can always install apps manually from APK files.

Download

Final Words

That is the list of the best Android emulators for Linux. If you use a Linux system and want a good Android emulator, check this list and download one. You can use these emulators for playing Android games or trying out Android apps on your Linux PC.

If you think I missed a good Android emulator for Linux, let me know. I will surely try it and consider including it in this list.


Maximize your Ansible skills with these 7 how-tos


A collection of playbooks, guides, and tutorials to maximize your Ansible skills.


Ansible is a powerful, agentless (but easy-to-use and lightweight) automation tool that’s been steadily gaining popularity since its introduction in 2012. This popularity is due in part to its simplicity.  Ansible’s most basic dependencies, Python and SSH, are available by default almost everywhere, making it easy to use Ansible for a wide range of systems: servers, workstations, Raspberry Pis, industrial controllers, Linux containers, network devices, and so on.

Ansible is also diverse in the tasks it can perform. From core modules to manage system configuration, network management, cloud resource creation, and even Kubernetes integration, Ansible can integrate with a wide variety of systems and software. It’s easy to write custom modules for Ansible as well, extending it to perform all manner of functions in the environment.
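To make that concrete, here is roughly what driving hosts with core modules looks like from the shell; the inventory file and the webservers group are placeholder names for illustration:

# Verify SSH connectivity to every host in the inventory
ansible all -i inventory.ini -m ping

# Use the yum module to ensure a package is present on a group of hosts
ansible webservers -i inventory.ini -m yum -a "name=httpd state=present" --become

No agent is installed on the managed hosts; Ansible only needs SSH access and Python on the remote side.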

You can even use Ansible to install, customize, and run your favorite games, including Dwarf Fortress!

This diversity in support and operation is reflected in our list of the best Ansible articles of 2018, which cover systems administration, monitoring, workstation management, Kubernetes, continuous integration and deployment, and more. Check it out.

Top 7 Ansible articles from 2018

  • Learn how to save time doing updates with the Ansible IT automation engine.
  • There are many ways to automate common sysadmin tasks with Ansible. Here are several of them.
  • How to manage your workstation configuration with Ansible: learn how to automate your workstation setup via Ansible, which will allow you to easily restore…
  • How to use Ansible to set up system monitoring with Prometheus: in summer 2017, I wrote two how-to articles about using Ansible. After the first article, I…
  • The new Operator SDK makes it easy to create a Kubernetes controller to deploy and manage a service…
  • Streamline and tighten automation processes in complex IT environments with these Ansible playbooks.
  • Manage your workstation with Ansible: Automating configuration: learn how to make Ansible automatically apply configuration changes to a fleet of laptops and…


How to install Dropbox on CentOS 7 Server


Dropbox is an online storage service that supports Linux distributions. It automatically backs up and stores your data securely. It has both free and paid plans: the free plan provides 2GB of storage, and if you want more you can buy a paid plan. In this tutorial, you are going to learn how to install Dropbox on a CentOS 7 server.

Prerequisites

Before you start installing Dropbox on CentOS, you must have a non-root user account with sudo privileges on your server.

Install Dropbox Client

Here we will first install the Dropbox client. Download it using the following command.

curl -Lo dropbox-linux-x86_64.tar.gz https://www.dropbox.com/download?plat=lnx.x86_64

Create a directory for Dropbox installation by using the following command.

sudo mkdir -p /opt/dropbox

Now extract the downloaded archive into the /opt/dropbox directory.

sudo tar xzvf dropbox-linux-x86_64.tar.gz --strip-components=1 -C /opt/dropbox

Setup account for Dropbox

In this section, we will link your Dropbox account to the Dropbox client on your server. To do so, execute the following command.

/opt/dropbox/dropboxd

You will get output like the following. Copy the link from the output and open it in your favorite browser on your local machine.

Host ID Link:
This computer isn’t linked to any Dropbox account…
Please visit https://www.dropbox.com/cli_link_nonce?nonce=3d88f2e1f2949265ebcac8d159913770 to link this device.

If you have an existing Dropbox account, just sign in; otherwise, create a new account on Dropbox.


Once you complete the above process, you will see the following output on your CentOS system.

Link success output:
This computer is now linked to Dropbox. Welcome Sammy

You have successfully linked your Dropbox account to the Dropbox client. A new Dropbox directory is created inside your HOME directory to store synchronized Dropbox files. Now press Ctrl+C; next we will set up Dropbox as a service.

Setup Dropbox as a Service

To set up Dropbox as a service, you need to create an init script and a systemd unit file. Download both by entering the following commands.

sudo curl -o /etc/init.d/dropbox https://gist.githubusercontent.com/thisismitch/6293d3f7f5fa37ca6eab/raw/2b326bf77368cbe5d01af21c623cd4dd75528c3d/dropbox
sudo curl -o /etc/systemd/system/dropbox.service https://gist.githubusercontent.com/thisismitch/6293d3f7f5fa37ca6eab/raw/99947e2ef986492fecbe1b7bfbaa303fefc42a62/dropbox.service

Run the following command to make both files executable.

sudo chmod +x /etc/systemd/system/dropbox.service /etc/init.d/dropbox

The /etc/sysconfig/dropbox file should contain the names of the system users who will run Dropbox. Run the following command to create and edit this file.

sudo nano /etc/sysconfig/dropbox

Set the username as in the example below, then save and exit the file.

DROPBOX_USERS="john"
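If more than one system user should run Dropbox, init scripts of this kind typically accept a space-separated list; the usernames below are placeholders, and it is worth checking the downloaded script to confirm it iterates over the list:

DROPBOX_USERS="john jane"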

Now reload the systemd daemon so it picks up the new unit file.

sudo systemctl daemon-reload

Now start and enable the Dropbox service by executing the following command.

sudo systemctl start dropbox && sudo systemctl enable dropbox
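To confirm the service came up cleanly, you can query its status with standard systemd tooling:

sudo systemctl status dropbox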

Install Dropbox CLI

Enter the following command to download the Dropbox CLI script.

cd ~ && curl -LO https://www.dropbox.com/download?dl=packages/dropbox.py

Make the script executable by running the following command.

chmod +x ~/dropbox.py

The Dropbox CLI expects ~/.dropbox-dist to contain your Dropbox installation files, so create a symbolic link pointing it at /opt/dropbox.

ln -s /opt/dropbox ~/.dropbox-dist

Now you can run the Dropbox CLI using the following command. Run with no arguments, it prints instructions for using the CLI.

~/dropbox.py

You can check the status of Dropbox by typing the following command.

~/dropbox.py status

You should get the following output.

Output:
Up to date

To turn off automatic LAN sync, use the following command.

~/dropbox.py lansync n

If you want more information about a specific command, use the help subcommand.

~/dropbox.py help sharelink

The above command prints detailed information about the sharelink command.

~/dropbox.py running

The running subcommand returns 1 if the Dropbox daemon is running and 0 if it is not.

Check the status of Dropbox by typing:

~/dropbox.py status

If Dropbox is not active, start the service by running:

~/dropbox.py start

To stop the Dropbox service, enter:

~/dropbox.py stop

Get the sync status of a specific file by typing:

~/dropbox.py filestatus Dropbox/test.txt

Generate a shareable link for a file by typing:

~/dropbox.py sharelink Dropbox/test.txt

You can exclude a directory from syncing by using the following command:

~/dropbox.py exclude add Dropbox/dir1

To list excluded directories, type:

~/dropbox.py exclude list

Remove a directory from the excluded list by typing:

~/dropbox.py exclude remove Dropbox/dir1

Link Additional Dropbox Account

To link an additional Dropbox account, run the following command and copy the URL from its output.

/opt/dropbox/dropboxd

Now go to the URL given in the output and complete the authentication process.
Then add the new user to the /etc/sysconfig/dropbox file (the same file edited earlier).

sudo nano /etc/sysconfig/dropbox

Unlink Dropbox Account

First stop the Dropbox service.

sudo systemctl stop dropbox

Remove the user from the /etc/sysconfig/dropbox file.

sudo nano /etc/sysconfig/dropbox

Then delete that user's Dropbox directory with the following command, replacing USERNAME with the real system username.

sudo rm -r /home/USERNAME/Dropbox

Now start the Dropbox service again.

sudo systemctl start dropbox

Conclusion

You have successfully learned how to install Dropbox on CentOS 7. If you have any queries, don’t forget to comment below.


2018: Top 10 biggest news stories from Linux and open source world

The year 2018 turned out to be a big newsmaker for the Linux and open source world. The most important acquisition in the open source world, Deepfakes, serious security flaws in CPUs, and the Facebook scandal all happened in 2018. Vivek Gite picks the top 10 most significant news stories from the Linux and open source world that rocked the IT world.

Top 10 biggest news stories from Linux and open source world in 2018

Below are the most talked-about and most-shared stories from my social media account and site.

1. IBM buys Red Hat

IBM acquired the number one open source enterprise Linux distributor and software maker Red Hat Inc. in a $33.4 billion deal. Red Hat started in 1993 with a strong focus on Linux and open source software. IBM and Red Hat will combine to create a leading hybrid and multi-cloud service provider.

I always knew either Oracle or IBM will eat up Redhat Enterprise Linux. IBM Nears Deal to Acquire Software Maker Red Hat https://t.co/DOhU1KIpOw What do you think?

— The Best Linux Blog In the Unixverse (@nixcraft) October 28, 2018

2. Meltdown and Spectre bug

Meltdown and Spectre exploit critical vulnerabilities in modern CPUs. They were severe security problems affecting Intel, AMD, ARM and other processors. Spectre breaks the isolation between different applications. The Meltdown attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. Big cloud vendors such as Google and AWS patched the bugs before anyone else, and most Linux distros and IT vendors released patches on time too. Further, Intel and AMD released microcode updates. However, the *BSD family of operating system vendors was ignored by Intel: FreeBSD was made aware of the problems only in late December 2017, and OpenBSD claimed that it received no non-public information. Patching these security bugs also hurt system performance. (For a quick way to check a running system, see the command after the list below.)

  1. How to check Linux for Spectre and Meltdown vulnerability
  2. How to patch Meltdown CPU Vulnerability CVE-2017-5754 on Linux
  3. How to patch Spectre Vulnerability CVE-2017-5753/CVE-2017-5715 on Linux
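On Linux kernels 4.15 and later, you can also get a quick per-issue summary straight from sysfs; this is a quick check rather than a full audit:

grep . /sys/devices/system/cpu/vulnerabilities/*

Each file reports either 'Not affected', 'Vulnerable' or the mitigation the running kernel has applied.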

3. A Kubernetes security flaw

Kubernetes is the most popular open source platform for managing containerized services. It provides management, automation, scaling and much more. A critical vulnerability was found in Kubernetes that allowed attackers to take over any vulnerable node using a specially crafted request:

With a specially crafted request, users that are authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.

The post-mortem report (PDF) provides a fantastic summary of the problems and how to avoid them in the future.

4. Microsoft makes its 60000 patents open source to help Linux

To capitalize better on the open source, Linux and cloud computing markets, Microsoft announced that it has joined the Open Invention Network (OIN). The OIN is a community dedicated to protecting Linux and other open source software programs from patent risk:

We know Microsoft’s decision to join OIN may be viewed as surprising to some; it is no secret that there has been friction in the past between Microsoft and the open source community over the issue of patents. For others who have followed our evolution, we hope this announcement will be viewed as the next logical step for a company that is listening to customers and developers and is firmly committed to Linux and other open source programs.

Now, as we join OIN, we believe Microsoft will be able to do more than ever to help protect Linux and other important open source workloads from patent assertions. We bring a valuable and deep portfolio of over 60,000 issued patents to OIN. We also hope that our decision to join will attract many other companies to OIN, making the license network even stronger for the benefit of the open source community.

5. Linux adopts a new CoC (Code of Conduct)

In September 2018, Linus Torvalds issued an apology regarding his public behavior and announced that he would be taking some time off from the Linux kernel. Linux kernel creator Linus Torvalds has told the BBC that he is seeking professional help to become more empathetic towards fellow developers, but admits he may have to “fake it until I make it.”

Linux 4.19-rc4 released, an apology, and a maintainership note – Linus Torvalds https://t.co/RVRP3wfMIt HT @ddayhere pic.twitter.com/kUUhWWRSDh

— The Best Linux Blog In the Unixverse (@nixcraft) September 17, 2018

Following his public apology for his behavior over the years, the Linux community adopted a new Code of Conduct (CoC). The new CoC provides a harassment-free experience for everyone who participates in Linux kernel development. However, both Linus’s apology and the CoC met with strong reactions on social media, mailing lists and forums, including conspiracy theories. A revised version of the CoC was finally released.

6. EQT buys SUSE

SUSE is very popular in Europe and is one of the most significant commercial Linux distributors. It plays a vital role in the Linux, open-source infrastructure and management space. Micro Focus announced that SUSE is changing owners yet again, in a deal valuing SUSE at $2.535 billion:

SUSE today announced plans to partner with growth investor EQT to continue momentum, strategy execution and product expansion as an independent business. The completion of EQT’s acquisition of SUSE from Micro Focus is subject to Micro Focus shareholder and customary regulatory approvals and is expected to occur in early 2019. Having enjoyed seven years of continuous expansion, SUSE is set to be acquired from Micro Focus by EQT, which is a development-focused investor with extensive experience in the software industry. Under Micro Focus ownership and with their investment and support, SUSE has developed as a business, cementing its position as a leading provider of enterprise-grade, open source software-defined infrastructure and application delivery solutions.

7. Microsoft buys GitHub

GitHub is a web-based hosting service for version control using Git. It quickly became a favorite among developers, especially open source hackers, thanks to its collaboration features. On June 4, 2018, Microsoft announced it had reached an agreement to acquire GitHub for US$7.5 billion, and the purchase closed on October 26, 2018.

Microsoft Corp. on Monday announced it has reached an agreement to acquire GitHub, the world’s leading software development platform where more than 28 million developers learn, share and collaborate to create the future. Together, the two companies will empower developers to achieve more at every stage of the development lifecycle, accelerate enterprise use of GitHub, and bring Microsoft’s developer tools and services to new audiences.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

I wasn’t happy, as it gives too much power to a single vendor:

For people asking and defending Microsoft acquiring github. Here is what’s wrong with Microsoft and why people in open source world do not trust MS. What is wrong with Microsoft buying GitHub https://t.co/AcHdQ07iRA

— The Best Linux Blog In the Unixverse (@nixcraft) June 4, 2018

I got an email from one my client who is @Github private repo business customer. They want to move out of Github to a personal GIT server hosted either in AWS or Google Cloud. They fear that Microsoft might get insight into their codebase. Small startups/business do not trust MS.

— The Best Linux Blog In the Unixverse (@nixcraft) June 3, 2018

8. Proton beta released to run Windows games on Linux

One of the biggest reasons Linux has not succeeded on the desktop is the lack of quality games; most developers release their games for Windows. To bring Windows games to Linux, Valve announced a new variation of Wine named Proton. Proton is fully open-source software, and it integrates directly with the Linux version of the Steam client:

Proton is a tool for use with the Steam client which allows games which are exclusive to Windows to run on the Linux operating system. It uses Wine to facilitate this.

9. Microsoft releases its own Linux distro for the cloud

In the past, Microsoft strategically excluded any support for Linux and at one time even called it a cancer. To stay relevant in the cloud computing era, Microsoft has accepted Linux and started to embrace open source:

  1. Windows Subsystem for Linux (WSL) is a compatibility layer for running Linux binary executables natively on Windows 10 and Windows Server 2019. Microsoft is working closely with Canonical, the company behind the Ubuntu operating system, which was brought in to provide support for running Ubuntu natively on Windows 10.
  2. Azure Sphere is the first Linux-based operating system created by Microsoft, aimed at Internet of Things applications.

10. MIPS goes open source

MIPS is an acronym for Microprocessor without Interlocked Pipelined Stages. MIPS-based CPUs are used in embedded systems such as home gateways, WiFi devices and routers. Now, MIPS is going open source:

Without question, 2018 was the year RISC-V genuinely began to build momentum among chip architects hungry for open-source instruction sets. That was then. Wave Computing (Campbell, Calif.) announced Monday (Dec. 17) that it is putting MIPS on open source, with MIPS Instruction Set Architecture (ISA) and MIPS’ latest core R6 available in the first quarter of 2019. Art Swift, hired by Wave this month as president of its MIPS licensing business, described the move as critical to accelerate the adoption of MIPS in an ecosystem. Going open source is “a big plan” that Wave CEO Derek Meyer, a MIPS veteran, has been quietly fostering since Wave acquired MIPS Technologies in June, explained Swift.

What was your favorite news related to Linux and open source this year? Share it in the comments!

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.


Weekend Reading: Multimedia | Linux Journal

Put the fun back in computing. With this weekend’s reading, we encourage you to build yourself an internet radio station, create your own Audible or even live-stream your pets on YouTube. Sky’s the limit with Linux. Enjoy!

Building Your Own Audible

by Shawn Powers

I have audiobooks from a variety of sources, which I’ve purchased in a variety of ways. I have some graphic audio books in MP3 format, a bunch of Audible books in their DRM’d format and ripped CDs varying from m4b (Apple format for books) to MP3 and even some OGG. That diversity makes choosing a listening platform difficult. Here I take a quick look at some options for streaming audio books.

Linux Gets Loud

by Joshua Curry

Linux is ready for prime time when it comes to music production. New offerings from Linux audio developers are pushing creative and technical boundaries. And, with the maturity of the Linux desktop and growth of standards-based hardware setups, making music with Linux has never been easier.

Using gphoto2 to Automate Taking Pictures

by Shawn Powers

With my obsession—er, I mean hobby—regarding BirdCam, I’ve explored a great number of camera options. Whether that means trying to get Raspberry Pi cameras to focus for a macro shot of a feeder or adjusting depth of field to blur out the neighbor’s shed, I’ve fiddled with just about every webcam setting there is. Unfortunately, when it comes to lens options, nothing beats a DSLR for quality. Thankfully, there’s an app for that.

Creating an Internet Radio Station with Icecast and Liquidsoap

by Bill Dengler

Ever wanted to stream prerecorded music or a live event, such as a lecture or concert for an internet audience? With Icecast and Liquidsoap, you can set up a full-featured, flexible internet radio station using free software and open standards.

Live Stream Your Pets with Linux and YouTube!

by Shawn Powers

Anyone who reads Linux Journal knows about my fascination with birdwatching. I’ve created my own weatherproof video cameras with a Raspberry Pi. I’ve posted instructions on how to create your own automatically updating camera image page with JavaScript. Heck, I even learned CSS so I could make a mobile-friendly version of BirdCam that filled the screen in landscape mode.

Nativ Vita

by James Gray

The motto “open to anything” underpins Nativ’s development philosophy on all of its audio solutions, including its new Nativ Vita, “the world’s first High-Resolution Music Player” and touchscreen control center that is designed to function as the central access point for one’s entire music collection.

The Post-TV Age?

by Shawn Powers

The most basic cable package from Charter (Spectrum?) costs me more than $70 per month, and that’s without any equipment other than a single cable card. It’s very clear why people have been cutting the cord with cable TV companies. But, what options exist? Do the alternatives actually cost less? Are the alternatives as good? I’ve been trying to figure that out for a few months now, and the results? It depends.

Android Candy: the Verbification of Video Chat

by Shawn Powers

People who study the history of languages probably will look back at our current time and scratch their heads. We keep inventing verbs! First, Google became the verb we use for searching. Then, “Facebooking” someone became a viable way to contact them. Heck, I forgot about “texting” someone. It seems we just keep taking perfectly good nouns and making them verbs. We keep verbing all our nouns!

Source

Creating Kubernetes Cluster Using Amazon’s EKS Service – Linux Hint

Kubernetes is a complex body of software. It is meant for a distributed cluster of compute nodes and is designed to withstand surges in workload, link failures and node failures. It is also a fast-moving project with constant (and often backward-incompatible) changes and third-party dependencies.

Given all the complexity that underlies it, it is very difficult and expensive for an organization to self-host and maintain a Kubernetes cluster and run its applications on top of it. If you are not in the business of operating Kubernetes clusters, you may want to use Amazon’s Elastic Kubernetes Service (EKS) to deploy your applications. It will greatly reduce the cost of operation, and you can rest easy knowing that experienced developers and operators are in charge of it instead.

Prerequisites

  • An AWS account with console access and appropriate permissions. Contact your firm’s AWS operator to get the appropriate privileges.
  • An AWS IAM user with programmatic access. We will be acting as this user when controlling our Kubernetes cluster. Here’s how you can install and configure AWS CLI for the account under which EKS cluster will be created.
  • A basic understanding of Kubernetes

Creating a Kubernetes cluster

You can create the cluster via the CLI as well, but most new users will find the graphical console friendlier, so we will use that instead. Assuming you have logged in to your AWS Console, get started by going to Services from the top right corner and clicking on EKS in the drop-down menu:

The next menu will show the AWS intro page; from there, go to the Clusters option under the EKS submenu.

Here you can see a list of all the Kubernetes clusters created under your account. As there are none yet, let’s create one.

Click on Create cluster. Give it a name and select the version of Kubernetes you want; at the time of this writing, version 1.11 is supported by Amazon. Next, click on Role name, because we need to create a Role to hand over to Amazon EKS so it can manage our cluster.

Creating and Assigning Role

Before we get started with that, let’s understand a key difference between Amazon EKS (an AWS Service) and your Kubernetes Cluster on AWS. AWS segregates responsibilities wherever it can, to give you a very fine-grained control over everything. If you wish to give yourself, or a third party, complete control over these resources you can do that as well.

Think of Amazon EKS as one such party that will manage your Kubernetes cluster (your EKS cluster) on your behalf, but it requires your explicit permission to do just that. To grant it, we will create a Role for managing EKS clusters under our AWS account and assign that Role to Amazon EKS.

In the new IAM tab that opened after clicking on Role name, you will see a few default roles for billing and support already in place. Let’s create a new one for EKS. Click on Create Role.

Select AWS service as the type of trusted entity for which the role will be created, and then select EKS so your EKS cluster can talk directly to Amazon EKS and perform optimally. Then click Next.

Now you will be able to see the permissions and permission boundaries associated with this role. The default values are fine; just click Next.

The next menu will prompt you to add tags (key-value pairs) to this role. This is completely optional, but quite useful if you use the CLI to manage your AWS resources and have a lot of different roles and resources to manage. We won’t be adding any tags; click Next, and give your role a meaningful name and description.

And that’s it! Click on Create role, and we can go back to our EKS cluster creation. The next time you want to create another cluster, you can reuse this same role.
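If you prefer automating this step, here is a hedged sketch of a roughly equivalent CLI approach. The role name is a placeholder, and the two policy ARNs are the managed EKS policies the console wizard attaches by default; double-check them against current AWS documentation before relying on this:

# create the role with a trust policy letting the EKS service assume it,
# then attach the two managed EKS policies
$ aws iam create-role --role-name myEksServiceRole \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name myEksServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
$ aws iam attach-role-policy --role-name myEksServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy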

Back to Cluster Creation

Even if your AWS account is brand new, there’s still a default VPC (Virtual Private Cloud) with a few subnets created within it. These are spread across different availability zones, and you have to select at least two of them for the cluster.

Then select the default security group to allow most inbound and outbound traffic to pass normally.

Click on Create, and your Kubernetes cluster will be up and running in minutes. Once your cluster is created, you can always get an overview of it by going to EKS → Cluster → myCluster. Of course, the last part, the name of your cluster, will be different.

Local Setup

The EKS platform works by letting you interact with the control plane through the plane’s API endpoint. The control plane is equivalent to the master nodes in a vanilla Kubernetes cluster. It runs etcd, the CAs and, of course, the API server, which you will use to control your Kubernetes cluster.

You will have to configure kubectl and/or your dashboard to work with this API endpoint, and once that is set up, you can start listing all your resources, deployments, etc., as you would with a regular Kubernetes cluster.

If you don’t already have kubectl installed on your computer, you can install it by following this link for Mac, Windows or your favorite Linux distro.
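For Linux, a hedged sketch of what such an install typically boils down to is below; the URL pattern follows the upstream Kubernetes release instructions of this era, and the version is only an example you should roughly match to your cluster:

# download a kubectl release binary, make it executable, move it onto the PATH
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.5/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl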

We also need the AWS IAM authenticator binary for your platform. Download it from here and make it executable.

$ sudo chmod +x ./aws-iam-authenticator

Add it to one of your $PATH folders, for example /usr/bin or /sbin or /usr/local/sbin. Alternatively, do as Amazon recommends: put it inside a bin folder in your home directory and make that folder part of your PATH variable.

$ mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator &&
export PATH=$HOME/bin:$PATH

Next, test that the binaries work.

$ kubectl version
$ aws-iam-authenticator help

Now we need to configure these binaries so they can talk to our Kubernetes cluster securely. You can do it manually if you don’t want to set up the AWS CLI, but that’s not a reliable approach, which is why the prerequisites listed the AWS CLI as necessary. So, assuming you have installed it and configured it to work with your AWS account, run the following command:

Note: if you were already using kubectl to manage another Kubernetes cluster, with the configuration files at the default ~/.kube location, you might want to back up that folder before running the following command.

$ aws eks update-kubeconfig --name myCluster

If the name of your cluster is different from “myCluster”, substitute yours instead. The update-kubeconfig command updates your kubectl configuration by editing the files in the ~/.kube folder. If that location doesn’t exist, it will create a new one for you.

Now you are ready to interface with your cluster.

$ aws eks describe-cluster --name myCluster
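As a further sanity check that kubectl itself can reach the new control plane, you can list the services in the default namespace; a freshly created cluster should show only the built-in kubernetes service:

$ kubectl get svc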

Where to Next?

Now you are finally ready to add worker nodes using CloudFormation and deploy your application across all the availability zones your cluster’s VPC has access to. This whole process can also be automated to the nth degree if you choose to use the AWS CLI for everything, from the creation of the cluster to deploying and scaling your apps.
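As a hedged taste of that route, creating the same cluster non-interactively looks roughly like this; the account ID, role ARN, subnet IDs and security group ID are placeholders you would substitute with your own values:

# create the cluster, wait for it to become active, then point kubectl at it
$ aws eks create-cluster --name myCluster \
    --role-arn arn:aws:iam::111122223333:role/myEksServiceRole \
    --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb,securityGroupIds=sg-0ccc
$ aws eks wait cluster-active --name myCluster
$ aws eks update-kubeconfig --name myCluster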

Hope you found this tutorial helpful and insightful.

Source

Set Alarm to Automatically Power On Linux Computer

It’s no secret that you can make your computer “sleep” to save considerable energy – or battery life on laptops. A battery-powered device can spend many days in standby mode; the power draw in this state is incredibly low.

You can wake your computer at any time by pressing the power button or a key on your keyboard. But what if you want it to automatically wake up at a certain time? This can help you automate certain tasks – for example, to download something at 4AM when Internet speed may be much higher. With a bit of command-line magic, you can schedule your device to wake up, take some action and then go back to sleep again.
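As a minimal sketch of that idea (the URL is a placeholder, and the --date option is explained later in this article): rtcwake suspends the machine and only returns once it wakes up, so whatever command follows it runs at the scheduled time.

# suspend now, wake at 04:00, download a file, then sleep again until 09:00
# (if 04:00 has already passed today, use the full 'YYYY-MM-DD hh:mm' form instead)
sudo rtcwake -m mem --date 04:00 && \
wget https://example.com/large-file.iso && \
sudo rtcwake -m mem --date 09:00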

Besides waking up from standby, you may find it even more useful to completely shut down your computer and have it power on at certain times. Hibernation is also supported, but Linux systems that use proprietary drivers don’t always wake from hibernation properly.

Test If Your Computer Supports Wake-Up Timers

It’s possible that some computers don’t have the proper hardware to support this feature. However, on most configurations, this should work. You can do a quick test: open a terminal emulator and enter the following command.

sudo rtcwake -m mem -s 30

Your computer should go to sleep and wake up thirty seconds later. If your device needs more than twenty seconds for standby, increase the wake-up time by changing “30” to a higher number.

Also, test if the computer supports waking up from a complete shutdown.

sudo rtcwake -m off -s 60

Regarding -m off, the command manual mentions: “Not officially supported by ACPI, but it usually works.”

If the kernel, drivers and hardware all get along with each other, you should have no problems. If the timers aren’t supported, it’s probably because the hardware and/or BIOS/UEFI configurations don’t meet the requirements. But you might as well try your luck and see if upgrading some drivers or switching from proprietary ones to open source does the trick. You can also try to install a newer kernel.

As previously mentioned, hibernation has issues unrelated to the rtcwake command. It works most of the time but fails on occasion. When it fails, your screen will remain black or show an error message.

How to Use the rtcwake Command

The basic use of the command is simple: pick a power-saving method and a time to wake up. In the previous commands, the -s parameter was used to specify how many seconds to wait before powering back on. But usually you will want to specify an absolute time, like 9AM tomorrow morning. For that, you use the --date parameter instead of -s.

rtcwake Date Parameter

sudo rtcwake -m mem --date 09:00

Note: not all hardware supports setting wake-up dates far into the future. This is something you’ll just have to test to see if it works on your specific device.

Time specification is in 24-hour format. The rtcwake manual lists several formats for specifying the time and date of a wake-up event.

“YYYY-MM-DD hh:mm” means year, month, day, hour and minute – for example, --date '2020-02-28 15:00' for the 28th of February 2020 at 3PM.

rtcwake Dry Run

You can add another parameter to rtcwake, -n, to display when the alarm will be set.

sudo rtcwake -m mem --date +12hours -n

This is a “dry run,” meaning it doesn’t actually set an alarm and only “pretends” to do it. It’s useful to add -n when you want to test if your date specification is correct. Once you’re sure it’s right, just use the command without -n to set the actual wake-up time.

rtcwake Power-Saving Methods

The relevant options you can pass to the -m parameter are:

  • -m mem – normal standby mode you’re familiar with from the shutdown menu.
  • -m disk – hibernate mode that saves memory content to storage device. Not recommended when using proprietary drivers.
  • -m off – normal shutdown.
  • -m disable – cancel a wake-up event you previously set.
  • -m no – don’t power off or standby, just set a wake event. For example, you can set a wake-up time for tomorrow morning, then continue to work on your computer. When you’re done, shut down normally, and the device will automatically power on in the morning (a short sketch follows this list).
  • -m show – show wake-up events (previously-set alarms) currently active.
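Here is that -m no workflow as a minimal sketch, assuming the target time is still ahead of you today:

# write a wake-up alarm for 07:30 into the RTC without suspending
# (if 07:30 has already passed, use the full 'YYYY-MM-DD hh:mm' form instead)
sudo rtcwake -m no --date 07:30

# confirm the alarm that was just set
sudo rtcwake -m show

# later, shut down normally; the machine powers itself back on at 07:30
sudo shutdown -h now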

Conclusion

It’s up to you to find creative ways to use rtcwake. For starters, it can help you find the computer fully booted in the morning, letting you skip the boring boot process, which can take more than a minute on some systems. You could also install a utility such as at to automate tasks that your computer runs after waking up. We might even explore that option in a future tutorial.
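In the meantime, here is a hedged sketch of that combination (the script path is a placeholder): pair a wake-up alarm with an at job queued for a few minutes later.

# power the machine on at 06:55...
sudo rtcwake -m no --date 06:55

# ...and queue a job that at(1) will run at 07:00, once the system is back up
echo '/usr/local/bin/nightly-task.sh' | at 07:00

# shut down; the job runs shortly after the automatic power-on
sudo shutdown -h now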

Source
