Antergos Softens Arch Learning Curve | Reviews

By Jack M. Germain

Oct 3, 2018 10:44 AM PT

Antergos Softens Arch Learning Curve

Antergos 18.9, released last month, is one of the better Arch Linux options. It is a powerful and modern computing platform, elegantly designed. It gives power users almost all they could desire.

Arch distros are not for Linux newcomers — but for seasoned Linux users who are new to Arch, Antergos has much to offer.

One of the biggest challenges in getting started with any Arch distro is surviving the installation. A secondary challenge with Arch is its software management processes. Arch users who overcome those challenges gain a solid performing Linux desktop with more layers of security and little or no software bloat.

Antergos is not a perfect solution, but it certainly is one that offers a reasonable expectation of success. That is something I cannot say about typical Arch distros.

There are a few other exceptions, though. Together, they form the upper-crust winners among Arch Linux entry-level distros. In addition to Antergos, this elite group includes ArchMerge, Anarchy and Manjaro.

Antergos is hawked on the developer’s website as a distro for everyone, but it actually is not for everyone — at least not until it is installed and running.

That said, Antergos does provide a less frustrating user experience through the installation process. The support options and easy-to-use desktops make Antergos a good fit for most users from that point forward. Still, I highly recommend some preparation before jumping into the Antergos distro or any Arch-based release.

For example, you need a solid handle on how Arch Linux works to use Antergos successfully, or to use other Arch-based distros with less frustration. That entails considerable background reading so that things make sense from the start.

Also, check out the community's active forum, its well-maintained Wiki, and the ArchWiki. You can get additional help from the Antergos IRC Channel.

Arch World Primer

What had been the Cinnarch distro until 2013 morphed into Antergos. Cinnarch was a single-flavored Arch distro running the Cinnamon desktop environment. That desktop gave Cinnarch a comfortable user experience. Its bottom panel bar, familiar two-column menu, and other attributes resembled what most users were accustomed to seeing on a computer screen.

Spanish developer Alex Filgueira rebranded his distro from its former iteration. He expanded the reborn Arch-based distro to offer a more complete range of desktops. That expansion includes the now-default GNOME 3 desktop, along with Cinnamon, KDE Plasma 5, Xfce, Mate and Openbox.

“Antergos” is a Galician word used to link the past with the present. Making an Arch distro simpler to install was Filgueira’s focus. Near-manual installation routines that relied on a command line process had been the Arch norm. Other Arch-based distros used a combination of scripts to semi-automate the installation routine.

The Antergos Cnchi graphical installer project hosted on GitHub is the tool that smooths out the installation process considerably. Cnchi is still in beta and has a few glitches, depending on your hardware and the desktop you select. Still, it does a better job than most other Arch-based distros.

About Antergos

The prime directive for all things Arch is simplicity, modernity and pragmatism. Added to that is a focus on user centrality and versatility. In general, though, the founding principle of simplicity falls apart when it comes to installing most Arch Linux systems.

Antergos comes much closer to obeying that initial commandment, though. Numix, the icon and theme set it ships, also helps Antergos honor the founding articles that set out what Arch is supposed to be, giving the distro a distinctive look and feel.

In true Arch fashion, Antergos relies on rolling releases. Once you install Antergos and have it running well, you never have to repeat the process. Updates roll in as they are ready, so the operating system always is loaded with the latest releases.

There are no point releases or reinstallation mandates. That provides a shining example of what simplicity should be in all Linux distros.

Getting Started

There are two download options. A Live Install Image includes a fully working environment that allows you to test how Antergos performs. The Minimal Install Image includes only what is required to run the installer and thus offers a much smaller initial download.

Antergos 18.9 installation screen

Within the live DVD session, you can start the installation process only from this splash screen.

After loading the live DVD, you click one button to run Antergos in live mode if you want to check out its performance and hardware compatibility. You click a second button to install it. If you decide to install Antergos, make sure to run the live session first, so you can establish an Internet connection. You must be connected to install this distro. You then can return to the splash screen to install the OS.

The ISO file that you download to burn the installation DVD contains all six desktop options. It sort of implies that you can load each one as part of the live session testing phase.

However, that is not the case. The Live session — whether you load it to test Antergos or install it — runs only the default GNOME 3 desktop. If you decide to install Antergos, the installer offers six desktop environment options once the installation routine gets under way.

Tread Carefully

Another sticking point is how to start an installation. The ISO is for direct installation. Typically, Arch distros do not have fully functional live session environments. Those that do require you to exit the live session environment to start the install process externally.

The “simplified” distros I mentioned above do provide the ability to fully test the Arch distros. When you boot into a live session, connect the PC to the Internet and wait for the installer to open automatically. Otherwise, it will not update and open properly.

If a glitch occurs, do not try to restart the installation process. The only cure is to reboot the PC. Then start the installation routine again.

Generally, the installation routine takes some time to complete. Be patient. Cnchi has to fetch the latest packages from the Internet. That burden is increased if you agree to the options to install proprietary graphics drivers, Flash add-ons and alternative Web browsers.

Updated Impressions

I last did a full-blown, hands-on test run of Antergos in March 2015. I have dabbled in a variety of its desktop options over the years since its Cinnarch days.

Antergos has not changed much in look and feel, regardless of which desktop is at play. To a point, that is a good thing. Stability and reliability continue to be staples of this Arch distro.

Antergos 18.9 background images and color patterns

Antergos includes a nice collection of background images and color patterns.

However, I was less impressed this time around due to what struck me as complacency. Once the rather smooth installation was complete and the out-of-the-box reliability was evident, I found myself asking, "Is that all there is?"

Like most Linux families — Arch, Debian, RPM-based, Ubuntu-based, Fedora-based or openSUSE-based, to name a few — too many Linux distros look and play the same. In most cases, the desktop environments look all too much alike.

That description is most true of the GNOME desktop. Since GNOME is still the default desktop for Antergos, I installed that version for this review.

Arch's other telltale traits fall to the background. GNOME is what you stare at most often while using the OS. Just as Arch Linux is simply Arch, the GNOME desktop here is just plain-Jane, uninviting GNOME.

Antergos 18.9 desktop

The GNOME 3 desktop is the default environment, but it lacks any distro-specific tweaking to make it unique or better. The same is true for the other five desktop options.

To make Antergos Linux a step better than other Arch-based distros running GNOME 3, the developer needs to build in some innovative tweaks to make the desktop’s integration just a tad bit more, umm, improved.

I have the same recommendation for Antergos’ other desktop options. This is clearly not something that all Linux developers do, but tweaking the desktop so that it is a unique part of the distro characteristics gives adopters a reason to stay with one distro.

Reasons to Go With Antergos

Desktop blandness aside, you should consider the reasons to select Antergos over other Arch Linux options. Perhaps the most important reason is that Antergos is 100 percent functional out of the box if you do not start off with the minimal installation ISO.

The preinstalled software is a small collection — that keeps bloat from setting in. You get the basics that include a video player, music software, text editor and other essentials, depending on the desktop environment.

You add to what you want from there. Additional software can be installed using the Pacman package manager. This is one of the best package managers available among Linux distributions.
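
For instance, a few typical pacman operations look like this (the package name is just an example):

$ sudo pacman -Syu  # refresh the package databases and update the entire system
$ sudo pacman -S vlc  # install a package from the official repositories
$ pacman -Ss editor  # search the repositories for a keyword
$ sudo pacman -Rs vlc  # remove a package along with its now-unneeded dependencies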

Antergos uses the Arch Linux repositories. They contain the newest versions of all the software. The Arch Linux Archive is one of the best-maintained repositories.

Plus, you get access to more of the latest software additions that are not yet vetted into the official Arch repository through the Arch User Repository. This is a community-driven repository for Arch users.
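
Installing something from the AUR typically means building it yourself. As a rough sketch (the package name below is only a placeholder), the manual route looks like this:

$ git clone https://aur.archlinux.org/some-package.git
$ cd some-package
$ makepkg -si  # build the package and install it with pacman

Many Arch users rely on an AUR helper to automate those steps, but the underlying process is the same.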

Antergos comes with the Chromium Web browser by default. It runs Linux kernel 4.18.5.

Bottom Line

If you are already familiar with the Arch Linux family but want a quicker installation method, you will appreciate what Antergos brings to the Linux table. Those who are less familiar with the Arch Linux methodologies are sure to be much less enthusiastic about using the OS.

This distro gives you some of the most popular desktop environments all in one download. If you are clueless about a preferred desktop, though, you will be stuck staring at the default GNOME option. Antergos does not provide users with an easy switching tool to change the desktop option. The live session ISO does not let you try out any other option either.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I'll consider them for a future Linux Picks and Pans column.

And use the Reader Comments feature below to provide your input!

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.

Source

Making better use of your Linux logs

Log files on Linux can provide a lot of useful information about what's happening on your system. The commands below can help you sort through the data and pinpoint problems.

Linux systems maintain quite a collection of log files, many of which you are probably rarely tempted to view. Some of these log files are quite valuable, though, and options for exploring them might be more interesting and varied than you imagine. Let’s look at some system logs and get a handle on some of the ways in which log data might be easier to probe.

Log file rotation

First, there's the issue of log rotation. Some Linux log files are "rotated." In other words, the system stores more than one "generation" of these files, mostly to keep them from using too much disk space. The older logs are compressed but left available for a while. Eventually, the oldest in a series of rotated log files is deleted automatically, but you will still have access to a number of the older logs. That lets you examine entries added over the last few days or weeks if you need to look a little further back into an issue you're tracking.

To get a feel for what types of system information are being saved, simply cd over to the /var/log directory and list its contents.

/var/log# ls
alternatives.log btmp.1 kern.log.2.gz syslog.3.gz
alternatives.log.1 cups kern.log.3.gz syslog.4.gz
alternatives.log.2.gz dist-upgrade kern.log.4.gz syslog.5.gz
alternatives.log.3.gz dpkg.log lastlog syslog.6.gz
alternatives.log.4.gz dpkg.log.1 mail.err syslog.7.gz
alternatives.log.5.gz dpkg.log.2.gz mail.err.1 sysstat
apport.log dpkg.log.3.gz mail.err.2.gz tallylog
apport.log.1 dpkg.log.4.gz mail.err.3.gz ufw.log
apt dpkg.log.5.gz mail.err.4.gz ufw.log.1
atop faillog mail.log ufw.log.2.gz
auth.log fontconfig.log mail.log.1 ufw.log.3.gz
auth.log.1 gdm3 mail.log.2.gz ufw.log.4.gz
auth.log.2.gz gpu-manager.log mail.log.3.gz unattended-upgrades
auth.log.3.gz hp mail.log.4.gz wtmp
auth.log.4.gz installer speech-dispatcher wtmp.1
boot.log journal syslog
bootstrap.log kern.log syslog.1
btmp kern.log.1 syslog.2.gz

This is a fairly large collection of logs and log directories — 69 files and directories in /var/log in this case, but 180 files when you include the files inside those directories.

$ cd /var/log
$ ls | wc -l
69
$ find . -type f -print | wc -l
180

When you examine your log files, you will see pretty clearly which are generations of the same basic log. For example, one of the primary log files — the syslog file — is broken into eight separate files. These represent roughly a week's worth of historical data along with the current file. Most of the older files are zipped to preserve space.

$ ls -l syslog*
-rw-r----- 1 syslog adm 588728 Oct 15 20:42 syslog
-rw-r----- 1 syslog adm 511814 Oct 15 00:09 syslog.1
-rw-r----- 1 syslog adm 31205 Oct 14 00:06 syslog.2.gz
-rw-r----- 1 syslog adm 34797 Oct 13 00:06 syslog.3.gz
-rw-r----- 1 syslog adm 61107 Oct 12 00:08 syslog.4.gz
-rw-r----- 1 syslog adm 31682 Oct 11 00:06 syslog.5.gz
-rw-r----- 1 syslog adm 32004 Oct 10 00:07 syslog.6.gz
-rw-r----- 1 syslog adm 32309 Oct 9 00:05 syslog.7.gz
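
When you need to search across both the current log and the rotated, compressed generations in one pass, zgrep is handy because it reads plain and gzip-compressed files alike (the search string here is just an example):

$ zgrep -i "error" /var/log/syslog*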

The syslog files contain messages from many different system services — cron, sendmail and the kernel itself are just examples. You’ll also see evidence of user sessions and cron (scheduled tasks).

Most Linux systems no longer use the old messages and dmesg files that served as landing places for the bulk of our system messages for many years. Instead, a large variety of files and some special commands have become available to help present the log information that is likely to be most relevant to what you are looking for.

Depending on the file in question, you might simply use more or tail commands, or you might use a file-specific command like this use of the who command to pull user login data from the wtmp log.

$ who wtmp
shs pts/1 2018-10-05 08:42 (192.168.0.10)
shs pts/1 2018-10-08 09:41 (192.168.0.10)
shs pts/1 2018-10-11 14:00 (192.168.0.10)
shs :0 2018-10-14 19:11 (:0)
shs pts/0 2018-10-14 19:16 (192.168.0.25)
shs pts/0 2018-10-15 07:39 (192.168.0.25)
shs :0 2018-10-15 19:58 (:0)
dory pts/0 2018-10-15 20:01 (192.168.0.11)
shs pts/0 2018-10-15 20:42 (192.168.0.6)
shs pts/0 2018-10-16 07:18 (192.168.0.6)
nemo pts/1 2018-10-16 07:46 (192.168.0.14)

Similarly, you might see nothing when you run a tail faillog command, but a command like this shows you that it’s simply full of zeroes:

# od -bc faillog
0000000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000

*
0076600

You may also see very little when you try to tail lastlog only to discover that you need to use the lastlog command to view that log’s data.

So, here is a listing of log files in /var/log and some descriptions of what they contain and how to view their contents.

  • alternatives.log — “run with” suggestions from update-alternatives
  • apport.log — information on intercepted crashes
  • auth.log — user logins and authentication mechanisms used
  • boot.log — boot time messages
  • btmp — failed login attempts
  • dpkg.log — information on when packages were installed or removed
  • lastlog — recent logins (use the lastlog command to view)
  • faillog — information on failed login attempts — all zeroes if none have transpired (use faillog command to view)
  • kern.log — kernel log messages
  • mail.err — information on errors detected by the mail server
  • mail.log — information from mail server
  • syslog — system services log
  • ufw — firewall log
  • wtmp — login records
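
For the plain-text files in the list above, a couple of generic commands go a long way (the search string is only an example):

$ sudo tail -f /var/log/auth.log  # watch authentication events as they arrive
$ sudo grep "Failed password" /var/log/auth.log  # list failed SSH login attempts, if any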

journalctl

In addition to the log files maintained in /var/log, there is also the systemd journal. While not a simple “log file” in the usual sense of a single log file, this journal represents an important collection of information on user and kernel activity. The information is retrieved from a variety of sources on the system.

To view the information that has been collected, you would use the journalctl command.

How much information you will see depends on whether you are a member of the adm group or not. Non-adm users will see relatively little information, but members of the adm group will have access to a massive amount of data — as shown in this example, which is merely showing us how many lines of information are available for this adm group member to review:

$ journalctl | wc -l
666501

That’s more than 666,000 lines of text! To pare this down to a hopefully more digestible display, you’re probably going to want to use arguments that tailor what you will see displayed. Some of the options available with journalctl include:

--utc (change the time format to UTC)
-b (only show records added since the last boot)
-b -1 (only show records added during the boot before the last one)
--since and --until (only show records added within the specified timeframe, e.g., --since "2018-10-11" --until "2018-10-15 06:00")

Here’s an example:

$ journalctl --since "2018-10-16 13:28"
-- Logs begin at Mon 2018-05-14 15:16:11 EDT, end at Tue 2018-10-16 13:28:57 EDT. --
Oct 16 13:28:25 butterfly kernel: [UFW BLOCK] IN=enp0s25 OUT= MAC=01:00:5e:00:00:01:02:
Oct 16 13:28:25 butterfly kernel: [UFW BLOCK] IN=enp0s25 OUT= MAC=01:00:5e:00:00:fb:00:
Oct 16 13:28:57 butterfly su[7784]: pam_unix(su:session): session closed for user root
Oct 16 13:28:57 butterfly sudo[7783]: pam_unix(sudo:session): session closed for user root
lines 1-5/5 (END)

You can also examine log entries just for some particular service. This is probably one of the more useful things that the journalctl command can do for you:

$ journalctl -u networking.service
-- Logs begin at Mon 2018-05-14 15:16:11 EDT, end at Tue 2018-10-16 08:06:31 EDT
May 14 15:16:12 shs-Inspiron-530s systemd[1]: Starting Raise network interfaces.
May 14 15:16:12 shs-Inspiron-530s systemd[1]: Started Raise network interfaces.
May 14 15:49:18 butterfly systemd[1]: Stopping Raise network interfaces...
May 14 15:49:18 butterfly systemd[1]: Stopped Raise network interfaces.
-- Reboot --
May 14 15:49:50 butterfly systemd[1]: Starting Raise network interfaces...
May 14 15:49:51 butterfly systemd[1]: Started Raise network interfaces.
-- Reboot --

Notice how the system reboots are displayed in this output.
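
To see the boots themselves enumerated — handy when deciding which -b offset to use — journalctl can list them (assuming the journal is stored persistently across reboots):

$ journalctl --list-boots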

To get a list of services, try a command such as this:

$ service --status-all | column
[ + ] acpid [ + ] network-manager
[ - ] alsa-utils [ - ] networking
[ - ] anacron [ - ] plymouth
[ + ] apparmor [ - ] plymouth-log
[ + ] apport [ - ] pppd-dns
[ + ] atd [ + ] procps
[ + ] atop [ - ] quota
[ + ] atopacct [ - ] quotarpc
[ + ] avahi-daemon [ - ] rsync
[ - ] bluetooth [ + ] rsyslog
[ - ] console-setup.sh [ - ] saned
[ + ] cron [ + ] sendmail
[ + ] cups [ + ] speech-dispatcher
[ + ] cups-browsed [ - ] spice-vdagent
[ + ] dbus [ + ] ssh
[ - ] dns-clean [ + ] sysstat
[ + ] gdm3 [ - ] thermald
[ + ] grub-common [ + ] udev
[ - ] hwclock.sh [ + ] ufw
[ + ] irqbalance [ + ] unattended-upgrades
[ + ] kerneloops [ - ] uuidd
[ - ] keyboard-setup.sh [ + ] whoopsie
[ + ] kmod [ - ] x11-common

In the display above:

+ = active
- = inactive
? = no status option available
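
On a systemd-based system, you can get a similar picture straight from systemd itself; for example, this lists just the services that are currently running:

$ systemctl list-units --type=service --state=running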

Here’s a useful command for getting a quick report on disk space usage:

$ journalctl --disk-usage
Archived and active journals take up 824.1M in the file system.
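
If that figure is larger than you'd like, journalctl can also trim the journal by size or by age (run with root privileges; the limits shown are only examples):

$ sudo journalctl --vacuum-size=500M
$ sudo journalctl --vacuum-time=2weeks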

If you want to focus on a particular process, you can do that by providing a PID (truncated) as in the example below.

$ journalctl _PID=787
-- Logs begin at Mon 2018-05-14 15:16:11 EDT, end at Tue 2018-10-16 08:25:17 EDT
Aug 03 18:02:46 butterfly apport[787]: * Starting automatic crash report genera
Aug 03 18:02:46 butterfly apport[787]: ...done.
-- Reboot --
Sep 16 13:26:34 butterfly atopacctd[787]: Version: 2.3.0 - 2017/03/25 09:59:59
Sep 16 13:26:34 butterfly atopacctd[787]: accounting to /run/pacct_source
-- Reboot --
Oct 03 18:08:41 butterfly apport[787]: * Starting automatic crash report genera
Oct 03 18:08:41 butterfly apport[787]: ...done.
-- Reboot --
Oct 15 14:07:11 butterfly snapd[787]: AppArmor status: apparmor is enabled but s
Oct 15 14:07:12 butterfly snapd[787]: AppArmor status: apparmor is enabled but s
Oct 15 14:07:12 butterfly snapd[787]: daemon.go:344: started snapd/2.35.2 (serie
Oct 15 14:07:12 butterfly snapd[787]: autorefresh.go:376: Cannot prepare auto-re

NOTE: The systemd journal’s configuration file is /etc/systemd/journald.conf.
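
A couple of the settings you might adjust there, shown as an illustrative excerpt rather than as recommended values:

[Journal]
Storage=persistent
SystemMaxUse=500M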

Wrap-up

The variety of log files on Linux systems is somewhat overwhelming, but discovering a handful of commands that can help pinpoint problems can save you a lot of time and stress.

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.

Source

Microsoft Open Sources Infer.NET AI Framework [For Humanity]

Last updated October 13, 2018 By Avimanyu Bandyopadhyay Leave a Comment

Microsoft is extremely active in this era of Artificial Intelligence and has very recently open-sourced its award-winning Infer.NET AI framework.

Microsoft's 'love' for AI research is growing each day, and at times it is headed in a good direction. The company has pledged $115 million for humanitarian-related AI programs under its AI for Good initiative. Some of the AI for Good programs by Microsoft are AI for Humanitarian Action, AI for Accessibility and AI for Earth. Infer.NET is part of the program.

What is Infer.NET all about?

Infer.NET is a Machine Learning framework that can use a developer’s customized model to create a Machine Learning algorithm built around that model only. It uses Bayesian inference in graphical models and can also implement Probabilistic Programming.

This is in contrast to many learning approaches that require a separate, previously developed learning algorithm.

Since the algorithm is completely based upon the model fed to the Infer.NET framework, interpretation and debugging become much easier, as developers can focus on their own individual models instead of a separate pre-existing learning algorithm.

The Model-based Machine Learning algorithm will evolve and work in only the way the developer has specifically designed the model.

You can learn more about Model-based Machine Learning from this free eBook from the developers.

Infer.NET has been in development since 2004 via Microsoft’s research centre in Cambridge, UK. It was released for academic use in 2008 and finally went Open Source on October 5, 2018.

Applications include but are not limited to:

So far there have been hundreds of research applications of Infer.NET. A few of them serve as good examples for understanding the framework and its significance now that it is FOSS:

Three notable research initiatives through Infer.NET

The above three papers make it clear why the framework, now being open source, is a positive step for humanity.

Infer.NET is available on GitHub and also as NuGet packages.

Do you like Microsoft's new initiative? Are you interested in Machine Learning? Share your thoughts with us in the comments below.


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source

Linux Today – Install and Configure Webmin on your Ubuntu System

Oct 16, 2018, 08:00 (0 Talkback[s])

(Other stories by Karim Buzdar)

The Webmin console is your answer to on-the-fly management of Linux as an administrator. You can use any web browser to set up user accounts, Apache, DNS, file sharing, and much more. In this article, we will describe a step-by-step installation of Webmin on your Ubuntu system.
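
The usual approach, sketched here from Webmin's documented repository setup (double-check the current instructions before running any of this), looks roughly like this:

$ wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
$ echo "deb https://download.webmin.com/download/repository sarge contrib" | sudo tee /etc/apt/sources.list.d/webmin.list
$ sudo apt update
$ sudo apt install webmin

Once installed, Webmin listens on port 10000, so you log in by pointing a browser at https://your-server:10000 (the hostname here is a placeholder).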

Complete Story

Related Stories:

Source

Download KDE Plasma Linux 5.12.7 LTS

KDE Plasma (formerly K Desktop Environment and KDE Software Compilation and KDE Plasma Workspace and Applications) is an open source project comprised of numerous packages, libraries and applications designed to provide a modern graphical desktop environment for Linux and UNIX-like workstations.

A beautiful, modern and traditional desktop environment

It combines ease-of-use, superior graphical design and powerful functionality with the unique features and architecture of the Linux operating system. It’s comprised of the KDE Plasma Workspaces and KDE Applications components.

Additionally, it contains extra add-ons for the panel and desktop, a download manager, an instant messenger, an address book, a document viewer, a multimedia layer called Phonon (similar to GStreamer on GNOME), and accessibility functionality, such as a powerful text-to-speech system.

Numerous GNU/Linux distributions use KDE

Numerous modern Linux distributions are built around the KDE desktop environment, either using it as their main computing environment or as an alternate one. Just like in the GNOME desktop environment, KDE tries to keep the same look and feel across all of its default applications. The entire KDE project is written with the Qt toolkit.

Under the hood

Under the hood, the KDE Plasma project is comprised of many core libraries that form KDE Frameworks and are required by all applications, various runtime components that ensure the proper functionality of the included apps, the applications, and the actual user environment.

Comes pre-loaded with apps for daily tasks

Users will find at least one application for each common task, including emailing, web browsing, news reading, painting, image viewing, video and music playback, and many more.

It also includes a collection of basic utilities, such as a calculator or archive manager, various scientific and educational applications, packages that contain extra themes, icons, wallpapers, window decorations and screensavers, some system administration tools, bindings for well known programming languages, and many games.

Bottom line

It is one of the first and best desktop environments for Linux distributions. During the last couple of years, KDE proved to be a very mature, reliable and stable project, backed by a talented community.


Source

Trojans and RansomWare explained in light of WannaCry RansomWare

Over the past week, around 200,000 systems are believed to have been hacked by the WannaCry ransomware. Let's start with some background first, and then move into the details.

Trojans

Before you learn what ransomware is, it's important to know what trojans are. We can broadly classify malicious computer programs into two categories:

  1. Spread wildly and attack destructively
  2. Spread surgically and attack covertly

The first category comprises the typical viruses that infect your computers, get onto your USB drives, and copy themselves to every avenue they can. They slow down your computer, limit its functionality, and in general make a lot of changes that make them easy to detect. These generally serve no particular useful purpose for the writer of the malicious code, other than perhaps giving them the lulz or maybe some sense of accomplishment. Also, once spread, the writer of the malicious code has very limited control (or none at all) over their actions.

The second ones are the precisely crafted viruses called trojans. These hide behind legitimate files and spread only through the few avenues their programmer sees fit. Let me make this point a bit clearer:

  1. Most viruses would copy themselves to all devices attached to the infected system, try to spread via the network, internet etc. from the infected system.
  2. Trojans will not automatically copy themselves. They will stay hidden and inactive.

As with everything else, the means of spread of trojan is also precise. The malicious code writer will hide them behind a legitimate file, and then spread this file using social networks, spam mails, etc. This way, only those computers will get infected that the attacker wants to infect.

What are some examples of trojans?

  1. Remote Administration Tools (RATs) – These are trojans which, when installed on the system, silently position themselves in such a way that they allow the attacker to control the system remotely. This means that the attacker can browse all your files, read all your data, see what you're typing (and hence get all your accounts and passwords), get a live feed of your screen, and access your webcam. As you can clearly see, as opposed to other viruses, trojans have a specific use for the malicious author. He now controls the infected computer.
  2. Botnet – This is a special use of a freely spreading trojan whose purpose is to infect as many computers as possible with RAT-like functionality but less control over who gets infected. This reduced control and increased rate of spreading are important because of the purpose of a botnet. A botnet is basically a large network of infected computers that the attacker uses to do his bidding. They are often used to carry out DDoS attacks. Suppose the trojan spread to 1,000 computers (a very small number; there are HUGE botnets out there). The attacker can then use these 1,000 computers to simultaneously attack websites and take them down. Another use for botnets is bitcoin mining.

Recently, a new use for trojans has emerged:

Ransomware

If you have been paying attention so far, you'll notice that once a computer is infected by a trojan, its files are under the control of the attacker. That means he can easily say, "Give me money or I'll delete all your files." Unfortunately for the attacker, once the victim sees this message, the trojan is no longer covert. The victim may install an antivirus, back up his important data to the cloud, external storage media, a USB drive, etc.

So, the attacker needs to do something that is equivalent to deleting but reversible. Also, the reverse procedure should require the consent of the attacker. There is one solution: encryption. If you know what encryption is, then you should see by now what's up. Otherwise, here's a simpler explanation (though not entirely accurate):

What the attacker can do is similar to what happens when you find a compressed archive with a password. If you know the password, you can uncompress the archive, otherwise not. So, the attacker will take all files except the System files (without which your computer won’t work), put them into a compressed archive with a secure password, and then delete the uncompressed files.

Once he's done with compressing (really, encrypting) everything, he'll inform you about what just happened and tell you to pay him a certain amount in bitcoin in exchange for the password to the compressed archive (i.e., the decryption key). If you don't pay up, he will delete the compressed archive and your data will be lost forever. Even if you manage to remove the ransomware after it announces its presence, it's a bit too late. You avert the possibility of data deletion, but that doesn't mean you can now get your data back. You still don't know the decryption key, and unless there's a cryptographic flaw or weakness in the encryption scheme used by the attacker (basically, a weak password was used), it's almost impossible to find the key and decrypt the data.
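
To make that analogy concrete, here is what password-based (symmetric) encryption of a single file looks like with a common command-line tool; this is purely an illustration of the mechanism, not of any particular malware:

$ gpg -c report.doc  # prompts for a passphrase and writes report.doc.gpg
$ rm report.doc  # only the encrypted copy remains
$ gpg -d report.doc.gpg > report.doc  # decryption requires the same passphrase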

What’s special about WannaCry?

So while ransomware has been around for quite some time, this one has spread to epic proportions. Why?

NSA, Shadow Brokers and EternalBlue

The credit for this goes to the NSA for discovering the EternalBlue exploit and to the Shadow Brokers for releasing it to the public. I won't delve into further details, but the EternalBlue exploit can compromise any Windows machine that didn't have the patch for it. What does that mean?

The standard Windows security update on 14 March 2017 resolved the issue via security update MS17-010, for all currently supported Windows versions.

“The issue” referring to the vulnerability. However, many systems have automatic updates disabled and didn’t have the patch. All these machines were vulnerable to this attack. Considering how often people end up disabling automatic updates (because they’re annoying), you can imagine the scale of the EternalBlue exploit. This is the reason why this particular ransomware was able to spread so quickly.

WannaCry

At this point, you already have enough background to understand on your own what WannaCry is. You know it's a ransomware, and you know it uses EternalBlue to infect computers. The details are summarized below:

  1. Files have been encrypted
  2. You need to pay $300 via bitcoin
  3. If you don’t pay within 3 days, you need to pay $600
  4. If you don’t pay in a week, all files will be deleted permanently.

This is it for this article.

Suggested reading :

https://www.malwaretech.com/2017/05/how-to-accidentally-stop-a-global-cyber-attacks.html

– This guy slowed down the spread of the ransomware by registering a domain which he felt was suspiciously present in the source code. His diligence saved people a lot of money and hassle. (Oversimplified summary, please read post for more accurate analysis)

Source

The Always-On App: Fighting Application Downtime to Keep Your Business Moving


    Nine to five may be the hours you spend in your office (if you’re lucky), but it no longer describes a business workday. That’s because today’s organizations don’t close at night. Applications that power actions as diverse and common as mobile banking, ride sharing, flight booking, online shopping and invoice inquiries can’t shut down at 5 p.m. That means application downtime—whether planned or unplanned—can cost your organization money, time and resources. When it comes to ensuring a good customer experience, then, and maintaining the health of your business, application downtime is the enemy.

    Unfortunately, at first glance, solving issues of application downtime can be daunting. Applications rely on so many moving parts, from server hardware to networking components to software services. Yelling at an application vendor won’t help. But working with SUSE probably can. That’s because SUSE provides the foundation of enterprise infrastructure. Solving issues of application downtime is just part of what it takes to become a nonstop IT shop, a goal we’re helping organizations with every day.

    You can’t address application downtime without looking at your infrastructure holistically. Traditionally, different pieces of hardware (servers, switches, disk arrays, etc.) have limited what your infrastructure can do. These pieces have always been connected, but they had to be managed individually, which has never given IT departments the control they want. Today, more and more functionality is happening in the software. This software-defined infrastructure (SDI) approach acknowledges the connections in your data centers—and gives you the tools to manage them.

    At SUSE, we believe that a move to SDI can best help organizations tackle the availability and access demands they face today. We’ve long believed that making software more reliable, flexible and easier to manage makes it better at supporting the applications that run your business. Our SDI solutions enable IT to drive innovation with greater agility, easier automation and reduced costs. They offer flexibility and efficiency so you can improve time to market while ensuring service availability. All so that customers and employees can access applications any time they need them.

    One of the pieces of our software-defined infrastructure approach is SUSE® Linux Enterprise Live Patching. It allows you to apply critical kernel patches without interrupting the operating system for even a second. In the world of application availability, that’s huge. You can keep your operating system up to date for security purposes without interrupting the apps you rely on.

    Just as important is ensuring that your servers are arranged in resilient clusters to help ensure availability. SUSE Linux Enterprise High Availability Extension and Geo Clustering for SUSE Linux Enterprise High Availability Extension can do just that. By building server clusters with automated failover, you create a rock-solid foundation for your applications, so a small issue in one server is unnoticed by your apps or the people who rely on them. By linking those clusters across long distances using rules-based failovers, you further insulate yourself from regional disasters and other issues. Ciclum Farma, the Portuguese drug manufacturer, achieved 100 percent uptime for its mission-critical SAP solutions using the SUSE high-availability tool.

    Your customers and employees are out there relying on your services day in and day out. Your applications must be available to them. Luckily, with a software-defined infrastructure approach powered by SUSE, you can build a solid foundation for all the moving parts that keep applications available. And that reliability means that even though your apps can't work just 9 to 5, maybe your IT team can.

    Check out SUSE’s solutions for Business Critical computing @ https://www.suse.com/programs/business-critical/ .

    Jeff.Reser@suse.com

    @JeffReserNC


      Source

      It’s a Command Line Showdown – Red Hat Enterprise Linux Blog

      Nearly a year ago, Casey Stegman and I wrote a short blog on how we had (big) plans to “change up our marketing approach”… and how it might involve comic books. We also shared our new marketing mantra: Listen. Learn. Build. Well, I have some great news. We listened, we learned, we built—and today I’d like to share.

      Listening

      In the latter half of 2017 we took our show on the road. After that fateful encounter in Austin—where we learned that some developers just want the operating system to “get out of the way”—we knew there was an ocean of knowledge and experiences to learn from. From Cape Cod (Flock 2017) to Prague (Open Source Summit Europe 2017) to Las Vegas (AWS re:Invent 2017) to San Francisco (Red Hat Summit 2018), we spoke with literally hundreds of passionate problem solvers. We also had a blast discovering people’s various superpowers and illustrating them as the (command line) heroes / heroines that they are. Some folks you may recognize:

      @mrry550, @tbyeaton, @seattledawson & @fatherlinux

      Learning

      From these interviews we learned a lot about the challenges that developers, admins, and architects are facing. We learned that while many people are struggling with technical challenges, from embracing containers and re-architecting for hybrid cloud, others are facing equally impressive turbulence as they adopt agile development practices and DevOps workflows. Fun fact here: We also learned a lot about our heroes / heroines “origin stories.” While many got their start via some form of video game (Colossal Cave Adventure anyone?) others were given early access to various bits of hardware (think: “my father brought home this crazy machine”) and began their journey from there.

      Building

      What did we do with all of this newfound knowledge and information? We built. Specifically, a podcast, called Command Line Heroes, which we debuted earlier this year. It’s a podcast about the people who transform technology from the command line up. We found a successful formula by taking the stories we heard, digging into some additional research, and diving deep into everything from

      If you’ve been living in a (colossal) cave and have yet to subscribe to the podcast, it’s not too late! Command Line Heroes is available wherever you download / access podcasts today.

      We’re not done!

      More good news? We’re not done. As the fall event season approaches here in North America we plan to get back on the road. In August we’ll be in Boston at DevConf.us and then north of the border for Open Source Summit North America 2018. If you have plans to attend either event, find us—we’d love to hear more about your story.

      But if you’re not traveling, here’s another way you can help us listen, learn, and build. We’ve designed a showdown of sorts. In fact, it’s a Command Line Showdown.

      As we ramp up towards celebrating System Administrator Appreciation Day on Friday, July 27th, we’re going to pit various commands against each other and allow y’all to help us find “the most useful Linux command.” Quick note: The set of commands we chose was sourced from conversations with you at some of the aforementioned events. Of course, we couldn’t include them all, but if this takes off we can definitely look to a future with more polls, more commands, and bigger showdowns.

      So get voting! But also don’t be shy (we know you won’t be) about giving us your feedback.

      One final note. We’re hard at work on a season 2 of Command Line Heroes. And if you’d love to influence its direction, but can’t come to any of the events we’re attending, vote in “the showdown” or email us your thoughts (commandlineheroes@redhat.com). If all else fails, subscribe to the podcast and stay tuned for more news soon.

      Source

      Phoenicis PlayOnLinux 5 Alpha Release


      Finally, after several years of waiting, the PlayOnLinux developers have released the first alpha version of PlayOnLinux 5.

      https://www.playonlinux.com/en/comments-1354.html
      Installers:

      https://repository.playonlinux.com/PlayOnLinux/5.0.0-alpha1
      Unfortunately they are scripting games from scratch, which means there are currently 135 supported Games/Installers (scripts)

      Report bugs here:

      https://github.com/PhoenicisOrg/phoenicis/issues
      They changed the names of the tabs so now we have:

      • Library
      • Apps
      • Containers
      • Engines
      • Installations
      • Settings

      Library Tab
      It's kind of confusing, especially with no games installed. It must be where the shortcuts will reside once you install something?

      Apps Tab
      This is where you can search their repository of scripted installers… or Games. Similar to clicking Install in POL4

      Containers Tab
      Shows you all of your wineprefixes and allows you to modify them like with Configure in POL4

      They removed the "run a shell command line" option in the virtual drive, which prevents installing winetricks and removes the ability to install other dependencies like:

      • vcrun2015
      • dotnet 4.5 or 4.6

      Engine Tab
      Where you can choose a version of Wine to install. The only reason you would want to do this is to test a script-installed game with another version of Wine. I would think the script would be updated to newer versions as game/client updates break the script…

      Installations Tab
      Appears to show current script installations that are in progress. I’m guessing it will be blank when all of your games are done installing.

      Settings Tab
      Allows you to change graphical settings in Phoenicis and Network, Repository, File Association settings. Not all features are available yet.

      I tried installing Dark Forces and there was very little feedback that anything was happening. After about 2 minutes the Steam installation began.

      There was a Steam error and after clicking “OK” nothing happened. There was a pending installation in the Installations Tab, but my only option was to “cancel”
      Then it disappeared.

      Side Note:

      • No manual installation to test your non-listed programs
      • Most of the GUI is just like PlayOnLinux 4 wrapped in new Java graphics.
      • All of the icons are re-used from PlayOnLinux 4
      • I don’t see a debug option anywhere
      • I don’t see any arguments for additional commands to run in the game console
      • No Components installations for manual testing of non-listed programs
      • No way to switch Wine versions on a Container (Installed Virtual Drive)

      Source
