Ubuntu 18.10 Review — The Ultimate Linux Newbie Guide

Ubuntu 18.10 Cosmic Wallpaper

Ubuntu 18.10 is the latest version of the popular Ubuntu Linux operating system. It was released on 18 October as a free download, available for anyone to try, install and use.

It has the unenviable task of following April’s warmly-received Ubuntu 18.04 (Long Term Support) release, something that was never going to be easy.

Six months on, with Ubuntu 18.10 released and available to download, it’s time to find out whether the release, codenamed “Cosmic Cuttlefish” soars to new heights, or sinks under the weight of expectations.

Some of the new features in Ubuntu 18.10 include:

  • GNOME v3.30 with a new default theme, Yaru
  • Android integration with GSConnect
  • Improved boot time and performance
  • Linux kernel v4.18
  • Fingerprint scanner support
  • DLNA for Smart TV Support

You can read the full review at OMG! Ubuntu!

Alternatively, you can download Ubuntu 18.10 from Canonical’s website.

Source

Amazon WorkDocs Now Lets You Control IP Address Access to Your Site – Unix Magazine

by aws@amazon.com • October 25, 2018

Amazon WorkDocs now provides you with the ability to control the IP addresses from which your WorkDocs site can be accessed. Using IP address-based allow lists, you can define and manage groups of trusted IP addresses, and only permit users to access your WorkDocs site when they’re connected to a trusted network, like corporate networks or an Amazon WorkSpaces environment.

IP address-based allow lists can be added in the WorkDocs Admin Console. You can set the IP address ranges from which you wish to provide access. When a user tries to connect to your WorkDocs site from their browser, WorkDocs drive, mobile device, sync, or companion app, the IP address from which the request originated is evaluated against your allow list. If it is not on the allow list, access will be denied. If you do not filter user access by IP address with an allow list, access will be open to all IP addresses.

This feature is available today in all AWS Regions where WorkDocs is available. To learn more about IP address-based allow lists in WorkDocs, visit our documentation site. To start using IP address-based allow lists, log in to the WorkDocs Admin Console.

Source

Linux 4.19 Improves Containers, Latency and Networking for the Long Term

The Linux 4.19 kernel was released on Oct. 22, bringing with it a host of new features for servers large and small. Linux 4.19 is the fifth major Linux kernel released in 2018 and follows the 4.18 kernel, which became generally available on Aug. 12.

The Linux 4.19 release cycle was a bit more dramatic than the other four releases in 2018 as Linux creator Linus Torvalds stepped away from the release during the development cycle to work on his own interpersonal behavior and conduct. As such, the final release was made by Linux stable branch maintainer Greg Kroah-Hartman.

“While it was not the largest kernel release ever by number of commits, it was larger than the last 3 releases, which is a non-trivial thing to do,” Kroah-Hartman wrote in his release message. “After the original -rc1 bumps, things settled down on the code side, and it looks like stuff came nicely together to make a solid kernel for everyone to use for a while. And given that this is going to be one of the ‘Long Term’ kernels I end up maintaining for a few years, that’s good news for everyone.”

A Long Term kernel is maintained and supported by the upstream stable Linux community for at least two years. The last Linux kernel to gain the Long Term support designation was Linux 4.14, which was released in November 2017.

Improved Latency

Among the big new features in Linux 4.19 is a block I/O latency controller that aims to provide a minimum I/O latency target for defined control groups (cgroups).

“This is a cgroup v2 controller for IO workload protection,” Facebook developer Josef Bacik wrote in his Linux commit message. “You provide a group with a latency target, and if the average latency exceeds that target, the controller will throttle any peers that have a lower latency target than the protected workload.”
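As a sketch of how the new controller is driven from user space (the cgroup path, the 8:16 device numbers, and the target value below are placeholders, and the target's units should be checked against the kernel's cgroup-v2 documentation):

```shell
# Run as root on a system with the unified cgroup-v2 hierarchy mounted at
# /sys/fs/cgroup. "8:16" is a placeholder MAJOR:MINOR pair -- find your
# device's numbers with `lsblk`.
echo "+io" > /sys/fs/cgroup/cgroup.subtree_control   # enable the io controller
mkdir /sys/fs/cgroup/protected
echo "8:16 target=10000" > /sys/fs/cgroup/protected/io.latency
echo "$WORKLOAD_PID" > /sys/fs/cgroup/protected/cgroup.procs   # protect this workload
```

Peers in sibling cgroups with a looser (or no) latency target are then throttled whenever the protected group misses its target.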

Memory Improvements for Containers

Overlayfs first landed in the Linux 3.18 kernel, which was released in December 2014, providing an overlay on top of the existing system filesystem, on which a container engine can run without needing to interact with the base filesystem.

In Linux 4.19, overlayfs benefits from multiple memory usage improvements that should serve to help accelerate container workload operations.

Improving Networking with CAKE

The Common Applications Kept Enhanced (CAKE) queue management algorithm also makes its debut in Linux 4.19, providing an improved approach for network packet scheduling.

“sch_cake targets the home router use case and is intended to squeeze the most bandwidth and latency out of even the slowest ISP links and routers, while presenting an API simple enough that even an ISP can configure it,” Linux kernel developer Toke Høiland-Jørgensen wrote in his commit message.
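In practice, that simple API amounts to a one-liner with the tc tool from iproute2 (the interface name and the shaped bandwidth below are placeholders for your own setup):

```shell
# Replace the root qdisc on eth0 with CAKE, shaped to the upstream link rate.
# Requires root and a kernel with the sch_cake module (Linux 4.19+).
tc qdisc replace dev eth0 root cake bandwidth 20Mbit
# Show the active qdisc and its statistics.
tc -s qdisc show dev eth0
```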

Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Bash by example

More bash programming fundamentals

Let’s start with a brief tip on handling command-line arguments, and then
look at bash’s basic programming constructs.

Accepting arguments

In the sample program in the introductory article, we used the environment variable “$1”,
which referred to the first command-line argument. Similarly, you can use
“$2”, “$3”, etc. to refer to the second and third arguments passed to your
script. Here’s an example:

#!/usr/bin/env bash

echo name of script is $0
echo first argument is $1
echo second argument is $2
echo seventeenth argument is ${17}
echo number of arguments is $#

The example is self-explanatory except for three small details. First, “$0”
will expand to the name of the script, as called from the command line.
Second, for arguments 10 and above, you need to enclose the whole argument
number in curly braces (for example, “${17}”). Third, “$#” will expand to
the number of arguments passed to the script. Play around with the above
script, passing
different kinds of command-line arguments to get the hang of how it
works.

Sometimes, it’s helpful to refer to all command-line arguments at
once. For this purpose, bash features the “$@” variable, which expands to
all command-line parameters separated by spaces. We’ll see an example of
its use when we take a look at “for” loops, a bit later in this
article.

Bash programming constructs

If you’ve programmed in a procedural language like C, Pascal, Python, or
Perl, then you’re familiar with standard programming constructs like “if”
statements, “for” loops, and the like. Bash has its own versions of most
of these standard constructs. In the next several sections, I will
introduce several bash constructs and demonstrate the differences between
these constructs and others you are already familiar with from other
programming languages. If you haven’t programmed much before, don’t worry.
I include enough information and examples so that you can follow the
text.

Conditional love

If you’ve ever programmed any file-related code in C, you know that it
requires a significant amount of effort to see if a particular file is
newer than another. That’s because C doesn’t have any built-in syntax for
performing such a comparison; instead, two stat() calls and two stat
structures must be used to perform the comparison by hand. In contrast,
bash has standard file comparison operators built in, so determining if
“/tmp/myfile is readable” is as easy as checking to see if “$myvar is
greater than 4”.

The following table lists the most frequently used bash comparison
operators. You’ll also find an example of how to use every option
correctly. The example is meant to be placed immediately after the “if”.
For example:

if [ -z "$myvar" ]
then
echo "myvar is not defined"
fi

Note: You must separate the square brackets from other
text by a space.

Common Bash comparisons
Operator Meaning Example
-z Zero-length string [ -z "$myvar" ]
-n Non-zero-length string [ -n "$myvar" ]
= String equality [ "abc" = "$myvar" ]
!= String inequality [ "abc" != "$myvar" ]
-eq Numeric equality [ 3 -eq "$myinteger" ]
-ne Numeric inequality [ 3 -ne "$myinteger" ]
-lt Numeric strict less than [ 3 -lt "$myinteger" ]
-le Numeric less than or equals [ 3 -le "$myinteger" ]
-gt Numeric strict greater than [ 3 -gt "$myinteger" ]
-ge Numeric greater than or equals [ 3 -ge "$myinteger" ]
-f Exists and is regular file [ -f "$myfile" ]
-d Exists and is directory [ -d "$mydir" ]
-nt First file is newer than second one [ "$myfile" -nt ~/.bashrc ]
-ot First file is older than second one [ "$myfile" -ot ~/.bashrc ]

Sometimes, there are several different ways that a particular comparison
can be made. For example, the following two snippets of code function
identically:

if [ "$myvar" -eq 3 ]
then
echo "myvar equals 3"
fi

if [ "$myvar" = "3" ]
then
echo "myvar equals 3"
fi

If $myvar is an integer, these two comparisons do
exactly the same thing, but the first uses arithmetic comparison
operators, while the second uses string comparison operators.

If $myvar is not an integer, then the first comparison will fail with an
error.

String comparison caveats

Most of the time, while you can omit the use of double quotes surrounding
strings and string variables, it’s not a good idea. Why? Because your code
will work perfectly, unless an environment variable happens to have a
space or a tab in it, in which case bash will get confused. Here’s an
example of a fouled-up comparison:

if [ $myvar = "foo bar oni" ]
then
echo "yes"
fi

In the above example, if myvar equals “foo”, the code will work as expected
and not print anything. However, if myvar equals “foo bar oni”, the code
will fail with the following error:

[: too many arguments

In this case, the spaces in “$myvar” (which equals “foo bar oni”) end up
confusing bash. After bash expands “$myvar”, it ends up with the following
comparison:

[ foo bar oni = "foo bar oni" ]

Similarly, if myvar is the empty string, you will have too few arguments
and the code will fail with the following error:

[: =: unary operator expected

Because the environment variable wasn’t placed inside double quotes, bash
thinks that you stuffed too many (or too few) arguments in-between the
square brackets. You can easily eliminate this problem by surrounding the
string arguments with double-quotes. Remember, if you get into the habit
of surrounding all string arguments and environment variables with
double-quotes, you’ll eliminate many similar programming errors. Here’s
how the “foo bar oni” comparison should have been written:

if [ "$myvar" = "foo bar oni" ]
then
echo "yes"
fi

The above code will work as expected and will not create any unpleasant surprises.

More quoting specifics

If you want your environment variables to be expanded, you must enclose them
in double quotes, rather than single quotes. Single quotes
disable variable (as well as history) expansion.
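A quick way to see the difference between the two quoting styles for yourself:

```shell
#!/usr/bin/env bash

myvar="foo bar oni"
echo "double quotes: $myvar"     # expands to: double quotes: foo bar oni
echo 'single quotes: $myvar'     # prints literally: single quotes: $myvar
```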

Looping constructs: “for”

OK, we’ve covered conditionals, now it’s time to explore bash looping
constructs. We’ll start with the standard “for” loop. Here’s a basic
example:

#!/usr/bin/env bash

for x in one two three four
do
echo number $x
done

output:
number one
number two
number three
number four

What exactly happened? The “for x” part of our “for” loop defined a new
environment variable (also called a loop control variable) called “$x”,
which was successively set to the values “one”, “two”, “three”, and
“four”. After each assignment, the body of the loop (the code between the
“do” … “done”) was executed once. In the body, we referred to the loop
control variable “$x” using standard variable expansion syntax, like any
other environment variable. Also notice that “for” loops always accept
some kind of word list after the “in” statement. In this case we specified
four English words, but the word list can also refer to file(s) on disk or
even file wildcards. Look at the following example, which demonstrates how
to use standard shell wildcards:

#!/usr/bin/env bash

for myfile in /etc/r*
do
if [ -d “$myfile” ]
then
echo “$myfile (dir)”
else
echo “$myfile”
fi
done

output:

/etc/rc.d (dir)
/etc/resolv.conf
/etc/resolv.conf~
/etc/rpc

The above code looped over each file in /etc that began with an “r”. To do
this, bash first took our wildcard /etc/r* and expanded it, replacing it
with the string /etc/rc.d /etc/resolv.conf /etc/resolv.conf~ /etc/rpc
before executing the loop. Once inside the loop, the “-d” conditional
operator was used to perform two different actions, depending on whether
myfile was a directory or not. If it was, a ” (dir)” was appended to the
output line.

We can also use multiple wildcards and even environment variables in the
word list:

for x in /etc/r??? /var/lo* /home/drobbins/mystuff/* /tmp/${MYPATH}/*
do
cp $x /mnt/mydir
done

Bash will perform wildcard and variable expansion in all the right places,
and potentially create a very long word list.

While all of our wildcard expansion examples have used absolute
paths, you can also use relative paths, as follows:

for x in ../* mystuff/*
do
echo $x is a silly file
done

In the above example, bash performs wildcard expansion relative to the
current working directory, just like when you use relative paths on the
command line. Play around with wildcard expansion a bit. You’ll notice
that if you use absolute paths in your wildcard, bash will expand the
wildcard to a list of absolute paths. Otherwise, bash will use relative
paths in the subsequent word list. If you simply refer to files in the
current working directory (for example, if you type “for x in *”), the
resultant list of files will not be prefixed with any path information.
Remember that preceding path information can be stripped using the
“basename” executable, as follows:

for x in /var/log/*
do
echo `basename $x` is a file living in /var/log
done

Of course, it’s often handy to perform loops that operate on a script’s
command-line arguments. Here’s an example of how to use the “$@” variable,
introduced at the beginning of this article:

#!/usr/bin/env bash

for thing in "$@"
do
echo you typed $thing.
done

output:

$ allargs hello there you silly
you typed hello.
you typed there.
you typed you.
you typed silly.

Shell arithmetic

Before looking at a second type of looping construct, it’s a good idea to
become familiar with performing shell arithmetic. Yes, it’s true: You can
perform simple integer math using shell constructs. Simply enclose the
particular arithmetic expression between a “$((” and a “))”, and bash will
evaluate the expression. Here are some examples:

$ echo $(( 100 / 3 ))
33
$ myvar="56"
$ echo $(( $myvar + 12 ))
68
$ echo $(( $myvar - $myvar ))
0
$ myvar=$(( $myvar + 1 ))
$ echo $myvar
57

Now that you’re familiar with performing mathematical operations, it’s time to
introduce two other bash looping constructs, “while” and “until”.

More looping constructs: “while” and “until”

A “while” statement will execute as long as a particular condition is
true, and has the following format:

while [ condition ]
do
statements
done

“While” statements are typically used to loop a certain number of times, as
in the following example, which will loop exactly 10 times:

myvar=0
while [ $myvar -ne 10 ]
do
echo $myvar
myvar=$(( $myvar + 1 ))
done

You can see the use of arithmetic expansion to eventually cause the
condition to be false, and the loop to terminate.

“Until” statements provide the inverse functionality of “while” statements:
They repeat as long as a particular condition is false. Here’s an
“until” loop that functions identically to the previous “while” loop:

myvar=0
until [ $myvar -eq 10 ]
do
echo $myvar
myvar=$(( $myvar + 1 ))
done

Case statements

Case statements are another conditional construct that comes in handy.
Here’s an example snippet:

case "${x##*.}" in
gz)
gzunpack ${SROOT}/${x}
;;
bz2)
bz2unpack ${SROOT}/${x}
;;
*)
echo "Archive format not recognized."
exit
;;
esac

Above, bash first expands “${x##*.}”. In the code, “$x” is the name of a
file, and “${x##*.}” has the effect of stripping all text except that
following the last period in the filename. Then, bash compares the
resultant string against the values listed to the left of the “)”s. In
this case, “${x##*.}” gets compared against “gz”, then “bz2” and finally
“*”. If “${x##*.}” matches any of these strings or patterns, the lines
immediately following the “)” are executed, up until the “;;”, at which
point bash continues executing lines after the terminating “esac”. If no
patterns or strings are matched, no lines of code are executed; however,
in this particular code snippet, at least one block of code will execute,
because the “*” pattern will catch everything that didn’t match “gz” or
“bz2”.

Functions and namespaces

In bash, you can even define functions, similar to those in other
procedural languages like Pascal and C. In bash, functions can even accept
arguments, using a system very similar to the way scripts accept
command-line arguments. Let’s take a look at a sample function definition
and then proceed from there:

tarview() {
echo -n "Displaying contents of $1 "
if [ "${1##*.}" = tar ]
then
echo "(uncompressed tar)"
tar tvf $1
elif [ "${1##*.}" = gz ]
then
echo "(gzip-compressed tar)"
tar tzvf $1
elif [ "${1##*.}" = bz2 ]
then
echo "(bzip2-compressed tar)"
cat $1 | bzip2 -d | tar tvf -
fi
}

Another case

The above code could have been written using a “case” statement.
Can you figure out how?
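One possible answer (a sketch, not the only way to write it) switches on the “${1##*.}” suffix expansion, which extracts the filename extension:

```shell
# Same behavior as the if/elif version, expressed as a case statement.
tarview() {
    echo -n "Displaying contents of $1 "
    case "${1##*.}" in
        tar)
            echo "(uncompressed tar)"
            tar tvf $1
            ;;
        gz)
            echo "(gzip-compressed tar)"
            tar tzvf $1
            ;;
        bz2)
            echo "(bzip2-compressed tar)"
            cat $1 | bzip2 -d | tar tvf -
            ;;
    esac
}
```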

Above, we define a function called “tarview” that accepts one argument, a
tarball of some kind. When the function is executed, it identifies what
type of tarball the argument is (either uncompressed, gzip-compressed, or
bzip2-compressed), prints out a one-line informative message, and then
displays the contents of the tarball. This is how the above function
should be called (whether from a script or from the command line, after it
has been typed in, pasted in, or sourced):

$ tarview shorten.tar.gz
Displaying contents of shorten.tar.gz (gzip-compressed tar)
drwxr-xr-x ajr/abbot 0 1999-02-27 16:17 shorten-2.3a/
-rw-r–r– ajr/abbot 1143 1997-09-04 04:06 shorten-2.3a/Makefile
-rw-r–r– ajr/abbot 1199 1996-02-04 12:24 shorten-2.3a/INSTALL
-rw-r–r– ajr/abbot 839 1996-05-29 00:19 shorten-2.3a/LICENSE
….

Use ’em interactively

Don’t forget that functions, like the one above, can be placed
in your ~/.bashrc or ~/.bash_profile so that they are available
for use whenever you are in bash.

As you can see, arguments can be referenced inside the function definition
by using the same mechanism used to reference command-line arguments. In
addition, the “$#” macro will be expanded to contain the number of
arguments. The only thing that may not work completely as expected is the
variable “$0”, which will either expand to the string “bash” (if you run
the function from the shell, interactively) or to the name of the script
the function is called from.
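A tiny function demonstrating both points:

```shell
#!/usr/bin/env bash

showargs() {
    # Inside the function, $1..$N and $# refer to the function's own
    # arguments; $0 is unchanged (the script name, or "bash" when run
    # interactively).
    echo "count=$# first=$1"
}

showargs alpha beta gamma   # prints: count=3 first=alpha
```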

Namespace

Often, you’ll need to create environment variables inside a function.
While possible, there’s a technicality you should know about. In most
compiled languages (such as C), when you create a variable inside a
function, it’s placed in a separate local namespace. So, if you define a
function in C called myfunction, and in it define a variable called “x”,
any global (outside the function) variable called “x” will not be affected
by it, eliminating side effects.

While true in C, this isn’t true in bash. In bash, whenever you create an
environment variable inside a function, it’s added to the global
namespace. This means that it will overwrite any global variable outside
the function, and will continue to exist even after the function
exits:

#!/usr/bin/env bash

myvar="hello"

myfunc() {

myvar="one two three"
for x in $myvar
do
echo $x
done
}

myfunc

echo $myvar $x

When this script is run, it produces the output “one two three three”,
showing how “$myvar” defined in the function clobbered the global variable
“$myvar”, and how the loop control variable “$x” continued to exist even
after the function exited (and also would have clobbered any global “$x”,
if one were defined).

In this simple example, the bug is easy to spot and to compensate for by
using alternate variable names. However, this isn’t the right approach;
the best way to solve this problem is to prevent the possibility of
clobbering global variables in the first place, by using the “local”
command. When we use “local” to create variables inside a function, they
will be kept in the local namespace and not clobber any global
variables. Here’s how to implement the above code so that no global
variables are overwritten:

#!/usr/bin/env bash

myvar="hello"

myfunc() {
local x
local myvar="one two three"
for x in $myvar
do
echo $x
done
}

myfunc

echo $myvar $x

This function will produce the output “hello” — the global “$myvar”
doesn’t get overwritten, and “$x” doesn’t continue to exist outside of
myfunc. In the first line of the function, we create x, a local variable
that is used later, while in the second line (local myvar="one two three")
we create a local myvar and assign it a value. The first
form is handy for keeping loop control variables local, since we’re not
allowed to say “for local x in $myvar”. This function doesn’t clobber any
global variables, and you are encouraged to design all your functions this
way. The only time you should not use “local” is when you
explicitly want to modify a global variable.

Wrapping it up

Now that we’ve covered the most essential bash functionality, it’s time to
look at how to develop an entire application based in bash. In my next
installment, we’ll do just that. See you then!


Source

Learn Python 3 Web-Bootcamp: How To Use Variables in Python3 – NoobsLab

Python: A fairly simple, readable, interpreted, general-purpose and high-level programming language.

We are starting a tutorial on the Python 3 programming language; you can call it a web-based Bootcamp. In this series, ‘Learn Python 3’, the aim is to teach you Python 3 as quickly as possible so you can start building programs. The course is for people of any age; it doesn’t matter if you haven’t programmed in Python before or are new to programming altogether. We will start from the very basics of programming and, with time, move on to advanced material and a deeper dive into Python.

Currently, Python has two versions available, 2 and 3. Many companies still use Python 2, and much useful software is written in it, which is why it remains in wide use; on the other hand, people are adopting Python 3 and porting their Python 2 software to it. In this tutorial series we will use the latest version of Python 3; there are some syntax differences between the two versions, but it is very easy to learn Python 2 after you acquire knowledge of Python 3. With time, we’ll shape the tutorial to readers’ needs and make it better for all of you.

You’ll learn in this tutorial:

  • Traditional ‘Hello world’ program
  • Variables

First of all, you need to set up Python on your system. If you are using Linux or Mac, your OS has it installed already; if you want the latest version of Python, follow this guide for Linux. For Mac and Windows, check the official website of Python.

Note: Before you start, make sure you run each given code sample to understand how things work!

Your first Python program

If you are new to programming, then today you are going to write your very first program. It is traditional, and almost every programmer has written it, regardless of programming language.

In Python it is very simple to write the “Hello World!” program; you just need to write one line. There are numerous ways to run a Python program; we’ll keep it simple for now. In Linux, just type the name of the Python version installed on your system, for example:

python3.5

If you don’t want to install Python, or don’t know how to install it, you can use an online Python console such as Repl.it.

We are going to do it another way: by creating a file, writing our code in it, and then going to the Terminal and running the file with Python 3. You can see this approach in the following screenshot.

Once you are ready with a Python console, type the following code into it and hit Enter to run it.

print("Hello World!")

You will see the output like this:

Congratulations on writing your first program in Python. Let’s see what we are doing here:

print() prints a message to the screen, or other output device. The message can be any string or object.

“Hello World!” is the message between double quotes; it is called a string. We can surround a string with double " " quotes or single ' ' quotes; both quotation marks function the same, for example 'Hello World!'.

Variables in Python

In programming, variables are used to store data; you can think of them as containers where we put something in order to access the stored data later. Every variable holds a value (or values), and variables are mutable, which means we can change a variable’s value at any time in our program.

Variables can store all sorts of data, such as numbers, strings, booleans and objects.

There are some rules when defining a variable in Python programming language:

  • Variable names should be descriptive, for example: my_message. Avoid using very short variables such as my_m, you will scratch your head later understanding your program.
  • Be careful using lowercase letter ‘l’ and uppercase letter ‘O’ because they can be mixed with 1 and 0.
  • Don’t use Python reserved keywords and function names.
  • Variables can only contain letters, numbers and underscores.
  • Variable can’t start with numbers.
  • Spaces are not permitted in variables.

Let’s see how to create variables:

We will modify our first ‘Hello World’ program to show you how variables work!

message = "Hello World!"
print(message)

In the first line of the program, the string ‘Hello World!’ is stored in a variable called ‘message’. We can give any name to the variable.

Then we use the print function to output our message on the screen. Also, we can define the variable ‘message’ as many times as we want in our program.

We can reassign a new value to ‘message’ in the same program and there won’t be any issue. Let’s see an example:

message = "Hello World!"
print(message)

message = 'We are defining another variable'
print(message)

Another example with numbers:

number = 100
print(number)

Assign a variable to another variable

You can also assign a variable which has a value to a new variable. See the following example for clarification:

number = 100
print(number)

new_number_variable = number
print(new_number_variable)

In this example, we assigned the value of the variable ‘number’ to a new variable called ‘new_number_variable’. Now we can use ‘new_number_variable’ to print the same value.

Compact assignments

Python allows us to make multiple assignments in just one line. This is useful when you want to make your program compact, but it can make it less readable:

varX, varY, varZ = 'Hello', 50, '1 Penny'
print(varX)
print(varY)
print(varZ)

Multiple assignments

We can assign the same value to multiple variables. In the following example, all variables hold the same value; you can print each variable to check its value:

my_number = number = last_number = first_number = 200
print(my_number)
print(number)
print(first_number)
print(last_number)

That’s it for today.

Stay tuned for more tutorials on Python 3 programming language! Happy coding.

Source

Amazon WorkDocs Smart Search Makes It Easier to Find Your Content

Starting today, it’s even easier for you and your users to find the content you need in Amazon WorkDocs with Smart Search. WorkDocs Smart Search lets you query across content, comments, and document labels in addition to searching for files and folders by name.

To get started, access WorkDocs in your web browser and enter the desired search term in the search box in the top navigation bar. Hitting ‘Enter’ or clicking the magnifying glass search icon will automatically search all content to which you have access across file names, content types, comments, and labels. Results will display as a sorted list, with folders listed before files.

You can use the Advanced button in the search bar to further refine your search. Advanced search can scope your search by file location, limit the time and date range, and specify file types. Results will display as they would with a regular search, but with your additional parameters applied.

This new search experience is immediately available to all WorkDocs customers. No user or administrator action is required to activate it. Discover more about WorkDocs, or sign up for a 30-day trial today.

Source

Using Parted To Create A New Swap Disk

Using parted to create swap

What is Parted?

Parted is a software package used to manipulate partition tables. It is useful for formatting new disks, reorganizing disks, and removing disk data.

First, select the disk you would like to use. If you are unsure, you can use fdisk to list all of the disks available to you:

fdisk -l

Once you have located the correct disk, use it to enter the parted command prompt. In this case, we are creating a swap disk on a KVM virtual machine.

# parted /dev/vdb
GNU Parted 3.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

Replace /dev/vdb with the disk you want to use.

Label The Disk

We will first need to make the label. At the prompt, type mklabel:

(parted) mklabel
New disk label type? msdos

It will warn you before erasing the disk. Go ahead and type Yes, as long as you are certain this is the correct disk:

Warning: The existing disk label on /dev/vdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted)

Setting the size and type of the partition

Next, you are going to partition the disk. To do this, type the following:

(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? linux-swap
Start? 0%
End? 100%

We indicated that this was going to be a primary partition, that it would be linux-swap, and that it would start at 0% of the disk and end at 100%. To verify what you have created, go ahead and type print to check that it looks correct:

(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 1074MB 1073MB primary

You can now exit out of parted:

(parted) quit
Information: You may need to update /etc/fstab.

Creating the swap file

You will then want to tell the operating system to make the newly created partition into swap space:

mkswap /dev/vdb1

Then enable swapping by entering the following:

swapon /dev/vdb1

You can then finally add it to /etc/fstab to ensure it is enabled at boot:

/dev/vdb1 swap swap defaults 0 0
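Once enabled, you can confirm that the new swap space is active:

```shell
# List active swap devices and check overall memory/swap usage.
swapon --show
free -h
```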

Source

Apollo Lake Pico-ITX doubles down on GbE and M.2

Oct 26, 2018 — by Eric Brown

IEI is launching a Pico-ITX form-factor “Hyper-AL” SBC with a dual-core Celeron N3350 and a pair each of GbE, USB 3.0, USB 2.0, and M.2 ports plus HDMI, LVDS, SATA, and serial connections.

The Hyper-AL (for Apollo Lake) was part of a recent group announcement that also included IEI’s first Arm-based SBC — the Rockchip RK3399 based Hyper-RK39. With the Hyper-AL, IEI is more in its comfort zone with an Intel dual-core Celeron N3350 with up to 2.4GHz clock and 6W TDP. Recent Apollo Lake based Pico-ITX boards include Aaeon’s PICO-APL4.

Hyper-AL (left) and detail view

The announcement, which also covered the Nano-GLX EPIC SBC and two older Mini-ITX boards, targets the gaming industry. Yet, there’s nothing very gaming specific about any of these general-purpose boards. Although the Arm-based Hyper-RK39 runs Ubuntu and Android, no OS is listed for the Hyper-AL, but it presumably runs Linux, as well as Windows.

Hyper-AL portside detail view

You can load up to 8GB DDR3L and store data via a SATA header or the M.2 B-key slot. There’s also an M.2 A-key slot for PCIe and USB driven cards. Dual GbE ports are also onboard.

Dual displays are available via the HDMI port and LVDS interface, and there are 2x USB 3.0 ports and two more USB 2.0 headers. Other features include 8-bit DIO, a serial header, and a watchdog. The board has an extended -20 to 60°C range, and there’s an optional heatsink.

Specifications listed by IEI for the Hyper-AL include:

  • Processor — Intel Celeron N3350 (2x Apollo Lake cores @ 1.1GHz, 2.4GHz burst); Intel Gen9 graphics
  • Memory — up to 8GB DDR3L-1866/1600 via 1x SODIMM
  • Storage — SATA III interface with 5V power; SATA also available via M.2 B-key
  • Networking — 2x Gigabit Ethernet ports (Realtek RTL8111H)
  • Media I/O:
    • HDMI port
    • 24-bit single channel LVDS
    • Dual independent displays
    • HD audio interface with optional 7.1 channel HD AC-KIT-892HD-R1
  • Other I/O:
    • 2x USB 3.0 host ports
    • 2x USB 2.0 headers
    • RS232 header
    • 8-bit DIO
  • Expansion — M.2 A-key 2230 (PCIe, USB); M.2 B-key 2242 (USB, SATA)
  • Other features — watchdog timer; optional heatsink; optional cables
  • Operating temperature — -20 to 60°C
  • Power — 12VDC jack (AT/ATX)
  • Weight — 250 g
  • Dimensions — 100 x 72mm; Pico-ITX form factor

Further information

No pricing or availability information was provided for the Hyper-AL. More information may be found on IEI’s Hyper-AL announcement and product pages.

Source

Linux Today – MongoDB Vs. MySQL

Oct 25, 2018, 08:00

(Other stories by Rishabh)

The past few years have seen a huge spike in the number of websites and apps using NoSQL databases, with MongoDB topping the charts. It is fascinating how the modern web has drifted away from traditional SQL-based databases. MongoDB and other NoSQL databases take a new approach to storing and retrieving data. So let us look at some of the key ways in which MongoDB differs from MySQL.

Complete Story

Source

Radeon Software 18.40 vs. Mesa vs. AMDVLK Benchmarks With Radeon RX Vega

This week marked the release of Radeon Software 18.40 as the latest release of AMD’s Linux driver stack targeting workstation users. While the sole mentioned change was the addition of SUSE Linux Enterprise 15 support, I decided to run some benchmarks of this latest driver compared to the other open-source Radeon Linux driver options.

Using an AMD Radeon RX Vega 56 graphics card, I conducted fresh Linux gaming benchmarks of the following driver configurations:

Ubuntu 18.04 Stock – The stock Ubuntu 18.04.1 LTS release with the Linux 4.15 kernel and Mesa 18.0.5.

AMDGPU-PRO 18.40 – The Radeon Software 18.40 release using the "PRO" (closed-source) OpenGL and Vulkan drivers. The OpenGL driver is exposed as OpenGL 4.6.13540… GL 4.6 support remains one of the few advantages of AMD's closed-source OpenGL driver, as RadeonSI Gallium3D remains at OpenGL 4.5.

AMDGPU-Open 18.40 – The Radeon Software 18.40 release using the open-source DKMS stack and Mesa. The open-source components provided were based on Mesa 18.1.0-rc4, which is disappointingly not yet Mesa 18.2 and a rather dated build at that, not even one of the point releases.

Linux 4.19 + Mesa 18.2.3 – The Linux 4.19.0 stable kernel release paired with the Pkppa PPA providing Mesa 18.2.3 built against LLVM 7.0 SVN.

Linux 4.19 + Mesa 18.3-dev – The Linux 4.19.0 stable kernel release paired with the Padoka unstable PPA for Mesa 18.3-dev built against LLVM 8.0 SVN.

AMDVLK 20181026 – The Linux 4.19.0 stable kernel with the latest open-source AMDVLK driver code and its LLPC LLVM back-end.

Via the Phoronix Test Suite, a variety of OpenGL and Vulkan Linux gaming benchmarks were carried out on this AMD Ryzen Threadripper + RX Vega 56 system.
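For readers wanting to reproduce this sort of comparison, the Phoronix Test Suite can run the same kind of benchmarks locally. A minimal sketch, assuming phoronix-test-suite is installed; the test profile name here is just an illustrative example, not necessarily one of the tests used in this article:

```shell
# Download and run a graphics benchmark in batch mode; repeat
# under each driver configuration and compare the results.
phoronix-test-suite batch-benchmark unigine-superposition
```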

Source
