Automate Sysadmin Tasks with Python’s os.walk Function

Using Python’s os.walk function to walk through a tree of files and
directories.

I’m a web guy; I put together my first site in early 1993. And
so, when I started to do Python training, I assumed that most of my
students also were going to be web developers or aspiring web
developers. Nothing could be further from the truth. Although some of my
students certainly are interested in web applications, the majority of them
are software engineers, testers, data scientists and system
administrators.

This last group, the system administrators, usually comes into my
course with the same story. The company they work for has been writing Bash
scripts for several years, but they want to move to a higher-level
language with greater expressiveness and a large number of third-party
add-ons. (No offense to Bash users is intended; you can do amazing
things with Bash, but I hope you’ll agree that the scripts can become
unwieldy and hard to maintain.)

It turns out that with a few simple tools and ideas, these system
administrators can use Python to do more with less code, as well as create
reports and maintain servers. So in this article, I describe
one particularly useful tool that’s often overlooked: os.walk, a
function that lets you walk through a tree of files and
directories.

os.walk Basics

Linux users are used to the ls command to get a list of files in a
directory. Python comes with two different functions that can return
the list of files. One is os.listdir, which means the “listdir”
function in the “os” package. If you want, you can pass the name of a
directory to os.listdir. If you don’t do that, you’ll get the names
of files in the current directory. So, you can say:

In [10]: import os

When I do that on my computer, in the current directory, I get the following:

In [11]: os.listdir('.')
Out[11]:
['.git',
 '.gitignore',
 '.ipynb_checkpoints',
 '.mypy_cache',
 'Archive',
 'Files']

As you can see, os.listdir returns a list of strings, with each
string being a filename. Of course, in UNIX-type systems, directories
are files too—so along with files, you’ll also see subdirectories
without any obvious indication of which is which.
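If you do need to tell them apart, os.path.isdir can check each entry. Here is a minimal sketch over a throwaway temporary directory (the names Archive and notes.txt are just placeholders):

```python
import os
import tempfile

# Build a sample tree: one subdirectory and one regular file.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'Archive'))
open(os.path.join(tmp, 'notes.txt'), 'w').close()

# os.listdir mixes files and subdirectories; os.path.isdir
# (given the full path) tells them apart.
entries = os.listdir(tmp)
dirs = [e for e in entries if os.path.isdir(os.path.join(tmp, e))]
files = [e for e in entries if not os.path.isdir(os.path.join(tmp, e))]
```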

I gave up on os.listdir long ago, in favor of
glob.glob, which means
the “glob” function in the “glob” module. Command-line users are used
to using “globbing”, although they often don’t know its name. Globbing
means using the * and ? characters, among others, for more flexible
matching of filenames. Although os.listdir can return the list of
files in a directory, it cannot filter them. You can, though, with
glob.glob:

In [13]: import glob

In [14]: glob.glob('Files/*.zip')
Out[14]:
['Files/advanced-exercise-files.zip',
 'Files/exercise-files.zip',
 'Files/names.zip',
 'Files/words.zip']

In either case, you get the names of the files (and subdirectories) as
strings. You then can use a for loop or a list comprehension to iterate
over them and perform an action. Also note that in contrast with
os.listdir, which returns the list of filenames without any path,
glob.glob returns the full pathname of each file, something I’ve
often found to be useful.
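Here is a quick sketch of that difference, built over a temporary directory so it runs anywhere (the filenames are placeholders):

```python
import glob
import os
import tempfile

# Build a Files/ subdirectory with two zipfiles and one text file.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'Files'))
for name in ('names.zip', 'words.zip', 'readme.txt'):
    open(os.path.join(tmp, 'Files', name), 'w').close()

# glob.glob filters by pattern, and each match comes back with
# the path from the pattern attached.
zips = sorted(glob.glob(os.path.join(tmp, 'Files', '*.zip')))
```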

But what if you want to go through each file, including every file in
every subdirectory? Then you have a bit more of a problem. Sure, you could
use a for loop to iterate over each filename and then use
os.path.isdir to figure out whether it’s a subdirectory—and if so,
then you could get the list of files in that subdirectory and add them
to the list over which you’re iterating.

Or, you can use the os.walk function, which does all of this and
more. Although os.walk looks and acts like a function, it’s actually a
“generator function”—a function that, when executed, returns a
“generator” object that implements the iteration protocol. If you’re
not used to working with generators, running the function can be
a bit surprising:

In [15]: os.walk('.')
Out[15]: <generator object walk at 0x1035be5e8>

The idea is that you’ll put the output from os.walk in a
for
loop. Let’s do that:

In [17]: for item in os.walk('.'):
    ...:     print(item)

The result, at least on my computer, is a huge amount of output,
scrolling by so fast that I can’t read it easily. Whether that
happens to you depends on where you run this for loop on your
system and how many files (and subdirectories) exist.

In each iteration, os.walk returns a tuple containing three
elements:

  • The current path (that is, directory name) as a string.
  • A list of subdirectory names (as strings).
  • A list of non-directory filenames (as strings).

So, it’s typical to invoke os.walk such that each of these three
elements is assigned to a separate variable in the for loop:

In [19]: for currentdir, dirnames, filenames in os.walk('.'):
    ...:     print(currentdir)

The iterations continue until each of the subdirectories under the
argument to os.walk has been returned. This allows you to perform
all sorts of reports and interesting tasks. For example, the above
code will print all of the subdirectories under the current directory,
“.”.
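Combining the first and third elements of the tuple gives you the full path of every file in the tree. A minimal sketch, again over a temporary directory:

```python
import os
import tempfile

# Build a two-level tree: a.txt at the top, b.txt in a subdirectory.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'sub'))
open(os.path.join(tmp, 'a.txt'), 'w').close()
open(os.path.join(tmp, 'sub', 'b.txt'), 'w').close()

# os.walk yields (directory, subdirs, files); joining the current
# directory with each filename produces the file's full path.
all_paths = []
for currentdir, dirnames, filenames in os.walk(tmp):
    for one_filename in filenames:
        all_paths.append(os.path.join(currentdir, one_filename))
```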

Counting Files

Let’s say you want to count the number of files (not subdirectories)
under the current directory. You can say:

In [19]: file_count = 0

In [20]: for currentdir, dirnames, filenames in os.walk('.'):
    ...:     file_count += len(filenames)
    ...:

In [21]: file_count
Out[21]: 3657

You also can do something a bit more sophisticated, counting how many
files there are of each type, using the extension as a classifier. You
can get the extension with os.path.splitext, which returns two
items—the filename without the extension and the extension itself:

In [23]: os.path.splitext('abc/def/ghi.jkl')
Out[23]: ('abc/def/ghi', '.jkl')

You can count the items using one of my favorite Python data structures,
Counter. For example:

In [24]: from collections import Counter

In [25]: counts = Counter()

In [26]: for currentdir, dirnames, filenames in os.walk('.'):
    ...:     for one_filename in filenames:
    ...:         first_part, ext = os.path.splitext(one_filename)
    ...:         counts[ext] += 1

This goes through each directory under “.”, getting the
filenames. It then iterates through the list of filenames, splitting
the name so that you can get the extension. You then add 1 to the counter
for that extension.

Once this code has run, you can ask counts for a report. Because it’s
a dict, you can use the items method and print the keys and values
(that is, extensions and counts). You can print them as follows:

In [30]: for extension, count in counts.items():
    ...:     print(f"{extension:8}{count}")

In the above code, an f-string displays the extension (in
a field of eight characters) and the count.
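That field width comes from Python's format specification: a spec like {extension:8} pads the string to a minimum of eight characters, left-aligned by default. For example:

```python
# ':8' pads a string value to a minimum width of eight
# characters; strings are left-aligned by default, so the
# padding goes on the right.
extension, count = '.py', 1149
line = f"{extension:8}{count}"
```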

Wouldn’t it be nice though to show only the ten most common
extensions? Yes, but then you’d have to sort through the counts
object. It’s much easier just to use the most_common method that
the Counter object provides, which returns not only the keys and
values, but also sorts them in descending order:

In [31]: for extension, count in counts.most_common(10):
    ...:     print(f"{extension:8}{count}")
    ...:
.py 1149
867
.zip 466
.ipynb 410
.pyc 372
.txt 151
.json 76
.so 37
.conf 19
.py~ 12

In other words—not surprisingly—this example shows that the most common file extension
in the directory I use for teaching Python courses is .py. Files
without any extension are next, followed by .zip, .ipynb (Jupyter
notebooks) and .pyc (byte-compiled Python).

File Sizes

You can ask more interesting questions as well. For example, perhaps
you want to know how much disk space is used by each of these file
types. Now you don’t add 1 for each time you encounter a file
extension, but rather the size of the file. Fortunately, this turns
out to be trivially easy, thanks to the os.path.getsize
function (this returns the same value that you would get from
os.stat):

for currentdir, dirnames, filenames in os.walk('.'):
    for one_filename in filenames:
        first_part, ext = os.path.splitext(one_filename)
        try:
            counts[ext] += os.path.getsize(
                os.path.join(currentdir, one_filename))
        except FileNotFoundError:
            pass

The above code includes three changes from the previous version:

  1. As indicated, this no longer adds 1 to the count for each extension,
    but rather the size of the file, which comes from
    os.path.getsize.
  2. os.path.join puts the path and filename together
    and (as a
    bonus) uses the current operating system’s path separation character.
    What are the odds of a program being used on a Windows system and,
    thus, needing a backslash rather than a slash? Pretty slim, but it
    doesn’t hurt to use this sort of built-in operation.
  3. os.walk doesn’t normally look at symbolic links, which means
    you potentially can get yourself into some trouble trying to
    measure the sizes of files that don’t exist. For this reason, here
    the counting is wrapped in a try/except block.
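The dangling-symlink case is easy to reproduce. On a UNIX-like system (this sketch assumes one), os.path.getsize raises FileNotFoundError when a link's target no longer exists:

```python
import os
import tempfile

# Create a file, point a symlink at it, then remove the file,
# leaving a dangling link behind.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, 'gone.txt')
link = os.path.join(tmp, 'dangling')
open(target, 'w').close()
os.symlink(target, link)
os.remove(target)

# os.walk would still list 'dangling' as a filename, but
# asking for its size fails, hence the try/except above.
try:
    os.path.getsize(link)
    raised = False
except FileNotFoundError:
    raised = True
```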

Once this is done, you can identify the file types consuming
the greatest amount of space in the directory:

In [46]: for extension, count in counts.most_common(10):
    ...:     print(f"{extension:8}{count}")
    ...:
.pack 669153001
.zip 486110102
.ipynb 223155683
.sql 125443333
46296632
.json 14224651
.txt 10921226
.pdf 7557943
.py 5253208
.pyc 4948851

Now things seem a bit different! In my case, it looks like I’ve got a lot of
stuff in .pack
files, indicating that my Git repository (where I store all of my
old training examples, exercises and Jupyter notebooks) is quite
large. I have a lot in zipfiles, in which I store my daily updates.
And of course, lots in Jupyter notebooks, which are written in JSON
format and can become quite large. The surprise to me is the .sql
extension, which I honestly had forgotten that I had.

Files per Year

What if you want to know how many files of each type were modified in
each year? This could be useful for removing logfiles or (if you’re
like me) identifying what large, unnecessary files are taking up
space.

In order to do that, you’ll need to get the modification time
(mtime,
in UNIX parlance) for each file. You’ll then need to convert that
mtime
from a UNIX time (that is, the number of seconds since January 1st, 1970)
to something you can parse and use.

Instead of using a single Counter object to keep track of things, you
can use a dictionary. This dict's values will be Counter objects, with
the years serving as keys and the counts as values. Since you know that
all of the main dict's values will be Counter objects, you can use a
defaultdict, which requires you to write less code.

Here’s how you can do all of this:

from collections import defaultdict, Counter
from datetime import datetime

counts = defaultdict(Counter)

for currentdir, dirnames, filenames in os.walk('.'):
    for one_filename in filenames:
        first_part, ext = os.path.splitext(one_filename)
        try:
            full_filename = os.path.join(currentdir, one_filename)
            mtime = datetime.fromtimestamp(
                os.path.getmtime(full_filename))
            counts[ext][mtime.year] += 1
        except FileNotFoundError:
            pass

First, this creates counts as an instance of
defaultdict with a
Counter. This means that if you ask for a key that doesn't yet exist,
the key will be created, with its value being a new Counter. That
allows you to say something like this:

counts['.zip'][2018] += 1

without having to initialize either the '.zip' key (for counts) or the
2018 key (for the Counter object). You can just add 1 to the count
and know that it's working.
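A tiny sketch of that behavior:

```python
from collections import defaultdict, Counter

counts = defaultdict(Counter)

# Neither the '.zip' key nor the 2018 key exists yet;
# defaultdict creates the Counter on first access, and the
# Counter treats a missing year as zero.
counts['.zip'][2018] += 1
counts['.zip'][2018] += 1
counts['.py'][2017] += 1
```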

Then, when you iterate over the filesystem, you grab the mtime
from the
filename (using os.path.getmtime). That is turned into a
datetime
object with datetime.fromtimestamp, a great function that lets
you
move from UNIX timestamps to human-style dates and times. Finally, you
then add 1 to your counts.
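For example, a freshly created file's mtime converts to a datetime whose year is the current year (a sketch over a temporary file):

```python
import os
import tempfile
from datetime import datetime

# Make a brand-new temporary file.
fd, path = tempfile.mkstemp()
os.close(fd)

# getmtime returns seconds since the epoch; fromtimestamp turns
# that into a local datetime with a .year attribute.
mtime = datetime.fromtimestamp(os.path.getmtime(path))
year = mtime.year
```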

Once again, you can display the results:

for extension, year_counts in counts.items():
    print(extension)
    for year, file_count in sorted(year_counts.items()):
        print(f"\t{year}\t{file_count}")

The counts variable is now a defaultdict, but that means it behaves
just like a dictionary in most respects. So, you can iterate over its
keys and values with items, which is shown here, getting each file
extension and the Counter object for each.

Next, the extension is printed, and then it iterates over the years and
their counts, sorting them by year and printing them indented with
a tab (\t) character. In this way, you can see precisely how many
files of each extension have been modified per year—and perhaps
understand which files are truly important and which you easily can get
rid of.

Conclusion

Python can’t and shouldn’t replace Bash for simple scripting, but in
many cases, if you’re working with a large number of files and/or
creating reports, Python’s standard library can make it easy to
do such tasks with a minimum of code.

Source

Linux Today – U.S Supercomputers Lead Top500 Performance Ranking (all Linux-powered!)

Nov 12, 2018

The semi-annual Top500 list of the world’s most powerful supercomputers was released on Nov. 12, with the U.S. holding down the top two spots overall.

The IBM POWER9-based Summit system has retained the crown it first achieved in the June 2018 ranking. Summit is installed at the U.S. Department of Energy’s Oak Ridge National Laboratory and now has a performance of 143.5 petaflops, up from the 122.3 petaflops the system had when it first came online.

Source

The open source racer ‘SuperTuxKart’ is looking for testers to try their new online play

SuperTuxKart [Official Site], one of the stalwarts of Linux open source gaming, is finally getting online multiplayer, and the developers are now asking for testers.

There’s one caveat though: for now, you will need to download and compile it yourself until it’s properly released or someone provides it ready-made for us. The developers said their next release isn’t actually too far off, as they’re busy working on a beta/release candidate, so hopefully that won’t be too long.

Currently, they have around 20 servers up with various game modes for people to jump in and test and you can also host your own.

I’ve given it a brief test myself and the accounts system works properly, however it’s difficult to test unless you gather people since there’s no one online right at this moment. I did manage to find one other person online and gave it a quick spin, seems to work quite nicely:

I didn’t realise just how far SuperTuxKart has come along! It actually looks quite good when you turn the settings up and put it in fullscreen. Surprising really, it’s been a long time since I tested it and I’m genuinely impressed by it now. Adding in online play is going to make it rather sweet indeed.

Hopefully with us giving it a shout out, more will hop on to help them deliver a good experience.

They’re asking for feedback to be sent via either their forum or IRC channel, and for bug reports to be filed on their issue tracker. To arrange games, our Discord and IRC are always available.

See their full blog post here.

Hat tip to Porkhaus on Mastodon.

Source

Linux IoT Landscape: Distributions | Linux.com

Linux is an Operating System: the program at the heart of controlling a computer. It decides how to partition the available resources (CPU, memory, disk, network) between all of the other programs vying for it. The operating system, while very important, isn’t useful on its own. Its purpose is to manage the compute resources for other programs. Without these other programs, the Operating System doesn’t serve much of a purpose.

A distribution provides a large number of other programs that, together with Linux, can be assembled into working sets for a vast number of purposes. These programs can range from basic program writing tools such as compilers and linkers to communications libraries to spreadsheets and editors to pretty much everything in between. A distribution tends to have a superset of what’s actually used for each individual computer or solution. It also provides many choices for each category of software components that users or companies can assemble into what they consider a working set. A rough analogy can be made to a supermarket in which there are many options for many items on the shelves, and each user picks and chooses what makes sense to them in their cart.

Source

Beyond Finding Stuff | Linux.com

Continuing the quest to become a command-line power user, in this installment, we will be taking on the find command.

Jack Wallen already covered the basics of find in an article published recently here on Linux.com. If you are completely unfamiliar with find, please read that article first to come to grips with the essentials.

Done? Good. Now, you need to know that find can be used to do much more than just search for something; in fact, you can use it to search for two or three things at once. For example:

find path/to/some/directory/ -type f -iname '*.svg' -o -iname '*.pdf'

This will cough up all the files with the extensions svg (or SVG) and pdf (or PDF) in the path/to/some/directory/ directory. You can add more things to search for by using -o over and over.

You can also search in more than one directory simultaneously just by adding them to the route part of the command. Say you want to see what is eating up all the space on your hard drive:

find $HOME /var /etc -size +500M

This will return all the files bigger than 500 Megabytes (-size +500M) in your home directory, /var and /etc.

Additionally, find also lets you do stuff with the files it… er… finds. For example, you can use the -delete action to remove everything that comes up in a search. Now, be careful with this one. If you run

# WARNING: DO NOT TRY THIS AT $HOME

find . -iname "*" -delete

find will erase everything in the current directory (. is shorthand for “the current directory“) and everything in the subdirectories under it, and then the subdirectories themselves, and then there will be nothing but emptiness and an unbearable feeling that something has gone terribly wrong.

Please do not put it to the test.

Instead, let’s look at some more constructive examples…

Moving Stuff Around

Let’s say you have a bunch of pictures of Tux the penguin in several formats, spread out over dozens of directories, all under your Documents/ folder. You want to bring them all together into one directory (Tux/) to create a gallery you can revel in:

find $HOME/Documents/ \( -iname "*tux*png" -o -iname "*tux*jpg" -o -iname "*tux*svg" \)
-exec cp -v '{}' $HOME/Tux/ \;

Let’s break this down:

  • $HOME/Documents is the directory (and its subdirectories) find is going to search in.
  • You enclose what you want to search for between parentheses (\( … \)) because, otherwise, -exec, the option that introduces the command you want to run on the results, will only receive the result of the last search (-iname "*tux*svg"). There are two things you have to bear in mind when you do this: (1) you have to escape the parentheses using backslashes, like this: \( … \). You do that so the shell interpreter (Bash) doesn’t get confused (parentheses have a special meaning for Bash); and (2) there is one space between the opening bracket \( and -iname …, and another space between "*tux*svg" and the closing bracket \). If you don’t include those spaces, find will exit with an error.
  • -exec is the option you use to introduce the command you want to run on the found files. In this case it is a simple cp (copy) command. You use cp’s -v option to see what is going on.
  • ‘{}’ is the shorthand find uses to say “the file or directory I have found that matches the criteria you gave me“. ‘{}’ gets swapped for each file or directory as it is found and, in this case, then gets copied to the Tux/ directory.
  • \; tells find to execute the command for each result sequentially, that is, one after another. There is another option, +, which runs the command once, adding each result from find to the end of the command, making a long sausage of a string. But (1) this is not helpful for you here, and (2) you need the '{}' to be at the end of the command for this to work. You could use + to make executable all the files with the .sh extension tucked away under your Documents/ folder like this: find $HOME/Documents/ -name "*.sh" -exec chmod a+x '{}' +

Once you have the basics of modifying files using find under your belt, you will discover all sort of situations where it comes in handy. For example…

A Terrible Mish-Mash

Client X has sent you a zip file with important documents and images for the new website you are working on for them. You copy the zip into your ClientX folder (which already contains dozens of files and directories) and uncompress it with unzip newwebmedia.zip and, gosh darn it, the person who made the zip file didn’t compress the directory itself, but the contents of the directory. Now all the images, text files and subdirectories from the zip are mixed up with the original contents of your folder, which contains more images, text files and subdirectories.

You could try and remember what the original files were and then move or delete the ones that came from the zip archive. But with dozens of entries of all kinds, you are bound to get mixed up at some point and forget to move a file, or, worse, delete one of your original files.

Looking at the files’ dates (ls -la *) won’t help either: the Zip program keeps the dates the files were originally created, not when they were zipped or unzipped. This means a “new” file from the zip could very well have a date prior to some of the files that were already in the folder when you did the unzipping.

You probably can guess what comes next: find to the rescue! Move into the directory (cd path/to/ClientX), make a new directory where you want the new stuff to go (mkdir NewStuff), and then try this:

find . -cnewer newwebmedia.zip -exec mv '{}' NewStuff \;

Breaking that down:

  • The period (.) tells find to do its thing in the current directory.
  • -cnewer tells find to look for files that have been changed at the same time as, or after, a certain file you give as reference. In this case, the reference file is newwebmedia.zip. If you copied the file over at 12:00 and then unpacked it at 12:01, all the files that you unpacked will be tagged as changed at 12:01, that is, after newwebmedia.zip, and will match that criterion! And, as long as you didn’t change anything else, they will be the only files meeting that criterion.
  • The -exec part of the instruction simply tells find to move the files and directories to the NewStuff/ directory, thus cleaning up the mess.

If you are unsure of anything find may do, you can swap -exec for -ok. The -ok option forces find to check with you before it runs the command you have given it. Accept an action by typing y or reject it with n.

Next Time

We’ll be looking at environment variables and a way to search even more deeply into files with the grep command.

Source

Rugged, low-cost Bay Trail SBC runs Linux

Nov 13, 2018

VersaLogic released a rugged, PC/104-Plus form-factor “SandCat” SBC with a dual-core Intel Bay Trail SoC, -40 to 85℃ support, plus SATA, GbE, and mini-PCIe and more, starting at $370 in volume.

VersaLogic has spun a simpler, more affordable alternative to its BayCat single board computer, which similarly offers a Linux supported Intel Bay Trail SoC in a PC/104-Plus form-factor board. The rugged new SandCat is limited to a dual-core, 1.33GHz Atom E3825, and offers a somewhat reduced feature set, but launches at less than half the price of the dual-core version of the BayCat, selling at $370 in volume.

SandCat (left) and detail view

The venerable PC/104-Plus spec features a combination of ISA- and PCI-based self-stacking bus expansion. VersaLogic has used it on other SBCs, including one of the boards on its double-board, Kaby Lake based Liger.
The SandCat supports Linux, Windows, and other x86 platforms including VxWorks and QNX. Tested Linux distros include Ubuntu 14.04 LTS and Knoppix 7.4.2.

The 108 x 96mm SandCat has a tall 42mm profile due to its standard heat sink. The fanless SBC has an industrial -40 to 85℃ operating range, and features MIL-STD-202G rated vibration (Methods 204 and 214A) and shock resistance (Method 213B), as well as tolerance of humidity and thermal shock. The board also provides long-term, typically 10-year, availability.

The SandCat is available with up to 8GB DDR3L-1067 RAM via a single socket and offers a 3Gbps SATA II port. Like most PC/104 family boards, the SandCat doesn’t offer much in the way of real-world ports. You get a mini-DisplayPort with 1920 x 1080 and audio support and an optional adapter card for LVDS touch-panels.

SandCat (left) and sample mini-PCIe add-on modules

The SandCat is equipped with a GbE interface with remote boot support, as well as 4x USB 2.0, 2x RS-232/422/485, 8x DIO, and single I2C and audio interfaces. Expansion features include a full-length mini-PCIe slot with optional WiFi, GPS, mSATA, GbE, and I/O modules, as well as dual PC/104-Plus interfaces, with support for both ISA and PCI based modules.

There’s a 5V input with ACPI 3.0 sleep modes. Other features include hardware monitoring, support for RTC backup battery, and the VersaAPI board I/O interface. A variety of optional cables and other add-ons are available, and VersaLogic provides extensive hardware and software customization services for 100+ volume orders.

Further information

The SandCat (VL-EPM-39EBK) is available now with quantity pricing starting at $370. More information may be found in VersaLogic’s SandCat announcement and on the SandCat product page.

Source

Install Eclipse on Ubuntu Using Command Line


Eclipse is a free integrated development environment (IDE) used by programmers around the world to write software, mostly in Java, but also in other major programming languages via Eclipse plugins.

Eclipse is not only good for developing applications; you can also use its collection of tools to enhance your Eclipse desktop IDE, including GUI builders and tools for modeling, charting, reporting, testing and more.

To install Eclipse on Ubuntu, follow the steps below:

Eclipse requires the Java JDK to be installed on the system you want to use. At this time, only Java JDK 8 is fully compatible. To install the JDK, use the steps below.

The easiest way to install Oracle Java JDK 8 on Ubuntu is via a third-party PPA. To add that PPA, run the commands below:

sudo add-apt-repository ppa:webupd8team/java

After running the commands above, you should see a prompt to accept the PPA key onto Ubuntu. Accept and continue.

Now that the PPA repository has been added to Ubuntu, run the commands below to download the Oracle Java 8 installer. The installer should install the latest Java JDK 8 on your Ubuntu machines.

sudo apt update
sudo apt install oracle-java8-installer

When you run the commands above, you’ll be prompted to accept the license terms of the software. Accept and continue.

To set Oracle JDK 8 as the default, install the oracle-java8-set-default package. This will automatically set the JAVA environment variable.

sudo apt install oracle-java8-set-default

The command above will automatically set Java 8 as the default, and that should complete your installation. You can check your Java version by running the following command:

javac -version

Now that Java JDK 8 is installed, go and download the Eclipse Oxygen IDE package for your system. The link below can be used to get it.

Download Eclipse


Extract the downloaded package to the directory /opt using the commands below. By default, the Eclipse package should be downloaded into the folder ~/Downloads of your home directory.

Use the commands below to extract the content in the ~/Downloads folder. The next line launches the installer:

tar xfz ~/Downloads/eclipse-inst-linux64.tar.gz
~/Downloads/eclipse-installer/eclipse-inst

Select the IDE package you want to install and continue.

Use the onscreen instructions to complete the installation. Accept the default installation directory and continue.

Next, accept the license terms and continue, then wait for the Eclipse installer to download and install all the packages.

After downloading, the installation should complete. All you have to do is launch the program.

Now that Eclipse is downloaded and installed, create a launcher for the application. To do that, run the commands below.

nano .local/share/applications/eclipse.desktop

Next, copy and paste the content below into the file and save.

[Desktop Entry]
Name=Eclipse JEE Oxygen
Type=Application
Exec=/home/smart/eclipse/jee-oxygen/eclipse/eclipse
Terminal=false
Icon=/home/smart/eclipse/jee-oxygen/eclipse/icon.xpm
Comment=Integrated Development Environment
NoDisplay=false
Categories=Development;IDE;
Name[en]=Eclipse

Replace the highlighted username (smart) with your own account name. Also, the Exec= location and icon.xpm path depend on where Eclipse was installed on your system.

Save the file and exit.

You should then have a launcher for Eclipse JEE Oxygen. Open the Dash or the Activities overview, search for Eclipse and then launch it.

To install additional IDEs, repeat step 3 by launching the installer again, then create an application launcher for that IDE.

When the app launches, you should be able to configure it for your environment.


Source

Red Hat Releases Red Hat OpenStack Platform 14 and a New Virtual Office Solution, ownCloud Enterprise Integrates with SUSE Ceph/S3 Storage, Run a Linux Shell on iOS with iSH and Firefox Launches Two New Test Pilot Features

News briefs for November 13, 2018.

Red Hat this morning released Red Hat OpenStack Platform 14, delivering “enhanced Kubernetes integration, bare metal management and additional automation”. According to the press release, it will be available in the coming weeks via the Red Hat Customer Portal and as a component of both Red Hat Cloud Infrastructure and Red Hat Cloud Suite.

Red Hat also announced a new virtual office solution today. This solution “provides a blueprint for modernizing telecommunications operations at the network edge via an open, software-defined infrastructure platform”. Learn more about it here.

ownCloud yesterday announced SUSE Enterprise Storage Ceph/S3 API as a certified storage backend for ownCloud Enterprise Edition. The press release notes that the “SUSE Ceph/S3 Storage integration reduces dependency on proprietary hardware by replacing an organization’s storage infrastructure with an open, unified and smarter software-defined storage solution”. For more information on ownCloud, visit here.

There’s a new project called iSH that lets you run a Linux shell on an iOS device. Bleeping Computer reports that the project is available as a TestFlight beta for iOS devices, and it is based on Alpine Linux. It allows you to “transfer files, write shell scripts, or simply to use Vi to develop code or edit files”. You first need to install the TestFlight app, and then you can start testing the app by visiting this page: https://testflight.apple.com/join/97i7KM8O.

The Firefox Test Pilot Team announces two new features: Price Wise and Email Tabs. Price Wise lets you add products to your Price Watcher list, and you’ll receive desktop notifications whenever the price drops. With Email Tabs, you can “select and send links to one or many open tabs all within Firefox in a few short steps, making it easier than ever to share your holiday gift list, Thanksgiving recipes or just about anything else”. See the Mozilla Blog for details.

Source
