Install Skype on Ubuntu | Linux Hint

Skype is one of the most popular platforms for video chat. It’s free to use and offers strong security features. Skype is also platform-independent, meaning the service is available to everyone, everywhere.

The Skype client is available for all the major platforms, including Windows, macOS, and Linux. Ubuntu is, in fact, one of the most popular Linux distros in the world.

Let’s enjoy Skype client on the Ubuntu system!

Getting Skype

Skype provides the client as an installable DEB package for Ubuntu, Debian, and their derivatives. Get the latest DEB package of Skype.

Installing Skype

Once the download is complete, fire up a terminal and run the following commands:

sudo dpkg -i skypeforlinux-64.deb
sudo apt install -f
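
As a hedged alternative on Ubuntu 16.04 and later, apt can install a local DEB package and resolve its dependencies in a single step (the leading ./ is required so apt treats the argument as a file rather than a package name):

sudo apt install ./skypeforlinux-64.deb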

Uninstalling Skype

If you ever wish to remove Skype from your device, run the following command:

# Using "--purge" will remove all the account credentials and configurations from your device
sudo apt remove --purge skypeforlinux
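
On recent Ubuntu releases, the shorter purge form should be equivalent (a minimal sketch; apt purge implies remove --purge):

sudo apt purge skypeforlinux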

Using Skype

Installation complete? Time to enjoy Skype.

Launch Skype from the menu.
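
If you prefer the terminal, the official DEB package also installs a command-line launcher (the binary name here is assumed to match the package name):

skypeforlinux &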

You’ll be on the welcome page of the new Skype client.

You have to log in or sign up for a Skype account.

Enter your credentials for logging into your account.

After a successful login, you will have the option to check whether your microphone is properly configured.

Don’t forget to test your webcam as well.

After everything is set, you’ll be on the Skype dashboard.

Enjoy!

Source

10 Practical Grep Command Examples in Linux

Brief: The grep command is used to find patterns in files. This tutorial shows some of the most common grep command examples that would be specifically beneficial for software developers.

Recently, I started working with Asciidoctor.js and on the Asciidoctor.js-pug and Asciidoctor-templates.js projects. It is not always easy to be immediately effective when you dig for the first time into a codebase containing several thousand lines. But my secret weapon for finding my way through so many lines of code is the grep tool.

I am going to share with you how to use grep command in Linux with examples.

Using grep commands in Linux

Grep command example

If you look at the man page, you will see this short description for the grep tool: "print lines matching a pattern." However, don't be fooled by such a humble definition: grep is one of the most useful tools in the Unix toolbox, and there are countless occasions to use it as soon as you work with text files.

It is always better to have real-world examples to learn how things work. So, I will use the Asciidoctor.js source tree to illustrate some of the grep capabilities. You can download that source tree from GitHub, and if you want, you may even check out the same changeset I used when writing this article. That will ensure you obtain results perfectly identical to those described in the rest of this article:

git clone https://github.com/asciidoctor/asciidoctor.js
cd asciidoctor.js
git checkout v1.5.6-rc.1

1. Find all occurrences of a string (basic usage)

Asciidoctor.js is supporting the Nashorn JavaScript engine for the Java platform. I do not know Nashorn, so I could take that opportunity to learn more about it by exploring the project parts referencing that JavaScript engine.

As a starting point, I checked if there were some settings related to Nashorn in the package.json file describing the project dependencies:

sh$ grep nashorn package.json
"test": "node npm/test/builder.js && node npm/test/unsupported-features.js && node npm/test/jasmine-browser.js && node npm/test/jasmine-browser-min.js && node npm/test/jasmine-node.js && node npm/test/jasmine-webpack.js && npm run test:karmaBrowserify && npm run test:karmaRequirejs && node npm/test/nashorn.js",

Yes, apparently there was some Nashorn-specific tests. So, let’s investigate that a little bit more.

2. Case insensitive search in a file set

Now, I want to have a closer look at the files from the ./npm/test/ directory mentioning explicitly Nashorn. A case-insensitive search (-i option) is probably better here since I need to find both references to nashorn and Nashorn (or any other combination of upper- and lower-case characters):

sh$ grep -i nashorn npm/test/*.js
npm/test/nashorn.js:const nashornModule = require('../module/nashorn');
npm/test/nashorn.js:log.task('Nashorn');
npm/test/nashorn.js:nashornModule.nashornRun('jdk1.8.0');

Indeed, case insensitivity was useful here. Otherwise, I would have missed the require('../module/nashorn') statement. No doubt I should examine that file in greater detail later.

3. Find non-matching files

By the way, are there any files in the npm/test/ directory that are not Nashorn-specific? To answer that question, we can use the "print non-matching files" option of grep (the -L option):

sh$ grep -iL nashorn npm/test/*
npm/test/builder.js
npm/test/jasmine-browser-min.js
npm/test/jasmine-browser.js
npm/test/jasmine-node.js
npm/test/jasmine-webpack.js
npm/test/unsupported-features.js

Notice how with the -L option the output of grep has changed to display only filenames. So, none of the files above contain the string “nashorn” (regardless of the case). That does not mean they are not somehow related to that technology, but at least, the letters “n-a-s-h-o-r-n” are not present.

4. Finding patterns into hidden files and recursively into sub-directories

The last two commands used a shell glob pattern to pass the list of files to examine to the grep command. However, this has some inherent limitations: the star (*) will not match hidden files, nor will it match files possibly contained in sub-directories.

A solution would be to combine grep with the find command instead of relying on a shell glob pattern:

# This is not efficient as it will spawn a new grep process for each file
sh$ find npm/test/ -type f -exec grep -iL nashorn {} \;
# This may have issues with filenames containing space-like characters
sh$ grep -iL nashorn $(find npm/test/ -type f)

As I mentioned in the comments in the code block above, each of these solutions has drawbacks. Concerning filenames containing space-like characters, I let you investigate the grep -z option which, combined with the -print0 option of the find command, can mitigate that issue. Don't hesitate to use the comment section at the end of this article to share your ideas on that topic!
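
For reference, a commonly used workaround (a sketch, and not the only possible one) pairs find's -print0 with xargs -0, so that filenames are passed as NUL-separated strings and embedded spaces are harmless:

sh$ find npm/test/ -type f -print0 | xargs -0 grep -iL nashorn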

Nevertheless, a better solution would use the “recursive” (-r) option of grep. With that option, you give on the command line the root of your search tree (the starting directory) instead of the explicit list of filenames to examine. With the -r option, grep will examine all files in the search directory, including hidden ones, and then it will recursively descend into any sub-directory:

sh$ grep -irL nashorn npm/test/
npm/test/builder.js
npm/test/jasmine-browser-min.js
npm/test/jasmine-browser.js
npm/test/jasmine-node.js
npm/test/jasmine-webpack.js
npm/test/unsupported-features.js

Actually, with that option, I could also start my exploration one level above to see if there are non-npm tests that target Nashorn too:

sh$ grep -irL nashorn npm/

I let you test that command by yourself to see its outcome; but as a hint, I can say you should find many more matching files!

5. Filtering files by their name (using regular expressions)

So, there seem to be some Nashorn-specific tests in that project. Since Nashorn is Java, another question that could be raised would be: "are there any Java source files in the project explicitly mentioning Nashorn?"

Depending on the version of grep you use, there are at least two solutions to answer that question. The first one is to use grep to find all files containing the pattern "nashorn", then pipe the output of that first command to a second grep instance filtering out non-Java source files:

sh$ grep -ir nashorn ./ | grep "^[^:]*\.java"
./spec/nashorn/AsciidoctorConvertWithNashorn.java:public class AsciidoctorConvertWithNashorn {
./spec/nashorn/AsciidoctorConvertWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/AsciidoctorConvertWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/asciidoctor-convert.js"));
./spec/nashorn/BasicJavascriptWithNashorn.java:public class BasicJavascriptWithNashorn {
./spec/nashorn/BasicJavascriptWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/BasicJavascriptWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/basic.js"));

The first half of the command should be understandable by now. But what about that “^[^:]*\.java” part?

Unless you specify the -F option, grep assumes the search pattern is a regular expression. That means, in addition to plain characters that match verbatim, you have access to a set of metacharacters to describe more complex patterns. The pattern I used above will only match:

  • ^ the start of the line
  • [^:]* followed by a sequence of any characters except a colon
  • \. followed by a dot (the dot has a special meaning in regex, so I had to protect it with a backslash to express I want a literal match)
  • java followed by the four letters “java”

In practice, since grep uses a colon to separate the filename from the matching line, I keep only lines having .java in the filename section. Worth mentioning, it would also match .javascript filenames. This is something I let you try solving by yourself if you want; one possible solution follows.
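
A hedged sketch of one possible solution: anchor the pattern on the colon grep prints right after the filename, so nothing can follow the java extension:

sh$ grep -ir nashorn ./ | grep "^[^:]*\.java:"

Requiring a colon immediately after "java" rules out longer extensions such as .javascript.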

6. Filtering files by their name using grep

Regular expressions are extremely powerful. However, in that particular case, it seems overkill. Not to mention that with the above solution, we spend time examining all files in search of the “nashorn” pattern, with most of the results being discarded by the second step of the pipeline.

If you are using the GNU version of grep, which is likely if you are using Linux, you have another solution with the --include option. This instructs grep to search only files whose names match the given glob pattern:

sh$ grep -ir nashorn ./ --include='*.java'
./spec/nashorn/AsciidoctorConvertWithNashorn.java:public class AsciidoctorConvertWithNashorn {
./spec/nashorn/AsciidoctorConvertWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/AsciidoctorConvertWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/asciidoctor-convert.js"));
./spec/nashorn/BasicJavascriptWithNashorn.java:public class BasicJavascriptWithNashorn {
./spec/nashorn/BasicJavascriptWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/BasicJavascriptWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/basic.js"));

7. Finding words

The interesting thing about the Asciidoctor.js project is it is a multi-language project. At its core, Asciidoctor is written in Ruby, so, to be usable in the JavaScript world, it has to be “transpiled” using Opal, a Ruby to JavaScript source-to-source compiler. Another technology I did not know about before.

So, after having examined the Nashorn specificities, I assigned to myself the task of better understanding the Opal API. As the first step in that quest, I searched all mentions of the Opal global object in the JavaScript files of the project. It could appear in affectations (Opal =), member access (Opal.) or maybe even in other contexts. A regular expression would do the trick. However, once again, grep has some more lightweight solution to solve that common use case. Using the -w option, it will match only words, that is patterns preceded and followed by a non-word character. A non-word character is either the begin of the line, the end of the line, or any character that is neither a letter, nor a digit, nor an underscore:

sh$ grep -irw --include='*.js' Opal .

8. Coloring the output

I did not copy the output of the previous command since there were many matches. When the output is dense like that, you may wish to add a little bit of color to ease understanding. If this is not already configured by default on your system, you can activate the feature using the GNU --color option:

sh$ grep -irw --color=auto --include='*.js' Opal .

You should obtain the same long result as before, but this time the search string should appear in color if it was not already the case.

9. Counting matching lines or matching files

I mentioned twice the output of the previous commands was very long. How long exactly?

sh$ grep -irw --include='*.js' Opal . | wc -l
86

That means we have a total of 86 matching lines in all the examined files. However, how many different files match? With the -l option you can limit the grep output to the matching filenames instead of displaying matching lines. So that simple change will tell us how many files match:

sh$ grep -irwl --include='*.js' Opal . | wc -l
20

If that reminds you of the -L option, no surprise: as is relatively common, lowercase and uppercase are used to distinguish complementary options. -l displays matching filenames. -L displays non-matching filenames. For another example, I let you check the manual for the -h/-H options.

Let’s close that parenthesis and go back to our results: 86 matching lines, 20 matching files. However, how are the matching lines distributed among the matching files? We can find out using the -c option of grep, which counts the number of matching lines per examined file (including files with zero matches):

grep -irwc --include='*.js' Opal .

Often, that output needs some post-processing, since it displays its results in the order in which the files were examined, and it also includes files without any match, something that usually does not interest us. The latter issue is quite easy to solve:

grep -irwc --include='*.js' Opal . | grep -v ':0$'

As for ordering things, you may add the sort command at the end of the pipeline:

sh$ grep -irwc --include='*.js' Opal . | grep -v ':0$' | sort -t: -k2n

I let you check the sort command manual for the exact meaning of the options I used. Don’t forget to share your findings using the comment section below!

10. Finding the difference between two matching sets

If you remember, a few commands ago, I searched for the word “Opal.” However, if I search the same file set for all occurrences of the string “Opal,” I obtain about twenty more answers:

sh$ grep -irw --include='*.js' Opal . | wc -l
86
sh$ grep -ir --include='*.js' Opal . | wc -l
105

Finding the difference between those two sets would be interesting. So, what are the lines containing the four letters “opal” in a row, but where those four letters do not form an entire word?

It is not that easy to answer that question, because the same line can contain both the word Opal and some larger word containing those four letters. But as a first approximation, you may use this pipeline:

sh$ grep -ir --include='*.js' Opal . | grep -ivw Opal
./npm/examples.js: const opalBuilder = OpalBuilder.create();
./npm/examples.js: opalBuilder.appendPaths('build/asciidoctor/lib');
./npm/examples.js: opalBuilder.appendPaths('lib');

Apparently, my next stop would be to investigate the opalBuilder object, but that will be for another day.

The last word

Of course, you will not understand a project’s organization, much less its code architecture, just by issuing a couple of grep commands! However, I find that command indispensable for identifying landmarks and starting points when exploring a new codebase. So, I hope this article helped you understand the power of the grep command and that you will add it to your tool chest. No doubt you will not regret it!

Source

AMD Ryzen Threadripper 2920X & 2970WX Linux Performance Benchmarks

Beginning today the AMD Ryzen Threadripper 2970WX and 2920X processors are shipping and we are now allowed to share our performance benchmarks for these latest Zen+ Threadripper 2 processors. Here’s a look at the Linux performance and related metrics for these new 12-core/24-thread and 24-core/48-thread processors.


The availability of the Threadripper 2920X and 2970WX rounds out the initial line-up of AMD Threadripper 2 processors announced this summer. Back in August came the launch of the Threadripper 2950X, a very viable upgrade over the first-generation Threadripper 1950X, followed by the jaw-dropping top-end Threadripper 2990WX with its 32 cores and 64 threads.


As we’ve shared over the days and months that followed, the 2990WX in particular has been really quite legendary on Linux (as well as BSD) operating systems, while Microsoft Windows had some initial (and seemingly still ongoing) scheduling bottlenecks with Threadripper 2. These high core/thread count processors also make a lot of sense for Linux users, who generally compile code and run other parallel tasks more often than a typical Windows user would. Now here we are today to look at the Threadripper 2920X and 2970WX to see how these HEDT processors perform compared to the other Threadripper parts as well as the Intel competition.


The Threadripper 2920X is a 12-core / 24-thread processor that offers a 3.5GHz base frequency with a 4.3GHz boost frequency. This CPU has a 32MB L3 cache, is manufactured on a 12nm process, and shares the rest of the common features of the Threadripper 2 line-up, such as quad-channel DDR4-2933 support. The 2920X has a 180 Watt TDP like the Threadripper 2950X. This CPU is launching at $649 USD, compared to the $899 USD Threadripper 2950X in its 16C/32T configuration with the same base clock frequency but a 4.4GHz boost frequency (+100MHz). So it’s a nice step above what is offered by the current top-end Ryzen 7 (2700X) and only about $100 to $150 more than the Intel Core i9 9900K 8C/16T part.


The Threadripper 2970WX meanwhile comes in at $1299 USD to fit between the $899 Threadripper 2950X and the $1799 Threadripper 2990WX. The Threadripper 2970WX offers 24 cores / 48 threads and matches the base/boost clock frequencies of the 2990WX at 3.0GHz and 4.2GHz, respectively. The specs end up just like the Threadripper 2990WX’s but with 24 cores / 48 threads rather than 32 cores / 64 threads, which leads to the price being $500 less. The 2970WX still maintains a 250 Watt TDP and 64MB L3 cache.


The AMD Threadripper 2920X and 2970WX processors continue to work with all existing AMD X399 motherboards. For my launch-day testing I have mostly been testing these new parts with the Gigabyte X399 AORUS GAMING 7 motherboard. Thanks to AMD for supplying these review samples in time to once again deliver launch-day Linux performance results, especially given the many multi-threaded workloads Linux users tend to encounter.

Source

Open Source 3D Printing: Exploring Scientific and Medical Solutions

3D Printing is not a new thing to hear about. It is a very popular industry right now, and one that began in the early 80s. But how different is Open Source 3D Printing from proprietary designs? How does this affect its applications in Science and Medicine? Let’s read on.

What is 3D Printing All About?

Just like a conventional computer printer is used to print in 2D on paper, the job of a 3D printer is to create actual three-dimensional objects, solidified from a digital 3D file, aided by a computer of course. Most methods work by adding material layer by layer, though the specific processes differ.

Materials can be in liquid or powder form, meant to be fused together; they serve as input material for the 3D printer just as inkjet 2D printers require ink cartridges. The objects that are created can be of almost any geometrical shape.

A more industrial term for 3D Printing is Additive Manufacturing.

Why is 3D printing so useful?

3D Printing

3D Printing is so popular because of its nearly limitless applications. Let’s briefly look into some of these applications, although our main focus will be on Applied Science and Medical Applications.

1. Rapid Prototyping

In 3D Printing, Rapid Prototyping is a process in which smaller parts of a larger device can be quickly manufactured for enhanced productivity, with the help of 3D CAD. This is a great way to test the usability of prototypes for industry standards, hence the term.

2. Vehicles

There are a wide variety of 3D printed manufacturing processes for aviation, automotive, aerospace, shipbuilding and more.

3. Environment

3D Printed coral structures are now being developed to save our dying coral reefs.

4. Construction

Parts of entire buildings can now be created with 3D Printing and then assembled on site to construct various kinds of structures. You can now 3D Print your house in under a day!

5. Dentistry

Did you know that even teeth can be 3D Printed? Think of how accurately they can be designed to replace or repair teeth!

6. Gadgets and Tools

You can even 3D print your own customized gadgets and tools for personal use!

7. Organs

Yes, that’s correct: 3D Printing research has made so much progress that it is now possible to recreate human organs, ready for transplant into patients whose liver, kidney, heart, lungs or any other vital organ is damaged beyond “repair”.

Now that we have seen some of the various applications, let us now ponder which of the following two approaches is more suitable for them:

Proprietary (Closed Source) 3D Printing

Proprietary 3D Printing, as the phrase suggests, uses proprietary software that does not enable access to source code for community-wide development. Any changes made to the hardware will also void your warranty if you happen to own a proprietary 3D Printer. If you need to change the way the printer works in order to customize it for your specific requirements, you are blocked by a number of such restrictions.

If such rules are followed for any of the 3D Printing applications that we discussed in the above section, it becomes really difficult to focus on actual project objectives.

Proprietary 3D Printing can be really expensive, not just in terms of money, but also if you consider time, which is also quite valuable to consider while working on a 3D Printing project.

Open Source 3D Printing

Open Source 3D Printing eliminates all the issues we just discussed in the proprietary section. Not only does it reduce costs, it enables easier innovation to solve issues faced during 3D manufacturing.

Apparently, the phrase, “Open Source 3D Printing” is also gaining popularity as is evident with a simple search online.

It is now possible for users to go completely Open Source, greatly reducing production time and manufacturing costs!

Examples of Applied Science and Medical Solutions Achieved with Open Source 3D Printing

We thought about which of these many applications is most significant for radically enhancing and sustaining the quality of our life and our planet, and hence we decided to specifically explore the Scientific and Medical Solutions to do just that.

So in this final and most important section, let us pick the related applications that we just discussed and look in detail into some examples where we feel an Open Source approach is most necessary:

1. Saving Our Coral Reefs

3D Printed Coral Reefs developed by Reef Design Lab

Coral Reefs are an extremely important part of our planet’s biodiversity and they are dying.

3D Printed coral reefs are now a very promising initiative to help restore them. Reef Design Lab has recently made it possible for such structures to support coral life. The designs of the 3D models involved in the project will be made Open Source so that researchers who want to contribute can actively take part.

2. Replacement of Teeth

3D Printed teeth? Yes, that’s a definite possibility today! There’s also an interesting improvement in the design: these teeth are made from material that is anti-bacterial in nature! This makes it possible to kill the bacteria responsible for tooth decay on contact with the food that you chew!

3. Bioprinting

A 3D Bioprinter is a device that requires “bio-ink” to be used as material to 3D Print bioengineered tissue.

The following short video describes the process of Bioprinting a human ear. Note how they do not use plastic or rubber but living tissue as a biomaterial!

The Open Bioprinting Initiative

As we have learnt, Tissue Engineering is greatly driven by 3D Printing technology. We should also consider that every patient is different, so an open platform that allows customized manufacturing for tissue and organ generation is necessary.

An open system that enables such customized printing of bio-materials that are diverse in nature will make it much easier to conduct research in Tissue Engineering.

The Open Bioprinting Initiative was a step that addresses this primary objective. The related paper is not Open Access, but for educational purposes it has been made available in their GitHub repository named Papers.

The paper shows how an Open Source multi-channel 3D Bioprinting system is important both in terms of hardware and software. It also mentions cost-effectiveness, because the system is designed and integrated with an Open Source approach to find optimal conditions for 3D Bioprinting.

The Quest for a Fully Functional Bioprinted Heart!

We all know how important the heart is for our health. A medical technology company named Biolife4D recently demonstrated their ability to 3D Bioprint Human Heart Tissue! This is a remarkable achievement!

They use living cells to bioprint biological structures. The patient’s own white blood cells were reprogrammed to create pluripotent stem cells and then cardiac cells; the process took a matter of days to generate a complete cardiac patch.

Currently, their research involves the development of individual parts such as heart valves and blood vessels for the heart. Their ultimate objective at the moment is to create a fully functional bioprinted heart.

We looked online for their Open Source repositories but were unable to find any. We hope they make some of their research Open Source in the future so that more academicians and researchers can collaboratively contribute towards developing a fully functional 3D Bioprinted Heart. Such an action would also greatly empower initiatives like Open Bioprinting.

Applied Nanotechnology for Organ Transplant

“The field of tissue engineering is advancing steadily, partly due to advancements in rapid prototyping technology. Even with increasing focus, successful complex tissue regeneration of vascularized bone, cartilage and the osteochondral interface remains largely illusive. This review examines current three-dimensional printing techniques and their application towards bone, cartilage and osteochondral regeneration. The importance of, and benefit to, nanomaterial integration is also highlighted with recent published examples. Early-stage successes and challenges of recent studies are discussed, with an outlook to future research in the related areas.”

Nowicki, M., Castro, N. J., Rao, R., Plesniak, M., & Zhang, L. G. (2017). Integrating three-dimensional printing and nanotechnology for musculoskeletal regeneration. Nanotechnology, 28(38), 382001. doi:10.1088/1361-6528/aa8351

In our previous Open Science article, we discussed the nanotech and open source topic in detail while mentioning this article in its summary. Nanotechnology and 3D Printing share a strong correlation.

We discussed materials that are used to create 3D objects via the printers. These materials can also be designed at the nanoscale.

Since the materials are designed bottom-up from the nanoscale, enabling three essential precision levels (nano, micro and macro), it is now possible to retain properties like maximum strength with minimal weight. This means we can now adjust the elasticity, strength or hardness of such 3D Printed objects with high accuracy.

Such extremely high accuracy is of utmost importance in the development of 3D Printed Human Organs, which could actually make it possible to save countless lives. Tissue Engineering would be greatly enhanced, promoting effective manufacturing of 3D printed bone, cartilage or osteochondral tissue.

That’s not all, as we have already seen how other vital organs are even more significant.

4. Drug Discovery

“The current achievements include multifunctional drug delivery systems with accelerated release characteristic, adjustable and personalized dosage forms, implants and phantoms corresponding to specific patient anatomy as well as cell-based materials for regenerative medicine.”

Jamróz, W., Szafraniec, J., Kurek, M., & Jachowicz, R. (2018). 3D Printing in Pharmaceutical and Medical Applications – Recent Achievements and Challenges. Pharmaceutical Research, 35(9). doi:10.1007/s11095-018-2454-x

We previously discussed why Open Source Pharma is said to be “Linux for Drugs”. 3D Printing strengthens that initiative because it offers greater flexibility, saves time, and can manufacture medicine with extreme precision. Such a drug discovery method makes use of 3D Printing’s basic layer-by-layer CAD approach to formulate drug materials with the correct dosage.

The FDA approved the first 3D Printed drug some years ago. 3D Printed Drug Development addresses the challenges of conventional manufacturing techniques in pharmaceutical units. The greater advantage lies in its far better ability to create quality drugs in terms of drug loading, drug release, drug stability and pharmaceutical dosage form stability, as described in this Open Access paper in much detail.

Summary

So in this extensive article covering 3D Printing, we started by briefly introducing you to the concept followed by understanding its significance with different examples of applications.

Further ahead, we differentiated between Proprietary and Open Source 3D Printing Models to understand the advantages of the latter.

Finally, we focused on the Scientific and Medical Solutions for Open Source Bioprinting by looking into initiatives for saving our corals, teeth replacement with anti-bacterial abilities, Bioprinting with focus on Open Source Bioprinting and Applied Nanotechnology for Organ Transplant. In our final subsection, we also highlighted the role of 3D Printing in Drug Discovery.

These are only some of the many applications of 3D Printing. We believe there is a need for Proprietary manufacturers to migrate towards Open Source Business Models that would promote better applicability for our planet.

What are your views? Do you think there should be more effort in Open Bioprinting and other 3D Printing Applications? Have you ever been involved with 3D Printing? Please share your thoughts with us in the comments below.

Source

Download Elementary OS 5.0

elementary OS is an open source operating system based on Ubuntu Linux, the world’s most popular free OS, and built around the GNOME desktop environment. It features its own theme, icons and applications.

Distributed as 64-bit and 32-bit Live DVDs

The system is usually distributed as two Live DVD ISO images, one for each of the supported hardware platforms, 64-bit and 32-bit. It allows users to use the live environment directly from USB flash drives or blank DVDs.

Boot options

The design of the boot prompt and its default functionality is unchanged from Ubuntu, allowing users to run a memory test, boot an existing operating system from the first disk drive, test the OS without installing, or directly install it (not recommended).

If you don’t press a key to force the boot from the external USB stick or DVD disc, it will automatically load and start the live desktop environment, which comprises a top panel, from where users can access the unique main menu and launch apps, as well as a dock (application launcher) on the bottom edge of the screen.

Default applications

Default applications include the Midori web browser, Nautilus (Files) file manager, Empathy multi-protocol instant messenger, File Roller archive manager, Geary email client, GParted disk partition editor, Totem movie player, Evince document viewer, Shotwell image viewer and organizer, and Scratch text editor.

It also comes with in-house developed applications, such as calendar and music clients, called Calendar and Music. However, everything in elementary OS is designed to perfection and engineered to define the unwritten laws of Linux-based operating systems.

You can add even more applications using the included Software Center tool, from where you can also update or remove applications. It is also possible to install the operating system directly from the live session using the graphical installer provided on the dock.

Bottom line

What can we say? elementary OS is an extraordinary project that provides users with an independent and highly-customizable operating system that is based on and compatible with the current Ubuntu LTS distribution.

Source

SUSE Manager for Retail 3.2 now available: Lowering Costs and Optimizing Retail Operations


Before I introduce you to the new SUSE Manager for Retail 3.2 offering, let me walk you through the journey of how SUSE has evolved its offerings for the retail environment.

We have moved away from just an image management paradigm with the legacy SUSE Linux Enterprise Point of Service offering to a comprehensively managed end-point paradigm with SUSE Manager for Retail.

Legacy SUSE Linux Enterprise Point of Service

Traditionally the solution SUSE offered for the point of service (POS) environments was SUSE Linux Enterprise Point of Service, or SLEPOS as we call it.

SLEPOS was a 3-tiered solution, with 3 separate components:

We offered the SLEPOS Admin Server, which was ideally installed in a central location. This was a central point for basic management of the point of service infrastructure. It hosted an LDAP database to store configuration information for the point of service client devices and the branch server. Among other things, it also provided the tools for creating and customizing system images, as well as storing those system images for distribution to the branch servers and the point of service terminals. There was usually one Admin Server in the entire environment.

Then the next component was the SLEPOS Branch Server. An ideal environment included one Branch Server in the back office of every store, which provided the network boot and system management infrastructure for the point of service terminals.

Finally there were the SLEPOS Clients. These were the customized operating system images for the point of service terminals themselves.

A lot of our current customers, the big retailers, are still using this 3-tiered SLEPOS stack. However, retailers strongly expressed the need for a unified solution for managing their data center and store infrastructure, something that would help them optimize their operating costs.

This was a core driver for us to enhance our offering targeted at this market, which is why we introduced SUSE Manager into the mix.

SUSE Manager for Retail

What we offer our customers now is a product that provides them the ability to manage their traditional data center infrastructure and, at the same time, is optimized to manage the store IT infrastructure (including the servers and point of service terminals).

This product is called SUSE Manager for Retail.

In a nutshell, we have taken the feature set that the SLEPOS Admin and Branch Servers had to offer and added those functionalities to SUSE Manager. Effectively, SUSE Manager has been enhanced with a feature set that is relevant and tailored to retailers. Going forward, we will continue to add on top of SUSE Manager’s core features and optimize the product for use in retail environments.

SUSE Manager for Retail is simplified and has only 2 main components:

  • A central SUSE Manager Server
  • SUSE Manager for Retail Branch Server (which can be installed in the back offices of every store).

The transition

With SUSE Manager for Retail 3.1 we did not completely replace the SLEPOS Admin and Branch servers with SUSE Manager. That full integration has been achieved with SUSE Manager for Retail 3.2.

What we delivered with SUSE Manager for Retail 3.1 was a close integration between traditional SLEPOS and SUSE Manager. We enabled customers to install the SLEPOS Branch Server and SUSE Manager Proxy on the same host in the store environment, something that was not possible earlier. We packaged them as one product: SUSE Manager for Retail 3.1. With this packaging, customers were still required to install SUSE Manager and SLEPOS as separate products.

Today we are pleased to announce our first fully integrated release, SUSE Manager for Retail 3.2.

SUSE Manager for Retail 3.2

This product delivers best-in-class open source infrastructure management, optimized and tailored specifically for the retail industry. It is designed to help retailers reduce costs, optimize operations and ensure compliance in their environment. It provides a reliable, flexible and open platform for managing point of service and point of sale terminals, kiosks, self-service systems and reverse-vending systems.

The following are the retail optimizations that have been built on top of the core functionalities of SUSE Manager 3.2, to deliver SUSE Manager for Retail 3.2:

  • Simple and flexible image building – To help retailers more quickly build customized images for their POS terminals, thereby saving them time and money. Through the SUSE Manager UI, retailers can now set up an image build host and build customized images, or leverage pre-built image templates to set up their POS systems.
  • Formulas with Forms – To quickly and efficiently set up retail store servers. We are leveraging the forms-based framework of SUSE Manager to configure key services such as DHCP, DNS, PXE, TFTP and FTP on the store server. The store server can then provide the network boot and systems management infrastructure for the point of service terminals in the store.

The store server can be configured in several different configurations:

  • If the store server has a dedicated network interface card and terminals use an isolated internal branch network, then in this configuration, the store server manages the internal network and provides DHCP, DNS, PXE, FTP and TFTP services.
  • If the store server shares a network with the terminals and provides a connection to the central server, then in this configuration the store server is not required to manage a network (DHCP and DNS services). Instead, it acts as a PXE boot server and provides FTP and TFTP services.

It is important to note that SUSE Manager for Retail 3.2 is built on the latest release of SUSE Manager.

Here is how SUSE Manager for Retail 3.2 will prove beneficial in a retailer’s environment:

  • Reduce bandwidth costs, minimize resource needs and ease deployment with the SUSE Manager for Retail Branch Server – This allows the product to easily scale to larger environments.
  • Improve operational efficiencies by providing easy automation of repetitive tasks without the need for advanced scripting skills via re-usable action chains.
  • Ensure retailers can meet their business’ compliance requirements by monitoring and patching devices and container-based workloads to the latest security and patch levels.
  • Easily manage complexity with the extended capabilities in our easy to use forms-based UI. This allows administrators to define/model even more complex configurations, helping ensure more consistent and repeatable installations and deployments of those complex retail configurations.

Learn more about SUSE Manager and SUSE Manager for Retail at:

https://www.suse.com/products/suse-manager/

https://www.suse.com/products/suse-manager-retail/


Source

You Can Play Over 2,600 Windows Games on Linux Via Steam Play

on Monday October 29, 2018 @01:22PM

from the growing-synergy dept.

At the end of August, Valve announced a new version of Steam Play for Linux that included Proton, a WINE fork that made many Windows games, including more recent ones such as Witcher 3, Dark Souls 3 and Dishonored, playable on Linux. Just two months later, ProtonDB says there are over 2,600 Windows games that users can play on Linux, and the number is rapidly growing daily. From a report:

When Valve Software launched Steam Play with Proton, it made it easier for gamers to play Windows games that hadn’t yet been ported to Linux with the click of a button. Not all games may run perfectly on Linux, but that’s also often the case with Windows 10, which cannot play older games as well as previous versions of Windows did, even under Compatibility Mode. In only two months, the database of games that work with Proton has increased to over 2,600 — more than half of the 5,000 Linux-native games that can be obtained through the Steam store.


Source

Fedora Appreciation Week, Qt Announces the Deprecation of Qbs, D Language Front End Merged with GCC, Security Bug in Systemd and IBM Acquires Red Hat

News briefs for October 29, 2018.

The first ever Fedora Appreciation Week will run November 5th to the 11th. This week-long event takes place during the 15th anniversary of the Fedora Project and was organized by the Fedora Community Operations team “to celebrate efforts of Fedora Project contributors and to say ‘thank you’ to each other.” Go here to see how to participate.

The Qt Company announced the deprecation of Qbs. The last Qbs release will come out in April 2019, and the company intends to improve support for CMake significantly and eventually switch to CMake for building Qt itself.

The D language front end has finally merged with GCC 9. According to Phoronix, “The code is merged for GDC including the libphobos library (D run-time library) and D2 test suite. Adding the D support touches more than three thousand files (most of which is test suite cases) and 859,714 lines of code….Yes, the better part of a million new lines.”

A security bug was discovered in systemd last week that can crash a Linux machine or execute malicious code. The Register reports that “maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful”. The vulnerability is in the DHCPv6 client of the systemd management suite.

And finally, you’ve likely already heard that IBM yesterday announced its acquisition of Red Hat for $34 billion. Interesting note: Bob Young, founder of Red Hat, was Linux Journal‘s first editor in chief.

Source

Learn to Work with the Linux Command Line | Linux.com

Open source software isn’t just proliferating within technology infrastructures around the world; it is also creating profound opportunities for people with relevant skills. Organizations of all sizes have reported widening skills gaps in this area, and Linux tops the list as the most in-demand open source skill, according to the 2018 Open Source Jobs Report. With this in mind, in this article series we are taking a closer look at one of the best new ways to gain open source and Linux fluency: the Introduction to Open Source Software Development, Git and Linux training course from The Linux Foundation.

This story is the third in a four-part article series that highlights major aspects of the training course. The first article in the series covered the course’s general introduction to working with open source software, with a focus on such essentials as project collaboration, licensing, legal issues and getting help. The second article covered the course curriculum dedicated to working with Bash and Linux basics.

Working with commands and command-line tools is an essential Linux skill, and the course delves into task- and lab-based instruction on these topics. The discussion of major command-line tools is comprehensive and includes lessons on the following (a few illustrative one-liners are shown after the list):

  • Tools for creating, removing and renaming files and directories
  • Locating files with find and locate
  • Finding character strings in files using grep
  • Substituting strings in files using sed
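
To give a flavor of what those lessons cover, here are a few illustrative one-liners. These are generic sketches with hypothetical file names, not examples taken from the course material:

mkdir -p demo && touch demo/draft.txt && mv demo/draft.txt demo/final.txt   # create, then rename
find /etc -name '*.conf'          # locate files by name under /etc
locate bashrc                     # query the pre-built locate database
grep -rn 'PATH' ~/.bashrc         # find a character string, with line numbers
sed 's/foo/bar/g' demo/final.txt  # substitute strings (prints to stdout; use -i to edit in place)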

There is a Labs module that asks you to set the prompt to show the current directory and encourages you to follow up by changing the prompt to any other desired configuration. In addition to being self-paced, the course focuses on performing meaningful tasks rather than simply reading or watching.
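
A minimal sketch of that prompt exercise in Bash (\w is the escape sequence that expands to the current working directory):

PS1='\w\$ '

Typing this at a shell prompt changes it immediately; adding the line to ~/.bashrc makes the change permanent.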

Overall, the course contains 43 hands-on lab exercises that will allow you to practice your skills, along with a similar number of quizzes to check your knowledge. It also provides more than 20 videos showing you how to accomplish important tasks.

As you go through these lessons, keep in mind that the online course includes many summary slides, useful lists, graphics, and other resources that can be referenced later. It’s definitely worth setting up a desktop folder and regularly saving screenshots of especially useful topics there for handy reference. One example is a slide that summarizes the handy utilities that any user should have in his or her toolbox.

With the groundwork laid for working with the command line and command line tools, the course then comprehensively covers working with Git, including hands-on learning modules. We will explore the course’s approach to this important topic in the next installment in this series.

Source

Introducing Amazon AppStream 2.0 AWS CloudFormation Support and User Pool APIs

Today, Amazon AppStream 2.0 adds two new features to simplify development with AppStream 2.0. You can provision AppStream 2.0 resources using AWS CloudFormation and automate user pool management using new APIs.

With CloudFormation, you can automate creating fleets, deploying stacks, adding and managing user pool users, launching image builders, and creating directory configurations alongside your other AWS resources. To learn how to get started, read AWS CloudFormation support for Amazon AppStream 2.0 resources and API enhancements.
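
As a rough sketch of what the new user pool APIs look like from the AWS CLI (the user name here is hypothetical; consult the AppStream 2.0 documentation for the full parameter set):

aws appstream create-user --user-name jane@example.com --authentication-type USERPOOL
aws appstream describe-users --authentication-type USERPOOL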

Source
