What Operations Professionals Need to Know to Fuel Career Advancement – Linux.com

O’Reilly recently conducted a survey[1] of operations professionals, and the results offer useful information and insights to inform your career planning.

Scripting languages are the most popular programming languages among respondents, with Bash being the most used (66% of respondents), followed by Python (63%), and JavaScript (42%).

Go is used by 20% of respondents, and those who use Go tend to have one of the higher median salaries at $102,000, similar to LISP and Swift. This could be related to the types of companies that are pushing these programming languages. Google and Apple, for example, are very large companies and, as noted, salary and company size are related.

And what about the operating system in which respondents work? Linux tops the charts at 87% usage. Windows is also used frequently (63%), often as a mix between workstations and servers, and in some cases as a front end for Linux/Unix servers.

Source

Amazon EKS now supports additional VPC CIDR blocks

Posted On: Oct 25, 2018

Amazon Elastic Container Service for Kubernetes (EKS) now allows clusters to be created in an Amazon VPC addressed with additional IPv4 CIDR blocks in the 100.64.0.0/10 and 198.19.0.0/16 ranges. This gives customers additional flexibility in configuring the networking for their EKS clusters.

The CIDR blocks supported by Amazon VPC are listed in the VPC documentation, in the table titled IPv4 CIDR Block Association Restrictions.

Previously, EKS customers could only create clusters in VPCs that were addressed with RFC 1918 private IP address ranges. This meant customers were often unable to allocate sufficient private IP address space to support the number of Kubernetes pods managed by EKS.

Now, customers can create EKS clusters in Amazon VPCs addressed with CIDR blocks in the 100.64.0.0/10 and 198.19.0.0/16 ranges. This gives customers more available IP addresses for their pods managed by Amazon EKS and more flexibility for networking architectures. Additionally, by adding secondary CIDR blocks to a VPC from the 100.64.0.0/10 and 198.19.0.0/16 ranges, in conjunction with the CNI Custom Networking feature, it is possible for pods to no longer consume any RFC 1918 IP addresses in a VPC.
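As a concrete sketch, a secondary CIDR from one of these ranges is attached with the standard `aws ec2 associate-vpc-cidr-block` call (the VPC ID below is a placeholder), and a small shell helper can sanity-check that a candidate block really falls inside 100.64.0.0/10:

```shell
# Attach a secondary CIDR from the carrier-grade NAT range to an
# existing VPC (vpc-0abc123 is a placeholder ID):
#   aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc123 --cidr-block 100.64.0.0/16

# Helper: is the candidate block inside 100.64.0.0/10
# (i.e. 100.64.0.0 through 100.127.255.255)?
in_cgn_range() {
  IFS='./' read -r o1 o2 _ <<EOF
$1
EOF
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

in_cgn_range 100.64.0.0/16 && echo "100.64.0.0/16 is usable as a secondary CIDR"
```

The 198.19.0.0/16 range mentioned above would need an analogous check; the helper here covers only the 100.64.0.0/10 case.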

For more information about Amazon EKS networking, visit the documentation.

Please visit the AWS region table to see all AWS regions where Amazon EKS is available.

Source

IBM’s acquisition of Red Hat is huge news for the Linux world

IBM today announced it would be acquiring iconic Linux firm Red Hat in a $34 billion all-cash deal.

According to a joint statement issued by both companies, IBM will pay $190 for each share of Red Hat, with Big Blue intending to absorb its latest purchase into its Hybrid Cloud division.

[NEWS] @IBM to acquire Red Hat and become the world’s leading #hybridcloud provider. https://t.co/goihRICRr3 https://t.co/G8SKS5gsVk pic.twitter.com/GJL4UmBu1B

— Red Hat, Inc. (@RedHat) October 28, 2018

This move is huge for IBM. After spending the past ten years delving into the worlds of artificial intelligence (via Watson) and blockchain with little to show for it, the company is returning to an area where it’s traditionally excelled — enterprise services.

Red Hat, although not quite a household name, is an undeniably significant company, with lots of fingers in lots of pies, especially when it comes to cloud computing and the Linux ecosystem.

The jewel in its crown is arguably the platform-as-a-service (PaaS) provider OpenShift, which directly competes with the Salesforce-owned Heroku and Google App Engine. It also owns and develops Red Hat Enterprise Linux (RHEL), which is employed across several commercial settings, including workstations, servers, and supercomputers.

It’s also worth noting that this is by no means an exhaustive list. Over its 25 years, Red Hat has invested heavily in a variety of enterprise-friendly technologies — from containers and serverless computing, to storage and big data file systems. Thanks to this acquisition, IBM is getting its hands on all of them.

Beyond the two companies, this acquisition has massive implications for the Linux ecosystem.

After all, Red Hat is an enthusiastic contributor to several major Linux projects, playing a role in developing LibreOffice and GNOME, as well as the kernel itself.

For context, in 2016, Red Hat was the second most prolific contributor to the Linux kernel, narrowly trailing silicon mega-titan Intel.

Understandably, many are wondering what this acquisition means for these projects. After all, IBM is no stranger to shifting priorities, and it isn’t afraid to scale back its efforts and workforce as the market dictates. Just ask the OS/2 team.

I’m no clairvoyant, but I don’t think there’s much to worry about here. For starters, both Red Hat and IBM have emphatically stressed that for Red Hat’s open source contributions, things will be very much business as usual. Here’s the pertinent paragraph from the press release:

“With this acquisition, IBM will remain committed to Red Hat’s open governance, open source contributions, participation in the open source community and development model, and fostering its widespread developer ecosystem. In addition, IBM and Red Hat will remain committed to the continued freedom of open source, via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network.”

Worried Linux enthusiasts and developers can also look at IBM’s track record, which is impressive.

Big Blue has historically been a stalwart contributor to Linux and several Linux-related projects. The company first announced its support for the free operating system in 1999, back when Microsoft Windows reigned triumphant across both desktop and server, and Linux was nowhere near as mature as it is today.

By 2008, IBM employed around 600 developers working across over 100 Linux projects, including Xen, the Linux Toolchain, Apache, Eclipse, and the kernel itself.

And weirdly enough, back in 2001, it hired Star Trek: Deep Space Nine actor Avery Brooks as part of a $210 million advertising blitz, where he narrated an advertisement touting the company’s enthusiasm for Linux.

In short, Linux is in IBM’s lifeblood. It has been for an extremely long time. Over the past nineteen years, the company has spent millions — possibly billions — supporting the Linux ecosystem by donating money and developer time.

While it didn’t exactly do this from a place of pure altruism, the fact remains that IBM has had an undeniably positive impact on Linux. Moving forward, it’ll be interesting to see what the acquisition of Red Hat means for the iconic tech company, as well as for the broader Linux ecosystem.

Source

Hot Clone A CentOS Server With Rsync

Hot Clone is the term for completely cloning a running Linux server across the network using rsync. This is useful when you want to create a clone without the downtime that would typically come with taking the original server offline. You might use this, for example, to move a single server into a cluster environment, or in situations where you want to upgrade or replace drives.

This guide makes a couple assumptions:

First both servers need to have the same disk configuration. Either both servers use hardware raid, software raid, or single disks. They typically need to match.

The new server should have the same major install release as the source server. So both servers would be CentOS 6.x or both need to be 7.x.

The new server has hard drive partitions in the same format as the old server, and they are either the same size or can accommodate all of the used space on the source system.

Prepare the systems:

Install needed software packages on both servers:

yum install -y rsync

On the server you want to copy from, perform the following:

Create and edit /root/exclude-files.txt and add the following:

/boot
/dev
/tmp
/sys
/proc
/backup
/etc/fstab
/etc/mtab
/etc/mdadm.conf
/etc/sysconfig/network*

This excludes files which directly pertain to the source system and should not be copied to the new system.

Hot Clone the server:

Once you have saved that file you can go ahead and rsync to the server you want to copy to:

rsync -vPa -e 'ssh -o StrictHostKeyChecking=no' --exclude-from=/root/exclude-files.txt / DESTINATIONIP:/

This will rsync over everything from the source system to the new system. The size of the drives and load on the servers will determine how long the copy will take. Be sure to update DESTINATIONIP with the IP address or hostname of the server you are copying to.

After the rsync has completed, you can reboot the freshly copied system to load everything that has been copied. If you were going to replace the old system with the new system and wanted the same IP addresses, hostname, etc. to be used, you would remove /etc/sysconfig/network* from the exclusion file before running the rsync.

Once the new server is back up from the reboot, log in using the old server's credentials and verify everything is working as expected.

Apr 30, 2017, LinuxAdmin.io

Source

Install Skype on Ubuntu | Linux Hint

Skype is one of the most popular platforms for video chat. It's free to use and comes with strong security features. Skype is also platform-independent, meaning that the service is available to everyone in the world.

The Skype client is available for all the major platforms, including Windows, macOS, and Linux. Ubuntu is, in fact, one of the most popular Linux distros in the world.

Let’s enjoy Skype client on the Ubuntu system!

Getting Skype

Skype provides the client as an installable DEB package for Ubuntu/Debian and their derivatives. Get the latest DEB package of Skype.

Installing Skype

Once the download is complete, fire up a terminal and run the following commands:

sudo dpkg -i skypeforlinux-64.deb
sudo apt install -f

Uninstalling Skype

If you ever wish to remove Skype from your device, run the following command:

# Using "--purge" will remove all the account credentials and configurations from your device
sudo apt remove --purge skypeforlinux

Using Skype

Installation complete? Time to enjoy Skype.

Launch Skype from the menu.

You’ll be on the welcome page of the new Skype client.

You have to log in or sign up for a Skype account.

Enter your credentials for logging into your account.

After successful login, you will have the option to check out whether your microphone is properly configured.

Don’t forget to test your webcam as well.

After everything is set, you’ll be on the Skype dashboard.

Enjoy!

Source

10 Practical Grep Command Examples in Linux

Brief: The grep command is used to find patterns in files. This tutorial shows some of the most common grep command examples that would be specifically beneficial for software developers.

Recently, I started working with Asciidoctor.js and on the Asciidoctor.js-pug and Asciidoctor-templates.js projects. It is not always easy to be immediately effective when you dig for the first time into a codebase containing several thousand lines. But my secret weapon for finding my way through so many lines of code is the grep tool.

I am going to share with you how to use grep command in Linux with examples.

Using grep commands in Linux

Grep command example

If you look at the man page, you will see this short description for the grep tool: "print lines matching a pattern." However, don't be fooled by such a humble definition: grep is one of the most useful tools in the Unix toolbox, and there are countless occasions to use it as soon as you work with text files.

It is always better to have real-world examples to learn how things work. So, I will use the Asciidoctor.js source tree to illustrate some of the grep capabilities. You can download that source tree from GitHub, and if you want, you may even check out the same changeset I used when writing this article. That will ensure you obtain results perfectly identical to those described in the rest of this article:

git clone https://github.com/asciidoctor/asciidoctor.js
cd asciidoctor.js
git checkout v1.5.6-rc.1

1. Find all occurrences of a string (basic usage)

Asciidoctor.js supports the Nashorn JavaScript engine for the Java platform. I did not know Nashorn, so I took that opportunity to learn more about it by exploring the project parts referencing that JavaScript engine.

As a starting point, I checked if there were some settings related to Nashorn in the package.json file describing the project dependencies:

sh$ grep nashorn package.json
"test": "node npm/test/builder.js && node npm/test/unsupported-features.js && node npm/test/jasmine-browser.js && node npm/test/jasmine-browser-min.js && node npm/test/jasmine-node.js && node npm/test/jasmine-webpack.js && npm run test:karmaBrowserify && npm run test:karmaRequirejs && node npm/test/nashorn.js",

Yes, apparently there were some Nashorn-specific tests. So, let's investigate that a little bit more.

2. Case insensitive search in a file set

Now, I want to have a closer look at the files from the ./npm/test/ directory that explicitly mention Nashorn. A case-insensitive search (-i option) is probably better here since I need to find references to both nashorn and Nashorn (or any other combination of upper- and lower-case characters):

sh$ grep -i nashorn npm/test/*.js
npm/test/nashorn.js:const nashornModule = require('../module/nashorn');
npm/test/nashorn.js:log.task('Nashorn');
npm/test/nashorn.js:nashornModule.nashornRun('jdk1.8.0');

Indeed, case insensitivity was useful here. Otherwise, I would have missed the require('../module/nashorn') statement. No doubt I should examine that file in greater detail later.

3. Find non-matching files

By the way, are there some non-Nashorn-specific files in the npm/test/ directory? To answer that question, we can use the "print non-matching files" option of grep (-L option):

sh$ grep -iL nashorn npm/test/*
npm/test/builder.js
npm/test/jasmine-browser-min.js
npm/test/jasmine-browser.js
npm/test/jasmine-node.js
npm/test/jasmine-webpack.js
npm/test/unsupported-features.js

Notice how with the -L option the output of grep has changed to display only filenames. So, none of the files above contain the string “nashorn” (regardless of the case). That does not mean they are not somehow related to that technology, but at least, the letters “n-a-s-h-o-r-n” are not present.

4. Finding patterns in hidden files and recursively in sub-directories

The last two commands used a shell glob pattern to pass the list of files to examine to the grep command. However, this has some inherent limitations: the star (*) will not match hidden files, nor will it match files contained in sub-directories.

A solution would be to combine grep with the find command instead of relying on a shell glob pattern:

# This is not efficient, as it will spawn a new grep process for each file
sh$ find npm/test/ -type f -exec grep -iL nashorn {} \;
# This may have issues with filenames containing space-like characters
sh$ grep -iL nashorn $(find npm/test/ -type f)

As mentioned in the comments in the code block above, each of these solutions has drawbacks. Concerning filenames containing space-like characters, I let you investigate the grep -z option which, combined with the -print0 option of the find command, can mitigate that issue. Don't hesitate to use the comment section at the end of this article to share your ideas on that topic!
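As a head start on that homework, here is how the -print0/-0 pairing can look, demonstrated on a throwaway directory so it runs anywhere; the filenames (with deliberate spaces) are invented for the demo:

```shell
d=$(mktemp -d)
printf 'Nashorn test\n' > "$d/has match.js"
printf 'nothing here\n' > "$d/no match.js"

# -print0 emits NUL-terminated names and -0 consumes them, so
# filenames containing spaces survive the pipe; -iL then prints
# the files that do NOT contain the pattern.
find "$d" -type f -print0 | xargs -0 grep -iL nashorn
```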

Nevertheless, a better solution would use the “recursive” (-r) option of grep. With that option, you give on the command line the root of your search tree (the starting directory) instead of the explicit list of filenames to examine. With the -r option, grep will examine all files in the search directory, including hidden ones, and then it will recursively descend into any sub-directory:

sh$ grep -irL nashorn npm/test/
npm/test/builder.js
npm/test/jasmine-browser-min.js
npm/test/jasmine-browser.js
npm/test/jasmine-node.js
npm/test/jasmine-webpack.js
npm/test/unsupported-features.js

Actually, with that option, I could also start my exploration one level above to see if there are non-npm tests that target Nashorn too:

sh$ grep -irL nashorn npm/

I let you test that command by yourself to see its outcome; but as a hint, I can say you should find many more matching files!

5. Filtering files by their name (using regular expressions)

So, there seem to be some Nashorn-specific tests in that project. Since Nashorn is Java, another question that could be raised would be "are there some Java source files in the project explicitly mentioning Nashorn?".

Depending on the version of grep you use, there are at least two solutions to answer that question. The first one is to use grep to find all files containing the pattern "nashorn", then pipe the output of that first command to a second grep instance filtering out non-Java source files:

sh$ grep -ir nashorn ./ | grep '^[^:]*\.java'
./spec/nashorn/AsciidoctorConvertWithNashorn.java:public class AsciidoctorConvertWithNashorn {
./spec/nashorn/AsciidoctorConvertWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/AsciidoctorConvertWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/asciidoctor-convert.js"));
./spec/nashorn/BasicJavascriptWithNashorn.java:public class BasicJavascriptWithNashorn {
./spec/nashorn/BasicJavascriptWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/BasicJavascriptWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/basic.js"));

The first half of the command should be understandable by now. But what about that "^[^:]*\.java" part?

Unless you specify the -F option, grep assumes the search pattern is a regular expression. That means, in addition to plain characters that will match verbatim, you have access to a set of metacharacters to describe more complex patterns. The pattern I used above will only match:

  • ^ the start of the line
  • [^:]* followed by a sequence of any characters except a colon
  • \. followed by a dot (the dot has a special meaning in regex, so I had to protect it with a backslash to get a literal match)
  • java followed by the four letters "java"

In practice, since grep uses a colon to separate the filename from the matching line, this keeps only lines having .java in the filename section. Worth mentioning: it would also match .javascript filenames. This is something I let you try solving by yourself if you want.
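The caveat can be seen on canned input, no .javascript file required; anchoring the pattern on grep's colon separator is one possible fix (the filenames below are made up):

```shell
# Both lines slip through the unanchored pattern:
printf 'A.java:x\nB.javascript:y\nC.txt:z\n' | grep '^[^:]*\.java'
# Requiring the colon right after ".java" keeps only real .java files:
printf 'A.java:x\nB.javascript:y\nC.txt:z\n' | grep '^[^:]*\.java:'
```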

6. Filtering files by their name using grep

Regular expressions are extremely powerful. However, in this particular case, it seems overkill. Not to mention that, with the above solution, we spend time examining all files in search of the "nashorn" pattern, with most of the results being discarded by the second step of the pipeline.

If you are using the GNU version of grep, which is likely if you are using Linux, you have another solution: the --include option. This instructs grep to search only files whose names match the given glob pattern:

sh$ grep -ir nashorn ./ --include='*.java'
./spec/nashorn/AsciidoctorConvertWithNashorn.java:public class AsciidoctorConvertWithNashorn {
./spec/nashorn/AsciidoctorConvertWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/AsciidoctorConvertWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/asciidoctor-convert.js"));
./spec/nashorn/BasicJavascriptWithNashorn.java:public class BasicJavascriptWithNashorn {
./spec/nashorn/BasicJavascriptWithNashorn.java: ScriptEngine engine = engineManager.getEngineByName("nashorn");
./spec/nashorn/BasicJavascriptWithNashorn.java: engine.eval(new FileReader("./spec/nashorn/basic.js"));

7. Finding words

The interesting thing about the Asciidoctor.js project is that it is a multi-language project. At its core, Asciidoctor is written in Ruby; to be usable in the JavaScript world, it has to be "transpiled" using Opal, a Ruby-to-JavaScript source-to-source compiler, another technology I did not know about before.

So, after having examined the Nashorn specificities, I assigned myself the task of better understanding the Opal API. As the first step in that quest, I searched for all mentions of the Opal global object in the JavaScript files of the project. It could appear in assignments (Opal =), member accesses (Opal.), or maybe even in other contexts. A regular expression would do the trick. However, once again, grep has a more lightweight solution for that common use case. Using the -w option, it will match only words, that is, patterns preceded and followed by a non-word character. A non-word character is the beginning of the line, the end of the line, or any character that is neither a letter, nor a digit, nor an underscore:

sh$ grep -irw --include='*.js' Opal .
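A canned-input illustration of that word-boundary rule (the identifiers are invented): Opal followed by a dot, or standing alone at the end of a line, matches; OpalBuilder does not, because B is a word character:

```shell
printf 'Opal.hash()\nvar x = Opal\nOpalBuilder.create()\n' | grep -w Opal
# matches the first two lines only
```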

8. Coloring the output

I did not copy the output of the previous command since there are many matches. When the output is dense like that, you may wish to add a little color to ease reading. If this is not already configured by default on your system, you can activate that feature using the GNU --color option:

sh$ grep -irw --color=auto --include='*.js' Opal .

You should obtain the same long result as before, but this time the search string should appear in color if it was not already the case.

9. Counting matching lines or matching files

I mentioned twice the output of the previous commands was very long. How long exactly?

sh$ grep -irw --include='*.js' Opal . | wc -l
86

That means we have a total of 86 matching lines in all the examined files. However, how many different files match? With the -l option, you can limit the grep output to the matching filenames instead of displaying matching lines. So that simple change will tell us how many files match:

sh$ grep -irwl --include='*.js' Opal . | wc -l
20

If that reminds you of the -L option, no surprise: as is relatively common, lowercase/uppercase are used to distinguish complementary options. -l displays matching filenames; -L displays non-matching filenames. For another example, I let you check the manual for the -h/-H options.
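As a head start on that homework: -H always prints the filename prefix and -h always suppresses it. A throwaway file (invented for the demo) shows the difference:

```shell
d=$(mktemp -d)
echo 'Opal' > "$d/one.js"

grep -H Opal "$d/one.js"   # prints "<path>/one.js:Opal"
grep -h Opal "$d/one.js"   # prints just "Opal"
```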

Let’s close that parenthesis and go back to our results: 86 matching lines; 20 matching files. However, how are the matching lines distributed among the matching files? We can find out using the -c option of grep, which counts the number of matching lines per examined file (including files with zero matches):

sh$ grep -irwc --include='*.js' Opal .

Often, that output needs some post-processing, since it displays its results in the order in which the files were examined and includes files without any match, something that usually does not interest us. The latter is quite easy to solve:

sh$ grep -irwc --include='*.js' Opal . | grep -v ':0$'

As for ordering things, you may add the sort command at the end of the pipeline:

sh$ grep -irwc --include='*.js' Opal . | grep -v ':0$' | sort -t: -k2n

I let you check the sort command manual for the exact meaning of the options I used. Don’t forget to share your findings using the comment section below!
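For the record, here is what those sort options do on canned grep-style file:count lines: -t: makes the colon the field separator, and -k2n sorts numerically on the second field:

```shell
printf 'b.js:12\na.js:3\nc.js:7\n' | sort -t: -k2n
# a.js:3
# c.js:7
# b.js:12
```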

10. Finding the difference between two matching sets

If you remember, a few commands ago I searched for the word "Opal." However, if I search the same file set for all occurrences of the string "Opal," I obtain about twenty more answers:

sh$ grep -irw --include='*.js' Opal . | wc -l
86
sh$ grep -ir --include='*.js' Opal . | wc -l
105

Finding the difference between those two sets would be interesting. So, what are the lines containing the four letters “opal” in a row, but where those four letters do not form an entire word?

That question is not so easy to answer, because the same line can contain both the word Opal and some larger word containing those four letters. But as a first approximation, you may use this pipeline:

sh$ grep -ir --include='*.js' Opal . | grep -ivw Opal
./npm/examples.js: const opalBuilder = OpalBuilder.create();
./npm/examples.js: opalBuilder.appendPaths('build/asciidoctor/lib');
./npm/examples.js: opalBuilder.appendPaths('lib');

Apparently, my next stop would be to investigate the opalBuilder object, but that will be for another day.

The last word

Of course, you will not understand a project's organization, much less its code architecture, just by issuing a couple of grep commands! However, I find that command indispensable for identifying landmarks and starting points when exploring a new codebase. So, I hope this article helped you understand the power of the grep command and that you will add it to your tool chest. No doubt you will not regret it!

Source

AMD Ryzen Threadripper 2920X & 2970WX Linux Performance Benchmarks

Beginning today the AMD Ryzen Threadripper 2970WX and 2920X processors are shipping and we are now allowed to share our performance benchmarks for these latest Zen+ Threadripper 2 processors. Here’s a look at the Linux performance and related metrics for these new 12-core/24-thread and 24-core/48-thread processors.

The availability of the Threadripper 2920X and 2970WX rounds out the initial line-up of AMD Threadripper 2 processors announced this summer. Back in August came the launch of the Threadripper 2950X, a very viable upgrade over the first-generation Threadripper 1950X, followed by the jaw-dropping top-end Threadripper 2990WX with its 32 cores and 64 threads.

As we’ve shared in the days and months that followed, the 2990WX in particular has been quite legendary on Linux (as well as BSD) operating systems, while Microsoft Windows had some initial (and seemingly still ongoing) scheduling bottlenecks with Threadripper 2. These high core/thread count processors also make a lot of sense for Linux users, who tend to compile code and run other parallel tasks more often than a typical Windows user would. Now here we are today to look at the Threadripper 2920X and 2970WX to see how these HEDT processors perform compared to the other Threadripper parts as well as the Intel competition.

The Threadripper 2920X is a 12-core / 24-thread processor with a 3.5GHz base frequency and a 4.3GHz boost frequency. This CPU has a 32MB L3 cache, is manufactured on a 12nm process, and shares the common features of the Threadripper 2 line-up, like quad-channel DDR4-2933 support. The 2920X has a 180 Watt TDP like the Threadripper 2950X. This CPU is launching at $649 USD, compared to the Threadripper 2950X in its 16C/32T configuration with the same base clock frequency but a 4.4GHz boost frequency (+100MHz) priced at $899 USD. So it’s a nice step above what is offered by the current top-end Ryzen 7 (2700X) and only about $100 to $150 more than the Intel Core i9 9900K 8C/16T part.

The Threadripper 2970WX meanwhile comes in at $1299 USD, fitting between the $899 Threadripper 2950X and the $1799 Threadripper 2990WX. The Threadripper 2970WX offers 24 cores / 48 threads and matches the base/boost clock frequencies of the 2990WX at 3.0GHz and 4.2GHz, respectively. The specs are just like the Threadripper 2990WX’s, but with 24 cores / 48 threads rather than 32 cores / 64 threads, which leads to a price $500 lower. The 2970WX still maintains a 250 Watt TDP and 64MB L3 cache.

The AMD Threadripper 2920X and 2970WX processors continue to work with all existing AMD X399 motherboards; for my launch-day testing I have mostly been using the Gigabyte X399 AORUS GAMING 7 motherboard. Thanks to AMD for supplying these review samples in time to once again deliver launch-day Linux performance results, especially given the many multi-threaded workloads Linux users tend to run.

Source

Open Source 3D Printing: Exploring Scientific and Medical Solutions

3D Printing is not a new thing: the industry began in the early 1980s and is very popular today. But how different is Open Source 3D Printing from proprietary designs? How does this affect its applications in Science and Medicine? Let’s read on.

What is 3D Printing All About?

Just like a conventional computer printer prints in 2D on paper, the job of a 3D printer is to create actual three-dimensional objects, solidified from a digital 3D file with the aid of a computer, of course. Most methods work by adding material layer by layer.

Materials can be in liquid or powder form, meant to be fused together, and serve as input for the 3D printer, just as inkjet 2D printers require ink cartridges. The objects created can be of almost any geometrical shape.

A more industrial term for 3D Printing is Additive Manufacturing.

Why is 3D printing so useful?

3D Printing

3D Printing is so popular because its applications are practically limitless. Let’s briefly look into some of these applications, although our main focus will be on Applied Science and Medical applications.

1. Rapid Prototyping

In 3D Printing, Rapid Prototyping is a process in which smaller parts of a larger device can be quickly manufactured for enhanced productivity, with the help of 3D CAD. This is a great way to test the usability of prototypes for industry standards, hence the term.

2. Vehicles

There is a wide variety of 3D printed manufacturing processes for aviation, automotive, aerospace, shipbuilding, and more.

3. Environment

3D Printed coral structures are now being developed to save our dying coral reefs.

4. Construction

Parts of entire buildings can now be created with 3D Printing that can all be reassembled later for the construction of various architectures. You can now 3D Print your house in under a day!

5. Dentistry

Did you know that even teeth can be 3D Printed? Think of how accurately they can be designed to replace or repair teeth!

6. Gadgets and Tools

You can even 3D print your own customized gadgets and tools for personal use!

7. Organs

Yes, that’s correct: 3D Printing research has made so much progress that it is now possible to recreate human organs ready for transplant into patients whose liver, kidney, heart, lungs, or any other vital organ is damaged beyond “repair”.

Now that we have seen some of the various applications, let us ponder which of the following two approaches is more suitable for them:

Proprietary (Closed Source) 3D Printing

Proprietary 3D Printing, as the phrase suggests, uses proprietary software that does not enable access to source code for community-wide development. Any changes done on the hardware will also void your warranty if you happen to own a proprietary 3D Printer. If you need to change the way the printer works in order to customize it for your specific requirement, you are barred by a number of such issues.

If such rules are followed for any of the 3D Printing applications that we discussed in the above section, it becomes really difficult to focus on actual project objectives.

Proprietary 3D Printing can be really expensive, not just in terms of money, but also if you consider time, which is also quite valuable to consider while working on a 3D Printing project.

Open Source 3D Printing

Open Source 3D Printing eliminates all the issues we just discussed in the proprietary section. Not only does it reduce costs, it enables easier innovation to solve issues faced during 3D manufacturing.

Apparently, the phrase, “Open Source 3D Printing” is also gaining popularity as is evident with a simple search online.

It is now possible for users to go completely Open Source, greatly reducing production time and manufacturing costs!

Examples of Applied Science and Medical Solutions Achieved with Open Source 3D Printing

We thought about which of these many applications is most significant for radically enhancing and sustaining the quality of our life and our planet, and hence we decided to specifically explore the Scientific and Medical Solutions to do just that.

So in this final and most important section, let us pick up the related applications we just discussed and look in detail at some examples where we feel an Open Source approach is most necessary:

1. Saving Our Coral Reefs

3D Printed Coral Reefs developed by Reef Design Lab

Coral Reefs are an extremely important part of our planet’s biodiversity and they are dying.

3D Printed coral reefs are now a very promising initiative to help restore them. Reef Design Lab has recently made it possible for such structures to support coral life. The 3D models involved in the project will be made Open Source so that researchers who want to contribute can actively take part.

2. Replacement of Teeth

3D Printed teeth? Yes, that’s a definite possibility today! There’s also an interesting improvement in the design: these teeth are made of a material that is anti-bacterial in nature! This makes it possible to kill the bacteria responsible for tooth decay on contact with the food you chew!

3. Bioprinting

A 3D Bioprinter is a device that uses “bio-ink” as its material to 3D Print bioengineered tissue.

The following short video describes the process of Bioprinting a human ear. Note how they do not use plastic or rubber but living tissue as a biomaterial!

The Open Bioprinting Initiative

As we have learned, Tissue Engineering is greatly driven by 3D Printing technology. We should also consider that every patient is different, so an open platform that allows customized manufacturing for tissue and organ generation is necessary.

An open system that enables such customized printing of biomaterials diverse in nature will make it much easier to conduct research in Tissue Engineering.

The Open Bioprinting Initiative was a step toward this same primary objective. The related paper is not Open Access, but for educational purposes it has been made available in their GitHub repository named Papers.

The paper shows how an Open Source multi-channel 3D Bioprinting system is important in terms of both Hardware and Software. It also notes the cost-effectiveness of designing and integrating the system with an Open Source approach to find optimal conditions for 3D Bioprinting.

The Quest for a Fully Functional Bioprinted Heart!

We all know how important the heart is for our health. A medical technology company named Biolife4D recently demonstrated their ability to 3D Bioprint Human Heart Tissue! This is a remarkable achievement!

They use living cells to bioprint biological structures. A patient’s own white blood cells were reprogrammed into pluripotent stem cells and then into cardiac cells, and the whole process took only a matter of days to generate a complete cardiac patch.

Currently, their research involves developing individual parts of the heart, such as valves and blood vessels. Their ultimate objective is to create a fully functional bioprinted heart.

We looked online for their Open Source repositories but were unable to find any. We hope they make some of their research Open Source in the future so that more academics and researchers can collaboratively contribute towards developing a fully functional 3D Bioprinted Heart. Such a move would also greatly empower initiatives like Open Bioprinting.

Applied Nanotechnology for Organ Transplant

“The field of tissue engineering is advancing steadily, partly due to advancements in rapid prototyping technology. Even with increasing focus, successful complex tissue regeneration of vascularized bone, cartilage and the osteochondral interface remains largely illusive. This review examines current three-dimensional printing techniques and their application towards bone, cartilage and osteochondral regeneration. The importance of, and benefit to, nanomaterial integration is also highlighted with recent published examples. Early-stage successes and challenges of recent studies are discussed, with an outlook to future research in the related areas.”

Nowicki, M., Castro, N. J., Rao, R., Plesniak, M., & Zhang, L. G. (2017). Integrating three-dimensional printing and nanotechnology for musculoskeletal regeneration. Nanotechnology, 28(38), 382001. doi:10.1088/1361-6528/aa8351

In our previous Open Science article, we discussed nanotech and open source in detail, mentioning this paper in its summary. Nanotechnology and 3D Printing are strongly interrelated.

We discussed materials that are used to create 3D objects via the printers. These materials can also be designed at the nanoscale.

Because the materials are designed bottom-up from the nanoscale, enabling the three essential precision levels (nano, micro and macro), it is now possible to obtain properties like maximum strength with minimal weight. This means we can now tune the elasticity, strength or hardness of such 3D Printed objects with high accuracy.

Such extremely high accuracy is of utmost importance in the development of 3D Printed human organs, which could actually make it possible to save countless lives. Tissue Engineering would be greatly enhanced, promoting the effective manufacturing of 3D printed bone, cartilage or osteochondral tissue.

And that’s not all: as we have already seen, the same applies to other, even more vital organs.

4. Drug Discovery

“The current achievements include multifunctional drug delivery systems with accelerated release characteristic, adjustable and personalized dosage forms, implants and phantoms corresponding to specific patient anatomy as well as cell-based materials for regenerative medicine.”

Jamróz, W., Szafraniec, J., Kurek, M., & Jachowicz, R. (2018). 3D Printing in Pharmaceutical and Medical Applications – Recent Achievements and Challenges. Pharmaceutical Research, 35(9). doi:10.1007/s11095-018-2454-x

We previously discussed why Open Source Pharma has been called “Linux for Drugs”. 3D Printing strengthens that initiative because it offers greater flexibility, saves time, and manufactures medicine with extreme precision. Such a drug discovery method uses 3D Printing’s basic layer-by-layer, CAD-driven process to formulate drug materials with the correct dosage.

The FDA approved the first 3D Printed drug some years ago. 3D Printed drug development addresses the challenges of conventional manufacturing techniques in pharmaceutical units. Its greater advantage lies in its far better ability to create quality drugs in terms of drug loading, drug release, drug stability and pharmaceutical dosage form stability, as described in much detail in this Open Access paper.

Summary

So in this extensive article covering 3D Printing, we started by briefly introducing the concept and then explored its significance through different example applications.

Further ahead, we differentiated between Proprietary and Open Source 3D Printing Models to understand the advantages of the latter.

Finally, we focused on Scientific and Medical Solutions for Open Source Bioprinting by looking into initiatives for saving our corals, teeth replacement with anti-bacterial abilities, Bioprinting with a focus on Open Source Bioprinting, and Applied Nanotechnology for Organ Transplant. In our final subsection, we also highlighted the role of 3D Printing in Drug Discovery.

These are only some of the many applications of 3D Printing. We believe Proprietary manufacturers need to migrate towards Open Source business models, which would broaden the benefit of these technologies for our planet.

What are your views? Do you think there should be more effort in Open Bioprinting and other 3D Printing Applications? Have you ever been involved with 3D Printing? Please share your thoughts with us in the comments below.

Source

Download Elementary OS 5.0

elementary OS is an open source operating system based on Ubuntu Linux, the world’s most popular free OS, and built around its own Pantheon desktop environment, which draws on GNOME technologies. It features its own theme, icons and applications.

Distributed as a 64-bit Live DVD

The system is distributed as a single Live DVD ISO image for 64-bit hardware platforms; 32-bit systems are no longer supported. It allows users to run the live environment directly from a USB flash drive or a blank DVD.
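On Linux, the live image can be written to a USB stick from the command line with `dd`. The sketch below uses placeholder names (the ISO filename and a stand-in target file) so it runs safely as-is; for a real stick you would point TARGET at the device node found with `lsblk` (e.g. /dev/sdX) and run the `dd` line with sudo, bearing in mind that it irrevocably overwrites the target:

```shell
#!/bin/sh
# Sketch: writing a live ISO image byte-for-byte with dd.
# ISO and TARGET are placeholders; for a real USB stick, set TARGET
# to the device node (check with `lsblk`) and run dd with sudo --
# it will overwrite whatever TARGET points at.
ISO="elementaryos.iso"     # placeholder for the downloaded image
TARGET="usb-image.bin"     # stand-in file so this sketch runs safely

# Create a small dummy "ISO" so the example is self-contained.
head -c 1048576 /dev/urandom > "$ISO"

# Copy the image; conv=fsync flushes writes to disk before dd exits.
dd if="$ISO" of="$TARGET" bs=4M conv=fsync 2>/dev/null

# Verify the copy is byte-identical to the source image.
cmp -s "$ISO" "$TARGET" && echo "images match"
```

After writing to a real device, a final `sync` before unplugging the stick ensures all buffered data has reached it.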

Boot options

The design of the boot prompt and its default functionality is unchanged from Ubuntu, allowing users to run a memory test, boot an existing operating system from the first disk drive, test the OS without installing it, or install it directly (not recommended).

If you don’t press a key to force booting from the external USB stick or DVD disc, it will automatically load and start the live desktop environment, which comprises a top panel, from which users can access the unique main menu and launch apps, as well as a dock (application launcher) on the bottom edge of the screen.

Default applications

Default applications include the Midori web browser, Nautilus (Files) file manager, Empathy multi-protocol instant messenger, File Roller archive manager, Geary email client, GParted disk partition editor, Totem movie player, Evince document viewer, Shotwell image viewer and organizer, and Scratch text editor.

It also comes with applications developed in-house, such as its calendar and music clients, called Calendar and Music. Indeed, everything in elementary OS is designed and engineered with meticulous attention to detail, setting its own standards among Linux-based operating systems.

You can add even more applications using the included AppCenter tool, from which you can also update or remove applications. It is also possible to install the operating system directly from the live session using the graphical installer provided on the dock.

Bottom line

What can we say? elementary OS is an extraordinary project that provides users with a distinctive and highly customizable operating system, based on and compatible with the current Ubuntu LTS distribution.

Source
