Innovating Nanotechnology with Open Science and AI

Nanotechnology is a very popular buzzword, encompassing several remarkable applications across Science and Technology. In this new article on Open Science and Artificial Intelligence, we will explore how both of them impact Nanotechnology research.

Trivia: The term “Open Source” was coined by futurist Christine Peterson, an American nanotechnologist and co-founder of the Foresight Institute, which is primarily focused on Nanotechnology research.

What is Nanotechnology, again?

Before understanding what Nanotechnology is, let’s first look at the term “nano” (this might remind you of the text editor that is a favourite of many of us Linuxers in the FOSS community!). In both cases, “nano” simply refers to a scale of measurement. For example, if we measure distance in nanometers (nm), the equivalent value in meters is:

1 meter = 1,000,000,000 nanometers;

that is, if you take one billionth of a meter, what you get is 1 nm. It is an extremely small scale of measurement. The video included in this section takes you down to that scale and explains Nanotechnology in a very simple manner.

Nanotechnology is the application of techniques from Science, Engineering and Technology to study phenomena at the nanoscale. In general, such studies are carried out in the range of 1–100 nm.

Why is Nanotechnology so significant?

Nanotechnology is of immense significance due to its wide variety of applications in diverse fields such as biology, physics, chemistry, materials science and many others.

Its significance is easier to appreciate if we look at some of its noteworthy applications:

1. Healthcare

Since Nanotechnology works at the nanoscale, it lets us work at the molecular and even subatomic level. Nanomedicine, for instance, has created a revolution in the field of drug delivery because nanotechnology enables the therapeutic molecule (contained in the medicine) to lock on to the desired protein right on target after consumption. This is carried out with the help of Nanoparticles.

It is due to Nanotechnology that chemotherapy can now be focused only on the disease-affected area, so the whole body need not go through the process of cancer treatment. The immune system is thus spared, since chemotherapy involves the use of toxic chemicals to get rid of cancerous regions.

2. Nanorobotics

You might have already heard about Nanobots. Following from our earlier discussion of the term “nano”, Nanobots are intelligent machines built at the nanoscale.

Nanobots are extremely helpful in medicine and industry. They can be preprogrammed to carry out a specific task. For example, Nanobots can be used to tackle oil pollution in a very effective manner, thus helping in cleaning up the environment.

AI (Artificial Intelligence) today greatly empowers Nanorobotics. We will return to this in a later section, where we will see how effectively AI and Nanotechnology converge.

3. Biomaterials

Biomaterials are substances used medically to treat or diagnose disease. They can include living tissue or artificially created material used in biological systems to repair, replace or stimulate a damaged biological function. In the video title above, “Biomimetic”, as the word hints, means mimicking the behaviour of a specific biological system.

Since Nanotechnology allows nanoscale precision, it is a boon for developing Biomaterials. For example, in bone tissue engineering, ceramics, polymers, metals and composites can be developed at the nanoscale with extreme accuracy. Such nanophase biomaterials are of great significance in orthopaedic implants.

Why do we need Open Source Nanotechnology?

An article titled “Make nanotechnology research open-source”, published in the journal Nature, encourages the adoption of an Open Source approach and explains in a very simple manner how innovation in Nanotechnology can be greatly hindered by patent abuse.

Excessive patenting in Nanotechnology:

  • increases costs,
  • slows down technical development, and
  • removes fundamental knowledge from the public domain.

The article also has a separate section titled “Open Source Alternatives”, which highlights how an Open Source model would allow Nanotechnology companies to freely use the best tools, materials and devices available for carrying out research and applying the technology without worrying about IP monopolies.

License fees would be eliminated, reducing costs. The savings can be spent on innovation instead, which is vital for a company’s survival. This openness also creates room for small startups to enter the market and innovate in Nanotechnology research.

The field of nanotechnology is a combination of information (such as chemical formulae), software (for example, modelling tools) and hardware (such as atomic force microscopes). All three areas can adopt open-source principles, and some steps have already been taken towards this.

Pearce, J. M. (2012). Make nanotechnology research open-source. Nature, 491(7425), 519-521. doi:10.1038/491519a

An Open Science Approach towards Nanotechnology

Now that we have seen how Open Source can revolutionize Nanotechnology research, let us discuss three important initiatives in Open Source Nanotechnology. Recall from our first Science article that Open Science implies Open Source, Open Access, Open Data and Open Standards.

1. nanoHUB: A Massive Open Source Initiative in Nanotechnology

Also mentioned in the article we just discussed, nanoHUB is an initiative begun in 2002 by the US National Science Foundation, which established a university network called the Network for Computational Nanotechnology (NCN) to support the National Nanotechnology Initiative.

nanoHUB.org has enabled researchers, educators, and professionals to collaborate and share resources in order to solve nanotechnology problems.

NCN has three noble goals:

  • bringing computational tools online,
  • making the tools easy to use, and
  • educating users about the tools and nanoscience.

Read more about it in the paper here, which was written from an educational perspective. You can also read a more recent paper here, which contains some useful references on nanoHUB.

2. caNanoLab: To speed up the use of nanotechnology in biomedicine

Another Open Source initiative, caNanoLab enables information sharing across research communities to accelerate and validate the use of nanotechnology in biomedicine.

Just like Bioinformatics (discussed in a previous Open Science article), Nanoinformatics has emerged as a field of study. It deals primarily with data related to Nanotechnology: after the data is collected and validated, it is stored and analyzed through various methods for useful applications. caNanoLab makes Nanoinformatics studies much easier.

3. NBI: Nano-Biomaterials Interactions Knowledgebase

The NBI Knowledgebase was created for use by the industry, academia, the general public, and regulatory agencies as a platform for an unbiased understanding of how biological systems are affected by nanomaterial exposure.

On the portal, you will find two sections:

Nanomaterial Library

Here you can look up a library of Nanomaterials with notable parameters like material type, core, surface chemistry, shape, charge, size and dendrimer generation.

Analysis of Nanomaterials

This section offers the same parameters as the library, along with two additional options, Heatmap and Plot, intended for analytical display.

Now that we have covered three Open Science initiatives in Nanotechnology, let’s conclude this section by leaving this link containing some exhaustive resources for Nanotechnology research. The page belongs to the eNanoMapper database, which also contains links to other initiatives.

Open Source NanoAI: Open Convergence of AI and Nanotechnology

It was perhaps inevitable that AI (Artificial Intelligence) and Nanotechnology would one day converge. This convergence has opened up a whole new era of amazing possibilities.

Today’s AI can solve many challenges faced by nanotechnologists, including:

  • Interpreting results obtained from Nanoscale experiments
  • Estimation of multiple parameters effectively
  • Automatic characterization of Nanomaterial properties and complex I/O responses
  • System optimization
  • Data and algorithm design for Nanocomputers

Read more about Artificial Intelligence and Nanotechnology here.

If we consider all of the above applications of AI in Nanotechnology from an Open Source perspective, the benefits are clearly amplified: Nanotechnologists can collaborate effectively by sharing open information, source code and datasets, which can greatly speed up Nanotechnology research with applied AI.

So, Open Source NanoAI means working with FOSS that implements both Artificial Intelligence and Nanotechnology.

Did you know that AI could use nanotechnology to create human organs to replace and repair damaged ones, allowing people to live longer? There’s more: AI can even be used to create artificial meat from nano-stem cells!

Stem cells, as we see, can be put to a higher purpose. With Nanotechnology, stem cells can be transformed into bone cells on command. The process could also be used to treat deadly conditions such as heart disease and Parkinson’s disease.

AI is a driving force in Nanorobotics for therapeutics involving the immune system. Nanobots can use unsupervised machine learning to identify damaged human cells. This is possible because AI programs can teach Nanobots to differentiate between healthy and damaged cells, drawing on a vast built-in library covering Nanoinformatics and the human body.

Read more about the application of AI in Nanorobotics here.

Ongoing Research in Nanotechnology

Let’s now look at some interesting recent research in the field. We found many studies and picked two:

1. Formulation of nanoparticles to promote crop immunity

This can be applied in rice crop fields to make rice plants immune to fungi, which can prove to be great news for farmers and agriculturists.

doi: 10.1101/339283

2. Delivering medicine through nanocarriers via nose-to-brain

Nasal delivery of surface modified nanomedicines has been proposed for the treatment of several central nervous system conditions including:

  • Migraines
  • Sleep disorders
  • Viral infections
  • Brain tumors
  • Multiple sclerosis
  • Schizophrenia
  • Parkinson’s disease
  • Alzheimer’s disease
  • Obesity

A benefit of this approach is that the side effects of traditional drugs are much less of a concern, since nanocarriers delivered nasally can bypass the blood-brain barrier entirely.

doi: 10.3390/pharmaceutics10010034

Summary

So, in this Open Science and AI article, we introduced how Nanotechnology works and why it is an important field of study, sharing three example areas: Healthcare, Nanorobotics and Biomaterials. We then saw why Open Source Nanotechnology is necessary to carry out research in the field more effectively.

Further on, we saw how an Open Science Approach drives Nanotechnology at a rapid pace. We saw three Open Science Initiatives, namely, nanoHUB, caNanoLab and NBI.

We also highlighted how AI and Nanotechnology converge for a common purpose and finally, we noted some ongoing research work in the field of Nanotechnology.

Thank you for reading. Please share any feedback in the comments section below. Our next article will be about 3D printing, and will also include some discussion of interesting nanoscale applications.

Source

Download KaOS 2018.10

KaOS is an open source Linux distribution built around the KDE Plasma Workspaces and Application project, as well as the pacman package manager software from the Arch Linux operating system.

Distributed as a 64-bit Live DVD

The system is distributed as a single Live DVD ISO image that supports only 64-bit hardware platforms. It can be written to a blank DVD disc with any CD/DVD burning software, or to a USB flash drive using the UNetbootin application.
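If you prefer the command line to UNetbootin, the dd utility can also write the image to a USB stick. This is only a sketch with placeholder names; double-check the target device node, as this overwrites it completely:

sudo dd if=KaOS-2018.10-x86_64.iso of=/dev/sdX bs=4M status=progress && sync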

KaOS is completely independent and provides users with a rolling-release system, which will make sure that their installations will always be up-to-date without requiring them to download a new ISO image and upgrade the entire OS.

Boot options

The boot medium allows users to run the live environment with support for Nvidia and AMD/ATI Radeon graphics cards, run a memory test, detect the hardware components, or boot the currently installed operating system.

Based on Arch Linux and built around KDE

As mentioned, the live session is powered by the KDE project, which provides a modern computing experience with a neat collection of hand-picked open source applications for common tasks.

Its main goal is to be small and fully focused on KDE and Qt technologies. It is based on the Arch Linux operating system and uses the pacman application as its default package manager for installing, removing and updating software packages.

One of its key features is the graphical installer provided on the Live DVD, which not only allows novice users to install the operating system with only a few mouse clicks, but also provides advanced configuration options for experienced Linux users.

Default applications

Default applications include the QupZilla web browser, Calligra office suite, Quassel IRC client, Krita digital painting software, Clementine music player, Plasma Media Center, Kdenlive video editor, Dragon Player video player, and many others.

Even though it’s based on the Arch Linux operating system, KaOS has its own software repositories, comprising Core, Main and Apps groups, which give users quick access to some of the best and most useful applications, libraries and core components.

Bottom line

If you like KDE and Arch Linux-based distributions, you should really give KaOS a try. Who knows, it might become your only operating system.


Source

New Custom Linux Distro is Systemd-Free, Debian-Based, and Optimized for Windows 10

Posted by EditorDavid on Saturday September 22, 2018 @11:34AM from the Windows-shopping-at-the-Microsoft-Store dept.

An anonymous reader quotes MSPowerUser:
Nearly every Linux distro is already available in the Microsoft Store, allowing developers to use Linux scripting and other tools running on the Windows Subsystem for Linux (WSL). Now another distro has popped up in the Store, and unlike the others it claims to be specifically optimised for WSL, meaning a smaller and more appropriate package with sane defaults which helps developers get up and running faster.

WLinux is based on Debian, and the developer, Whitewater Foundry, claims their custom distro will also allow faster patching of security and compatibility issues that appear from time to time between upstream distros and WSL… Popular development tools, including git and python3, are pre-installed. Additional packages can be easily installed via the apt package management system… A handful of unnecessary packages, such as systemd, have been removed to improve stability and security.

 

The distro also offers out-of-the-box support for GUI apps with your choice of X client, according to the original submission.

WLinux is open source under the MIT license, and is available for free on GitHub. It can also be downloaded from Microsoft Store at a 50% discount, with the development company promising the revenue will be invested back into new features.

 


Source

Git It Right » Linux Magazine

The Git version control system is a powerful tool for managing large and small software development projects. We’ll show you how to get started.

With its egalitarian spirit and tradition of strong community involvement, open source development doesn’t scale very well without some form of version control.

Over the past several years, Git [1] has settled in as the most viable and visible version control tool for the Linux space. Git was created by Linus Torvalds himself, and it got its start as the version control system for the Linux kernel development community (see the box entitled “The Birth of Git”). Since then, Git has been adopted by hundreds of open source projects and is the featured tool on several large code-hosting sites, such as GitHub.

Even if you aren’t a professional developer, if you work in the Linux space, you occasionally need to download and compile source code, and, more often than not, that means interacting with Git. Many Linux users pick up occasional Git commands on the fly without ever getting a formal introduction to what Git is and how it works. This article is the first in a two-part series aimed at building a better understanding of Git for everyday Linux users. This first article shows how to install Git, create a Git project, commit changes, and clone the repository to a remote location. Next month, you’ll learn some advanced techniques for managing code in Git.
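As a taste of what’s ahead, the commands below sketch those four steps on a Debian-style system; the project name, identity details, and README file are placeholders:

sudo apt install git                           # install Git (package managers vary by distro)
git config --global user.name "Your Name"     # one-time identity setup for commits
git config --global user.email "you@example.com"
git init myproject && cd myproject             # create a Git project
echo "hello" > README && git add README        # stage a first file
git commit -m "Initial commit"                 # commit changes
cd .. && git clone myproject myproject-clone   # clone the repository to another location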

[…]


Source

The Professional Approach to Upgrading Linux Servers

With the release of Ubuntu 18.04, I thought it would be the perfect time to talk about server upgrades. Specifically, I’m going to share with you the process that I’m using to perform upgrades.

I don’t shy away from work, but I hate doing work that really isn’t needed. That’s why my first question when it comes to upgrades is:

Is this upgrade even necessary?

The first thing to know is the EOL (End of Life) for support for the OS you’re using. Here are the current EOLs for Ubuntu:

Ubuntu 14.04 LTS: April 2019
Ubuntu 16.04 LTS: April 2021
Ubuntu 18.04 LTS: April 2023

(By the way, Red Hat Enterprise Linux delivers at least 10 years of support for their major releases. This is just one example why you’ll find RHEL being used in large organizations.)

So, if you are thinking of upgrading from Ubuntu 16.04 to 18.04, consider if the service that server provides is even needed beyond April 2021. If the server is going away in the next couple of years, then it probably isn’t worth your time to upgrade it.

If you do decide to go ahead with the upgrade, then…

Determine What Software Is Being Used

Hopefully, you have a script or used some sort of documented process to build the existing server. If so, then you have a good idea of what’s already on the server.

If you don’t, it’s time to start researching.

Look at the running processes with the “ps” command. I like using “ps -ef” because it shows every process (-e) with a full-format listing (-f).

ps -ef

Look at any non-default users in /etc/passwd. What processes are they running? You can show the processes of a given user by using the “-u” option to “ps.”

ps -fu www-data
ps -fu haproxy

Determine what ports are open and what processes have those ports open:

sudo netstat -nutlp
sudo lsof -nPi

Look for any cron jobs being used.

sudo ls -lR /etc/cron*
sudo ls -lR /var/spool/cron

Look for other miscellaneous clues such as disk usage and sudo configurations.

df -h
sudo du -h /home | sort -h
sudo cat /etc/sudoers
sudo ls -l /etc/sudoers.d

Determine the Current Software Versions

Now that you have a list of software that is running on your server, determine what versions are being used. Here’s an example list for an Ubuntu 16.04 system:

  • HAProxy 1.6.3
  • Nginx 1.10.3
  • MariaDB 10.0.34

One way to get the versions is to look at the packages like so:

dpkg -l haproxy nginx mariadb-server

Determine the New Software Versions

Now it’s time to see what version of each piece of software ships with the new distro version. For Ubuntu 18.04, you can use “apt show PKG_NAME” (note that Debian/Ubuntu package names are lowercase):

apt show haproxy

To display just the version, grep it out like so:

apt show haproxy | grep -i version

Here’s our list for Ubuntu 18.04:

  • HAProxy 1.8.8
  • Nginx 1.14.0
  • MariaDB 10.1.29

Read the Release Notes

Now, find the release notes for each version of each piece of software. In this example, we are upgrading HAProxy from 1.6.3 to 1.8.8. Most software these days conforms to the Semantic Versioning guidelines. In short, given a version number MAJOR.MINOR.PATCH, increment the:

MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.

This means we’re most concerned with major versions, somewhat concerned with minor versions, and can pretty much ignore the patch version. So we can think of this upgrade as going from 1.6 to 1.8.
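Incidentally, if you want to script this kind of comparison, dpkg can compare version strings for you. A minimal sketch using the versions above:

# Exits 0 (success) when the first version is lower than ("lt") the second
dpkg --compare-versions 1.6.3 lt 1.8.8 && echo "1.8.8 is the newer version"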

Because it’s the same major version (1), we should be fine to just perform the upgrade. That’s the theory, anyway. It doesn’t always work in practice.

In this case, read the release notes for HAProxy versions 1.7 and 1.8. Look for any signs of backward compatibility issues such as configuration syntax changes. Also look for new default values and then consider how those new default values could affect the environment.

Repeat this process for the other major pieces of software. In this example that would be going from Nginx 1.10 to 1.14 and MariaDB 10.0 to 10.1.

Make Changes Based on the Release Notes

Based on the information from the release notes, make any required or desired adjustments to the configuration files.

If you have your configuration files stored in version control, make your changes there. If you have configuration files or modifications performed by your build scripts, make your changes there. If you aren’t doing either one of those, DO IT FOR THIS DEPLOYMENT/UPGRADE. 😉 Seriously, just make a copy of the configuration files and make your changes to them. That way you can push them to your new server when it’s time to test.

If you’re not sure what configuration file or files a given service uses, refer to its documentation. You can read the man page or visit the website for the software.

Also, you can list the contents of its package and look for “etc”, “conf”, and “cfg”. Here’s an example from an Ubuntu 16.04 system:

dpkg -L haproxy | grep -E 'etc|cfg|conf'

The “dpkg -L” command lists the files in the package while the grep command matches “etc”, “cfg”, or “conf”. The “-E” option is for extended regular expressions. The pipe (|) acts as an “or” in regular expressions.

You can also use the locate command.

locate haproxy | grep -E 'etc|cfg|conf'

In case you’re wondering, the main configuration file for haproxy is haproxy.cfg.

Install the Software on the Upgraded Server

Now install the major pieces of software on a new server running the new release of the distro.

Of course, use a brand new server installation. You want to test your changes before you put them into production.

By the way, if you have a dedicated non-production (test/dev) network, use it for this test. If you have a service on the server you are upgrading that connects to other servers/services, it’s a good idea to isolate it from production. You don’t want to accidentally perform a production action when you’re testing. This means you may need to replicate those other servers in your non-production environment before you can fully test the particular upgrade that you’re working on.

If you have deployment scripts you can use them to perform the installs. If you use Ansible or the like, use it against the new server. Or you can manually perform the install, making notes of all the commands you run so that you can put them in a script later on. For example, to manually install HAProxy on Ubuntu 18.04, run:

sudo apt install -y haproxy

Next, put the configuration files in place.

Start the Services

If the software that you are installing is a service, make sure it starts at boot time.

sudo systemctl enable haproxy

Start the service:

sudo systemctl start haproxy

If your existing deployment script starts the service automatically, perform a restart to make sure that any of the new configuration file changes are being used.

sudo systemctl restart haproxy

See if the service is running.

sudo systemctl status haproxy

If it failed, read the error message and make the required corrections. Perhaps there is a configuration option that worked with the previous version that isn’t valid with the new version, for example.
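Two commands that help with this troubleshooting, shown for the HAProxy example (the config path is the Ubuntu default):

# Validate the configuration file syntax without starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg

# Review the service's recent log messages for the actual error
sudo journalctl -u haproxy --since "10 minutes ago"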

Import the Data

If you have services that store data, such as a database service, then import test data into the system.

If you don’t have test data, then copy over your production data to the new server.
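For a MariaDB example, a minimal sketch of that copy might look like the following; the database name "appdb" is hypothetical, so adjust it for your environment:

# On the production server: take a consistent dump of the database
mysqldump --single-transaction appdb > appdb.sql

# On the new test server: create the database and import the dump
mysql -e 'CREATE DATABASE appdb;'
mysql appdb < appdb.sql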

If you are using production data, you need to be very careful at this point.

1) You don’t want to accidentally destroy or alter any production data and…

2) You don’t want your new system taking any unwanted actions based on production data.

On point #1, you don’t want to make a costly mistake such as getting your source and destinations mixed up and end up overwriting (or deleting) production data. Pro tip: make sure you have good production backups that you can restore.

On point #2, you don’t want to do something like double charge the business’s customers or send out duplicate emails, etc. To this end, stop all the software and services that are not required for the import before you do it. For example, disable cron jobs and stop any in-house written software running on the test system that might kick off an action.

It’s a good idea to have TEST data. If you don’t have test data, perhaps you can use this upgrade as an opportunity to create some. Take a copy of the production data and anonymize it. Change real email addresses to fake ones, etc.
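As a rough sketch of that anonymization step, assuming the same hypothetical "appdb" database with a "users" table that has "id" and "email" columns:

# Replace real email addresses with fake, unique ones (run on the TEST copy only!)
mysql appdb -e "UPDATE users SET email = CONCAT('user', id, '@example.com');"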

As previously mentioned, do your tests on a non-production network that cannot directly touch production.

Perform Service Checks

If you have a service availability monitoring tool (and why wouldn’t you???), then point it at the new server. Let it do its job and tell you if something isn’t working correctly. For example, you may have installed and started HAProxy, but perhaps it didn’t open up the proper port because you accidentally forgot to copy over the configuration.

Whether or not you have a service availability monitoring tool, use what you know about the service to see if it’s working properly. For example, did it open up the proper port or ports? (Use the “netstat” and “lsof” commands from above). Are there any error messages you should be concerned about?

If you’re at all familiar with the service, test it. If it’s a web server, does it serve up the proper web pages? If it’s a database server, can you run queries against it?
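A few quick manual checks along those lines; the hosts, ports, and services are examples to adapt:

curl -I http://localhost/                  # web server: does it answer the request?
sudo netstat -nutlp | grep -E ':80|:443'   # are the expected ports listening?
mysql -e 'SELECT 1;'                       # database: does it accept queries?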

If you’re not that familiar with the service, or not a normal user of it, it’s time to enlist help. If you have a team that is responsible for testing, hand it over to them. Maybe it’s time for someone in the business who uses the service to check it out and see if it works as expected.

If you don’t have a regression testing process in place, now would be a good time to create one. The goal is to make changes and know that those changes haven’t broken the service. Upgrading the OS is a major change that has the potential to break things in a major way.

Prepare for Production

Once you’ve completed this entire process and tested your work, put all your notes into a production implementation plan. Use that plan as a checklist when you’re ready to go into production. It’s probably worth it to test your plan on another newly installed system to make sure everything goes smoothly. This is especially true when you are working on a really important system.

By the way, don’t think less of yourself for having a detailed plan and checklist. It actually shows your professionalism and commitment to doing good work.

For example, would you rather fly on a plane with a pilot who uses a checklist or one who just “wings it”? I don’t care how smart or talented that pilot is, I want them to double-check their work when it comes to my life.

Yes, It’s a Lot of Work

You might be thinking to yourself, “Wow, this is a very tedious and time-consuming process.” And you’d be right.

If you want to be a good/great Linux professional, this is exactly what it takes. Attention to detail and hard work are part of the job.

The good news is that you get compensated in proportion to your professionalism and level of responsibility.

If it was fast and easy, everyone would be doing it, right?

Hopefully, this post gave you some ideas beyond just blindly upgrading and hoping for the best. 😉

Speaking of the best…. I wish you the best!

Jason

P.S. If you’re ready to level-up your Linux skills, check out the courses I created for you here.

Source

LMDE 3 “Cindy” Cinnamon – BETA Release – The Linux Mint Blog

This is the BETA release for LMDE 3 “Cindy”.

LMDE 3 Cindy

LMDE is a Linux Mint project and it stands for “Linux Mint Debian Edition”. Its main goal is for the Linux Mint team to see how viable our distribution would be and how much work would be necessary if Ubuntu was ever to disappear. LMDE aims to be as similar as possible to Linux Mint, but without using Ubuntu. The package base is provided by Debian instead.

There are no point releases in LMDE. Other than bug fixes and security fixes, the Debian base packages stay the same, while Mint and desktop components are updated continuously. When ready, newly developed features land directly in LMDE, whereas they are staged for inclusion in the next upcoming Linux Mint point release.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for LMDE 3

System requirements:

  • 1GB RAM (2GB recommended for a comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.

Bug reports:

  • Bugs in this release should be reported on Github at https://github.com/linuxmint/lmde-3-cinnamon-beta/issues.
  • Create one issue per bug.
  • As described in the Linux Mint Troubleshooting Guide, do not report or create issues for observations.
  • Be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.
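A minimal sketch with coreutils and GnuPG, assuming you downloaded the ISO along with the project’s sha256sum.txt checksum list and its detached signature sha256sum.txt.gpg (exact file names may differ):

# Integrity: compare the ISO's checksum against the published list
sha256sum --check --ignore-missing sha256sum.txt

# Authenticity: verify that the checksum list is signed by the Linux Mint key
gpg --verify sha256sum.txt.gpg sha256sum.txt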

Enjoy!

We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

Source

Linux Scoop — Ubuntu Budgie 18.04 LTS Ubuntu…

Ubuntu Budgie 18.04 LTS – See What’s New

Ubuntu Budgie 18.04 LTS is the latest release of Ubuntu Budgie. As part of the Ubuntu 18.04 flavor family, this release ships with the latest Budgie desktop 10.4 as the default desktop environment. Powered by the Linux 4.15 kernel and shipping with the same internals as Ubuntu 18.04 LTS (Bionic Beaver), the Ubuntu Budgie 18.04 LTS official flavor will be supported for three years, until April 2021.

Prominent new features include support for adding OpenVPN connections through the NetworkManager applet, better font handling for Chinese and Korean languages, improved keyboard shortcuts, color emoji support in GNOME Characters and other GNOME apps, as well as a window-shuffler capability.

Source

How to change the color of your BASH prompt | Elinux.co.in | Linux Cpanel/ WHM blog

You can change the color of your BASH prompt to green with this command:

export PS1="\e[0;32m[\u@\h \W]\$ \e[m"

This changes the colour of your bash prompt temporarily. To make the change permanent, add the line to your ~/.bash_profile file:

vi ~/.bash_profile

Paste the export line above, save the file, and you are done.

For other colors please see the attached list:

Color          Code
Black          0;30
Blue           0;34
Green          0;32
Cyan           0;36
Red            0;31
Purple         0;35
Brown          0;33

Light Color    Code
Light Black    1;30
Light Blue     1;34
Light Green    1;32
Light Cyan     1;36
Light Red      1;31
Light Purple   1;35
Light Brown    1;33
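For example, to use the light red code from the table above (wrapping the escape sequences in \[ and \] is optional, but it helps bash calculate the prompt length correctly):

export PS1="\[\e[1;31m\][\u@\h \W]\$ \[\e[m\]"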

Source

how to force user to change their password on next login in linux ?

Method 1:

To force a user to change his/her password, the password must first have expired. To cause a user’s password to expire, use the passwd command with the -e or --expire switch along with the username, as shown.

# passwd --expire ravi

# chage -l ravi
Last password change                               : password must be changed
Password expires                                   : password must be changed
Password inactive                                  : password must be changed
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7

After running the passwd command above, you can see from the output of the chage command that the user’s password must be changed. The next time the user ravi tries to log in, he will be prompted to change his password before he can access a shell.

Method 2:

Using chage command:

chage command – Change user password expiry information

Use the following syntax to force a user to change their password at next logon on a Linux:

# chage -d 0 user-name

In this example, force ravi to change his password at next logon, enter:

# chage -d 0 ravi

  • -d 0 : Sets the number of days since January 1st, 1970 on which the password was last changed. The date may also be expressed in the format YYYY-MM-DD. By setting it to zero, you force the user to change the password at the next login.
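To confirm the change took effect, list the user’s ageing information again, as in Method 1:

# chage -l ravi

The “Last password change” field should now read “password must be changed”.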

Source

OWASP Security Shepherd – Insecure Cryptographic Storage Challenge 1 Solution – LSB – ls /blog


Thanks for visiting, and today we have another OWASP Security Shepherd solution for you. This time it’s the Insecure Cryptographic Storage Challenge. Cryptography is usually the safest way to communicate online, but the encryption method used in this challenge is not secure at all.



That’s all very straightforward. The challenge says the key has been encrypted using a “Roman Cipher”. This is incorrect; the correct term is Caesar cipher. A Caesar cipher takes a letter of the alphabet, say A, and shifts it by a number, like 5. This would change the A to an F, moving 5 places along the alphabet.


So we need to copy the ciphertext and go to a decoder that’s available online. We just need to paste the code into the decoder and try shifts of 5, 6, 7 places and so on.

https://www.dcode.fr/caesar-cipher easily does this for us.
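If you’d rather work offline, a small shell loop can brute-force all 25 shifts. This is a minimal sketch; the ciphertext below is a made-up stand-in, not the actual challenge string:

# Print the ciphertext decoded with every possible Caesar shift
ct="WKLV LV QRW WKH UHDO NHB"
alpha="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
for i in $(seq 1 25); do
  # rotate the alphabet left by i positions, then substitute with tr
  rot="$(printf '%s' "$alpha" | cut -c$((i+1))-)$(printf '%s' "$alpha" | cut -c1-"$i")"
  printf 'shift %2d: ' "$i"
  printf '%s\n' "$ct" | tr "$alpha" "$rot"
done

One of the 25 output lines will read as plain English, revealing the shift.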


We will leave out how many places this cipher shifts the alphabet, as we would like you to try it yourself. Another challenge down. Check!

Thanks for reading, and if you enjoyed this post please leave a comment. Don’t forget to follow us for more tutorials and challenges. Peace.

QuBits 2018-09-21


Source
