Compiling Linux Kernel (on Ubuntu)

This guide may not exactly be relevant to this blog, but as an exercise in getting familiar with Linux, I’ll post it anyways. Here are a few disclaimers-

  1. Don’t follow this guide just to compile the Linux kernel; there are much better guides out there for that purpose (this is the one I followed). This guide exists to help you learn some new stuff which you didn’t know before, and to improve your understanding of Linux a bit.
  2. My knowledge of Linux and operating systems in general is somewhat limited, so some things might be wrong (or at least not perfectly correct).
  3. The main reason for writing this tutorial is that I had to submit a document showing what I did. It’s not exactly related to hacking; it just gives you some insight into Linux (which I think is helpful).
  4. Do everything on a virtual machine, and be prepared for the eventuality that you’ll break your installation completely.

Linux Kernel

Running uname -r on your machine shows which kernel version you’re using; uname -a gives some more details about it.
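For instance, on a stock Ubuntu 16.04 install the output might look something like this (purely illustrative; the hostname and version string will differ on your machine):

uname -r
4.4.0-59-generic

uname -a
Linux mybox 4.4.0-59-generic #80-Ubuntu SMP ... x86_64 GNU/Linux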

Every once in a while, a new stable kernel release is made available on kernel.org. At the time of writing, the release was 4.9.8. At the same time, there is also the latest release candidate kernel, which is not of interest to us, as it’s bleeding edge (the latest features are available in it, but there could be bugs and compatibility issues) and hence not stable enough for our use.

I downloaded the tarball for the latest kernel (a compressed archive of ~100 MB, which becomes ~600 MB upon extraction). What we get upon extraction is the source code of the Linux kernel. We need to compile this to get a kernel image which our machine will boot. To get a feel for what this means, I have a little exercise for you-

Small (and optional) exercise

We will do the following-

  1. Make a folder, and move into that folder
  2. Write a small C++ hello-world program
  3. Compile it using make
  4. Run the compiled executable.

On the terminal, run the following-

Step 1:

mkdir testing

cd testing

Step 2:

cat > code.cpp

Paste this into the terminal
#include <iostream>

int main(){
    std::cout << "Hello World\n";
    return 0;
}

After pasting this, press Ctrl+D on your keyboard (Ctrl+D sends EOF, end of file/input, which tells cat that you’re done).

If this doesn’t work, just write the above code in your favourite text editor and save as code.cpp

Step 3:

make code

Step 4:

./code

Notice how we used the make command to compile our source code and get an executable. Also, notice how the make command itself executed this command for us-

g++ code.cpp -o code

In our case, since there was only one source file, make knew what to do (just compile that single file). However, when there are multiple source files, make can’t determine what to do on its own.

For example, suppose you have two files, and the second one depends on the first in some way; then the first needs to be compiled before the second. In the case of the kernel, there are tens of thousands of source files (millions of lines of code), and how they get compiled is a very complex process. Even a tiny two-file project already needs its build order spelled out, as in the sketch below.
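Here is a minimal sketch of a hand-written Makefile for such a project (the file names main.cpp, helper.cpp and helper.h are hypothetical, made up just for illustration; the command under each rule must be indented with a tab):

# the final program is linked from both object files
program: main.o helper.o
	g++ main.o helper.o -o program

# each object file is rebuilt whenever a file it depends on changes
main.o: main.cpp helper.h
	g++ -c main.cpp

helper.o: helper.cpp helper.h
	g++ -c helper.cpp

Running make then builds main.o and helper.o first, and only afterwards links them into program.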

If you navigate to the folder containing the Linux kernel (the folder where you extracted the tarball), you’ll get an idea of the sheer magnitude of complexity behind a kernel. For example, open the Makefile in that folder in your favourite text editor and have a look, and browse the rest of the folder’s contents. The Makefile contains instructions which make (the command-line tool we used earlier) uses to determine how to compile the source files in that directory (and its subdirectories).

Some tools

Compiling our simple C++ program didn’t need much, and your Linux distribution (I’m using Ubuntu 16 for this tutorial) comes with the required tools pre-installed. However, compiling the kernel needs some more stuff, and you’ll need to install the required tools. For me, this command installed everything that was needed-

sudo apt-get install libncurses5-dev gcc make git exuberant-ctags bc libssl-dev

Many of these tools would actually be pre-installed, so downloading and installing this won’t take too long.

(If you’re not on Ubuntu/Kali, then refer to this guide, as it has instructions for Red Hat-based and SUSE-based systems as well.)

Download kernel

In the guide that I followed, the author suggested cloning this repository-

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

After cloning the repo, I had to check out the latest stable kernel and then proceed further with it. This would be useful when you want to keep pulling updates and recompiling your kernel. However, for the purpose of this tutorial, let’s ignore this possibility (because cloning the git repo took a lot of time; the download was huge and everything was taking forever).

Instead, we just download and extract the tarball (as discussed earlier in the Linux Kernel section).
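For 4.9.8, downloading and extracting the tarball looks something like this (the URL follows kernel.org’s standard layout; substitute whatever the current stable version is when you try it):

wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.8.tar.xz

tar xf linux-4.9.8.tar.xz

cd linux-4.9.8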

Configuration

Here, we have two options.

  1. Use a default configuration
  2. Use the configuration of your current kernel (on which your OS is running right now).

As with the download step, I tried both methods, and for me the default one worked better. Anyway, to reuse your current configuration, run the following-

cp /boot/config-`uname -r`* .config

This copies the configuration of your currently running kernel into a file named .config in the current folder. So, before running this command, navigate to the folder containing the extracted tarball. For me, it was /home/me/Download/linux-4.9.8

For default config (recommended), run

make defconfig

If you don’t see a config file, don’t worry. In Linux, files and directories whose names start with . are hidden (ls -a will list them). On your terminal, type vi .config (replace vi with your favourite text editor) and you can see the config file.

Compiling

Similar to the way you compiled your C++ program, you can compile the kernel. For the C++ program we didn’t have a Makefile, so we had to specify the name of the source file (make code); since we have a Makefile here, we can simply type make, and the Makefile and the .config file (and probably many more files) will tell make what to do. Note that the config file contains the options which were chosen for your current kernel. However, the newer kernel may offer some choices which weren’t available in the previous kernel (the one you’re running). In that case, make will ask you what to do (you’ll get to choose between yes and no, or between numbered options 1, 2, 3, and so on). Pressing enter chooses the default option. Again, I suggest you use the default configuration file to avoid any issues.

To summarise, simply run this command-

make

If you have multiple cores, tell make how many jobs to run in parallel (compilation will be faster). For example, if you have two cores, run make -j2

If you have 4 cores, run make -j4
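If you’d rather not hard-code the number, nproc (part of GNU coreutils, so already present on Ubuntu) prints the number of available cores, and the following runs one job per core:

make -j"$(nproc)"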


Now, you can do something else for a while. Compilation will take some time. When it’s finished, follow the remaining steps.

Installation

Simply run this command-

sudo make modules_install install

Fixing grub

There are a few things that need to be changed in the /etc/default/grub file. Open this file as root (with sudo), in your favourite text editor, and do the following.

  1. Remove the GRUB_HIDDEN_TIMEOUT_QUIET line from the file.
  2. Change GRUB_TIMEOUT from 0 to 10.

This is how my file looks after being edited.

What these changes do is-

  1. The GRUB menu for choosing which OS to boot is hidden by default in Ubuntu; this makes it visible.
  2. The menu was shown for 0 seconds before the default entry was chosen; this changes it to 10 seconds, so we get a chance to choose which OS (and kernel) to boot.
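For reference, after editing, the relevant lines of /etc/default/grub would look roughly like this (the last two lines are simply the usual Ubuntu defaults and may differ on your system; the GRUB_HIDDEN_TIMEOUT_QUIET line is just gone):

GRUB_DEFAULT=0
GRUB_TIMEOUT=10
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""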

After all this, just run the command to apply the changes.

sudo update-grub2

Now restart the machine.

Did it work?

If it worked, then you’ll see the GRUB boot menu upon restart.
In Advanced options, you’ll see two kernels. If you did everything perfectly and there are no driver issues, your new kernel will boot up properly (4.9.8 for me). If you did everything reasonably well and didn’t mess things up too badly, then at least your original kernel should work, if not the new one. If you messed things up completely, then neither the new kernel nor the old one (which was working fine to begin with) will work. In my case, the new kernel wasn’t working on the first trial; on the second trial, both kernels were working.

Once you have logged in to your new kernel, just do a uname -r and see the version, and give yourself a pat on the back if it is the kernel version you tried to download.

I did give myself a pat on the back

If your new kernel is not working, then either go through the steps and see if you did something wrong, or compare with this guide and see if I wrote something wrong. If neither of those helps, try the other method (default config instead of current-kernel config, or vice versa). If that too doesn’t work, try some other guides. The purpose of this guide, as explained already, isn’t to teach you how to compile the Linux kernel, but to improve your understanding, and I hope I succeeded in that.

Removing the kernel (optional and untidy section)

The accepted answer here is all you need. I’m gonna write it here anyways. Note that I’m writing this from memory, so some things may be a bit off. Follow the AskUbuntu answer to be sure.

Remove the following (this is correct)-

/boot/vmlinuz*KERNEL-VERSION*
/boot/initrd*KERNEL-VERSION*
/boot/System.map*KERNEL-VERSION*
/boot/config-*KERNEL-VERSION*
/lib/modules/*KERNEL-VERSION*/
/var/lib/initramfs/*KERNEL-VERSION*/

For me, the kernel version is 4.9.8. I don’t remember exactly what commands I typed, and am too lazy to check them again, but I think these would work (no guarantee).

cd /boot/

rm *4.9.8*

cd /lib/modules

rm -r *4.9.8*

cd /var/lib/initramfs

rm -r *4.9.8*

Also, I have a faint recollection that the name of the initramfs folder was something a bit different in my case (not sure).

Kthnxbye

Source

Mount Dropbox Folder Locally As Virtual File System In Linux

by sk · October 5, 2018


Source

Good Alternatives To Man Pages Every Linux User Needs To Know

by sk · Published October 8, 2018 · Updated October 9, 2018


Source

The deep monster taming RPG ‘Siralim 3’ has now officially launched with Linux support

For those after their next RPG fix, the monster taming game Siralim 3 [Official Site] is now officially out with Linux support as it has left Early Access.

While not the most graphically impressive series, the Siralim games always have a really good amount of depth to them, allowing for a ridiculous amount of fun.

For those who feel like they “gotta catch ’em all”, Siralim 3 has over 700 creatures to collect and breed along with special variants with different colours which are rare to find. Creatures have their own lore too, so you can learn a little about your new friends. You can customise your creatures quite a bit too, with “Spell Gems” to use new spells and further change those by enchanting them. There’s over 300 of these gems to find, with an additional 20 different properties to add so you can build a pretty unique team.

There are randomly generated dungeons to explore and no level cap, with new features introduced as you progress, even past 100 hours according to the developer. There are super-bosses to deal with, arena battles and all sorts to keep you entertained, plus plenty of other features and items to collect as you progress through the game, along with more features to come after release.

It even has asynchronous player-versus-player battles, so you can truly test your monster squad against others, which is pretty awesome; this mode lets you earn special items too.

If you want to know what’s different compared with the previous game, the developer put up an FAQ here.

Find it on Steam.

Source

Valid BeastNode Promo Codes – ThisHosting.Rocks

BeastNode offers high-quality Minecraft web hosting with great 24/7 support, as well as VPS hosting. Use these tested and valid promo codes to get a discount.

Beast Node Promo Code: Get 15% Off For Life – Minecraft Premium Hosting Plans

If the promo code above doesn’t work, try any of these:

Beast Node Promo Code: Get a Recurring 10% Discount for VPS Hosting Plans

Beast Node Promo Code: Get a 5% Lifetime Discount for Minecraft Hosting

If the coupon above doesn’t work, try this one:

Get a discount when using a longer billing cycle at BeastNode – no promo code needed

How to use the BeastNode promo code?

  1. Get the promo code from this post.
  2. Visit https://www.beastnode.com
  3. Choose the best BeastNode hosting plan for you.
  4. Configure the plan details
  5. Go to checkout preview.
  6. Add the promo code from step 1.
  7. And that’s it. You’ve used a promo code at BeastNode to get a discount!

Source

Don’t let one bad apple spoil the whole box

Web hosters running multi-site servers are a favourite target for today’s economy-minded hacker who uses one weak site to gain access to a whole box of others on the same server.

In this Part 1 of his article “Avoid Multi-site Hacking”, the new lead of Imunify360, Greg Zemskov, explains exactly what the threat is and how to mitigate it, covering the specific risks of PHP-based CMSes, the distinction between technical and organizational protection strategies, and the benefits of site isolation.

Read Part 1 here

In the upcoming Part 2, Greg will build on those two distinctions, giving concrete tips for improving multi-site server security and laying out the real-world consequences of not following them.

Source

Matcha: Flat Design Theme And Icons For Ubuntu/Linux Mint – NoobsLab

If you use flat material design themes then you are on the right page.

The Matcha theme is one of the best material design themes and looks great along with its own icon pack. It is based on the Arc theme, supports Gtk3, Gtk2 and Gnome Shell, and works with almost every desktop environment, such as Gnome, Cinnamon, Mate, Xfce, etc.

We are offering the Matcha themes and icon pack via our PPA for Ubuntu/Linux Mint. If you are using a distribution other than Ubuntu/Linux Mint, then download the zip file directly from the theme page and install it in “~/.themes” or “/usr/share/themes”. There is also a theme for Gnome Shell which goes along with the Gtk version. If you find any kind of bug or issue with this theme, report it to the creator and hopefully he will fix it soon.

Available for Ubuntu 18.04 Bionic/18.10/and other Ubuntu based distributions
To install Matcha GTK theme in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
Available for Ubuntu 18.04 Bionic/18.10/16.04 Xenial/14.04 Trusty/and other Ubuntu based distributions
To install Matcha Icons in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
That’s it
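If you would rather install manually instead of via the PPA, the manual route described above (download the zip from the theme page and drop it into ~/.themes) amounts to something like this; the archive name below is purely illustrative, use whatever file you actually downloaded:

mkdir -p ~/.themes

unzip Matcha-theme.zip -d ~/.themes/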

Source

ls Command Syntax for Files Sorting by Size in Linux


If you are a new user in the Linux world, the ls command is one of the most popular and useful commands for listing the contents of directories.

In this article, we will explain how to use the ls sort options to list directory contents by size.

1) List directory contents sorted by size

To list the contents of a specific directory sorted by size, we will use the -lS options with the ls command.

$ ls -lS /run
output
total 24
-rw-rw-r--. 1 root utmp 2304 Sep 8 14:58 utmp
drwxr-xr-x. 16 root root 400 Aug 21 13:18 systemd
drwxr-xr-x. 7 root root 160 Aug 26 14:59 udev
drwxr-xr-x. 4 root root 100 Aug 21 13:18 initramfs
drwxr-xr-x. 4 root root 100 Sep 8 03:31 lock
drwxr-xr-x. 3 root root 100 Aug 21 13:18 NetworkManager
drwxr-xr-x. 2 root root 60 Aug 21 13:18 dbus
drwxr-xr-x. 3 root root 60 Aug 21 13:18 log
drwx--x--x. 3 root root 60 Aug 21 13:18 sudo
drwxr-xr-x. 2 root root 60 Aug 21 13:18 tmpfiles.d
drwxr-xr-x. 2 root root 60 Aug 21 13:18 tuned
drwxr-xr-x. 3 root root 60 Sep 7 23:11 user
drwxr-xr-x. 2 root root 40 Aug 21 13:18 console
drwxr-xr-x. 2 root root 40 Aug 21 13:18 faillock
drwxr-x---. 2 root root 40 Aug 21 13:18 firewalld
drwxr-xr-x. 2 root root 40 Aug 21 13:18 mount
……..

To list files along with their sizes, we will use the -s option with the ls command.

$ ls -s
output
total 1316
4 anaconda-ks.cfg 4 Downloads 180 index.html 0 smart.docx
4 apache2 4 echo.txt 4 nano.txt 0 smart.txt
4 cat.txt 0 file.txt 4 original-ks.cfg 0 test.txt

2) List directory contents in reverse size order

To list the contents of a specific directory sorted by size in reverse order, we will use the -lSr options with the ls command.

$ ls -lSr /run
output
total 24
-rw-------. 1 root root 0 Aug 21 13:18 xtables.lock
-rw-------. 1 root root 0 Aug 21 13:18 ebtables.lock
----------. 1 root root 0 Aug 21 13:18 cron.reboot
-rw-------. 1 root root 3 Aug 21 13:18 syslogd.pid
-rw-r--r--. 1 root root 4 Aug 21 13:18 sshd.pid
-rw-r--r--. 1 root root 4 Sep 9 08:17 dhclient-eth0.pid
-rw-r--r--. 1 root root 4 Aug 21 13:18 crond.pid
-rw-r--r--. 1 root root 4 Aug 21 13:18 auditd.pid
drwxr-xr-x. 2 root root 40 Aug 21 13:18 setrans
drwxr-xr-x. 2 root root 40 Aug 21 13:18 sepermit
drwxr-xr-x. 2 root root 40 Aug 21 13:18 plymouth
drwxrwxr-x. 2 root root 40 Aug 21 13:18 netreport
drwxr-xr-x. 2 root root 40 Aug 21 13:18 mount
drwxr-x---. 2 root root 40 Aug 21 13:18 firewalld
……..

3) Sort output and print sizes in human-readable format (e.g., 1K 48M 1G)

To sort the output and print sizes in human-readable format, we will add the -h option to the ls command.

$ ls -lSh
output
total 1.3M
-rw-r--r--. 1 root root 1.1M Aug 26 15:45 GeoIP-1.5.0-11.el7.x86_64.rpm
-rw-r--r--. 1 root root 177K Aug 26 15:29 index.html
drwxr-xr-x. 2 root root 4.0K Sep 8 13:32 apache2
drwxr-xr-x. 2 root root 4.0K Sep 8 13:31 Desktop
drwxr-xr-x. 2 root root 4.0K Sep 8 13:32 Documents
drwxr-xr-x. 2 root root 4.0K Sep 8 13:32 Downloads
drwxr-xr-x. 2 root root 4.0K Sep 8 13:32 Pictures
…….

Also, we can print sizes in human-readable format for a specific extension only:

ls -l -S -h *.mp3
ls -l -S -h ~/Downloads/*.mp4 | more

4) List in alphabetical order

To list the contents of a specific directory in alphabetical order, we can use the ls command without any options, because alphabetical sorting is the default.

$ ls
output
anaconda-ks.cfg Desktop echo.txt index.html Pictures smart.txt
apache2 Documents f.txt nano.txt printf.txt vim.txt cat.txt
Downloads GeoIP-1.5.0-11.el7.x86_64.rpm original-ks.cfg smart.docx vi.txt

To list the contents of a specific directory with details, add the path of the directory.

$ ls -l /run
output
total 24
-rw-r--r--. 1 root root 4 Aug 21 13:18 auditd.pid
drwxr-xr-x. 2 root root 40 Aug 21 13:18 console
-rw-r--r--. 1 root root 4 Aug 21 13:18 crond.pid
----------. 1 root root 0 Aug 21 13:18 cron.reboot
drwxr-xr-x. 2 root root 60 Aug 21 13:18 dbus
-rw-r--r--. 1 root root 4 Sep 9 08:17 dhclient-eth0.pid
-rw-------. 1 root root 0 Aug 21 13:18 ebtables.lock
drwxr-xr-x. 2 root root 40 Aug 21 13:18 faillock
drwxr-x---. 2 root root 40 Aug 21 13:18 firewalld
……….

5) List in reverse alphabetical order

To list the contents of a specific directory with details in reverse alphabetical order, we will use the -lr options with the ls command.

$ ls -lr /run
output
total 24
-rw-------. 1 root root 0 Aug 21 13:18 xtables.lock
-rw-rw-r--. 1 root utmp 2304 Sep 8 14:58 utmp
drwxr-xr-x. 3 root root 60 Sep 7 23:11 user
drwxr-xr-x. 7 root root 160 Aug 26 14:59 udev
drwxr-xr-x. 2 root root 60 Aug 21 13:18 tuned
drwxr-xr-x. 2 root root 60 Aug 21 13:18 tmpfiles.d
drwxr-xr-x. 16 root root 400 Aug 21 13:18 systemd
-rw-------. 1 root root 3 Aug 21 13:18 syslogd.pid
drwx--x--x. 3 root root 60 Aug 21 13:18 sudo
-rw-r--r--. 1 root root 4 Aug 21 13:18 sshd.pid
drwxr-xr-x. 2 root root 40 Aug 21 13:18 setrans
drwxr-xr-x. 2 root root 40 Aug 21 13:18 sepermit
drwxr-xr-x. 2 root root 40 Aug 21 13:18 plymouth
…….

6) List hidden contents of a directory in alphabetical order

To list the hidden contents of a specific directory, we will use the -a or --all option with the ls command.

$ ls -a /etc
output
. default gss logrotate.d pm rsyslog.conf sysctl.d
.. depmod.d host.conf machine-id polkit-1 rsyslog.d systemd
adjtime dhcp hostname magic popt.d rwtab system-release
aliases DIR_COLORS hosts makedumpfile.conf.sample postfix rwtab.d system-release-cpe
……..

7) List directory contents with details (long listing)

To list the contents of a specific directory with details, such as the file permissions, number of links, owner’s name and group, file size, time of last modification and the file/directory name, we will use the -l option with the ls command.

$ ls -l /run
output
total 24
-rw-r--r--. 1 root root 4 Aug 21 13:18 auditd.pid
drwxr-xr-x. 2 root root 40 Aug 21 13:18 console
-rw-r--r--. 1 root root 4 Aug 21 13:18 crond.pid
----------. 1 root root 0 Aug 21 13:18 cron.reboot
drwxr-xr-x. 2 root root 60 Aug 21 13:18 dbus
-rw-r--r--. 1 root root 4 Sep 8 12:41 dhclient-eth0.pid
-rw-------. 1 root root 0 Aug 21 13:18 ebtables.lock
drwxr-xr-x. 2 root root 40 Aug 21 13:18 faillock
drwxr-x---. 2 root root 40 Aug 21 13:18 firewalld
drwxr-xr-x. 4 root root 100 Aug 21 13:18 initramfs
drwxr-xr-x. 4 root root 100 Sep 8 03:31 lock
drwxr-xr-x. 3 root root 60 Aug 21 13:18 log
……..

Thanks for reading my article and please leave your comments.


Source

Manjaro 17.1.12 | Manjaro Linux

We are happy to announce fresh install media 17.1.12 for all our Official Releases, now available from our online storage partner OSDN:

While our XFCE and GNOME editions are mostly just rebuilds with the latest updated packages, including LibreOffice 6.0.6, Pamac 6.5.0 and Calamares 3.2.1, the real news is in Manjaro-KDE, which now runs the brand new Plasma 5.13.4, including Plasma Browser Integration, System Settings redesigns, a new look for the lock and login screens, and a tech preview of GTK global menu integration. Additionally, we updated our Xorg stack to the 1.20 server series and pushed out the latest Mesa stack.

Links

Posted in: news

Source

Google’s Open Source AI Diagnoses Lung Cancer Types with Extreme Accuracy!

Last updated October 7, 2018 by Avimanyu Bandyopadhyay

Previously, our Open Science and AI articles have elaborately discussed the significance of Open Source Science and AI through various applications including Healthcare and Medicine. Recently, there have been promising new advancements in these fields!

Cancer Pathologists can now make use of an advanced Open Source AI system that has achieved an extremely high level of Accuracy in detecting certain forms of Lung Cancer!

This is the realization of one of the many visions of the innovators and researchers at New York University (NYU), described two years ago in this video in great detail:

Their AI system is called DeepPATH, an Open Source framework that gathers the code used to study a deep learning architecture (Google’s Inception v3).

The future of AI-assisted therapy looks more promising than ever, now that researchers at NYU have designed the DeepPATH framework. Their algorithm has been trained to differentiate and identify lung images containing Normal tissue and Cancer-affected tissue.

Why is this great news?

The most common form of Cancer worldwide is Lung Cancer. So far in 2018, 2.09 million cases of Lung Cancer have been reported, with 1.76 million deaths linked to Lung Cancer alone. WHO details it vividly.

There are four major Cancer risk factors:

  • Tobacco use
  • Alcohol use
  • Unhealthy diet
  • Physical inactivity

The Nature paper (preprint available here) titled “Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning”, highlights the effectiveness of their algorithm in identifying Lung Cancer Types with 97% Accuracy!

Why is the new study helpful for Cancer Pathologists?

The researchers achieved the new feat by teaching their AI algorithm to differentiate between two specific Lung Cancer Types, namely, Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC), which are the most prevalent subtypes of Lung Cancer.

Based on the left image (a cancerous tissue slice from the Lung), the AI classifies it into three categories as we see on the right: LUAD is in red, LUSC in blue, and Normal/Healthy Lung Tissue has been shown as grey | Image Source Here

In conventional medical practice, visual inspection by an experienced pathologist is absolutely essential to distinguish one Lung Cancer Type from the other. Now, AI can perform the same task: the performance of their deep learning models was comparable to that of each of the three pathologists (two thoracic and one anatomic) who were asked to participate in the study, and that is why this breakthrough is so significant!

Google’s inception v3 was trained to recognize tumor areas based on the pathologists’ manual selections. The researchers at NYU trained a deep convolutional neural network (Google inception v3) on whole-slide images obtained from The Cancer Genome Atlas to intelligently classify them into LUAD, LUSC or Normal Lung Tissue.

In addition to identifying cancerous tissue, the team also trained it to identify genetic mutations within the tissue. Out of the ten most commonly mutated genes in LUAD, six of them, namely STK11, EGFR, FAT1, SETBP1, KRAS and TP53, were predicted.

Not only that, the team of AI scientists also laid out the future prospect of applying the same algorithm to extend the classification to other, less common types of Lung Cancer such as large-cell carcinoma and small-cell lung cancer, to histological subtypes of LUAD, and also to non-neoplastic features (neoplastic relates to neoplasms) including necrosis, fibrosis, and other reactive changes in the tumor microenvironment.

They also mentioned that there is currently not enough data for such applications. But if more such cases are eventually seen, more datasets will become available for the algorithm to train on.

The entire deep learning study by the team was accelerated by harnessing the significantly higher computational power of Graphical Processing Units or GPUs (compared to conventional Central Processing Units or CPUs). They used a single Tesla K20m GPU in particular, with the processing time being around 20 seconds. But they also highlighted that using multiple GPUs would reduce that time further down to a few seconds.


Our favourite part of this news is, of course, that the entire code of DeepPATH is Open Source and readily available on GitHub. This makes it really helpful for academics and researchers (both individuals and groups) working on similar research projects who would like to apply the same system to analyze and interpret their own datasets with AI. These datasets can be of any form that could benefit our society.

We have discussed datasets in a prior article, where we described how NASA’s Open Science initiatives can be utilized to ask for dataset suggestions through submission on their Open Data Portal. Perhaps the datasets available there could also be quite resourceful for Google’s Open Source AI?

Isn’t this an amazing new milestone for Applied Open Source AI? Would you like to see more of such developments in the future of Applied AI with an Open Source Approach? Let us know your thoughts in the comments below.


About Avimanyu Bandyopadhyay

Avimanyu is a Doctoral Researcher on GPU-based Bioinformatics and a big-time Linux fan. He strongly believes in the significance of Linux and FOSS in Scientific Research. Deep Learning with GPUs is his new excitement! He is a very passionate video gamer (his other side) and loves playing games on Linux, Windows and PS4 while wishing that all Windows/Xbox One/PS4 exclusive games get support on Linux some day! Both his research and PC gaming are powered by his own home-built computer. He is also a former Ubisoft Star Player (2016) and mostly goes by the tag “avimanyu786” on web indexes.

Source
