How to Use Nmap Script Engine (NSE) Scripts in Linux

Nmap is a popular, powerful, and cross-platform command-line network security scanner and exploration tool. It can also give you an overview of the systems connected to your network; you can use it to find the IP addresses of all live hosts, scan the open ports and services running on those hosts, and much more.

One of the most interesting features of Nmap is the Nmap Script Engine (NSE), which brings even more flexibility and efficiency to it. It enables you to write your own scripts in the Lua programming language, and possibly share those scripts with other Nmap users out there.

Read Also: 29 Practical Examples of Nmap Commands for Linux

There are four types of NSE scripts, namely:

  • Prerule scripts – are scripts that run before any of Nmap’s scan operations, they are executed when Nmap hasn’t gathered any information about a target yet.
  • Host scripts – are scripts executed after Nmap has performed normal operations such as host discovery, port scanning, version detection, and OS detection against a target host.
  • Service scripts – are scripts run against specific services listening on a target host.
  • Postrule scripts – are scripts run after Nmap has scanned all of its target hosts.

These scripts are grouped under various categories, including those for authentication (auth), host discovery (broadcast), brute-force attacks to guess authentication credentials (brute), discovering more about a network (discovery), causing a denial of service (dos), exploiting some vulnerability (exploit), and so on. A number of scripts belong to the default category.
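As a rough illustration of how a script maps to its categories, the sketch below builds a tiny mock of Nmap's script.db (the real file lives at /usr/share/nmap/scripts/script.db; the two entries are copied from real scripts, but the file and lookup here are just an illustration) and greps out the scripts tagged with a given category:

```shell
# Build a tiny mock of script.db to show the entry format Nmap uses.
# Real entries look like the lines below: a filename plus its categories.
cat > script.db <<'EOF'
Entry { filename = "afp-brute.nse", categories = { "brute", "intrusive" } }
Entry { filename = "http-headers.nse", categories = { "discovery", "safe" } }
EOF

# List the scripts tagged with the "discovery" category:
grep '"discovery"' script.db | cut -d'"' -f2
```

Running this prints http-headers.nse, since it is the only mock entry carrying the discovery tag; the same grep against the real script.db shows every script a category will load.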

Note: Before we move any further, you should take note of these key points:

  • Do not execute scripts from third parties without critically looking through them, unless you trust the authors. These scripts are not run in a sandbox and could therefore unexpectedly or maliciously damage your system or invade your privacy.
  • Secondly, many of these scripts may run as either a prerule or a postrule script. Considering this, it is recommended to use a prerule for consistency.
  • Nmap uses the scripts/script.db database to figure out the available default scripts and categories.

To see the location of all available NSE scripts, run the locate utility on the terminal, like this:

$ locate "*.nse"

/usr/share/nmap/scripts/acarsd-info.nse
/usr/share/nmap/scripts/address-info.nse
/usr/share/nmap/scripts/afp-brute.nse
/usr/share/nmap/scripts/afp-ls.nse
/usr/share/nmap/scripts/afp-path-vuln.nse
/usr/share/nmap/scripts/afp-serverinfo.nse
/usr/share/nmap/scripts/afp-showmount.nse
/usr/share/nmap/scripts/ajp-auth.nse
/usr/share/nmap/scripts/ajp-brute.nse
/usr/share/nmap/scripts/ajp-headers.nse
/usr/share/nmap/scripts/ajp-methods.nse
/usr/share/nmap/scripts/ajp-request.nse
/usr/share/nmap/scripts/allseeingeye-info.nse
/usr/share/nmap/scripts/amqp-info.nse
/usr/share/nmap/scripts/asn-query.nse
...

NSE scripts are loaded using the --script flag, which also allows you to run your own scripts by providing categories, script file names, or the name of directories where your scripts are located.

The syntax for enabling scripts is as follows:

$ nmap -sC target     #load default scripts
OR
$ nmap --script filename|category|directory|expression,...   target    

You can view a description of a script with the --script-help option. Additionally, you can pass arguments to some scripts via the --script-args and --script-args-file options; the latter is used to provide a filename rather than a command-line argument.
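As a quick sketch of the --script-args-file route (the filename nse-args.txt is arbitrary), the file takes the same comma- or newline-separated key=value pairs that --script-args accepts on the command line:

```shell
# Write the script arguments to a file instead of the command line.
cat > nse-args.txt <<'EOF'
mysql-audit.username='root',mysql-audit.password='password_here'
EOF

# The scan itself is not run here, since it needs a reachable MySQL target:
#   nmap -p 3306 --script mysql-audit --script-args-file nse-args.txt 192.168.56.10
cat nse-args.txt
```

This keeps credentials and long argument lists out of your shell history, which is handy when the same arguments are reused across scans.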

To perform a scan with most of the default scripts, use the -sC flag, or alternatively use --script=default as shown.

$ nmap -sC scanme.nmap.org
OR
$ nmap --script=default scanme.nmap.org
OR
$ nmap --script default scanme.nmap.org
Sample Output
Starting Nmap 7.01 ( https://nmap.org ) at 2017-11-15 10:36 IST
Nmap scan report for scanme.nmap.org (45.33.32.156)
Host is up (0.0027s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
80/tcp open  http
|_http-title: Go ahead and ScanMe!

Nmap done: 1 IP address (1 host up) scanned in 11.74 seconds

To use a script for the appropriate purpose, you can first get a brief description of what it actually does, for instance http-headers:

$ nmap --script-help http-headers scanme.nmap.org
Sample Output
Starting Nmap 7.01 ( https://nmap.org ) at 2017-11-15 10:37 IST

http-headers
Categories: discovery safe
https://nmap.org/nsedoc/scripts/http-headers.html
  Performs a HEAD request for the root folder ("/") of a web server and displays the HTTP headers returned.

Loading NSE Scripts To Perform Nmap Scans

You can select or load scripts to perform a scan using the different methods explained below.

Using Script Name

Once you know what a script does, you can perform a scan using it. You can use one script or enter a comma-separated list of script names. The command below will enable you to view the HTTP headers configured on the web server at the target host.

$ nmap --script http-headers scanme.nmap.org
Scan HTTP Headers
Starting Nmap 7.01 ( https://nmap.org ) at 2017-11-15 10:39 IST
Nmap scan report for scanme.nmap.org (45.33.32.156)
Host is up (0.27s latency).
Not shown: 996 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
80/tcp    open     http
| http-headers: 
|   Date: Wed, 15 Nov 2017 05:10:04 GMT
|   Server: Apache/2.4.7 (Ubuntu)
|   Accept-Ranges: bytes
|   Vary: Accept-Encoding
|   Connection: close
|   Content-Type: text/html
|   
|_  (Request type: HEAD)
179/tcp   filtered bgp
31337/tcp open     Elite

Nmap done: 1 IP address (1 host up) scanned in 20.96 seconds

Using Categories

You can also load scripts from one category or from a comma-separated list of categories. In this example, we are using all scripts in the default and broadcast categories to carry out a scan on the host 192.168.56.1.

$ nmap --script default,broadcast 192.168.56.1
Scan a Host

Using * Wildcard

This is useful when you want to select scripts with a given name pattern. For example, to load all scripts whose names start with ssh, run the command below on the terminal:

$ nmap --script "ssh-*" 192.168.56.1
Load Scripts Using Wildcards

Using Boolean Expressions

You can also select scripts using boolean expressions, which you can build with the and, or, and not operators. Names in a boolean expression may be a category, a filename from script.db, or all.

The following command will load scripts from the default or broadcast categories.

$ nmap --script "default or broadcast" 192.168.56.10

Which is equivalent to:

$ nmap --script default,broadcast 192.168.56.10

To load all scripts omitting those in the vuln category, run this command on the terminal.

$ nmap --script "not vuln" 192.168.56.10

The next command looks a little complicated, but it is easy to understand: it selects scripts in the default or broadcast categories, leaving out those whose names start with ssh-.

$ nmap --script "(default or broadcast) and not ssh-*" 192.168.56.10

Importantly, it is possible to combine categories, script names, a directory containing your custom scripts or a boolean expression to load scripts, like this:

$ nmap --script broadcast,vuln,ssh-auth-methods,/path/to/custom/scripts 192.168.56.10

Passing Arguments to NSE Scripts

Below is an example showing how to pass arguments to scripts with the --script-args option:

$ nmap --script mysql-audit --script-args "mysql-audit.username='root', \
mysql-audit.password='password_here', mysql-audit.filename='nselib/data/mysql-cis.audit'"

To pass a port number, use the -p nmap option:

$ nmap -p 3306 --script mysql-audit --script-args "mysql-audit.username='root', \
mysql-audit.password='password_here', mysql-audit.filename='nselib/data/mysql-cis.audit'"

The above command runs an audit of the MySQL database server's security configuration against parts of the CIS MySQL v1.0.2 benchmark. You can also create your own useful custom audit files for other MySQL audits.

That’s it for now. You can find more information in the nmap man page or check out NSE Usage.

To get started with writing your own NSE scripts, check out this guide: https://nmap.org/book/nse-tutorial.html
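To give a feel for what that tutorial covers, here is a toy NSE script skeleton, written out with a heredoc so you can inspect it. It follows the standard NSE layout (a description, a categories table, a rule deciding when the script runs, and an action returning the script's output); the script name and message are made up for this example.

```shell
# Write a minimal example NSE script to the current directory.
cat > hello-http.nse <<'EOF'
description = [[
Toy example: returns a fixed message for any open HTTP port.
]]

categories = {"safe", "discovery"}

local shortport = require "shortport"

-- Run against port 80 or anything identified as http.
portrule = shortport.port_or_service(80, "http")

action = function(host, port)
  return "Hello from a custom NSE script"
end
EOF

# To try it (requires nmap and a reachable target):
#   nmap --script ./hello-http.nse -p 80 scanme.nmap.org
grep -c "rule" hello-http.nse
```

The portrule decides per-port whether the action runs, which is what makes this a service script in the taxonomy described earlier; host, prerule, and postrule scripts use hostrule, prerule, and postrule fields instead.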

Conclusion

Nmap is a really powerful and useful tool that every system or network administrator needs in their security arsenal; NSE simply adds more efficiency to it.

In this article, we introduced you to the Nmap Script Engine, and looked at how to find and use the various available scripts under different categories. If you have any questions, do not hesitate to write back to us via the comment form below.

 

13 Linux Performance Monitoring Tools

If you’re working as a Linux/Unix system administrator, you surely know that you need good monitoring tools to keep watch over your computers and systems. Monitoring tools are very important in the job of a system administrator or server webmaster; they are the best way to keep an eye on what’s going on inside your systems.

13 Linux Performance Monitoring

Today we’re going to talk about another 13 Linux monitoring tools that you may use to do the job.

21. Glances – Real Time System Monitoring

Glances is a monitoring tool built to present as much information as possible in any terminal size. It automatically adapts to the size of the terminal window it runs in; in other words, it’s a responsive monitoring tool.

Glances

Features
  1. Licensed under LGPL and written in Python.
  2. Cross-platform, it works on Windows, Mac, BSD and Linux.
  3. Available in most Linux official repositories.
  4. It gives a lot of information about your system.
  5. Built using curses.

Read More: Install Glances on RHEL/CentOS/Fedora and Ubuntu/Debian

22. Sarg – Squid Bandwidth Monitoring

Sarg (Squid Analysis Report Generator) is a free and open-source tool that acts as a monitoring tool for your Squid proxy server. It creates reports about your Squid proxy server users: their IP addresses, the sites they visit, and other information.

Sarg Monitors Squid Logs

Features of Sarg
  1. Licensed under GPL 2 and available in many languages.
  2. Works under Linux & FreeBSD.
  3. Generates report in HTML format.
  4. Very easy to install & use.

Read More: Install Sarg “Squid Bandwidth Monitoring” Tool in Linux

23. Apache Status Monitoring

The Apache module mod_status is an Apache server module that allows you to monitor the status of the Apache server’s workers. It generates a report in an easy-to-read HTML format, showing you the status of all the workers, how much CPU each one is using, which requests are currently being handled, and the number of busy and idle workers.

Apache Status Monitoring

Read More: Apache Web Server Load and Page Statistics Monitoring

24. Monit – Linux Process and Services Monitoring

Monit is a nice program that monitors your Linux and Unix servers. It can monitor everything you have on your server, from core services (Apache, Nginx, etc.) to file permissions, file hashes, and web services, plus a lot more.

Monit: Linux Process Monitoring

Features of Monit
  1. Free & open-source, released under AGPL and written in C.
  2. It can be started from the command line interface or via its special web interface.
  3. Very effective in monitoring all the software on your system and services.
  4. A nice web interface with beautiful charts for CPU and RAM usage.
  5. Monit can automatically take actions in emergency situations.
  6. A lot more..

Read More: Install Monit Tool in RHEL/CentOS/Fedora and Ubuntu/Debian

25. Sysstat – All-in-One System Performance Monitoring

Another monitoring tool for your Linux system. Sysstat is not actually a single command; it’s the name of a project. The Sysstat package includes many performance monitoring tools, such as iostat, sadf, and pidstat, among others, which show you many statistics about your Linux OS.

Sysstat: Linux Statistics Monitoring

Features of Sysstat
  1. Available in many Linux distributions repositories by default.
  2. Ability to create statistics about RAM, CPU, and swap usage, besides the ability to monitor Linux kernel activity, NFS servers, sockets, TTYs, and filesystems.
  3. Ability to monitor input/output statistics for devices, tasks, etc.
  4. Ability to output reports about network interfaces and devices, with support for IPv6.
  5. Sysstat can also show you power statistics (usage, devices, fan speed, etc.).
  6. Many other features..

Read More: Install Sysstat in Linux and 20 Useful Commands of Sysstat

26. Icinga – Next Generation Server Monitoring

Unlike the other tools, Icinga is a network monitoring program. It shows you many options and details about your network connections, devices, and processes, and it’s a very good choice for those looking for a good tool to monitor their network infrastructure.

Icinga Monitoring Tool

Features of Icinga
  1. Icinga is also free and open-source.
  2. Very functional in monitoring everything you may have in networking.
  3. Support for MySQL and PostgreSQL is included.
  4. Real-time monitoring with a nice web interface.
  5. Very extensible with modules and extensions.
  6. Icinga supports applying services and actions to hosts.
  7. A lot more to discover..

Read More: Install Icinga in RHEL/CentOS 7/6

27. Observium – Network Management and Monitoring

Observium is also a network monitoring tool, designed to help you manage your network of servers easily. There are two editions: the Community Edition, which is free and open-source, and the commercial version, which costs £150/year.

Observium: Linux Network Monitoring

Features of Observium
  1. Written in PHP with MySQL database support.
  2. Has a nice web interface to output information and data.
  3. Ability to manage and monitor hundreds of hosts worldwide.
  4. The community edition is licensed under the QPL license.
  5. Works on Windows, Linux, FreeBSD and more.

Read More: Observium – Network Management and Monitoring Tool for RHEL/CentOS

28. Web VMStat – System Statistics Monitoring

Web VMStat is a very simple web application that provides real-time system usage information (CPU, RAM, swap, and input/output) in HTML format.

Web VMStat Tool for Linux

Read More: Web VMStat: A Real Time System Statistics Tool for Linux

29. PHP Server Monitoring

Unlike the other tools on this list, PHP Server Monitoring is a web script written in PHP that helps you manage your websites and hosts easily. It supports a MySQL database and is released under GPL 3 or later.

PHP Server Monitor

Features
  1. A nice web interface.
  2. Ability to send notifications to you via Email & SMS.
  3. Ability to view the most important information about CPU and RAM.
  4. A very modern logging system to log connection errors and emails that are sent.
  5. Support for cronjob services to help you monitor your servers and websites automatically.

Read More: Install PHP Server Monitoring Tool in Arch Linux

30. Linux Dash – Linux Server Performance Monitoring

As its name suggests, “Linux Dash” is a web dashboard that shows you the most important information about your Linux systems, such as RAM, CPU, filesystem, running processes, users, and bandwidth usage in real time. It has a nice GUI and it’s free and open-source.

Linux Dash Tool

Read More: Install Linux Dash (Linux Performance Monitoring) Tool in Linux

31. Cacti – Network and System Monitoring

Cacti is nothing more than a free and open-source web interface for RRDtool. It is often used to monitor bandwidth using SNMP (Simple Network Management Protocol), and it can also be used to monitor CPU usage.

Cacti Network Monitoring

Features of Cacti
  1. Free & open-source, released under GPL license.
  2. Written in PHP with PL/SQL.
  3. A cross-platform tool, it works on Windows and Linux.
  4. User management; you may create different users accounts for Cacti.

Read More: Install Cacti Network and System Monitoring Tool in Linux

32. Munin – Network Monitoring

Munin is also a web interface GUI for RRDtool. It is written in Perl and licensed under the GPL. Munin is a good tool for monitoring systems, networks, applications, and services. It works on all Unix-like operating systems and has a nice plugin system, with around 500 different plugins available to monitor anything you want on your machine. A notification system is available to send messages to the administrator when an error occurs or is resolved.

Munin Network Monitoring

Read More: Install Munin Network Monitoring Tool in Linux

33. Wireshark – Network Protocol Analyzer

Also, unlike all the other tools on our list, Wireshark is a desktop analyzer program used to analyze network packets and monitor network connections. It’s written in C with the GTK+ library and released under the GPL license.

Wireshark Network Analyzer

Features
  1. Cross-platform: it works on Linux, BSD, Mac OS X, and Windows.
  2. Command-line support: there’s a command-line version of Wireshark for analyzing data.
  3. Ability to capture VoIP calls, USB traffic, network data easily to analyze it.
  4. Available in most Linux distributions repositories.

Read More: Install Wireshark – Network Protocol Analyzer Tool in Linux

Those were the most important tools for monitoring your Linux/Unix machines. Of course there are many other tools, but these are the most well known.


20 Command Line Tools to Monitor Linux Performance

It’s a really tough job for every system or network administrator to monitor and debug Linux system performance problems every day. After five years as a Linux administrator in the IT industry, I know how hard it is to monitor systems and keep them up and running. For this reason, we’ve compiled a list of the top 20 frequently used command-line monitoring tools that might be useful for every Linux/Unix system administrator. These commands are available under all flavors of Linux and can help you monitor and find the actual causes of performance problems. This list should be enough for you to pick the tool that suits your monitoring scenario.

Linux Command Line Monitoring

1. Top – Linux Process Monitoring

The Linux top command is a performance monitoring program that is frequently used by many system administrators to monitor Linux performance, and it is available on many Linux/Unix-like operating systems. The top command displays all the running and active real-time processes in an ordered list and updates it regularly. It displays CPU usage, memory usage, swap memory, cache size, buffer size, process PID, user, commands, and much more. It also shows the memory and CPU utilization of running processes. The top command is very useful for system administrators to monitor and take corrective action when required. Let’s see the top command in action.

# top

Top Command Example

For more examples of the top command, read: 12 TOP Command Examples in Linux

2. VmStat – Virtual Memory Statistics

The Linux vmstat command is used to display statistics about virtual memory, kernel threads, disks, system processes, I/O blocks, interrupts, CPU activity, and much more. On most Linux systems vmstat is installed by default as part of the procps package. The common usage format of the command is:

# vmstat

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 810420  97380  70628    0    0   115     4   89   79  1  6 90  3  0

For more vmstat examples, read: 6 Vmstat Command Examples in Linux

3. Lsof – List Open Files

The lsof command is used on many Linux/Unix-like systems to display a list of all open files and the processes that opened them. The open files include disk files, network sockets, pipes, and devices. One of the main reasons for using this command is when a disk cannot be unmounted and displays the error that files are being used or opened. With this command you can easily identify which files are in use. The most common format for this command is:

# lsof

COMMAND     PID      USER   FD      TYPE     DEVICE     SIZE       NODE NAME
init          1      root  cwd       DIR      104,2     4096          2 /
init          1      root  rtd       DIR      104,2     4096          2 /
init          1      root  txt       REG      104,2    38652   17710339 /sbin/init
init          1      root  mem       REG      104,2   129900     196453 /lib/ld-2.5.so
init          1      root  mem       REG      104,2  1693812     196454 /lib/libc-2.5.so
init          1      root  mem       REG      104,2    20668     196479 /lib/libdl-2.5.so
init          1      root  mem       REG      104,2   245376     196419 /lib/libsepol.so.1
init          1      root  mem       REG      104,2    93508     196431 /lib/libselinux.so.1
init          1      root   10u     FIFO       0,17                 953 /dev/initctl

More lsof command usage and examples: 10 lsof Command Examples in Linux

4. Tcpdump – Network Packet Analyzer

Tcpdump is one of the most widely used command-line network packet analyzer, or packet sniffer, programs, used to capture or filter TCP/IP packets received or transmitted on a specific interface over a network. It also provides an option to save captured packets to a file for later analysis. tcpdump is available in almost all major Linux distributions.

# tcpdump -i eth0

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
22:08:59.617628 IP tecmint.com.ssh > 115.113.134.3.static-mumbai.vsnl.net.in.28472: P 2532133365:2532133481(116) ack 3561562349 win 9648
22:09:07.653466 IP tecmint.com.ssh > 115.113.134.3.static-mumbai.vsnl.net.in.28472: P 116:232(116) ack 1 win 9648
22:08:59.617916 IP 115.113.134.3.static-mumbai.vsnl.net.in.28472 > tecmint.com.ssh: . ack 116 win 64347

For more tcpdump usage, read: 12 Tcpdump Command Examples in Linux

5. Netstat – Network Statistics

Netstat is a command-line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It is a very useful tool for every system administrator to monitor network performance and troubleshoot network-related problems.

# netstat -a | more

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State
tcp        0      0 *:mysql                     *:*                         LISTEN
tcp        0      0 *:sunrpc                    *:*                         LISTEN
tcp        0      0 *:realm-rusd                *:*                         LISTEN
tcp        0      0 *:ftp                       *:*                         LISTEN
tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
tcp        0      0 localhost.localdomain:smtp  localhost.localdomain:42709 TIME_WAIT
tcp        0      0 localhost.localdomain:smtp  localhost.localdomain:42710 TIME_WAIT
tcp        0      0 *:http                      *:*                         LISTEN
tcp        0      0 *:ssh                       *:*                         LISTEN
tcp        0      0 *:https                     *:*                         LISTEN

More netstat examples: 20 Netstat Command Examples in Linux.

6. Htop – Linux Process Monitoring

Htop is a much more advanced, interactive, real-time Linux process monitoring tool. It is quite similar to the Linux top command, but it has some rich features, such as a user-friendly interface for managing processes, shortcut keys, vertical and horizontal views of the processes, and much more. Htop is a third-party tool and isn’t included in Linux systems; you need to install it using the YUM package manager tool. For more information on installation, read our article below.

# htop

Htop Command Example

For Htop installation, read: Install Htop (Linux Process Monitoring) in Linux

7. Iotop – Monitor Linux Disk I/O

Iotop is also quite similar to the top command and the Htop program, but it has an accounting function to monitor and display real-time disk I/O and processes. This tool is very useful for finding the exact processes with high disk read/write usage.

# iotop

Iotop Command Example

For Iotop installation and usage, read: Install Iotop in Linux

8. Iostat – Input/Output Statistics

Iostat is a simple tool that collects and shows system input/output storage device statistics. It is often used to trace storage device performance issues, including devices, local disks, and remote disks such as NFS.

# iostat

Linux 2.6.18-238.9.1.el5 (tecmint.com)         09/13/2012

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.60    3.65    1.04    4.29    0.00   88.42

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
cciss/c0d0       17.79       545.80       256.52  855159769  401914750
cciss/c0d0p1      0.00         0.00         0.00       5459       3518
cciss/c0d0p2     16.45       533.97       245.18  836631746  384153384
cciss/c0d0p3      0.63         5.58         3.97    8737650    6215544
cciss/c0d0p4      0.00         0.00         0.00          8          0
cciss/c0d0p5      0.63         3.79         5.03    5936778    7882528
cciss/c0d0p6      0.08         2.46         2.34    3847771    3659776

For more iostat usage and examples, visit: 6 Iostat Command Examples in Linux

9. IPTraf – Real Time IP LAN Monitoring

IPTraf is an open-source, console-based, real-time network (IP LAN) monitoring utility for Linux. It collects a variety of information, such as the IP traffic passing over the network, including TCP flag information, ICMP details, TCP/UDP traffic breakdowns, and TCP connection packet and byte counts. It also gathers general and detailed interface statistics for TCP, UDP, IP, ICMP, non-IP, IP checksum errors, interface activity, etc.

IP Traffic Monitor

For more information and usage of the IPTraf tool, please visit: IPTraf Network Monitoring Tool

10. Psacct or Acct – Monitor User Activity

The psacct or acct tools are very useful for monitoring each user’s activity on the system. Both daemons run in the background, keeping a close watch on the overall activity of each user on the system and on the resources they consume.

These tools are very useful for system administrators to track each user’s activity: what they are doing, what commands they issue, how many resources they use, how long they are active on the system, and so on.

For installation and example usage of the commands, read the article on Monitor User Activity with psacct or acct

11. Monit – Linux Process and Services Monitoring

Monit is a free, open-source, web-based process supervision utility that automatically monitors and manages system processes, programs, files, directories, permissions, checksums, and filesystems.

It monitors services like Apache, MySQL, Mail, FTP, ProFTP, Nginx, SSH, and so on. The system status can be viewed from the command line or through its own web interface.

Monit Linux Process Monitoring

Read More: Linux Process Monitoring with Monit

12. NetHogs – Monitor Per Process Network Bandwidth

NetHogs is a nice, small, open-source program (similar to the Linux top command) that keeps a tab on each process’s network activity on your system. It also keeps track of the real-time network traffic bandwidth used by each program or application.

NetHogs Linux Bandwidth Monitoring

Read More: Monitor Linux Network Bandwidth Using NetHogs

13. iftop – Network Bandwidth Monitoring

iftop is another terminal-based, free, open-source system monitoring utility that displays a frequently updated list of network bandwidth utilization (by source and destination host) passing through the network interface on your system. iftop does for network usage what ‘top‘ does for CPU usage: it is a ‘top‘-family tool that monitors a selected interface and displays the current bandwidth usage between two hosts.

iftop – Network Bandwidth Monitoring

Read More: iftop – Monitor Network Bandwidth Utilization

14. Monitorix – System and Network Monitoring

Monitorix is a free, lightweight utility designed to monitor as many system and network resources as possible on Linux/Unix servers. It has a built-in HTTP web server that regularly collects system and network information and displays it in graphs. It monitors system load average and usage, memory allocation, disk drive health, system services, network ports, mail statistics (Sendmail, Postfix, Dovecot, etc.), MySQL statistics, and much more. It is designed to monitor overall system performance and helps in detecting failures, bottlenecks, abnormal activity, etc.

Monitorix Monitoring

Read More: Monitorix a System and Network Monitoring Tool for Linux

15. Arpwatch – Ethernet Activity Monitor

Arpwatch is a program designed to monitor Address Resolution (MAC and IP address changes) of Ethernet network traffic on a Linux network. It continuously keeps watch on Ethernet traffic and produces a log of IP and MAC address pair changes, along with timestamps, on a network. It can also send email alerts to the administrator when a pairing is added or changed. It is very useful for detecting ARP spoofing on a network.

Read More: Arpwatch to Monitor Ethernet Activity

16. Suricata – Network Security Monitoring

Suricata is a high-performance, open-source network security and intrusion detection and prevention monitoring system for Linux, FreeBSD, and Windows. It was designed and is owned by the OISF (Open Information Security Foundation), a non-profit foundation.

Read More: Suricata – A Network Intrusion Detection and Prevention System

17. VnStat PHP – Monitoring Network Bandwidth

VnStat PHP is a web-based frontend for the popular networking tool “vnstat“. VnStat PHP monitors network traffic usage in a nice graphical mode. It displays total IN and OUT network traffic usage in hourly, daily, and monthly views, plus a full summary report.

Read More: VnStat PHP – Monitoring Network Bandwidth

18. Nagios – Network/Server Monitoring

Nagios is a leading, powerful, open-source monitoring system that enables network/system administrators to identify and resolve server-related problems before they affect major business processes. With the Nagios system, administrators can monitor remote Linux and Windows hosts, switches, routers, and printers in a single window. It shows critical warnings and indicates when something goes wrong in your network or servers, which helps you begin remediation before outages occur.

Read More: Install Nagios Monitoring System to Monitor Remote Linux/Windows Hosts

19. Nmon: Monitor Linux Performance

Nmon (short for Nigel’s Performance Monitor) is a tool used to monitor all Linux resources, such as CPU, memory, disk usage, network, top processes, NFS, the kernel, and much more. This tool comes in two modes: Online Mode and Capture Mode.

Online Mode is used for real-time monitoring, and Capture Mode is used to store the output in CSV format for later processing.

Nmon Monitoring

Read More: Install Nmon (Performance Monitoring) Tool in Linux

20. Collectl: All-in-One Performance Monitoring Tool

Collectl is a yet another powerful and feature rich command line based utility, that can be used to gather information about Linux system resources such as CPU usage, memory, network, inodes, processes, nfs, tcp, sockets and much more.

Collectl Monitoring


Read More: Install Collectl (All-in-One Performance Monitoring) Tool in Linux

We would like to know what kind of monitoring programs you use to monitor the performance of your Linux servers. If we’ve missed any important tool that you would like us to include in this list, please let us know via the comments, and please don’t forget to share this article.

Source

Setting Up Real-Time Monitoring with ‘Ganglia’ for Grids and Clusters of Linux Servers

Ever since system administrators have been in charge of managing servers and groups of machines, tools like monitoring applications have been their best friends. You will probably be familiar with tools like Nagios, Zabbix, Icinga, and Centreon. While those are the heavyweights of monitoring, setting them up and fully taking advantage of their features may be somewhat difficult for new users.

In this article we will introduce you to Ganglia, a monitoring system that is easily scalable and allows you to view a wide variety of system metrics from Linux servers and clusters (plus graphs) in real time.

Install Ganglia Monitoring in Linux


Ganglia lets you set up grids (locations) and clusters (groups of servers) for better organization.

Thus, you can create a grid composed of all the machines in a remote environment, and then group those machines into smaller sets based on other criteria.

In addition, Ganglia’s web interface is optimized for mobile devices, and also allows you to export data in .csv and .json formats.

Our test environment will consist of a central CentOS 7 server (IP address 192.168.0.29) where we will install Ganglia, and an Ubuntu 14.04 machine (192.168.0.32), the box that we want to monitor through Ganglia’s web interface.

Throughout this guide we will refer to the CentOS 7 system as the master node, and to the Ubuntu box as the monitored machine.

Installing and Configuring Ganglia

To install the monitoring utilities on the master node, follow these steps:

1. Enable the EPEL repository and then install Ganglia and related utilities from there:

# yum update && yum install epel-release
# yum install ganglia rrdtool ganglia-gmetad ganglia-gmond ganglia-web 

The packages installed in the step above along with ganglia, the application itself, perform the following functions:

  1. rrdtool, the Round-Robin Database tool, is used to store and display the variation of data over time using graphs.
  2. ganglia-gmetad is the daemon that collects monitoring data from the hosts that you want to monitor. On those hosts and on the master node it is also necessary to install ganglia-gmond (the monitoring daemon itself).
  3. ganglia-web provides the web frontend where we will view the historical graphs and data about the monitored systems.

2. Set up authentication for the Ganglia web interface (/usr/share/ganglia). We will use basic authentication as provided by Apache.

If you want to explore more advanced security mechanisms, refer to the Authorization and Authentication section of the Apache docs.

To accomplish this goal, create a username and assign a password to access a resource protected by Apache. In this example, we will create a username called adminganglia and assign a password of our choosing, which will be stored in /etc/httpd/auth.basic (feel free to choose another directory and/or file name – as long as Apache has read permissions on those resources, you will be fine):

# htpasswd -c /etc/httpd/auth.basic adminganglia

Enter the password for adminganglia twice before proceeding.

3. Modify /etc/httpd/conf.d/ganglia.conf as follows:

Alias /ganglia /usr/share/ganglia
<Location /ganglia>
    AuthType basic
    AuthName "Ganglia web UI"
    AuthBasicProvider file
    AuthUserFile "/etc/httpd/auth.basic"
    Require user adminganglia
</Location>

4. Edit /etc/ganglia/gmetad.conf:

First, use the gridname directive followed by a descriptive name for the grid you’re setting up:

gridname "Home office"

Then, use data_source followed by a descriptive name for the cluster (group of servers), a polling interval in seconds and the IP address of the master and monitored nodes:

data_source "Labs" 60 192.168.0.29:8649 # Master node
data_source "Labs" 60 192.168.0.32 # Monitored node
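As a sketch, the stanza above can also be generated from shell variables, which is handy when the same snippet is reused for other clusters. The file name gmetad.conf.snippet is just an illustration, and note that gmetad also accepts several sources on a single data_source line:

```shell
# Generate the gridname/data_source stanza from variables.
GRID="Home office"
CLUSTER="Labs"
INTERVAL=60                                # polling interval in seconds
SOURCES="192.168.0.29:8649 192.168.0.32"   # master and monitored nodes
{
  printf 'gridname "%s"\n' "$GRID"
  printf 'data_source "%s" %d %s\n' "$CLUSTER" "$INTERVAL" "$SOURCES"
} > gmetad.conf.snippet
cat gmetad.conf.snippet
```

The generated lines can then be pasted into /etc/ganglia/gmetad.conf.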

5. Edit /etc/ganglia/gmond.conf.

a) Make sure the cluster block looks as follows:

cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

b) In the udp_send_channel block, comment out the mcast_join directive:

udp_send_channel   {
  #mcast_join = 239.2.11.71
  host = localhost
  port = 8649
  ttl = 1
}

c) Finally, comment out the mcast_join and bind directives in the udp_recv_channel block:

udp_recv_channel {
  #mcast_join = 239.2.11.71 ## comment out
  port = 8649
  #bind = 239.2.11.71 ## comment out
}

Save the changes and exit.

6. Open port 8649/udp and allow PHP scripts (run via Apache) to connect to the network using the necessary SELinux boolean:

# firewall-cmd --add-port=8649/udp
# firewall-cmd --add-port=8649/udp --permanent
# setsebool -P httpd_can_network_connect 1

7. Restart Apache, gmetad, and gmond. Also, make sure they are enabled to start on boot:

# systemctl restart httpd gmetad gmond
# systemctl enable httpd gmetad gmond

At this point, you should be able to open the Ganglia web interface at http://192.168.0.29/ganglia and log in with the credentials from Step 2.

Ganglia Web Interface


8. In the Ubuntu host, we will only install ganglia-monitor, the equivalent of ganglia-gmond in CentOS:

$ sudo aptitude update && sudo aptitude install ganglia-monitor

9. Edit the /etc/ganglia/gmond.conf file in the monitored box. This should be identical to the same file on the master node, except that the lines we commented out in the cluster, udp_send_channel, and udp_recv_channel blocks should be left enabled:

cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

udp_send_channel   {
  mcast_join = 239.2.11.71
  host = localhost
  port = 8649
  ttl = 1
}

udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  bind = 239.2.11.71
}

Then, restart the service:

$ sudo service ganglia-monitor restart

10. Refresh the web interface and you should be able to view the statistics and graphs for both hosts inside the Home office grid / Labs cluster (use the dropdown menu next to the Home office grid to choose a cluster, Labs in our case):

Ganglia Home Office Grid Report


Using the menu tabs (highlighted above) you can access lots of interesting information about each server individually and in groups. You can even compare the stats of all the servers in a cluster side by side using the Compare Hosts tab.

Simply choose a group of servers using a regular expression and you will be able to see a quick comparison of how they are performing:

Ganglia Host Server Information

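The pattern the Compare Hosts tab expects is an ordinary regular expression over hostnames. If you want to sanity-check a pattern before typing it in, grep works as a quick stand-in (the hostnames below are made up for illustration):

```shell
# Which of these hosts would the pattern ^web[0-9]+$ select?
printf '%s\n' web01 web02 db01 cache01 | grep -E '^web[0-9]+$'
```

Here grep prints web01 and web02, so the same pattern entered in Compare Hosts would select just the web servers.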

One of the features I personally find most appealing is the mobile-friendly summary, which you can access using the Mobile tab. Choose the cluster you’re interested in and then the individual host:

Ganglia Mobile Friendly Summary View


Summary

In this article we have introduced Ganglia, a powerful and scalable monitoring solution for grids and clusters of servers. Feel free to install, explore, and play around with Ganglia as much as you like (by the way, you can even try out Ganglia in a demo provided on the project’s official website).

While you’re at it, you will also discover that several well-known companies, both inside the IT world and beyond, use Ganglia. There are plenty of good reasons for that besides the ones we have shared in this article, with ease of use and graphs along with stats (it’s nice to put a face to the name, isn’t it?) probably being at the top.

 
Source

How to Install ‘atop’ to Monitor Logging Activity of Linux System Processes

Atop is a full-screen performance monitor that can report the activity of all processes, even the ones that have already completed. Atop also allows you to keep a daily log of system activities, which can be used for different purposes, including analysis, debugging, and pinpointing the cause of a system overload.

Atop Features

  1. Check the overall resource consumption by all processes
  2. Check how much of the available resources have been utilized
  3. Logging of resource utilization
  4. Check resource consumption by individual threads
  5. Monitor process activity per user or per program
  6. Monitor network activity per process

The latest version of Atop is 2.1 and includes the following features:

  1. New logging mechanism
  2. New key flags
  3. New Fields (counters)
  4. Bug fixes
  5. Configurable colors

Installing Atop Monitoring Tool on Linux

1. In this article, I will show you how to install and configure atop on Linux systems like RHEL/CentOS/Fedora and Debian/Ubuntu based derivatives, so that you can easily monitor your system processes.

On RHEL/CentOS/Fedora

First you will need to enable the epel repository under RHEL/CentOS systems in order to install the atop monitoring tool.

After you’ve enabled the epel repository, you can simply use the yum package manager to install the atop package as shown below.

# yum install atop
Install Atop Using Epel Repo


Alternatively, you may download the atop rpm package directly using the following wget command and then install it with rpm.

------------------ For 32-bit Systems ------------------
# wget http://www.atoptool.nl/download/atop-2.1-1.i586.rpm
# rpm -ivh atop-2.1-1.i586.rpm

------------------ For 64-bit Systems ------------------
# wget http://www.atoptool.nl/download/atop-2.1-1.x86_64.rpm
# rpm -ivh atop-2.1-1.x86_64.rpm 
Install Atop Using RPM Package


On Debian/Ubuntu

Under Debian based systems, atop can be installed from the default repositories using apt-get command.

$ sudo apt-get install atop
Install Atop Under Debian Systems


2. After installing atop, make sure it starts upon system boot by running the following commands:

------------------ Under RedHat based systems ------------------
# chkconfig --add atop
# chkconfig atop on --level 235
Enable Atop at System Boot


$ sudo update-rc.d atop defaults             [Under Debian based systems]
Add Atop at System Boot


3. By default atop logs all activity every 600 seconds. As this might not be that useful, I will change atop’s configuration so that all activities are logged at intervals of 60 seconds. For that purpose run the following command:

# sed 's/600/60/' /etc/atop/atop.daily -i                [Under RedHat based systems]
$ sudo sed 's/600/60/' /etc/default/atop -i              [Under Debian based systems]
Change Atop Log Interval Time


Now that you have atop installed and configured, the next logical question is “How do I use it?”. Actually there are a few ways:

4. If you just run atop in a terminal you will have a top-like interface, which updates every 10 seconds.

# atop

You should see a screen similar to this one:

Atop System Process Monitoring


You can use different keys within atop to sort the information by different criteria. Here are some examples:

5. Scheduling information – “s” key – shows scheduling information for the main thread of each process. It also indicates how many processes are in the “running” state:

# atop -s
Shows Scheduling Information of Process


6. Memory consumption – “m” key – shows memory-related information about all running processes. The VSIZE column indicates the total virtual memory and RSIZE shows the resident size used per process.

The VGROW and RGROW indicate the growth during the last interval. The MEM column indicates the resident memory usage by the process.

# atop -m
Shows Process Memory Information


7. Show disk utilization – “d” key – shows disk activity at the system level (LVM and DSK columns). Disk activity is shown as the amount of data being transferred by reads/writes (RDDSK/WRDSK columns).

# atop -d
Shows Disk Utilization


8. Show variable information – “v” key – this option provides more specific data about the running processes, like uid, pid, gid, CPU usage, etc:

# atop -v
Shows UID PID Information


9. Show command of processes – “c” key:

# atop -c
Shows Command Process


10. Cumulative per program – “p” key – the information shown in this window is accumulated per program. The rightmost column shows which programs are active (during the intervals) and the leftmost column shows how many processes they have spawned.

# atop -p
Shows Active and Spawned Programs


11. Cumulative per user – “u” key – this screen shows which users were/are active during the last interval and indicates how many processes each user runs/ran.

# atop -u
Shows User Processes


12. Network usage – “n” key (requires the netatop kernel module) – shows the network activity per process.

To install and activate the netatop kernel module, you need to have the following dependency packages installed on your system from the distribution’s repository.

# yum install kernel-devel zlib-devel                [Under RedHat based systems]
$ sudo apt-get install zlib1g-dev                    [Under Debian based systems] 

Next download the netatop tarball and build the module and daemon.

# wget http://www.atoptool.nl/download/netatop-0.3.tar.gz
# tar -xvf netatop-0.3.tar.gz
# cd netatop-0.3
Download Netatop Package


Extract Netatop Files


In the ‘netatop-0.3‘ directory, run the following commands to build and install the module.

# make
# make install
Install Netatop Module


After the netatop module is installed successfully, load the module and start the daemon.

# service netatop start
OR
$ sudo service netatop start

If you want to load the module automatically after boot, run one of the following commands depending on the distribution.

# chkconfig --add netatop                [Under RedHat based systems]
$ sudo update-rc.d netatop defaults      [Under Debian based systems] 

Now check network usage using “n” key.

# atop -n
Shows Network Usage


13. Atop keeps its history files in the /var/log/atop directory, named as follows:

/var/log/atop/atop_YYYYMMDD

Where YYYY is the year, MM is the month and DD is the current day of the month. For example:

atop_20150423
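Following that naming scheme, the path of today's history file can be built in the shell. This is a sketch; the replay command at the end is commented out since the file only exists on systems where atop logging is running:

```shell
# Build the path of today's atop history file from the naming scheme above.
LOGDIR=/var/log/atop
TODAY=$(date +%Y%m%d)            # e.g. 20150423
LOGFILE="$LOGDIR/atop_$TODAY"
echo "$LOGFILE"
# Replay it starting at 05:05:
# atop -r "$LOGFILE" -b 05:05
```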

All files created by atop are binary. They are not log or text files and only atop can read them. Note however that logrotate can rotate those files.

Let’s say you wish to see today’s logs beginning at 05:05 server time. Simply run the following command.

# atop -r -b 05:05 -l 1
Check Atop Logs


Atop has quite a lot of options and you may wish to see the help menu. For that purpose, in the atop window simply press the “?” key to see the list of arguments that atop can use. Here is a list of the most frequently used options:

Atop Options and Usage


I hope you find this article useful and that it helps you narrow down or prevent issues with your Linux system. In case you have any questions or would like clarification on the usage of atop, please post a comment in the comment section below.

 
Source

CoreFreq – A Powerful CPU Monitoring Tool for Linux Systems

CoreFreq is a CPU monitoring program designed for Intel 64-bit processors; it supports architectures such as Atom, Core2, Nehalem, SandyBridge and above, as well as AMD Family 0F.

It is built around a kernel module that retrieves internal performance counters from each CPU core, a daemon that gathers the data, and a small console client that connects to the daemon and displays the collected data.

CoreFreq CPU Monitoring

It provides a framework to retrieve CPU data with a high degree of accuracy:

  1. Core frequencies & ratios; SpeedStep (EIST), Turbo Boost, Hyper-Threading (HTT) as well as Base Clock.
  2. Performance counters in conjunction with Time Stamp Counter (TSC), Unhalted Core Cycles (UCC), Unhalted Reference Cycles (URC).
  3. Number of instructions per cycle or second, IPS, IPC, or CPI.
  4. CPU C-States C0 C1 C3 C6 C7 – C1E – Auto/UnDemotion of C1 C3.
  5. DTS Temperature along with Tjunction Max, Thermal Monitoring TM1 TM2 state.
  6. Topology map including Caches for bootstrap together with application CPU.
  7. Processor features, brand plus architecture strings.

Note: This tool is most useful and appropriate for expert Linux users and experienced system administrators; however, novice users can gradually learn how to use it purposefully.

How Does CoreFreq Work

It functions by invoking a Linux Kernel module which then uses:

  1. asm code to keep the readings of the performance counters as close as possible.
  2. per-CPU slab data memory and a high-resolution timer.
  3. compliant with suspend / resume and CPU Hot-Plug.
  4. shared memory to protect the kernel from the user-space part of the program.
  5. atomic synchronization of threads to do away with mutexes and deadlock.

How to Install CoreFreq in Linux

To install CoreFreq, first you need to install the prerequisites (Development Tools) to compile and build the program from source.

$ sudo yum group install 'Development Tools'           [On CentOS/RHEL]
$ sudo dnf  group install 'Development Tools'          [On Fedora 22+ Versions]
$ sudo apt-get install dkms git libpthread-stubs0-dev  [On Debian/Ubuntu] 

Next clone the CoreFreq source code from the Github repository, move into the download folder and compile and build the program:

$ git clone https://github.com/cyring/CoreFreq.git
$ cd CoreFreq
$ make 

Build CoreFreq Program


Note: Arch Linux users can install corefreq-git from the AUR.

Now run the following commands to load the Linux kernel module from the local directory, followed by the daemon:

$ sudo insmod corefreqk.ko
$ sudo ./corefreqd

Then, start the client as a regular user:

$ ./corefreq-cli

CoreFreq Linux CPU Monitoring


From the interface above, you can use shortcut keys:

  1. F2 to display a usage menu as seen at the top section of the screen.
  2. Right and Left arrows to move over the menu tabs.
  3. Up and Down arrows to select a menu item, then click [Enter].
  4. F4 will close the program.
  5. h will open a quick reference.

To view all usage options, type the command below:

$ ./corefreq-cli -h
CoreFreq Options
CoreFreq.  Copyright (C) 2015-2017 CYRIL INGENIERIE

usage:	corefreq-cli [-option <arguments>]
	-t	Show Top (default)
	-d	Show Dashboard
		  arguments: <left> <top> <marginWidth> <marginHeight>
	-c	Monitor Counters
	-i	Monitor Instructions
	-s	Print System Information
	-M	Print Memory Controller
	-m	Print Topology
	-u	Print CPUID
	-k	Print Kernel
	-h	Print out this message

Exit status:
0	if OK,
1	if problems,
>1	if serious trouble.

Report bugs to labs[at]cyring.fr

To print info about the kernel, run:

$ ./corefreq-cli -k

Print CPU identification details:

$ ./corefreq-cli -u

You can as well monitor CPU instructions in real-time:

$ ./corefreq-cli -i

Enable tracing of counters as below:

$ ./corefreq-cli -c

For more information and usage, visit the CoreFreq Github repository: https://github.com/cyring/CoreFreq

In this article, we reviewed a powerful CPU monitoring tool, which may be more useful to Linux experts or experienced system administrators as compared to novice users.

Source

6 Useful Tools to Monitor MongoDB Performance


We recently showed how to install MongoDB in Ubuntu 18.04. Once you have successfully deployed your database, you need to monitor its performance while it is running. This is one of the most important tasks under database administration.

Luckily enough, MongoDB provides various methods for retrieving its performance and activity. In this article, we will look at monitoring utilities and database commands for reporting statistics about the state of a running MongoDB instance.

1. Mongostat

Mongostat is similar in functionality to the vmstat monitoring tool, which is available on all major Unix-like operating systems such as Linux, FreeBSD, Solaris as well as macOS. Mongostat is used to get a quick overview of the status of your database; it provides a dynamic real-time view of a running mongod or mongos instance. It retrieves the counts of database operations by type, such as insert, query, update, delete and more.

You can run mongostat as shown. Note that if you have authentication enabled, put the user password in single quotes to avoid getting an error, especially if you have special characters in it.

$ mongostat -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin"

Monitor MongoDB Performance

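The single quotes in the command above matter because characters like $ would otherwise be processed by the shell before mongostat ever sees the password. A quick sketch with the sample password from above shows why:

```shell
# Single quotes hand the password to mongostat untouched.
PASS='=@!#@%$admin1'
echo "$PASS"
# Had the assignment used double quotes, the shell would have tried to
# expand $admin1 (an undefined variable) and mangled the password; in
# interactive shells, ! can also trigger history expansion.
```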

For more mongostat usage options, type the following command.

$ mongostat --help 

2. Mongotop

Mongotop also provides a dynamic real-time view of a running MongoDB instance. It tracks the amount of time a MongoDB instance spends reading and writing data. It returns values every second, by default.

$ mongotop -u "root" -p '=@!#@%$admin1'  --authenticationDatabase "admin"

Monitor MongoDB Activity


For more mongotop usage options, type the following command.

$ mongotop --help 

3. serverStatus Command

First, you need to run the following command to login into mongo shell.

$ mongo -u "root" -p '=@!#@%$admin1' --authenticationDatabase "admin"

Then run the serverStatus command, which provides an overview of the database’s state, by collecting statistics about the instance.

>db.runCommand( { serverStatus: 1 } )
OR
>db.serverStatus()

4. dbStats Command

The dbStats command returns storage statistics for a particular database, such as the amount of storage used, the quantity of data contained in the database, and object, collection, and index counters.

>db.runCommand({ dbStats: 1 } )
OR
>db.stats()

5. collStats

The collStats command collects statistics similar to those provided by dbStats, but at the collection level: its output includes a count of the objects in the collection, the size of the collection, the amount of disk space consumed by it, and information concerning its indexes.

>db.runCommand( { collStats : "authors", scale: 1024 } )

6. replSetGetStatus Command

The replSetGetStatus command outputs the status of the replica set from the perspective of the server that processed the command. This command must be run against the admin database in the following form.

>db.adminCommand( { replSetGetStatus : 1 } )

In addition to the above utilities and database commands, you can also use supported third-party monitoring tools either directly, or via their own plugins. These include mtop, munin and nagios.

For more information, consult: Monitoring for MongoDB Documentation.

That’s it for now! In this article, we have covered some useful monitoring utilities and database commands for reporting statistics about the state of a running MongoDB instance. Use the feedback form below to ask any questions or share your thoughts with us.

 

Source

Get started with Joplin, a note-taking app

Learn how open source tools can help you be more productive in 2019. First up, Joplin.


In the realm of productivity tools, note-taking apps are VERY handy. Yes, you can use the open source NixNote to access Evernote notes, but it’s still linked to the Evernote servers and still relies on a third party for security. And while you CAN export your Evernote notes from NixNote, the only format options are NixNote XML or PDF files.


Joplin’s GUI.

Enter Joplin. Joplin is a NodeJS application that runs and stores notes locally, allows you to encrypt your notes and supports multiple sync methods. Joplin can run as a console or graphical application on Windows, Mac, and Linux. Joplin also has mobile apps for Android and iOS, meaning you can take your notes with you without a major hassle. Joplin even allows you to format notes with Markdown, HTML, or plain text.


Joplin’s Android app.

One really nice thing about Joplin is it supports two kinds of notes: plain notes and to-do notes. Plain notes are what you expect—documents containing text. To-do notes, on the other hand, have a checkbox in the notes list that allows you to mark them “done.” And since the to-do note is still a note, you can include lists, documentation, and additional to-do items in a to-do note.

When using the GUI, you can toggle editor views between plain text, WYSIWYG, and a split screen showing both the source text and the rendered view. You can also specify an external editor in the GUI, making it easy to update notes with Vim, Emacs, or any other editor capable of handling text documents.


Joplin in the console.

The console interface is absolutely fantastic. While it lacks a WYSIWYG editor, it defaults to the text editor for your login. It also has a powerful command mode that allows you to do almost everything you can do in the GUI version. And it renders Markdown correctly in the viewer.

You can group notes in notebooks and tag notes for easy grouping across your notebooks. And it even has built-in search, so you can find things if you forget where you put them.

Overall, Joplin is a first-class note-taking app (and a great alternative to Evernote) that will help you be organized and more productive over the next year.

How to join a Linux computer to an Active Directory domain

Organizations with an AD infrastructure in place that wish to provision Linux computers can bind those devices to their existing domain.


I’m not as strong with Linux distributions as I am with Windows and macOS. Yet when I was recently presented with a question on how to bind Linux hosts to an existing Windows AD domain, I accepted the challenge and along with it, the opportunity to pick up some more Linux experience and help a friend out.

Most IT professionals I meet are adamant about performing their tasks with the least amount of hands-on, physical presence as possible. This is not to say that they do not wish to get their hands dirty per se, but rather speaks more to the fact that IT generally has a lot on its plate so working smarter—not harder—is always greater than tying up all your resources on just one or two trouble tickets.


Just about any administrative task you wish to perform is possible from the powerful, robust command-line interface (CLI). This is one of the areas in which Linux absolutely shines. Regardless of whether the commands are entered manually, remotely via SSH, or piped in automatically using scripts, the ability to manage Linux hosts natively is second to none. Armed with this new-found knowledge, we head directly to the CLI to resolve this problem.

Before diving into the crux of how to perform this domain bind, please note that I included two distinct (though quite similar) processes to accomplish this task. The process used will depend on which family your distribution of choice belongs to: Debian or Red Hat (RHEL).

Joining Debian-based distros to Active Directory

Launch Terminal and enter the following command:

sudo apt-get install realmd

After ‘realmd’ installs successfully, enter the next command to join the domain:

sudo realm join domain.tld --user username

Enter the password of the account with permissions to join devices to the domain, and press the enter key. If the dependencies are not currently loaded onto the Linux host, the binding process will trigger them to be installed automatically.

Joining RHEL-based distros to Active Directory

Launch Terminal and enter the following command:

yum install sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients policycoreutils-python -y

Once the dependencies install successfully, enter the next command to join the domain:

realm join domain.tld --user=username

After authentication occurs for the first time, Linux will automatically create the /etc/sssd/sssd.conf and /etc/krb5.conf files, as well as /etc/krb5.keytab, which control how the system connects to and communicates with Kerberos (the authentication protocol used by Microsoft’s Active Directory).
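For reference, a minimal sssd.conf for an AD join looks roughly like the sketch below. This is an illustration for the hypothetical domain.tld used above, not the exact file realm join will generate; it is written to a local example file here rather than to /etc:

```shell
# Illustrative only: written to a local example file, not /etc/sssd/sssd.conf.
cat > sssd.conf.example <<'EOF'
[sssd]
domains = domain.tld
config_file_version = 2
services = nss, pam

[domain/domain.tld]
ad_domain = domain.tld
krb5_realm = DOMAIN.TLD
id_provider = ad
access_provider = ad
cache_credentials = True
EOF
echo "wrote sssd.conf.example"
```

Comparing your generated /etc/sssd/sssd.conf against a known-good layout like this is a quick first step when domain logins fail.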

Note: The dependencies are installed with their default configurations. This may or may not work with your environment’s specific set up. Additional configuration may be necessary before domain accounts can be authenticated.

Confirm domain (realm) joined successfully

At Terminal, enter the following command for a list of the domain, along with configuration information set:

realm list

Alternatively, you can always check the properties of the computer object in Active Directory Users and Computers snap-in to verify that it was both created and has the proper trust relationship established between host and AD.

Source

How to Install Skype on CentOS 7

Skype is one of the most popular communication applications in the world that allows you to make free online audio and video calls, and affordable international calling to mobiles and landlines worldwide.

Skype is not an open source application and it is not included in the CentOS repositories.

This tutorial explains how to install the latest version of Skype on CentOS 7.

Prerequisites

The user you are logged in as must have sudo privileges to be able to install packages.

Installing Skype on CentOS

Perform the following steps to install Skype on CentOS.

1. Download Skype

Start by opening your terminal either by using the Ctrl+Alt+T keyboard shortcut or by clicking on the terminal icon.

Download the latest Skype .rpm package using the following wget command:

wget https://go.skype.com/skypeforlinux-64.rpm

2. Install Skype

Once the download is complete, install Skype by running the following command as a user with sudo privileges:

sudo yum localinstall ./skypeforlinux-64.rpm

That’s it. Skype has been installed on your CentOS desktop.

3. Start Skype

Now that Skype has been installed, you can start it either from the command line by typing skypeforlinux or by clicking on the Skype icon (Applications -> Internet -> Skype).

When you start Skype for the first time, a window like the following will appear:

From here you can sign in to Skype with your Microsoft Account and start chatting and talking with your friends and family.

Updating Skype

During the installation process, the official Skype repository will be added to your system. Use the cat command to verify the file contents:

cat /etc/yum.repos.d/skype-stable.repo

[skype-stable]
name=skype (stable)
baseurl=https://repo.skype.com/rpm/stable/
enabled=1
gpgcheck=1
gpgkey=https://repo.skype.com/data/SKYPE-GPG-KEY

This ensures that your Skype installation will be updated automatically, through your desktop’s standard Software Update tool, whenever a new version is released.

Conclusion

In this tutorial, you’ve learned how to install Skype on your CentOS 7 desktop.

Feel free to leave a comment below.

Source
