Get started with WTF, a dashboard for the terminal

Keep key information in view with WTF, the sixth in our series on open source tools that will make you more productive in 2019.

Person standing in front of a giant computer screen with numbers, data

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year’s resolutions, the itch to start the year off right, and of course, an “out with the old, in with the new” attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn’t have to be that way.

Here’s the sixth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

WTF

Once upon a time, I was doing some consulting at a firm that used Bloomberg Terminals. My reaction was, “Wow, that’s WAY too much information on one screen.” These days, however, it seems like I can’t get enough information on a screen when I’m working and have multiple web pages, dashboards, and console apps open to try to keep track of things.

While tmux and Screen can do split screens and multiple windows, they are a pain to set up, and the keybindings can take a while to learn (and often conflict with other applications).

WTF is a simple, easily configured information dashboard for the terminal. It is written in Go, uses a YAML configuration file, and can pull data from several different sources. All the data sources are contained in modules and include things like weather, issue trackers, date and time, Google Sheets, and a whole lot more. Some panes are interactive, and some just update with the most recent information available.

Setup is as easy as downloading the latest release for your operating system and running the command. Since it is written in Go, it is very portable and should run anywhere you can compile it (although the developer only builds for Linux and MacOS at this time).
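
For example, on Linux this can be as simple as unpacking the release archive and launching the binary (the archive and directory names below are illustrative; check the project's releases page for the actual filenames):

# download the release archive for your platform from the project's releases page, then:
tar -xzf wtf_linux_amd64.tar.gz
cd wtf_linux_amd64
./wtf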

WTF default screen

When you run WTF for the first time, you’ll get the default screen, identical to the image above.

WTF's default config.yml

You also get the default configuration file in ~/.wtf/config.yml, and you can edit the file to suit your needs. The grid layout is configured in the top part of the file.

grid:
  columns: [45, 45]
  rows: [7, 7, 7, 4]

The numbers in the grid settings represent the character dimensions of each block. The default configuration is two columns of 40 characters, two rows 13 characters tall, and one row 4 characters tall. In the code above, I made the columns wider (45, 45), the rows smaller, and added a fourth row so I can have more widgets.
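
For reference, the stock grid section (two 40-character columns, two 13-character rows, and one 4-character row, as described above) looks like this before those edits:

grid:
  columns: [40, 40]
  rows: [13, 13, 4]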

prettyweather on WTF

I like to see the day’s weather on my dashboard. There are two weather modules to choose from: Weather, which shows just the text information, and Pretty Weather, which is colorful and uses text-based graphics in the display.

prettyweather:
  enabled: true
  position:
    top: 0
    left: 1
    height: 2
    width: 1

This code creates a pane two blocks tall (height: 2) and one block wide (width: 1), positioned in the second column (left: 1) on the top row (top: 0), containing the Pretty Weather module.

Some modules, like Jira, GitHub, and Todo, are interactive, and you can scroll, update, and save information in them. You can move between the interactive panes using the Tab key. The \ key brings up a help screen for the active pane so you can see what you can do and how. The Todo module lets you add, edit, and delete to-do items, as well as check them off as you complete them.

WTF dashboard with GitHub, Todos, Power, and the weather

There are also modules to execute commands and present the output, watch a text file, and monitor build and integration server output. All the documentation is very well done.

WTF is a valuable tool for anyone who needs to see a lot of data on one screen from different sources.

Source

Kali Linux Tools Listing (with source URLs) – 2019

Kali Linux Tools - Logo
Tool categories:

  • Information Gathering
  • Vulnerability Analysis
  • Exploitation Tools
  • Web Applications
  • Stress Testing
  • Sniffing & Spoofing
  • Password Attacks
  • Maintaining Access
  • Hardware Hacking
  • Reverse Engineering
  • Reporting Tools

Kali Linux Metapackages

Metapackages give you the flexibility to install specific subsets of tools based on your particular needs. For instance, if you are going to conduct a wireless security assessment, you can quickly create a custom Kali ISO and include the kali-linux-wireless metapackage to only install the tools you need.

For more information, please refer to the original Kali Linux Metapackages blog post.
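
On a system that is already running Kali, any of the metapackages listed below can typically be installed with APT, for example:

sudo apt update
sudo apt install kali-linux-wireless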

kali-linux: The Base Kali Linux System
  • kali-desktop-common
  • apache2
  • apt-transport-https
  • atftpd
  • axel
  • default-mysql-server
  • exe2hexbat
  • expect
  • florence
  • gdisk
  • git
  • gparted
  • iw
  • lvm2
  • mercurial
  • mlocate
  • netcat-traditional
  • openssh-server
  • openvpn
  • p7zip-full
  • parted
  • php
  • php-mysql
  • rdesktop
  • rfkill
  • samba
  • screen
  • snmp
  • snmpd
  • subversion
  • sudo
  • tcpdump
  • testdisk
  • tftp
  • tightvncserver
  • tmux
  • unrar | unar
  • upx-ucl
  • vim
  • whois
  • zerofree
kali-linux-full: The Default Kali Linux Install
  • kali-linux
  • 0trace
  • ace-voip
  • afflib-tools
  • aircrack-ng
  • amap
  • apache-users
  • apktool
  • armitage
  • arp-scan
  • arping | iputils-arping
  • arpwatch
  • asleap
  • automater
  • autopsy
  • backdoor-factory
  • bbqsql
  • bdfproxy
  • bed
  • beef-xss
  • binwalk
  • blindelephant
  • bluelog
  • blueranger
  • bluesnarfer
  • bluez
  • bluez-hcidump
  • braa
  • btscanner
  • bulk-extractor
  • bully
  • burpsuite
  • cabextract
  • cadaver
  • cdpsnarf
  • cewl
  • cgpt
  • cherrytree
  • chirp
  • chkrootkit
  • chntpw
  • cisco-auditing-tool
  • cisco-global-exploiter
  • cisco-ocs
  • cisco-torch
  • clang
  • clusterd
  • cmospwd
  • commix
  • copy-router-config
  • cowpatty
  • creddump
  • crunch
  • cryptcat
  • cryptsetup
  • curlftpfs
  • cutycapt
  • cymothoa
  • darkstat
  • davtest
  • dbd
  • dc3dd
  • dcfldd
  • ddrescue
  • deblaze
  • dex2jar
  • dhcpig
  • dirb
  • dirbuster
  • dmitry
  • dnmap
  • dns2tcp
  • dnschef
  • dnsenum
  • dnsmap
  • dnsrecon
  • dnstracer
  • dnswalk
  • doona
  • dos2unix
  • dotdotpwn
  • dradis
  • driftnet
  • dsniff
  • dumpzilla
  • eapmd5pass
  • edb-debugger
  • enum4linux
  • enumiax
  • ethtool
  • ettercap-graphical
  • ewf-tools
  • exiv2
  • exploitdb
  • extundelete
  • fcrackzip
  • fern-wifi-cracker
  • ferret-sidejack
  • fierce
  • fiked
  • fimap
  • findmyhash
  • flasm
  • foremost
  • fping
  • fragroute
  • fragrouter
  • framework2
  • ftester
  • funkload
  • galleta
  • gdb
  • ghost-phisher
  • giskismet
  • golismero
  • gpp-decrypt
  • grabber
  • guymager
  • hackrf
  • hamster-sidejack
  • hash-identifier
  • hashcat
  • hashcat-utils
  • hashdeep
  • hashid
  • hexinject
  • hexorbase
  • hotpatch
  • hping3
  • httrack
  • hydra
  • hydra-gtk
  • i2c-tools
  • iaxflood
  • ifenslave
  • ike-scan
  • inetsim
  • intersect
  • intrace
  • inviteflood
  • iodine
  • irpas
  • jad
  • javasnoop
  • jboss-autopwn
  • john
  • johnny
  • joomscan
  • jsql-injection
  • keimpx
  • killerbee
  • king-phisher
  • kismet
  • laudanum
  • lbd
  • leafpad
  • libfindrtp
  • libfreefare-bin
  • libhivex-bin
  • libnfc-bin
  • lynis
  • macchanger
  • magicrescue
  • magictree
  • maltego
  • maltego-teeth
  • maskprocessor
  • masscan
  • mc
  • mdbtools
  • mdk3
  • medusa
  • memdump
  • metasploit-framework
  • mfcuk
  • mfoc
  • mfterm
  • mimikatz
  • minicom
  • miranda
  • miredo
  • missidentify
  • mitmproxy
  • msfpc
  • multimac
  • nasm
  • nbtscan
  • ncat-w32
  • ncrack
  • ncurses-hexedit
  • netdiscover
  • netmask
  • netsed
  • netsniff-ng
  • netwag
  • nfspy
  • ngrep
  • nikto
  • nipper-ng
  • nishang
  • nmap
  • ohrwurm
  • ollydbg
  • onesixtyone
  • ophcrack
  • ophcrack-cli
  • oscanner
  • p0f
  • pack
  • padbuster
  • paros
  • pasco
  • passing-the-hash
  • patator
  • pdf-parser
  • pdfid
  • pdgmail
  • perl-cisco-copyconfig
  • pev
  • pipal
  • pixiewps
  • plecost
  • polenum
  • powerfuzzer
  • powersploit
  • protos-sip
  • proxychains
  • proxystrike
  • proxytunnel
  • pst-utils
  • ptunnel
  • pwnat
  • pyrit
  • python-faraday
  • python-impacket
  • python-peepdf
  • python-rfidiot
  • python-scapy
  • radare2
  • rainbowcrack
  • rake
  • rcracki-mt
  • reaver
  • rebind
  • recon-ng
  • recordmydesktop
  • recoverjpeg
  • recstudio
  • redfang
  • redsocks
  • reglookup
  • regripper
  • responder
  • rifiuti
  • rifiuti2
  • rsmangler
  • rtpbreak
  • rtpflood
  • rtpinsertsound
  • rtpmixsound
  • safecopy
  • sakis3g
  • samdump2
  • sbd
  • scalpel
  • scrounge-ntfs
  • sctpscan
  • sendemail
  • set
  • sfuzz
  • sidguesser
  • siege
  • siparmyknife
  • sipcrack
  • sipp
  • sipvicious
  • skipfish
  • sleuthkit
  • smali
  • smbmap
  • smtp-user-enum
  • sniffjoke
  • snmpcheck
  • socat
  • sparta
  • spectools
  • spike
  • spooftooph
  • sqldict
  • sqlitebrowser
  • sqlmap
  • sqlninja
  • sqlsus
  • sslcaudit
  • ssldump
  • sslh
  • sslscan
  • sslsniff
  • sslsplit
  • sslstrip
  • sslyze
  • statsprocessor
  • stunnel4
  • suckless-tools
  • sucrack
  • swaks
  • t50
  • tcpflow
  • tcpick
  • tcpreplay
  • termineter
  • tftpd32
  • thc-ipv6
  • thc-pptp-bruter
  • thc-ssl-dos
  • theharvester
  • tlssled
  • tnscmd10g
  • truecrack
  • twofi
  • u3-pwn
  • ua-tester
  • udptunnel
  • unicornscan
  • uniscan
  • unix-privesc-check
  • urlcrazy
  • vboot-kernel-utils
  • vboot-utils
  • vim-gtk
  • vinetto
  • vlan
  • voiphopper
  • volafox
  • volatility
  • vpnc
  • wafw00f
  • wapiti
  • wce
  • webacoo
  • webscarab
  • webshells
  • weevely
  • wfuzz
  • whatweb
  • wifi-honey
  • wifitap
  • wifite
  • windows-binaries
  • winexe
  • wireshark
  • wol-e
  • wordlists
  • wpscan
  • xpdf
  • xprobe
  • xspy
  • xsser
  • xtightvncviewer
  • yersinia
  • zaproxy
  • zenmap
  • zim
kali-linux-all: All Available Packages in Kali Linux
  • kali-linux-forensic
  • kali-linux-full
  • kali-linux-gpu
  • kali-linux-pwtools
  • kali-linux-rfid
  • kali-linux-sdr
  • kali-linux-top10
  • kali-linux-voip
  • kali-linux-web
  • kali-linux-wireless
  • android-sdk
  • device-pharmer
  • freeradius
  • hackersh
  • htshells
  • ident-user-enum
  • ismtp
  • linux-exploit-suggester
  • openvas
  • parsero
  • python-halberd
  • sandi
  • set
  • shellnoob
  • shellter
  • teamsploit
  • vega
  • veil
  • webhandler
  • websploit
kali-linux-sdr: Software Defined Radio (SDR) Tools in Kali
  • kali-linux
  • chirp
  • gnuradio
  • gqrx-sdr
  • gr-iqbal
  • gr-osmosdr
  • hackrf
  • kalibrate-rtl
  • libgnuradio-baz
  • multimon-ng
  • rtlsdr-scanner
  • uhd-host
  • uhd-images
kali-linux-gpu: Kali Linux GPU-Powered Tools
  • kali-linux
  • oclgausscrack
  • oclhashcat
  • pyrit
  • truecrack
kali-linux-wireless: Wireless Tools in Kali
  • kali-linux
  • kali-linux-sdr
  • aircrack-ng
  • asleap
  • bluelog
  • blueranger
  • bluesnarfer
  • bluez
  • bluez-hcidump
  • btscanner
  • bully
  • cowpatty
  • crackle
  • eapmd5pass
  • fern-wifi-cracker
  • giskismet
  • iw
  • killerbee
  • kismet
  • libfreefare-bin
  • libnfc-bin
  • macchanger
  • mdk3
  • mfcuk
  • mfoc
  • mfterm
  • oclhashcat
  • pyrit
  • python-rfidiot
  • reaver
  • redfang
  • rfcat
  • rfkill
  • sakis3g
  • spectools
  • spooftooph
  • ubertooth
  • wifi-honey
  • wifitap
  • wifite
  • wireshark
kali-linux-web: Kali Linux Web-App Assessment Tools
  • kali-linux
  • apache-users
  • apache2
  • arachni
  • automater
  • bbqsql
  • beef-xss
  • blindelephant
  • burpsuite
  • cadaver
  • clusterd
  • cookie-cadger
  • cutycapt
  • davtest
  • default-mysql-server
  • dirb
  • dirbuster
  • dnmap
  • dotdotpwn
  • eyewitness
  • ferret-sidejack
  • ftester
  • funkload
  • golismero
  • grabber
  • hamster-sidejack
  • hexorbase
  • httprint
  • httrack
  • hydra
  • hydra-gtk
  • jboss-autopwn
  • joomscan
  • jsql-injection
  • laudanum
  • lbd
  • maltego
  • maltego-teeth
  • medusa
  • mitmproxy
  • ncrack
  • nikto
  • nishang
  • nmap
  • oscanner
  • owasp-mantra-ff
  • padbuster
  • paros
  • patator
  • php
  • php-mysql
  • plecost
  • powerfuzzer
  • proxychains
  • proxystrike
  • proxytunnel
  • python-halberd
  • redsocks
  • sidguesser
  • siege
  • skipfish
  • slowhttptest
  • sqldict
  • sqlitebrowser
  • sqlmap
  • sqlninja
  • sqlsus
  • sslcaudit
  • ssldump
  • sslh
  • sslscan
  • sslsniff
  • sslsplit
  • sslstrip
  • sslyze
  • stunnel4
  • thc-ssl-dos
  • tlssled
  • tnscmd10g
  • ua-tester
  • uniscan
  • vega
  • wafw00f
  • wapiti
  • webacoo
  • webhandler
  • webscarab
  • webshells
  • weevely
  • wfuzz
  • whatweb
  • wireshark
  • wpscan
  • xsser
  • zaproxy
kali-linux-forensic: Kali Linux Forensic Tools
  • kali-linux
  • afflib-tools
  • apktool
  • autopsy
  • bulk-extractor
  • cabextract
  • chkrootkit
  • creddump
  • dc3dd
  • dcfldd
  • ddrescue
  • dumpzilla
  • edb-debugger
  • ewf-tools
  • exiv2
  • extundelete
  • fcrackzip
  • firmware-mod-kit
  • flasm
  • foremost
  • galleta
  • gdb
  • gparted
  • guymager
  • hashdeep
  • inetsim
  • iphone-backup-analyzer
  • jad
  • javasnoop
  • libhivex-bin
  • lvm2
  • lynis
  • magicrescue
  • mdbtools
  • memdump
  • missidentify
  • nasm
  • ollydbg
  • p7zip-full
  • parted
  • pasco
  • pdf-parser
  • pdfid
  • pdgmail
  • pev
  • polenum
  • pst-utils
  • python-capstone
  • python-distorm3
  • python-peepdf
  • radare2
  • recoverjpeg
  • recstudio
  • reglookup
  • regripper
  • rifiuti
  • rifiuti2
  • safecopy
  • samdump2
  • scalpel
  • scrounge-ntfs
  • sleuthkit
  • smali
  • sqlitebrowser
  • tcpdump
  • tcpflow
  • tcpick
  • tcpreplay
  • truecrack
  • unrar | unar
  • upx-ucl
  • vinetto
  • volafox
  • volatility
  • wce
  • wireshark
  • xplico
  • yara
kali-linux-voip: Kali Linux VoIP Tools
  • kali-linux
  • ace-voip
  • dnmap
  • enumiax
  • iaxflood
  • inviteflood
  • libfindrtp
  • nmap
  • ohrwurm
  • protos-sip
  • rtpbreak
  • rtpflood
  • rtpinsertsound
  • rtpmixsound
  • sctpscan
  • siparmyknife
  • sipcrack
  • sipp
  • sipvicious
  • voiphopper
  • wireshark
kali-linux-pwtools: Kali Linux Password Cracking Tools
  • kali-linux
  • kali-linux-gpu
  • chntpw
  • cmospwd
  • crunch
  • fcrackzip
  • findmyhash
  • gpp-decrypt
  • hash-identifier
  • hashcat
  • hashcat-utils
  • hashid
  • hydra
  • hydra-gtk
  • john
  • johnny
  • keimpx
  • maskprocessor
  • medusa
  • mimikatz
  • ncrack
  • ophcrack
  • ophcrack-cli
  • pack
  • passing-the-hash
  • patator
  • pdfcrack
  • pipal
  • polenum
  • rainbowcrack
  • rarcrack
  • rcracki-mt
  • rsmangler
  • samdump2
  • seclists
  • sipcrack
  • sipvicious
  • sqldict
  • statsprocessor
  • sucrack
  • thc-pptp-bruter
  • truecrack
  • twofi
  • wce
  • wordlists
kali-linux-top10: Top 10 Kali Linux Tools
  • kali-linux
  • aircrack-ng
  • burpsuite
  • hydra
  • john
  • maltego
  • maltego-teeth
  • metasploit-framework
  • nmap
  • sqlmap
  • wireshark
  • zaproxy
kali-linux-rfid: Kali Linux RFID Tools
  • kali-linux
  • libfreefare-bin
  • libnfc-bin
  • mfcuk
  • mfoc
  • mfterm
  • python-rfidiot
kali-linux-nethunter: Kali Linux NetHunter Default Tools
  • kali-defaults
  • kali-root-login
  • aircrack-ng
  • apache2
  • armitage
  • autossh
  • backdoor-factory
  • bdfproxy
  • beef-xss
  • burpsuite
  • dbd
  • desktop-base
  • device-pharmer
  • dnsmasq
  • dnsutils
  • dsniff
  • ettercap-text-only
  • exploitdb
  • florence
  • giskismet
  • gpsd
  • hostapd
  • isc-dhcp-server
  • iw
  • kismet
  • kismet-plugins
  • libffi-dev
  • librtlsdr-dev
  • libssl-dev
  • macchanger
  • mdk3
  • metasploit-framework
  • mfoc
  • mitmf
  • mitmproxy
  • nethunter-utils
  • nishang
  • nmap
  • openssh-server
  • openvpn
  • p0f
  • php
  • pixiewps
  • postgresql
  • ptunnel
  • python-dnspython
  • python-lxml
  • python-m2crypto
  • python-mako
  • python-netaddr
  • python-pcapy
  • python-pip
  • python-setuptools
  • python-twisted
  • recon-ng
  • rfkill
  • socat
  • sox
  • sqlmap
  • sslsplit
  • sslstrip
  • tcpdump
  • tcptrace
  • tightvncserver
  • tinyproxy
  • tshark
  • wifite
  • wipe
  • wireshark
  • wpasupplicant
  • xfce4
  • xfce4-goodies
  • xfce4-places-plugin
  • zip

 

Source

How to Install the Official Slack Client on Linux

Slack is a popular way for teams to collaborate in real-time chat, with plenty of tools and organization to keep conversations on track and focused. Plenty of offices have adopted Slack, and it’s become an absolute necessity for distributed teams.

While you can use Slack through your web browser, it’s simpler and generally more efficient to install the official Slack client on your desktop. Slack supports Linux with Debian and RPM packages as well as an official Snap. As a result, it’s simple to get running with Slack on your distribution of choice.

Install Slack

Download Slack for Linux

While you won’t find Slack in many distribution repositories, you won’t have much trouble installing it. As an added bonus, the Debian and RPM packages provided by Slack also set up repositories on your system, so you’ll receive regular updates, whenever they become available.

Ubuntu/Debian

Open your browser, and go to Slack’s Linux download page. Click the button to download the “.DEB” package. Save it.

Once you have downloaded the package, open your terminal emulator and change into your download folder.

From there, use dpkg to install the package.

sudo dpkg -i slack-desktop-3.3.4-amd64.deb

If you run into missing dependencies, fix it with Apt.

sudo apt --fix-broken install

Fedora

Fedora is another officially supported distribution. Open your web browser and go to the Slack download page. Click the button for the “.RPM” package. When prompted, save the package.

After the download finishes, open your terminal, and change into your download directory.

Now, use the “rpm” command to install the package directly.

sudo rpm -i slack-3.3.4-0.1.fc21.x86_64.rpm

Arch Linux

Arch users can find the latest version of Slack in the AUR. If you haven’t set up an AUR helper on your system, go to Slack’s AUR page, and clone the Git repository there. Change into the directory, and build and install the package with makepkg.

cd ~/Downloads
git clone https://aur.archlinux.org/slack-desktop.git
cd slack-desktop
makepkg -si

If you do have an AUR helper, just install the Slack client.

sudo pikaur -S slack-desktop

Snap

For everyone else, the snap is always a good option. It’s an officially packaged and supported snap straight from Slack. Just install it on your system.
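
Assuming snapd is already set up on your distribution, installing the official snap is a single command (the Slack snap uses classic confinement, hence the extra flag):

sudo snap install slack --classic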

Using Slack

Slack is a graphical application. Most desktop environments put it under the “Internet” category. On GNOME you’ll find it listed alphabetically under “Slack.” Go ahead and launch it.

Slack Workspace URL

Slack will start right away by asking for the URL of the workspace you want to join. Enter it and click “Continue.”

Slack Enter Email

Next, Slack will ask for the email address you have associated with that workspace. Enter that, too.

Slack Enter Password

Finally, enter your password for the workspace. Once you do, Slack will sign you in.

Slack on Ubuntu

After you’re signed in, you can get to work using Slack. You can click on the different channels to move between them. To the far left, you’ll see the icon associated with your workspace and a plus sign icon below it. Click the plus if you’d like to sign in to an additional workspace.

Note the Slack icon in your system tray. You will receive desktop notifications from Slack, and if one arrives while you’re away, you’ll see the blue dot in the tray icon turn red.

You’re now ready to use Slack on Linux like a pro!

Source

Using Linux containers to analyze the impact of climate change and soil on New Zealand crops

Method models climate change scenarios by processing vast amounts of high-resolution soil and weather data.

New Zealand’s economy is dependent on agriculture, a sector that is highly sensitive to climate change. This makes it critical to develop analysis capabilities to assess its impact and investigate possible mitigation and adaptation options. That analysis can be done with tools such as agricultural systems models. In simple terms, it involves creating a model to quantify how a specific crop behaves under certain conditions and then altering a few variables in the simulation to see how that behavior changes. Some of the software available to do this includes CropSyst from Washington State University and the Agricultural Production Systems Simulator (APSIM) from the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia.

Historically, these models have been used primarily for small area (point-based) simulations where all the variables are well known. For large area studies (landscape scale, e.g., a whole region or national level), the soil and climate data need to be upscaled or downscaled to the resolution of interest, which means increasing uncertainty. There are two major reasons for this: 1) it is hard to create and/or obtain access to high-resolution, geo-referenced, gridded datasets; and 2) the most common installation of crop modeling software is in an end user’s desktop or workstation that’s usually running one of the supported versions of Microsoft Windows (system modelers tend to prefer the GUI capabilities of the tools to prepare and run simulations, which are then restricted to the computational power of the hardware used).

New Zealand has several Crown Research Institutes that provide scientific research across many different areas of importance to the country’s economy, including Landcare Research, the National Institute of Water and Atmospheric Research (NIWA), and the New Zealand Institute for Plant & Food Research. In a joint project, these organizations contributed datasets related to the country’s soil, terrain, climate, and crop models. We wanted to create an analysis framework that uses APSIM to run enough simulations to cover relevant time-scales for climate change questions (>100 years’ worth of climate change data) across all of New Zealand at a spatial resolution of approximately 25 km². We’re talking several million simulations, each one taking at least 10 minutes to complete on a single CPU core. If we were to use a standard desktop, it would probably have been faster to just wait outside and see what happens.

Enter HPC

High-performance computing (HPC) is the use of parallel processing for running programs efficiently, reliably, and quickly. Typically this means making use of batch processing across multiple hosts, with each individual process dealing with just a little bit of data, using a job scheduler to orchestrate them.

Parallel computing can mean either distributed computing, where each processing thread needs to communicate with others between tasks (especially intermediate results), or it can be “embarrassingly parallel” where there is no such need. When dealing with the latter, the overall performance grows linearly the more capacity there is available.

Crop modeling is, luckily, an embarrassingly parallel problem: it does not matter how much data or how many variables you have, each variable that changes means one full simulation that needs to run. And because simulations are independent from each other, you can run as many simulations as you have CPUs.

Solve for dependency hell

APSIM is a complex piece of software. Its codebase comprises modules that have been written in multiple programming languages and tightly integrated over the past three decades. The application achieves portability between the Windows and GNU/Linux operating systems by leveraging the Mono Project framework, but the external dependencies and workarounds required to run it in a Linux environment make the implementation non-trivial.

The build and install documentation is scarce, and the instructions that do exist target Ubuntu Desktop editions. Several required dependencies are undocumented, and the build process sometimes relies on the binfmt_misc kernel module to allow direct execution of .exe files linked to the Mono libraries (instead of calling mono file.exe), but it does so inconsistently (this has since been fixed upstream). To add to the confusion, some .exe files are Mono assemblies, and some are native (libc) binaries (this is done to avoid differences in the names of the executables between operating system platforms). Finally, Linux builds are created on-demand “in-house” by the developers, but there are no publicly accessible automated builds due to lack of interest from external users.

All of this may work within a single organization, but it makes APSIM challenging to adopt in other environments. HPC clusters tend to standardize on one Linux distribution (e.g., Red Hat Enterprise Linux, CentOS, Ubuntu, etc.) and job schedulers (e.g., PBS, HTCondor, Torque, SGE, Platform LSF, SLURM, etc.) and can implement disparate storage and network architectures, network configurations, user authentication and authorization policies, etc. As such, what software is available, what versions, and how they are integrated are highly environment-specific. Projects like OpenHPC aim to provide some sanity to this situation, but the reality is that most HPC clusters are bespoke in nature, tailored to the needs of the organization.

A simple way to work around these issues is to introduce containerization technologies. This should not come as a surprise (it’s in the title of this article, after all). Containers permit creating a standalone, self-sufficient artifact that can be run without changes in any environment that supports running them. But containers also provide additional advantages from a “reproducible research” perspective: Software containers can be created in a reproducible way, and once created, the resulting container images are both portable and immutable.

  • Reproducibility: Once a container definition file is written following best practices (for instance, making sure that the software versions installed are explicitly defined), the same resulting container image can be created in a deterministic fashion.
  • Portability: When an administrator creates a container image, they can compile, install, and configure all the software that will be required and include any external dependencies or libraries needed to run them, all the way down the stack to the Linux distribution itself. During this process, there is no need to target the execution environment for anything other than the hardware. Once created, a container image can be distributed as a standalone artifact. This cleanly separates the build and install stages of a particular software from the runtime stage when that software is executed.
  • Immutability: After it’s built, a container image is immutable. That is, it is not possible to change its contents and persist them without creating a new image.

These properties enable capturing the exact state of the software stack used during the processing and distributing it alongside the raw data to replicate the analysis in a different environment, even when the Linux distribution used in that environment does not match the distribution used inside the container image.

Docker

While operating-system-level virtualization is not a new technology, it was primarily because of Docker that it became increasingly popular. Docker provides a way to develop, deploy, and run software containers in a simple fashion.

The first iteration of an APSIM container image was implemented in Docker, replicating the build environment partially documented by the developers. This was done as a proof of concept on the feasibility of containerizing and running the application. A second iteration introduced multi-stage builds: a method of creating container images that allows separating the build phase from the installation phase. This separation is important because it reduces the final size of the resulting container images, which will not include any dependencies that are required only during build time.

However, Docker containers are not particularly suitable for multi-tenant HPC environments. There are three primary things to consider:

1. Data ownership

Container images do not typically store the configuration needed to integrate with enterprise authentication directories (e.g., Active Directory, LDAP, etc.) because this would reduce portability. Instead, user information is usually hardcoded directly in the image (and when it’s not, root is used by default). When the container starts, the contained process will run with this hardcoded identity (and remember, root is used by default). The result is that the output data created by the containerized process is owned by a user that potentially only exists inside the container image, NOT by the user who started the container (also, did I mention that root is used by default?).

A possible workaround for this problem is to override the runtime user when the container starts (using the docker run -u… flag). But this introduces added complexity for the user, who must now learn about user identities (UIDs), POSIX ownership and permissions, the correct syntax for the docker run command, as well as find the correct values for their UID, group identifier (GID), and any additional groups they may need. All of this for someone who just wants to get some science done.

It is also worth noting that this method will not work every time. Not all applications are happy running as an arbitrary user or a user not present in the system’s database (e.g., /etc/passwd file). These are edge cases, but they exist.

2. Access to persistent storage

Container images include only the files needed for the application to run. They typically do not include the input or raw data to be processed by the application. By default, when a container image is instantiated (i.e., when the container is started), the filesystem presented to the containerized application will show only those files and directories present in the container image. To access the input or raw data, the end user must explicitly map the desired mount points from the host server to paths within the filesystem in the container (typically using bind mounts). With Docker, these “volume mounts” are impossible to pre-configure globally, and the mapping must be done on a per-container basis when the containers are started. This not only increases the complexity of the commands needed to run an application, but it also introduces another undesired effect…

3. Compute host security

The ability to start a process as an arbitrary user and the ability to map arbitrary files or directories from the host server into the filesystem of a running container are two of several powerful capabilities that Docker provides to operators. But they are possible because, in the security model adopted by Docker, the daemon that runs the containers must be started on the host with root privileges. In consequence, end users that have access to the Docker daemon end up having the equivalent of root access to the host. This introduces security concerns since it violates the Principle of Least Privilege. Malicious actors can perform actions that exceed the scope of their initial authorization, but end users may also inadvertently corrupt or destroy data, even without malicious intent.

A possible solution to this problem is to implement user namespaces. But in practice, these are cumbersome to maintain, particularly in corporate environments where user identities are centralized in enterprise directories.

Singularity

To tackle these problems, the third iteration of APSIM containers was implemented using Singularity. Released in 2016, Singularity Community is an open source container platform designed specifically for scientific and HPC environments. One of Singularity’s defining characteristics is that a user inside a Singularity container is the same user as outside the container: it allows end users to run a command inside a container image as themselves and, conversely, does not allow them to impersonate other users when starting a container.

Another advantage of Singularity’s approach is the way container images are stored on disk. With Docker, container images are stored in multiple separate “layers,” which the Docker daemon needs to overlay and flatten during the container’s runtime. When multiple container images reuse the same layer, only one copy of that layer is needed to re-create the runtime container’s filesystem. This results in more efficient use of storage, but it does add a bit of complexity when it comes to distributing and inspecting container images, so Docker provides special commands to do so. With Singularity, the entire execution environment is contained within a single, executable file. This introduces duplication when multiple images have similar contents, but it makes the distribution of those images trivial since it can now be done with traditional file transfer methods, protocols, and tools.

The Docker container recipe files (i.e., the Dockerfile and related assets) can be used to re-create the container image as it was built for the project. Singularity allows importing and running Docker containers natively, so the same files can be used for both engines.

A day in the life

To illustrate the above with a practical example, let’s put you in the shoes of a computational scientist. So as not to single out anyone in particular, imagine that you want to use ToolA, which processes input files and creates output with statistics about them. Before asking the sysadmin to help you out, you decide to test the tool on your local desktop to see if it works.

ToolA has a simple syntax. It’s a single binary that takes one or more filenames as command line arguments and accepts a -o {json|yaml} flag to alter how the results are formatted. The outputs are stored in the same path as the input files are. For example:

$ ./ToolA file1 file2
$ ls
file1 file1.out file2 file2.out ToolA

You have several thousand files to process, but even though ToolA uses multi-threading to process files independently, you don’t have a thousand CPU cores in this machine. You must use your cluster’s job scheduler. The simplest way to do this at scale is to launch as many jobs as files you need to process, using one CPU thread each. You test the new approach:

$ export PATH=$(pwd):${PATH}
$ cd ~/input/files/to/process/samples
$ ls -l | wc -l
38
$ # we will set this to the actual qsub command when we run in the cluster
$ qsub=""
$ for myfiles in *; do $qsub ToolA $myfiles; done

$ ls -l | wc -l
75

Excellent. Time to bug the sysadmin and get ToolA installed in the cluster.

It turns out that ToolA is easy to install in Ubuntu Bionic because it is already in the repos, but a nightmare to compile in CentOS 7, which our HPC cluster uses. So the sysadmin decides to create a Docker container image and push it to the company’s registry. He also adds you to the docker group after begging you not to misbehave.

You look up the syntax of the Docker commands and decide to do a few test runs before submitting thousands of jobs that could potentially fail.

$ cd ~/input/files/to/process/samples
$ rm -f *.out
$ ls -l | wc -l
38
$ docker run -d registry.example.com/ToolA:latest file1
e61d12292d69556eabe2a44c16cbd27486b2527e2ce4f95438e504afb7b02810
$ ls -l | wc -l
38
$ ls *out
$

Ah, of course, you forgot to mount the files. Let’s try again.

$ docker run -d -v $(pwd):/mnt registry.example.com/ToolA:latest /mnt/file1
653e785339099e374b57ae3dac5996a98e5e4f393ee0e4adbb795a3935060acb
$ ls -l | wc -l
38
$ ls *out
$
$ docker logs 653e785339
ToolA: /mnt/file1: Permission denied

You ask the sysadmin for help, and he tells you that SELinux is blocking the process from accessing the files and that you’re missing a flag in your docker run. You don’t know what SELinux is, but you remember it mentioned somewhere in the docs, so you look it up and try again:

$ docker run -d -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
8ebfcbcb31bea0696e0a7c38881ae7ea95fa501519c9623e1846d8185972dc3b
$ ls *out
$
$ docker logs 8ebfcbcb31
ToolA: /mnt/file1: Permission denied

You go back to the sysadmin, who tells you that the container uses myuser with UID 1000 by default, but your files are readable only to you, and your UID is different. So you do what you know is bad practice, but you’re fed up: you run chmod 777 file1 before trying again. You’re also getting tired of having to copy and paste hashes, so you add another flag to your docker run:

$ docker run -d --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
0b61185ef4a78dce988bb30d87e86fafd1a7bbfb2d5aea2b6a583d7ffbceca16
$ ls *out
$
$ docker logs test
ToolA: cannot create regular file ‘/mnt/file1.out’: Permission denied

Alas, at least this time you get a different error. Progress! Your friendly sysadmin tells you that the process in the container won’t have write permissions on your directory because the identities don’t match, and you need more flags on your command line.

$ docker run -d -u $(id -u):$(id -g) --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
docker: Error response from daemon: Conflict. The container name "/test" is already in use by container "0b61185ef4a78dce988bb30d87e86fafd1a7bbfb2d5aea2b6a583d7ffbceca16". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
$ docker rm test
$ docker run -d -u $(id -u):$(id -g) --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
06d5b3d52e1167cde50c2e704d3190ba4b03f6854672cd3ca91043ad23c1fe09
$ ls *out
file1.out
$

Success! Now we just need to wrap our command with the one used by the job scheduler and wrap all of that again with our for loop.

$ cd ~/input/files/to/process
$ ls -l | wc -l
934752984
$ for myfiles in *; do qsub -q short_jobs -N "toola_${myfiles}" docker run -d -u $(id -u):$(id -g) --name="toola_${myfiles}" -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/${myfiles}; done

Now that was a bit clunky, wasn’t it? Let’s look at how using Singularity simplifies it.

$ cd ~
$ singularity pull --name ToolA.simg docker://registry.example.com/ToolA:latest
$ ls
input ToolA.simg
$ ./ToolA.simg
Usage: ToolA [-o {json|yaml}] <file1> [file2…fileN]
$ cd ~/input/files/to/process
$ for myfiles in *; do qsub -q short_jobs -N "toola_${myfiles}" ~/ToolA.simg ${myfiles}; done

Need I say more?

This works because, by default, Singularity containers run as the user that started them. There are no background daemons, so privilege escalation is not allowed. Singularity also bind-mounts a few directories by default ($PWD, $HOME, /tmp, /proc, /sys, and /dev). An administrator can configure additional ones that are also mounted by default on a global (i.e., host) basis, and the end user can (optionally) also bind arbitrary ones at runtime. Of course, standard Unix permissions apply, so this still doesn’t allow unrestricted access to host files.
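
For example, binding one extra host directory at runtime is a single additional flag (the host path here is just an illustration):

$ singularity run --bind /data/project1:/mnt ~/ToolA.simg /mnt/file1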

But what about climate change?

Oh! Of course. Back on topic. We decided to break down the bulk of simulations that we need to run on a per-project basis. Each project can then focus on a specific crop, a specific geographical area, or different crop management techniques. After all of the simulations for a specific project are completed, they are collated into a MariaDB database and visualized using an RStudio Shiny web app.


Prototype Shiny app screenshot

Prototype Shiny app screenshot shows a nationwide run of climate change’s impact on maize silage comparing current and end-of-century scenarios.

The app allows us to compare two different scenarios (reference vs. alternative) that the user can construct by choosing from a combination of variables related to the climate (including the current climate and the climate-change projections for mid-century and end of the century), the soil, and specific management techniques (like irrigation or fertilizer use). The results are displayed as raster values or differences (averages, or coefficients of variation of results per pixel) and their distribution across the area of interest.

The screenshot above shows an example of a prototype nationwide run across “arable lands” where we compare the silage maize biomass for a baseline (1985-2005) vs. future climate change (2085-2100) under the most extreme emissions scenario. In this example, we do not take into account any changes in management techniques, such as adapting sowing dates. We see that most negative effects on yield occur in northern areas (the warmer end of the country, since New Zealand is in the Southern Hemisphere), while the extreme south shows positive responses. Of course, we would recommend (and you would expect) that farmers adapt to warm temperatures arriving earlier in the year and react accordingly (e.g., by sowing earlier, which would reduce the negative impacts and enhance the positive ones).

Next steps

With the framework in place, all that remains is the heavy lifting. Run ALL the simulations! Of course, that is easier said than done. Our in-house cluster is a shared resource where we must compete for capacity with several other projects and teams.

Additional work is planned to further generalize how we distribute jobs across compute resources so we can leverage capacity wherever we can get it (including the public cloud if the project receives sufficient additional funding). This would mean becoming job scheduler-agnostic and solving the data gravity problem.

Work is also underway to further refine the UI and UX aspects of the web application until we are comfortable it can be published to policymakers and other interested parties.

Source

Entroware Launches Hades, Its First AMD-Powered Workstation with Ubuntu Linux

UK-based computer manufacturer Entroware today launched Hades, its latest and most powerful workstation with Ubuntu Linux.

With Hades, Entroware debuts its first AMD-powered system, designed for Deep Learning, a new area of Machine Learning (ML) research, but also for businesses, science labs, and animation studios. Entroware Hades can achieve all that thanks to its 2nd-generation AMD Ryzen “Threadripper” processors with up to 64 threads, Nvidia GPUs with up to 11GB of memory, and up to 128GB RAM and 68TB storage.

“The Hades workstation is our first AMD system and brings the very best of Linux power, by combining cutting edge components to provide the foundation for the most demanding applications or run even the most demanding Deep Learning projects at lightning speeds with impeccable precision,” says Entroware.

Technical specifications of Entroware Hades

The Entroware Hades workstation can be configured to your needs, and you’ll be able to choose a CPU from AMD Ryzen TR 1900X, 2920X, 2950X, 2970WX, or 2990WX, and RAM from 16GB to 128GB DDR4 2933MHz or from 32GB to 128GB DDR4 2400MHz ECC.

For graphics, you can configure Entroware Hades with 2GB Nvidia GeForce GT 1030, 8GB Nvidia GeForce RTX 2070 or 2080, as well as 11GB Nvidia GeForce RTX 2080 Ti GPUs. For storage, you’ll have up to 2TB SSD for main drive and up to 32TB SSD or up to 64TB HDD for additional drives.

Ports include 2 x USB Hi-Speed 2.0, 2 x USB SuperSpeed 3.0, 1 x USB SuperSpeed 3.0 Type-C, 1 x headphone jack, 1 x microphone jack, 1 x PS/2 keyboard/mouse combo, 8 x USB SuperSpeed 3.1, 1 x USB SuperSpeed 3.1 10Gbps, 1 x USB SuperSpeed 3.1 10Gbps Type-C, 5 x audio jacks, 2 x RJ-45 Gigabit Ethernet, and 2 x Wi-Fi AC antenna connectors.

Finally, you can choose to have your brand-new Entroware Hades workstation shipped with either Ubuntu 18.04 LTS, Ubuntu MATE 18.04 LTS, Ubuntu 18.10, or Ubuntu MATE 18.10. Entroware Hades’ price starts from £1,599.99, and it can be delivered to the UK, Spain, Italy, France, Germany, and Ireland. More details about Entroware Hades are available on the official website.

Entroware Hades


Source

A Use Case for Network Automation

Use the Python Netmiko module to automate switches, routers and firewalls from multiple vendors.

I frequently find myself in the position of confronting “hostile” networks. By hostile, I mean that there is no existing documentation, or if it does exist, it is hopelessly out of date or being hidden deliberately. With that in mind, in this article, I describe the tools I’ve found useful to recover control, audit, document and automate these networks. Note that I’m not going to try to document any of the tools completely here. I mainly want to give you enough real-world examples to prove how much time and effort you could save with these tools, and I hope this article motivates you to explore the official documentation and example code.

In order to save money, I wanted to use open-source tools to gather information from all the devices on the network. I haven’t found a single tool that works with all the vendors and OS versions that typically are encountered. SNMP could provide a lot of the information I need, but it would have to be configured on each device manually first. In fact, the mass enablement of SNMP could be one of the first use cases for the network automation tools described in this article.

Most modern devices support REST APIs, but companies typically are saddled with lots of legacy devices that don’t support anything fancier than Telnet and SSH. I settled on SSH access as the lowest common denominator, as every device must support this in order to be managed on the network.

My preferred automation language is Python, so the next problem was finding a Python module that abstracted the SSH login process, making it easy to run commands and gather command output.

Why Netmiko?

I discovered the Paramiko SSH module quite a few years ago and used it to create real-time inventories of Linux servers at multiple companies. It enabled me to log in to hosts and gather the output of commands, such as lspci, dmidecode and lsmod.

The command output populated a database that engineers could use to search for specific hardware. When I then tried to use Paramiko to inventory network switches, I found that certain switch vendor and OS combinations would cause Paramiko SSH sessions to hang. I could see that the SSH login itself was successful, but the session would hang right after the login. I never was able to determine the cause, but I discovered Netmiko while researching the hanging problem. When I replaced all my Paramiko code with Netmiko code, all my session hanging problems went away, and I haven’t looked back since. Netmiko also is optimized for the network device management task, while Paramiko is more of a generic SSH module.

Programmatically Dealing with the Command-Line Interface

People familiar with the “Expect” language will recognize the technique for sending a command and matching the returned CLI prompts and command output to determine whether the command was successful. In the case of most network devices, the CLI prompts change depending on whether you’re in an unprivileged mode, in “enable” mode or in “config” mode.

For example, the CLI prompt typically will be the device hostname followed by specific characters.

Unprivileged mode:


sfo03-r7r9-sw1>

Privileged or “enable” mode:


sfo03-r7r9-sw1#

“Config” mode:


sfo03-r7r9-sw1(config)#

These different prompts enable you to make transitions programmatically from one mode to another and determine whether the transitions were successful.
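
As a minimal sketch (the device details below are placeholders, not real hosts), Netmiko wraps this prompt matching for you, so you can inspect the current prompt and move between modes with a few method calls:

from netmiko import ConnectHandler

# placeholder device details -- substitute a real device and credentials
device = {
    'device_type': 'cisco_ios',
    'ip': 'sfo03-r7r9-sw1',
    'username': 'netadmin',
    'password': 'secretpass',
    'secret': 'enablepass',      # enable password, if one is set
}

net_connect = ConnectHandler(**device)
print(net_connect.find_prompt())        # e.g., "sfo03-r7r9-sw1>"
net_connect.enable()                    # transition to privileged ("enable") mode
net_connect.config_mode()               # transition to config mode
print(net_connect.check_config_mode())  # True while in config mode
net_connect.exit_config_mode()
net_connect.disconnect()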

Abstraction

Netmiko abstracts many common things you need to do when talking to switches. For example, if you run a command that produces more than one page of output, the switch CLI typically will “page” the output, waiting for input before displaying the next page. This makes it difficult to gather multipage output as a single blob of text. The command to turn off paging varies depending on the switch vendor. For example, this might be terminal length 0 for one vendor and set cli pager off for another. Netmiko abstracts this operation, so all you need to do is use the disable_paging() function, and it will run the appropriate commands for the particular device.
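
For instance, here is a hedged sketch (again with placeholder device details) of collecting a full, unpaginated configuration:

from netmiko import ConnectHandler

# placeholder device details -- substitute a real device and credentials
device = {
    'device_type': 'arista_eos',
    'ip': 'sfo03-r1r10-sw2',
    'username': 'netadmin',
    'password': 'secretpass',
}

net_connect = ConnectHandler(**device)
net_connect.disable_paging()   # sends a "no paging" command; Netmiko also does
                               # this automatically for most platforms at login
output = net_connect.send_command('show running-config')
print(output)
net_connect.disconnect()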

Dealing with a Mix of Vendors and Products

Netmiko supports a growing list of network vendor and product combinations. You can find the current list in the documentation. Netmiko doesn’t auto-detect the vendor, so you’ll need to specify that information when using the functions. Some vendors have product lines with different CLI commands. For example, Dell has two types: dell_force10 and dell_powerconnect; and Cisco has several CLI versions on the different product lines, including cisco_ios, cisco_nxos and cisco_asa.

Obtaining Netmiko

The official Netmiko code and documentation is at https://github.com/ktbyers/netmiko, and the author has a collection of helpful articles on his home page.

If you’re comfortable with developer tools, you can clone the Git repo directly. For typical end users, installing Netmiko using pip should suffice:


# pip install netmiko

A Few Words of Caution

Before jumping on the network automation bandwagon, you need to sort out the following:

  • Mass configuration: be aware that the slowness of traditional “box-by-box” network administration may have protected you somewhat from massive mistakes. If you manually made a change, you typically would be alerted to a problem after visiting only a few devices. With network automation tools, you can render all your network devices useless within seconds.
  • Configuration backup strategy: this ideally would include a versioning feature, so you can roll back to a specific “known good” point in time. Check out the RANCID package before you spend a lot of money on this capability.
  • Out-of-band network management: almost any modern switch or network device is going to have a dedicated OOB port. This physically separate network permits you to recover from configuration mistakes that potentially could cut you off from the very devices you’re managing.
  • A strategy for testing: for example, have a dedicated pool of representative equipment permanently set aside for testing and proof of concepts. When rolling out a change on a production network, first verify the automation on a few devices before trying to do hundreds at once.

Using Netmiko without Writing Any Code

Netmiko’s author has created several standalone scripts called Netmiko Tools that you can use without writing any Python code. Consult the official documentation for details, as I offer only a few highlights here.

At the time of this writing, there are three tools:

netmiko-show

Run arbitrary “show” commands on one or more devices. By default, it will display the entire configuration, but you can supply an alternate command with the --cmd option. Note that “show” commands can display many details that aren’t stored within the actual device configurations.

For example, you can display Spanning Tree Protocol (STP) details from multiple devices:


% netmiko-show --cmd "show spanning-tree detail" arista-eos | egrep "(last change|from)"
sfo03-r1r12-sw1.txt:  Number of topology changes 2307 last change occurred 19:14:09 ago
sfo03-r1r12-sw1.txt:          from Ethernet1/10/2
sfo03-r1r12-sw2.txt:  Number of topology changes 6637 last change occurred 19:14:09 ago
sfo03-r1r12-sw2.txt:          from Ethernet1/53

This information can be very helpful when tracking down the specific switch and switch port responsible for an STP flapping issue. Typically, you would be looking for a very high count of topology changes that is rapidly increasing, with a “last change time” in seconds. The “from” field gives you the source port of the change, enabling you to narrow down the source of the problem.

The “old-school” method for finding this information would be to log in to the top-most switch, look at its STP detail, find the problem port, log in to the switch downstream of this port, look at its STP detail and repeat this process until you find the source of the problem. The Netmiko Tools allow you to perform a network-wide search for all the information you need in a single operation.

netmiko-cfg

Apply snippets of configurations to one or more devices. Specify the configuration command with the --cmd option or read configuration from a file using --infile. This could be used for mass configurations. Mass changes could include DNS servers, NTP servers, SNMP community strings or syslog servers for the entire network. For example, to configure the read-only SNMP community on all of your Arista switches:


$ netmiko-cfg --cmd "snmp-server community mysecret ro" arista-eos

You still will need to verify that the commands you’re sending are appropriate for the vendor and OS combinations of the target devices, as Netmiko will not do all of this work for you. See the “groups” mechanism below for how to apply vendor-specific configurations to only the devices from a particular vendor.

netmiko-grep

Search for a string in the configuration of multiple devices. For example, verify the current syslog destination in your Arista switches:


$ netmiko-grep --use-cache "logging host" arista-eos
sfo03-r2r7-sw1.txt:logging host 10.7.1.19
sfo03-r3r14-sw1.txt:logging host 10.8.6.99
sfo03-r3r16-sw1.txt:logging host 10.8.6.99
sfo03-r4r18-sw1.txt:logging host 10.7.1.19

All of the Netmiko tools depend on an “inventory” of devices, which is a YAML-formatted file stored in “.netmiko.yml” in the current directory or your home directory.

Each device in the inventory has the following format:


sfo03-r1r11-sw1:
  device_type: cisco_ios
  ip: sfo03-r1r11-sw1
  username: netadmin
  password: secretpass
  port: 22

Device entries can be followed by group definitions. Groups are simply a group name followed by a list of devices:


cisco-ios:
  - sfo03-r1r11-sw1
cisco-nxos:
  - sfo03-r1r12-sw2
  - sfo03-r3r17-sw1
arista-eos:
  - sfo03-r1r10-sw2
  - sfo03-r6r6-sw1

For example, you can use the group name “cisco-nxos” to run Cisco Nexus NX-OS-unique commands, such as feature:


% netmiko-cfg --cmd "feature interface-vlan" cisco-nxos

Note that the device type example is just one type of group. Other groups could indicate physical location (“SFO03”, “RKV02”), role (“TOR”, “spine”, “leaf”, “core”), owner (“Eng”, “QA”) or any other categories that make sense to you.

As I was dealing with hundreds of devices, I didn’t want to create the YAML-formatted inventory file by hand. Instead, I started with a simple list of devices and the corresponding Netmiko “device_type”:


sfo03-r1r11-sw1,cisco_ios
sfo03-r1r12-sw2,cisco_nxos
sfo03-r1r10-sw2,arista_eos
sfo03-r4r5-sw3,arista_eos
sfo03-r1r12-sw1,cisco_nxos
sfo03-r5r15-sw2,dell_force10

I then used standard Linux commands to create the YAML inventory file:


% grep -v '^#' simplelist.txt | awk -F, '{printf("%s:\n  device_type: %s\n  ip: %s\n  username: netadmin\n  password: secretpass\n  port: 22\n",$1,$2,$1)}' >> .netmiko.yml

I’m using a centralized authentication system, so the user name and password are the same for all devices. The command above yields the following YAML-formatted file:


sfo03-r1r11-sw1:
  device_type: cisco_ios
  ip: sfo03-r1r11-sw1
  username: netadmin
  password: secretpass
  port: 22
sfo03-r1r12-sw2:
  device_type: cisco_nxos
  ip: sfo03-r1r12-sw2
  username: netadmin
  password: secretpass
  port: 22
sfo03-r1r10-sw2:
  device_type: arista_eos
  ip: sfo03-r1r10-sw2
  username: netadmin
  password: secretpass
  port: 22

Once you’ve created this inventory, you can use the Netmiko Tools against individual devices or groups of devices.

A side effect of creating the inventory is that you now have a master list of devices on the network; you also have proven that the device names are resolvable via DNS and that you have the correct login credentials. This is actually a big step forward in some environments where I’ve worked.

Note that netmiko-grep caches the device configs locally. Once the cache has been built, you can make subsequent search operations run much faster by specifying the --use-cache option.

It now should be apparent that you can use Netmiko Tools to do a lot of administration and automation without writing any Python code. Again, refer to official documentation for all the options and more examples.

Start Coding with Netmiko

Now that you have a sense of what you can do with Netmiko Tools, you’ll likely come up with unique scenarios that require actual coding.

For the record, I don’t consider myself an advanced Python programmer at this time, so the examples here may not be optimal. I’m also limiting my examples to snippets of code rather than complete scripts. The example code is using Python 2.7.

My Approach to the Problem

I wrote a bunch of code before I became aware of the Netmiko Tools commands, and I found that I’d duplicated a lot of their functionality. My original approach was to break the problem into two separate phases. The first phase was the “scanning” of the switches, storing their configurations and command output locally. The second phase was processing and searching across the stored data.

My first script was a “scanner” that reads a list of switch hostnames and Netmiko device types from a simple text file, logs in to each switch, runs a series of CLI commands and then stores the output of each command in text files for later processing.

Reading a List of Devices

My first task is to read a list of network devices and their Netmiko “device type” from a simple text file in CSV format. I import the csv module so I can use the csv.DictReader function, which returns CSV fields as a Python dictionary. I like the CSV file format, as anyone with even limited UNIX/Linux skills likely knows how to work with it, and it’s a very common file type for exporting data if you have an existing database of network devices.

For example, the following is a list of switch names and device types in CSV format:


sfo03-r1r11-sw1,cisco_ios
sfo03-r1r12-sw2,cisco_nxos
sfo03-r1r10-sw2,arista_eos
sfo03-r4r5-sw3,arista_eos
sfo03-r1r12-sw1,cisco_nxos
sfo03-r5r15-sw2,dell_force10

The following Python code reads the data filename from the command line, opens the file and then iterates over each device entry, calling the login_switch() function that will run the actual Netmiko code:


import csv
import sys
import logging
def main():
    # get data file from command line
    devfile = sys.argv[1]
    # open file and extract the two fields
    with open(devfile, 'rb') as devicesfile:
        fields = ['hostname', 'devtype']
        hosts = csv.DictReader(devicesfile, fieldnames=fields, delimiter=',')
        # iterate through the list of hosts, calling "login_switch()" for each one
        for host in hosts:
            hostname = host['hostname']
            print "hostname = ", hostname
            devtype = host['devtype']
            login_switch(hostname, devtype)

The login_switch() function runs any number of commands and stores the output in separate text files under a directory based on the name of the device:


# import required module
from netmiko import ConnectHandler
# login into switch and run command
def login_switch(host,devicetype):
# required arguments to ConnectHandler
    device = {
# device_type and ip are read from data file
    'device_type': devicetype,
    'ip':host,
# device credentials are hardcoded in script for now
    'username':'admin',
    'password':'secretpass',
    }
# if successful login, run command on CLI
    try:
        net_connect = ConnectHandler(**device)
        commands = "show version"
        output = net_connect.send_command(commands)
# construct directory path based on device name
        path = '/root/login/scan/' + host + "/"
        make_dir(path)
        filename = path + "show_version"
# store output of command in file
        handle = open (filename,'w')
        handle.write(output)
        handle.close()
# if unsuccessful, print error
    except Exception as e:
        print "RAN INTO ERROR "
        print "Error: " + str(e)

This code opens a connection to the device, executes the show version command and stores the output in /root/login/scan/<devicename>/show_version.
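
The make_dir() helper isn’t shown in the snippet above. A minimal version, which simply creates the per-device output directory if it doesn’t already exist, could look like this:


import os

def make_dir(path):
    # create the per-device output directory if it does not already exist
    if not os.path.isdir(path):
        os.makedirs(path)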

The show version output is incredibly useful, as it typically contains the vendor, model, OS version, hardware details, serial number and MAC address. Here’s an example from an Arista switch:


Arista DCS-7050QX-32S-R
Hardware version:    01.31
Serial number:       JPE16292961
System MAC address:  444c.a805.6921

Software image version: 4.17.0F
Architecture:           i386
Internal build version: 4.17.0F-3304146.4170F
Internal build ID:      21f25f02-5d69-4be5-bd02-551cf79903b1

Uptime:                 25 weeks, 4 days, 21 hours and 32
                        minutes
Total memory:           3796192 kB
Free memory:            1230424 kB

This information allows you to create all sorts of good stuff, such as a hardware inventory of your network and a software version report that you can use for audits and planned software updates.

My current script runs show lldp neighbors, show run and show interface status, and records the device CLI prompt, in addition to show version.

The above code example constitutes the bulk of what you need to get started with Netmiko. You now have a way to run arbitrary commands on any number of devices without typing anything by hand. This isn’t Software-Defined Networking (SDN) by any means, but it’s still a huge step forward from the “box-by-box” method of network administration.
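
To collect more than one command per device, the single send_command() call in login_switch() can be wrapped in a small loop. Here’s a sketch; the output file names and the exact command list are assumptions based on the directory listing shown later:


# assumed mapping of output file names to CLI commands; adjust to taste
COMMANDS = {
    'show_version': 'show version',
    'show_run': 'show run',
    'show_lldp': 'show lldp neighbors',
    'show_int_status': 'show interface status',
}

def run_commands(net_connect, path):
    # run each command on an open Netmiko connection and store
    # the output in a separate file under "path"
    for filename, cli in COMMANDS.items():
        output = net_connect.send_command(cli)
        handle = open(path + filename, 'w')
        handle.write(output)
        handle.close()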

Next, let’s try the scanning script on the sample network:


$ python scanner.py devices.csv
hostname = sfo03-r1r15-sw1
hostname = sfo03-r3r19-sw0
hostname = sfo03-r1r16-sw2
hostname = sfo03-r3r8-sw2
RAN INTO ERROR
Error: Authentication failure: unable to connect dell_force10
 ↪sfo03-r3r8-sw2:22
Authentication failed.
hostname = sfo03-r3r10-sw2
hostname = sfo03-r3r11-sw1
hostname = sfo03-r4r14-sw2
hostname = sfo03-r4r15-sw1

If you have a lot of devices, you’ll likely experience login failures like the one in the middle of the scan above. These could be due to multiple reasons, including the device being down, being unreachable over the network, the script having incorrect credentials and so on. Expect to make several passes to address all the problems before you get a “clean” run on a large network.

This finishes the “scanning” portion of the process, and all the data you need is now stored locally for further analysis in the “scan” directory, which contains subdirectories for each device:


$ ls scan/
sfo03-r1r10-sw2 sfo03-r2r14-sw2 sfo03-r3r18-sw1 sfo03-r4r8-sw2
 ↪sfo03-r6r14-sw2
sfo03-r1r11-sw1 sfo03-r2r15-sw1 sfo03-r3r18-sw2 sfo03-r4r9-sw1
 ↪sfo03-r6r15-sw1
sfo03-r1r12-sw0 sfo03-r2r16-sw1 sfo03-r3r19-sw0 sfo03-r4r9-sw2
 ↪sfo03-r6r16-sw1
sfo03-r1r12-sw1 sfo03-r2r16-sw2 sfo03-r3r19-sw1 sfo03-r5r10-sw1
 ↪sfo03-r6r16-sw2
sfo03-r1r12-sw2 sfo03-r2r2-sw1  sfo03-r3r4-sw2  sfo03-r5r10-sw2
 ↪sfo03-r6r17- sw1

You can see that each subdirectory contains separate files for each command output:


$ ls sfo03-r1r10-sw2/
show_lldp prompt show_run show_version show_int_status

Debugging via Logging

Netmiko is normally very quiet when it’s running, so it can be difficult to tell where things are breaking in the interaction with a network device. The easiest way I have found to debug problems is to use the logging module. I normally keep this disabled, but when I want to turn on debugging, I uncomment the logging.basicConfig line below:


import logging
if __name__ == "__main__":
#  logging.basicConfig(level=logging.DEBUG)
  main()

Then I run the script, and it produces output on the console showing the entire SSH conversation between the netmiko module and the remote device (a switch named “sfo03-r1r10-sw2” in this example):


DEBUG:netmiko:In disable_paging
DEBUG:netmiko:Command: terminal length 0
DEBUG:netmiko:write_channel: terminal length 0
DEBUG:netmiko:Pattern is: sfo03\-r1r10\-sw2
DEBUG:netmiko:_read_channel_expect read_data: terminal
 ↪length 0
DEBUG:netmiko:_read_channel_expect read_data: Pagination
disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:Pattern found: sfo03\-r1r10\-sw2 terminal
 ↪length 0
Pagination disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:terminal length 0
Pagination disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:Exiting disable_paging

In this case, the terminal length 0 command sent by Netmiko is successful. In the following example, the command sent to change the terminal width is rejected by the switch CLI with the “Authorization denied” message:


DEBUG:netmiko:Entering set_terminal_width
DEBUG:netmiko:write_channel: terminal width 511
DEBUG:netmiko:Pattern is: sfo03\-r1r10\-sw2
DEBUG:netmiko:_read_channel_expect read_data: terminal
 ↪width 511
DEBUG:netmiko:_read_channel_expect read_data: % Authorization
denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:Pattern found: sfo3\-r1r10\-sw2 terminal
 ↪width 511
% Authorization denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:terminal width 511
% Authorization denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:Exiting set_terminal_width

The logging also will show the entire SSH login and authentication sequence in detail. I had to deal with one switch that was using a deprecated SSH cipher that was disabled by default in the SSH client, causing the SSH session to fail when trying to authenticate. With logging, I could see the client rejecting the cipher offered by the switch. I also discovered another type of switch where the Netmiko connection appeared to hang. The logging revealed that it was stuck at the more? prompt, as paging was never disabled successfully after login. On this particular switch, the commands to disable paging had to be run in privileged mode. My quick fix was to add a disable_paging() call after “enable” mode was entered.
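
When a scan covers many devices, this debug output can be overwhelming on the console. The logging.basicConfig() call also accepts a filename argument, so the same trace can be written to a file for later review (the file name below is just an example):


import logging

# capture the full Netmiko/SSH debug conversation in a file instead of the console
logging.basicConfig(filename='netmiko_debug.log', level=logging.DEBUG)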

Analysis Phase

Now that you have all the data you want, you can start processing it.

A very simple example would be an “audit”-type check, which verifies that the hostname registered in DNS matches the hostname configured on the device. If these do not match, it will cause all sorts of confusion when logging in to the device, correlating syslog messages or looking at LLDP and CDP output:


import os
import sys

directory = "/root/login/scan"

for filename in os.listdir(directory):
    prompt_file = directory + '/' + filename + '/prompt'
    try:
        prompt_fh = open(prompt_file, 'rb')
    except IOError:
        print "Can't open:", prompt_file
        sys.exit()

    with prompt_fh:
        prompt = prompt_fh.read()
        prompt = prompt.rstrip('#')
        if filename != prompt:
            print 'switch DNS hostname %s != configured hostname %s' % (filename, prompt)

This script opens the scan directory, opens each “prompt” file, derives the configured hostname by stripping off the “#” character, compares it with the subdirectory filename (which is the hostname according to DNS) and prints a message if they don’t match. In the example below, the script finds one switch where the DNS switch name doesn’t match the hostname configured on the switch:


$ python name_check.py
switch DNS hostname sfo03-r1r12-sw2 != configured hostname
 ↪SFO03-R1R10-SW1-Cisco_Core

It’s a reality that most complex networks are built up over a period of years by multiple people with different naming conventions, work styles, skill sets and so on. I’ve accumulated a number of “audit”-type checks that find and correct inconsistencies that can creep into a network over time. This is the perfect use case for network automation, because you can see everything at once, as opposed to going through each device one at a time.
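
As another example of this kind of audit, here’s a hypothetical sketch that checks each stored show_run file for a standard syslog server. The expected address and file layout are assumptions based on the scan directory described earlier:


import os

directory = "/root/login/scan"
expected = "logging host 10.7.1.19"   # assumed standard syslog server

for hostname in os.listdir(directory):
    config_file = os.path.join(directory, hostname, "show_run")
    # skip devices that have no stored running config
    if not os.path.isfile(config_file):
        continue
    with open(config_file) as handle:
        if expected not in handle.read():
            print "missing standard syslog server on %s" % hostname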

Performance

During the initial debugging, I had the “scanning” script log in to each switch in a serial fashion. This worked fine for a few switches, but performance became a problem when I was scanning hundreds at a time. I used the Python multiprocessing module to fire off a bunch of “workers” that interacted with switches in parallel. This cut the processing time for the scanning portion down to a couple minutes, as the entire scan took only as long as the slowest switch took to complete. The switch scanning problem fits quite well into the multiprocessing model, because there are no events or data to coordinate between the individual workers. The Netmiko Tools also take advantage of multiprocessing and use a cache system to improve performance.
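
A minimal sketch of that parallel approach, assuming the login_switch() function and the CSV device list shown earlier, might look like this:


import csv
import sys
from multiprocessing import Pool

# login_switch() is the function shown earlier in this article

def scan_device(entry):
    # each worker handles one (hostname, device_type) pair
    hostname, devtype = entry
    login_switch(hostname, devtype)

if __name__ == "__main__":
    # read the same CSV device list used by the serial version
    with open(sys.argv[1], 'rb') as devicesfile:
        devices = [(row[0], row[1]) for row in csv.reader(devicesfile) if row]
    pool = Pool(processes=20)   # 20 concurrent device logins; tune to your environment
    pool.map(scan_device, devices)
    pool.close()
    pool.join()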

Future Directions

The most complicated script I’ve written so far with Netmiko logs in to every switch, gathers the LLDP neighbor info and produces a text-only topology map of the entire network. For those unfamiliar with it, LLDP is the Link Layer Discovery Protocol. Most modern network devices send LLDP multicasts out of every port every 30 seconds. The LLDP data includes many details, such as the switch hostname, port name, MAC address, device model, vendor, OS and so on. It allows any given device to know about all of its immediate neighbors.

For example, here’s a typical LLDP display on a switch. The “Neighbor” columns show you details on what is connected to each of your local ports:


sfo03-r1r5-sw1# show lldp neighbors
Port  Neighbor Device ID   Neighbor Port ID   TTL
Et1   sfo03-r1r3-sw1         Ethernet1          120
Et2   sfo03-r1r3-sw2         Te1/0/2            120
Et3   sfo03-r1r4-sw1         Te1/0/2            120
Et4   sfo03-r1r6-sw1         Ethernet1          120
Et5   sfo03-r1r6-sw2         Te1/0/2            120

By asking all the network devices for their list of LLDP neighbors, it’s possible to build a map of the network. My approach was to build a list of local switch ports and their LLDP neighbors for the top-level switch, and then recursively follow each switch link down the hierarchy of switches, adding each entry to a nested dictionary. This process becomes very complex when there are redundant links and endless loops to avoid, but I found it a great way to learn more about complex Python data structures.
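
As a starting point, here’s a rough sketch of the first step: parsing a stored show_lldp file into a dictionary of local ports and their neighbors. The column layout assumed here matches the sample output above; other vendors format the output differently:


def parse_lldp(filename):
    # map local port -> (neighbor hostname, neighbor port)
    neighbors = {}
    with open(filename) as handle:
        for line in handle:
            fields = line.split()
            # skip the header and anything that isn't a port entry
            if len(fields) >= 4 and fields[0].startswith('Et'):
                neighbors[fields[0]] = (fields[1], fields[2])
    return neighbors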

The following output is from my “mapper” script. It uses indentation (from left to right) to show the hierarchy of switches, which is three levels deep in this example:


sfo03-r1r5-core:Et6  sfo03-r1r8-sw1:Ethernet1
    sfo03-r1r8-sw1:Et22 sfo03-r6r8-sw3:Ethernet48
    sfo03-r1r8-sw1:Et24 sfo03-r6r8-sw2:Te1/0/1
    sfo03-r1r8-sw1:Et25 sfo03-r3r7-sw2:Te1/0/1
    sfo03-r1r8-sw1:Et26 sfo03-r3r7-sw1:24

It prints the port name next to the switch hostname, which allows you to see both “sides” of the inter-switch links. This is extremely useful when trying to orient yourself on the network. I’m still working on this script, but it currently produces a “real-time” network topology map that can be turned into a network diagram.

I hope this information inspires you to investigate network automation. Start with Netmiko Tools and the inventory file to get a sense of what is possible. You likely will encounter a scenario that requires some Python coding, either using the output of Netmiko Tools or perhaps your own standalone script. Either way, the Netmiko functions make automating a large, multivendor network fairly easy.

Source

How to Install Visual Studio Code on Debian 9

Visual Studio Code is a free and open source cross-platform code editor developed by Microsoft. It has built-in debugging support, embedded Git control, syntax highlighting, code completion, an integrated terminal, code refactoring and snippets. Visual Studio Code functionality can be extended using extensions.

This tutorial explains how to install Visual Studio Code editor on Debian using apt from the VS Code repository.

The user you are logged in as must have sudo privileges to be able to install packages.

Complete the following steps to install Visual Studio Code on your Debian system:

  1. Start by updating the packages index and installing the dependencies by typing:
    sudo apt update
    sudo apt install software-properties-common apt-transport-https curl
  2. Import the Microsoft GPG key using the following curl command:
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

    Add the Visual Studio Code repository to your system:

    sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main"
  3. Once the repository is added, install the latest version of Visual Studio Code with:
    sudo apt update
    sudo apt install code

That’s it. Visual Studio Code has been installed on your Debian desktop and you can start using it.

Once VS Code is installed on your Debian system, you can launch it either from the command line by typing code or by clicking on the VS Code icon (Activities -> Visual Studio Code).

When you start VS Code for the first time, a welcome window will be displayed.

You can now start installing extensions and configuring VS Code according to your preferences.

When a new version of Visual Studio Code is released, you can update the package through your desktop’s standard Software Update tool or by running the following commands in your terminal:

sudo apt update
sudo apt upgrade

You have successfully installed VS Code on your Debian 9 machine. Your next step could be to install Additional Components and customize your User and Workspace Settings.

Source

The Evil-Twin Framework: A tool for testing WiFi security

Learn about a pen-testing tool intended to test the security of WiFi access points for all types of threats.

lock on world map

The increasing number of devices that connect to the internet over-the-air and the wide availability of WiFi access points provide many opportunities for attackers to exploit users. By tricking users into connecting to rogue access points, hackers gain full control over the users’ network connection, which allows them to sniff and alter traffic, redirect users to malicious sites, and launch other attacks over the network.

To protect users and teach them to avoid risky online behaviors, security auditors and researchers must evaluate users’ security practices and understand the reasons they connect to WiFi access points without being confident they are safe. There are a significant number of tools that can conduct WiFi audits, but no single tool can test the many different attack scenarios and none of the tools integrate well with one another.

The Evil-Twin Framework (ETF) aims to fix these problems in the WiFi auditing process by enabling auditors to examine multiple scenarios and integrate multiple tools. This article describes the framework and its functionalities, then provides some examples to show how it can be used.

The ETF architecture

The ETF was written in Python because the language is very easy to read and contribute to. In addition, many of the libraries the ETF relies on, such as Scapy, were already developed for Python, making them easy to integrate.

The ETF architecture (Figure 1) is divided into different modules that interact with each other. The framework’s settings are all written in a single configuration file. The user can verify and edit the settings through the user interface via the ConfigurationManager class. Other modules can only read these settings and run according to them.

Evil-Twin Framework Architecture

Figure 1: Evil-Twin framework architecture

The ETF supports multiple user interfaces that interact with the framework. The current default interface is an interactive console, similar to the one on Metasploit. A graphical user interface (GUI) and a command line interface (CLI) are under development for desktop/browser use, and mobile interfaces may be an option in the future. The user can edit the settings in the configuration file using the interactive console (and eventually with the GUI). The user interface can interact with every other module that exists in the framework.

The WiFi module (AirCommunicator) was built to support a wide range of WiFi capabilities and attacks. The framework identifies three basic pillars of WiFi communication: packet sniffing, custom packet injection, and access point creation. The three main WiFi communication modules are AirScanner, AirInjector, and AirHost, which are responsible for packet sniffing, packet injection, and access point creation, respectively. The three classes are wrapped inside the main WiFi module, AirCommunicator, which reads the configuration file before starting the services. Any type of WiFi attack can be built using one or more of these core features.

To enable man-in-the-middle (MITM) attacks, which are a common way to attack WiFi clients, the framework has an integrated module called ETFITM (Evil-Twin Framework-in-the-Middle). This module is responsible for the creation of a web proxy used to intercept and manipulate HTTP/HTTPS traffic.

There are many other tools that can leverage the MITM position created by the ETF. Through its extensibility, ETF can support them—and, instead of having to call them separately, you can add the tools to the framework just by extending the Spawner class. This enables a developer or security auditor to call the program with a preconfigured argument string from within the framework.

The other way to extend the framework is through plugins. There are two categories of plugins: WiFi plugins and MITM plugins. MITM plugins are scripts that can run while the MITM proxy is active. The proxy passes the HTTP(S) requests and responses through to the plugins where they can be logged or manipulated. WiFi plugins follow a more complex flow of execution but still expose a fairly simple API to contributors who wish to develop and use their own plugins. WiFi plugins can be further divided into three categories, one for each of the core WiFi communication modules.

Each of the core modules has certain events that trigger the execution of a plugin. For instance, AirScanner has three defined events to which a response can be programmed. The events usually correspond to a setup phase before the service starts running, a mid-execution phase while the service is running, and a teardown or cleanup phase after a service finishes. Since Python allows multiple inheritance, one plugin can subclass more than one plugin class.

Figure 1 above is a summary of the framework’s architecture. Lines pointing away from the ConfigurationManager mean that the module reads information from it and lines pointing towards it mean that the module can write/edit configurations.

Examples of using the Evil-Twin Framework

There are a variety of ways ETF can conduct penetration testing on WiFi network security or work on end users’ awareness of WiFi security. The following examples describe some of the framework’s pen-testing functionalities, such as access point and client detection, WPA and WEP access point attacks, and evil twin access point creation.

These examples were devised using ETF with WiFi cards that allow WiFi traffic capture. They also utilize the following abbreviations for ETF setup commands:

  • APS access point SSID
  • APB access point BSSID
  • APC access point channel
  • CM client MAC address

In a real testing scenario, make sure to replace these abbreviations with the correct information.

Capturing a WPA 4-way handshake after a de-authentication attack

This scenario (Figure 2) takes two aspects into consideration: the de-authentication attack and the possibility of catching a 4-way WPA handshake. The scenario starts with a running WPA/WPA2-enabled access point with one connected client device (in this case, a smartphone). The goal is to de-authenticate the client with a general de-authentication attack then capture the WPA handshake once it tries to reconnect. The reconnection will be done manually immediately after being de-authenticated.

Scenario for capturing a WPA handshake after a de-authentication attack

Figure 2: Scenario for capturing a WPA handshake after a de-authentication attack

The consideration in this example is the ETF’s reliability. The goal is to find out if the tools can consistently capture the WPA handshake. The scenario will be performed multiple times with each tool to check its reliability when capturing the WPA handshake.

There is more than one way to capture a WPA handshake using the ETF. One way is to use a combination of the AirScanner and AirInjector modules; another way is to just use the AirInjector. The following scenario uses a combination of both modules.

The ETF launches the AirScanner module and analyzes the IEEE 802.11 frames to find a WPA handshake. Then the AirInjector can launch a de-authentication attack to force a reconnection. The following steps must be done to accomplish this on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Start the AirScanner module with the CredentialSniffer plugin: start airscanner with credentialsniffer
  5. Add a target access point SSID (APS) from the sniffed access points list: add aps where ssid = <APS>
  6. Start the AirInjector, which by default launches the de-authentication attack: start airinjector

This simple set of commands enables the ETF to perform an efficient and successful de-authentication attack on every test run. The ETF can also capture the WPA handshake on every test run. The following console output shows the ETF’s successful execution:

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 02/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config airscanner
ETF[etf/aircommunicator/airscanner]::> listargs
sniffing_interface =               wlan1; (var)
probes =                True; (var)
beacons =                True; (var)
hop_channels =               false(var)
fixed_sniffing_channel =                  11(var)
ETF[etf/aircommunicator/airscanner]::> start airscanner with
arpreplayer        caffelatte         credentialsniffer  packetlogger       selfishwifi
ETF[etf/aircommunicator/airscanner]::> start airscanner with credentialsniffer
[+] Successfully added credentialsniffer plugin.
[+] Starting packet sniffer on interface ‘wlan1’
[+] Set fixed channel to 11
ETF[etf/aircommunicator/airscanner]::> add aps where ssid = CrackWPA
ETF[etf/aircommunicator/airscanner]::> start airinjector
ETF[etf/aircommunicator/airscanner]::> [+] Starting deauthentication attack
– 1000 bursts of 1 packets
– 1 different packets
[+] Injection attacks finished executing.
[+] Starting post injection methods
[+] Post injection methods finished
[+] WPA Handshake found for client ’70:3e:ac:bb:78:64′ and network ‘CrackWPA’

Launching an ARP replay attack and cracking a WEP network

The next scenario (Figure 3) focuses on the efficiency of the Address Resolution Protocol (ARP) replay attack and the speed of capturing the WEP data packets containing the initialization vectors (IVs). The same network may require a different number of captured IVs to be cracked, so the limit for this scenario is 50,000 IVs. If the network is cracked during the first test with fewer than 50,000 IVs, that number becomes the new limit for the following tests on the network. The cracking tool used is aircrack-ng.

The test scenario starts with an access point using WEP encryption and an offline client that knows the key—the key for testing purposes is 12345, but it can be a larger and more complex key. Once the client connects to the WEP access point, it will send out a gratuitous ARP packet; this is the packet that’s meant to be captured and replayed. The test ends once the limit of packets containing IVs is captured.

Scenario for cracking a WEP network with an ARP replay attack

Figure 3: Scenario for cracking a WEP network with an ARP replay attack

ETF uses Python’s Scapy library for packet sniffing and injection. To minimize known performance problems in Scapy, ETF tweaks some of its low-level libraries to significantly speed packet injection. For this specific scenario, the ETF uses tcpdump as a background process instead of Scapy for more efficient packet sniffing, while Scapy is used to identify the encrypted ARP packet.
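
For readers who haven’t used Scapy, here is a generic illustration (not ETF code) of the two primitives involved: sniffing frames on a monitor-mode interface and injecting a crafted de-authentication frame. The interface name and MAC addresses are placeholders:


from scapy.all import sniff, sendp, RadioTap, Dot11, Dot11Deauth

# print a one-line summary of each 802.11 frame captured on a monitor-mode interface
# (the placeholder interface "wlan1mon" must exist and be in monitor mode)
sniff(iface="wlan1mon", prn=lambda pkt: pkt.summary(), count=10)

# craft and inject a single broadcast de-authentication frame (illustrative only;
# the BSSID below is a placeholder)
deauth = RadioTap() / Dot11(addr1="ff:ff:ff:ff:ff:ff",
                            addr2="00:11:22:33:44:55",
                            addr3="00:11:22:33:44:55") / Dot11Deauth(reason=7)
sendp(deauth, iface="wlan1mon", count=1)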

This scenario requires the following commands and operations to be performed on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Enter the ARPReplayer plugin configuration mode: config arpreplayer
  5. Set the target access point BSSID (APB) of the WEP network: set target_ap_bssid <APB>
  6. Start the AirScanner module with the ARPReplayer plugin: start airscanner with arpreplayer

After executing these commands, ETF correctly identifies the encrypted ARP packet, then successfully performs an ARP replay attack, which cracks the network.

Launching a catch-all honeypot

The scenario in Figure 4 creates multiple access points with the same SSID. This technique can discover the encryption type of a network that a client has probed for but that is out of reach. By launching multiple access points covering all security settings, the client will automatically connect to the one that matches the security settings of the locally cached access point information.

Scenario for launching a catch-all honeypot

Figure 4: Scenario for launching a catch-all honeypot

Using the ETF, it is possible to configure the hostapd configuration file then launch the program in the background. Hostapd supports launching multiple access points on the same wireless card by configuring virtual interfaces, and since it supports all types of security configurations, a complete catch-all honeypot can be set up. For the WEP and WPA(2)-PSK networks, a default password is used, and for the WPA(2)-EAP, an “accept all” policy is configured.

For this scenario, the following commands and operations must be performed on the ETF:

  1. Enter the APLauncher configuration mode: config aplauncher
  2. Set the desired access point SSID (APS): set ssid = <APS>
  3. Configure the APLauncher as a catch-all honeypot: set catch_all_honeypot = true
  4. Start the AirHost module: start airhost

With these commands, the ETF can launch a complete catch-all honeypot with all types of security configurations. The ETF also automatically launches the DHCP and DNS servers that allow clients to stay connected to the internet. The ETF offers a better, faster, and more complete solution for creating catch-all honeypots. The following console output shows the ETF’s successful execution:

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 03/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config aplauncher
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf ssid CatchMe
ssid = CatchMe
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf catch_all_honeypot true
catch_all_honeypot = true
ETF[etf/aircommunicator/airhost/aplauncher]::> start airhost
[+] Killing already started processes and restarting network services
[+] Stopping dnsmasq and hostapd services
[+] Access Point stopped…
[+] Running airhost plugins pre_start
[+] Starting hostapd background process
[+] Starting dnsmasq service
[+] Running airhost plugins post_start
[+] Access Point launched successfully
[+] Starting dnsmasq service

Conclusions and future work

These scenarios use common and well-known attacks to help validate the ETF’s capabilities for testing WiFi networks and clients. The results also validate that the framework’s architecture enables new attack vectors and features to be developed on top of it while taking advantage of the platform’s existing capabilities. This should accelerate development of new WiFi penetration-testing tools, since a lot of the code is already written. Furthermore, the fact that complementary WiFi technologies are all integrated in a single tool will make WiFi pen-testing simpler and more efficient.

The ETF’s goal is not to replace existing tools but to complement them and offer a broader choice to security auditors when conducting WiFi pen-testing and improving user awareness.

The ETF is an open source project available on GitHub and community contributions to its development are welcomed. Following are some of the ways you can help.

One of the limitations of current WiFi pen-testing is the inability to log important events during tests. This makes reporting identified vulnerabilities both more difficult and less accurate. The framework could implement a logger that can be accessed by every class to create a pen-testing session report.

The ETF tool’s capabilities cover many aspects of WiFi pen-testing. On one hand, it facilitates the phases of WiFi reconnaissance, vulnerability discovery, and attack. On the other hand, it doesn’t offer a feature that facilitates the reporting phase. Adding the concept of a session and a session reporting feature, such as the logging of important events during a session, would greatly increase the value of the tool for real pen-testing scenarios.

Another valuable contribution would be extending the framework to facilitate WiFi fuzzing. The IEEE 802.11 protocol is very complex, and considering there are multiple implementations of it, both on the client and access point side, it’s safe to assume these implementations contain bugs and even security flaws. These bugs could be discovered by fuzzing IEEE 802.11 protocol frames. Since Scapy allows custom packet creation and injection, a fuzzer can be implemented through it.

Source

Top 10 Artificial Intelligence Technology Trends That Will Dominate In 2019

ai

Artificial Intelligence (AI) has created machines that mimic human intelligence. The intention behind the creation and continued development of machine intelligence is to improve our daily lives and the manner in which we interact with machines. Artificial intelligence is already making a difference in our homes, as customers and as service providers. Improvement of technology will inform the growth of artificial intelligence and vice versa beyond our wildest imagination. So what are the top 10 artificial intelligence technology trends that you should anticipate in 2019? Read on to find out!

1. Machine Learning Platforms

Machines can learn and adapt to what they have learned. Advancements in technology have improved the methods through which computers learn. Machine learning platforms access, classify and predict data. These platforms are gaining ground by providing:

  • Data applications

  • Algorithms

  • Training tools

  • Application programming interface

  • Other machines

Providing these capabilities automatically and autonomously enables machines to perform their functions intelligently.

2. Chatbot

chatbot

A chatbot is a programme in an application or a website that provides customer support twenty-four hours a day, seven days a week. Chatbots interact with users through text or audio, mostly through keywords and automated responses. Chatbots often mimic human interactions. Over time, chatbots improve the user experience through machine learning platforms by identifying patterns and adapting to them. Different online service providers are already making use of this trend in artificial intelligence for their businesses. Users can:

  • Submit complaints or reviews,

  • Order food from restaurants,

  • Make hotel reservations,

  • Plan appointments.

3. Natural Language Generation

Natural language generation is an artificial intelligence technology that converts data into text. The text is relayed in a natural language such as English and can be presented as spoken or written output. This conversion enables computers to communicate ideas with a high degree of accuracy. This form of artificial intelligence is used to generate incredibly detailed reports. Journalists, for example, have used natural language generation to produce detailed reports and articles on corporate earnings and natural disasters such as earthquakes. Chatbots and smart devices use and benefit from natural language generation.

4. Augmented Reality

augmented reality

If you have played Pokémon Go or used the Snapchat lens, then you have interacted with augmented reality. Augmented reality places computer-generated, virtual characters in the real world in real time usually through a camera lens. Whereas virtual reality completely shuts out the world, augmented reality blends its generated characters with the world.

This trend is making its way into different retail stores that make home furnishing and makeup selection more fun and interactive.

5. Virtual Agents

A virtual agent is a computer-generated intelligence that provides online customer assistance. Virtual agents are animated virtual characters that typically have human-like characteristics. Virtual agents lead discussions with customers and provide adequate responses. Additionally, virtual agents can:

  • Provide product information,

  • Place an order,

  • Make a reservation,

  • Book an appointment.

They also improve their function through machine learning platforms for better service provision. Companies that provide virtual agents include Google, Microsoft, Amazon and Assist AI.

6. Speech Recognition

Speech recognition interprets words from spoken language and converts them into data the machine understands and can assess. It facilitates communication between humans and machines and is built into many upcoming smart devices such as speakers, phones and watches. Continued improvement of the algorithms that recognize and convert speech into machine data will solidify this trend in 2019.

7. Self-driving cars

These are cars that drive themselves independently. This is made possible by merging sensors and artificial intelligence. The sensors map out the immediate environment of the vehicle, and artificial intelligence interprets and responds to the information relayed by the sensors. This form of artificial intelligence is expected to reduce collisions and place less of a burden on drivers. Companies such as Uber, Tesla, and General Motors are hard at work to make self-driving cars a commercial reality in 2019.

8. Smart devices

Smart devices are becoming increasingly popular. Technology that has been in use over recent years is being modified and released as smart devices. They include:

  • Smart thermostat

  • Smart speakers

  • Smart light bulbs

  • Smart security cameras

  • Smartphones

  • Smartwatches

  • Smart hubs

  • Smart keychains

Smart devices interact with users and other devices through different wireless connections, sensors and artificial intelligence. They pick up on the environment and respond to any changes based on their function and programming. Smart devices are likely to increase and improve in 2019.

9. Artificial intelligence permeation

Artificial intelligence-driven technology is on the rise and is penetrating all manner of industries. The continued development of machine learning platforms is making it easier and more convenient for businesses to utilize artificial intelligence. Industries adopting this technology include the automotive, marketing, healthcare, and finance industries, among others.

10. Internet of Things (IoT)

iot

Internet of Things is a phrase that describes objects or devices connected via the internet that collect and share information. Merging the Internet of Things with machine intelligence will improve the collection and sharing of data. The specific form of artificial intelligence being applied to the Internet of Things is machine learning platforms. Classifying and predicting data from the Internet of Things intelligently will provide new findings and insights into connected devices.

Summary

It is not possible to predict exactly how these trends will develop or how they will disrupt the technology that is already in place. What is certain is that technology as we know it is changing thanks to the development and improvement of artificial intelligence. It is also certain that 2019 will be a year of significant growth for artificial intelligence technology.

Watch out for these ten trends in 2019 and challenge yourself to interact with and learn about some, if not all of them.

Source

Akira: The Linux Design Tool We’ve Always Wanted?

Let me make it clear: I am not a professional designer, but I’ve used certain tools on Windows (like Photoshop, Illustrator, etc.) and Figma (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows.

Even on Linux, there is a limited number of dedicated graphic design tools. A few of these tools like GIMP and Inkscape are used by professionals as well. But most of them are not considered professional grade, unfortunately.

Even if there are a couple more solutions, I’ve never come across a native Linux application that could replace Sketch, Figma, or Adobe XD. Any professional designer would agree with that, wouldn’t they?

Is Akira going to replace Sketch, Figma, and Adobe XD on Linux?

Well, in order to develop something that could replace those awesome proprietary tools, Alessandro Castellani came up with a Kickstarter campaign, teaming up with a couple of experienced developers: Alberto Fanjul, Bilal Elmoussaoui, and Felipe Escoto.

So, yes, Akira is still pretty much just an idea, with a working prototype of its interface (as I observed in their live stream session via Kickstarter recently).

If it does not exist, why the Kickstarter campaign?

The aim of the Kickstarter campaign is to gather funds so the developers can take a few months off and dedicate their time to making Akira possible.

Nonetheless, if you want to support the project, you should know some details, right?

Fret not, we asked a couple of questions in their livestream session – let’s get into it…

Akira: A few more details

Akira prototype interface

As the Kickstarter campaign describes:

The main purpose of Akira is to offer a fast and intuitive tool to create Web and Mobile interfaces, more like Sketch, Figma, or Adobe XD, with a completely native experience for Linux.

They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, Kickstarter is the way to go. But, before that, let’s take a look at what they had to say when I asked some questions about Akira.

Q: If you consider your project – similar to what Figma offers – why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools – offering a native Linux experience or is there something really interesting to encourage users to switch (except being an open source solution)?

Akira: A native experience on Linux is always better and faster than a web-based Electron app. Also, the hardware configuration matters if you choose to use Figma, but Akira will be light on system resources and you will still be able to do similar work without needing to go online.

Q: Let’s assume that it becomes the open source solution that Linux users have been waiting for (with similar features offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans – or rely on donations?

Akira: The project will mostly rely on Donations (something like Krita Foundation could be an idea). But, there will be no “pro” pricing plans – it will be available for free and it will be an open source project.

So, with the response I got, it definitely seems to be something promising that we should probably support.

Wrapping Up

What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action?

Let us know your thoughts in the comments below.

Source
