A Use Case for Network Automation


Use the Python Netmiko module to automate switches, routers and firewalls from multiple vendors.

I frequently find myself in the position of confronting “hostile” networks. By hostile, I mean that there is no existing documentation, or if it does exist, it is hopelessly out of date or being hidden deliberately. With that in mind, in this article, I describe the tools I’ve found useful to recover control, audit, document and automate these networks. Note that I’m not going to try to document any of the tools completely here. I mainly want to give you enough real-world examples to prove how much time and effort you could save with these tools, and I hope this article motivates you to explore the official documentation and example code.

In order to save money, I wanted to use open-source tools to gather information from all the devices on the network. I haven’t found a single tool that works with all the vendors and OS versions that typically are encountered. SNMP could provide a lot of the information I need, but it would have to be configured on each device manually first. In fact, the mass enablement of SNMP could be one of the first use cases for the network automation tools described in this article.

Most modern devices support REST APIs, but companies typically are saddled with lots of legacy devices that don’t support anything fancier than Telnet and SSH. I settled on SSH access as the lowest common denominator, as every device must support this in order to be managed on the network.

My preferred automation language is Python, so the next problem was finding a Python module that abstracted the SSH login process, making it easy to run commands and gather command output.

Why Netmiko?

I discovered the Paramiko SSH module quite a few years ago and used it to create real-time inventories of Linux servers at multiple companies. It enabled me to log in to hosts and gather the output of commands, such as lspci, dmidecode and lsmod.

The command output populated a database that engineers could use to search for specific hardware. When I then tried to use Paramiko to inventory network switches, I found that certain switch vendor and OS combinations would cause Paramiko SSH sessions to hang. I could see that the SSH login itself was successful, but the session would hang right after the login. I never was able to determine the cause, but I discovered Netmiko while researching the hanging problem. When I replaced all my Paramiko code with Netmiko code, all my session hanging problems went away, and I haven’t looked back since. Netmiko also is optimized for the network device management task, while Paramiko is more of a generic SSH module.

Programmatically Dealing with the Command-Line Interface

People familiar with the “Expect” language will recognize the technique for sending a command and matching the returned CLI prompts and command output to determine whether the command was successful. In the case of most network devices, the CLI prompts change depending on whether you’re in an unprivileged mode, in “enable” mode or in “config” mode.

For example, the CLI prompt typically will be the device hostname followed by specific characters.

Unprivileged mode:


sfo03-r7r9-sw1>

Privileged or “enable” mode:


sfo03-r7r9-sw1#

“Config” mode:


sfo03-r7r9-sw1(config)#

These different prompts enable you to make transitions programmatically from one mode to another and determine whether the transitions were successful.

Abstraction

Netmiko abstracts many common things you need to do when talking to switches. For example, if you run a command that produces more than one page of output, the switch CLI typically will “page” the output, waiting for input before displaying the next page. This makes it difficult to gather multipage output as a single blob of text. The command to turn off paging varies depending on the switch vendor. For example, this might be terminal length 0 for one vendor and set cli pager off for another. Netmiko abstracts this operation, so all you need to do is use the disable_paging() function, and it will run the appropriate commands for the particular device.
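
As a minimal sketch (the device details and credentials below are placeholders), the same disable_paging() call covers whichever vendor-specific command is needed behind the scenes:


# Minimal sketch: disable_paging() runs the right vendor-specific
# command (such as "terminal length 0") for the given device_type.
# The hostname and credentials here are placeholders.
from netmiko import ConnectHandler

conn = ConnectHandler(device_type='arista_eos', ip='sfo03-r1r10-sw2',
                      username='netadmin', password='secretpass')
conn.disable_paging()
output = conn.send_command('show running-config')
conn.disconnect()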

Dealing with a Mix of Vendors and Products

Netmiko supports a growing list of network vendor and product combinations. You can find the current list in the documentation. Netmiko doesn’t auto-detect the vendor, so you’ll need to specify that information when using the functions. Some vendors have product lines with different CLI commands. For example, Dell has two types: dell_force10 and dell_powerconnect; and Cisco has several CLI versions on the different product lines, including cisco_ios, cisco_nxos and cisco_asa.

Obtaining Netmiko

The official Netmiko code and documentation is at https://github.com/ktbyers/netmiko, and the author has a collection of helpful articles on his home page.

If you’re comfortable with developer tools, you can clone the Git repo directly. For typical end users, installing Netmiko using pip should suffice:


# pip install netmiko

A Few Words of Caution

Before jumping on the network automation bandwagon, you need to sort out the following:

  • Mass configuration: be aware that the slowness of traditional “box-by-box” network administration may have protected you somewhat from massive mistakes. If you manually made a change, you typically would be alerted to a problem after visiting only a few devices. With network automation tools, you can render all your network devices useless within seconds.
  • Configuration backup strategy: this ideally would include a versioning feature, so you can roll back to a specific “known good” point in time. Check out the RANCID package before you spend a lot of money on this capability.
  • Out-of-band network management: almost any modern switch or network device is going to have a dedicated OOB port. This physically separate network permits you to recover from configuration mistakes that potentially could cut you off from the very devices you’re managing.
  • A strategy for testing: for example, have a dedicated pool of representative equipment permanently set aside for testing and proof of concepts. When rolling out a change on a production network, first verify the automation on a few devices before trying to do hundreds at once.

Using Netmiko without Writing Any Code

Netmiko’s author has created several standalone scripts called Netmiko Tools that you can use without writing any Python code. Consult the official documentation for details, as I offer only a few highlights here.

At the time of this writing, there are three tools:

netmiko-show

Run arbitrary “show” commands on one or more devices. By default, it will display the entire configuration, but you can supply an alternate command with the --cmd option. Note that “show” commands can display many details that aren’t stored within the actual device configurations.

For example, you can display Spanning Tree Protocol (STP) details from multiple devices:


% netmiko-show --cmd "show spanning-tree detail" arista-eos |
 ↪egrep "(last change|from)"
sfo03-r1r12-sw1.txt:  Number of topology changes 2307 last
 ↪change occurred 19:14:09 ago
sfo03-r1r12-sw1.txt:          from Ethernet1/10/2
sfo03-r1r12-sw2.txt:  Number of topology changes 6637 last
 ↪change occurred 19:14:09 ago
sfo03-r1r12-sw2.txt:          from Ethernet1/53

This information can be very helpful when tracking down the specific switch and switch port responsible for an STP flapping issue. Typically, you would be looking for a very high count of topology changes that is rapidly increasing, with a “last change time” in seconds. The “from” field gives you the source port of the change, enabling you to narrow down the source of the problem.

The “old-school” method for finding this information would be to log in to the top-most switch, look at its STP detail, find the problem port, log in to the switch downstream of this port, look at its STP detail and repeat this process until you find the source of the problem. The Netmiko Tools allow you to perform a network-wide search for all the information you need in a single operation.

netmiko-cfg

Apply snippets of configurations to one or more devices. Specify the configuration command with the --cmd option or read configuration from a file using --infile. This could be used for mass configurations. Mass changes could include DNS servers, NTP servers, SNMP community strings or syslog servers for the entire network. For example, to configure the read-only SNMP community on all of your Arista switches:


$ netmiko-cfg --cmd "snmp-server community mysecret ro"
 ↪arista-eos

You still will need to verify that the commands you’re sending are appropriate for the vendor and OS combinations of the target devices, as Netmiko will not do all of this work for you. See the “groups” mechanism below for how to apply vendor-specific configurations to only the devices from a particular vendor.

netmiko-grep

Search for a string in the configuration of multiple devices. For example, verify the current syslog destination in your Arista switches:


$ netmiko-grep --use-cache "logging host" arista-eos
sfo03-r2r7-sw1.txt:logging host 10.7.1.19
sfo03-r3r14-sw1.txt:logging host 10.8.6.99
sfo03-r3r16-sw1.txt:logging host 10.8.6.99
sfo03-r4r18-sw1.txt:logging host 10.7.1.19

All of the Netmiko tools depend on an “inventory” of devices, which is a YAML-formatted file stored in “.netmiko.yml” in the current directory or your home directory.

Each device in the inventory has the following format:


sfo03-r1r11-sw1:
  device_type: cisco_ios
  ip: sfo03-r1r11-sw1
  username: netadmin
  password: secretpass
  port: 22

Device entries can be followed by group definitions. Groups are simply a group name followed by a list of devices:


cisco-ios:
  - sfo03-r1r11-sw1
cisco-nxos:
  - sfo03-r1r12-sw2
  - sfo03-r3r17-sw1
arista-eos:
  - sfo03-r1r10-sw2
  - sfo03-r6r6-sw1

For example, you can use the group name “cisco-nxos” to run Cisco Nexus NX-OS-unique commands, such as feature:


% netmiko-cfg --cmd "feature interface-vlan" cisco-nxos

Note that the device type example is just one type of group. Other groups could indicate physical location (“SFO03”, “RKV02”), role (“TOR”, “spine”, “leaf”, “core”), owner (“Eng”, “QA”) or any other categories that make sense to you.

As I was dealing with hundreds of devices, I didn’t want to create the YAML-formatted inventory file by hand. Instead, I started with a simple list of devices and the corresponding Netmiko “device_type”:


sfo03-r1r11-sw1,cisco_ios
sfo03-r1r12-sw2,cisco_nxos
sfo03-r1r10-sw2,arista_eos
sfo03-r4r5-sw3,arista_eos
sfo03-r1r12-sw1,cisco_nxos
sfo03-r5r15-sw2,dell_force10

I then used standard Linux commands to create the YAML inventory file:


% grep -v '^#' simplelist.txt | awk -F, '{printf("%s:\n  device_type: %s\n  ip: %s\n  username: netadmin\n  password: secretpass\n  port: 22\n",$1,$2,$1)}' >> .netmiko.yml

I’m using a centralized authentication system, so the user name and password are the same for all devices. The command above yields the following YAML-formatted file:


sfo03-r1r11-sw1:
  device_type: cisco_ios
  ip: sfo03-r1r11-sw1
  username: netadmin
  password: secretpass
  port: 22
sfo03-r1r12-sw2:
  device_type: cisco_nxos
  ip: sfo03-r1r12-sw2
  username: netadmin
  password: secretpass
  port: 22
sfo03-r1r10-sw2:
  device_type: arista_eos
  ip: sfo03-r1r10-sw2
  username: netadmin
  password: secretpass
  port: 22
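
If you would rather stay in Python than use awk, a rough equivalent (assuming the same simplelist.txt format and the same shared credentials) could generate the inventory like this:


# Rough Python equivalent of the awk one-liner above: read the simple
# CSV device list and append Netmiko-style YAML entries to .netmiko.yml.
# Assumes the same shared credentials for every device.
import csv

with open('simplelist.txt') as src, open('.netmiko.yml', 'a') as out:
    for row in csv.reader(src):
        if not row or row[0].startswith('#'):
            continue
        hostname, devtype = row[0], row[1]
        out.write('%s:\n' % hostname)
        out.write('  device_type: %s\n' % devtype)
        out.write('  ip: %s\n' % hostname)
        out.write('  username: netadmin\n')
        out.write('  password: secretpass\n')
        out.write('  port: 22\n')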

Once you’ve created this inventory, you can use the Netmiko Tools against individual devices or groups of devices.

A side effect of creating the inventory is that you now have a master list of devices on the network; you also have proven that the device names are resolvable via DNS and that you have the correct login credentials. This is actually a big step forward in some environments where I’ve worked.

Note that netmiko-grep caches the device configs locally. Once the cache has been built, you can make subsequent search operations run much faster by specifying the --use-cache option.

It now should be apparent that you can use Netmiko Tools to do a lot of administration and automation without writing any Python code. Again, refer to official documentation for all the options and more examples.

Start Coding with Netmiko

Now that you have a sense of what you can do with Netmiko Tools, you’ll likely come up with unique scenarios that require actual coding.

For the record, I don’t consider myself an advanced Python programmer at this time, so the examples here may not be optimal. I’m also limiting my examples to snippets of code rather than complete scripts. The example code is using Python 2.7.

My Approach to the Problem

I wrote a bunch of code before I became aware of the Netmiko Tools commands, and I found that I’d duplicated a lot of their functionality. My original approach was to break the problem into two separate phases. The first phase was the “scanning” of the switches and storing their configurations and command output locally. The second phase was processing and searching across the stored data.

My first script was a “scanner” that reads a list of switch hostnames and Netmiko device types from a simple text file, logs in to each switch, runs a series of CLI commands and then stores the output of each command in text files for later processing.

Reading a List of Devices

My first task is to read a list of network devices and their Netmiko “device type” from a simple text file in the CSV format. I include the csv module, so I can use the csv.DictReader function, which returns CSV fields as a Python dictionary. I like the CSV file format, as anyone with limited UNIX/Linux skills likely knows how to work with it, and it’s a very common file type for exporting data if you have an existing database of network devices.

For example, the following is a list of switch names and device types in CSV format:


sfo03-r1r11-sw1,cisco_ios
sfo03-r1r12-sw2,cisco_nxos
sfo03-r1r10-sw2,arista_eos
sfo03-r4r5-sw3,arista_eos
sfo03-r1r12-sw1,cisco_nxos
sfo03-r5r15-sw2,dell_force10

The following Python code reads the data filename from the command line, opens the file and then iterates over each device entry, calling the login_switch() function that will run the actual Netmiko code:


import csv
import sys
import logging
def main():
# get data file from command line
   devfile = sys.argv[1]
# open file and extract the two fields
   with open(devfile,'rb') as devicesfile:
       fields = ['hostname','devtype']
       hosts = csv.DictReader(devicesfile,fieldnames=fields,
↪delimiter=',')
# iterate through list of hosts, calling "login_switch()"
# for each one
       for host in hosts:
           hostname = host['hostname']
           print "hostname = ",hostname
           devtype = host['devtype']
           login_switch(hostname,devtype)

The login_switch() function runs any number of commands and stores the output in separate text files under a directory based on the name of the device:


# import required module
from netmiko import ConnectHandler
# login into switch and run command
def login_switch(host,devicetype):
# required arguments to ConnectHandler
    device = {
# device_type and ip are read from data file
    'device_type': devicetype,
    'ip':host,
# device credentials are hardcoded in script for now
    'username':'admin',
    'password':'secretpass',
    }
# if successful login, run command on CLI
    try:
        net_connect = ConnectHandler(**device)
        commands = "show version"
        output = net_connect.send_command(commands)
# construct directory path based on device name
        path = '/root/login/scan/' + host + "/"
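# make_dir() is a small helper (not shown here) that creates the
# directory if it doesn't already exist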
        make_dir(path)
        filename = path + "show_version"
# store output of command in file
        handle = open (filename,'w')
        handle.write(output)
        handle.close()
# if unsuccessful, print error
    except Exception as e:
        print "RAN INTO ERROR "
        print "Error: " + str(e)

This code opens a connection to the device, executes the show version command and stores the output in /root/login/scan/<devicename>/show_version.

The show version output is incredibly useful, as it typically contains the vendor, model, OS version, hardware details, serial number and MAC address. Here’s an example from an Arista switch:


Arista DCS-7050QX-32S-R
Hardware version:    01.31
Serial number:       JPE16292961
System MAC address:  444c.a805.6921

Software image version: 4.17.0F
Architecture:           i386
Internal build version: 4.17.0F-3304146.4170F
Internal build ID:      21f25f02-5d69-4be5-bd02-551cf79903b1

Uptime:                 25 weeks, 4 days, 21 hours and 32
                        minutes
Total memory:           3796192 kB
Free memory:            1230424 kB

This information allows you to create all sorts of good stuff, such as a hardware inventory of your network and a software version report that you can use for audits and planned software updates.
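
As a hedged sketch of that idea (it assumes the scan directory layout created by the scanner script and matches fields against the Arista-style output shown above; other vendors would need their own patterns), a simple inventory report could be built like this:


# Sketch of a simple inventory report built from the stored show_version
# files. The string matching is tuned to the Arista-style output shown
# above; other vendors will need their own patterns.
import os

scan_dir = '/root/login/scan'
for device in sorted(os.listdir(scan_dir)):
    version_file = os.path.join(scan_dir, device, 'show_version')
    if not os.path.isfile(version_file):
        continue
    serial = image = 'unknown'
    with open(version_file) as fh:
        for line in fh:
            if line.startswith('Serial number:'):
                serial = line.split(':', 1)[1].strip()
            elif line.startswith('Software image version:'):
                image = line.split(':', 1)[1].strip()
    print "%s,%s,%s" % (device, serial, image)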

My current script runs show lldp neighbors, show run and show interface status and records the device CLI prompt in addition to show version.

The above code example constitutes the bulk of what you need to get started with Netmiko. You now have a way to run arbitrary commands on any number of devices without typing anything by hand. This isn’t Software-Defined Networking (SDN) by any means, but it’s still a huge step forward from the “box-by-box” method of network administration.

Next, let’s try the scanning script on the sample network:


$ python scanner.py devices.csv
hostname = sfo03-r1r15-sw1
hostname = sfo03-r3r19-sw0
hostname = sfo03-r1r16-sw2
hostname = sfo03-r3r8-sw2
RAN INTO ERROR
Error: Authentication failure: unable to connect dell_force10
 ↪sfo03-r3r8-sw2:22
Authentication failed.
hostname = sfo03-r3r10-sw2
hostname = sfo03-r3r11-sw1
hostname = sfo03-r4r14-sw2
hostname = sfo03-r4r15-sw1

If you have a lot of devices, you’ll likely experience login failures like the one in the middle of the scan above. These could be due to multiple reasons, including the device being down, being unreachable over the network, the script having incorrect credentials and so on. Expect to make several passes to address all the problems before you get a “clean” run on a large network.

This finishes the “scanning” portion of the process, and all the data you need is now stored locally for further analysis in the “scan” directory, which contains subdirectories for each device:


$ ls scan/
sfo03-r1r10-sw2 sfo03-r2r14-sw2 sfo03-r3r18-sw1 sfo03-r4r8-sw2
 ↪sfo03-r6r14-sw2
sfo03-r1r11-sw1 sfo03-r2r15-sw1 sfo03-r3r18-sw2 sfo03-r4r9-sw1
 ↪sfo03-r6r15-sw1
sfo03-r1r12-sw0 sfo03-r2r16-sw1 sfo03-r3r19-sw0 sfo03-r4r9-sw2
 ↪sfo03-r6r16-sw1
sfo03-r1r12-sw1 sfo03-r2r16-sw2 sfo03-r3r19-sw1 sfo03-r5r10-sw1
 ↪sfo03-r6r16-sw2
sfo03-r1r12-sw2 sfo03-r2r2-sw1  sfo03-r3r4-sw2  sfo03-r5r10-sw2
 ↪sfo03-r6r17-sw1

You can see that each subdirectory contains separate files for each command output:


$ ls sfo03-r1r10-sw2/
show_lldp prompt show_run show_version show_int_status

Debugging via Logging

Netmiko normally is very quiet when it’s running, so it’s difficult to tell where things are breaking in the interaction with a network device. The easiest way I have found to debug problems is to use the logging module. I normally keep this disabled, but when I want to turn on debugging, I uncomment the logging.basicConfig line below:


import logging
if __name__ == "__main__":
#  logging.basicConfig(level=logging.DEBUG)
  main()

Then I run the script, and it produces output on the console showing the entire SSH conversation between the netmiko module and the remote device (a switch named “sfo03-r1r10-sw2” in this example):


DEBUG:netmiko:In disable_paging
DEBUG:netmiko:Command: terminal length 0
DEBUG:netmiko:write_channel: terminal length 0
DEBUG:netmiko:Pattern is: sfo03\-r1r10\-sw2
DEBUG:netmiko:_read_channel_expect read_data: terminal
 ↪length 0
DEBUG:netmiko:_read_channel_expect read_data: Pagination
disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:Pattern found: sfo03\-r1r10\-sw2 terminal
 ↪length 0
Pagination disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:terminal length 0
Pagination disabled.
sfo03-r1r10-sw2#
DEBUG:netmiko:Exiting disable_paging

In this case, the terminal length 0 command sent by Netmiko is successful. In the following example, the command sent to change the terminal width is rejected by the switch CLI with the “Authorization denied” message:


DEBUG:netmiko:Entering set_terminal_width
DEBUG:netmiko:write_channel: terminal width 511
DEBUG:netmiko:Pattern is: sfo03\-r1r10\-sw2
DEBUG:netmiko:_read_channel_expect read_data: terminal
 ↪width 511
DEBUG:netmiko:_read_channel_expect read_data: % Authorization
denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:Pattern found: sfo3\-r1r10\-sw2 terminal
 ↪width 511
% Authorization denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:terminal width 511
% Authorization denied for command 'terminal width 511'
sfo03-r1r10-sw2#
DEBUG:netmiko:Exiting set_terminal_width

The logging also will show the entire SSH login and authentication sequence in detail. I had to deal with one switch that was using a deprecated SSH cipher that was disabled by default in the SSH client, causing the SSH session to fail when trying to authenticate. With logging, I could see the client rejecting the cipher being offered by the switch. I also discovered another type of switch where the Netmiko connection appeared to hang. The logging revealed that it was stuck at the more? prompt, as the paging was never disabled successfully after login. On this particular switch, the commands to disable paging had to be run in a privileged mode. My quick fix was to add a disable_paging() call after the “enable” mode was entered.
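
That quick fix looks roughly like the following sketch (the device details and enable secret are placeholders): enter enable mode first, then call disable_paging() so the paging command is accepted:


# Rough sketch of the workaround described above: on a switch that only
# accepts the paging command in privileged mode, enter enable mode first
# and then call disable_paging(). Device details are placeholders.
from netmiko import ConnectHandler

device = {
    'device_type': 'cisco_ios',
    'ip': 'sfo03-r1r11-sw1',
    'username': 'netadmin',
    'password': 'secretpass',
    'secret': 'enablepass',    # enable password, if the device needs one
}
conn = ConnectHandler(**device)
conn.enable()              # move from ">" to the privileged "#" prompt
conn.disable_paging()      # now the paging command is accepted
output = conn.send_command('show running-config')
conn.disconnect()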

Analysis Phase

Now that you have all the data you want, you can start processing it.

A very simple example would be an “audit”-type of check, which verifies that the hostname registered in DNS matches the hostname configured in the device. If these do not match, it will cause all sorts of confusion when logging in to the device, correlating syslog messages or looking at LLDP and CDP output:


import os
import sys
directory = "/root/login/scan"
for filename in os.listdir(directory):
    prompt_file = directory + '/' + filename + '/prompt'
    try:
         prompt_fh = open(prompt_file,'rb')
    except IOError:
         "Can't open:", prompt_file
         sys.exit()

    with prompt_fh:
        prompt = prompt_fh.read()
        prompt = prompt.rstrip('#')
        if (filename != prompt):
            print 'switch DNS hostname %s != configured
             ↪hostname %s' %(filename, prompt)

This script opens the scan directory, opens each “prompt” file, derives the configured hostname by stripping off the “#” character, compares it with the subdirectory filename (which is the hostname according to DNS) and prints a message if they don’t match. In the example below, the script finds one switch where the DNS switch name doesn’t match the hostname configured on the switch:


$ python name_check.py
switch DNS hostname sfo03-r1r12-sw2 != configured hostname
 ↪SFO03-R1R10-SW1-Cisco_Core

It’s a reality that most complex networks are built up over a period of years by multiple people with different naming conventions, work styles, skill sets and so on. I’ve accumulated a number of “audit”-type checks that find and correct inconsistencies that can creep into a network over time. This is the perfect use case for network automation, because you can see everything at once, as opposed to going through each device, one at a time.
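
As one more audit-style example (a sketch that assumes the scanner also saved each device's running configuration as show_run, and uses a placeholder syslog address), the following flags any device whose stored configuration lacks the expected syslog destination:


# Another audit-style check against the locally stored configs: flag any
# device whose saved "show_run" output lacks the expected syslog
# destination. The expected address is a placeholder.
import os

scan_dir = '/root/login/scan'
expected = 'logging host 10.7.1.19'

for device in sorted(os.listdir(scan_dir)):
    config_file = os.path.join(scan_dir, device, 'show_run')
    if not os.path.isfile(config_file):
        continue
    with open(config_file) as fh:
        if expected not in fh.read():
            print "%s is missing '%s'" % (device, expected)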

Performance

During the initial debugging, I had the “scanning” script log in to each switch in a serial fashion. This worked fine for a few switches, but performance became a problem when I was scanning hundreds at a time. I used the Python multiprocessing module to fire off a bunch of “workers” that interacted with switches in parallel. This cut the processing time for the scanning portion down to a couple minutes, as the entire scan took only as long as the slowest switch took to complete. The switch scanning problem fits quite well into the multiprocessing model, because there are no events or data to coordinate between the individual workers. The Netmiko Tools also take advantage of multiprocessing and use a cache system to improve performance.
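
A minimal sketch of that change (reusing the login_switch() function from the scanner script; the worker count of 20 is an arbitrary starting point) looks something like this:


# Minimal sketch of parallel scanning with multiprocessing.Pool, reusing
# the login_switch() function from the scanner script. The worker count
# of 20 is an arbitrary starting point.
import csv
from multiprocessing import Pool

def scan_one(entry):
    hostname, devtype = entry
    login_switch(hostname, devtype)

def main():
    with open('devices.csv', 'rb') as devicesfile:
        entries = [(row[0], row[1]) for row in csv.reader(devicesfile)]
    pool = Pool(processes=20)
    pool.map(scan_one, entries)
    pool.close()
    pool.join()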

Future Directions

The most complicated script I’ve written so far with Netmiko logs in to every switch, gathers the LLDP neighbor info and produces a text-only topology map of the entire network. For those unfamiliar with LLDP (the Link Layer Discovery Protocol), most modern network devices send LLDP multicasts out of every port every 30 seconds. The LLDP data includes many details, including the switch hostname, port name, MAC address, device model, vendor, OS and so on. It allows any given device to know about all its immediate neighbors.

For example, here’s a typical LLDP display on a switch. The “Neighbor” columns show you details on what is connected to each of your local ports:


sfo03-r1r5-sw1# show lldp neighbors
Port  Neighbor Device ID   Neighbor Port ID   TTL
Et1   sfo03-r1r3-sw1         Ethernet1          120
Et2   sfo03-r1r3-sw2         Te1/0/2            120
Et3   sfo03-r1r4-sw1         Te1/0/2            120
Et4   sfo03-r1r6-sw1         Ethernet1          120
Et5   sfo03-r1r6-sw2         Te1/0/2            120

By asking all the network devices for their list of LLDP neighbors, it’s possible to build a map of the network. My approach was to build a list of local switch ports and their LLDP neighbors for the top-level switch, and then recursively follow each switch link down the hierarchy of switches, adding each entry to a nested dictionary. This process becomes very complex when there are redundant links and endless loops to avoid, but I found it a great way to learn more about complex Python data structures.
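
As a sketch of that first step (the parsing assumes the Arista-style table shown above and the show_lldp files saved by the scanner script; other vendors format the columns differently), each device's neighbor table can be turned into a dictionary:


# Sketch of the mapper's first step: turn each device's stored
# "show_lldp" output into a dictionary of local port -> (neighbor,
# neighbor port). The parsing assumes the Arista-style table shown
# above; other vendors format the columns differently.
import os

def parse_lldp(lldp_file):
    neighbors = {}
    with open(lldp_file) as fh:
        for line in fh:
            fields = line.split()
            # keep only 4-column rows whose last field is a numeric TTL
            if len(fields) != 4 or not fields[3].isdigit():
                continue
            local_port, neighbor, neighbor_port, ttl = fields
            neighbors[local_port] = (neighbor, neighbor_port)
    return neighbors

scan_dir = '/root/login/scan'
topology = {}
for device in os.listdir(scan_dir):
    lldp_file = os.path.join(scan_dir, device, 'show_lldp')
    if os.path.isfile(lldp_file):
        topology[device] = parse_lldp(lldp_file)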

The following output is from my “mapper” script. It uses indentation (from left to right) to show the hierarchy of switches, which is three levels deep in this example:


sfo03-r1r5-core:Et6  sfo03-r1r8-sw1:Ethernet1
    sfo03-r1r8-sw1:Et22 sfo03-r6r8-sw3:Ethernet48
    sfo03-r1r8-sw1:Et24 sfo03-r6r8-sw2:Te1/0/1
    sfo03-r1r8-sw1:Et25 sfo03-r3r7-sw2:Te1/0/1
    sfo03-r1r8-sw1:Et26 sfo03-r3r7-sw1:24

It prints the port name next to the switch hostname, which allows you to see both “sides” of the inter-switch links. This is extremely useful when trying to orient yourself on the network. I’m still working on this script, but it currently produces a “real-time” network topology map that can be turned into a network diagram.

I hope this information inspires you to investigate network automation. Start with Netmiko Tools and the inventory file to get a sense of what is possible. You likely will encounter a scenario that requires some Python coding, either using the output of Netmiko Tools or perhaps your own standalone script. Either way, the Netmiko functions make automating a large, multivendor network fairly easy.


How to Install Visual Studio Code on Debian 9

Visual Studio Code is a free and open source cross-platform code editor developed by Microsoft. It has built-in debugging support, embedded Git control, syntax highlighting, code completion, an integrated terminal, code refactoring and snippets. Visual Studio Code functionality can be extended using extensions.

This tutorial explains how to install Visual Studio Code editor on Debian using apt from the VS Code repository.

The user you are logged in as must have sudo privileges to be able to install packages.

Complete the following steps to install Visual Studio Code on your Debian system:

  1. Start by updating the packages index and installing the dependencies by typing:
    sudo apt update
    sudo apt install software-properties-common apt-transport-https curl
  2. Import the Microsoft GPG key using the following curl command:
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

    Add the Visual Studio Code repository to your system:

    sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main"
  3. Once the repository is added, install the latest version of Visual Studio Code with:
    sudo apt update
    sudo apt install code

That’s it. Visual Studio Code has been installed on your Debian desktop and you can start using it.

Once VS Code is installed on your Debian system, you can launch it either from the command line by typing code or by clicking on the VS Code icon (Activities -> Visual Studio Code).

When you start VS Code for the first time, a welcome window will be displayed.

You can now start installing extensions and configuring VS Code according to your preferences.

When a new version of Visual Studio Code is released, you can update the package through your desktop's standard Software Update tool or by running the following commands in your terminal:

sudo apt update
sudo apt upgrade

You have successfully installed VS Code on your Debian 9 machine. Your next step could be to install Additional Components and customize your User and Workspace Settings.


The Evil-Twin Framework: A tool for testing WiFi security

Learn about a pen-testing tool intended to test the security of WiFi access points for all types of threats.


The increasing number of devices that connect to the internet over-the-air and the wide availability of WiFi access points provide many opportunities for attackers to exploit users. By tricking users into connecting to rogue access points, hackers gain full control over the users’ network connection, which allows them to sniff and alter traffic, redirect users to malicious sites, and launch other attacks over the network.

To protect users and teach them to avoid risky online behaviors, security auditors and researchers must evaluate users’ security practices and understand the reasons they connect to WiFi access points without being confident they are safe. There are a significant number of tools that can conduct WiFi audits, but no single tool can test the many different attack scenarios and none of the tools integrate well with one another.

The Evil-Twin Framework (ETF) aims to fix these problems in the WiFi auditing process by enabling auditors to examine multiple scenarios and integrate multiple tools. This article describes the framework and its functionalities, then provides some examples to show how it can be used.

The ETF architecture

The ETF framework was written in Python because the development language is very easy to read and make contributions to. In addition, many of the ETF’s libraries, such as Scapy, were already developed for Python, making it easy to use them for ETF.

The ETF architecture (Figure 1) is divided into different modules that interact with each other. The framework’s settings are all written in a single configuration file. The user can verify and edit the settings through the user interface via the ConfigurationManager class. Other modules can only read these settings and run according to them.


Figure 1: Evil-Twin framework architecture

The ETF supports multiple user interfaces that interact with the framework. The current default interface is an interactive console, similar to the one on Metasploit. A graphical user interface (GUI) and a command line interface (CLI) are under development for desktop/browser use, and mobile interfaces may be an option in the future. The user can edit the settings in the configuration file using the interactive console (and eventually with the GUI). The user interface can interact with every other module that exists in the framework.

The WiFi module (AirCommunicator) was built to support a wide range of WiFi capabilities and attacks. The framework identifies three basic pillars of Wi-Fi communication: packet sniffing, custom packet injection, and access point creation. The three main WiFi communication modules are AirScanner, AirInjector, and AirHost, which are responsible for packet sniffing, packet injection, and access point creation, respectively. The three classes are wrapped inside the main WiFi module, AirCommunicator, which reads the configuration file before starting the services. Any type of WiFi attack can be built using one or more of these core features.

To enable man-in-the-middle (MITM) attacks, which are a common way to attack WiFi clients, the framework has an integrated module called ETFITM (Evil-Twin Framework-in-the-Middle). This module is responsible for the creation of a web proxy used to intercept and manipulate HTTP/HTTPS traffic.

There are many other tools that can leverage the MITM position created by the ETF. Through its extensibility, ETF can support them—and, instead of having to call them separately, you can add the tools to the framework just by extending the Spawner class. This enables a developer or security auditor to call the program with a preconfigured argument string from within the framework.

The other way to extend the framework is through plugins. There are two categories of plugins: WiFi plugins and MITM plugins. MITM plugins are scripts that can run while the MITM proxy is active. The proxy passes the HTTP(S) requests and responses through to the plugins where they can be logged or manipulated. WiFi plugins follow a more complex flow of execution but still expose a fairly simple API to contributors who wish to develop and use their own plugins. WiFi plugins can be further divided into three categories, one for each of the core WiFi communication modules.

Each of the core modules has certain events that trigger the execution of a plugin. For instance, AirScanner has three defined events to which a response can be programmed. The events usually correspond to a setup phase before the service starts running, a mid-execution phase while the service is running, and a teardown or cleanup phase after a service finishes. Since Python allows multiple inheritance, one plugin can subclass more than one plugin class.
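
To illustrate that point in plain Python (the class names below are hypothetical and are not the ETF's actual plugin API), a single plugin can respond to events from more than one core module by inheriting from multiple base classes:

# Plain-Python illustration of the multiple-inheritance point above.
# The base class names are hypothetical, not the ETF's real plugin API.
class AirScannerPlugin(object):
    def pre_scanning(self):
        pass

class AirInjectorPlugin(object):
    def post_injection(self):
        pass

class SnifferAndInjectorPlugin(AirScannerPlugin, AirInjectorPlugin):
    # one plugin hooking events from two core modules
    def pre_scanning(self):
        print "setting up sniffing filters"

    def post_injection(self):
        print "cleaning up after the injection run"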

Figure 1 above is a summary of the framework’s architecture. Lines pointing away from the ConfigurationManager mean that the module reads information from it and lines pointing towards it mean that the module can write/edit configurations.

Examples of using the Evil-Twin Framework

There are a variety of ways ETF can conduct penetration testing on WiFi network security or work on end users’ awareness of WiFi security. The following examples describe some of the framework’s pen-testing functionalities, such as access point and client detection, WPA and WEP access point attacks, and evil twin access point creation.

These examples were devised using ETF with WiFi cards that allow WiFi traffic capture. They also utilize the following abbreviations for ETF setup commands:

  • APS access point SSID
  • APB access point BSSID
  • APC access point channel
  • CM client MAC address

In a real testing scenario, make sure to replace these abbreviations with the correct information.

Capturing a WPA 4-way handshake after a de-authentication attack

This scenario (Figure 2) takes two aspects into consideration: the de-authentication attack and the possibility of catching a 4-way WPA handshake. The scenario starts with a running WPA/WPA2-enabled access point with one connected client device (in this case, a smartphone). The goal is to de-authenticate the client with a general de-authentication attack then capture the WPA handshake once it tries to reconnect. The reconnection will be done manually immediately after being de-authenticated.


Figure 2: Scenario for capturing a WPA handshake after a de-authentication attack

The consideration in this example is the ETF’s reliability. The goal is to find out if the tools can consistently capture the WPA handshake. The scenario will be performed multiple times with each tool to check its reliability when capturing the WPA handshake.

There is more than one way to capture a WPA handshake using the ETF. One way is to use a combination of the AirScanner and AirInjector modules; another way is to just use the AirInjector. The following scenario uses a combination of both modules.

The ETF launches the AirScanner module and analyzes the IEEE 802.11 frames to find a WPA handshake. Then the AirInjector can launch a de-authentication attack to force a reconnection. The following steps must be done to accomplish this on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Start the AirScanner module with the CredentialSniffer plugin: start airscanner with credentialsniffer
  5. Add a target access point SSID (APS) from the sniffed access points list: add aps where ssid = <APS>
  6. Start the AirInjector, which by default launches the de-authentication attack: start airinjector

This simple set of commands enables the ETF to perform an efficient and successful de-authentication attack on every test run. The ETF can also capture the WPA handshake on every test run. The following output shows the ETF’s successful execution.

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 02/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config airscanner
ETF[etf/aircommunicator/airscanner]::> listargs
sniffing_interface =               wlan1; (var)
probes =                True; (var)
beacons =                True; (var)
hop_channels =               false(var)
fixed_sniffing_channel =                  11(var)
ETF[etf/aircommunicator/airscanner]::> start airscanner with
arpreplayer        caffelatte         credentialsniffer  packetlogger       selfishwifi
ETF[etf/aircommunicator/airscanner]::> start airscanner with credentialsniffer
[+] Successfully added credentialsniffer plugin.
[+] Starting packet sniffer on interface ‘wlan1’
[+] Set fixed channel to 11
ETF[etf/aircommunicator/airscanner]::> add aps where ssid = CrackWPA
ETF[etf/aircommunicator/airscanner]::> start airinjector
ETF[etf/aircommunicator/airscanner]::> [+] Starting deauthentication attack
– 1000 bursts of 1 packets
– 1 different packets
[+] Injection attacks finished executing.
[+] Starting post injection methods
[+] Post injection methods finished
[+] WPA Handshake found for client ’70:3e:ac:bb:78:64′ and network ‘CrackWPA’

Launching an ARP replay attack and cracking a WEP network

The next scenario (Figure 3) will also focus on the Address Resolution Protocol (ARP) replay attack’s efficiency and the speed of capturing the WEP data packets containing the initialization vectors (IVs). The same network may require a different number of caught IVs to be cracked, so the limit for this scenario is 50,000 IVs. If the network is cracked during the first test with less than 50,000 IVs, that number will be the new limit for the following tests on the network. The cracking tool to be used will be aircrack-ng.

The test scenario starts with an access point using WEP encryption and an offline client that knows the key—the key for testing purposes is 12345, but it can be a larger and more complex key. Once the client connects to the WEP access point, it will send out a gratuitous ARP packet; this is the packet that’s meant to be captured and replayed. The test ends once the limit of packets containing IVs is captured.


Figure 3: Scenario for the ARP replay attack and WEP cracking

ETF uses Python’s Scapy library for packet sniffing and injection. To minimize known performance problems in Scapy, ETF tweaks some of its low-level libraries to significantly speed packet injection. For this specific scenario, the ETF uses tcpdump as a background process instead of Scapy for more efficient packet sniffing, while Scapy is used to identify the encrypted ARP packet.
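
As a rough illustration (this is not the ETF's actual code), the classic way to spot a candidate encrypted ARP frame on a WEP network with Scapy is to watch for WEP data frames whose size falls in the small, fixed range an ARP request produces:

# Rough illustration (not the ETF's actual code): flag WEP data frames
# whose size matches what an encrypted ARP request typically produces.
# The size window is an approximation and depends on the 802.11 headers
# present; the monitor-mode interface name is a placeholder.
from scapy.all import sniff, Dot11WEP

def flag_possible_arp(pkt):
    if pkt.haslayer(Dot11WEP) and 60 <= len(pkt) <= 90:
        print "possible encrypted ARP frame: %d bytes" % len(pkt)

sniff(iface="wlan1mon", prn=flag_possible_arp, store=0)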

This scenario requires the following commands and operations to be performed on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Enter the ARPReplayer plugin configuration mode: config arpreplayer
  5. Set the target access point BSSID (APB) of the WEP network: set target_ap_bssid <APB>
  6. Start the AirScanner module with the ARPReplayer plugin: start airscanner with arpreplayer

After executing these commands, ETF correctly identifies the encrypted ARP packet, then successfully performs an ARP replay attack, which cracks the network.

Launching a catch-all honeypot

The scenario in Figure 4 creates multiple access points with the same SSID. This technique discovers the encryption type of a network that was probed for but out of reach. By launching multiple access points with all security settings, the client will automatically connect to the one that matches the security settings of the locally cached access point information.


Figure 4: Scenario for launching a catch-all honeypot

Using the ETF, it is possible to configure the hostapd configuration file then launch the program in the background. Hostapd supports launching multiple access points on the same wireless card by configuring virtual interfaces, and since it supports all types of security configurations, a complete catch-all honeypot can be set up. For the WEP and WPA(2)-PSK networks, a default password is used, and for the WPA(2)-EAP, an “accept all” policy is configured.

For this scenario, the following commands and operations must be performed on the ETF:

  1. Enter the APLauncher configuration mode: config aplauncher
  2. Set the desired access point SSID (APS): set ssid = <APS>
  3. Configure the APLauncher as a catch-all honeypot: set catch_all_honeypot = true
  4. Start the AirHost module: start airhost

With these commands, the ETF can launch a complete catch-all honeypot with all types of security configurations. ETF also automatically launches the DHCP and DNS servers that allow clients to stay connected to the internet. ETF offers a better, faster, and more complete solution to create catch-all honeypots. The following output shows the ETF’s successful execution.

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 03/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config aplauncher
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf ssid CatchMe
ssid = CatchMe
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf catch_all_honeypot true
catch_all_honeypot = true
ETF[etf/aircommunicator/airhost/aplauncher]::> start airhost
[+] Killing already started processes and restarting network services
[+] Stopping dnsmasq and hostapd services
[+] Access Point stopped…
[+] Running airhost plugins pre_start
[+] Starting hostapd background process
[+] Starting dnsmasq service
[+] Running airhost plugins post_start
[+] Access Point launched successfully
[+] Starting dnsmasq service

Conclusions and future work

These scenarios use common and well-known attacks to help validate the ETF’s capabilities for testing WiFi networks and clients. The results also validate that the framework’s architecture enables new attack vectors and features to be developed on top of it while taking advantage of the platform’s existing capabilities. This should accelerate development of new WiFi penetration-testing tools, since a lot of the code is already written. Furthermore, the fact that complementary WiFi technologies are all integrated in a single tool will make WiFi pen-testing simpler and more efficient.

The ETF’s goal is not to replace existing tools but to complement them and offer a broader choice to security auditors when conducting WiFi pen-testing and improving user awareness.

The ETF is an open source project available on GitHub and community contributions to its development are welcomed. Following are some of the ways you can help.

One of the limitations of current WiFi pen-testing is the inability to log important events during tests. This makes reporting identified vulnerabilities both more difficult and less accurate. The framework could implement a logger that can be accessed by every class to create a pen-testing session report.

The ETF tool’s capabilities cover many aspects of WiFi pen-testing. On one hand, it facilitates the phases of WiFi reconnaissance, vulnerability discovery, and attack. On the other hand, it doesn’t offer a feature that facilitates the reporting phase. Adding the concept of a session and a session reporting feature, such as the logging of important events during a session, would greatly increase the value of the tool for real pen-testing scenarios.

Another valuable contribution would be extending the framework to facilitate WiFi fuzzing. The IEEE 802.11 protocol is very complex, and considering there are multiple implementations of it, both on the client and access point side, it’s safe to assume these implementations contain bugs and even security flaws. These bugs could be discovered by fuzzing IEEE 802.11 protocol frames. Since Scapy allows custom packet creation and injection, a fuzzer can be implemented through it.


Top 10 Artificial Intelligence Technology Trends That Will Dominate In 2019


Artificial Intelligence (AI) has created machines that mimic human intelligence. The intention behind the creation and continued development of machine intelligence is to improve our daily lives and the manner in which we interact with machines. Artificial intelligence is already making a difference in our homes, as customers and as service providers. Improvement of technology will inform the growth of artificial intelligence and vice versa beyond our wildest imagination. So what are the top 10 artificial intelligence technology trends that you should anticipate in 2019? Read on to find out!

1. Machine Learning Platforms

Machines can learn and adapt to what they have learned. Advancements in technology have improved the methods through which computers learn. Machine learning platforms access, classify and predict data, and these platforms are gaining ground by providing:

  • Data applications

  • Algorithms

  • Training tools

  • Application programming interface

  • Other machines

Providing these systems automatically and autonomously enables these machines to perform their functions intelligently.

2. Chatbot


A chatbot is a program in an application or on a website that provides customer support twenty-four hours a day, seven days a week. Chatbots interact with users through text or audio, mostly through keywords and automated responses. Chatbots often mimic human interactions. Over time, chatbots improve the user experience through machine learning platforms by identifying patterns and adapting to them. Different online service providers are already making use of this trend in artificial intelligence for their businesses. Users can:

  • Submit complaints or reviews,

  • Order food from restaurants,

  • Make hotel reservations,

  • Plan appointments.

3. Natural Language Generation

Natural language generation is an artificial intelligence that converts data into text. The text is relayed in a natural language such as English and can be presented as spoken or written. This conversion enables computers to communicate ideas with high accuracy. This form of artificial intelligence is used to generate reports that are incredibly detailed. Journalists, for example, have used Natural Language Generation to produce detailed reports and articles on corporate earnings and natural disasters such as earthquakes. Chatbots and smart devices use and benefit from natural language generation.

4. Augmented Reality


If you have played Pokémon Go or used the Snapchat lens, then you have interacted with augmented reality. Augmented reality places computer-generated, virtual characters in the real world in real time usually through a camera lens. Whereas virtual reality completely shuts out the world, augmented reality blends its generated characters with the world.

This trend is making its way into different retail stores that make home furnishing and makeup selection more fun and interactive.

5. Virtual Agents

A virtual agent is a computer-generated intelligence that provides online customer assistance. Virtual agents are animated virtual characters that typically have human-like characteristics. Virtual agents lead discussions with customers and provide adequate responses. Additionally, virtual agents can

  • Provide product information,

  • Place an order,

  • Make a reservation,

  • Book an appointment.

They also improve their function through machine learning platforms for better service provision. Companies that provide virtual agents include Google, Microsoft, Amazon and Assist AI.

6. Speech Recognition

Speech recognition interprets words from spoken language and converts them into data the machine understands and can assess. It facilitates communication between man and machine and is built into a lot of upcoming smart devices such as speakers, phones and watches. Continued improvement of the algorithms that recognize and convert speech into machine data will solidify this trend in 2019.

7. Self-driving cars

These are cars that drive themselves independently. This is made possible by merging sensors and artificial intelligence. The sensors map out the immediate environment of the vehicle, and artificial intelligence interprets and responds to the information relayed by the sensors. This form of artificial intelligence is expected to lower collisions and place less of a burden on drivers. Companies such as Uber, Tesla, and General Motors are hard at work to make self-driving cars a commercial reality in 2019.

8. Smart devices

Smart devices are becoming increasingly popular. Technology that has been in use in recent years is being modified and released as smart devices. They include:

  • Smart thermostat

  • Smart speakers

  • Smart light bulbs

  • Smart security cameras

  • Smartphones

  • Smartwatches

  • Smart hubs

  • Smart keychains

Smart devices interact with users and other devices through different wireless connections, sensors and artificial intelligence. They pick up on the environment and respond to any changes based on their function and programming. Smart devices are likely to increase and improve in 2019.

9. Artificial intelligence permeation

Artificial intelligence-driven technology is on the rise and is penetrating all manner of industries. The continued development of machine learning platforms is making it easier and convenient for businesses to utilize artificial intelligence. Some of the industries that are adopting this technology include the automotive industry, marketing, healthcare, and finance industries and so on.

10. Internet of Things (IoT)


Internet of Things is a phrase that defines objects or devices connected via the internet that collect and share information. Merging the Internet of Things with machine intelligence will better the collection and sharing of data. The specific form of artificial intelligence being applied to the Internet of Things is machine learning platforms. Classifying and predicting data from the Internet of Things with intelligence will provide new findings and insights into connected devices.

Summary

It is not possible to specifically predict how these trends will develop or how they will disrupt the technology that is already in place. What is certain is that technology as we know it is changing thanks to the development and improvement of artificial intelligence. It is also certain 2019 will be a year of significant growth for artificial intelligence technology.

Watch out for these ten trends in 2019 and challenge yourself to interact with and learn about some, if not all of them.


Akira: The Linux Design Tool We’ve Always Wanted?

Let’s make it clear, I am not a professional designer – but I’ve used certain tools on Windows (like Photoshop, Illustrator, etc.) and Figma (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows.

Even on Linux, there is a limited number of dedicated graphic design tools. A few of these tools like GIMP and Inkscape are used by professionals as well. But most of them are not considered professional grade, unfortunately.

Even if there are a couple more solutions, I’ve never come across a native Linux application that could replace Sketch, Figma, or Adobe XD. Any professional designer would agree with that, wouldn’t they?

Is Akira going to replace Sketch, Figma, and Adobe XD on Linux?

Well, in order to develop something that could replace those awesome proprietary tools, Alessandro Castellani came up with a Kickstarter campaign, teaming up with a few experienced developers: Alberto Fanjul, Bilal Elmoussaoui, and Felipe Escoto.

So, yes, Akira is still pretty much just an idea, with a working prototype of its interface (as I observed in their live stream session via Kickstarter recently).

If it does not exist, why the Kickstarter campaign?

The aim of the Kickstarter campaign is to gather funds in order to hire the developers and take a few months off to dedicate their time in order to make Akira possible.

Nonetheless, if you want to support the project, you should know some details, right?

Fret not, we asked a couple of questions in their livestream session – let’s get into it…

Akira: A few more details

Akira prototype interface

As the Kickstarter campaign describes:

The main purpose of Akira is to offer a fast and intuitive tool to create Web and Mobile interfaces, more like Sketch, Figma, or Adobe XD, with a completely native experience for Linux.

They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, Kickstarter is the way to go. But, before that, let’s take a look at what they had to say when I asked some questions about Akira.

Q: If you consider your project – similar to what Figma offers – why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools – offering a native Linux experience or is there something really interesting to encourage users to switch (except being an open source solution)?

Akira: A native experience on Linux is always better and faster in comparison to a web-based Electron app. Also, the hardware configuration matters if you choose to utilize Figma – but Akira will be light on system resources and you will still be able to do similar stuff without needing to go online.

Q: Let’s assume that it becomes the open source solution that Linux users have been waiting for (with similar features offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans – or rely on donations?

Akira: The project will mostly rely on Donations (something like Krita Foundation could be an idea). But, there will be no “pro” pricing plans – it will be available for free and it will be an open source project.

So, with the response I got, it definitely seems to be something promising that we should probably support.

Wrapping Up

What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action?

Let us know your thoughts in the comments below.


14 Best NodeJS Frameworks for Developers in 2019


Node.js is used to build fast, highly scalable network applications based on an event-driven, non-blocking input/output model and single-threaded asynchronous programming.

A web application framework is a combination of libraries, helpers, and tools that provide a way to effortlessly build and run web applications. A web framework lays out a foundation for building a web site/app.

The most important aspects of a web framework are its architecture and features, such as support for customization, flexibility, extensibility, security, and compatibility with other libraries.

In this article, we will share the 14 best Node.js frameworks for developers. Note that this list is not organized in any particular order.

1. Express.JS

Express is a popular, fast, minimal and flexible Model-View-Controller (MVC) Node.js framework that offers a powerful collection of features for web and mobile application development. It is more or less the de facto API for writing web applications on top of Node.js.

It’s a set of routing libraries that provides a thin layer of fundamental web application features on top of the existing Node.js features. It focuses on high performance and supports robust routing and HTTP helpers (redirection, caching, etc.). It comes with a view system supporting 14+ template engines, content negotiation, and an executable for generating applications quickly.

In addition, Express comes with a multitude of easy-to-use HTTP utility methods, functions, and middleware, enabling developers to write robust APIs quickly and easily. Several popular Node.js frameworks are built on Express (you will discover some of them as you continue reading).
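
To give a feel for the style, here is a minimal TypeScript sketch of an Express app, assuming the express package (and @types/express) is installed; the route and port are purely illustrative.

import express from 'express';

const app = express();

// Application-level middleware: logs every request before routing.
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// A basic route using a URL parameter and the JSON helper.
app.get('/hello/:name', (req, res) => {
  res.json({ greeting: `Hello, ${req.params.name}!` });
});

app.listen(3000, () => console.log('Express listening on port 3000'));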

2. Socket.io

Socket.io is a fast and reliable full stack framework for building realtime applications. It is designed for real-time bidirectional event-based communication.

It comes with support for auto-reconnection, disconnection detection, binary data, multiplexing, and rooms. It has a simple and convenient API and works on every platform, browser, or device (focusing equally on reliability and speed).
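
As a rough illustration, here is a minimal TypeScript sketch of a standalone Socket.io server, assuming a recent socket.io release that exports a Server class; the 'chat message' event name is just an example.

import { Server } from 'socket.io';

// Start a standalone Socket.io server on port 3000.
const io = new Server(3000);

io.on('connection', (socket) => {
  console.log(`client connected: ${socket.id}`);

  // Rebroadcast every 'chat message' event to all connected clients.
  socket.on('chat message', (msg: string) => {
    io.emit('chat message', msg);
  });

  socket.on('disconnect', () => {
    console.log(`client disconnected: ${socket.id}`);
  });
});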

3. Meteor.JS

Third on the list is Meteor.js, an ultra-simple full stack Node.js framework for building modern web and mobile applications. It targets the web, iOS, Android, and desktop.

It integrates a key collection of technologies for building connected-client reactive applications, a build tool, and a curated set of packages from the Node.js and general JavaScript communities.

4. Koa.JS

Koa.js is a new web framework built by the developers behind Express and uses ES2017 async functions. It’s intended to be a smaller, more expressive, and more robust foundation for developing web applications and APIs. It employs promises and async functions to rid apps of callback hell and simplify error handling.

To understand the difference between Koa.js and Express.js, read this document: koa-vs-express.md.
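
As a small illustration of that async middleware style, here is a minimal TypeScript sketch, assuming the koa package is installed.

import Koa from 'koa';

const app = new Koa();

// Error-handling middleware: async/await replaces nested callbacks,
// so a single try/catch covers everything downstream.
app.use(async (ctx, next) => {
  try {
    await next();
  } catch (err) {
    ctx.status = 500;
    ctx.body = { error: 'internal server error' };
  }
});

// Response middleware.
app.use(async (ctx) => {
  ctx.body = 'Hello from Koa';
});

app.listen(3000);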

5. Sails.js

Sails.js is a realtime MVC web development framework for Node.js built on Express. Its MVC architecture resembles that of frameworks such as Ruby on Rails. However, it differs in that it supports the more modern, data-driven style of web app and API development.

It supports auto-generated REST APIs, easy WebSocket integration, and is compatible with any front-end: Angular, React, iOS, Android, Windows Phone, as well as custom hardware.

It has features that support the requirements of modern apps. Sails is especially suitable for developing realtime features such as chat.

6. MEAN.io

MEAN (short for MongoDB, Express, Angular (6) and Node) is a collection of open source technologies that, together, provide an end-to-end framework for building dynamic web applications from the ground up.

It aims to provide a simple and enjoyable starting point for writing cloud-native, full-stack JavaScript applications, from top to bottom. It is another Node.js framework built on Express.

7. Nest.JS

Nest.js is a flexible, versatile and progressive Node.js REST API framework for building efficient, reliable and scalable server-side applications. It uses modern JavaScript and it’s built with TypeScript. It combines elements of OOP (Object Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming).

It’s an out-of-the-box application architecture packaged into a complete development kit for writing enterprise-level applications. Internally, it employs Express while providing compatibility with a wide range of other libraries.
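
Here is a rough TypeScript sketch of what a Nest.js application looks like, assuming the @nestjs/common and @nestjs/core packages (with the default Express platform) are installed; the CatsController is purely illustrative.

import { Controller, Get, Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';

// A controller groups route handlers under a common path prefix.
@Controller('cats')
class CatsController {
  @Get()
  findAll(): string[] {
    return ['tabby', 'siamese'];
  }
}

// A module ties controllers and providers together.
@Module({ controllers: [CatsController] })
class AppModule {}

async function bootstrap() {
  // By default, Nest runs on top of Express.
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();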

8. Loopback.io

LoopBack is a highly-extensible Node.js framework that enables you to create dynamic end-to-end REST APIs with little or no coding. It is designed to enable developers to easily set up models and create REST APIs in a matter of minutes.

It supports easy authentication and authorization setup. It also comes with model relation support, various backend data stores, ad-hoc queries, and add-on components (third-party login and storage services).

9. Keystone.JS

KeystoneJS is an open source, lightweight, flexible and extensible Node.js full-stack framework built on Express and MongoDB. It is designed for building database-driven websites, applications and APIs.

It supports dynamic routes, form processing, database building blocks (IDs, Strings, Booleans, Dates and Numbers), and session management. It ships with a beautiful, customizable Admin UI for easily managing your data.

With Keystone, everything is simple; you choose and use the features that suit your needs, and replace the ones that don’t.

10. Feathers.JS

Feathers.js is a real-time, minimal and micro-service REST API framework for writing modern applications. It is an assortment of tools and an architecture designed for easily writing scalable REST APIs and real-time web applications from scratch. It is also built on Express.

It allows you to build application prototypes in minutes and production-ready real-time backends in days. It integrates easily with any client-side framework, whether Angular, React, or Vue.js. Furthermore, it supports flexible optional plugins for implementing authentication and authorization permissions in your apps. Above all, Feathers enables you to write elegant, flexible code.
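
As a rough illustration, here is a minimal TypeScript sketch of a Feathers service, assuming the @feathersjs/feathers package is installed; the in-memory 'messages' service is purely illustrative, and a real backend would also register the Express and Socket.io transports.

import feathers from '@feathersjs/feathers';

interface Message {
  id?: number;
  text: string;
}

const app = feathers();

// Register a tiny custom service backed by an in-memory array.
const messages: Message[] = [];
app.use('messages', {
  async find() {
    return messages;
  },
  async create(data: Message) {
    const message = { id: messages.length + 1, ...data };
    messages.push(message);
    return message;
  },
});

// Every service automatically emits real-time events.
app.service('messages').on('created', (msg: Message) => {
  console.log('A new message was created:', msg);
});

app.service('messages').create({ text: 'Hello Feathers' });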

11. Hapi.JS

Hapi.js is a simple yet rich, stable and reliable framework for building applications and services. It is intended for writing reusable application logic as opposed to building infrastructure. It is configuration-centric and offers features such as input validation, caching, authentication, and other essential facilities.
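
As a small illustration of that configuration-centric style, here is a minimal TypeScript sketch, assuming the @hapi/hapi package and its modern async/await API; the route and port are illustrative.

import Hapi from '@hapi/hapi';

const init = async () => {
  const server = Hapi.server({ port: 3000, host: 'localhost' });

  // Routes are declared as configuration objects.
  server.route({
    method: 'GET',
    path: '/hello/{name}',
    handler: (request) => `Hello, ${request.params.name}!`,
  });

  await server.start();
  console.log(`Server running on ${server.info.uri}`);
};

init();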

12. Strapi.io

Strapi is a fast, robust and feature-rich MVC Node.js framework for developing efficient and secure APIs for websites, web apps, or mobile applications. Strapi is secure by default, plugin-oriented (a set of default plugins is provided in every new project) and front-end agnostic.

It ships with an elegant, entirely customizable and fully extensible admin panel with headless CMS capabilities for keeping control of your data.

13. Restify.JS

Restify is a Node.js REST API framework that uses connect-style middleware. Under the hood, it borrows heavily from Express. It is optimized (especially for introspection and performance) for building semantically correct RESTful web services ready for production use at scale.

Notably, Restify powers a number of large web services, at companies such as Netflix.
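
As a rough illustration, here is a minimal TypeScript sketch of a Restify service, assuming the restify package is installed; the route, plugins and port shown are just examples.

import restify from 'restify';

const server = restify.createServer({ name: 'demo-api' });

// Bundled plugins act as connect-style middleware.
server.use(restify.plugins.queryParser());
server.use(restify.plugins.bodyParser());

server.get('/hello/:name', (req, res, next) => {
  res.send({ greeting: `Hello, ${req.params.name}!` });
  return next();
});

server.listen(8080, () => {
  console.log('%s listening at %s', server.name, server.url);
});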

14. Adonis.JS

AdonisJs is another popular Node.js web framework that is simple and stable, with an elegant syntax. It is an MVC framework that provides a stable ecosystem for writing scalable server-side web applications from scratch. AdonisJs is modular in design; it consists of multiple service providers, the building blocks of AdonisJs applications.

A consistent and expressive API allows for building full-stack web applications or micro API servers. It is designed to favor developer joy, and there is a well-documented blog engine tutorial for learning the basics of AdonisJs.

Other well-known Node.js frameworks include, but are not limited to, SocketCluster.io (full stack), Nodal (MVC), ThinkJS (MVC), SocketStreamJS (full stack), MEAN.JS (full stack), Total.js (MVC), DerbyJS (full stack), and Meatier (MVC).

That’s it! In this article, we’ve covered the 14 best Node.js web frameworks for developers. For each framework, we mentioned its underlying architecture and highlighted a number of its key features.

Source

vkQuake2, the project adding Vulkan support to Quake 2, now supports Linux

At the start of this year, I gave a little mention to vkQuake2, a project which has updated the classic Quake 2 with various improvements including Vulkan support.

Other improvements in vkQuake2 include support for higher-resolution displays, DPI awareness, a HUD that scales with resolution, and so on.

Initially, the project didn’t support Linux, but that has now changed. Over the last few days, they’ve committed a bunch of new code that fully enables 64-bit Linux support with Vulkan.

Screenshot of it running on Ubuntu 18.10.

It seems to work quite well in my testing, although it has a few rough edges. During an ALT+TAB, it locked up both of my screens, forcing me to drop to a TTY and kill it manually. So just be warned: that might happen to you too.

To build it and try it out, you will need the Vulkan SDK installed, along with various other dependencies listed on the GitHub page.

For the full experience, you need a copy of the Quake 2 data files, which you can easily find on GOG. Otherwise, you can test it using the demo content included in the releases on GitHub; copy the demo content over from the baseq2 directory.

Source

Download Bitnami ProcessWire Module Linux 3.0.123-0

A free software project that allows you to deploy ProcessWire on top of a Bitnami LAMP Stack

Bitnami ProcessWire Module is a multi-platform and free software project that allows users to deploy the ProcessWire application on top of the Bitnami LAMP, MAMP and WAMP stacks, without having to deal with its runtime dependencies.

What is ProcessWire?

ProcessWire is a free, open source, web-based and platform-independent application that has been designed from the outset to act as a CMS (Content Management System). Highlights include a modular and flexible plugin architecture, support for thousands of pages, modern drag & drop image storage, as well as an intuitive and easy-to-use WYSIWYG editor.

Installing Bitnami ProcessWire Module

Bitnami’s stacks and modules are distributed as native installers built using BitRock’s cross-platform installer tool and designed to work flawlessly on all GNU/Linux distributions, as well as on the Mac OS X and Microsoft Windows operating systems.

To install the ProcessWire application on top of your Bitnami LAMP (Linux, Apache, MySQL and PHP) stack, you will have to download the package that corresponds to your computer’s hardware architecture, 32-bit or 64-bit (recommended), run it and follow the on-screen instructions.

Host ProcessWire in the cloud or virtualize it

Besides installing ProcessWire on top of your LAMP server, you can host it in the cloud, thanks to Bitnami’s pre-built cloud images for the Amazon EC2 and Windows Azure cloud hosting services. Virtualizing ProcessWire is also possible, as Bitnami offers a virtual appliance based on the latest LTS release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami ProcessWire Stack and Docker container

The Bitnami ProcessWire Stack product has been designed as an all-in-one solution that greatly simplifies the installation and hosting of the ProcessWire application, as well as of its runtime dependencies, on real hardware. While Bitnami ProcessWire Stack is available for download on Softpedia, you can check the project’s homepage for a Docker container.

Source

How to Install Microsoft PowerShell 6.1.1 on Ubuntu 18.04 LTS

What is PowerShell?

Microsoft PowerShell is a shell framework used to execute commands, but it is primarily designed to perform administrative tasks such as:

  • Automation of repetitive jobs
  • Configuration management

PowerShell is an open-source and cross-platform project; it can be installed on Windows, macOS, and Linux. It includes an interactive command-line shell and a scripting environment.

How has Ubuntu 18.04 made installing PowerShell easier?

Ubuntu 18.04 has made installing applications much easier via snap packages, and Microsoft has recently published a snap package for PowerShell. This allows Linux users and admins to install and run the latest version of PowerShell in the few steps explained in this article.

Prerequisites to install PowerShell in Ubuntu 18.04

The following minimum requirements must be met before installing PowerShell 6.1.1 on Ubuntu 18.04:

  • 2 GHz dual-core processor or better
  • 2 GB system memory
  • 25 GB of free hard drive space
  • Internet access
  • Ubuntu 18.04 LTS (long term support)

Steps to Install PowerShell 6.1.1 via Snap in Ubuntu 18.04 LTS

There are two ways to install PowerShell in Ubuntu: via the terminal or via the Ubuntu Software application.

via Terminal

Step 1: Open A Terminal Console

The easiest way to open a terminal is to press the key combination Ctrl+Alt+T.

Open Ubuntu Console

Step 2: Snap Command to Install PowerShell

Enter the snap command “snap install powershell --classic” in the terminal console to start the installation of PowerShell in Ubuntu.

The Authentication Required prompt on your screen is purely a security measure. Before any installation in Ubuntu 18.04, by default, the system requires the account initiating the installation to authenticate.

To proceed, the user must enter the credentials of the account they’re currently logged in with.

Authenticate as admin

Step 3: Successful Installation of PowerShell

As soon as the system authenticates the user, the installation of PowerShell begins. (Usually, this installation takes 1-2 minutes.)

The user can follow the installation status in the terminal console.

At the end of the installation, the status “PowerShell 6.1.1 from ‘microsoft-powershell’ installed” is shown, as can be seen in the screenshot below.

Install PowerShell snap

Step 4: Launch PowerShell via Terminal

After successful installation, it’s time to launch PowerShell, which is a one-step process.

Enter the command “powershell” in the terminal console and it will take you to the PowerShell prompt in an instant.

powershell

You should now be at the PowerShell prompt, ready to experience the world of automation and scripting.

Microsoft PowerShell on Ubuntu

via Ubuntu Software

Step 1: Open Ubuntu Software

Ubuntu provides its users with the Ubuntu Software desktop application, which lists all available software and updates.

  • Open the Ubuntu Software Manager from the Ubuntu desktop.

Step 2: Search for PowerShell in Ubuntu Software

  • Under the list of all software, search for “powershell” using the search bar.
  • The search results should include the “powershell” software, as marked in the screenshot below.
  • Click on the “powershell” software and proceed to Step 3.

Step 3: Installing PowerShell via Ubuntu Software

  • The user should now see the details of the “powershell” software and the Install button

(for reference, it’s marked in the image below)

  • Click the Install button to start the installation.

(Installation via Ubuntu Software takes 1-2 minutes)

  • The user can see the installation status on the screen and will be notified once the installation completes.

Install PowerShell

Installing PowerShell

Step 4: Launch PowerShell via Ubuntu Software

After successfully installing PowerShell 6.1.1 via Ubuntu Software, the user can launch the PowerShell terminal and explore the features Microsoft PowerShell has to offer its Linux users.

  • Click the “Launch” button (for reference, marked in the image below) and it will take you to the PowerShell terminal.

Launch PowerShell

Test PowerShell Terminal via Commands

To test whether PowerShell is working correctly, the user can enter a few commands, such as:

“$PSVersionTable” to find the version of PowerShell installed (for reference, the result of this command is shown in the screenshot below).

PowerShell gives its users extensive control over the system and its directories. After following the steps in this article, you should be all set to experience the exciting and productive world of automation and scripting through Microsoft PowerShell.

Source

Get started with Cypht, an open source email client

Integrate your email and news feeds into one view with Cypht, the fourth in our series on 19 open source tools that will make you more productive in 2019.

Email arriving at a mailbox

Cypht

We spend a lot of time dealing with email, and effectively managing your email can have a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht main screen

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

Cypht's 'Everything' screen

It uses a simplified version of HTML messages to display mail, or you can set it to view a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. With most rich-text mail you’ll get plain-text views, meaning lots of links that are hard to read. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Reading a message in Cypht

Users can use a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will disappear after that session. If you log out and back in without saving, all your changes will be lost and you’ll end up with the settings you started with. This does make it easy to experiment, and if you need to reset things, simply logging out without saving will bring back the previous setup when you log back in.

Settings screen with a dark theme

Installing Cypht locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.

Source
