How to Install Visual Studio Code on Debian 9

Visual Studio Code is a free and open source cross-platform code editor developed by Microsoft. It has built-in debugging support, embedded Git control, syntax highlighting, code completion, an integrated terminal, code refactoring, and snippets. Visual Studio Code functionality can be extended using extensions.

This tutorial explains how to install Visual Studio Code editor on Debian using apt from the VS Code repository.

The user you are logged in as must have sudo privileges to be able to install packages.

Complete the following steps to install Visual Studio Code on your Debian system:

  1. Start by updating the packages index and installing the dependencies by typing:
    sudo apt update
    sudo apt install software-properties-common apt-transport-https curl
  2. Import the Microsoft GPG key using the following curl command:
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

    Add the Visual Studio Code repository to your system:

    sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main"
  3. Once the repository is added, install the latest version of Visual Studio Code with:
    sudo apt update
    sudo apt install code

That’s it. Visual Studio Code has been installed on your Debian desktop and you can start using it.

Once VS Code is installed on your Debian system, you can launch it either from the command line by typing code or by clicking on the VS Code icon (Activities -> Visual Studio Code).

When you start VS Code for the first time, a welcome window will be displayed.

You can now start installing extensions and configuring VS Code according to your preferences.

When a new version of Visual Studio Code is released you can update the package through your desktop standard Software Update tool or by running the following commands in your terminal:

sudo apt update
sudo apt upgrade

You have successfully installed VS Code on your Debian 9 machine. Your next step could be to install Additional Components and customize your User and Workspace Settings.

Source

The Evil-Twin Framework: A tool for testing WiFi security

Learn about a pen-testing tool intended to test the security of WiFi access points for all types of threats.


The increasing number of devices that connect to the internet over the air and the wide availability of WiFi access points provide many opportunities for attackers to exploit users. By tricking users into connecting to rogue access points, hackers gain full control over the users’ network connection, which allows them to sniff and alter traffic, redirect users to malicious sites, and launch other attacks over the network.

To protect users and teach them to avoid risky online behaviors, security auditors and researchers must evaluate users’ security practices and understand the reasons they connect to WiFi access points without being confident they are safe. There are a significant number of tools that can conduct WiFi audits, but no single tool can test the many different attack scenarios and none of the tools integrate well with one another.

The Evil-Twin Framework (ETF) aims to fix these problems in the WiFi auditing process by enabling auditors to examine multiple scenarios and integrate multiple tools. This article describes the framework and its functionalities, then provides some examples to show how it can be used.

The ETF architecture

The ETF was written in Python because the language is very easy to read and contribute to. In addition, many of the libraries the ETF relies on, such as Scapy, were already developed for Python, making them easy to use within the framework.

The ETF architecture (Figure 1) is divided into different modules that interact with each other. The framework’s settings are all written in a single configuration file. The user can verify and edit the settings through the user interface via the ConfigurationManager class. Other modules can only read these settings and run according to them.


Figure 1: Evil-Twin framework architecture

The ETF supports multiple user interfaces that interact with the framework. The current default interface is an interactive console, similar to the one on Metasploit. A graphical user interface (GUI) and a command line interface (CLI) are under development for desktop/browser use, and mobile interfaces may be an option in the future. The user can edit the settings in the configuration file using the interactive console (and eventually with the GUI). The user interface can interact with every other module that exists in the framework.

The WiFi module (AirCommunicator) was built to support a wide range of WiFi capabilities and attacks. The framework identifies three basic pillars of WiFi communication: packet sniffing, custom packet injection, and access point creation. The three main WiFi communication modules are AirScanner, AirInjector, and AirHost, which are responsible for packet sniffing, packet injection, and access point creation, respectively. The three classes are wrapped inside the main WiFi module, AirCommunicator, which reads the configuration file before starting the services. Any type of WiFi attack can be built using one or more of these core features.
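The layout described above can be sketched in plain Python. The class names follow the article, but the method names and internals below are illustrative assumptions, not the real ETF API:

```python
# Sketch of the ETF WiFi module layout: three core services wrapped by
# AirCommunicator. Internals are assumed for illustration only.

class AirScanner:
    """Packet sniffing service."""
    def start(self):
        return "sniffing"

class AirInjector:
    """Custom packet injection service."""
    def start(self):
        return "injecting"

class AirHost:
    """Access point creation service."""
    def start(self):
        return "hosting"

class AirCommunicator:
    """Reads the configuration, then starts the requested core service.

    Any WiFi attack is built from one or more of these core features.
    """
    def __init__(self, config):
        self.config = config
        self.services = {
            "airscanner": AirScanner(),
            "airinjector": AirInjector(),
            "airhost": AirHost(),
        }

    def start_service(self, name):
        return self.services[name].start()
```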

To enable man-in-the-middle (MITM) attacks, which are a common way to attack WiFi clients, the framework has an integrated module called ETFITM (Evil-Twin Framework-in-the-Middle). This module is responsible for the creation of a web proxy used to intercept and manipulate HTTP/HTTPS traffic.

There are many other tools that can leverage the MITM position created by the ETF. Through its extensibility, ETF can support them—and, instead of having to call them separately, you can add the tools to the framework just by extending the Spawner class. This enables a developer or security auditor to call the program with a preconfigured argument string from within the framework.
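A minimal sketch of the Spawner idea follows. The real Spawner interface is not documented here, so the method names and the wrapped tool in this example are assumptions:

```python
import shlex

class Spawner:
    """Base class: a subclass declares an external tool plus a preconfigured
    argument string, so the framework can launch the tool on demand."""
    program = None
    args_template = ""

    def command(self, **params):
        # Build the final argument list from the preconfigured template.
        return [self.program] + shlex.split(self.args_template.format(**params))

class SSLStripSpawner(Spawner):
    # Hypothetical example: wrap an external MITM helper with fixed options.
    program = "sslstrip"
    args_template = "-l {port} -w {logfile}"
```

With this pattern, `SSLStripSpawner().command(port=8080, logfile="ssl.log")` yields a ready-to-run argument list, so the auditor never has to call the tool separately.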

The other way to extend the framework is through plugins. There are two categories of plugins: WiFi plugins and MITM plugins. MITM plugins are scripts that can run while the MITM proxy is active. The proxy passes the HTTP(S) requests and responses through to the plugins where they can be logged or manipulated. WiFi plugins follow a more complex flow of execution but still expose a fairly simple API to contributors who wish to develop and use their own plugins. WiFi plugins can be further divided into three categories, one for each of the core WiFi communication modules.

Each of the core modules has certain events that trigger the execution of a plugin. For instance, AirScanner has three defined events to which a response can be programmed. The events usually correspond to a setup phase before the service starts running, a mid-execution phase while the service is running, and a teardown or cleanup phase after a service finishes. Since Python allows multiple inheritance, one plugin can subclass more than one plugin class.
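This event-driven plugin flow can be sketched as follows; the hook names are assumptions chosen to mirror the setup/mid-execution/teardown phases described above:

```python
class AirScannerPlugin:
    """Base plugin: one hook per event phase of the AirScanner service."""
    def pre_scanning(self):          # setup, before the service starts
        pass
    def mid_scanning(self, packet):  # while the service is running
        pass
    def post_scanning(self):         # teardown/cleanup after the service
        pass

class CredentialSnifferPlugin(AirScannerPlugin):
    """Toy plugin that records which events fired, in order."""
    def __init__(self):
        self.events = []
    def pre_scanning(self):
        self.events.append("setup")
    def mid_scanning(self, packet):
        self.events.append("packet")
    def post_scanning(self):
        self.events.append("teardown")

def run_scanner(plugins, packets):
    # The core module triggers every plugin at each matching event.
    for p in plugins:
        p.pre_scanning()
    for pkt in packets:
        for p in plugins:
            p.mid_scanning(pkt)
    for p in plugins:
        p.post_scanning()
```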

Figure 1 above is a summary of the framework’s architecture. Lines pointing away from the ConfigurationManager mean that the module reads information from it and lines pointing towards it mean that the module can write/edit configurations.

Examples of using the Evil-Twin Framework

There are a variety of ways ETF can conduct penetration testing on WiFi network security or work on end users’ awareness of WiFi security. The following examples describe some of the framework’s pen-testing functionalities, such as access point and client detection, WPA and WEP access point attacks, and evil twin access point creation.

These examples were devised using ETF with WiFi cards that allow WiFi traffic capture. They also utilize the following abbreviations for ETF setup commands:

  • APS: access point SSID
  • APB: access point BSSID
  • APC: access point channel
  • CM: client MAC address

In a real testing scenario, make sure to replace these abbreviations with the correct information.

Capturing a WPA 4-way handshake after a de-authentication attack

This scenario (Figure 2) takes two aspects into consideration: the de-authentication attack and the possibility of catching a 4-way WPA handshake. The scenario starts with a running WPA/WPA2-enabled access point with one connected client device (in this case, a smartphone). The goal is to de-authenticate the client with a general de-authentication attack then capture the WPA handshake once it tries to reconnect. The reconnection will be done manually immediately after being de-authenticated.


Figure 2: Scenario for capturing a WPA handshake after a de-authentication attack

The consideration in this example is the ETF’s reliability. The goal is to find out if the tools can consistently capture the WPA handshake. The scenario will be performed multiple times with each tool to check its reliability when capturing the WPA handshake.

There is more than one way to capture a WPA handshake using the ETF. One way is to use a combination of the AirScanner and AirInjector modules; another way is to just use the AirInjector. The following scenario uses a combination of both modules.

The ETF launches the AirScanner module and analyzes the IEEE 802.11 frames to find a WPA handshake. Then the AirInjector can launch a de-authentication attack to force a reconnection. The following steps must be done to accomplish this on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Start the AirScanner module with the CredentialSniffer plugin: start airscanner with credentialsniffer
  5. Add a target access point SSID (APS) from the sniffed access points list: add aps where ssid = <APS>
  6. Start the AirInjector, which by default launches the de-authentication attack: start airinjector

This simple set of commands enables the ETF to perform an efficient and successful de-authentication attack on every test run, capturing the WPA handshake each time. The following output shows a successful execution of the ETF:

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 02/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config airscanner
ETF[etf/aircommunicator/airscanner]::> listargs
sniffing_interface =               wlan1; (var)
probes =                True; (var)
beacons =                True; (var)
hop_channels =               false; (var)
fixed_sniffing_channel =                  11; (var)
ETF[etf/aircommunicator/airscanner]::> start airscanner with
arpreplayer        caffelatte         credentialsniffer  packetlogger       selfishwifi
ETF[etf/aircommunicator/airscanner]::> start airscanner with credentialsniffer
[+] Successfully added credentialsniffer plugin.
[+] Starting packet sniffer on interface 'wlan1'
[+] Set fixed channel to 11
ETF[etf/aircommunicator/airscanner]::> add aps where ssid = CrackWPA
ETF[etf/aircommunicator/airscanner]::> start airinjector
ETF[etf/aircommunicator/airscanner]::> [+] Starting deauthentication attack
– 1000 bursts of 1 packets
– 1 different packets
[+] Injection attacks finished executing.
[+] Starting post injection methods
[+] Post injection methods finished
[+] WPA Handshake found for client '70:3e:ac:bb:78:64' and network 'CrackWPA'
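For reference, the de-authentication attack above boils down to injecting forged IEEE 802.11 deauth management frames. The raw layout of such a frame can be sketched in pure Python (the MAC addresses are placeholders; this builds the frame only and does not inject it):

```python
import struct

def deauth_frame(client_mac: str, ap_mac: str, reason: int = 7) -> bytes:
    """Build a raw IEEE 802.11 de-authentication frame (no radiotap header).

    Layout: frame control (0xC0 = management frame, subtype deauth),
    duration, addr1 (receiver), addr2 (transmitter), addr3 (BSSID),
    sequence control, and a 2-byte little-endian reason code.
    """
    def mac_bytes(mac: str) -> bytes:
        # "70:3e:ac:bb:78:64" -> 6 raw bytes
        return bytes(int(octet, 16) for octet in mac.split(":"))

    frame_control = b"\xc0\x00"   # type: management, subtype: deauth
    duration = b"\x3a\x01"
    seq_ctrl = b"\x00\x00"
    return (frame_control + duration
            + mac_bytes(client_mac)       # addr1: client being kicked off
            + mac_bytes(ap_mac)           # addr2: spoofed AP source
            + mac_bytes(ap_mac)           # addr3: BSSID
            + seq_ctrl
            + struct.pack("<H", reason))  # 7 = class 3 frame from
                                          # non-associated station
```

A tool like the AirInjector would send bursts of such frames to both the client and the AP to force the reconnection that exposes the handshake.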

Launching an ARP replay attack and cracking a WEP network

The next scenario (Figure 3) focuses on the efficiency of the Address Resolution Protocol (ARP) replay attack and the speed of capturing the WEP data packets containing the initialization vectors (IVs). The same network may require a different number of captured IVs to be cracked, so the limit for this scenario is 50,000 IVs. If the network is cracked during the first test with fewer than 50,000 IVs, that number becomes the new limit for the following tests on the network. The cracking tool used is aircrack-ng.

The test scenario starts with an access point using WEP encryption and an offline client that knows the key—the key for testing purposes is 12345, but it can be a larger and more complex key. Once the client connects to the WEP access point, it will send out a gratuitous ARP packet; this is the packet that’s meant to be captured and replayed. The test ends once the limit of packets containing IVs is captured.
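The reason captured IVs matter: WEP seeds RC4 with the 24-bit IV prepended to the static key, so IV reuse leaks keystream. The toy example below implements RC4 in pure Python to show that two packets encrypted under the same IV and key leak the XOR of their plaintexts. This illustrates the underlying weakness only; it is not the full FMS/PTW statistical attack that aircrack-ng uses:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4: key scheduling (KSA), then keystream XOR (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x01\x02\x03"        # 24-bit WEP IV, transmitted in the clear
wep_key = b"12345"          # the static key from the scenario
p1 = b"gratuitous ARP #1"
p2 = b"gratuitous ARP #2"
c1 = rc4(iv + wep_key, p1)  # same IV + key => identical keystream
c2 = rc4(iv + wep_key, p2)

# XOR of the ciphertexts equals XOR of the plaintexts: the keystream
# cancels out, which is exactly what an attacker exploits.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```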


Figure 3: Scenario for launching an ARP replay attack and cracking a WEP network

ETF uses Python’s Scapy library for packet sniffing and injection. To minimize known performance problems in Scapy, ETF tweaks some of its low-level libraries to significantly speed up packet injection. For this specific scenario, the ETF uses tcpdump as a background process instead of Scapy for more efficient packet sniffing, while Scapy is used to identify the encrypted ARP packet.

This scenario requires the following commands and operations to be performed on the ETF:

  1. Enter the AirScanner configuration mode: config airscanner
  2. Configure the AirScanner to not hop channels: set hop_channels = false
  3. Set the channel to sniff the traffic on the access point channel (APC): set fixed_sniffing_channel = <APC>
  4. Enter the ARPReplayer plugin configuration mode: config arpreplayer
  5. Set the target access point BSSID (APB) of the WEP network: set target_ap_bssid <APB>
  6. Start the AirScanner module with the ARPReplayer plugin: start airscanner with arpreplayer

After executing these commands, ETF correctly identifies the encrypted ARP packet, then successfully performs an ARP replay attack, which cracks the network.

Launching a catch-all honeypot

The scenario in Figure 4 creates multiple access points with the same SSID. This technique is used to discover the encryption type of a network that a client has probed for but that is out of reach. By launching multiple access points covering all security settings, the client will automatically connect to the one that matches the security settings cached locally for the probed network.


Figure 4: Scenario for launching a catch-all honeypot

Using the ETF, it is possible to write the hostapd configuration file and then launch the program in the background. Hostapd supports launching multiple access points on the same wireless card by configuring virtual interfaces, and since it supports all types of security configurations, a complete catch-all honeypot can be set up. For the WEP and WPA(2)-PSK networks, a default password is used, and for WPA(2)-EAP, an “accept all” policy is configured.
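As a rough sketch, a catch-all hostapd configuration with one virtual BSS per security type might look like the fragment below. The option names follow hostapd’s documented configuration format, but the interface names, channel, SSID, and passwords are placeholders; consult the hostapd documentation for the options your version supports:

```ini
# Base interface: open network (no security options)
interface=wlan0
driver=nl80211
ssid=CatchMe
hw_mode=g
channel=6

# Virtual BSS: WEP with the default test key
bss=wlan0_0
ssid=CatchMe
wep_default_key=0
wep_key0="12345"

# Virtual BSS: WPA and WPA2-PSK (wpa=3 enables both) with a default passphrase
bss=wlan0_1
ssid=CatchMe
wpa=3
wpa_key_mgmt=WPA-PSK
wpa_passphrase=password123
```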

For this scenario, the following commands and operations must be performed on the ETF:

  1. Enter the APLauncher configuration mode: config aplauncher
  2. Set the desired access point SSID (APS): set ssid = <APS>
  3. Configure the APLauncher as a catch-all honeypot: set catch_all_honeypot = true
  4. Start the AirHost module: start airhost

With these commands, the ETF can launch a complete catch-all honeypot with all types of security configurations. The ETF also automatically launches the DHCP and DNS servers that allow clients to stay connected to the internet, making it a better, faster, and more complete way to create catch-all honeypots. The following output shows a successful execution of the ETF:

███████╗████████╗███████╗
██╔════╝╚══██╔══╝██╔════╝
█████╗     ██║   █████╗
██╔══╝     ██║   ██╔══╝
███████╗   ██║   ██║
╚══════╝   ╚═╝   ╚═╝

[+] Do you want to load an older session? [Y/n]: n
[+] Creating new temporary session on 03/08/2018
[+] Enter the desired session name:
ETF[etf/aircommunicator/]::> config aplauncher
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf ssid CatchMe
ssid = CatchMe
ETF[etf/aircommunicator/airhost/aplauncher]::> setconf catch_all_honeypot true
catch_all_honeypot = true
ETF[etf/aircommunicator/airhost/aplauncher]::> start airhost
[+] Killing already started processes and restarting network services
[+] Stopping dnsmasq and hostapd services
[+] Access Point stopped…
[+] Running airhost plugins pre_start
[+] Starting hostapd background process
[+] Starting dnsmasq service
[+] Running airhost plugins post_start
[+] Access Point launched successfully
[+] Starting dnsmasq service

Conclusions and future work

These scenarios use common and well-known attacks to help validate the ETF’s capabilities for testing WiFi networks and clients. The results also validate that the framework’s architecture enables new attack vectors and features to be developed on top of it while taking advantage of the platform’s existing capabilities. This should accelerate development of new WiFi penetration-testing tools, since a lot of the code is already written. Furthermore, the fact that complementary WiFi technologies are all integrated in a single tool will make WiFi pen-testing simpler and more efficient.

The ETF’s goal is not to replace existing tools but to complement them and offer a broader choice to security auditors when conducting WiFi pen-testing and improving user awareness.

The ETF is an open source project available on GitHub, and community contributions to its development are welcome. Following are some of the ways you can help.

One of the limitations of current WiFi pen-testing is the inability to log important events during tests. This makes reporting identified vulnerabilities both more difficult and less accurate. The framework could implement a logger that can be accessed by every class to create a pen-testing session report.

The ETF tool’s capabilities cover many aspects of WiFi pen-testing. On one hand, it facilitates the phases of WiFi reconnaissance, vulnerability discovery, and attack. On the other hand, it doesn’t offer a feature that facilitates the reporting phase. Adding the concept of a session and a session reporting feature, such as the logging of important events during a session, would greatly increase the value of the tool for real pen-testing scenarios.

Another valuable contribution would be extending the framework to facilitate WiFi fuzzing. The IEEE 802.11 protocol is very complex, and considering there are multiple implementations of it, both on the client and access point side, it’s safe to assume these implementations contain bugs and even security flaws. These bugs could be discovered by fuzzing IEEE 802.11 protocol frames. Since Scapy allows custom packet creation and injection, a fuzzer can be implemented through it.

Source

Top 10 Artificial Intelligence Technology Trends That Will Dominate In 2019


Artificial Intelligence (AI) has created machines that mimic human intelligence. The intention behind the creation and continued development of machine intelligence is to improve our daily lives and the way we interact with machines. Artificial intelligence is already making a difference in our homes and in our experiences as customers and service providers. Improvements in technology will drive the growth of artificial intelligence, and vice versa, beyond our wildest imagination. So what are the top 10 artificial intelligence technology trends that you should anticipate in 2019? Read on to find out!

1. Machine Learning Platforms

Machines can learn and adapt to what they have learned. Advancements in technology have improved the methods through which computers learn. Machine learning platforms access, classify, and predict data. These platforms are gaining ground by providing:

  • Data applications

  • Algorithms

  • Training tools

  • Application programming interface

  • Other machines

Provided automatically and autonomously, these components enable machines to perform their functions intelligently.

2. Chatbot


A chatbot is a program in an application or on a website that provides customer support twenty-four hours a day, seven days a week. Chatbots interact with users through text or audio, mostly through keywords and automated responses, and they often mimic human interactions. Over time, chatbots improve the user experience through machine learning platforms by identifying patterns and adapting to them. Different online service providers are already making use of this trend in artificial intelligence for their businesses. Users can:

  • Submit complaints or reviews,

  • Order food from restaurants,

  • Make hotel reservations,

  • Plan appointments.

3. Natural Language Generation

Natural language generation is an artificial intelligence technology that converts data into text. The text is relayed in a natural language such as English and can be presented as speech or writing. This conversion enables computers to communicate ideas with high accuracy. This form of artificial intelligence is used to generate incredibly detailed reports. Journalists, for example, have used natural language generation to produce detailed reports and articles on corporate earnings and natural disasters such as earthquakes. Chatbots and smart devices use and benefit from natural language generation.

4. Augmented Reality


If you have played Pokémon Go or used the Snapchat lens, then you have interacted with augmented reality. Augmented reality places computer-generated, virtual characters in the real world in real time usually through a camera lens. Whereas virtual reality completely shuts out the world, augmented reality blends its generated characters with the world.

This trend is making its way into different retail stores that make home furnishing and makeup selection more fun and interactive.

5. Virtual Agents

A virtual agent is a computer-generated intelligence that provides online customer assistance. Virtual agents are animated virtual characters that typically have human-like characteristics. They lead discussions with customers and provide adequate responses. Additionally, virtual agents can:

  • Provide product information,

  • Place an order,

  • Make a reservation,

  • Book an appointment.

They also improve their function through machine learning platforms for better service provision. Companies that provide virtual agents include Google, Microsoft, Amazon and Assist AI.

6. Speech Recognition

Speech recognition interprets words from spoken language and converts them into data the machine understands and can assess. It facilitates communication between humans and machines and is built into many upcoming smart devices, such as speakers, phones, and watches. Continued improvement of the algorithms that recognize and convert speech into machine data will solidify this trend in 2019.

7. Self-driving cars

These are cars that drive themselves independently. This is made possible by merging sensors and artificial intelligence. The sensors map out the immediate environment of the vehicle, and artificial intelligence interprets and responds to the information relayed by the sensors. This form of artificial intelligence is expected to lower collisions and place less of a burden on drivers. Companies such as Uber, Tesla, and General Motors are hard at work to make self-driving cars a commercial reality in 2019.

8. Smart devices

Smart devices are becoming increasingly popular. Technology that has been in use in recent years is being modified and released as smart devices. They include:

  • Smart thermostat

  • Smart speakers

  • Smart light bulbs

  • Smart security cameras

  • Smartphones

  • Smartwatches

  • Smart hubs

  • Smart keychains

Smart devices interact with users and other devices through different wireless connections, sensors and artificial intelligence. They pick up on the environment and respond to any changes based on their function and programming. Smart devices are likely to increase and improve in 2019.

9. Artificial intelligence permeation

Artificial intelligence-driven technology is on the rise and is penetrating all manner of industries. The continued development of machine learning platforms is making it easier and more convenient for businesses to utilize artificial intelligence. Some of the industries adopting this technology include the automotive, marketing, healthcare, and finance industries, among others.

10. Internet of Things (IoT)


Internet of Things is a phrase that describes objects or devices connected via the internet that collect and share information. Merging the Internet of Things with machine intelligence will improve the collection and sharing of data. The specific form of artificial intelligence being applied to the Internet of Things is machine learning platforms. Classifying and predicting data from the Internet of Things with intelligence will provide new findings and insights into connected devices.

Summary

It is not possible to specifically predict how these trends will develop or how they will disrupt the technology that is already in place. What is certain is that technology as we know it is changing thanks to the development and improvement of artificial intelligence. It is also certain 2019 will be a year of significant growth for artificial intelligence technology.

Watch out for these ten trends in 2019 and challenge yourself to interact with and learn about some, if not all of them.

Source

Akira: The Linux Design Tool We’ve Always Wanted?

Let me make it clear: I am not a professional designer – but I’ve used certain tools on Windows (like Photoshop and Illustrator) and Figma (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows.

Even on Linux, there is only a limited number of dedicated graphic design tools. A few of these tools, like GIMP and Inkscape, are used by professionals as well. But most of them are not considered professional grade, unfortunately.

Even though there are a couple more options, I’ve never come across a native Linux application that could replace Sketch, Figma, or Adobe XD. Any professional designer would agree with that, wouldn’t they?

Is Akira going to replace Sketch, Figma, and Adobe XD on Linux?

Well, in order to develop something that could replace those awesome proprietary tools, Alessandro Castellani came up with a Kickstarter campaign, teaming up with a couple of experienced developers: Alberto Fanjul, Bilal Elmoussaoui, and Felipe Escoto.

So, yes, Akira is still pretty much just an idea – with a working prototype of its interface (as I observed in their recent live stream session via Kickstarter).

If it does not exist, why the Kickstarter campaign?

The aim of the Kickstarter campaign is to gather funds so the developers can take a few months off and dedicate their time to making Akira possible.

Nonetheless, if you want to support the project, you should know some details, right?

Fret not, we asked a couple of questions in their livestream session – let’s get into it…

Akira: A few more details

Akira prototype interface

As the Kickstarter campaign describes:

The main purpose of Akira is to offer a fast and intuitive tool to create Web and Mobile interfaces, more like Sketch, Figma, or Adobe XD, with a completely native experience for Linux.

They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, Kickstarter is the way to go. But, before that, let’s take a look at what they had to say when I asked some questions about Akira.

Q: If you consider your project – similar to what Figma offers – why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools – offering a native Linux experience or is there something really interesting to encourage users to switch (except being an open source solution)?

Akira: A native experience on Linux is always better and faster in comparison to a web-based Electron app. Also, the hardware configuration matters if you choose to use Figma – but Akira will be light on system resources, and you will still be able to do similar work without needing to go online.

Q: Let’s assume that it becomes the open source solution that Linux users have been waiting for (with similar features offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans – or rely on donations?

Akira: The project will mostly rely on Donations (something like Krita Foundation could be an idea). But, there will be no “pro” pricing plans – it will be available for free and it will be an open source project.

So, with the response I got, it definitely seems to be something promising that we should probably support.

Wrapping Up

What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action?

Let us know your thoughts in the comments below.

Source

14 Best NodeJS Frameworks for Developers in 2019


Node.js is used to build fast, highly scalable network applications based on an event-driven, non-blocking input/output model and single-threaded asynchronous programming.

A web application framework is a combination of libraries, helpers, and tools that provide a way to effortlessly build and run web applications. A web framework lays out a foundation for building a web site/app.

The most important aspects of a web framework are – its architecture and features (such as support for customization, flexibility, extensibility, security, compatibility with other libraries, etc..).

In this article, we will share the 14 best Node.js frameworks for the developer. Note that this list is not organized in any particular order.

1. Express.JS

Express is a popular, fast, minimal and flexible Model-View-Controller (MVC) Node.js framework that offers a powerful collection of features for web and mobile application development. It is more or less the de facto API for writing web applications on top of Node.js.

It’s a set of routing libraries that provides a thin layer of fundamental web application features on top of the lovely existing Node.js features. It focuses on high performance and supports robust routing and HTTP helpers (redirection, caching, etc.). It comes with a view system supporting 14+ template engines, content negotiation, and an executable for generating applications quickly.

In addition, Express comes with a multitude of easy to use HTTP utility methods, functions and middleware, thus enabling developers to easily and quickly write robust APIs. Several popular Node.js frameworks are built on Express (you will discover some of them as you continue reading).

2. Socket.io

Socket.io is a fast and reliable full stack framework for building realtime applications. It is designed for real-time bidirectional event-based communication.

It comes with support for auto-reconnection, disconnection detection, binary data, multiplexing, and rooms. It has a simple and convenient API and works on every platform, browser, or device (focusing equally on reliability and speed).

3. Meteor.JS

Third on the list is Meteor.js, an ultra-simple full stack Node.js framework for building modern web and mobile applications. It is compatible with the web, iOS, Android, or desktop.

It integrates key collections of technologies for building connected-client reactive applications, a build tool, and a curated set of packages from the Node.js and general JavaScript community.

4. Koa.JS

Koa.js is a new web framework built by the developers behind Express and uses ES2017 async functions. It’s intended to be a smaller, more expressive, and more robust foundation for developing web applications and APIs. It employs promises and async functions to rid apps of callback hell and simplify error handling.

To understand the difference between Koa.js and Express.js, read this document: koa-vs-express.md.

5. Sails.js

Sails.js is a realtime MVC web development framework for Node.js built on Express. Its MVC architecture resembles that of frameworks such as Ruby on Rails. However, it’s different in that it supports the more modern, data-driven style of web app and API development.

It supports auto-generated REST APIs, easy WebSocket integration, and is compatible with any front-end: Angular, React, iOS, Android, Windows Phone, as well as custom hardware.

Its features support the requirements of modern apps. Sails is especially suitable for developing realtime features like chat.

6. MEAN.io

MEAN (in full: MongoDB, Express, Angular and Node) is a collection of open source technologies that, together, provide an end-to-end framework for building dynamic web applications from the ground up.

It aims to provide a simple and enjoyable starting point for writing cloud-native fullstack JavaScript applications, from top to bottom. It is another Node.js framework built on Express.

7. Nest.JS

Nest.js is a flexible, versatile and progressive Node.js REST API framework for building efficient, reliable and scalable server-side applications. It uses modern JavaScript and it’s built with TypeScript. It combines elements of OOP (Object Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming).

It’s an out-of-the-box application architecture packaged into a complete development kit for writing enterprise-level applications. Internally, it employs Express while providing compatibility with a wide range of other libraries.

8. Loopback.io

LoopBack is a highly-extensible Node.js framework that enables you to create dynamic end-to-end REST APIs with little or no coding. It is designed to enable developers to easily set up models and create REST APIs in a matter of minutes.

It supports easy authentication and authorization setup. It also comes with model relation support, various backend data stores, ad-hoc queries, and add-on components (such as third-party login and storage services).

9. Keystone.JS

KeystoneJS is an open source, lightweight, flexible and extensible Node.js full-stack framework built on Express and MongoDB. It is designed for building database-driven websites, applications and APIs.

It supports dynamic routes, form processing, database building blocks (IDs, Strings, Booleans, Dates and Numbers), and session management. It ships with a beautiful, customizable Admin UI for easily managing your data.

With Keystone, everything is simple; you choose and use the features that suit your needs, and replace the ones that don’t.

10. Feathers.JS

Feathers.js is a real-time, minimal and micro-service REST API framework for writing modern applications. It is an assortment of tools and an architecture designed for easily writing scalable REST APIs and real-time web applications from scratch. It is also built on Express.

It allows for quickly building application prototypes in minutes and production-ready real-time backends in days. It easily integrates with any client-side framework, whether it be Angular, React, or VueJS. Furthermore, it supports flexible optional plugins for implementing authentication and authorization permissions in your apps. Above all, Feathers enables you to write elegant, flexible code.

11. Hapi.JS

Hapi.js is a simple yet rich, stable and reliable framework for building applications and services. It is intended for writing reusable application logic as opposed to building infrastructure. It is configuration-centric and offers features such as input validation, caching, authentication, and other essential facilities.

12. Strapi.io

Strapi is a fast, robust and feature-rich MVC Node.js framework for developing efficient and secure APIs for websites, web apps or mobile applications. Strapi is secure by default, plugin-oriented (a set of default plugins is provided in every new project) and front-end agnostic.

It ships with an embedded, elegant, entirely customizable and fully extensible admin panel with headless CMS capabilities for keeping control of your data.

13. Restify.JS

Restify is a Node.js REST API framework which uses connect-style middleware. Under the hood, it borrows heavily from Express. It is optimized (especially for introspection and performance) for building semantically correct RESTful web services ready for production use at scale.

Importantly, restify powers a number of huge web services at companies such as Netflix.

14. Adonis.JS

Adonis.js is another popular Node.js web framework that is simple and stable, with an elegant syntax. It is an MVC framework that provides a stable ecosystem for writing scalable server-side web applications from scratch. Adonis.js is modular in design; it consists of multiple service providers, the building blocks of AdonisJs applications.

A consistent and expressive API allows for building full-stack web applications or micro API servers. It is designed to favor developer joy, and there is a well-documented blog engine for learning the basics of AdonisJs.

Other well-known Node.js frameworks include, but are not limited to, SocketCluster.io (full stack), Nodal (MVC), ThinkJS (MVC), SocketStreamJS (full stack), MEAN.JS (full stack), Total.js (MVC), DerbyJS (full stack), and Meatier (MVC).

That’s it! In this article, we’ve covered 14 of the best Node.js web frameworks for developers. For each framework covered, we mentioned its underlying architecture and highlighted a number of its key features.

Source

vkQuake2, the project adding Vulkan support to Quake 2, now supports Linux

At the start of this year, I gave a little mention to vkQuake2, a project which has updated the classic Quake 2 with various improvements including Vulkan support.

Other improvements in vkQuake2 include support for higher-resolution displays: it’s DPI-aware, the HUD scales with resolution, and so on.

Initially, the project didn’t support Linux, but that has now changed. Over the last few days they’ve committed a bunch of new code which fully enables 64-bit Linux support with Vulkan.

Screenshot of it running on Ubuntu 18.10.

It seems to work quite well in my testing, although it has a few rough edges. When I pressed ALT+TAB, it locked up both of my screens, forcing me to drop to a TTY and manually kill it with fire. So just be warned, that might happen to you.

To build it and try it out, you will need the Vulkan SDK installed along with various other dependencies you can find on the GitHub page.

For the full experience, you do need a copy of the data files from Quake 2 which you can find easily on GOG. Otherwise, you can test it using the demo content included in the releases on GitHub. Copy the demo content over from the baseq2 directory.

Source

Download Bitnami ProcessWire Module Linux 3.0.123-0

A free software project that allows you to deploy ProcessWire on top of a Bitnami LAMP Stack

Bitnami ProcessWire Module is a multi-platform and free software project that allows users to deploy the ProcessWire application on top of the Bitnami LAMP, MAMP and WAMP stacks, without having to deal with its runtime dependencies.

What is ProcessWire?

ProcessWire is a free, open source, web-based and platform-independent application that has been designed from the outset to act as a CMS (Content Management System). Highlights include a modular and flexible plugin architecture, support for thousands of pages, modern drag & drop image storage, as well as an intuitive and easy-to-use WYSIWYG editor.

Installing Bitnami ProcessWire Module

Bitnami’s stacks and modules are distributed as native installers built using BitRock’s cross-platform installer tool and designed to work flawlessly on all GNU/Linux distributions, as well as on the Mac OS X and Microsoft Windows operating systems.

To install the ProcessWire application on top of your Bitnami LAMP (Linux, Apache, MySQL and PHP) stack, you will have to download the package that corresponds to your computer’s hardware architecture, 32-bit or 64-bit (recommended), run it and follow the on-screen instructions.

Host ProcessWire in the cloud or virtualize it

Besides installing ProcessWire on top of your LAMP server, you can host it in the cloud, thanks to Bitnami’s pre-built cloud images for the Amazon EC2 and Windows Azure cloud hosting services. Virtualizing ProcessWire is also possible, as Bitnami offers a virtual appliance based on the latest LTS release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami ProcessWire Stack and Docker container

The Bitnami ProcessWire Stack product has been designed as an all-in-one solution that greatly simplifies the installation and hosting of the ProcessWire application, as well as of its runtime dependencies, on real hardware. While Bitnami ProcessWire Stack is available for download on Softpedia, you can check the project’s homepage for a Docker container.

Source

How to Install Microsoft PowerShell 6.1.1 on Ubuntu 18.04 LTS

What is PowerShell?

Microsoft PowerShell is a shell framework used to execute commands, but it is primarily designed for administrative tasks such as:

  • Automation of repetitive jobs
  • Configuration management

PowerShell is an open-source and cross-platform project; it can be installed on Windows, macOS, and Linux. It includes an interactive command-line shell and a scripting environment.

How has Ubuntu 18.04 made installation of PowerShell easier?

Ubuntu 18.04 has made installation of apps much easier via snap packages. For those who’re new to the phrase, a “snap package” is a self-contained application bundle that ships with its dependencies, and Microsoft has recently introduced one for PowerShell. This major advancement allows Linux users and admins to install and run the latest version of PowerShell in the few steps explained in this article.

Prerequisites to install PowerShell in Ubuntu 18.04

The following minimum requirements must be met before installing PowerShell 6.1.1 on Ubuntu 18.04:

  • 2 GHz dual-core processor or better
  • 2 GB system memory
  • 25 GB of free hard drive space
  • Internet access
  • Ubuntu 18.04 LTS (long term support)

Steps to Install PowerShell 6.1.1 via Snap in Ubuntu 18.04 LTS

There are two ways to install PowerShell in Ubuntu: via the terminal or via the Ubuntu Software application.

via Terminal

Step 1: Open A Terminal Console

The easiest way to open a terminal is to press the key combination Ctrl+Alt+T.

Open Ubuntu Console

Step 2: Snap Command to Install PowerShell

Enter the snap command “snap install powershell --classic” in the terminal console to initiate the installation of PowerShell in Ubuntu.
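Spelled out as a copy-pasteable command (note the two leading dashes before classic; the --classic flag gives the snap the unconfined system access a general-purpose shell needs):

```shell
sudo snap install powershell --classic
```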

The Authentication Required prompt on your screen is exclusively for security purposes. Before initiating any installation, Ubuntu 18.04 by default requires the account initiating the installation to authenticate.

To proceed, the user must enter the credentials of the account they’re currently logged in with.

Authenticate as admin

Step 3: Successful Installation of PowerShell

As soon as the system authenticates the user, installation of PowerShell will begin. (Usually, this installation takes 1-2 minutes.)

The user can continuously see the status of the installation in the terminal console.

At the end of the installation, the terminal reports that PowerShell 6.1.1 from ‘microsoft-powershell’ has been installed, as can be seen in the screenshot below.

Install PowerShell snap

Step 4: Launch PowerShell via Terminal

After successful installation, it’s time to launch PowerShell which is a one-step process.

Enter the command “powershell” in the terminal console and it will take you to the PowerShell terminal in an instant.

powershell

You should be at the PowerShell prompt by now, ready to experience the world of automation and scripting.

Microsoft PowerShell on Ubuntu

via Ubuntu Software

Step 1: Open Ubuntu Software

Ubuntu provides a desktop application called Ubuntu Software. It contains the list of all software and updates available.

  • Open the Ubuntu Software Manager from the Ubuntu desktop.

Step 2: Search for PowerShell in Ubuntu Software

  • Under the list of all software, search for “powershell” using the search bar.
  • The search results should include the “powershell” software, as marked in the screenshot below.
  • Click on the “powershell” software and proceed to Step 3.

Step 3: Installing PowerShell via Ubuntu Software

  • You should see the details of the “powershell” software and the Install button

(for reference, it’s marked in the image below)

  • Click on the Install button; it will initiate the installation.

(Installation via Ubuntu Software takes 1-2 minutes)

  • The user can see the installation status continuously on the screen and will be notified once the installation completes.

Install PowerShell

Installing PowerShell

Step 4: Launch PowerShell via Ubuntu Software

After the successful installation of PowerShell 6.1.1 via Ubuntu Software, the user can now launch the PowerShell terminal and explore the many features Microsoft PowerShell has to offer its Linux users.

  • Click on the “Launch” button (for reference, marked in the image below). It will take you to the PowerShell terminal.

Launch PowerShell

Test PowerShell Terminal via Commands

To test that PowerShell is working correctly, the user can enter a few commands, like:

“$PSVersionTable” to find the version of PowerShell installed (for reference, the result of this command is shown in the screenshot below)

PowerShell gives its user endless power over the system and its directories. After following the above-mentioned steps, you should now be all set to experience the exciting and productive world of automation and scripting through Microsoft PowerShell.

Source

Linux Today – Get started with Cypht, an open source email client

Integrate your email and news feeds into one view with Cypht, the fourth in our series on 19 open source tools that will make you more productive in 2019.


Cypht

We spend a lot of time dealing with email, and effectively managing your email can make a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht main screen

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

Cypht's 'Everything' screen

It uses a simplified version of HTML messages to display mail, or you can set it to view a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. You’ll get plain-text views with most rich-text mail, meaning lots of links and hard-to-read text. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Reading a message in Cypht

Users can start with a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will disappear after that session. If you log out and back in without saving, all your changes will be lost and you’ll end up with the settings you started with. This does make it easy to experiment: if you need to reset things, simply logging out without saving will bring back the previous setup when you log back in.

Settings screen with a dark theme

Installing Cypht locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.

Source

The new System Shock is looking quite impressive with the latest artwork

System Shock, the remake coming eventually from Nightdive Studios, continues along in development and it’s looking impressive.

In their latest Kickstarter update, they showed off what they say is the “final art” after they previously showed the game using “temporary art”. I have to admit, while this is only a small slice of what’s to come, from the footage it certainly seems like it will have a decent atmosphere to it.

Take a look:

I missed their last few updates, since this is one game I’m trying not to spoil too much for myself by seeing all the bits and pieces start to come together now.

They put out a few more updates since I last took a look, showing off more interesting parts of their final art like these:

I’m very interested in seeing the final game. Nightdive Studios have done some pretty good work reviving older games, and System Shock is clearly a labour of love for them. It’s using Unreal Engine, so I do hope they’re getting plenty of Linux testing done closer to release, since many developers have had issues with it.

There’s no current date for the final release; we will keep you posted.

Source
