14 Best NodeJS Frameworks for Developers in 2019

Node.js is used to build fast, highly scalable network applications based on an event-driven, non-blocking input/output model and single-threaded asynchronous programming.

A web application framework is a combination of libraries, helpers, and tools that provide a way to effortlessly build and run web applications. A web framework lays out a foundation for building a web site/app.

The most important aspects of a web framework are its architecture and features (such as support for customization, flexibility, extensibility, security, and compatibility with other libraries).

In this article, we will share the 14 best Node.js frameworks for developers. Note that this list is not organized in any particular order.

1. Express.JS

Express is a popular, fast, minimal and flexible Node.js web framework that offers a powerful collection of features for web and mobile application development. It is more or less the de facto API for writing web applications on top of Node.js.

It’s a set of routing libraries that provides a thin layer of fundamental web application features on top of the existing Node.js features. It focuses on high performance and supports robust routing and HTTP helpers (redirection, caching, etc.). It comes with a view system supporting 14+ template engines, content negotiation, and an executable for generating applications quickly.

In addition, Express comes with a multitude of easy-to-use HTTP utility methods, functions and middleware, enabling developers to write robust APIs easily and quickly. Several popular Node.js frameworks are built on Express (you will discover some of them as you continue reading).
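
To make the routing and middleware features concrete, here is a minimal sketch of an Express app. It assumes Express 4 with the @types/express typings and a TypeScript setup with esModuleInterop; the route and port are invented for the example.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();
app.use(express.json()); // built-in JSON body-parsing middleware (Express 4.16+)

// Routing with an HTTP helper: res.json() sets the content type and serializes for us
app.get("/users/:id", (req: Request, res: Response) => {
  res.json({ id: req.params.id, name: "example user" });
});

// Error-handling middleware is just another function with an extra `err` argument
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  res.status(500).json({ error: err.message });
});

app.listen(3000, () => console.log("Express listening on port 3000"));
```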

2. Socket.io

Socket.io is a fast and reliable full stack framework for building realtime applications. It is designed for real-time bidirectional event-based communication.

It comes with support for auto-reconnection, disconnection detection, binary data, multiplexing, and rooms. It has a simple and convenient API and works on every platform, browser, or device, focusing equally on reliability and speed.
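
As a rough illustration of the event-based API and rooms, here is a sketch of a standalone server. It assumes Socket.IO 3 or later; the event and room names are invented for the example.

```typescript
import { Server } from "socket.io";

// Start a standalone Socket.IO server on port 3000
const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  socket.join("lobby"); // rooms let you broadcast to a subset of connected clients

  socket.on("chat message", (text: string) => {
    // Relay the message to everyone currently in the room
    io.to("lobby").emit("chat message", text);
  });

  socket.on("disconnect", () => {
    console.log(`client ${socket.id} disconnected`);
  });
});
```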

3. Meteor.JS

Third on the list is Meteor.js, an ultra-simple full-stack Node.js framework for building modern web and mobile applications. It targets the web, iOS, Android, and desktop.

It integrates a key set of technologies for building connected-client reactive applications, a build tool, and a curated set of packages from the Node.js and general JavaScript community.
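
A tiny sketch of the connected-client style is shown below. It only makes sense inside a Meteor project (where the meteor/* modules resolve), and the collection, method, and publication names are invented for the example.

```typescript
import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

// A collection shared by client and server; the client keeps a reactive local cache
export const Tasks = new Mongo.Collection("tasks");

Meteor.methods({
  // Callable from the client as Meteor.call("tasks.insert", "buy milk")
  "tasks.insert"(text: string) {
    Tasks.insert({ text, createdAt: new Date() });
  },
});

if (Meteor.isServer) {
  // Publish the collection so subscribed clients receive live updates
  Meteor.publish("tasks.all", () => Tasks.find());
}
```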

4. Koa.JS

Koa.js is a new web framework built by the developers behind Express and uses ES2017 async functions. It’s intended to be a smaller, more expressive, and more robust foundation for developing web applications and APIs. It employs promises and async functions to rid apps of callback hell and simplify error handling.

To understand the difference between Koa.js and Express.js, read this document: koa-vs-express.md.
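
For a feel of the async-function style, here is a minimal sketch. It assumes Koa 2 with the community @koa/router package and a TypeScript setup with esModuleInterop; the route is invented for the example.

```typescript
import Koa from "koa";
import Router from "@koa/router";

const app = new Koa();
const router = new Router();

// One async middleware catches errors from everything downstream: no callback pyramids
app.use(async (ctx, next) => {
  try {
    await next();
  } catch (err) {
    ctx.status = 500;
    ctx.body = { error: (err as Error).message };
  }
});

router.get("/users/:id", async (ctx) => {
  ctx.body = { id: ctx.params.id, name: "example user" };
});

app.use(router.routes());
app.listen(3000);
```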

5. Sails.js

Sails.js is a realtime MVC web development framework for Node.js built on Express. Its MVC architecture resembles that of frameworks such as Ruby on Rails. However, it differs in that it supports the more modern, data-driven style of web app and API development.

It supports auto-generated REST APIs, easy WebSocket integration, and is compatible with any front-end: Angular, React, iOS, Android, Windows Phone, as well as custom hardware.

It has features that support the requirements of modern apps; Sails is especially suitable for developing realtime features like chat.

6. MEAN.io

MEAN (short for MongoDB, Express, Angular and Node) is a collection of open source technologies that together provide an end-to-end framework for building dynamic web applications from the ground up.

It aims to provide a simple and enjoyable starting point for writing cloud-native, full-stack JavaScript applications, from top to bottom. It is another Node.js framework built on Express.

7. Nest.JS

Nest.js is a flexible, versatile and progressive Node.js REST API framework for building efficient, reliable and scalable server-side applications. It uses modern JavaScript and it’s built with TypeScript. It combines elements of OOP (Object Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming).

It’s an out-of-the-box application architecture packaged into a complete development kit for writing enterprise-level applications. Internally, it employs Express while providing compatibility with a wide range of other libraries.
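
A minimal sketch of the decorator-based style follows. It assumes the @nestjs/common and @nestjs/core packages with TypeScript decorators enabled; the controller is invented for the example.

```typescript
import { Controller, Get, Module } from "@nestjs/common";
import { NestFactory } from "@nestjs/core";

@Controller("cats")
class CatsController {
  @Get()
  findAll(): string[] {
    return ["Tom", "Felix"]; // a hard-coded stand-in for a real service call
  }
}

@Module({ controllers: [CatsController] })
class AppModule {}

async function bootstrap() {
  // By default this creates an Express-based HTTP server under the hood
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();
```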

8. Loopback.io

LoopBack is a highly-extensible Node.js framework that enables you to create dynamic end-to-end REST APIs with little or no coding. It is designed to enable developers to easily set up models and create REST APIs in a matter of minutes.

It supports easy authentication and authorization setup. It also comes with model relation support, various backend data stores, ad-hoc queries, and add-on components (such as third-party login and storage services).

9. Keystone.JS

KeystoneJS is an open source, lightweight, flexible and extensible Node.js full-stack framework built on Express and MongoDB. It is designed for building database-driven websites, applications and APIs.

It supports dynamic routes, form processing, database building blocks (IDs, Strings, Booleans, Dates, and Numbers), and session management. It ships with a beautiful, customizable Admin UI for easily managing your data.

With Keystone, everything is simple; you choose and use the features that suit your needs, and replace the ones that don’t.

10. Feathers.JS

Feathers.js is a real-time, minimal and micro-service REST API framework for writing modern applications. It is an assortment of tools and an architecture designed for easily writing scalable REST APIs and real-time web applications from scratch. It is also built on Express.

It allows you to build application prototypes in minutes and production-ready real-time backends in days. It integrates easily with any client-side framework, whether Angular, React, or Vue.js. Furthermore, it supports flexible optional plugins for implementing authentication and authorization in your apps. Above all, Feathers enables you to write elegant, flexible code.
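
To show the service-oriented idea, here is a sketch that exposes an in-memory service over REST. It assumes the @feathersjs/feathers and @feathersjs/express packages and a TypeScript setup with esModuleInterop; the "messages" service is a toy example.

```typescript
import feathers from "@feathersjs/feathers";
import express from "@feathersjs/express";

const app = express(feathers());
app.use(express.json());
app.configure(express.rest()); // expose registered services as a REST API

// A Feathers "service" is just an object implementing standard methods (find, create, ...)
const messages: { id: number; text: string }[] = [];
app.use("/messages", {
  async find() {
    return messages;
  },
  async create(data: { text: string }) {
    const message = { id: messages.length + 1, text: data.text };
    messages.push(message);
    return message;
  },
});

app.listen(3030);
```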

11. Hapi.JS

Hapi.js is a simple yet rich, stable and reliable framework for building applications and services. It is intended for writing reusable application logic as opposed to building infrastructure. It is configuration-centric and offers features such as input validation, caching, authentication, and other essential facilities.
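
A sketch of the configuration-centric style is below. It assumes hapi 17 or later under the @hapi/hapi package name; the route is invented for the example.

```typescript
import * as Hapi from "@hapi/hapi";

async function start() {
  // Configuration-centric: the server and its routes are described as plain objects
  const server = Hapi.server({ port: 3000, host: "localhost" });

  server.route({
    method: "GET",
    path: "/users/{id}",
    handler: (request) => {
      // Returned values are serialized into the response automatically
      return { id: request.params.id, name: "example user" };
    },
  });

  await server.start();
  console.log(`Server running at ${server.info.uri}`);
}

start();
```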

12. Strapi.io

Strapi is a fast, robust and feature-rich MVC Node.js framework for developing efficient and secure APIs for websites, web apps, or mobile applications. Strapi is secure by default, plugin-oriented (a set of default plugins is provided in every new project), and front-end agnostic.

It ships with an elegant, entirely customizable and fully extensible admin panel with headless CMS capabilities for keeping control of your data.

13. Restify.JS

Restify is a Node.js REST API framework which utilizes connect-style middleware. Under the hood, it heavily borrows from Express. It is optimized (especially for introspection and performance) for building semantically correct RESTful web services ready for production use at scale.
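
Here is a sketch of the connect-style handler signature, assuming the restify package and its bundled plugins; the route is invented for the example.

```typescript
import * as restify from "restify";

const server = restify.createServer({ name: "demo-api" });

// Built-in plugins are plain middleware, wired up connect-style
server.use(restify.plugins.queryParser());
server.use(restify.plugins.bodyParser());

server.get("/users/:id", (req, res, next) => {
  res.send({ id: req.params.id, name: "example user" });
  return next(); // hand control back to restify's handler chain
});

server.listen(8080, () => {
  console.log("%s listening at %s", server.name, server.url);
});
```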

Importantly, Restify powers a number of huge web services run by companies such as Netflix.

14. Adonis.JS

AdonisJs is another popular Node.js web framework that is simple and stable, with an elegant syntax. It is an MVC framework that provides a solid ecosystem for writing stable and scalable server-side web applications from scratch. AdonisJs is modular in design; it consists of multiple service providers, the building blocks of AdonisJs applications.

A consistent and expressive API allows for building full-stack web applications or micro API servers. It is designed to favor developer joy, and there is a well-documented blog engine tutorial for learning the basics of AdonisJs.

Other well-known Node.js frameworks include, but are not limited to, SocketCluster.io (full stack), Nodal (MVC), ThinkJS (MVC), SocketStreamJS (full stack), MEAN.JS (full stack), Total.js (MVC), DerbyJS (full stack), and Meatier (MVC).

That’s it! In this article, we’ve covered the 14 best Node.js web frameworks for developers. For each framework covered, we mentioned its underlying architecture and highlighted a number of its key features.

Source

vkQuake2, the project adding Vulkan support to Quake 2, now supports Linux

At the start of this year, I gave a little mention to vkQuake2, a project which has updated the classic Quake 2 with various improvements including Vulkan support.

Other improvements in vkQuake2 include support for higher-resolution displays, DPI awareness, a HUD that scales with resolution, and so on.

Initially, the project didn’t support Linux, but that has now changed. Over the last few days, they’ve committed a bunch of new code which fully enables 64-bit Linux support with Vulkan.

Screenshot of it running on Ubuntu 18.10.

It seems to work quite well in my testing, although it has a few rough edges. During an ALT+TAB, it decided to lock up both of my screens, forcing me to drop to a TTY and manually kill it with fire. So just be warned: that might happen to you.

To build it and try it out, you will need the Vulkan SDK installed, along with various other dependencies you can find on the project’s GitHub page.

For the full experience, you do need a copy of the data files from Quake 2 which you can find easily on GOG. Otherwise, you can test it using the demo content included in the releases on GitHub. Copy the demo content over from the baseq2 directory.

Source

Download Bitnami ProcessWire Module Linux 3.0.123-0

Free software that allows you to deploy ProcessWire on top of a Bitnami LAMP Stack

Bitnami ProcessWire Module is a multi-platform and free software project that allows users to deploy the ProcessWire application on top of the Bitnami LAMP, MAMP and WAMP stacks, without having to deal with its runtime dependencies.

What is ProcessWire?

ProcessWire is a free, open source, web-based and platform-independent application that has been designed from the outset to act as a CMS (Content Management System). Highlights include a modular and flexible plugin architecture, support for thousands of pages, modern drag & drop image storage, as well as an intuitive and easy-to-use WYSIWYG editor.

Installing Bitnami ProcessWire Module

Bitnami’s stacks and modules are distributed as native installers built using BitRock’s cross-platform installer tool and designed to work flawlessly on all GNU/Linux distributions, as well as on the Mac OS X and Microsoft Windows operating systems.

To install the ProcessWire application on top of your Bitnami LAMP (Linux, Apache, MySQL and PHP) stack, you will have to download the package that corresponds to your computer’s hardware architecture, 32-bit or 64-bit (recommended), run it and follow the on-screen instructions.

Host ProcessWire in the cloud or virtualize it

Besides installing ProcessWire on top of your LAMP server, you can host it in the cloud, thanks to Bitnami’s pre-built cloud images for the Amazon EC2 and Windows Azure cloud hosting services. Virtualizing ProcessWire is also possible, as Bitnami offers a virtual appliance based on the latest LTS release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami ProcessWire Stack and Docker container

The Bitnami ProcessWire Stack product has been designed as an all-in-one solution that greatly simplifies the installation and hosting of the ProcessWire application, as well as of its runtime dependencies, on real hardware. While Bitnami ProcessWire Stack is available for download on Softpedia, you can check the project’s homepage for a Docker container.

Source

How to Install Microsoft PowerShell 6.1.1 on Ubuntu 18.04 LTS

What is PowerShell?

Microsoft PowerShell is a shell framework used to execute commands, but it is primarily designed for administrative tasks such as:

  • Automation of repetitive jobs
  • Configuration management

PowerShell is an open-source and cross-platform project; it can be installed on Windows, macOS, and Linux. It includes an interactive command-line shell and a scripting environment.

How has Ubuntu 18.04 made installing PowerShell easier?

Ubuntu 18.04 has made installing apps much easier via snap packages. For those new to the term, a snap is a self-contained application package, and Microsoft has recently introduced a snap package for PowerShell. This advancement allows Linux users and admins to install and run the latest version of PowerShell in the few steps explained in this article.

Prerequisites to install PowerShell in Ubuntu 18.04

The following minimum requirements must be met before installing PowerShell 6.1.1 on Ubuntu 18.04:

  • 2 GHz dual-core processor or better
  • 2 GB system memory
  • 25 GB of free hard drive space
  • Internet access
  • Ubuntu 18.04 LTS (long term support)

Steps to Install PowerShell 6.1.1 via Snap in Ubuntu 18.04 LTS

There are two ways to install PowerShell in Ubuntu: via the terminal or via the Ubuntu Software application.

via Terminal

Step 1: Open A Terminal Console

The easiest way to open a terminal is to press the key combination Ctrl+Alt+T.

Open Ubuntu Console

Step 2: Snap Command to Install PowerShell

Enter the snap command “snap install powershell --classic” in the terminal console to initiate the installation of PowerShell in Ubuntu.

The “Authentication Required” prompt on your screen is there for security purposes. By default, Ubuntu 18.04 requires the account initiating an installation to authenticate before it proceeds.

To proceed, the user must enter the credentials of the account they’re currently logged in with.

Authenticate as admin

Step 3: Successful Installation of PowerShell

As soon as the system authenticates the user, the installation of PowerShell will begin. (Usually, this installation takes 1-2 minutes.)

The user can follow the status of the installation in the terminal console.

At the end of the installation, a message confirming that PowerShell 6.1.1 from ‘microsoft-powershell’ has been installed is shown, as can be seen in the screenshot below.

Install PowerShell snap

Step 4: Launch PowerShell via Terminal

After successful installation, it’s time to launch PowerShell which is a one-step process.

Enter the command “powershell” in the terminal console and it will take you to the PowerShell prompt in an instant.

powershell

You should now be at the PowerShell prompt, ready to experience the world of automation and scripting.

Microsoft PowerShell on Ubuntu

via Ubuntu Software

Step 1: Open Ubuntu Software

Ubuntu provides its users with the Ubuntu Software desktop application, which lists all available software and updates.

  • Open the Ubuntu Software Manager from the Ubuntu desktop.

Step 2: Search for PowerShell in Ubuntu Software

  • Under the list of all software, search for “powershell” using the search bar.
  • The search results should include the “powershell” package, as marked in the screenshot below.
  • Click on the “powershell” entry and proceed to Step 3.

Step 3: Installing PowerShell via Ubuntu Software

  • The user should be able to see the details of the “powershell” software and the Install button

(for reference, it is marked in the image below)

  • Click the Install button to initiate the installation.

(Installation via Ubuntu Software takes 1-2 minutes.)

  • The user can follow the installation status on the screen and will be notified once the installation completes.

Install PowerShell

Installing PowerShell

Step 4: Launch PowerShell via Ubuntu Software

After the successful installation of PowerShell 6.1.1 via Ubuntu Software, the user can launch the PowerShell terminal and explore the many features Microsoft PowerShell has to offer its Linux users.

  • Click on the “Launch” button (for reference, marked in the image below). It will take you to the PowerShell terminal.

Launch PowerShell

Test PowerShell Terminal via Commands

To test that PowerShell is working correctly, the user can enter a few commands, for example:

“$PSVersionTable” to find the version of PowerShell installed (for reference, the result of this command is shown in the screenshot below)

PowerShell gives its users extensive control over the system and its directories. After following the steps in this article, you should be all set to experience the exciting and productive world of automation and scripting through Microsoft PowerShell.

Source

Get started with Cypht, an open source email client

Integrate your email and news feeds into one view with Cypht, the fourth in our series on 19 open source tools that will make you more productive in 2019.

Cypht

We spend a lot of time dealing with email, and effectively managing your email can make a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht main screen

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

Cypht's 'Everything' screen

It uses a simplified version of HTML messages to display mail, or you can set it to view a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. You’ll get plain-text views with most rich-text mail, which means lots of raw links and harder reading. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Reading a message in Cypht

Users can use a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will disappear after that session. If you log out and back in without saving, all your changes will be lost and you’ll end up with the settings you started with. This does make it easy to experiment, and if you need to reset things, simply logging out without saving will bring back the previous setup when you log back in.

Settings screen with a dark theme

Installing Cypht locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.

Source

The new System Shock is looking quite impressive with the latest artwork

System Shock, the remake coming eventually from Nightdive Studios, continues along in development, and it’s looking impressive.

In their latest Kickstarter update, they showed off what they say is the “final art” after they previously showed the game using “temporary art”. I have to admit, while this is only a small slice of what’s to come, from the footage it certainly seems like it will have a decent atmosphere to it.

Take a look:

I missed their last few updates, since this is one game I am trying not to spoil too much for myself by seeing all the bits and pieces start to come together now.

They put out a few more updates since I last took a look, showing off more interesting parts of their final art like these:

I’m very interested in seeing the final game. Nightdive Studios have done some pretty good work reviving older games, and System Shock is clearly a labour of love for them. It’s using Unreal Engine, so I do hope they’re getting plenty of Linux testing done closer to release, since many developers have had issues with it.

There’s no current date for the final release; we will keep you posted.

Source

Linus Torvalds Says Things Look Pretty Normal for Linux 5.0, Releases Second RC

Linux creator Linus Torvalds announced today the general availability for testing of the second RC (Release Candidate) of the upcoming major release of the Linux kernel, Linux 5.0.

According to Linus Torvalds, things are going in the right direction for the Linux kernel 5.0 series, which should launch sometime at the end of February or in early March 2019. The second Release Candidate adds several perf tooling improvements; updated networking, SCSI, GPU, and block drivers; updates to the x86, ARM, RISC-V, and C-SKY architectures; and fixes to the Btrfs and CIFS filesystems.

“So the merge window had somewhat unusual timing with the holidays, and I was afraid that would affect stragglers in rc2, but honestly, that doesn’t seem to have happened much. rc2 looks pretty normal. Were there some missing commits that missed the merge window? Yes. But no more than usual. Things look pretty normal,” said Linus Torvalds in a mailing list announcement.

Linux kernel 5.0 RC3 expected on January 17th

Of course, it’s a bit early to say that everything’s fairly normal for the Linux 5.0 kernel series as the development cycle was just kicked off a week ago, when Linus Torvalds announced the first Release Candidate, and it remains to be seen if it will be a normal cycle with seven RCs or a long one with eight RCs. Depending on that, Linux kernel 5.0 could arrive on February 24th or March 3rd.

Until then, we’re looking forward to the third Release Candidate of Linux kernel 5.0, which is expected to hit the streets at the end of the week on January 17th. Meanwhile, you can go ahead and give Linux 5.0 a try on your Linux-powered computer by downloading and compiling the second Release Candidate from kernel.org. Keep in mind though that this is a pre-release version, so don’t use it on production machines.

Source

Nginx vs Apache: Which Serves You Best in 2019?

For two decades Apache held sway over the web server market, but its share is shrinking by the day. Not only has Nginx caught up with the oldest kid on the block, but it is currently the toast of many high traffic websites. Apache users might disagree here. That is why one should not jump to conclusions about which web server is better. The truth is that both form the core of complete web stacks (LAMP and LEMP), and the final choice boils down to individual needs.

For instance, people running Drupal websites often call on Apache, whereas WordPress users seem to favor Nginx as much if not more. Accordingly, our goal is to help you understand your own requirements better rather than provide a one-size-fits-all recommendation. Having said that, the following comparison between the two gives an accurate picture.

1. Popularity

Up until 2012 more than 65% of websites were based on Apache, a popularity due in no small measure to its historical legacy: it was among the first pieces of software to pioneer the growth of the World Wide Web. However, times have changed. According to W3Techs.com, as of January 14, 2019, Apache (44.4%) is just slightly ahead of Nginx (40.9%) in terms of websites using their servers. Between them they dominate nearly 85% of the web server market.

Web Servers Market Share W3techs.com

When it comes to websites with high traffic, the following graph is interesting. Of course, Nginx is quite ahead of Apache but trails behind Google Servers, which power websites like YouTube, Gmail and Drive.

Web Servers Market @ W3Techs 15-Jan-2019

At some point a large number of websites (including this site) migrated from Apache to Nginx. Clearly, the latter is seen as the newer, trendier web server. High traffic websites that are on Apache, e.g. Wikipedia and the New York Times, often use a front-end HTTP proxy like Varnish.

Score: The popularity gap between Apache and Nginx is closing very fast. But, as Apache is still ahead in absolute numbers, we will consider this round a tie.

2. Speed

The main characteristic of a good web server is that it should run fast and easily respond to connections and traffic from anywhere. To measure the server speeds, we compared two popular travel websites based on Apache (Expedia.com) and Nginx (Booking.com). Using an online tool called Bitcatcha, the comparisons were made for multiple servers and measured against Google’s benchmark of 200 ms. Booking.com based on Nginx was rated “exceptionally quick.” In contrast, Expedia.com based on Apache was rated “above average and could be improved.”

Having used both travel websites so many times, I can personally vouch that Expedia feels slightly slower in returning results to my query than Booking does.

Web server response time Booking.com (Nginx) vs. Expedia.com (Apache)

Here are comparisons between the two servers for a few other websites. Nginx does feel faster in all cases below except one.

Website server speeds tested at Bitcatcha

Score: Nginx wins the speed round.

3. Security

Both Nginx and Apache take security very seriously. There is no dearth of robust measures to deal with DDoS attacks, malware and phishing. Both periodically release security reports and advisories, which ensures that security is strengthened at every level.

Score: We will consider this round a tie.

4. Concurrency

There is a perception that Apache somehow does not measure up to Nginx’s sheer scale and capability. After all, Nginx was originally designed to address concurrency and speed issues, serving dynamic content through FastCGI and SCGI handlers. However, from Apache 2.4 onwards (which is the default version), there has been a drastic improvement in the number of simultaneous connections. How far this improvement goes is worth finding out.

Based on stress tests at Loadimpact.com, we again compared Booking.com (Nginx) with Expedia.com (Apache). For 25 virtual users, the Nginx website was able to record 200 requests per second, 2.5 times Apache’s 80 requests per second. Clearly, if you have a dedicated high-traffic website, Nginx is a safer bet.

Scalability testing Apache versus Nginx at Loadimpact.com

Score: Nginx wins the concurrency round.

5. Flexibility

A web server should be flexible enough to allow customizations. Apache does this quite well using .htaccess files, which Nginx does not support. They allow decentralization of administrator duties: third-party and second-level admins can be given per-directory control while being kept away from the main server configuration. Moreover, Apache supports more than 60 modules, which makes it highly extensible. There is a reason Apache is more popular with shared hosting providers.

Flexible features of Apache: Modules plus htaccess example

Score: Apache wins this round.

Other Parameters

In the past Nginx did not support the Windows OS very well, unlike Apache. That is no longer the case. Also, Apache was once considered weak for load balancing and reverse proxying, which has changed now.

Final Result

Nginx narrowly wins this contest 2-1. Having said this, an objective comparison between Nginx and Apache on technical parameters does not give the complete picture. In the end, our verdict is that both web servers are useful in their own ways.

While Apache is best used behind a front-end server (Nginx itself is one option), Nginx could be even better with more customization options and flexibility.

Source

The Start of the RHCA Journey

I’m starting my RHCA (Red Hat Certified Architect) journey!

It took me some time to get my mind set on this, and it was important to understand the reasons I’m willing to do this in the first place.

Why Red Hat?

I do Linux system administration for a living. Although the world is moving towards DevOps, containers and automation, this doesn’t change the fact that Linux remains the go-to choice for the cloud, and regardless of the job title, one still does a lot of sysadmin work day in, day out.

I’ve been running Linux in production for the past 7 years (and even longer as my personal desktop OS), with the last 4 years spent exclusively on Red Hat-based operating systems. Over time, I transitioned from running servers on Debian to Ubuntu and then to Red Hat/CentOS. As much as I like Debian, Red Hat has become my distribution of choice. As a result, it just seemed natural to learn it in depth.

Why RHCA?

I’m a self-taught RHCE. I passed the exam a couple of years ago.

If you’re reading this, then you’re likely aware that Red Hat exams are hands-on. As a result, they hold real value. You get presented with complex problems, and more often than not you need to know where to find answers on a RHEL system.

This testing methodology is advantageous because it does not require you to simply memorise things, but to know where to find information. Of course, you need to memorise bits and pieces, but it’s muscle memory that’s the key to success.

Having said that, there are three things required to achieve RHCA: practice, practice, practice. You need to perform the tasks over and over to be an expert in using a product, be it Red Hat High Availability clustering, Satellite or OpenStack.

Why am I doing this? Motivation and Expectations

I’m a person who’s eager to learn new things. This includes looking for challenges that would help me grow both personally and professionally.

As I said some time ago, RHCA is not a sprint, it’s a marathon. It’s also a massive undertaking and should not be taken lightly. This alone makes me want to pursue it. To become better in what I do.

To quote Arnold Schwarzenegger:

“Never, ever think small. If you’re going to accomplish anything, you have to think big. No matter what you do, work, work, work!”

I’m doing this for myself. It’s a project that I feel is worth investing time and resources in. I’m not doing RHCA to get a new job or a raise. There are much easier and less time-consuming ways of achieving either of those things.

I expect this journey to be a lengthy process with lots of challenges that I’ll need to overcome, including exams, travel and life itself.

Chances are that things won’t always go my way even if I’m well prepared, therefore it’s important to be honest with myself and understand why I’m doing this in the first place.

Timescale

I don’t have a strict deadline, but my aim is to pass the exams by the end of the year. I started planning my RHCA studies back in 2018 so that I would have plenty of time in 2019.

The First Exam: EX436 High Availability Clustering

I use HA clustering at work, therefore the decision to take the EX436 was somewhat easy to make.

EX436 will be my first exam, and I’m already approaching the end of the study process. I use the official documentation available on Red Hat’s website, along with a lot of practice.

My home lab for HA clustering is quite simple: a laptop with a quad-core CPU, 16GB of RAM and a 128GB SSD, running a KVM hypervisor and four RHEL 7.1 virtual machines. One VM is used to provide storage services, and the other three VMs are for clustering. In terms of networking, I use five network interfaces (2x corosync redundant rings, 2x iSCSI multipath, 1x LAN). The corosync and iSCSI networks are non-routable.

Source

How to Resize OpenStack Instance (Virtual Machine) from Command line

For a cloud administrator, resizing or changing the resources of an instance or virtual machine is one of the most common tasks.

Source
