A closer look at Go and Node.js

There are so many software development languages available these days that it's hard for many developers to know which is best for a given project. What many developers don't consider is the long-term maintenance and support needs of their application, as well as its long-term scalability. If an application is poorly built from the start, using a language that will not remain backwards compatible, then supporting it and expanding its functionality over time may not be viable.

Another point often neglected is the original purpose of a language and what it was developed for. I come from an assembly language background, then started using C, then moved to C++, Java, PHP and the current crop of languages, so picking a language for a task is based on years of programming and of engineering software and hardware systems. The result has been an understanding of the effort required to engineer software and of the tools needed to build clean, maintainable, well-engineered solutions.

A closer look at Go and Node.js

The Go language (often referred to as Golang) is a systems-oriented language whose roots lie in C and C++. It was developed by Google to solve some really annoying issues that arise when you build software at Google scale; typical of those issues are building huge applications with many dependencies and supporting multi-CPU operation. It wasn't written to be a GUI or to execute JavaScript code (although JavaScript also inherits a lot from C and C++). At Google scale, concurrent execution, communications, garbage collection and flexible data types are important, so Google developed the language in-house and then made it public.

Node.js is not itself a language but a modern runtime built on top of Google's Chrome V8 engine (which is written in C++). Node.js is not a GUI application, and it's promoted as a suitable tool for writing back-end system services, like Go. Node.js is server-side JavaScript, and since its language base is native JavaScript it inherits most front-end JavaScript traits, so all the typical methods, functions and object constructs are present in Node.js.

One interesting issue in Node is its inability to directly export or import a module with dedicated language syntax; to achieve this, Node uses constructs from the CommonJS module system (require and module.exports), with RequireJS playing the same role in the browser. Go provides a project-like structure that supports multiple files and libraries, and there is a keyword, import, for importing packages.

Go is suitable for CPU-heavy applications with lots of IO, while Node is geared towards server IO use cases, such as the backend server of a client-server AJAX application. In fact, Node's native support for JSON makes building REST/JSON APIs easy, and it is something Node.js excels at.

Object Oriented?

Unlike C++ and Java, Go does not implement OO concepts in the same way: there is no class construct, but there is an interface construct, and methods can be defined to operate on a common type. Go implements simple data structures and methods that operate on them. Node implements what are best described as "pseudo" classes using prototypal inheritance. On the face of it, Node looks more like a typical object-oriented environment, and many libraries are built using this form of OO programming.

Concurrency and Threads

Node has a non-blocking IO model that does not implement threads at the language level, although threads are used under the hood. Non-blocking IO makes Node a great choice for wrapping other data sources, such as databases or web services, and exposing them via a JSON interface. It's heavily geared towards web-based IO, and its event model makes handling multiple connections possible: Node implements an event loop at the language level. In Node this works very well and overcomes many issues that threads introduce.

"Goroutines" are what Go implements to make concurrency easy to use. The idea is to multiplex independently executing functions, coroutines, onto a set of threads. When a goroutine blocks, such as by calling a blocking system call, the runtime automatically moves the other goroutines on the same operating system thread to a different, runnable thread so they won't be blocked. As a result, goroutines have little overhead beyond the memory for the stack, which is just a few kilobytes, and it is practical to create hundreds of thousands of goroutines in the same address space. If goroutines were just threads, system resources would run out at a much smaller number.

Node is single-threaded; it provides an event model using callbacks, but there are no threads in Node that your code runs in. Node expects calls to return quickly. Realistically, if you need some level of concurrent operation, run multiple Node applications and use IO to pass messages between the processes.


Node has an active community which has also produced a number of functionality enhancements, JXcore, RequireJS and CommonJS being the most common. There is also "Passport" for authentication, "Blanket" for unit testing, URIjs, pdfkit, aws-lib, LevelDB and ElasticSearch, to name just a few. All are geared towards using Node.js as a backend servicing web clients in one form or another, either directly or as a backend service.

Go also has tons of packages, but a glance through them appears to cement Go in the system-utility development space.


Performance is always a contentious issue. Rather than reinvent the wheel, I did some research on what others have done: this bubble sort article, Benchmarks: Node.js vs Go 1.1, and a further update, Node vs Go 2014 by Jonathan Warner, showed a simple, effective testing method that drew far fewer negative comments than other testing comparisons.

The bottom line was that Go outperformed Node. The conclusion to draw from this trial could be that Go is native executable code, built from the ground up to be fast and compact for huge-scale deployment, while Node runs on the Google Chrome V8 JavaScript engine, so there is an extra layer of code that needs to execute. Simple logic tells us that it will always be slower, but it's not significantly slower, so the JavaScript engine is well written and doing a great job. And don't forget, in a real application you might have a database layer and other external factors that will impede performance in some way regardless of the operating environment and language choice.

At a glance - Google Go

For those that haven’t done any Go yet, it’s “a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language”. It is backed by Google, and is used at Google, Heroku, and other places.

Pros:
  • Type-safety
  • Statically-linked compilation which makes deployment extremely easy
  • High concurrency
  • Large core library - components like tar, zip, Blowfish, GIF and PNG support, templating engines and a websocket library are all in core.
  • Primitives aimed at concurrency - channels reinforce that you should "share memory by communicating, instead of communicating by sharing memory". They make it very easy to implement concurrent, scalable solutions without race conditions.
  • Extremely fast - on par with C for some tasks.
  • Easy to keep up with the latest release - the gofix utility makes keeping up with new releases trivial, as it automatically fixes your code.
  • Deploys as a single executable binary.

Cons:
  • Type-safety - can be tedious, such as converting strings to byte buffers and back.
  • Third-party libraries are third-class citizens - With the focus on a large standard core, third-party libraries have been largely left behind.
  • Lack of community - possibly due to the community being largely composed of Google engineers.
  • Naive garbage collection - some users reported performance issues with applications pausing during garbage collection; this may have since been fixed.
  • Focus on doing things ‘the right way’ - This is probably shown most in the ‘gofmt’ utility which automatically formats your code so everyone’s code looks the same. Uniformity is highly prized in the Go community.

At a glance - Node.js

For those that haven't done any Node.js before, it is "a platform for easily building fast, scalable network applications". With event-driven, non-blocking IO, it encourages an asynchronous, callback-passing style of development. As a JavaScript platform it doesn't require you to learn any new language or syntax (assuming you already know JavaScript).

Pros:
  • Usage of JavaScript is already very high; the language is generally considered "not perfect, but good enough".
  • Large existing community and libraries to draw from. Though most browser libraries aren’t applicable, they do mean there is quite a bit of experience with the language.
  • Homogenous web stack - Less impedance mismatch when moving around the stack. When developing for the web you can use Javascript all the way from the browser to the database.
  • Third-party libraries are first-class citizens
    • The focus is on keeping the core library as small as possible (reverse of Go).
    • Excellent package manager (NPM) included from the beginning. The number of NPM libraries is growing astonishingly quickly.
    • Everything is a package. In Node the prevailing wisdom is to make everything an NPM package. Bouncy (an HTTP Proxy), Forever (a Monit-like process manager), and even NPM itself are all packages which can be required and used programmatically.
  • Socket.io and JSON - JSON is native to JavaScript, which makes data interchange trivial, and libraries like Socket.io build on this for real-time communication.
  • Excellent cluster management tools - dnode, hook.io, replicant, cluster to name a few.
  • Focus on isolation of processes and fault-handling - Node’s single-threaded design forces it to be scaled by using lots of isolated communicating processes. This has pros and cons, because while it is initially more difficult, it also helps avoid race conditions, and means that scaling across machines is not much harder.
  • Crash-only design - Node's poor error and exception handling results in "crash-only software" when serious errors occur. On the bright side, if you write each component of the system as isolated, separate processes, the catastrophic failure of any one component can't bring down the entire system.

Cons:
  • Single-threaded - since Node is single-threaded, you rely on forking and communicating between processes to make use of multiple cores. The upside is that Node sees less difference between multiple-core concurrency and multiple-machine concurrency.
  • Not 'proven' at scale - there are lots of blog posts on moving away from Node.js to Go due to IO-related performance issues, and few on moving the other way.
  • Callbacks - as JavaScript was initially developed for web pages, flow control was not designed around threads, so callbacks were implemented as the event model.

Along came Docker

Google Go has had a significant impact in the last two years with the development and release of Docker, an application container engine that is revolutionizing the application deployment space and cloud computing in general. In fact, cementing Go's position as a systems-level language, Docker's creator Solomon Hykes cited Go's standard library, concurrency primitives and ease of deployment as key factors, and said: "To put it simply, if Docker had not been written in Go, it would not have been as successful."

Production Ready?

Both languages appear production ready, with hundreds of documented live deployments and an ever-growing user base. From the usage models, most Node.js deployments appear to be web server backends, while Go appears to have been deployed in system-level applications, mainly by Google and now Docker.

Numerous companies have made the switch from Node to Go citing:

  1. Performance improvements as the #1 core reason.
  2. The callback model in Node is cited as “Hell” in large applications.
  3. Large CPU overheads in high-IO scenarios, especially with string output back to web clients.

Summary - since Go compiles down to executable code and Node runs inside the V8 JavaScript engine, some performance differences are bound to crop up.

Wrapping up - Some thoughts from various communities

I like this one – “In short, Go feels like it is what C would be like if it were developed today.”

and this rather good comparison -

"There are a few key differences. Node.js has a rabid and rapidly growing community, tons of third-party libraries, and an extremely pragmatic focus. Google Go has a large, complete set of core libraries, an easy path to high concurrency, and is blazingly fast. Go also has a focus on the 'correctness' of the solution (evidenced by its type-safety) and its community prizes uniformity, emphasising doing things the idiomatic way. The Node community tends to be more focused on functionality, and is more tolerant of doing things in different and unique ways (e.g. JavaScript vs CoffeeScript)."

As data center administrators look for technologies that simplify network functions while offering lower costs, greater scalability and improvements in network agility, two approaches are being embraced in the networking world: Software Defined Networking (SDN) and Network Functions Virtualization (NFV). While both offer new and different ways to design, implement and manage the network and its services, both have the capacity to significantly enhance network performance.

To address larger quantities of data being transmitted, stored and managed on high-volume servers, switches, storage technology and in cloud computing environments, SDN and NFV are increasingly attractive options for integrators and value-added resellers (VARs) who need to identify strategies that complement virtualization and network programmability.

There are several reasons for the growth of SDN and NFV. The drivers of these technologies include the growth of big data, mobile devices, and the expansion of distributed databases and servers located at different sites and connected over long distances through public and private clouds that require robust data management systems and access to bandwidth on demand.

According to a study from Infonetics Research the global carrier SDN and NFV hardware and software market will grow from less than $500 million in 2013 to over $11 billion in 2018.

As VARs embark on a plan to implement SDN and NFV, however, they should appreciate the differences between these two networking approaches and recognize the ways in which both can help network administrators elevate their management capabilities.

Both SDN and NFV rely on software that operates on commodity servers and switches, but both technologies operate at different levels of the network.

SDN is designed to offer users a way to manage network services through software that makes networks centrally programmable, which allows for faster configuration. Essentially, SDN makes the network programmable by separating the system that decides where traffic is sent (the control plane) from the underlying system that forwards packets of data to specific destinations (the data plane). As network administrators and VARs know, SDN is built on switches that can be programmed through an SDN controller using an industry-standard protocol such as OpenFlow.

By contrast, NFV separates network functions from routers, firewalls, load balancers and other dedicated hardware devices and allows network services to be hosted on virtual machines. Virtual machines run under a hypervisor, also called a virtual machine manager, which allows multiple operating systems to share a single hardware processor. When the hypervisor controls network functions, services that once required dedicated hardware can be performed on standard x86 servers.

As systems integrators and VARs work with network administrators to deploy these technologies, it's important to look at the differences between the two. Here are five key differences to keep in mind:

1. The Basic Idea

SDN separates control and data and centralizes control and programmability of the network.

NFV transfers network functions from dedicated appliances to generic servers.

2. Areas of Operation

SDN operates in a campus, data center and/or cloud environment.

NFV targets the service provider network.

3. Initial Application Target

SDN software targets cloud orchestration and networking.

NFV software targets routers, firewalls, gateways, WAN, CDN, accelerators and SLA assurance.

4. Protocols

SDN: OpenFlow

NFV: None

5. Supporting Organization

SDN: Open Networking Foundation (ONF)

NFV: ETSI NFV Working Group

Resellers and systems integrators working on projects that implement NFV in the network should consider that, because NFV can add capacity through software rather than through the purchase of more dedicated hardware devices to build network services, network administrators can deliver cost reductions in both capital and operating expenses to the data center.

Integrators should also consider that they add value when they help network managers configure, manage, secure and optimize network resources through SDN programs, which network managers can write themselves because these programs don't depend on proprietary software.

As VARs and integrators convey the benefits of SDN and NFV, they’ll find an abundance of ways to play an integral role in assisting network administrators, and the companies they work for, to save money while building a network that’s easier to manage, faster to configure and smarter at tackling the growing data challenges of our time.