NextCloudPi updated to NC14.0.4, brings HDD monitoring, OrangePi, VM and more – Own your bits

The latest release of NextCloudPi is out!

This release brings the latest major version of Nextcloud, as well as more platforms and tools for monitoring our hard drive's health. As usual, this release includes many small fixes and improvements, notably a new, faster version of btrfs-sync.

We are still looking for people to help us support more boards. If you own a BananaPi, OrangePi, Pine64 or any other not-yet-supported board, talk to us. We only need some of your time to perform a quick test on the new images every few months.

We are also in need of translators, more automated testing, and some web devs to take on the web interface and improve the user experience.

NextCloudPi improves every day thanks to your feedback. Please report any problems here. Also, you can discuss anything in the forums, or hang out with us in Telegram.

Last but not least, please download through BitTorrent and share it for a while to help keep hosting costs down.

Nextcloud 14.0.4

We have been upgrading to every minor release and now we release an image with version 14.0.4 so new users don’t need to upgrade from 14.0.1. This is basically a more polished Nextcloud version without any new features, as you can see in the changelog.

Remember that it is recommended to upgrade through nc-update-nextcloud instead of the native Nextcloud installer, and that you have the option to let NextCloudPi automatically upgrade by activating nc-autoupdate-nc.
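For those who prefer the command line, both options are reachable from the NextCloudPi configuration TUI. A minimal sketch, assuming the standard `ncp-config` tool (menu names may differ between NCP versions):

```shell
# Open the NextCloudPi configuration menu over SSH (runs as root)
sudo ncp-config
# then choose: CONFIG -> nc-autoupdate-nc -> yes
```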

Check and monitor your hard drive health

We already introduced SMART in a previous post, so it was a given that it would soon be included in NextCloudPi! We can check our drive's health with nc-hdd-test.

We can choose between long and short tests as explained in the previous post.
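Under the hood these checks rely on smartmontools, so the same tests can be run by hand. A rough sketch, assuming the drive is /dev/sda (adjust for your device):

```shell
# Quick overall health verdict (PASSED/FAILED)
sudo smartctl -H /dev/sda

# Kick off a short self-test in the background (use -t long for the long test)
sudo smartctl -t short /dev/sda

# Review attributes and self-test results once the test completes
sudo smartctl -a /dev/sda
```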

We can also monitor our drive's health and get notified via email, so that we can hopefully take action before the drive fails.

We will also receive a Nextcloud notification.

OrangePi images

We are now including Orange Pi images for the Zero Plus 2 version. This board features Gigabit networking, eMMC storage, and 2K graphics output, which makes it a popular choice for a NAS + media center combo.

NextCloudPi VM

The VM provides a convenient way of installing NCP on a virtual machine, instead of the classic way of using the curl installer.

See details in this previous post.
Source

Docker at DEVIntersection 2018 – Docker Blog

Docker will be at DEVIntersection 2018 in Las Vegas the first week of December. DEVIntersection, now in its fifth year, brings Microsoft leaders, engineers, and industry experts together to educate, network, and share their expertise with developers. This year DEVIntersection will have Developer, SQL Server, and AI/Azure tracks integrated into a single event. Docker will be featured at DEVIntersection in the following sessions:

Modernizing .NET Applications with Docker on Azure

Derrick Miller, a Docker Senior Solutions Engineer, will deliver a session focused on using containers as a modernization path for traditional applications, including how to select Windows Server 2008 applications for containerization, implementation tips, and common gotchas.

Depend on Docker – Get It Done with Docker on Azure

Alex Iankoulski, a Docker Captain, will highlight how Baker Hughes, a GE company, uses Docker to transform software development and delivery. Be inspired by the story of software professionals and scientists who were enabled by Docker to use a common language and work together to create a sophisticated platform for the Oil & Gas industry. Attendees will see practical examples of how Docker is deployed on Azure.

Docker for Web Developers

Dan Wahlin, a Microsoft MVP and Docker Captain, will focus on the fundamentals of Docker and update attendees about the tools that can be used to get a full dev environment up and running locally with minimal effort. Attendees will also learn how to create Docker images that can be moved between different environments.

You can learn when the sessions are being delivered here.

Can’t make it to the conference? Learn how Docker Enterprise is helping customers reduce their hardware and software licensing costs by up to 50% and migrate their legacy Windows applications here.

Don’t miss #CodeParty at DevIntersection and Microsoft Connect();

On Tuesday, Dec. 4, after DEVIntersection and starting at 5:30PM PST, Docker will join @Mobilize, @LEADTOOLS, @PreEmptive, @DocuSignAPI, @CData and @Twilio to kick off another hilarious and prize-filled stream of geek weirdness and trivia questions on the CodeParty Twitch channel. You won’t want to miss it, because the only way to get some high-quality swag is to answer the trivia questions on the Twitch chat stream. We’ll be giving away a couple of Surface Go laptops, gift certificates to Amazon, an Xbox and a bunch of other cool stuff. Don’t miss it!

Learn more about the partners participating together with Docker at #CodeParty:

Mobilize.net

Mobilize.Net’s AI-driven code migration tools reduce the cost and time to modernize valuable legacy client-server applications. Convert VB6 code to .NET or even a modern web application. Move PowerBuilder to Angular and ASP.NET Core or Java Spring. Automated migration tools cut time, cost, and risk from legacy modernization projects.

Progress

The creator of Telerik .NET and Kendo UI JavaScript user interface components/controls, reporting solutions, and productivity tools, Progress offers all the tools developers need to build high-performance modern web, mobile, and desktop apps with outstanding UI, including modern chatbot experiences.

LEADTOOLS

LEADTOOLS Imaging SDKs help programmers integrate A-Z imaging into their cross-platform applications with comprehensive toolkits offering powerful features including OCR, Barcode, Forms, PDF, Document Viewing, Image Processing, DICOM, and PACS for building an Enterprise Content Management (ECM) solution, zero-footprint medical viewer, or audio/video media streaming server.

PreEmptive Solutions

PreEmptive Solutions provides quick-to-implement application protection to hinder IP and data attacks and improve security-related compliance. PreEmptive’s application shielding and .NET, Xamarin, Java and Android obfuscator solutions help protect your assets now, whether for client, server, cloud or mobile apps.

DocuSign

Whether you are looking for a simple eSignature integration or building a complex workflow, the DocuSign APIs and tools have you covered. Our new C# SDK includes .NET Core 2.0 support, and a new Quick Start API code example for C#, complete with installation and demonstration video. Open source SDKs are also available for PHP, Java, Ruby, Python, and Node.js.

CData

CData Software is a leading provider of drivers and adapters for data integration, offering real-time SQL-92 connectivity to more than 100 SaaS, NoSQL, and Big Data sources through established standards like ODBC, JDBC, ADO.NET, and OData. By virtualizing data access, the CData drivers insulate developers from the complexities of data integration while enabling real-time data access from major BI, ETL and reporting tools.

Twilio

Twilio powers the future of business communications, enabling phones, VoIP, and messaging to be embedded into web, desktop, and mobile software. We take care of the messy telecom hardware and expose a globally available cloud API that developers can interact with to build intelligent and complex communications systems that scale with you.

Source

Managing containerized system services with Podman


In this article, I discuss containers, but look at them from another angle. We usually refer to containers as the best technology for developing new cloud-native applications and orchestrating them with something like Kubernetes. Looking back at the origins of containers, we’ve mostly forgotten that containers were born to simplify application distribution on standalone systems.

In this article, we’ll talk about the use of containers as the perfect medium for installing applications and services on a Red Hat Enterprise Linux (RHEL) system. Using containers doesn’t have to be complicated: I’ll show how to run MariaDB, Apache HTTPD, and WordPress in containers, while managing those containers like any other service, through systemd and systemctl.

Additionally, we’ll explore Podman, which Red Hat has developed jointly with the Fedora community. If you don’t know what Podman is yet, see my previous article, Intro to Podman (Red Hat Enterprise Linux 7.6) and Tom Sweeney’s Containers without daemons: Podman and Buildah available in RHEL 7.6 and RHEL 8 Beta.

Red Hat Container Catalog

First of all, let’s explore the containers that are available for Red Hat Enterprise Linux through the Red Hat Container Catalog (access.redhat.com/containers):

By clicking Explore The Catalog, we’ll have access to the full list of container categories and products available in the Red Hat Container Catalog.

Exploring the available containers

Clicking Red Hat Enterprise Linux will bring us to the RHEL section, displaying all the available container images for the system:

Available RHEL containers

At the time of writing, the RHEL category contained more than 70 container images, ready to be installed and used on RHEL 7 systems.

So let’s choose some container images and try them on a Red Hat Enterprise Linux 7.6 system. For demo purposes, we’ll try to use Apache HTTPD + PHP and the MariaDB database for a WordPress blog.

Install a containerized service

We’ll start by installing our first containerized service for setting up a MariaDB database that we’ll need for hosting the WordPress blog’s data.

As a prerequisite for installing containerized system services, we need to install the utility named Podman on our Red Hat Enterprise Linux 7 system:

[root@localhost ~]# subscription-manager repos --enable rhel-7-server-rpms --enable rhel-7-server-extras-rpms
[root@localhost ~]# yum install podman

As explained in my previous article, Podman complements Buildah and Skopeo by offering an experience similar to the Docker command line: allowing users to run standalone (non-orchestrated) containers. And Podman doesn’t require a daemon to run containers and pods, so we can easily say goodbye to big fat daemons.

By installing Podman, you’ll see that Docker is no longer a required dependency!

As suggested by the Red Hat Container Catalog’s MariaDB page, we can run the following commands to get things done (replacing, of course, docker with podman):

[root@localhost ~]# podman pull registry.access.redhat.com/rhscl/mariadb-102-rhel7
Trying to pull registry.access.redhat.com/rhscl/mariadb-102-rhel7…Getting image source signatures
Copying blob sha256:9a1bea865f798d0e4f2359bd39ec69110369e3a1131aba6eb3cbf48707fdf92d
72.21 MB / 72.21 MB [======================================================] 9s
Copying blob sha256:602125c154e3e132db63d8e6479c5c93a64cbfd3a5ced509de73891ff7102643
1.21 KB / 1.21 KB [========================================================] 0s
Copying blob sha256:587a812f9444e67d0ca2750117dbff4c97dd83a07e6c8c0eb33b3b0b7487773f
6.47 MB / 6.47 MB [========================================================] 0s
Copying blob sha256:5756ac03faa5b5fb0ba7cc917cdb2db739922710f885916d32b2964223ce8268
58.82 MB / 58.82 MB [======================================================] 7s
Copying config sha256:346b261383972de6563d4140fb11e81c767e74ac529f4d734b7b35149a83a081
6.77 KB / 6.77 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
346b261383972de6563d4140fb11e81c767e74ac529f4d734b7b35149a83a081

[root@localhost ~]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.access.redhat.com/rhscl/mariadb-102-rhel7 latest 346b26138397 2 weeks ago 449MB

After that, we can look at the Red Hat Container Catalog page for details on the needed variables for starting the MariaDB container image.

Inspecting the previous page, we can see that under Labels, there is a label named usage containing an example string for running this container image:

usage docker run -d -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 rhscl/mariadb-102-rhel7
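Following the article's own advice to swap docker for podman, the usage label translates directly. The user/pass/db values below are the label's placeholders, and depending on your registries configuration you may need the full registry.access.redhat.com/... image path:

```shell
# Detached one-off run, straight from the usage label (placeholder credentials)
podman run -d -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db \
  -p 3306:3306 registry.access.redhat.com/rhscl/mariadb-102-rhel7
```

We won't use this ad-hoc invocation here, though, since we want systemd to manage the container.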

After that, we need some other information about our container image: the "user ID running inside the container" and the "persistent volume location to attach":

[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7 | grep User
"User": "27",
[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7 | grep -A1 Volume
"Volumes": {
"/var/lib/mysql/data": {}
[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7 | grep -A1 ExposedPorts
"ExposedPorts": {
"3306/tcp": {}

At this point, we have to create the directories that will hold the container’s data; remember that containers are ephemeral by default. Then we also set the right permissions:

[root@localhost ~]# mkdir -p /opt/var/lib/mysql/data
[root@localhost ~]# chown 27:27 /opt/var/lib/mysql/data

Then we can set up our systemd unit file for handling the database. We’ll use a unit file similar to the one prepared in the previous article:

[root@localhost ~]# cat /etc/systemd/system/mariadb-service.service
[Unit]
Description=Custom MariaDB Podman Container
After=network.target

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "mariadb-service"

ExecStart=/usr/bin/podman run --name mariadb-service -v /opt/var/lib/mysql/data:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host registry.access.redhat.com/rhscl/mariadb-102-rhel7

ExecReload=-/usr/bin/podman stop "mariadb-service"
ExecReload=-/usr/bin/podman rm "mariadb-service"
ExecStop=-/usr/bin/podman stop "mariadb-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Let’s take apart our ExecStart command and analyze how it’s built:

  • /usr/bin/podman run --name mariadb-service says we want to run a container that will be named mariadb-service.
  • -v /opt/var/lib/mysql/data:/var/lib/mysql/data:Z says we want to map the just-created data directory to the one inside the container. The Z option tells Podman to set the correct SELinux context, avoiding permission issues.
  • -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress identifies the additional environment variables to use with our MariaDB container. We’re defining the username, the password, and the database name to use.
  • --net host maps the container’s network to the RHEL host.
  • registry.access.redhat.com/rhscl/mariadb-102-rhel7 specifies the container image to use.

We can now reload the systemd catalog and start the service:

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl start mariadb-service
[root@localhost ~]# systemctl status mariadb-service
mariadb-service.service – Custom MariaDB Podman Container
Loaded: loaded (/etc/systemd/system/mariadb-service.service; static; vendor preset: disabled)
Active: active (running) since Thu 2018-11-08 10:47:07 EST; 22s ago
Process: 16436 ExecStartPre=/usr/bin/podman rm mariadb-service (code=exited, status=0/SUCCESS)
Main PID: 16452 (podman)
CGroup: /system.slice/mariadb-service.service
└─16452 /usr/bin/podman run --name mariadb-service -v /opt/var/lib/mysql/data:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host regist…

Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140276291061504 [Note] InnoDB: Buffer pool(s) load completed at 181108 15:47:14
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Note] Plugin ‘FEEDBACK’ is disabled.
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Note] Server socket created on IP: ‘::’.
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Warning] ‘user’ entry ‘root@b75779533f08’ ignored in –skip-name-resolve mode.
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Warning] ‘user’ entry ‘@b75779533f08’ ignored in –skip-name-resolve mode.
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Warning] ‘proxies_priv’ entry ‘@% root@b75779533f08’ ignored in –skip-name-resolve mode.
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Note] Reading of all Master_info entries succeded
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Note] Added new Master_info ” to hash table
Nov 08 10:47:14 localhost.localdomain podman[16452]: 2018-11-08 15:47:14 140277156538560 [Note] /opt/rh/rh-mariadb102/root/usr/libexec/mysqld: ready for connections.
Nov 08 10:47:14 localhost.localdomain podman[16452]: Version: ‘10.2.8-MariaDB’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 MariaDB Server

Perfect! MariaDB is running, so we can now start working on the Apache HTTPD + PHP container for our WordPress service.
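Because the unit file includes an [Install] section with WantedBy=multi-user.target, we can also make the database come up at every boot:

```shell
# Enable the unit so systemd starts it at boot, then verify
systemctl enable mariadb-service
systemctl is-enabled mariadb-service
```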

First of all, let’s pull the right container from Red Hat Container Catalog:

[root@localhost ~]# podman pull registry.access.redhat.com/rhscl/php-71-rhel7
Trying to pull registry.access.redhat.com/rhscl/php-71-rhel7…Getting image source signatures
Skipping fetch of repeat blob sha256:9a1bea865f798d0e4f2359bd39ec69110369e3a1131aba6eb3cbf48707fdf92d
Skipping fetch of repeat blob sha256:602125c154e3e132db63d8e6479c5c93a64cbfd3a5ced509de73891ff7102643
Skipping fetch of repeat blob sha256:587a812f9444e67d0ca2750117dbff4c97dd83a07e6c8c0eb33b3b0b7487773f
Copying blob sha256:12829a4d5978f41e39c006c78f2ecfcd91011f55d7d8c9db223f9459db817e48
82.37 MB / 82.37 MB [=====================================================] 36s
Copying blob sha256:14726f0abe4534facebbfd6e3008e1405238e096b6f5ffd97b25f7574f472b0a
43.48 MB / 43.48 MB [======================================================] 5s
Copying config sha256:b3deb14c8f29008f6266a2754d04cea5892ccbe5ff77bdca07f285cd24e6e91b
9.11 KB / 9.11 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
b3deb14c8f29008f6266a2754d04cea5892ccbe5ff77bdca07f285cd24e6e91b

We can now look through this container image to get some details:

[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/php-71-rhel7 | grep User
"User": "1001",
"User": "1001"
[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/php-71-rhel7 | grep -A1 Volume
[root@localhost ~]# podman inspect registry.access.redhat.com/rhscl/php-71-rhel7 | grep -A1 ExposedPorts
"ExposedPorts": {
"8080/tcp": {},

As you can see from the previous commands, we got no volume in the container details. Wondering why? It’s because this container image, even though it’s part of RHSCL (Red Hat Software Collections), has been prepared to work with the Source-to-Image (S2I) builder. For more info on the S2I builder, take a look at its GitHub project page.

Unfortunately, at the moment, the S2I utility is strictly dependent on Docker, but for demo purposes we would like to avoid it!

So, moving back to our issue: how can we find the right directory to mount in our PHP container? We can easily find the right location by looking at all the environment variables of the container image, where we will find APP_DATA=/opt/app-root/src.
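To see that variable for ourselves, we can dump the environment baked into the image with podman inspect; a Go template keeps the output readable, and the grep is just for convenience:

```shell
# List the image's environment variables and pick out APP_DATA
podman inspect --format '{{range .Config.Env}}{{println .}}{{end}}' \
  registry.access.redhat.com/rhscl/php-71-rhel7 | grep APP_DATA
```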

So let’s create this directory with the right permissions; we’ll also download the latest package for our WordPress service:

[root@localhost ~]# mkdir -p /opt/app-root/src/
[root@localhost ~]# curl -o latest.tar.gz https://wordpress.org/latest.tar.gz
[root@localhost ~]# tar -vxf latest.tar.gz
[root@localhost ~]# mv wordpress/* /opt/app-root/src/
[root@localhost ~]# chown 1001 -R /opt/app-root/src

We’re now ready to create our Apache httpd + PHP systemd unit file:

[root@localhost ~]# cat /etc/systemd/system/httpdphp-service.service
[Unit]
Description=Custom httpd + php Podman Container
After=mariadb-service.service

[Service]
Type=simple
TimeoutStartSec=30s
ExecStartPre=-/usr/bin/podman rm "httpdphp-service"

ExecStart=/usr/bin/podman run --name httpdphp-service -p 8080:8080 -v /opt/app-root/src:/opt/app-root/src:Z registry.access.redhat.com/rhscl/php-71-rhel7 /bin/sh -c /usr/libexec/s2i/run

ExecReload=-/usr/bin/podman stop "httpdphp-service"
ExecReload=-/usr/bin/podman rm "httpdphp-service"
ExecStop=-/usr/bin/podman stop "httpdphp-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

We then need to reload the systemd unit files and start our latest service:

[root@localhost ~]# systemctl daemon-reload

[root@localhost ~]# systemctl start httpdphp-service

[root@localhost ~]# systemctl status httpdphp-service
httpdphp-service.service – Custom httpd + php Podman Container
Loaded: loaded (/etc/systemd/system/httpdphp-service.service; static; vendor preset: disabled)
Active: active (running) since Thu 2018-11-08 12:14:19 EST; 4s ago
Process: 18897 ExecStartPre=/usr/bin/podman rm httpdphp-service (code=exited, status=125)
Main PID: 18913 (podman)
CGroup: /system.slice/httpdphp-service.service
└─18913 /usr/bin/podman run --name httpdphp-service -p 8080:8080 -v /opt/app-root/src:/opt/app-root/src:Z registry.access.redhat.com/rhscl/php-71-rhel7 /bin/sh -c /usr/libexec/s2i/run

Nov 08 12:14:20 localhost.localdomain podman[18913]: => sourcing 50-mpm-tuning.conf …
Nov 08 12:14:20 localhost.localdomain podman[18913]: => sourcing 40-ssl-certs.sh …
Nov 08 12:14:20 localhost.localdomain podman[18913]: AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, using 10.88.0.12. Set the ‘ServerName’ directive globall… this message
Nov 08 12:14:20 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:20.925637 2018] [ssl:warn] [pid 1] AH01909: 10.88.0.12:8443:0 server certificate does NOT include an ID which matches the server name
Nov 08 12:14:20 localhost.localdomain podman[18913]: AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, using 10.88.0.12. Set the ‘ServerName’ directive globall… this message
Nov 08 12:14:21 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:21.017164 2018] [ssl:warn] [pid 1] AH01909: 10.88.0.12:8443:0 server certificate does NOT include an ID which matches the server name
Nov 08 12:14:21 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:21.017380 2018] [http2:warn] [pid 1] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are …
Nov 08 12:14:21 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:21.018506 2018] [lbmethod_heartbeat:notice] [pid 1] AH02282: No slotmem from mod_heartmonitor
Nov 08 12:14:21 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:21.101823 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.27 (Red Hat) OpenSSL/1.0.1e-fips configured — resuming normal operations
Nov 08 12:14:21 localhost.localdomain podman[18913]: [Thu Nov 08 17:14:21.101849 2018] [core:notice] [pid 1] AH00094: Command line: ‘httpd -D FOREGROUND’
Hint: Some lines were ellipsized, use -l to show in full.

Let’s open port 8080 on our system’s firewall so we can connect to our brand new WordPress service:

[root@localhost ~]# firewall-cmd --permanent --add-port=8080/tcp
[root@localhost ~]# firewall-cmd --add-port=8080/tcp
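Before opening a browser, a quick local check confirms the containerized httpd is answering on port 8080 (expect the WordPress installer page or a redirect):

```shell
# Show only the response headers from the PHP/httpd container
curl -I http://localhost:8080/
```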

We can surf to our Apache web server:

Apache web server

Start the installation process, and define all the needed details:

Start the installation process

And finally, run the installation!

Run the installation

In the end, we reach our brand new blog, running on Apache httpd + PHP backed by a great MariaDB database!

That’s all folks; may containers be with you!

Source

First Impressions: goto; Copenhagen

It’s November and that means conference season – people from all around the world are travelling to speak at, attend or organise tech conferences. This week I’ve been at my first goto; event in Copenhagen held at the Bella Sky Center in Denmark. I’ll write a bit about my experiences over the last few days.

We’re wondering if #gotoselfie will catch on?? Here with @ah3rz after doing a short interview to camera pic.twitter.com/w7ioMDL7DL

— Alex Ellis (@gotocph) (@alexellisuk) November 23, 2018

My connection to goto; was through my friend Adam Herzog who works for Trifork – the organisers of the goto events. I’ve known Adam since he was working at Docker in the community outreach and marketing team. One of the things I really like about his style is his live-tweeting from sessions. I’ve learnt a lot from him over the past few years so this post is going to feature Tweets and photos from the event to give you a first-person view of my week away.

First impressions CPH

Copenhagen has a great conference center and hotel connected by a sky-bridge, called Bella Sky. Since I live in the UK I flew in from London, and the first thing I noticed was just how big the airport is! It feels like a roughly 2 km walk from the Ryanair terminal to baggage collection. Since I was last here, they’ve added a Pret A Manger cafe like the ones we’re used to seeing across the UK.
There’s a shuttle bus from Terminal 2 straight to the Bella Sky hotel. I was the only person on the bus, and it was already almost dark at just 3pm.

On arrival, the staff at the hotel were very welcoming and professional. The rooms are modern and clean with good views and facilities. I have stayed both at the Bella and in the city before. I liked the city for exploring during the evenings and free time, but being close to the conference is great for convenience.

The conference days

This goto; event was three days long with two additional workshop days, so for some people it really is an action-packed week. The keynotes kick off at 9am and are followed by talks throughout the day. The keynote content was compelling, but wasn’t always focused on software development. For instance, the opening session was The future of high-speed transportation by rocket scientist Anita Sengupta.

Unlike most conferences I’ve attended there were morning, afternoon and evening keynotes. This does make for quite long days, but also means the attendees are together most of the day rather than having to make their own plans.

One of my favourite keynote sessions was On the Road to Artificial General Intelligence by Danny Lange from Unity.

First we found out what AI was not:

‘These things are not AI’ @GOTOcph pic.twitter.com/7PJHH8qM5S

— Alex Ellis (@gotocph) (@alexellisuk) November 20, 2018

Then we saw AI in action, trained on GCP with TensorFlow to give a personality to the wireframe of a pet dog. That was then replicated onto a race course with a behaviour that made the dog chase after a bone.

Fascinating – model of a dog trained by @unity3d to fetch bones. “all we used was TensorFlow and GCP, no developers programmed this” @GOTOcph pic.twitter.com/lOoiHsgCCx

— Alex Ellis (@gotocph) (@alexellisuk) November 20, 2018

My talk

On the first day I also gave my talk on Serverless Beyond the Hype.

There was an artist doing a live-sketch of my talk. I’ve seen this done a few times at meet-ups and I always find it fascinating to see how they capture the talk so well in pictures.

Awesome diagramming art by @MindsEyeCCF based the on @alexellisuk’s #GOTOcph talk on Serverless with @openfaas! pic.twitter.com/iXZawibeiQ

— Kenny Bastani (@kennybastani) November 19, 2018

My talk started off looking at Gartner’s Hype Cycle, explored ThoughtWorks’ opinions on multi-cloud and lock-in, and covered RedMonk’s advice to adopt Kubernetes. After that, I looked at the leading projects that enable Serverless with Kubernetes, then gave some live demos and described case studies of how various companies are leveraging OpenFaaS.

#serverless is going to get a bit worse before it gets better…@openfaas creator @alexellisuk sharing #gartner hype cycle predicting reaching plateau of productivity in 2-5 years and clickbait article on fear of lock-in from @TheRegister at #GOTOcph pic.twitter.com/gZmP7KsisP

— adam herzog (@ah3rz) November 19, 2018

Vision Bank is one of our production users benefiting from the automation, monitoring, self-healing, and scaling infrastructure offered by containers.

cool #fintech #serverless case study using @openfaas in production @VisionBanco looking to skip #microservices and go from monolith to functions

#openfaas founder @alexellisuk at #GOTOcph pic.twitter.com/jALzmve5PH

— adam herzog (@ah3rz) November 19, 2018

And of course – no talk of mine is complete without live-demos:

Live coding session #GOTOcph @alexellisuk #openfaas pic.twitter.com/BOTGYkk4TD

— Nicolaj Lock (@mr_nlock) November 19, 2018

In my final demo the audience donated my personal money to a local children’s charity in Copenhagen using the Monzo bank API and OpenFaaS Cloud functions.

Serverless beyond the hype by @alexellisuk. Donating to @Bornecancerfond in the live demo 💰💸 #serverless pic.twitter.com/n1rzcqRByd

— Martin Jensen (@mrjensens) November 19, 2018

Feedback-loop

Later in the day Adam mentioned that my talk was well rated and that the recording would be made available in the goto play app. That means you can check it out any time.

Throughout the week I heard a lot about ratings and voting for sessions. The audience are able to give anonymous feedback to the speakers, and the average rating is taken seriously by the organisers. I’ve not seen such an emphasis put on attendee feedback before, and to start with it may seem off-putting, but I think getting feedback in this way can help speakers know their audience better. The audience seemed to be made up largely of enterprise developers, many with a background in Java development; a talk that would get a 5/5 rating at KubeCon may get a completely different rating here and vice versa.

One of the tips I heard from the organisers was that speakers should clearly “set expectations” about their session in the first few minutes and in the abstract so that the audience are more likely to rate the session based upon the content delivered vs. the content they would have liked to have seen instead.

Hearing from RedMonk

I really enjoyed the talk by James Governor from RedMonk, in which James walked us through what he sees as trends in the industry relating to cloud, serverless, and engineering practices. I set about live-tweeting the talk and you can find the start of the thread here:

James takes the stage @monkchips at @GOTOcph pic.twitter.com/qcQz0yUVUU

— Alex Ellis (@gotocph) (@alexellisuk) November 21, 2018

One of the salient points for me was where James suggested that C-level executives at tech companies have a harder time finding talent than capital. He then went on to talk about how developers are now the new “King Makers” for software. I’d recommend finding the recording when it becomes available on YouTube.

Hallway track

The hallway track basically means talking to people: ad-hoc meetings and the conversations you get to have because you’re physically at the event with like-minded people.

I met Kenny Bastani, a Field CTO at Pivotal, for the first time, and he asked me for a demo of OpenFaaS. Here it is, the Function Store that helps developers collaborate and share their functions with one another (in 42 seconds):

In 42 seconds @alexellisuk demos the most powerful feature of FaaS. The function store. This is what the future and the now looks like. An open source ecosystem of functions. pic.twitter.com/ix3ER4b7Jn

— Kenny Bastani (@kennybastani) 20 November 2018

Letting your hair down

My experience this week compared to some other large conferences showed that the Trifork team really know how to do things well. There were dozens of crew ready to help out, clear away and herd the 1600 attendees around to where they needed to be. This conference felt calm and relaxed despite being packed with action and some very long days going on into the late evening.

Party time

We attended an all-attendee party on site featuring a “techno-rave” with DJ Sam Aaron from the Sonic Pi project. Sonic Pi generates music from live-written code and is well known in the Raspberry Pi and maker community.

At the back of the room there was the chance to don a VR headset and enter another world – walking the plank off a sky-scraper or experiencing an under-water dive in a shark-cage.

VR and techno at the party @GOTOcph pic.twitter.com/3wfxS4vSeZ

— Alex Ellis (@gotocph) (@alexellisuk) November 20, 2018

Speakers’ dinner

I felt that the speakers were well looked after and the organisers helped with any technical issues that may have come up. The dinner organised for the Wednesday night was in an old theatre with Danish Christmas games and professional singers serenading us between courses. This was a good time to get to know other speakers really well and to have some fun.

Thank you @GOTOcph for the speakers’ dinner tonight. Very entertaining and great company! pic.twitter.com/LUfqf6zJRF

— Alex Ellis (@gotocph) (@alexellisuk) November 21, 2018

Workshop – Serverless OpenFaaS with Python

On Thursday after the three days of the conference talks we held a workshop called Serverless OpenFaaS with Python. My colleague Ivana Yocheva joined me from Sofia to help facilitate a workshop to a packed room of developers from varying backgrounds.

We had an awesome workshop yesterday at #GOTOcph with a packed room of developers learning how to build portable Serverless with Python and @openfaas #FaaSFriday pic.twitter.com/dhP9rN5wLa

— OpenFaaS (@openfaas) November 23, 2018

Feedback was very positive and I tried to make the day more engaging by introducing demos after we came back from lunch and the coffee breaks. We even introduced a little bit of competition to give away some t-shirts and beanies which went down well in the group.

Wrapping up

As I wrap up my post I want to say that I really enjoyed the experience and would highly recommend a visit to one of the goto conferences.

Despite only knowing around half a dozen people when I arrived, I made lots of new friends and contacts and am looking forward to keeping in touch and being part of the wider community. I’ll leave you with this really cute photo from Kasper Nissen the local CNCF Ambassador and community leader.

Thank you for the beanie, @alexellisuk! Definitely going to try out @openfaas in the coming weeks 🤓 pic.twitter.com/gSX63s9E6y

— 𝙺𝚊𝚜𝚙𝚎𝚛 𝙽𝚒𝚜𝚜𝚎𝚗 (@phennex) November 22, 2018

My next speaking session is at KubeCon North America in December speaking on Digital Transformation of Vision Banco Paraguay with Serverless Functions with Patricio Diaz.

Let’s meet up there for a coffee? Follow me on Twitter @alexellisuk

Get involved

Want to get involved in OpenFaaS or to contribute to Open Source?

Source

DockerCon Hallway Track Is Back – Schedule One Today

The Hallway Track is coming back to DockerCon Europe in Barcelona. DockerCon Hallway Track is an innovative platform that helps you find like-minded people to meet one-on-one and schedule knowledge sharing conversations based on shared topics of interest. We’ve partnered with e180 to provide the next level of conference attendee networking. Together, we believe that some of the most valuable conversations can come from hallway encounters, and that we can unlock greatness by learning from each other. After the success at past DockerCons, we’re happy to grow this idea further in Barcelona.

DockerCon is all about learning new things and connecting with the community. The Hallway Track will help you meet and share knowledge with Docker Staff, other attendees, Speakers, and Docker Captains through structured networking.

Source

Docker and MuleSoft Partner to Accelerate Innovation for Enterprises

A convergence of forces in SaaS, IoT, cloud, and mobile has placed unprecedented requirements on businesses to accelerate innovation to meet rapidly changing customer preferences. The big don’t eat the small; the fast eat the slow.

The industry has offered several solutions to this acceleration problem – from working harder to outsourcing and DevOps – but none of them has really delivered the level of acceleration needed. The reason: there is too much friction slowing the art of the possible.

Docker and MuleSoft remove friction from the innovation process, from ideation all the way to deployment. MuleSoft provides a top-down architectural approach, with API-first design and implementation. The Docker approach is bottom-up, from the perspective of the application workload, using containerization both to modernize traditional applications and to create new applications.

Marrying those two approaches, combined with the platform, tools and methodology, enables both organizations to help your business accelerate faster than ever before. Docker and MuleSoft bridge the chasm between infrastructure and services in a way never before achieved in the industry.

Together, Docker and MuleSoft accelerate legacy application modernization and new application delivery while reducing IT complexity and costs.

  • Modernize traditional applications quickly without code changes with the Docker Enterprise container platform methodology and tooling to containerize legacy applications. Then, you can instantly extend the business logic and data to new applications by leveraging MuleSoft API gateway and Anypoint Platform.
  • Accelerate time to market of new applications by enhancing developer productivity and collaboration and enabling greater reuse of application services. Anypoint Studio lets you define API contracts and decouple consumers and producers of microservices, so line-of-business developers who consume the API can start creating new experiences, such as a mobile application, right away with the Anypoint mock service. Docker Desktop is used today by millions of developers to develop microservices using any language and any framework. Microservice developers can leverage Docker Desktop to implement APIs using the best tool for the job and focus on implementing the business logic against a clear specification defined in Anypoint Platform, which provides one more layer of security, observability and manageability at the API level.
  • Improve overall application security, manageability and observability by using Docker Enterprise to manage container workloads and MuleSoft Anypoint Platform to run and manage the application network.

Only Docker and MuleSoft can bring you the complete solution, tools, methodology and know-how, to execute a multi-pronged approach to transforming your business today. And we’re going to work together to make the experience even more pleasurable. There is a saying in IT that between speed, cost, and quality you have to pick two. With Docker and MuleSoft together, you can have all three.

Source

Introducing Docker Engine 18.09 – Docker Blog

Last week, we launched Docker Enterprise 2.1 – advancing our leadership in the enterprise container platform market. That platform is built on Docker Engine 18.09, which was also released last week for both Community and Enterprise users. Docker Engine 18.09 represents a significant advancement of the world’s leading container engine, introducing new architectures and features that improve container performance and accelerate adoption for every type of Docker user – whether you’re a developer, an IT admin, working at a startup or at a large, established company.

Built on containerd

Docker Engine – Community and Docker Engine – Enterprise both ship with containerd 1.2. Donated and maintained by Docker and under the auspices of the Cloud Native Computing Foundation (CNCF), containerd is being adopted as the primary container runtime across multiple platforms and clouds, while progressing towards Graduation in CNCF.

BuildKit Improvements

Docker Engine 18.09 also includes the option to leverage BuildKit, a new build architecture that improves performance, storage management, and extensibility while also adding some great new features:

  • Performance improvements: BuildKit includes a re-designed concurrency and caching model that makes it much faster, more precise and portable. When tested against the github.com/moby/moby Dockerfile, we saw 2x to 9.5x faster builds. This new implementation also supports these new operational models:
    • Parallel build stages
    • Skip unused stages and unused context files
    • Incremental context transfer between builds
  • Build-time secrets: Integrate secrets in your Dockerfile and pass them along in a safe way. These secrets are not stored in the final image, nor are they included in the build cache calculations, which prevents anyone from using the cache metadata to reconstruct the secret.
  • SSH forwarding: Connect to private repositories by forwarding your existing SSH agent connection or a key to the builder instead of transferring the key data.
  • Build cache pruning and configurable garbage collection: Build cache can be managed separately from images and cleaned up with the new command `docker builder prune`. You can also set policies around when to clear build caches.
  • Extensibility: Create extensions for Dockerfile parsing by using the new #syntax directive:
    # syntax = registry/user/repo:tag
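As a rough sketch of how the secrets and `#syntax` features fit together on Engine 18.09 (the secret id and file name here are hypothetical; the experimental Dockerfile frontend must be requested via the directive):

```dockerfile
# syntax = docker/dockerfile:experimental
FROM alpine:3.8

# The secret is mounted only for this RUN step; it is never written to an
# image layer and is excluded from build cache calculations.
RUN --mount=type=secret,id=npm_token \
    cat /run/secrets/npm_token > /dev/null

# Build with BuildKit enabled, supplying the secret from a local file:
#   DOCKER_BUILDKIT=1 docker build --secret id=npm_token,src=./npm_token.txt .
# The build cache can later be cleaned up with:
#   docker builder prune
```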

New Enterprise Features

With this architecture shift and alignment, we’ve also made it much easier to upgrade from the Community engine to the Enterprise engine with a simple license activation. For current Community engine users, that means unlocking many enterprise security features and getting access to Docker’s enterprise-class support and extended maintenance policies. Some of the Enterprise specific features include:

  • FIPS 140-2 validation: Enable FIPS mode to leverage cryptographic modules that have been validated by the National Institute of Standards and Technology (NIST). This is important to the public sector and many regulated industries as it is referenced in FISMA, PCI, and HIPAA/HITECH among others. This is supported for both Linux and Windows Server 2016+.
  • Enforcement of signed images: By enabling engine signature verification in the Docker daemon configuration file, you can verify that the integrity of the container is not compromised from development to execution.
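The daemon-level enforcement above is an Enterprise configuration; Community users can get a rough client-side approximation with Docker Content Trust. A minimal sketch, assuming the image tags you pull have published trust data:

```shell
# Client-side approximation of signature enforcement (Community engine).
# Note: the Enterprise feature described above is configured in the daemon
# configuration file; this flag only affects the local CLI session.
export DOCKER_CONTENT_TRUST=1

# Pulls and runs now fail for tags without valid signed trust data.
docker pull alpine:latest
```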

Docker Engine 18.09 is now available for both Community and Enterprise users. Next week, we’ll highlight more of the differences in the Enterprise engine and why some of our existing Community users may want to upgrade to Enterprise.

Source

Introducing Docker’s Windows Server Application Migration Program

Last week, we announced the Docker Windows Server Application Migration Program, designed to help companies quickly and easily migrate and modernize legacy Windows Server 2008 applications while driving continuous innovation across any application, anywhere.

We recognize that Windows Server 2008 is one of the most widely used operating systems today and the coming end-of-support in January 2020 leaves IT organizations with few viable options to cost-effectively secure their legacy applications and data. The Docker Windows Server Application Migration Program represents the best and only way to containerize and secure legacy Windows Server applications while enabling software-driven business transformation. With this new program, customers get:

  • Docker Enterprise: The leading container platform, and the only one for Windows Server applications.
    Docker Enterprise is the leading container platform in the industry, familiar to millions of developers and IT professionals. It’s also the only one that runs Windows Server applications, with support for Windows Server 2016, 1709, 1803 and, soon, 2019 (in addition to multiple Linux distributions). Organizations routinely save 50% or more through higher server consolidation and reduced hardware and licensing costs when they containerize their existing applications with Docker Enterprise.
  • Industry-proven tools & services: Easily discover, containerize, and migrate with immediate results. Only Docker delivers immediate results with industry-proven services that leverage purpose-built tools for the successful containerization of Windows Server applications in the enterprise. This new service offering is based on proven methodologies from Docker’s extensive experience working with hundreds of enterprises to modernize traditional applications. To accelerate migration of legacy applications, Docker leverages a purpose-built tool, called Docker Application Converter, to automatically scan systems for specific applications and speed up the containerization process by automatically creating Docker artifacts. Similarly, Docker Certified Infrastructure accelerates customers’ ability to operationalize the Docker Enterprise container platform. Docker Certified Infrastructure includes configuration best practices, automation tools and validated solution guides for integrating containers into enterprise IT infrastructure like VMware vSphere, Microsoft Azure and Amazon Web Services.
  • Foundation for continuous innovation: Software-driven transformation that enables continuous innovation across any application, anywhere. Docker’s platform and methodologies enable organizations to both modernize existing applications and adopt new technologies to meet business needs. This enables transformation to be driven by the business and not by technical dependencies. Customers can easily integrate new technology stacks and architectures without friction, including cloud-native apps, microservices, data science, edge computing, and AI.

But don’t just take our word for it. Tele2, a European telecommunications company, is rolling out Docker Enterprise to containerize their legacy Windows Applications at scale.

“We have a vision to ‘cloudify’ everything and transform how we do business as a telecom provider. Docker Enterprise is a key part of that vision. With Docker Enterprise, we have already containerized over half of our application portfolio and accelerated deployment cycles. We are looking forward to getting the advanced Windows Server support features in Docker Enterprise 2.1 into production.”

— Gregory Bohcke, Technical Architect, Tele2

By containerizing legacy applications and their dependencies with the Docker Enterprise container platform, businesses can move them to Windows Server 2016 (and later operating systems) without code changes, saving millions in development costs. And because containerized applications run independently of the underlying operating system, they break the cycle of extensive, dependency-ridden upgrades, creating a future-proof architecture that makes it easy to always stay current on the latest OS.

Source

Introducing Docker Enterprise 2.1 – Advancing Our Container Platform Leadership

Today, we’re excited to announce Docker Enterprise 2.1 – the leading enterprise container platform in the market and the only one designed for both Windows and Linux applications. When Docker Enterprise 2.1 is combined with our industry-proven tools and services in the new Windows Server application migration program, organizations get the best platform for securing and modernizing Windows Server applications, while building a foundation for continuous innovation across any application, anywhere.

In addition to expanded support for Windows Server, this latest release further extends our leadership position by introducing advancements across key enterprise requirements of choice, agility and security.

Choice: Expanding Support for Windows Server and New Kubernetes Features

Docker Enterprise 2.1 adds support for Windows Server 1709, 1803 and Windows Server 2019* in addition to Windows Server 2016. This means organizations can take advantage of the latest developments for Docker Enterprise for Windows Server Containers while supporting a broad set of Windows Server applications.

  • Smaller image sizes: The latest releases of Windows Server support much smaller image sizes which means improved performance downloading base images and building applications, contributing to faster application delivery and lower storage costs.
  • Improved compatibility requirements: With Windows Server 1709 and beyond, the host operating system and container images can deploy using different Windows Server versions, making it more flexible and easier to run containers on a shared operating system.
  • Networking enhancements: Windows Server 1709 also introduced expanded support for Swarm-based routing mesh capabilities, including service publishing using ingress mode and VIP-based service discovery when using overlay networks.

*Pending Microsoft’s release of Windows Server 2019
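The ingress-mode publishing mentioned above can be sketched as a single `docker service create` call (the service name and image tag here are illustrative):

```shell
# Publish a service through the Swarm routing mesh: every node in the
# cluster accepts traffic on port 8080 and forwards it to the service,
# with VIP-based service discovery on the overlay network.
docker service create \
  --name web \
  --publish published=8080,target=80,mode=ingress \
  microsoft/iis:windowsservercore-1709
```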

In addition to Windows Server updates, Docker Enterprise also gets updated with Kubernetes 1.11 and support for pod autoscaling among other new Kubernetes features.

Agility: Greater Insights & Serviceability

As many containerized applications are considered business critical, organizations want to be able to manage and secure these applications over their entire lifecycle. Docker Enterprise 2.1 includes several enhancements to help administrators better streamline Day 2 cluster and application operations:

  • New out-of-the-box dashboards: Enhanced health status dashboards provide greater insight into node and container metrics and allow for faster troubleshooting of issues as well as early identification of emerging issues.
  • Visibility to known vulnerabilities at runtime: Administrators can now identify running containers with known vulnerabilities to better triage and remediate issues.
  • Task activity streams: Administrators can view and track tasks and background activities in the registry including things like vulnerability scans in progress or database updates.
  • Manage images at scale: Online garbage collection and policy-based image pruning help to reduce container image sprawl and reduce storage costs in the registry.

Security: Enhanced Security & Compliance

For many organizations in highly regulated industries, Docker Enterprise 2.1 adds several new important enhancements:

  • SAML 2.0 authentication: Integrate with your preferred Identity Provider through SAML to enable Single Sign-On (SSO) or multi-factor authentication (MFA).
  • FIPS 140-2 validated Docker Engine: The cryptographic modules in Docker Engine – Enterprise have been validated against FIPS 140-2 standards which also impacts industries that follow FISMA, HIPAA and HITECH and others.
  • Detailed audit logs: Docker Enterprise 2.1 now includes detailed logs across both the cluster and registry to capture users, actions, and timestamps for a full audit trail. These are required for forensic analysis after a security incident and to meet certain compliance regulations.
  • Kubernetes network encryption: Protect all host-to-host communications with the optional IPSec encryption module that includes key management and key rotation.

How to Get Started

We’re excited to share Docker Enterprise 2.1 – the first and only enterprise container platform for both Windows and Linux applications – and it’s available today!

To learn more about this release:

Source

The Push to Modernize at William & Mary

At William & Mary, our IT infrastructure team needs to be nimble enough to support a leading-edge research university — and deliver the stability expected of a 325-year-old institution. We’re not a large school, but we have a long history. We’re a public university located in Williamsburg, Virginia, and founded in 1693, making us the second-oldest institution of higher education in America. Our alumni range from three U.S. presidents to Jon Stewart.

The Linux team in the university’s central IT department is made up of five engineers. We run web servers, DNS, LDAP, the backend for our ERP system, components of the content management system, applications for administrative computing, some academic computing, plus a long list of niche applications and middleware. In a university environment with limited IT resources, legacy applications and infrastructure are expensive and time-consuming to keep going.

Some niche applications are tools built by developers in university departments outside of IT. Others are academic projects. We provide infrastructure for all of them, and sometimes demand can ramp up quickly. For instance, an experimental online course catalog was discovered by our students during a registration period. Many students decided they liked the experimental version better and told their friends. The unexpected demand at 7am sent developers and engineers scrambling.

More recently, IT was about to start on a major upgrade of our ERP system that would traditionally require at least 100 new virtual machines to be provisioned and maintained. The number of other applications was also set to double. This put a strain on our network and compute infrastructure. Even with a largely automated provisioning process, we didn’t have much time to spare.

We wanted to tackle both the day-to-day infrastructure management challenges and the scalability concerns. That’s what led us to look at Docker. After successfully running several production applications on Docker Engine – Community, we deployed the updated ERP application in containers on Docker Enterprise. We’re currently running on five bare metal Dell servers that support 47 active services and over 100 containers with room to grow.

Docker Enterprise also dramatically simplifies our application release cycles. Now most applications, including the ERP deployment, are being containerized. Individual departments can handle their own application upgrades, rollbacks and other changes without waiting for us to provision new infrastructure. We can also scale resources quickly, taking advantage of the public cloud as needed.

Just like our researchers have done for years, Docker has also enabled deeper collaboration with our counterparts at other universities. As we all work on completing the same major ERP upgrade we’re able to easily share and adopt enhancements much faster than with traditional architectures.

Today, Docker Enterprise is our application platform of choice. Going forward, it opens up all kinds of possibilities. We are already exploring public cloud for bursting compute resources and large-scale storage. In a year or two, we expect to operate 50 percent to 80 percent in the cloud.

Source