Docker Certified Logging Containers and Plugins from Partners

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on the Docker Enterprise container platform, with support from both Docker and the publisher.

In this review, we’re looking at Docker Logging Containers and Plugins. Docker Enterprise provides built-in logging drivers to help users get information from Docker nodes, running containers and services. The Docker Engine also exposes a Docker Logging Plugin API for partner logging plugins. Innovations from the extensive Docker ecosystem build on these capabilities to provide complete log management solutions that include searching, visualizing, monitoring, and alerting.

These solutions are validated by both Docker and the partner company and integrated into a seamless support pipeline that provides customers with the world-class support they have become accustomed to when working with Docker.

Check out the latest certified Docker Logging Containers and Plugins that are now available from our partners on Docker Store:

Learn More:

Source

Docker Achieves FIPS 140-2 Validation

We are excited to share that we have achieved formal FIPS 140-2 validation (Certificate #3304) from the National Institute of Standards and Technology (NIST) for our Docker Enterprise Edition Crypto Library. With this validation and industry-recognized seal of approval for cryptographic modules, we are able to further deliver on the fundamental confidentiality, integrity and availability objectives of information security and provide our commercial customers with a validated and secure platform for their applications. As required by the Federal Information Security Management Act (FISMA) and other regulatory technology frameworks like HIPAA and PCI, FIPS 140-2 is an important validation mechanism for protecting the sensitivity and privacy of information in mission-critical systems.

As we highlighted in a previous blog post, Docker Engine – Enterprise version 18.03 and above includes this now-validated crypto module. This module has been validated at FIPS 140-2 Level 1. The formal Docker Enterprise Edition Crypto Library’s Security Policy calls out the specific security functions in Docker Engine – Enterprise supported by this module and includes the following:

  • ID hashes
  • Swarm Mode distributed state store and Raft log (securely stores Docker Secrets and Docker Configs)
  • Swarm Mode overlay networks (control plane only)
  • Swarm Mode mutual TLS implementation
  • Docker daemon socket TLS binding

Activating the FIPS mode of operation in Docker Engine – Enterprise is as easy as enabling FIPS Mode on the underlying host operating system and restarting the Engine (if it’s already running). Docker Engine – Enterprise’s FIPS mode can also be explicitly activated by prepending the DOCKER_FIPS=1 environment variable to your commands. Furthermore, FIPS mode can be enabled in the next Docker Enterprise release, thus providing a secure foundation for the management and registry services in addition to the application cluster.
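As a rough sketch of the explicit approach (the host-OS FIPS steps vary by distribution, and the systemd drop-in path below is just one common convention, not an official Docker file), forcing FIPS mode for the Engine might look like this:

# Force FIPS mode when starting the daemon manually
$ sudo env DOCKER_FIPS=1 dockerd

# Or, for a systemd-managed Engine, set the variable in a drop-in (path is an example)
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ printf '[Service]\nEnvironment="DOCKER_FIPS=1"\n' | sudo tee /etc/systemd/system/docker.service.d/fips.conf
$ sudo systemctl daemon-reload && sudo systemctl restart docker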

Behind the scenes, Docker Engine – Enterprise leverages a proprietary switching library to swap the crypto module used for these functions when FIPS mode is enabled.

We are continuing to work to incorporate this FIPS 140-2 validated module into the remainder of the Docker Enterprise container platform so stay tuned for updates.

Source

The Top 6 Questions You Asked on Containerizing Production Apps

We recently hosted IDC research manager Gary Chen as a guest speaker on a webinar where he shared results from a recent IDC survey on container and container platform adoption in the enterprise. IDC data shows that more organizations are deploying applications into production using containers, driving the need for container platforms like Docker Enterprise that integrate broad management capabilities including orchestration, security and access controls.

The audience asked a lot of great questions about both the IDC data and containerizing production applications. We picked the top questions from the webinar and recapped them here.

If you missed the webinar, you can watch the webinar on-demand here.

Top Questions from the Webinar

Q: What are the IDC stats based on?

A: IDC surveyed more than 300 container deployers in the US (people with primary responsibility for container infrastructure at companies with more than 1,000 employees) and combined the results with a variety of data sources IDC collects about the industry.

Q: IDC mentioned that 54% of containerized applications are traditional apps. Is there a simple ‘test’ to see if an app can be containerized easily?

Source: IDC, Container Infrastructure Market Assessment: Bridging Legacy and Cloud-Native Architectures — User Survey Summary, Doc # US43642018, March 2018

A: Docker works with many organizations to assess and categorize their application portfolios based on the type of app, their dependencies, and their deployment characteristics. Standalone, stateless apps such as load balancers, web, PHP, and JEE WAR apps are generally the easiest applications to containerize. Stateful and clustered apps are also candidates for containerization, but may require more preparation.

Q: How do we containerize applications that are already in production?

A: Docker has created a set of tools and services that help organizations containerize existing applications at scale. We help assess and analyze your application portfolio, and have the tools to automate application discovery and conversion to containers and a methodology to help integrate them into your existing software pipelines and infrastructure. Find out more here.

Q: How do we decide whether to use Swarm or Kubernetes for orchestration of applications in production?

A: It comes down to the type of application and your organization’s preferences. The best part of Docker Enterprise is that you can use either orchestrator within a single platform so your workflows and UI are consistent. Your application can be defined in Compose files or Kubernetes YAML files. Additionally, you can choose to deploy a Compose file to Swarm or Kubernetes within Docker Enterprise.
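For example, a Compose file can be deployed to the Swarm orchestrator with a single command; the stack and file names below are placeholders:

# Deploy a version 3 Compose file as a Swarm stack
$ docker stack deploy -c docker-compose.yml my-app

# List the services that the stack created
$ docker stack services my-app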

Q: How can containers be checked for vulnerabilities?

A: Containers are based on an image file. Vulnerability scanning in Docker Enterprise does a binary-level scan on each layer of the image file, identifies the software components in each layer and compares them against the NIST CVE database. You can find more details here.

Q: We’re exploring different cloud-based Kubernetes services. Why should we look at Docker Enterprise?

A: The value of the Docker Enterprise platform’s integrated management and security goes well beyond a commercially-supported Kubernetes distribution. Specifically, Docker Enterprise allows you to leverage these capabilities consistently regardless of the cloud provider.

With Docker Enterprise, you get an integrated advanced image registry solution that includes vulnerability scanning, registry mirroring and caching for distributed development teams, and policy-based image promotions for scalable operations. We also provide integrated operations capabilities around Kubernetes – simplifying things like setting up teams, defining roles and access controls, integrating LDAP, creating client certificates, and monitoring health. Docker Enterprise makes the detailed configurations of Kubernetes quick and easy to get started and use.

Source

Docker Certified Containers from Monitoring Partners

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on the Docker Enterprise container platform, with support from both Docker and the publisher.

In this review, we’re looking at solutions to monitor Docker containers. Docker enables developers to iterate faster with software architectures consisting of many microservices. This poses a challenge to traditional monitoring solutions because the target processes are no longer statically allocated or tied to particular hosts. Monitoring solutions are now expected to track ephemeral and rapidly scaling sets of containers. The Docker Engine exposes APIs for container metadata, lifecycle events, and key performance metrics. Partner monitoring solutions collect both system and Docker container events and metrics in real time to monitor the health and performance of the customer’s entire infrastructure, applications and services. These solutions are validated by both Docker and the partner company and integrated into a seamless support pipeline that provides customers with the world-class support they have become accustomed to when working with Docker.

Check out the latest certified Docker Monitoring Containers that are now available from our partners on Docker Store:

Source

3 Customer Stories You Don’t Want to Miss at DockerCon Barcelona 2018

One of the great things about DockerCon is the opportunity to learn from your peers and find out what they’re doing. We’re pleased to announce several of the sessions in our Customer Stories track. In the track, you’ll hear from your peers who are using Docker Enterprise to modernize legacy applications, build new services and products, and transform the customer experience.

These are just a few of the sessions in the catalog today. You can browse the full list of sessions here. We also have a few more we’ll announce over the coming weeks (some customers just like to keep things under wraps for a little longer).

Desigual Transforms the In-Store Experience with Docker Enterprise Containers Across Hybrid Cloud

Mathias Kriegel, IT Ops Lead and Cloud Architect

Joan Anton Sances, Software Architect

We’re particularly excited to have a local company share their story at DockerCon. In this session, find out how Docker Enterprise has helped Desigual, a global $1 billion fashion retailer headquartered in Barcelona, transform the in-store customer experience with a new “shopping assistant” application.

Not Because We Can, But Because We Have To: Tele2 Containerized Journey to the Cloud
Dennis Ekkelenkamp, IT Infrastructure Manager
Gregory Bohncke, Technical Architect

How does an established mobile phone provider transition from a market strategy of being the cheap underdog to that of a market challenger, leading with a high-quality product and awesome features that fearlessly liberate people to live a more connected life? Tele2 Netherlands, a leading mobile service provider, is transforming how it does business with Docker Enterprise.

From Legacy Mainframe to the Cloud: The Finnish Railways Evolution with Docker Enterprise
Niko Virtala, Cloud Architect

Finnish Railways (VR Group) joined us at DockerCon EU 2017 to share how they transformed their passenger reservation system. That project paid off. Today, they have containerized multiple applications, running both on-premises and on AWS. In this session, Finnish Railways will explain the processes and tools they used to build a multi-cloud strategy that lets them take advantage of geo-location and cost advantages to run in AWS, Azure and soon Google Cloud.

You can read more about these and other sessions in the Customer Stories track at DockerCon Barcelona 2018 here.

Docker Customers, docker enterprise, Docker Partner, DockerCon 2018, DockerCon Barcelona, Spotlight

Source

NextCloudPi backup strategies – Own your bits

This post is a review of the different options that we have when deciding on our backup strategy. As more features have appeared over time, this information has become scattered around this blog, so it is nice to have a review in a single article.

It goes without saying that we need to back up our sensitive data in case a data loss event occurs, such as a server failure, hard drive failure or a house fire. Many people only think about this when it is too late, and that can be devastating.


Ideally we would have at least two copies of all our data, better three. This covers us against hardware failure. If possible, one of them should be in a different location, in case of an event such as somebody breaking into our home or a house fire, and it would also ideally be encrypted.

Now, if we are running NCP on low-end hardware, options such as encryption or RAID can have a prohibitive computational cost. With this in mind, the following are the backup features that have been developed for this scenario.

Periodic backups

This is the most basic option: back up our Nextcloud files and database at regular intervals, in case our system breaks or our database becomes corrupted. As we add more and more gigabytes of data, this can take a really long time and take up a lot of space.

The data is better off on an external USB drive than on the SD card, not only because it is cheaper per megabyte, but also because USB drives are more reliable.

Pros:

  • Doesn’t require any advanced filesystem such as BTRFS.
  • Self-contained. We can copy the backup to another NCP instance and restore it easily.

Cons:

  • Can take up a lot of space, even if compressed.
  • They can be slow.
  • Our cloud will be inaccessible in maintenance mode during the process.

It is recommended to save the backups to a second hard drive, in case the first one fails (instructions). To do this, just plug in a second hard drive and set it as the backup destination in nc-backup-auto. Remember that if we are using more than one drive, we should reference each one by label in the NCP options, as explained here.
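As a quick illustration (the label name is made up), you can inspect the current labels with lsblk and set one on a BTRFS-formatted backup drive with the btrfs tool:

# Show devices with their filesystem type and label
$ lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT

# Give the second (BTRFS) drive a recognisable label
$ sudo btrfs filesystem label /dev/sdb1 myBackupDrive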

Periodic dataless backups

To alleviate some of these issues, we can do dataless backups. These are typically only a couple of hundred megabytes in size and the downtime is small. The problem is that we then need to find another way of duplicating the data itself.

Pros:

  • Doesn’t require any advanced filesystem such as BTRFS
  • Almost self-contained. We can copy the backup to another NCP instance and easily restore our accounts, apps and so on.
  • The downtime is quite small
  • Doesn’t take much additional space

Cons:

  • We need to back up our data separately
  • Restoring is more complicated, because the data has to be brought back in separately

After restoring, we have to edit config.php and point our instance to the path where our data is, then we need to run nc-scan to make Nextcloud aware of the new files.
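For illustration only (the paths are examples of a typical NCP layout), pointing the instance at the restored data and re-indexing it could look like this; nc-scan in the NCP menu performs essentially this kind of rescan:

# Point Nextcloud at the restored data directory in config.php
$ sudo sed -i "s|'datadirectory' =>.*|'datadirectory' => '/media/myCloudDrive/ncdata',|" /var/www/nextcloud/config/config.php

# Re-index the files so Nextcloud picks them up again
$ sudo -u www-data php /var/www/nextcloud/occ files:scan --all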

Periodic BTRFS snapshots

We have dedicated several posts to BTRFS and its features already. NCP formats our USB drive to BTRFS by default, and our datadir will automatically live in a subvolume. Just activate nc-snapshots to get hourly snapshots of your data with btrfs-snp.

 


$ ls -1 /media/myCloudDrive/ncp-snapshots/
daily_2018-01-30_211703
daily_2018-01-31_221703
daily_2018-02-01_231702
daily_2018-02-02_231703
daily_2018-02-03_231702
daily_2018-02-04_231702
daily_2018-02-05_231702
hourly_2018-02-05_221701
hourly_2018-02-05_231701
hourly_2018-02-06_001701
hourly_2018-02-06_011701
hourly_2018-02-06_021701
hourly_2018-02-06_031701
hourly_2018-02-06_041701
hourly_2018-02-06_051701
hourly_2018-02-06_061701
hourly_2018-02-06_071701
hourly_2018-02-06_081701
hourly_2018-02-06_091701
hourly_2018-02-06_101702
hourly_2018-02-06_111701
hourly_2018-02-06_121701
hourly_2018-02-06_131701
hourly_2018-02-06_141701
hourly_2018-02-06_151701
hourly_2018-02-06_161701
hourly_2018-02-06_171701
hourly_2018-02-06_181702
hourly_2018-02-06_201701
hourly_2018-02-06_211701
hourly_2018-02-06_221701
manual_2017-12-28_113633
monthly_2017-12-28_101702
monthly_2018-01-27_101703
weekly_2018-01-11_111701
weekly_2018-01-18_111702
weekly_2018-01-25_111702
weekly_2018-02-01_111702

 

Because BTRFS is a copy-on-write (COW) filesystem, new snapshots only take as much additional space as the files added since the last snapshot, which makes them space efficient.

Also, the snapshots can be sent incrementally and very efficiently to another BTRFS filesystem using btrfs-sync. We can sync to another hard drive on the same machine, or to a BTRFS filesystem on another machine, in which case the transfer is encrypted through SSH.


In a low-bandwidth situation, btrfs-sync can send the deltas compressed, but that takes a big toll on the CPU, so it is not recommended on low-end boards.
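Under the hood this kind of sync is based on btrfs send/receive; a minimal manual sketch of an incremental transfer over SSH (hostnames, paths and snapshot names are placeholders) would be:

# Send only the delta between the previous snapshot and the new one to a remote BTRFS filesystem
$ sudo btrfs send -p /media/myCloudDrive/ncp-snapshots/hourly_OLD \
    /media/myCloudDrive/ncp-snapshots/hourly_NEW \
    | ssh backup-host "sudo btrfs receive /mnt/backup/ncp-snapshots"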

Pros:

  • Virtually instant, no matter how many Terabytes we have
  • No downtime
  • Space efficient
  • Can be synced efficiently to another hard drive or another machine through SSH by using btrfs-sync

Cons:

  • Needs to run on BTRFS, which is the default filesystem that nc-format-USB uses
  • If we want to sync it to another USB drive, that drive also needs to use BTRFS
  • If we want to sync it remotely, we need a BTRFS subvolume on the other end and to set up SSH credentials

If we only care about our data, this can mean zero downtime and an efficient means of preventing accidental deletion. In fact, I have the Trash and Versions apps disabled. I recommend combining nc-snapshots and nc-snapshot-sync with dataless backups to get the best of both worlds.

Rsync


Another option for saving our data remotely is to sync it with rsync. This is also quite efficient, but compared to BTRFS snapshots we won’t retain a history of the datadir, just the latest version.

Pros:

  • Doesn’t require a particular filesystem on either end
  • Efficient delta sync, only copies the new files

Cons:

  • Need to set up SSH credentials
  • Not able to sync snapshots; it will only mirror the latest version of our datadir

This option also lets us keep our data safe if something happens in our house, and it is more flexible because it doesn’t require a BTRFS filesystem on the other end.
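A minimal sketch of such a sync, assuming SSH access to a remote host (names and paths are placeholders), could be:

# Mirror the data directory to a remote host over SSH; --delete keeps it an exact mirror
$ sudo rsync -aHAX --delete /media/myCloudDrive/ncdata/ backup-host:/mnt/backup/ncdata/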

Make sure you keep your data with you; we can never be too safe!
Source

The Role of Enterprise Container Platforms

As container technology adoption continues to advance and mature, companies now recognize the importance of an enterprise container platform. More than just a runtime for applications, a container platform provides a complete management solution for securing and operationalizing applications in containers at scale over the entire software lifecycle.

While containers may have revolutionized the way developers package applications, container platforms are changing the way enterprises manage and secure both mission-critical legacy applications and microservices, on premises and across multiple clouds. Enterprises are beginning to see that container runtime and orchestration technologies alone don’t address these critical questions:

  • Where did this application come from?
  • Was the application built with company and/or industry best practices in mind?
  • Has this application undergone a security review?
  • Is my cluster performing as expected?
  • If my application is failing or underperforming, where should I look?
  • Will this environment run the same on the new hardware/cloud that we’re using?
  • Can I use my existing infrastructure and/or tools with this container environment?

Leading Industry Analysts Highlight Container Platforms for Enterprise Adoption

For some time, there was a lot of confusion in the market between orchestration solutions and container platforms. But in 2018, we are seeing more alignment across major industry analyst firms over the definition of container platforms. Today, Forrester published the Forrester New Wave™: Enterprise Container Platform Software Suites, Q4 2018, in which Docker was named a Leader.

This report is based on a multi-dimensional review of enterprise container platform solutions that go beyond runtime and orchestration, including:

  • Image management
  • Operations management
  • Security features
  • User experience
  • Application lifecycle management
  • Integrations and APIs
  • And more….

Download the full report here.

Enterprise Container Platforms: The Docker Approach

Docker is committed to delivering a container platform built on the values of choice, agility, and security. Docker Enterprise is now used in over 650 organizations around the world, supporting a wide range of use cases and running on a variety of infrastructures, including both private data centers and public clouds. These customers are seeing significant infrastructure cost reduction, operational efficiency and increased security as a result of containerization.

One key area of focus for us is being an enterprise solution for all applications – including Windows Server applications. While containers did originate with Linux, Windows Server applications represent more than half of all enterprise applications in use today. By partnering with Microsoft since our early days, we have been helping organizations containerize and operate Windows Server applications in production for over two years and counting. More importantly, the Docker container platform addresses both Windows Server and Linux applications – from image management to operations management, integrated security features to user experience and more.

We are honored to be recognized as a Leader in the Forrester New Wave report and look forward to working with more companies as they build out their container platform strategy.

To learn more about Docker Enterprise and the importance of container platforms:

container platform, docker enterprise, Forrester New Wave

Source

Run your blog with Ghost, Docker and LetsEncrypt

In this blog post I’ll show you how to set up your own blog just like mine with Ghost, Docker, Nginx and LetsEncrypt for HTTPS. You can follow these instructions to kick-start your own blog or find some alternative approaches in the conclusion.

When I decided to start my blog I knew that I wanted it to have a clean and uncluttered theme, no dependencies on an external relational database and that it should allow me to write in Markdown. Markdown is a kind of structured text for writing documentation which gets turned into HTML for viewing.

I heard about Ghost – a blogging platform written in Node.js that used SQLite as a back-end. After trying it out I set it up on my Raspberry Pi and registered a domain name that allowed dynamic DNS updates. From there I started writing 1-3 times per week about the various open-source technologies I was learning.

Background

I used to self-host my blog at home with a domain-name from namecheap.com. Their free dynamic-DNS service allowed me to serve up web content without a static IP address. I ran the blog on my Raspberry Pi which involved producing and maintaining a Dockerfile for ARM. This setup served me well but the uptime started to suffer every time the ISP had a “hiccup” in their network. So I moved it over to a public cloud host to ensure a better uptime.

Follow my current setup

In the next section I’ll tell you how my blog is running today and you can follow these steps to set up your own in the same way.

Set up a cloud host

The first step is to set up a cloud host. If you already have a blog or website and you are using a log analyzer or analytics platform then find out where most of your readers are based. For me, the majority of my readers are based in North America on the East Coast so that’s where I provisioned a cloud host.

Google Analytics is a good option for analytics and real-time insights.

Here are some good options for hosting providers:

The cheapest options will cost you around 5-10 USD / month with Packet costing around 36 USD / month. I do not advise going any cheaper than this. The first two providers listed use VMs so also offer the ability to make snapshots of your host at any time for a small fee. This is a quick way of doing a backup.

Pick an OS

I picked Ubuntu Linux 16.04 LTS. All of the providers above support this flavour at the time of writing. Ubuntu 17.10 is also a valid option but some command line steps may be different.

Lock down the configuration

Some providers offer a cloud firewall which should be enough to close off incoming connections. The advantage of a cloud firewall is that if you mess things up you can turn it off easily. Be careful configuring a firewall in software – you don’t want to get locked out inadvertently.

I used ufw Uncomplicated Firewall to close incoming connections from the Internet and allow outgoing ones.

  • Install ufw:

$ sudo apt install ufw

  • Setup a basic configuration to allow SSH, HTTPS and HTTP incoming

Create setup_firewall.sh and run chmod +x ./setup_firewall.sh:

#!/bin/sh
# Block all incoming traffic by default, allow all outgoing traffic
ufw default deny incoming
ufw default allow outgoing
# Allow SSH plus plain and encrypted HTTP
ufw allow ssh
ufw allow 80/tcp
ufw allow 443/tcp
# Turn the firewall on
ufw enable

Then run the script: sudo ./setup_firewall.sh.

You can check the config at any time with:

$ sudo ufw status verbose

You can disable the ufw configuration at any time with:

$ sudo ufw disable

In a new terminal window check that you can still access your host via ssh.

Install Docker

The simplest way to install Docker CE on Ubuntu is via the official utility script:

curl -sL https://get.docker.com | sh

If you’re using a regular user account then add it to the docker group with usermod, e.g.:

$ sudo usermod -aG docker alex

Log out and in again so that your user can access the docker command.

Install Nginx on the host – part 1

Nginx is a load-balancer and reverse proxy. We will use it to stand in front of Ghost and offer HTTPS. If you want to run more than one blog later on you can also use Nginx to help with that.

This is a two-part process.

We’re installing Nginx directly onto the host for simplicity and lower latency.

$ sudo apt install nginx

You can take my configuration file and use it as a template – just change the domain name values for your own host.

This configuration does two things:

  • Prepares a static folder for LetsEncrypt to use later on port 80
  • Sets up a redirect to the HTTPS-enabled version of your site for any calls on port 80

server {
    listen 80;
    server_name blog.alexellis.io;

    location /.well-known/ {
        root /var/www/blog.alexellis.io/.well-known/;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

Change the hostname etc. and place the file at /etc/nginx/conf.d/default

Obtain a HTTPS certificate from LetsEncrypt

Before we enable Nginx we’ll need to obtain a certificate for your domain. HTTPS encrypts the HTTP connection between your users and your blog. It is essential for when you use the admin page.

Note: if you avoid this step your password will be sent in clear-text over the Internet.

Use certbot to get a certificate.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt-get install python-certbot-nginx
sudo certbot --authenticator webroot --installer nginx

You can also use an alternative for HTTPS such as Cloudflare’s free tier, but this will not give you a green lock and only encrypts a user’s traffic from their device up to the Cloudflare server. The last hop is left open and vulnerable.

Install Nginx on the host – part 2

Now that you have a certificate in /etc/letsencrypt/live for your blog you can finish adding the configuration for Nginx.

These lines enable HTTPS for your blog, but remember to personalise the domain replacing blog.alexellis.io with your own domain name.

Edit /etc/nginx/conf.d/default:

server {
    server_name blog.alexellis.io;
    listen 443 ssl;

    location / {
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    ssl_certificate /etc/letsencrypt/live/blog.alexellis.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.alexellis.io/privkey.pem;
    ssl on;
}

Run the blog with Ghost and Docker

In this step we will run the blog software in a container and configure it with a URL.

Save create.sh and run chmod +x ./create.sh

#!/bin/sh
docker rm -f ghost

docker run --name ghost \
  -p 127.0.0.1:2368:2368 \
  -e url=https://blog.alexellis.io \
  -v /home/alex/ghost/content:/var/lib/ghost/content \
  --restart=always \
  -d ghost:1.21.1-alpine

  • Replace /home/alex with your home directory
  • Replace url= with the URL of your website

Note that the line -p 127.0.0.1:2368:2368 binds Ghost only to the loopback interface, so Docker won’t bypass the ufw configuration and expose the Ghost blog directly to the Internet.

You can run the shell script now called ./create.sh and check that the site came up with docker ps. If something appears to have gone wrong then type in docker logs ghost to check on the container. The create.sh shell-script is re-runnable but you only need to run it once – when you restart the machine Docker will automatically restart the container.

Attempt to access the site from the host’s shell:

$ curl 127.0.0.1:2368 -H "Host: blog.alexellis.io"

Enable Nginx

You can now start and enable Nginx, then head over to your URL in a web-browser to test that everything worked.

$ sudo systemctl enable nginx
$ sudo systemctl start nginx

If you see an error then type in sudo systemctl status nginx -l to view the logs. A common error is to miss a ; semi-colon from the end of a line in a configuration file.

Register your admin account

You now need to register the admin account on your blog so that you can write new posts.

Head over to the URL of your blog adding the suffix /admin and follow the new user flow.

Set up the classic theme (optional)

If you have used Ghost before then you may remember the classic theme (LTS) which I use on my blog. If you would like that instead of the default then you can find it on the Ghost GitHub site in the lts branch. Click “Clone or Download” then “Download Zip”. You can then install it via the Ghost admin page.

Day 2 operations

Here are some “Day 2” operations that relate to how I run my blog on a day-to-day basis.

Setup a backup regime

You need to back up your blog posts. Here are a few simple ideas:

  • Take snapshots

The easiest way to make a backup is to take a snapshot of your host through the cloud provider’s control-panel feature. These snapshots do cost money and it’s often not a one-off cost – it will be recurring, so bear that in mind if picking this option. If you lose the host you can restore the machine and the blog posts at once.

  • Store an archive off-site

Take a tar or zip archive of the content directory and store it in a private AWS S3 bucket using the CLI tools (see the sketch after this list). This is a cheaper option than taking snapshots. If you lose your host then you’ll have to rebuild the whole machine using this blog post, but your blog posts will be restored.

  • Export data from the UI

You can also export your data as a .json file from the admin panel. This is a manual task, but simple to do. It can be used to restore your data if you set up Ghost again.
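Coming back to the off-site archive option mentioned above, a rough sketch (the bucket name and paths are made up) could be as simple as:

# Archive the Ghost content directory and copy it to a private S3 bucket
$ BACKUP=ghost-content-$(date +%F).tar.gz
$ tar -czf "$BACKUP" -C /home/alex/ghost content
$ aws s3 cp "$BACKUP" s3://my-private-blog-backups/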

Pro-tip: Backup regimes need to be tested – don’t wait until you have important posts to test out recovering from a failure.

Update Ghost

The Docker team host an official image for ghost on the Docker Hub. When you want to update to a newer version then check the tags listed and edit the create.sh script and run it again.

Use analytics

You can install Google Analytics on any site you set up for free. It will show you where your audience is located and which pages they are most interested in. I use this data to drive what to blog about next. It also gives you clues as to where your traffic is coming from – was it Docker Weekly? Or that post on Reddit that got me that spike in traffic?

If you have objections to using Google Analytics, then I’d suggest using some kind of log analyzer or aggregator instead such as the ELK stack or matomo.

Use the insights from the analytics to make your blog better.

Renew your HTTPS certificates

Every three months you will need to renew your HTTPS certificates with LetsEncrypt. This is a caveat of using a free service – they will also send you emails when the certificate is close to its expiry date.

You can automate this with a shell-script and cron.
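A minimal sketch, assuming the certbot install from earlier, is a renew job in cron; the file path and schedule below are just examples:

# /etc/cron.d/certbot-renew – attempt renewal twice a day and reload Nginx when a cert is renewed
0 3,15 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"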

I enable comments on my site via Disqus. This gives people a quick way to get in touch with me avoiding unnecessary emails or Tweets which are out of context. They can also see a thread of discussion and what other people have said about the post. Sometimes one person will post a question and someone else will answer it before I have a chance to get there.

If you don’t want to enable comments then that’s OK too, my advice is to make sure it’s super easy for people to get in touch with you with questions, comments and suggestions.

Run a second blog

If you have several blogs you can run them all on the same system providing it has enough RAM and disk available. Just follow the steps above on the same host for each blog.

Wrapping up

I’ve been using Ghost as my primary blogging platform for several years and find it easy to write a blog post in one sitting or over several by using the draft post feature. My blog post today does not offer the only way to set up Ghost – it offers my current method and there is always room for automation and improvement.

If setting up your own blog seems like too much work (maybe it is) then there are some short-cuts available:

  • Medium.com offers free blogs and a WYSIWYG editor, it’s very popular with developers.
  • Ghost offer a hosted option for their blog – be warned they do charge per page-view, so if you have a popular site it could be expensive
  • DigitalOcean offer a VM image which has Ghost and ufw set up already, but without LetsEncrypt configured. It could be a good starting point if you want to host on DigitalOcean.


Follow me on Twitter @alexellisuk

Acknowledgements: thanks to Richard Gee for proof reading this post. Lead image from Pexels.

Source

Now Open: DockerCon Europe Diversity Scholarship!

Over the last 3 years, Docker has provided over 75 financial scholarships to members of the Docker community who are traditionally underrepresented to attend DockerCon. By actively promoting diversity of all kinds, our goal is to make DockerCon a safe place for all to learn, belong and collaborate.

With the continued support of Docker and one of our DockerCon scholarship sponsors, the Open Container Initiative (OCI), we are excited to announce the launch of the DockerCon Europe Diversity Scholarship Program. This year, we are increasing the number of scholarships we are granting to ensure attending DockerCon is an option for all.

Apply Now!

Deadline to Apply:

Friday, 26 October, 2018 at 5:00PM PST

Selection Process

A committee of Docker community members will review and select the scholarship recipients. Recipients will be notified by the week of 7 November 2018.

What’s included:

Full Access DockerCon Conference Pass

Please note, travel expenses are not covered under the scholarship and are the responsibility of the scholarship recipient.

Requirements

Must be able to attend DockerCon Europe 2018

Must be 18 years old or older to apply

Must be able to travel to Barcelona, Spain

We wanted to check back in with DockerCon Europe 2017 scholarship recipient, Roshan Gautam, and hear how attending DockerCon Europe has impacted his life. In Roshan’s words:

DockerCon Experience

Ever since I learned about Docker from a senior friend of mine back in early 2017, I got super excited about this technology and started learning Docker. I enjoyed using Docker so much that I applied for a campus ambassador program and started a community on my campus.** One day, one of the members of the campus community mentioned the scholarship program for DockerCon EU 2017. I applied for the scholarship and, cheers to my luck, I got it.

At DockerCon, I had a really wonderful time listening to the speakers, meeting new people with a similar mindset and, of course, having nice healthy food. “Guidance, Mentoring, Caring” – these are the words I want to use for the amazing organizers of the conference. Along with attending the high-quality sessions in the conference, I got the chance to meet other fellow campus ambassadors from different places. I can’t stop mentioning how amazing the After Party was. It was my first time in Denmark (actually Europe) and I really had the experience of a lifetime.

After returning from DockerCon EU, I started sharing my experience with my school community as well as the local Docker community in Kathmandu. I took courses online, joined a company in Nepal and started working there as a DevOps engineer (and developer). There, we organized 2 week long workshops and seminars. Recently, I started my own company in Kathmandu where we use Docker for every single project, both in development and production. I have been advocating for Docker and the communities behind it in different events and workshops in Nepal. I believe DockerCon EU 2017 was a life-changing moment in my life and I have no complaints.

**Please note: the Docker Campus Ambassador program is no longer active.

Learn more about the DockerCon Diversity Scholarship here.

Have questions or concerns? Reach us at dockercon@docker.com

Interested in sponsoring DockerCon? Reach out here to learn more sponsors@docker.com

More free Docker resources:

docker scholarship, dockercon, dockercon eu

Source

Creating a minimal Debian container for Docker – Own your bits

In the last post, we introduced some basic techniques to free up unused space on a Debian system. Following those steps, I created a Debian 8 Docker image that takes only 56.7 MB!

Usage

You can get it by typing the following, but you really don’t need to, because docker run pulls the image for you if you do not already have it. It is still useful for getting updates.

docker pull ownyourbits/minidebian

Bash into it with

docker run --rm -ti ownyourbits/minidebian

Run any command inside the container, for instance list root with

docker run --rm -ti ownyourbits/minidebian ls /

Is this small?

In order to see how small this really is, we can compare it to a minimal system built using debootstrap. More details on this below.

Docker images can be made very small, because there are parts of the system that are not needed in order to launch things in a container. If we take a look at the official Debian Docker repository, we can see that their images are smaller than the filesystem generated by debootstrap. They even have slim versions that get rid of locales and man pages, but they are still bigger than ownyourbits/minidebian.

$ docker images
REPOSITORY               TAG           IMAGE ID       CREATED        SIZE
ownyourbits/minidebian   latest        e3d72e6d0731   40 hours ago   56.7 MB
debootstrap-sid          latest        a702df4074d3   41 hours ago   316 MB
debian                   jessie-slim   7d86024f45a4   4 weeks ago    79.9 MB
debian                   jessie        e5599115b6a6   4 weeks ago    123 MB

Is this useful for anything?

It is! Docker containers are quite handy to use and allow us to play around with a Debian system easily. Sure, we could always do this with a virtual machine or with debootstrap, but there are benefits to using Docker.

One benefit lies in the fact that Docker uses overlayfs, so any changes made to your container will be lost when you exit, unless you issue docker commit. We can play around, experiment and break things without fear, and then throw it all away.
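For example, you can experiment in a throw-away container and keep the result as a new image if you like it; the container and tag names below are arbitrary:

# Start a container, try things inside it, then persist the result as a new image
$ docker run -ti --name sandbox ownyourbits/minidebian
# ... experiment inside the container, then exit ...
$ docker commit sandbox minidebian:experiment
$ docker rm sandbox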

Another benefit is that we can use it to build more complex systems, overlaying a database, a Java runtime or a web server on top of it. That means that if an Apache server adds a 140 MB layer, you only have to get that compressed overlay, which is quite fast and space efficient.

It is also convenient for distributing software with its dependencies. Everything is packed for you and you do not have to deal with configuration, which makes trying things out easy. Want to get a feel for Gentoo? docker pull gentoo/stage3-amd64 will save you tons of compilation and configuration time.

Finally, we can share this easily on dockerhub.io or our private docker repo.

Details

In order to get a working Debian system that we can then trim down, we have different options.

One of them is working on a live ISO, another is starting from the official Debian Docker repo that we mentioned earlier.

Another one is using good old debootstrap. Debootstrap is a little tool that gets the base debs from the official repositories, then installs them in a directory, so you can chroot to it. It provides the basic directory structure for Debian.

We can see what packages Debian considers essential:

$ debootstrap --print-debs sid .

I: Keyring file not available at /usr/share/keyrings/debian-archive-keyring.gpg; switching to https mirror https://deb.debian.org/debian

I: Retrieving InRelease

I: Retrieving Packages

I: Validating Packages

I: Resolving dependencies of required packages…

I: Resolving dependencies of base packages…

I: Found additional required dependencies: libaudit-common libaudit1 libbz2-1.0 libcap-ng0 libdb5.3 libdebconfclient0 libgcrypt20 libgpg-error0 liblz4-1 libncursesw5 libsemanage-common libsemanage1 libsystemd0 libudev1 libustr-1.0-1

I: Found additional base dependencies: dmsetup gnupg-agent libapparmor1 libassuan0 libbsd0 libcap2 libcryptsetup4 libcurl3-gnutls libdevmapper1.02.1 libdns-export162 libelf1 libfastjson4 libffi6 libgmp10 libgnutls30 libgssapi-krb5-2 libhogweed4 libidn11 libidn2-0 libip4tc0 libip6tc0 libiptc0 libisc-export160 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblognorm5 libmnl0 libnetfilter-conntrack3 libnettle6 libnfnetlink0 libnghttp2-14 libnpth0 libp11-kit0 libperl5.24 libpsl5 librtmp1 libsasl2-2 libsasl2-modules-db libseccomp2 libsqlite3-0 libssh2-1 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libunistring0 libxtables12 openssl perl perl-modules-5.24 pinentry-curses xxd

base-files base-passwd bash bsdutils coreutils dash debconf debianutils diffutils dpkg e2fslibs e2fsprogs findutils gcc-5-base gcc-6-base grep gzip hostname init-system-helpers libacl1 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libc-bin libc6 libcap-ng0 libcomerr2 libdb5.3 libdebconfclient0 libfdisk1 libgcc1 libgcrypt20 libgpg-error0 liblz4-1 liblzma5 libmount1 libncurses5 libncursesw5 libpam-modules libpam-modules-bin libpam-runtime libpam0g libpcre3 libselinux1 libsemanage-common libsemanage1 libsepol1 libsmartcols1 libss2 libsystemd0 libtinfo5 libudev1 libustr-1.0-1 libuuid1 login lsb-base mawk mount multiarch-support ncurses-base ncurses-bin passwd perl-base sed sensible-utils sysvinit-utils tar tzdata util-linux zlib1g adduser apt apt-transport-https apt-utils blends-tasks bsdmainutils ca-certificates cpio cron debconf-i18n debian-archive-keyring dmidecode dmsetup gnupg gnupg-agent gpgv ifupdown init iproute2 iptables iputils-ping isc-dhcp-client isc-dhcp-common kmod libapparmor1 libapt-inst2.0 libapt-pkg5.0 libassuan0 libbsd0 libcap2 libcryptsetup4 libcurl3-gnutls libdevmapper1.02.1 libdns-export162 libelf1 libestr0 libfastjson4 libffi6 libgdbm3 libgmp10 libgnutls30 libgssapi-krb5-2 libhogweed4 libidn11 libidn2-0 libip4tc0 libip6tc0 libiptc0 libisc-export160 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblogging-stdlog0 liblognorm5 libmnl0 libnetfilter-conntrack3 libnettle6 libnewt0.52 libnfnetlink0 libnghttp2-14 libnpth0 libp11-kit0 libperl5.24 libpipeline1 libpopt0 libprocps6 libpsl5 libreadline6 libreadline7 librtmp1 libsasl2-2 libsasl2-modules-db libseccomp2 libslang2 libsqlite3-0 libssh2-1 libssl1.0.2 libssl1.1 libstdc++6 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libunistring0 libusb-0.1-4 libxapian30 libxtables12 logrotate nano netbase openssl perl perl-modules-5.24 pinentry-curses procps readline-common rsyslog systemd systemd-sysv tasksel tasksel-data udev vim-common vim-tiny wget whiptail xxd

This is what debootstrap considers a base filesystem. We already see things that will not be needed in a container. Let’s create the filesystem.

mkdir debian_root && cd debian_root

sudo debootstrap sid .

We can then chroot to it manually. Some preparations need to be done to interface the new userspace with the virtual filesystems it expects.

sudo mount -t devpts devpts debian_root/dev/pts

sudo mount -t proc proc debian_root/proc

sudo mount -t sysfs sysfs debian_root/sys

sudo chroot debian_root

That is already more cumbersome than using Docker. Docker also offers more advanced isolation using newer kernel features like namespaces and cgroups.

It is easier to import this filesystem as a Docker image.

tar -c * | docker import - minidebian:raw

Now we can start freeing up space. The problem is that, because Docker uses overlays, you will not get a smaller container even if you delete things. When you delete something in an upper layer, it is just marked as deleted, so that you can go back to the original contents simply by getting rid of the upper layer.

In order to get around this, we can repack everything into a single layer with

docker container create --name minidebian-container minidebian

docker export minidebian-container | docker import - minidebian:raw

When we are happy with the result, we end up with a Docker image with no metadata. All that is left is creating a basic Dockerfile in an empty directory

FROM minidebian:raw

LABEL description="Minimal Debian 8 image"

MAINTAINER Ignacio Núñez Hernanz <nacho@ownyourbits.com>

CMD ["/bin/bash"]

 

and then building the final image:

docker build . -t minidebian:latest

In this example, we have only told Docker to spawn bash if no other arguments are given.

In the next post we will create a LAMP installation on top of this small Debian layer.

Source