Java comes to the official OpenFaaS templates

At the core of OpenFaaS is a community which is trying to Make Serverless Functions Simple for Docker and Kubernetes. In this blog post I want to show you the new Java template released today which brings Serverless functions to Java developers.

If you’re not familiar with the OpenFaaS CLI, it is used to generate new files with everything you need to start building functions in your favourite programming language.

The new template made available today provides Java 9 using the OpenJDK, Alpine Linux and Gradle as the build system. The serverless runtime for OpenFaaS uses the new accelerated watchdog built out in the OpenFaaS Incubator organisation on GitHub.

Quickstart

First of all, set up OpenFaaS on your laptop or the cloud with Kubernetes or Docker Swarm. Follow the quickstart here

Checklist:

  • I have my API Gateway URL
  • I’ve installed the faas-cli
  • I have Docker installed
  • I have a Docker Hub account or similar local Docker registry available

I recommend using Visual Studio Code to edit your Java functions. You can also install the Java Extension Pack from Microsoft.

Generate a Java function

You can pull templates from any supported GitHub repository; this means that teams can build their own templates for the golden Linux images needed for compliance in the enterprise.

$ faas-cli template pull

You can list all the templates you’ve downloaded like this:

$ faas-cli new --list


java8

Tip: Before we get started, sign up for a Docker Hub account, or log into your own local Docker registry.

Below, update username=alexellis2 to your Docker Hub username or private registry address. Now generate a new Java function using the faas-cli, which you should have installed.

export username=alexellis2

mkdir -p blog
cd blog

faas-cli new --lang java8 hello-java --prefix=$username

This generates several files:

  • build.gradle – specify any other JAR files or code repositories needed
  • settings.gradle – specify any other build settings needed

You then get Handler.java and HandlerTest.java files in the ./src folder.

package com.openfaas.function;

import com.openfaas.model.IHandler;
import com.openfaas.model.IResponse;
import com.openfaas.model.IRequest;
import com.openfaas.model.Response;

public class Handler implements IHandler {

    public IResponse Handle(IRequest req) {
        Response res = new Response();
        res.setBody("Hello, world!");

        return res;
    }
}

Contents of ./hello-java/src/main/java/com/openfaas/function/Handler.java

Build and deploy the function

Now use the faas-cli to build the function; you will see Gradle kick in and start downloading the dependencies it needs:

faas-cli build -f hello-java.yml

If you are running on Kubernetes, then you may need to pass the --gateway flag with the URL you used for the OpenFaaS portal. You can also set this in the OPENFAAS_URL environment variable.

faas-cli deploy -f hello-java.yml --gateway 127.0.0.1:31112

Test the function

You can now test the function via the OpenFaaS UI portal, using Postman, the CLI or even curl.

export OPENFAAS_URL=http://127.0.0.1:31112/

echo -n "" | faas-cli invoke hello-java

Add a third-party dependency

You can now add a third-party dependency such as okhttp, a popular and easy-to-use HTTP client. We will create a very rudimentary HTTP proxy that simply fetches the text of any URL passed in via the request.

  • Scaffold a new template

$ faas-cli new --lang java8 web-proxy

  • Edit build.gradle

At the end of the dependencies { block, add the following:

implementation 'com.squareup.okhttp3:okhttp:3.10.0'
implementation 'com.squareup.okio:okio:1.14.1'

  • Edit Handler.java

Paste the following into your Handler.java file; this imports OkHttpClient into scope.

package com.openfaas.function;

import com.openfaas.model.IHandler;
import com.openfaas.model.IResponse;
import com.openfaas.model.IRequest;
import com.openfaas.model.Response;

import java.io.IOException;

import okhttp3.OkHttpClient;

public class Handler implements IHandler {

    public IResponse Handle(IRequest req) {
        IResponse res = new Response();

        try {
            OkHttpClient client = new OkHttpClient();

            okhttp3.Request request = new okhttp3.Request.Builder()
                .url(req.getBody())
                .build();

            okhttp3.Response response = client.newCall(request).execute();
            String ret = response.body().string();
            res.setBody(ret);

        } catch (Exception e) {
            e.printStackTrace();
            res.setBody(e.toString());
        }

        return res;
    }
}
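If you would rather avoid a third-party dependency, a similar fetch can be done with the JDK's built-in HttpURLConnection. This is a minimal sketch, not the template's own code; the UrlFetcher class and its method names are illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative helper: fetch the body of a URL with JDK classes only.
public class UrlFetcher {

    // Read an InputStream fully into a UTF-8 String.
    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    // Fetch the response body for a URL; no extra JARs needed.
    static String fetch(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try {
            return readAll(conn.getInputStream());
        } finally {
            conn.disconnect();
        }
    }
}
```

In a handler like the one above, you would call fetch(req.getBody()) inside Handle and wrap it in the same try/catch; okhttp remains the nicer API once you need timeouts, headers, or connection pooling.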

  • Package, deploy and test

faas-cli build -f web-proxy.yml
faas-cli push -f web-proxy.yml
faas-cli deploy -f web-proxy.yml

Now test it out with a JSON endpoint returning the position of the International Space Station.

$ echo -n "http://api.open-notify.org/iss-now.json" | faas-cli invoke web-proxy

Parse a JSON request

You can use your preferred JSON library to parse a request in JSON format. This example uses Google’s GSON library and loads a JSON request into a Java POJO.

  • Create a function

faas-cli new --lang java8 buildinfo

  • Edit build.gradle

Within the dependencies block, add:

implementation 'com.google.code.gson:gson:2.8.5'

  • Edit Handler.java

package com.openfaas.function;

import com.openfaas.model.IHandler;
import com.openfaas.model.IResponse;
import com.openfaas.model.IRequest;
import com.openfaas.model.Response;

import com.google.gson.*;

public class Handler implements IHandler {

    public IResponse Handle(IRequest req) {
        Response res = new Response();

        Gson gson = new Gson();
        BuildInfo buildInfo = gson.fromJson(req.getBody(), BuildInfo.class);

        res.setBody("The status of the build is: " + buildInfo.getStatus());

        return res;
    }
}

class BuildInfo {
    private String status = "";
    public String getStatus() { return this.status; }
}

Build, push and deploy your function.

Now invoke it via the CLI:

$ echo '{"status": "queued"}' | faas-cli invoke buildinfo
The status of the build is: queued

Download and parse JSON from a URL

In this example I will show you how to fetch the manifest file from the OpenFaaS Function Store; we will then deserialize it into an ArrayList and print out the count.

  • Create a function named deserialize

  • Edit build.gradle

Within dependencies add:

implementation 'com.google.code.gson:gson:2.8.5'
implementation 'com.squareup.okhttp3:okhttp:3.10.0'
implementation 'com.squareup.okio:okio:1.14.1'

  • Handler.java

package com.openfaas.function;

import com.openfaas.model.IHandler;
import com.openfaas.model.IResponse;
import com.openfaas.model.IRequest;
import com.openfaas.model.Response;

import com.google.gson.*;
import com.google.gson.reflect.TypeToken;
import okhttp3.OkHttpClient;
import java.util.ArrayList;

public class Handler implements IHandler {

    public IResponse Handle(IRequest req) {
        Response res = new Response();

        Gson gson = new Gson();
        String url = "https://raw.githubusercontent.com/openfaas/store/master/store.json";
        ArrayList<Function> functions = gson.fromJson(downloadFromURL(url), new TypeToken<ArrayList<Function>>(){}.getType());

        int size = functions.size();
        String functionCount = Integer.toString(size);
        res.setBody(functionCount + " function(s) in the OpenFaaS Function Store");
        return res;
    }

    public String downloadFromURL(String url) {
        String ret = "{}";

        try {
            OkHttpClient client = new OkHttpClient();
            okhttp3.Request request = new okhttp3.Request.Builder()
                .url(url)
                .build();

            okhttp3.Response response = client.newCall(request).execute();
            ret = response.body().string();
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println(e.toString());
        }
        return ret;
    }
}

class Function {
    public String Name = "";
}

Here is the output:

$ echo | faas-cli invoke deserialize ; echo
16 function(s) in the OpenFaaS Function Store

Wrapping up

We have now packaged and deployed a Serverless function written in Java. The new OpenFaaS watchdog component keeps your function hot, which ensures the JVM is re-used between invocations. This approach enables high throughput for your code.
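One way to picture the benefit of a warm JVM: because the process stays up between requests, static state (a counter, a connection pool, a JIT-compiled hot path) initialised on the first call is still there on the next one. A minimal, illustrative sketch follows; WarmDemo is a made-up class, not part of the template's API:

```java
// Illustrative only: static state survives across invocations while
// the watchdog keeps one JVM process alive between requests.
public class WarmDemo {

    private static int invocations = 0; // persists for the life of the JVM

    static String handle(String input) {
        invocations++;
        return "invocation #" + invocations;
    }

    public static void main(String[] args) {
        // Two "requests" served by the same warm process see shared state:
        System.out.println(handle("a"));
        System.out.println(handle("b"));
    }
}
```

In a fork-per-request model that counter would reset to zero every time; with the accelerated watchdog it keeps counting, which is exactly why throughput improves.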

Let us know what you think of the new Java template by tweeting to @openfaas or join the Slack community for one of the special-interest channels like #kubernetes or #templates.

Take it further

If you would like to use another JDK version, a different base image for the Linux container, or even a different build tool like Maven, you can fork the templates repository and add your own variant.

Contributions are welcome, so if you have an enhancement that will benefit the community, please feel free to suggest it over on GitHub.

The Java 8 + Gradle 4.8.1 template is available here:

https://github.com/openfaas/templates/tree/master/template/java8

Source

Eclipse Che 6.6 Release Notes

[This article is cross-posted from the Eclipse Che Blog.]

Eclipse Che 6.6 is here! Since the release of Che 6.0, the community has added a number of new capabilities:

  • Kubernetes support: Run Che on Kubernetes and deploy it using Helm.
  • Hot server updates: Upgrade Che with zero downtime.
  • C/C++ support: ClangD Language Server was added.
  • Camel LS support: Apache Camel Language Server Protocol (LSP) support was added.
  • Eclipse Java Development Tools (JDT) Language Server (LS): Extended LS capabilities were added for Eclipse Che.
  • Faster workspace loading: Images are pulled in parallel with the new UI.

Quick Start

Che is a cloud IDE and containerized workspace server. You can get started with Che by using the following links:

Kubernetes Support (#8559)

In the past, Eclipse Che was primarily targeted at Docker. However, with the rise of Kubernetes, we have added OpenShift and native Kubernetes as primary deployment platforms.

Since the 6.0.0 release, we have made a number of changes to ensure that Che works with Kubernetes. These changes were related to volume management for workspaces, routing, service creation, and more.

We have also recently added Helm charts for deploying Che on Kubernetes. Helm is a popular application template system for deploying container applications on Kubernetes. Helm charts were first included in the 6.2.0 release, and support has improved through the 6.3.0 and 6.4.0 releases.

Much of the work to support TLS routes and multiuser Che deployments using Helm was contributed by Guy Daich from SAP. Thank you, Guy!

Learn more about Che on Kubernetes in the documentation.

Highlighted Issues

See the following pull requests (PRs):

  • Kubernetes-infra: routing, TLS (rebased) #9329
  • Use templates only to deploy Che to OpenShift #9190
  • Kubernetes multiuser helm #8973
  • Kubernetes-infra: server routing strategies and basic TLS kind/enhancement status/code-review #8822
  • Initial support for deploying Che to Kubernetes using Helm charts #8715
  • Added Kubernetes infrastructure #8559

Hot Server Updates (#8547)

In recent releases, we steadily improved the ability to upgrade the Che server without having to stop or restart active workspaces. In Che 6.6.0, it is now possible to upgrade the Che server with no downtime for active workspaces, and there is only a short period when you cannot start new workspaces. This was a request from our enterprise users, but it helps teams of all sizes.

You can learn more in the documentation.

Highlighted Issues

See the following PRs:

  • Implement interruption of start for OpenShift workspaces #5918
  • Implement recovery for OpenShift infrastructure #5919
  • Server checkers won’t be started if a workspace is started by another Che Server instance #9502
  • Document procedure of rolling hot update #9630
  • Adapt ServiceTermination functionality to workspaces recovering #9317
  • Server checkers works incorrectly when k8s/os workspaces are recovered #9453
  • Add an ability to use distributed cache for storing workspace statuses in WorkspaceRuntimes #9206
  • Do not use data volume to store agents on OpenShift/Kubernetes #9040

C/C++ Support with ClangD LS (#7516)

Clang provides a C and C++ language front end for the LLVM compiler suite, and the Clangd LS enables improved support for the C language in Eclipse Che and other LSP-capable IDEs. Many thanks to Hanno Kolvenbach from Silexica for the contribution of this feature.

Code Completion with ClangD

Go to Definition with ClangD

Apache Camel LSP Support (#8648)

Camel-language-server is a server implementation that provides Camel DSL intelligence. The server adheres to the Language Server Protocol and has been integrated into Eclipse Che. The server utilizes Apache Camel.

Related PRs

See the following PRs:

  • Introduce Apache Camel LSP support #8648
  • [533196] Fix Camel LSP artefact to download #9324

Eclipse JDT LS (#6157)

The Eclipse JDT LS combines the power of the Eclipse JDT (that powers the Eclipse desktop IDE) with the Language Server Protocol. The JDT LS can be used with any editor that supports the protocol, including Che of course. The server is based on:

  • Eclipse LSP4J, the Java binding for LSP
  • Eclipse JDT, which provides Java support (code completion, references, diagnostics, and so on) in Eclipse IDE
  • M2Eclipse, which provides Maven support
  • Buildship, which provides Gradle support

Eclipse Che will soon switch its Java support to use the JDT LS. In order to support this transition, we’ve been working hard on supporting extended LS capabilities. Java is one of the most used languages by Che users, and we are going to bring even more capabilities thanks to the JDT LS. Once the switch is done, you can expect more Java versions to be supported, as well as Maven and Gradle support!

Highlighted Issues

See the following PRs:

Faster Workspace Loading (#8748)

In version 6.2.0, we introduced the ability for Che to pull multiple images in parallel through the SPI. This way, when you are working on a multi-container based application, your workspace’s container images are instantiated more quickly.

Highlighted Issues

See the following PR:

  • Che should pull images in parallel (#7102)

Coming Soon

You can keep track of our future plans for Eclipse Che on the project roadmap page. In coming releases, you can expect further improvements to the extensibility of the platform, including an Eclipse Che plugins framework, support for a debug adapter protocol to improve debugging capabilities in the IDE, integration of more cloud-native technologies into workspace management, and scalability and reliability work to make Eclipse Che even more suitable for large enterprise users.

The community is working hard on those different aspects, and we will be speaking about this more extensively in the following weeks. If you are interested in learning more and want to eventually engage, don’t forget to join the bi-weekly community call.

Getting Started

Get started on Kubernetes, OpenShift, or Docker.

Learn more in our documentation and start using a shared Che server or a local instance today.

The Eclipse Che project is always looking for user feedback and new contributors! Find out how you can get involved and help make Che even better.

Source

A Beginner’s Guide to Kubernetes (PodCTL Podcast #38)

 

If you aren’t following the OpenShift Blog, you might not be aware of the PodCTL podcast. It’s a free weekly tech podcast covering containers, Kubernetes, and OpenShift, hosted by Red Hat’s Brian Gracely (@bgracely) and Tyler Britten (@vmtyler). I’m reposting this episode here on the Red Hat Developer Blog because I think their realization is spot on: while early adopters might be deep into Kubernetes, many are just starting and could benefit from some insights.

Original Introduction from blog.openshift.com:

The Kubernetes community now has 10 releases (2.5 years) of software and experience. We just finished KubeCon Copenhagen, OpenShift Commons Gathering, and Red Hat Summit, and we heard lots of companies talk about their deployments and journeys. But many of them took a while (12–18 months) to get to where they are today. This feels like the “early adopters” phase, and we’re beginning to get to the “crossing the chasm” part of the market. So we thought we’d discuss some of the basics, lessons learned, and other things people could use to “fast-track” what they need to be successful with Kubernetes.

The podcast will always be available on the Red Hat OpenShift blog (search: #PodCTL), as well as on RSS Feeds, iTunes, Google Play, Stitcher, TuneIn, and all your favorite podcast players.

PodCTL #38 – A Beginner’s Guide to Kubernetes

June 04, 2018
Brian Gracely & Tyler Britten
PodCTL – Containers | Kubernetes | OpenShift

Overview: Brian and Tyler talk some of the basics, lessons learned, and other things people could use to “fast-track” what they need to be successful with Kubernetes.

Show Notes:

Show Premise:
The Kubernetes community now has 10 releases (2.5 years) of software and experience. We just finished KubeCon and Red Hat Summit, and we heard lots of companies talk about their deployments and journeys. But many of them took a while (12–18 months) to get to where they are today. This feels like the “early adopters” phase, and we’re beginning to get to the “crossing the chasm” part of the market. So we thought we’d discuss some of the basics, lessons learned, and other things people could use to “fast-track” what they need to be successful with Kubernetes.

Topic 1: What are the core skills needed for a team that manages/runs/interacts with a Kubernetes environment?

  • Ops Skills
  • Dev Skills
  • Compliance Skills / Security Skills

Topic 2: What has significantly changed in the Kubernetes world since 2015/16 to today that people should consider taking advantage of?

  • Persistence
  • Immutability
  • Operators
  • Native tools vs. Config Mgmt tools
  • Storage

Topic 3: What do you consider “still hard” and should probably justify more early effort?

  • Security?
  • Storage?
  • Monitoring?
  • Being overly precise about capacity planning?

Topic 4: What patterns have you seen from successful deployments and customer behaviors?

Feedback?
Email: PodCTL at gmail dot com
Twitter: @PodCTL
Web: http://blog.openshift.com, search #PodCTL

Source

Intro to Podman (Red Hat Enterprise Linux 7.6 Beta)

 

Red Hat Enterprise Linux (RHEL) 7.6 Beta was released a few days ago and one of the first new features I noticed is Podman. Podman complements Buildah and Skopeo by offering an experience similar to the Docker command line: allowing users to run standalone (non-orchestrated) containers. And Podman doesn’t require a daemon to run containers and pods, so we can easily say goodbye to big fat daemons.

Podman implements almost all the Docker CLI commands (apart from the ones related to Docker Swarm, of course). For container orchestration, I suggest you take a look at Kubernetes and Red Hat OpenShift.

Podman consists of just a single command to run on the command line. There are no daemons in the background doing stuff, and this means that Podman can be integrated into system services through systemd.

We’ll cover some real examples that show how easy it can be to transition from the Docker CLI to Podman.

Podman installation

If you are running Red Hat Enterprise Linux 7.6 Beta, follow the steps below. If not, you can try Podman online with Katacoda.

You need to enable the extras repo:

$ su -
# subscription-manager repos --enable rhel-7-server-extras-beta-rpms

Please note: at the time this was written, RHEL 7.6 was still in beta. Once GA occurs, change the repository name by removing the -beta-.

Then, launch the proper installation command:

# yum -y install podman

This command will install Podman and also its dependencies: atomic-registries, runC, skopeo-containers, and SELinux policies.

That’s all. Now you can play with Podman.

Command-line examples

Run a RHEL container

For the first example, suppose we want to just run a RHEL container. We are on a RHEL system and we want to run a RHEL container, so it should work:

[root@localhost ~]# docker run -it rhel sh
-bash: docker: command not found

As you can see, there is no docker command on my RHEL 7.6 host. Just replace the docker command with podman:

[root@localhost ~]# podman run -it rhel sh
Trying to pull registry.access.redhat.com/rhel:latest...
Getting image source signatures
Copying blob sha256:367d845540573038025f445c654675aa63905ec8682938fb45bc00f40849c37b
71.46 MB / ? [------------=----------------------------------------] 23s
Copying blob sha256:b82a357e4f15fda58e9728fced8558704e3a2e1d100e93ac408edb45fe3a5cb9
1.27 KB / ? [----=--------------------------------------------------] 0s
Copying config sha256:f5ea21241da8d3bc1e92d08ca4888c2f91ed65280c66acdefbb6d2dba6cd0b29
6.52 KB / 6.52 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
sh-4.2#

We now have our RHEL container. Let’s play with it, check its status, and then delete it and its source image:

sh-4.2# ps ax
PID TTY STAT TIME COMMAND
1 pts/0 Ss 0:00 sh
10 pts/0 R+ 0:00 ps ax
sh-4.2# exit

[root@localhost ~]# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
deda2991f9fd registry.access.redhat.com/rhel:latest sh 3 minutes ago Exited (0) Less than a second ago reverent_torvalds

[root@localhost ~]# podman rm deda2991f9fd
deda2991f9fd43400566abceaa917ecbd59a2e83354c5c9021ba1830a7ab196d

[root@localhost ~]# podman image rm rhel
f5ea21241da8d3bc1e92d08ca4888c2f91ed65280c66acdefbb6d2dba6cd0b29

As you can see, we used the same syntax as we’d use with docker. There are no differences at the moment. I didn’t check the Podman documentation and I started working immediately!

Run a MariaDB persistent container

Let’s move forward and try a more complicated test: run MariaDB 10.2 with some custom variables and try to let its “data” be persistent.

First, let’s download the MariaDB container image and inspect its details:

[root@localhost ~]# podman pull registry.access.redhat.com/rhscl/mariadb-102-rhel7
Trying to pull registry.access.redhat.com/rhscl/mariadb-102-rhel7...
Getting image source signatures
Copying blob sha256:367d845540573038025f445c654675aa63905ec8682938fb45bc00f40849c37b
71.46 MB / ? [------------=----------------------------------------] 10s
Copying blob sha256:b82a357e4f15fda58e9728fced8558704e3a2e1d100e93ac408edb45fe3a5cb9
1.27 KB / ? [----=--------------------------------------------------] 0s
Copying blob sha256:ddec0f65683ad89fc27298921921b2f8cbf57f674ed9eb71eef4e23a9dd9bbfe
6.40 MB / ? [--------------=----------------------------------------] 1s
Copying blob sha256:105cfda934d478ffbf65d74a89af55cc5de1d5bc94874c2d163c45e31a937047
58.25 MB / ? [-------------------------------------------=-----------] 10s
Copying config sha256:7ac0a23445fec91d4b458f3062e64d1ca4af4755387604f8d8cbec08926867d7
6.79 KB / 6.79 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
7ac0a23445fec91d4b458f3062e64d1ca4af4755387604f8d8cbec08926867d7

[root@localhost ~]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.access.redhat.com/rhscl/mariadb-102-rhel7 latest 7ac0a23445fe 9 days ago 445MB

[root@localhost ~]# podman inspect 7ac0a23445fe

Then we can set up a folder that will handle MariaDB’s data once we start our container:

[root@localhost ~]# mkdir mysql-data
[root@localhost ~]# chown 27:27 mysql-data

Please note: 27 is the UID of the mysql user that will run the MariaDB processes in the container. For this reason, we have to allow it to read from and write to the directory.

And finally, run it:

[root@localhost ~]# podman run -d -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.access.redhat.com/rhscl/mariadb-102-rhel7
71da2bb210b36aaab28a2dc81b8e77da4e1024d1f2d025c0a7b97b075dec1425

[root@localhost ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71da2bb210b3 registry.access.redhat.com/rhscl/mariadb-102-rhel7:latest container-entrypoin… 3 seconds ago Up 3 seconds ago 0.0.0.0:3306->3306/udp, 0.0.0.0:3306->3306/tcp cranky_mahavira

As you can see, the container is up and running, but what is it doing? Let’s check:

[root@localhost ~]# podman logs 71da2bb210b3 | head
=> sourcing 20-validate-variables.sh ...
=> sourcing 25-validate-replication-variables.sh ...
=> sourcing 30-base-config.sh ...
---> 13:12:43 Processing basic MySQL configuration files ...
=> sourcing 60-replication-config.sh ...
=> sourcing 70-s2i-config.sh ...
---> 13:12:43 Processing additional arbitrary MySQL configuration provided by s2i ...
=> sourcing 40-paas.cnf ...
=> sourcing 50-my-tuning.cnf ...
---> 13:12:43 Initializing database ...

Ah! It’s just started and initialized its database. Let’s play with it:

[root@localhost ~]# mysql --user=user --password=pass -h 127.0.0.1 -P 3306 -t
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.2.8-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| db                 |
| information_schema |
| test               |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> use test;
Database changed

MariaDB [test]> show tables;
Empty set (0.00 sec)

Perfect. Now we’ll create a table and then we’ll terminate the container:

MariaDB [db]> CREATE TABLE mytest (username VARCHAR(20), date DATETIME);
Query OK, 0 rows affected (0.02 sec)

MariaDB [db]> show tables;
+--------------+
| Tables_in_db |
+--------------+
| mytest       |
+--------------+
1 row in set (0.00 sec)

MariaDB [db]> Bye

[root@localhost ~]# podman kill 71da2bb210b3
71da2bb210b36aaab28a2dc81b8e77da4e1024d1f2d025c0a7b97b075dec1425

Inspecting the contents of the folder, we can see that the data is still there. Let’s start a new container to check data persistence:

[root@localhost ~]# ls -la mysql-data/
total 41024
drwxr-xr-x. 6 27   27       4096 Aug 24 09:12 .
dr-xr-x---. 4 root root      219 Aug 24 09:28 ..
-rw-rw----. 1 27   27          2 Aug 24 09:12 71da2bb210b3.pid
-rw-rw----. 1 27   27      16384 Aug 24 09:12 aria_log.00000001
-rw-rw----. 1 27   27         52 Aug 24 09:12 aria_log_control
drwx------. 2 27   27         56 Aug 24 09:27 db
-rw-rw----. 1 27   27       2799 Aug 24 09:12 ib_buffer_pool
-rw-rw----. 1 27   27   12582912 Aug 24 09:27 ibdata1
-rw-rw----. 1 27   27    8388608 Aug 24 09:27 ib_logfile0
-rw-rw----. 1 27   27    8388608 Aug 24 09:12 ib_logfile1
-rw-rw----. 1 27   27   12582912 Aug 24 09:12 ibtmp1
-rw-rw----. 1 27   27          0 Aug 24 09:12 multi-master.info
drwx------. 2 27   27       4096 Aug 24 09:12 mysql
-rw-r--r--. 1 27   27         14 Aug 24 09:12 mysql_upgrade_info
drwx------. 2 27   27         20 Aug 24 09:12 performance_schema
-rw-rw----. 1 27   27      24576 Aug 24 09:12 tc.log
drwx------. 2 27   27          6 Aug 24 09:12 test

[root@localhost ~]# podman run -d -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.access.redhat.com/rhscl/mariadb-102-rhel7
0364513f6b6ae1b86ea3752ec732bad757770ca14ec1f879e7487f3f4293004d

[root@localhost ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0364513f6b6a registry.access.redhat.com/rhscl/mariadb-102-rhel7:latest container-entrypoin… 3 seconds ago Up 2 seconds ago 0.0.0.0:3306->3306/udp, 0.0.0.0:3306->3306/tcp heuristic_northcutt

[root@localhost ~]# mysql --user=user --password=pass -h 127.0.0.1 -P 3306 -t
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.2.8-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use db;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [db]> show tables;
+--------------+
| Tables_in_db |
+--------------+
| mytest       |
+--------------+
1 row in set (0.00 sec)

MariaDB [db]> Bye

[root@localhost ~]# podman kill 0364513f6b6a
0364513f6b6ae1b86ea3752ec732bad757770ca14ec1f879e7487f3f4293004d

Great! MariaDB’s data is still there, and the new container picked it up and served it as expected.

Manage containers as system services through systemd and Podman

Finally, we’ll create a simple systemd resource for handling the previously created MariaDB container.

First, we need to create a systemd resource file for handling the brand new container service:

[root@localhost ~]# cat /etc/systemd/system/mariadb-podman.service
[Unit]
Description=Custom MariaDB Podman Container
After=network.target

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "mariadbpodman"
ExecStart=/usr/bin/podman run --name mariadbpodman -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.access.redhat.com/rhscl/mariadb-102-rhel7
ExecReload=-/usr/bin/podman stop "mariadbpodman"
ExecReload=-/usr/bin/podman rm "mariadbpodman"
ExecStop=-/usr/bin/podman stop "mariadbpodman"
Restart=always
RestartSec=30

[Install]

Then we can reload the systemd catalog and start the service:

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl start mariadb-podman
[root@localhost ~]# systemctl status mariadb-podman
mariadb-podman.service - Custom MariaDB Podman Container
Loaded: loaded (/etc/systemd/system/mariadb-podman.service; static; vendor preset: disabled)
Active: active (running) since Fri 2018-08-24 10:14:36 EDT; 3s ago
Process: 19147 ExecStartPre=/usr/bin/podman rm mariadbpodman (code=exited, status=0/SUCCESS)
Main PID: 19172 (podman)
CGroup: /system.slice/mariadb-podman.service
└─19172 /usr/bin/podman run --name mariadbpodman -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DA…

Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140578968823552 [Note] InnoDB: Buffer pool(s) load completed at 180824 14:14:39
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Note] Plugin 'FEEDBACK' is disabled.
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Note] Server socket created on IP: '::'.
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Warning] 'user' entry 'root@71da2bb210b3' ignored in --sk…ve mode.
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Warning] 'user' entry '@71da2bb210b3' ignored in --skip-n…ve mode.
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Warning] 'proxies_priv' entry '@% root@71da2bb210b3' igno…ve mode.
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Note] Reading of all Master_info entries succeded
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Note] Added new Master_info '' to hash table
Aug 24 10:14:39 localhost.localdomain podman[19172]: 2018-08-24 14:14:39 140579889719488 [Note] /opt/rh/rh-mariadb102/root/usr/libexec/mysqld: read…ections.
Aug 24 10:14:39 localhost.localdomain podman[19172]: Version: '10.2.8-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# systemctl stop mariadb-podman
[root@localhost ~]#

Awesome! We just set up a custom system service based on a container managed through Podman.

Further resources

Do you want an easy and fast way to experiment with Podman? Katacoda is the answer. Katacoda is an interactive learning and training platform that lets you learn new technologies using real environments right in your browser!

Check it out here: katacoda.com/courses/containers-without-docker/running-containers-with-podman

To get a better understanding of Podman, see these two blog articles by Dan Walsh:

That’s all! May the containers be with you! 🙂

About Alessandro


Source