Codefresh versus Jenkins X

In a previous blog post, we saw how Codefresh compares to Jenkins. The major takeaway from that post is that Codefresh is a solution for both builds and deployments (CI/CD), while Jenkins supports only builds (CI) and does not handle deployments on its own.

Jenkins X has recently been announced, introduced as a native CI/CD solution for Kubernetes. It adds deployment capabilities to plain Jenkins and also makes entities such as environments and deployments first-class citizens.

In theory, Jenkins X is a step in the right direction especially for organizations that are moving into containers and Kubernetes clusters. It therefore makes sense to see how this new project stands up against Codefresh, the CI/CD solution that was developed with Docker/Helm/Kubernetes support right from its inception.

In practice, Jenkins X has some very strong opinions on the software lifecycle which might not always agree with the processes of your organization.

Jenkins X is still using Jenkins 2.x behind the scenes inheriting all its problems

This is probably the most important point to understand about Jenkins X. Jenkins X is NOT a new version of Jenkins, or even a rewrite. It is a collection of existing services that still includes Jenkins at its core. If you deploy Jenkins X on a cluster, you can easily look at all the individual components:

Jenkins X components

Jenkins X is just Jenkins plus Chartmuseum, Nexus, Mongo, Monocular, etc. The main new addition is the jx executable, which is responsible for installing and managing Jenkins X installations.
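To get a feel for it, here are a few typical jx invocations (a sketch based on the jx CLI as it existed at the time of writing; myapp is an illustrative application name):

# create a Kubernetes cluster on GKE and install Jenkins X on it
jx create cluster gke
# scaffold a new application, complete with pipeline and Helm chart
jx create quickstart
# list the environments (e.g. staging, production) that Jenkins X manages
jx get environments
# promote a specific version of an application to production
jx promote myapp --version 1.0.1 --env production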

Fun fact: Chartmuseum is actually a Codefresh project!

Jenkins X uses a Codefresh project

This means that Jenkins X is essentially a superset of plain Jenkins. It adds new deployment abilities to the mix, but it also inherits all of the existing problems. Most points we mentioned in the original comparison still hold true:

  • Plugins and shared libraries are still present
  • Upgrading and configuring the Jenkins server is still an issue
  • Pipelines are still written in Groovy instead of declarative YAML

You can even visit the URL of Jenkins in a Jenkins X installation and see the familiar UI.

Jenkins 2 inside Jenkins X

What is more troubling is that all Jenkins configuration options are still valid. So if you want to change the default Docker registry, for example, you still need to manage it via the Jenkins UI.

Another downside of the inclusion of all these off-the-shelf tools is the high system requirements. Gone are the days when you could just download the Jenkins war on your laptop and try it out. I tried to install Jenkins X on my laptop and failed simply because of lack of resources.

The recommended installation mode is to use a cloud provider with 5 nodes of 20-30GB of RAM each. I tried to get away with just 3 nodes and was not even able to compile/deploy the official quickstart application.

Jenkins X deploys only on Kubernetes clusters, Codefresh can deploy anywhere

Kubernetes popularity is currently exploding, and Jenkins X contains native support for Kubernetes deployments. The problem, however, is that Jenkins X can ONLY deploy on Kubernetes clusters and nothing else.

All new Jenkins X concepts such as environments, previews, and promotions always target a namespace in a Kubernetes cluster. You can still use Jenkins X for compiling and packaging any kind of application, but the continuous delivery part is strictly constrained to Kubernetes clusters.

Codefresh, on the other hand, can deploy anywhere. Even though there is native support for Helm and Kubernetes dashboards, you can use Codefresh to deploy to virtual machines, Docker Swarm, bare metal clusters, FTP sites, and any other deployment target you can imagine.

This means that Codefresh has a clear advantage for organizations that are migrating to Kubernetes while still having legacy applications around, as Codefresh can be used in a gradual way.

Jenkins X requires Helm, in Codefresh Helm is optional

Helm is the package manager for Kubernetes that allows you to group multiple microservices together, manage templates for Kubernetes manifests and also perform easy rollbacks to previous releases.
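In day-to-day use, those capabilities map to a handful of commands (a sketch in Helm 2-era syntax, which was current when this was written; the release and chart names are illustrative):

# install a chart as a named release, upgrade it, then roll back to revision 1
helm install --name my-release stable/mysql
helm upgrade my-release stable/mysql
helm rollback my-release 1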

Codefresh has native support for Helm by offering a private Helm repository and a dedicated Helm dashboard.

Codefresh Helm Dashboard

We love Helm and we believe it is the future of Kubernetes deployments. However, you don’t have to adopt Helm in order to use Codefresh.

In Codefresh, the usage of Helm is strictly optional. As mentioned in the previous section, you can use Codefresh to deploy anywhere, including plain Kubernetes clusters without Helm installed. We have several customers that use Kubernetes without Helm, and some of the facilities we offer, such as blue/green and canary deployments, are designed with that in mind.

Jenkins X, on the other hand, REQUIRES the adoption of Helm. All deployments happen via Helm as the only option.

Helm is also used to represent complete environments (the Helm umbrella chart pattern). The Git repositories that back each environment are based on Helm charts.

We love the fact that Jenkins X has adopted Helm, but making it the only deployment option is a very aggressive stance for organizations that want to deploy legacy applications as well.

Representing an environment with a Helm umbrella chart is a good practice but in Codefresh this is just one of the many ways that you can do deployments.

Jenkins X enforces trunk-based development, Codefresh allows any Git workflow

From the previous sections, it should become clear that Jenkins X is a very opinionated solution that has a strong preference on how deployments are handled.

The problem is that these strong opinions also extend to how development happens during the integration phase. Jenkins X is designed around trunk-based development. The mainline branch is what is always deployed, and merging a pull request also implies a new release.

Trunk-based development is not bad on its own, but again, there are several organizations that have selected other strategies better suited to their needs. The ever-popular gitflow paradigm might have lost popularity in recent years, but in some cases it really is a better solution. Jenkins X does not support it at all.

There are several organizations where even the concept of a single “production” branch might not exist at all. In some cases there are several production branches (i.e., branches from which releases happen), and adopting them in Jenkins X would be a difficult if not impossible task.

Codefresh does not enforce a specific git methodology. You can use any git workflow you want.

Example of Codefresh pipeline

A similar situation occurs with versioning. Jenkins X is designed around semantic versioning of Git tags. Codefresh does not enforce any specific versioning pattern.

In summary, with Codefresh you are free to choose your own workflow. With Jenkins X there is only a single way of doing things.

Jenkins X has no Graphical interface, Codefresh offers built-in GUI dashboards

Jenkins X does not have a UI of its own. The only UI present is the one from Jenkins which, as we have already explained, knows only about jobs and builds. In the case of a headless install, not even that UI is available.

This means that all the new Jenkins X constructs such as deployments, applications, and environments are only available in the command line.

The command line is great for developers and engineers who want to manage Jenkins X, but very inflexible when it comes to getting a general overview of everything that is happening. If Jenkins X is installed in a big organization, several non-developers (e.g. project manager, QA lead) will need an easy way to see what their team is doing.

Unfortunately, in its present state, only the jx executable offers a view of the Jenkins X flows via the command line.

Codefresh has a full UI for both CI and CD parts of the software lifecycle that includes everything in a single place. There are graphical dashboards for:

  • Git Repos
  • Pipelines
  • Builds
  • Docker images
  • Helm repository
  • Helm releases
  • Kubernetes services

It is very easy to get the full story of a feature from commit until it reaches production, as well as to understand what is deployed where.

Enterprise Helm promotion board

The UI offered from Codefresh is targeted at all stakeholders that take part in the software delivery process.

Jenkins X uses Groovy/JX pipelines, Codefresh uses declarative YAML

We already mentioned that Jenkins X is using plain Jenkins under the hood. This means that pipelines in Jenkins X are also created with Groovy and shared libraries. Codefresh, on the other hand, uses declarative YAML.

The big problem here is that the jx executable (which is normally used to manage Jenkins X installations) can also be injected into Jenkins pipelines, extending their pipeline steps. This means that pipelines are now even more complicated, as one must also learn how the jx executable works and how it affects the pipelines it takes part in.

Here is an official example from the quickstart of Jenkins X (this is just a segment of the full pipeline):

steps {
  container('maven') {
    // ensure we're not on a detached head
    sh "git checkout master"
    sh "git config --global credential.helper store"
    sh "jx step git credentials"
    // so we can retrieve the version in later steps
    sh "echo \$(jx-release-version) > VERSION"
    sh "mvn versions:set -DnewVersion=\$(cat VERSION)"
    sh "jx step tag --version \$(cat VERSION)"
    sh "mvn clean deploy"
    sh "export VERSION=`cat VERSION` && skaffold build -f skaffold.yaml"
    sh "jx step post build --image $DOCKER_REGISTRY/$ORG/$APP_NAME:\$(cat VERSION)"
  }
}

You can see in the example above that jx now takes part in the Jenkins X pipelines, requiring developers to learn yet another tool.

The problem here is that for Jenkins X to do its magic, your pipelines must behave as they are expected to. Modifying Jenkins X pipelines therefore becomes extra difficult, as you must honor the assumptions already present in the pipeline (which match the opinionated design of Jenkins X).

This problem does not exist in Codefresh. You can have a single pipeline that does everything with conditionals, or multiple independent pipelines, or linked pipelines, or parallel pipelines, or any other pattern you wish for your project.
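For contrast with the Groovy snippet above, here is a rough sketch of what a declarative Codefresh pipeline can look like (step names, the image name, and the tag variable are illustrative rather than copied from a real project):

version: '1.0'
steps:
  build_my_image:
    type: build
    # hypothetical image name, tagged with the normalized branch name
    image_name: my-org/my-app
    tag: ${{CF_BRANCH_TAG_NORMALIZED}}
  run_unit_tests:
    image: maven:3-jdk-8
    commands:
      - mvn test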

Jenkins X does not cache dependencies by default, Codefresh has automatic caching

Jenkins X scales by creating additional builders on the Kubernetes cluster where it is installed. When you start a new pipeline, a new builder is created dynamically on the cluster, containing a Jenkins slave that is automatically connected to the Jenkins master. Jenkins X also supports a serverless installation where there is no need for a Jenkins master at all (which normally consumes resources even when no builds are running).

While this approach is great for scalability, it is ineffective when it comes to build caching. Each build node starts in a fresh state without any knowledge of previous builds. This means that all the module dependencies needed by your programming language are downloaded again and again, every time.

A particularly bad example of this is the quickstart demo offered by Jenkins X. It is a Java application that downloads its Maven dependencies:

  1. Whenever a branch is built
  2. When a pull request is created
  3. When a pull request is merged back to master

Basically, any Jenkins X build will download all dependencies each time it runs.

The builders in all 3 cases are completely isolated. For big projects (think also node modules, pip packages, Ruby gems, etc.), where the dependencies actually dominate the compile time, this problem can quickly get out of hand, resulting in very slow Jenkins X builds.

Codefresh solves this problem by attaching a Docker volume in all steps on the pipeline. This volume is cached between subsequent builds so that dependencies are only downloaded once.

Codefresh caches the shared volume

In summary, Codefresh has much better caching than Jenkins X, resulting in very fast builds and quick feedback times from commit to result.
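The difference is easy to reproduce with plain Docker (an illustrative sketch; the volume name and project path are arbitrary). Once a named volume holds Maven's local repository, only the first build pays the download cost:

# the m2cache volume persists /root/.m2 across builds, so dependencies
# are downloaded once instead of on every run
docker run --rm -v m2cache:/root/.m2 -v "$PWD":/app -w /app \
  maven:3-jdk-8 mvn package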

Conclusion

In this article, we have seen some of the shortcomings of Jenkins X. Most of them center around the opinionated workflow and design decisions behind Jenkins X.

The biggest drawback, of course, is that Jenkins X also inherits all the problems of Jenkins, and in some cases (e.g., pipeline syntax) it makes things even more complicated.

Finally, in its present state, Jenkins X has only a command line interface, making the visualization of its environments and applications a very difficult process.

In the following table, we summarize the characteristics of Jenkins X vs Codefresh:

Feature | Jenkins X | Codefresh
Git repository dashboard | No | Yes
Git support | Yes | Yes
Quay/ACR/JFrog/Dockerhub triggers | No | Yes
Git flow | Trunk-based | Any
Versioning | Git tags/semantic | Any
Pipeline management | CLI | Graphical and CLI
Built-in dynamic test environments | Yes | Yes
Docker-based builds | Yes | Yes
Native build caching | No | Yes
Pipelines as code | Groovy | YAML
Native monorepo support | No | Yes
Extension mechanism | Groovy shared libraries | Docker images
Installation | Cloud/On-prem | Cloud/On-prem/Hybrid
Internal Docker registry | Yes | Yes
Docker registry dashboard | No | Yes
Custom Docker image metadata | No | Yes
Native Kubernetes deployment | Yes | Yes
Kubernetes release dashboard | No | Yes
Deployment mode | Helm only | Helm or plain K8s
Integrated Helm repository | Yes | Yes
Helm app dashboard | Yes | Yes
Helm release dashboard | No | Yes
Helm releases history and management (UI) | No | Yes
Helm rollback to any previous version (UI) | No | Yes
Deploy to Bare Metal/VM | No | Yes
Deployment management | CLI | GUI and CLI


Source

Securing a Containerized Instance of MongoDB

MongoDB, the popular open source NoSQL database, has been in the news a lot recently, and not for reasons that are good for MongoDB admins. Early this year, reports began appearing of MongoDB databases being “taken hostage” by attackers who delete all of the data stored inside the databases, then demand ransoms to restore it. Security is always important, no matter which type of database you’re using. But the recent spate of MongoDB attacks makes it especially crucial to secure any MongoDB databases that you may use as part of your container stack. This article explains what you need to know to keep MongoDB secure when it is running as a container. We’ll go over how to close the vulnerability behind the recent ransomware attacks in a running MongoDB container, as well as how to modify a MongoDB Dockerfile to change the default behavior permanently.

Why the Ransomware Happened: MongoDB’s Default Security Configuration


The ransomware attacks against MongoDB weren’t enabled by a flaw inherent in MongoDB itself, per se, but rather by some weaknesses that result from default configuration parameters in an out-of-the-box installation of MongoDB. By default, MongoDB databases, unlike most other popular database platforms, don’t require login credentials. That means anyone can log into the database and start modifying or removing data.

Securing a MongoDB Container

In order to mitigate that vulnerability and run a secure containerized instance of MongoDB, follow the steps below.

Start a MongoDB instance

First of all, start a MongoDB instance in Docker using the most up-to-date image available. Since Docker uses the most recent image by default, a simple command like this will start a MongoDB instance based on an up-to-date image:

docker run --name mongo-database -d mongo

Create a secure MongoDB account

Before disabling password-less authentication to MongoDB, we need to create an account that we can use to log in after we change the default settings. To do this, first log into the MongoDB container with:

docker exec -it mongo-database bash

Then, from inside the container, log into the MongoDB admin interface by launching the mongo shell:

mongo

Now, enter this stanza of code and press Enter:

use admin
db.createUser(
  {
    user: "db_user",
    pwd: "your_super_secure_password",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

This creates a MongoDB user with username db_user and password your_super_secure_password. (Feel free to change these, of course, to something more secure!) The user has admin privileges.

Changing default behavior in the Dockerfile

If you want to make the MongoDB process start with authentication required by default all of the time, you can do so by editing the Dockerfile used to build the container. To do this locally, we’ll first pull the MongoDB Dockerfile from GitHub with:

git clone https://github.com/dockerfile/mongodb

Now, cd into the mongodb directory that Git just created and open the
Dockerfile inside using your favorite text editor. Look for the
following section of the Dockerfile:

# Define default command.
CMD ["mongod"]

Change this to:

# Define default command.
CMD ["mongod", "--auth"]

This way, when mongod is invoked as the container starts, it will run with the --auth flag by default.
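To double-check that the change works, one option is to rebuild and probe the container (a minimal sketch; the image and container names here are illustrative):

# rebuild the image from the edited Dockerfile and start a container from it
docker build -t mongodb-auth .
docker run --name mongo-secure -d mongodb-auth
# with --auth active, unauthenticated privileged commands should now be rejected
docker exec -it mongo-secure mongo --eval "db.adminCommand('listDatabases')"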

Conclusion

If you follow the steps above, you’ll be able to run MongoDB as a Docker
container without becoming one of the tens of thousands of admins whose
MongoDB databases were wiped out and held for ransom by attackers. And
there really is not much to it, other than being aware of the
vulnerabilities inherent in a default MongoDB installation and the steps
for resolving them. Chris Riley (@HoardingInfo) is a technologist who
has spent 12 years helping organizations transition from traditional
development practices to a modern set of culture, processes and tooling.
In addition to being a research analyst, he is an O’Reilly author,
regular speaker, and subject matter expert in the areas of DevOps
strategy and culture. Chris believes the biggest challenges faced in the
tech market are not tools, but rather people and planning.

Source

Top 5 Blog Posts of 2018: Introducing the New Docker Hub

Today, we’re excited to announce that Docker Store and Docker Cloud are now part of Docker Hub, providing a single experience for finding, storing and sharing container images. This means that:

  • Docker Certified and Verified Publisher Images are now available for discovery and download on Docker Hub
  • Docker Hub has a new user experience

Millions of individual users and more than a hundred thousand organizations use Docker Hub, Store and Cloud for their container content needs. We’ve designed this Docker Hub update to bring together the features that users of each product know and love the most, while addressing known Docker Hub requests around ease of use, repository and team management.

Here’s what’s new:

Repositories

  • View recently pushed tags and automated builds on your repository page
  • Pagination added to repository tags
  • Improved repository filtering when logged in on the Docker Hub home page

Organizations and Teams

  • As an organization Owner, see team permissions across all of your repositories at a glance.
  • Add existing Docker Hub users to a team via their email (if you don’t remember their Docker ID)

New Automated Builds

  • Speed up builds using Build Caching
  • Add environment variables and run tests in your builds
  • Add automated builds to existing repositories

Note: For Organizations, GitHub & BitBucket account credentials will need to be re-linked to your organization to leverage the new automated builds. Existing Automated Builds will be migrated to this new system over the next few months. Learn more

Improved Container Image Search

  • Filter by Official, Verified Publisher and Certified images, guaranteeing a level of quality in the Docker images listed by your search query
  • Filter by categories to quickly drill down to the type of image you’re looking for

Existing URLs will continue to work, and you’ll automatically be redirected where appropriate. No need to update any bookmarks.

Verified Publisher Images and Plugins

Verified Publisher Images are now available on Docker Hub. Similar to Official Images, these images have been vetted by Docker. While Docker maintains the Official Images library, Verified Publisher and Certified Images are provided by our third-party software vendors. Interested vendors can sign up at https://goto.docker.com/Partner-Program-Technology.html.

Certified Images and Plugins

Certified Images are also now available on Docker Hub. Certified Images are a special category of Verified Publisher images that pass additional Docker quality, best practice, and support requirements.

  • Tested and supported on Docker Enterprise platform by verified publishers
  • Adhere to Docker’s container best practices
  • Pass a functional API test suite
  • Complete a vulnerability scanning assessment
  • Provided by partners with a collaborative support relationship
  • Display a unique quality mark “Docker Certified”

Source

DevOps and Containers, On-Prem or in the Cloud

The cloud vs.
on-premises debate is an old one. It goes back to the days when the
cloud was new and people were trying to decide whether to keep workloads
in on-premises datacenters or migrate to cloud hosts. But the Docker
revolution has introduced a new dimension to the debate. As more and
more organizations adopt containers, they are now asking themselves
whether the best place to host containers is on-premises or in the
cloud. As you might imagine, there’s no single answer that fits
everyone. In this post, we’ll consider the pros and cons of both cloud
and on-premises container deployment and consider which factors can make
one option or the other the right choice for your organization.

DevOps, Containers, and the Cloud

First, though, let’s take a quick look at the basic relationship
between DevOps, containers, and the cloud. In many ways, the combination
of DevOps and containers can be seen as one way—if not the native
way—of doing IT in the cloud. After all, containers maximize
scalability and flexibility, which are key goals of the DevOps
movement—not to mention the primary reasons for many people in
migrating to the cloud. Things like virtualization and continuous
delivery seem to be perfectly suited to the cloud (or to a cloud-like
environment), and it is very possible that if DevOps had originated in
the Agile world, it would have developed quite naturally out of the
process of adapting IT practices to the cloud.

DevOps and On-Premises

Does that mean, however, that containerization, DevOps, and continuous
delivery are somehow unsuited or even alien to on-premises deployment?
Not really. On-premises deployment itself has changed; it now has many
of the characteristics of the cloud, including a high degree of
virtualization, and relative independence from hardware constraints
through abstraction. Today’s on-premises systems generally fit the
definition of “private cloud,” and they lend themselves well to the
kind of automated development and operations cycle that lies at the
heart of DevOps. In fact, many of the major players in the
DevOps/container world, including AWS and Docker, provide strong support
for on-premises deployment, and sophisticated container management tools
such as Rancher are designed to work seamlessly across the
public/private cloud boundary. It is no exaggeration to say that
containers are now as native to the on-premises world as they are to the
cloud.

Why On-premises?

Why would you want to deploy containers on-premises?

Local Resources

Perhaps the most obvious reason is the need to directly access and use hardware features, such as storage, or processor-specific operations. If, for example, you are using an array of graphics chips for matrix-intensive computation, you are likely to be tied to local hardware. Containers, like virtual machines, always require some degree of abstraction, but running containers on-premises reduces the number of layers of abstraction between the application and underlying metal to a minimum. You can go from the container to the underlying OS’s hardware access more or less directly, something which is not practical with VMs on bare metal, or with containers in the public cloud.

Local Monitoring

In a similar vein, you may also need containers to monitor, control, and manage local devices. This may be an important consideration in an industrial setting or a research facility, for example. It is, of course, possible to perform monitoring and control functions with more traditional types of software. The combination of containerization and continuous delivery, however, allows you to quickly update and adapt software in response to changes in manufacturing processes or research procedures.

Local Control Over Security

Security may also be a major consideration when it comes to deploying containers on-premises. Since containers access resources from the underlying OS, they have potential security vulnerabilities; in order to make containers secure, it is necessary to take positive steps to add security features to container systems. Most container-deployment systems have built-in security features. On-premises deployment, however, may be a useful strategy for adding extra layers of security. In addition to the extra security that comes with controlling access to physical facilities, an on-premises container deployment may be able to make use of the built-in security features of the underlying hardware.

Legacy Infrastructure and Cloud Migration

What if you’re not in a position to abandon existing on-premises infrastructure? If a company has a considerable amount of money invested in hardware, or is simply not willing or able to migrate away from a large and complex set of interconnected legacy applications all at once, staying on-premises for the time being may be the most practical (or the most politically prudent) short-to-medium-term choice. By introducing containers (and DevOps practices) on-premises, you can lay out a relatively painless path for gradual migration to the cloud.

Test Locally, Deploy in the Cloud

You may also want to develop and test containerized applications locally, then deploy in the cloud. On-premises development allows you to closely monitor the interaction between your software and the deployment platform, and observe its operation under controlled conditions. This can make it easier to isolate unanticipated post-deployment problems by comparing the application’s behavior in the cloud with its behavior in a known, controlled environment. It also allows you to deploy and test container-based software in an environment where you can be confident that information about new features or capabilities will not be leaked to your competitors.

Public/Private Hybrid

Here’s another point to consider when you’re comparing cloud and
on-premises container deployment: public and private cloud deployment
are not fundamentally incompatible, and in many ways, there is really no
sharp line between them. This is, of course, true for traditional,
monolithic applications (which can, for example, also reside on private
servers while being accessible to remote users via a cloud-based
interface), but with containers, the public/private boundary can be made
even more fluid and indistinct when it is appropriate to do so. You can,
for example, deploy an application largely by means of containers in the
public cloud, with some functions running on on-premises containers.
This gives you granular control over things such as security or
local-device access, while at the same time allowing you to take
advantage of the flexibility, broad reach, and cost advantages of
public-cloud deployment.

The Right Mix for Your Organization

Which type of deployment is better for your company? In general,
startups and small-to-medium-size companies without a strong need to tie
in closely to hardware find it easy to move into (or start in) the
cloud. Larger (i.e. enterprise-scale) companies and those with a need to
manage and control local hardware resources are more likely to prefer
on-premises infrastructure. In the case of enterprises, on-premises
container deployment may serve as a bridge to full public-cloud
deployment, or hybrid private/public deployment. The bottom line,
however, is that the answer to the public cloud vs. on-premises question
really depends on the specific needs of your business. No two
organizations are alike, and no two software deployments are alike, but
whatever your software/IT goals are, and however you plan to achieve
them, between on-premises and public-cloud deployment, there’s more
than enough flexibility to make that plan work.

Source

Top 5 Post: Improved Docker Container Integration with Java 10

As 2018 comes to a close, we looked back at the top five blogs that were most popular with our readers. For those of you that had difficulties with memory and CPU sizing/usage when running a Java Virtual Machine (JVM) in a container, we are kicking off the week with a blog that explains how to get improved Docker container integration with Java 10 in Docker Desktop (Mac or Windows) and Docker Enterprise environments.

Docker and Java

Many applications that run in a Java Virtual Machine (JVM), including data services such as Apache Spark and Kafka and traditional enterprise applications, are run in containers. Until recently, running the JVM in a container presented problems with memory and CPU sizing and usage that led to performance loss. This was because Java didn’t recognize that it was running in a container. With the release of Java 10, the JVM now recognizes constraints set by container control groups (cgroups). Both memory and CPU constraints can be used to manage Java applications directly in containers. These include:

  • adhering to memory limits set in the container
  • setting available CPUs in the container
  • setting CPU constraints in the container

Java 10 improvements are realized in both Docker Desktop (Mac or Windows) and Docker Enterprise environments.

Container Memory Limits

Until Java 9, the JVM did not recognize memory or CPU limits set on the container using flags. In Java 10, memory limits are automatically recognized and enforced.

Java defines a server class machine as having 2 CPUs and 2GB of memory and the default heap size is ¼ of the physical memory. For example, a Docker Enterprise Edition installation has 2GB of memory and 4 CPUs. Compare the difference between containers running Java 8 and Java 10. First, Java 8:

docker container run -it -m 512M --entrypoint bash openjdk:latest

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
uintx MaxHeapSize := 524288000
openjdk version "1.8.0_162"

The max heap size is 512M or ¼ of the 2GB set by the Docker EE installation instead of the limit set on the container to 512M. In comparison, running the same commands on Java 10 shows that the memory limit set in the container is fairly close to the expected 128M:

docker container run -it -m 512M --entrypoint bash openjdk:10-jdk

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
size_t MaxHeapSize = 134217728
openjdk version "10" 2018-03-20

Setting Available CPUs

By default, each container’s access to the host machine’s CPU cycles is unlimited. Various constraints can be set to limit a given container’s access to the host machine’s CPU cycles. Java 10 recognizes these limits:

docker container run -it --cpus 2 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

All CPUs allocated to Docker EE get the same proportion of CPU cycles. The proportion can be modified by changing the container’s CPU share weighting relative to the weighting of all other running containers. The proportion will only apply when CPU-intensive processes are running. When tasks in one container are idle, other containers can use the leftover CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system. These can be set in Java 10:

docker container run -it --cpu-shares 2048 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

The cpuset constraint sets which CPUs the container is allowed to execute on, and Java 10 recognizes this as well:

docker run -it --cpuset-cpus="1,2,3" openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 3

Allocating memory and CPU

With Java 10, container settings can be used to estimate the allocation of memory and CPUs needed to deploy an application. Let’s assume that the memory heap and CPU requirements for each process running in a container have already been determined and JAVA_OPTS set. For example, if you have an application distributed across 10 nodes: five nodes require 512Mb of memory with 1024 CPU-shares each, and another five nodes require 256Mb with 512 CPU-shares each. Note that 1 CPU share proportion is represented by 1024.

For memory, the application would need about 4Gb (3.84Gb) allocated at minimum.

512Mb x 5 = 2.56Gb

256Mb x 5 = 1.28Gb

The application would require 8 CPUs to run efficiently.

1024 x 5 = 5 CPUs

512 x 5 = 2.5 CPUs, rounded up to 3

Best practice suggests profiling the application to determine the memory and CPU allocations for each process running in the JVM. However, Java 10 removes the guesswork when sizing containers, preventing out-of-memory errors in Java applications as well as allocating sufficient CPU to process workloads.
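As a rough illustration of applying those numbers, one of the larger nodes from the example above could be started like this (a sketch; my-java-app is a hypothetical image whose entrypoint runs the JVM):

# 512Mb memory limit and a 1024 CPU-share weighting, matching the sizing above
docker container run -d -m 512M --cpu-shares 1024 my-java-app:latest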

Source

Docker Certified Containers From IBM

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize containers and plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on Docker Enterprise with support from both Docker and the publisher.

As cloud computing continues to transform every business and industry, developers at global enterprises and emerging startups alike are increasingly leveraging container technologies to accelerate how they build modern web, mobile and IoT applications.

IBM has achieved certification of its flagship Db2 database, Websphere-Liberty middleware server and Security Access Manager products now available on Docker Hub. These Certified Containers enable developers to accelerate building cloud-native applications for the Docker Enterprise platform. Developers can deploy these solutions from IBM to any on-premises infrastructure or public cloud. They are designed to assist in the modernization of traditional applications moving from on-premises monoliths into hybrid cloud microservices.

These solutions are validated by both Docker and IBM and are integrated into a seamless support pipeline that provides customers the world-class support they have become accustomed to when working with Docker and IBM.

Check out the latest certified technology available from IBM on Docker Hub.


Source

Your Guide to Container Security

How do you secure containers? That may sound like a simple question, but
it actually has a six- or seven-part answer. That’s because securing
containers doesn’t involve just deploying one tool or paying careful
attention to one area where vulnerabilities can exist. Because a
containerized software stack involves so many different components, you
need to secure many different layers. The tools designed to help you
harden one part of your environment won’t protect other segments.
Commercial security tools do exist, and are designed to provide
relatively comprehensive security for container environments. They are
good tools, and they can certainly be useful parts of a container
security strategy, but they have their limitations. To be truly secure,
you need to analyze each of the layers in your stack, and be sure that
they are covered adequately by the security tools or processes you put
in place. This post helps you plan a complete container security
strategy by outlining all of the layers you need to secure, and
explaining the primary considerations to keep in mind when securing each
one.

Understanding the Layers

When planning your approach to container security, you should begin by
identifying all of the different layers of the software stack that you
have to secure. Those layers include:

  • Your image registry. This is the part of your stack that hosts your
    images. Security vulnerabilities here could allow attackers to add
    malicious images to your environment, or steal private data.
  • The orchestrator. Your orchestrator is the brains of your container
    cluster. If it’s not secured, an attacker could use it to disrupt
    service, or possibly intercept private information.
  • Your hosting infrastructure. The operating system or cloud
    environment that hosts your container environment needs to be secure;
    otherwise, it can become the front door for an attack against your
    environment.
  • Storage systems. To protect the sensitive data hosted on your
    container cluster, you need to keep the storage system you use free
    of vulnerabilities.
  • The container daemon. If the Docker daemon is compromised, attackers
    can shut down containers or gain unauthorized access to the ones
    you’re running.
  • Application code inside your containers. Last but not least, you need
    to make sure the code that runs inside your containers is free of
    vulnerabilities that could allow attackers to disrupt or control your
    application when it is running.



Securing the Stack

Image Registry

There are two main considerations to bear in mind when securing your
image registry. First, you need to make sure to lock down access
control. Your approach to doing this will vary depending on which
registry you use. Some registries offer finer-tuned access control
features than others, but all of the mainstream registries provide some
security controls. (For an overview of different registry options,
including a comparison of the security features built into them, check
out Container Registries You May Have Missed.) The second challenge is
detecting security vulnerabilities inside container images themselves.
For this task, two tools are available: Clair from CoreOS and Docker
Security Scanning from Docker. Both of these image scanners will check
an image for known malware signatures. They’re designed mainly to be
integrated into CoreOS’s and Docker’s registries, but they can also work
in standalone mode by manually scanning an image.

Orchestrator

Securing your orchestrator requires more work than simply turning on
some access control features or running a scanner. Orchestrators are
complex tools, and their inner workings vary from one orchestrator to
another. Explaining all of the details of configuring an orchestrator
for maximum security is beyond the scope of this article. In general,
however, key principles to follow include:

  • Make sure you install your orchestrator from an official source.
    Be wary of third-party package repositories.
  • Keep your orchestrator up-to-date.
  • When configuring your orchestrator and the cluster it manages, limit
    public-facing network connections to the minimum necessary to run
    your application.
  • Configure your orchestrator for automatic failover and high
    availability in order to mitigate the impact of potential DDoS or
    similar attacks.

If you use Kubernetes, you may also find this article on security best
practices to be helpful. A similar guide for Mesos is available here.

Hosting Infrastructure

The infrastructure you use to host your container environment could be
on-premises, in the cloud, or in some cases, a mix of both. Whatever it
looks like, you should be sure to secure the infrastructure as much as
possible. If you manage your host servers yourself, make sure they are
locked down with kernel hardening tools like SELinux or AppArmor. For
cloud-based deployments, take advantage of access control features (such
as IAM roles on AWS) to configure access to your environment. Security
auditing and monitoring tools will help to keep your infrastructure
secure, too.
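As a small illustration of host-level hardening applied per container (a sketch; the echo command is just a stand-in for a real workload):

# default AppArmor profile, no privilege escalation, read-only root filesystem
docker run --rm \
  --security-opt apparmor=docker-default \
  --security-opt no-new-privileges \
  --read-only \
  alpine:latest echo "running with a hardened configuration"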

Storage

Your storage system should be locked down with all the security and
access control tools available to you, as well. That is true whether the
storage serves containers or any other type of application environment.
When it comes to containers and storage specifically, one important
point to keep in mind is that, in many cases, many containers might
share access to the same storage directories. This happens if you map
directories inside containers to a shared location on the host. Under
these conditions, it’s especially important to make sure that any
container with access to a shared storage location is secure. You should
also limit storage access to read-only in cases where a container does
not need write permissions. And it’s always a good idea to have rollback
features built into your storage system so that you can undo changes to
data if necessary.
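For example, enforcing read-only access is a one-flag change at mount time (a sketch; the host path and image name are illustrative):

# the :ro suffix prevents the container from writing to the shared location
docker run -d -v /srv/shared-config:/etc/app/config:ro my-app:latest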

Container Daemon

Running SELinux or AppArmor on the container host can help to defend the
Docker daemon against attack, but that is only one security challenge to
keep in mind when it comes to the daemon. You should also make sure that
daemon socket connections are securely authenticated and encrypted. Of
course, it’s also essential to keep your Docker installation up-to-date
to avoid security vulnerabilities in the daemon.
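For reference, a TLS-protected daemon socket looks roughly like this (a sketch; generating the certificates is a separate step, and the file paths are illustrative):

# the daemon only accepts remote connections that present a cert signed by the CA
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376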

Application Code

You should secure the code running inside your containers just as you
would secure any type of application code—by obtaining the code from a
trusted source and auditing it with security tools designed to catch
vulnerabilities. Clair and Docker Security Scanning can help with the
latter, but they are not designed to be all-purpose static application
security testing solutions. For that reason, you may benefit from
deploying a tool like Veracode or OWASP to scan your code for
vulnerabilities.

Conclusion

Keeping a container environment secure is a big task because there are
so many moving parts. The details of your security strategy will vary
depending on exactly which types of registries, orchestrators, hosting
infrastructure and so on that you choose to include in your stack. But
whatever your environment looks like, the key to keeping it secure is to
remember that there is no one-stop shopping. You have to keep all of the
different layers in mind, and develop a security plan that addresses
each one. Chris Riley (@HoardingInfo) is a technologist who has
spent 12 years helping organizations transition from traditional
development practices to a modern set of culture, processes and tooling.
In addition to being a research analyst, he is an O’Reilly author,
regular speaker, and subject matter expert in the areas of DevOps
strategy and culture. Chris believes the biggest challenges faced in the
tech market are not tools, but rather people and planning.

Source

Speak at DockerCon San Francisco 2019 – Call for Papers is Open

Whether you missed DockerCon EU in Barcelona, or you already miss the fun, connections and learning you experienced at DockerCon – you won’t have to wait long for the next one. DockerCon returns to San Francisco on April 29 and extends through May 2, 2019 and the Call for Papers is now open. We are accepting talk submissions through January 18th at 11:59 PST.

Submit a Talk

Attending DockerCon is an awesome experience, but so is speaking at DockerCon – it’s a great way to get to know the community, share ideas and collaborate. Don’t be nervous about proposing your idea – no topic is too small or too big. And for some speakers, DockerCon is their first time speaking publicly. Don’t be intimidated, DockerCon attendees are all looking to level up their skills, connect with fellow container fans and go home inspired to implement new containerization initiatives. Here are some suggested topics from the conference committee:

  • “How To” type sessions for developers or IT teams
  • Case Studies
  • Technical deep dives into container and distributed systems related components
  • Cool New Apps built with Docker containers
  • The craziest thing you have containerized
  • Wild Card – anything and everything!
  • The impact of change – both for organizations and ourselves as individuals and communities.
  • Inspirational stories

Note that our attendees expect practical guidance so vendor sales pitches will not be accepted.

Accepted speakers receive a complimentary conference pass and a speaker’s gift, and participate in a networking reception. Additionally, they receive help preparing their session, access to an online recording of their talk, and the opportunity to share their experience with the broader Docker community.
Source

Where are we going with the Giant Swarm API?

In the coming months we are going to take a new direction with our API. In this blogpost we share the reasoning behind this shift.

Our API acts as a gateway between the outside world and the microservices that enable the cluster management features that we provide. It handles our authentication and authorization, and has generally been an enabler of the great user experience our customers have come to love. By using the Giant Swarm API, customers with teams of developers – who no doubt have a lot of familiarity with REST APIs – can create clusters and get the ball rolling quickly. So far this has enabled interesting things like automation that scales dev clusters down over the weekend to save costs, and even temporary clusters that are created as part of a CI pipeline.

Under the hood, we’ve found a lot of success in modeling our services as operators that act on Custom Resources. So, when someone wants to create a cluster by sending a request to POST /v4/clusters/, what is really happening behind the scenes is that the API:

  • authenticates and authorizes the user
  • validates the input
  • either directly or indirectly reads, creates, or modifies some Custom Resources.

A simplified look at the role of our API

As the Kubernetes API has matured, we have been using more and more of its feature set. It all started with using Third Party Resources (TPRs) as a kind of storage backend for some of our state. Instead of trying to run some stateful service (redis, etcd) alongside our microservices we realized we could instead leverage the Kubernetes API (backed by a well-monitored and configured etcd) to store our state in TPRs. However, things were different then. RBAC was still very fresh, and the Kubernetes API was rapidly changing.

Now we have Custom Resource Definitions, Mutating and Validating Admission Controllers, Role Based Access Control, and even the aggregation layer. It’s features like these that raise the question: “Why can’t we simply do everything through the Kubernetes API?”

Now that more and more people are starting to embrace how Kubernetes works, we feel that with well-designed Custom Resources and some extra UX-enhancing touches like setting additionalPrinterColumns, we can make a very usable interface using only the Custom Resources themselves.
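In practice, that would mean cluster management with nothing but standard tooling; something like the following hypothetical session (the resource kind and cluster name are illustrative, not our final CRD design):

# with clusters modeled as Custom Resources, ordinary kubectl workflows apply
kubectl get clusters
kubectl describe cluster my-tenant-cluster
# additionalPrinterColumns on the CRD can surface status fields in this output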

We’re still shuffling things around behind the scenes to try and make it as inviting and friendly as possible, and while we do that, our trusty API will remain available as a kind of curtain in front of the stage. However, the benefits of having the Custom Resources themselves be the interface are becoming clearer and we’re excited to be moving in this more consolidated direction.

This would mean that users can leverage standard Kubernetes tooling to view and manage the state of their tenant clusters. Established Kubernetes workflows and concepts can be reused as well. We’ll also be able to align with the upstream Cluster API.

There won’t be two user management systems (one for users that should have access to our API, and one for those that should have access to the Kubernetes API). Role Based Access Control would allow our customers more fine-grained control over what their users are allowed to do. Auditing would also be fully consolidated under the Kubernetes audit log.

As for our Web UI and CLI, they’ll eventually be talking directly to the Kubernetes API of the control plane (once the curtain is ready to come down).

Until then our API will hold up the curtain, though in some places it is already giving a glimpse of what is to come and letting the unaltered structure of the CRD itself shine through. If you’re curious take a look at our recently added status endpoint: https://github.com/giantswarm/api-spec/pull/122

Source

How to Monitor and Secure Containers in Production

Managing containers requires a broad scope, from application development
and testing to system and OS preparation, and as a result, securing
containers can be a broad topic with many separate areas. Taking a
layered security approach works just as well for containers as it does
for any IT infrastructure. There are many precautions that should be
taken before running containers in production.* These include:

  • Hardening, scanning and signing images
  • Implementing access controls through management tools
  • Enabling settings to only use secured communication protocols
  • Using your own digital signatures
  • Securing the host, platforms and Docker by hardening, scanning and
    locking down versions

*Download “15 Tips for Container Security” for a more detailed explanation.

But at the end of the day, containers need to run in a production
environment where constant vigilance is required to keep them secure. No
matter how many precautions and controls have been put in place prior to
running in production, there is always the risk that a hacker may get
through or that malware might try to spread from an internal network.
With the breaking of applications into microservices, internal east-west
traffic increases dramatically, and it becomes more difficult to monitor
and secure traffic. Recent examples include the ransomware attacks which
can exploit thousands of MongoDB or ElasticSearch servers, including
containers, with very simple attack scripts. It’s often reported that
serious data leakage or damage has also happened from an internal
malicious laptop or desktop.

What is ‘Run-Time Container Security’?

Run-time container security focuses on monitoring and securing
containers running in a production environment. This includes container
and host processes, system calls, and most importantly, network
connections. In order to monitor and secure containers during run-time:

  1. Get real-time visibility into network connections.
  2. Characterize application behavior – develop a baseline.
  3. Monitor for violations or any suspicious activities.
  4. Automatically scan all running containers for vulnerabilities.
  5. Enforce or block without impacting applications and services.
  6. Ensure the security service auto-scales with application containers

Why is it Important?

Containers can be deployed in seconds and many architectures assume
containers can scale up or down automatically to meet demand. This makes
it extremely difficult to monitor and secure containers using
traditional tools such as host security, firewalls, and VM security. An
unauthorized network connection often provides the first indicator that
an attack is coming, or a hacker is attempting to find the next
vulnerable attack point. But to separate authorized from unauthorized
connections in a dynamic container environment is extremely difficult.
Security veterans understand that no matter how many precautions have
been taken before run-time, hackers will eventually find a way in, or
mistakes will lead to vulnerable systems. Here are a few requirements
for successfully securing containers during run-time:

  1. The security policy must scale as containers scale up or down,
    without manual intervention
  2. Monitoring must be integrated with or compatible with overlay
    networks and orchestration services such as load balancers and name
    services to avoid blind spots
  3. Network inspection should be able to accurately identify and
    separate authorized from unauthorized connections
  4. Security event logs must be persisted even when containers are
    killed and no longer visible.

Encryption for Containers

Encryption can be an important layer of a run-time security strategy.
Encryption can protect against the stealing of secrets or sensitive data
during transmission. But it can’t protect against application attacks or
other breakouts from a container or host. Security architects should
evaluate the trade-offs between performance, manageability, and security
to determine which connections, if any, should be encrypted. Even if
network connections are encrypted between hosts or containers, all
communication should be monitored at the network layer to determine if
unauthorized connections are being attempted.

Getting Started with Run-Time Container Security

You can try to start doing the actions above manually or with a few open
source tools. Here are some ideas to get you started:

  • Carefully configure VPCs and security groups if you use AWS/ECS
  • Run the CIS Docker Benchmark with the Docker Bench for Security test
    tool (see the example after this list)
  • Deploy and configure monitoring tools like Prometheus or Splunk for
    example
  • Try to configure the network using tools from Kubernetes or
    Weaveworks for basic network policies
  • Load and configure container network plugins from Calico, Flannel or
    Tigera for example
  • If needed, use and configure SECCOMP, AppArmor, or SELinux
  • Adopt the new LinuxKit which has Wireguard, Landlock, Mirage and
    other tools built-in
  • Run tcpdump and Wireshark on a container to diagnose network
    connections and view suspicious activity
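As a concrete starting point, Docker Bench for Security can be run straight from its repository, following the project's documented invocation:

# clone and run Docker Bench for Security against the local host and daemon
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh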

But often you’ll find that there’s too much glue you have to script to
get everything working together. The good news is that there is a
developing ecosystem of container security vendors, my company NeuVector
included, which can provide solutions for the various tasks above. It’s
best to get started evaluating your options now before your containers
actually go into production. But if that ship has sailed make sure a
security solution will layer nicely on a container deployment already
running in production without disrupting it. Here are 10 important
capabilities to look for in run-time security tools:

  1. Discovery and visualization of containers, network connections, and
    system services
  2. Auto-creation and adapting whitelist security policies to decrease
    manual configuration and increase accuracy
  3. Ability to segment applications based on layer 7 (application
    protocol), not just layer 3 or 4 network policies
  4. Threat protection against common attacks such as DDoS and DNS
    attacks
  5. Ability to block suspicious connections without affecting running
    containers, but also the ability to completely quarantine a
    container
  6. Host security to detect and prevent attacks against the host or
    Docker daemon
  7. Vulnerability scanning of new containers starting to run
  8. Integration with container management and orchestration systems to
    increase accuracy and scalability, and improve visualization and
    reporting
  9. Compatible and agnostic to virtual networking such as overlay
    networks
  10. Forensic capture of violations logs, attacks, and packet captures
    for suspicious containers

Today, containers are being deployed to production more frequently for
enterprise business applications. Often these deployments have
inadequate pre-production security controls, and non-existent run-time
security capabilities. It is not necessary to take this level of risk
with important business-critical applications when container security
tools can be deployed as easily as application containers, using the
same orchestration tools as well. Fei Huang is Co-Founder and CEO of
NeuVector. He has over 20 years of experience in enterprise security,
virtualization, cloud and embedded software. He has held engineering
management positions at VMware, CloudVolumes, and Trend Micro and was
the co-founder of DLP security company Provilla. Fei holds several
patents for security, virtualization and software architecture.

Source