{"id":5951,"date":"2018-12-19T00:20:45","date_gmt":"2018-12-19T00:20:45","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw92\/?p=5951"},"modified":"2018-12-28T22:06:32","modified_gmt":"2018-12-28T22:06:32","slug":"sharing-docker-containers-across-devops-environments","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw92\/index.php\/2018\/12\/19\/sharing-docker-containers-across-devops-environments\/","title":{"rendered":"Sharing Docker Containers across DevOps Environments"},"content":{"rendered":"<p><em>Docker provides a powerful tool for creating lightweight images and<br \/>\ncontainerized processes, but did you know it can make your development<br \/>\nenvironment part of the DevOps pipeline too? Whether you&#8217;re managing<br \/>\ntens of thousands of servers in the cloud or are a software engineer looking<br \/>\nto incorporate Docker containers into the software development life<br \/>\ncycle, this article has a little something for everyone with a passion<br \/>\nfor Linux and Docker.<\/em><\/p>\n<p>In this article, I describe how Docker containers flow<br \/>\nthrough the DevOps pipeline. I also cover some advanced DevOps<br \/>\nconcepts (borrowed from object-oriented programming) on how to use<br \/>\ndependency injection and encapsulation to improve the DevOps process.<br \/>\nAnd finally, I show how containerization can be useful for the<br \/>\ndevelopment and testing process itself, rather than just as a<br \/>\nplace to serve up an application after it&#8217;s written.<\/p>\n<h3>Introduction<\/h3>\n<p>Containers are hot in DevOps shops, and their benefits from an<br \/>\noperations and service delivery point of view have been covered well<br \/>\nelsewhere. 
If you want to build a Docker container or deploy a Docker<br \/>\nhost, container or swarm, a lot of information is available.<br \/>\nHowever, very few articles talk about how to <em>develop<\/em> inside the Docker<br \/>\ncontainers that will be reused later in the DevOps pipeline, so that&#8217;s what<br \/>\nI focus on here.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.linuxjournal.com\/sites\/default\/files\/styles\/max_650x650\/public\/u%5Buid%5D\/12282f1%281%29.png\" alt=\"&quot;&quot;\" width=\"650\" height=\"130\" \/><\/p>\n<p><em>Figure 1.<br \/>\nStages a Docker Container Moves Through in a Typical DevOps<br \/>\nPipeline<\/em><\/p>\n<h3>Container-Based Development Workflows<\/h3>\n<p>Two common workflows exist for developing software for use inside Docker<br \/>\ncontainers:<\/p>\n<ol>\n<li>Injecting development tools into an existing Docker container:<br \/>\nthis is the best option for sharing a consistent development environment<br \/>\nwith the same toolchain among multiple developers, and it can be used in<br \/>\nconjunction with web-based development environments, such as Red Hat&#8217;s<br \/>\ncodenvy.com or dockerized IDEs like Eclipse Che.<\/li>\n<li>Bind-mounting a host directory onto the Docker container and using your<br \/>\nexisting development tools on the host:<br \/>\nthis is the simplest option, and it offers flexibility for developers<br \/>\nto work with their own set of locally installed development tools.<\/li>\n<\/ol>\n<p>Both workflows have advantages, but local mounting is inherently simpler. For<br \/>\nthat reason, I focus on the mounting solution as &#8220;the simplest<br \/>\nthing that could possibly work&#8221; here.<\/p>\n<h3>How Docker Containers Move between Environments<\/h3>\n<p>A core tenet of DevOps is that the source code and runtimes that will be used<br \/>\nin production are the same as those used in development. 
In other words, the<br \/>\nmost effective pipeline is one where the identical Docker image can be reused<br \/>\nfor each stage of the pipeline.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.linuxjournal.com\/sites\/default\/files\/styles\/max_650x650\/public\/u%5Buid%5D\/12282f2.png\" alt=\"&quot;&quot;\" width=\"650\" height=\"619\" \/><\/p>\n<p><em>Figure 2. Idealized Docker-Based DevOps Pipeline<\/em><\/p>\n<p>The notion here is that each environment uses the same Docker image and code<br \/>\nbase, regardless of where it&#8217;s running. Unlike systems such as Puppet, Chef<br \/>\nor Ansible that converge systems to a defined state, an idealized Docker<br \/>\npipeline makes duplicate copies (containers) of a fixed image in each<br \/>\nenvironment. Ideally, the only artifact that really moves between<br \/>\nenvironmental stages in a Docker-centric pipeline is the ID of a Docker image;<br \/>\nall other artifacts should be shared between environments to ensure<br \/>\nconsistency.<\/p>\n<h3>Handling Differences between Environments<\/h3>\n<p>In the real world, environmental stages can vary. As a case in point, your QA and<br \/>\nstaging environments may contain different DNS names, different firewall<br \/>\nrules and almost certainly different data fixtures. Combat this<br \/>\nper-environment drift by standardizing services across your different<br \/>\nenvironments. For example, ensuring that DNS resolves &#8220;db1.example.com&#8221; and<br \/>\n&#8220;db2.example.com&#8221; to the right IP addresses in each environment is much more<br \/>\nDocker-friendly than relying on configuration file changes or injectable<br \/>\ntemplates that point your application to differing IP addresses. However, when<br \/>\nnecessary, you can set environment variables for each container rather than<br \/>\nmaking stateful changes to the fixed image. 
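<\/p>\n<p>One common pattern is to keep the stage-specific values in a small environment file and export them into the shell that launches the container. A minimal sketch in plain Bash follows; the file name qa.env and its values are illustrative, and the docker run invocation appears only in a comment:<\/p>

```shell
#!/usr/bin/env bash
# Hypothetical stage file holding QA-specific values.
cat > qa.env <<'EOF'
STAGE=qa
DB=db2
EOF

# Export everything sourced from the file, then stop
# auto-exporting.
set -a
. ./qa.env
set +a

# A deployment script would now hand the values to a
# container with either of:
#   docker run -e STAGE -e DB ...
#   docker run --env-file qa.env ...
printf 'STAGE: %s, DB: %s\n' "$STAGE" "$DB"

rm -f qa.env    # tidy up the illustrative file
```

<p>Because the exported names, not their values, are baked into the deployment script, the same script serves every stage.<\/p>\n<p>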
These variables then can be<br \/>\nmanaged in a variety of ways, including the following:<\/p>\n<ol>\n<li>Environment variables set at container runtime from the command line.<\/li>\n<li>Environment variables set at container runtime from a file.<\/li>\n<li>Autodiscovery using etcd, Consul, Vault or similar.<\/li>\n<\/ol>\n<p>Consider a Ruby microservice that runs inside a Docker container. The service<br \/>\naccesses a database somewhere. In order to run the same Ruby image in each<br \/>\ndifferent environment, but with environment-specific data passed in as<br \/>\nvariables, your deployment orchestration tool might use a shell script like<br \/>\nthis one, &#8220;Example Microservice Deployment&#8221;:<\/p>\n<p># Reuse the same image to create containers in each<br \/>\n# environment.<br \/>\ndocker pull ruby:latest<\/p>\n<p># Bash function that exports key environment<br \/>\n# variables to the container, and then runs Ruby<br \/>\n# inside the container to display the relevant<br \/>\n# values.<br \/>\nmicroservice () {<br \/>\ndocker run -e STAGE -e DB --rm ruby \\<br \/>\n\/usr\/local\/bin\/ruby -e \\<br \/>\n'printf(\"STAGE: %s, DB: %s\\n\",<br \/>\nENV[\"STAGE\"],<br \/>\nENV[\"DB\"])'<br \/>\n}<\/p>\n<p>Table 1 shows an example of how environment-specific information<br \/>\nfor Development, Quality Assurance and Production can be passed to<br \/>\notherwise-identical containers using exported environment variables.<\/p>\n<p>Table 1. 
Same Image with Injected Environment Variables<\/p>\n<table>\n<thead>\n<tr>\n<td>Development<\/td>\n<td>Quality Assurance<\/td>\n<td>Production<\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>export STAGE=dev DB=db1; microservice<\/td>\n<td>export STAGE=qa DB=db2; microservice<\/td>\n<td>export STAGE=prod DB=db3; microservice<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>To see this in action, open a terminal with a Bash prompt and run the commands<br \/>\nfrom the &#8220;Example Microservice Deployment&#8221; script above to pull the Ruby image onto your Docker<br \/>\nhost and create a reusable shell function. Next, run each of the commands from<br \/>\nthe table above in turn to set up the proper environment variables and execute<br \/>\nthe function. You should see the output shown in Table 2 for each simulated<br \/>\nenvironment.<\/p>\n<p>Table 2. Containers in Each Environment Producing Appropriate<br \/>\nResults<\/p>\n<table>\n<thead>\n<tr>\n<td>Development<\/td>\n<td>Quality Assurance<\/td>\n<td>Production<\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>STAGE: dev, DB: db1<\/td>\n<td>STAGE: qa, DB: db2<\/td>\n<td>STAGE: prod, DB: db3<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Despite being a rather simplistic example, what&#8217;s being accomplished is really<br \/>\nquite extraordinary! 
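<\/p>\n<p>If you don&#8217;t have a Docker host handy, the environment-variable plumbing itself can still be exercised with a plain Bash stand-in for the containerized service. This is a sketch only; the real microservice function above runs the Ruby one-liner inside Docker, while the stand-in merely prints the same line:<\/p>

```shell
#!/usr/bin/env bash
# Stand-in for the dockerized microservice: prints the
# same line the containerized Ruby one-liner would.
microservice () {
  printf 'STAGE: %s, DB: %s\n' "$STAGE" "$DB"
}

export STAGE=dev  DB=db1; microservice
export STAGE=qa   DB=db2; microservice
export STAGE=prod DB=db3; microservice
```

<p>The three lines of output mirror Table 2: identical code, with per-stage behavior coming entirely from the injected variables.<\/p>\n<p>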
This is DevOps tooling at its best: you&#8217;re re-using the<br \/>\nsame image and deployment script to ensure maximum consistency, but each<br \/>\ndeployed instance (a &#8220;container&#8221; in Docker parlance) is still being tuned to<br \/>\noperate properly within its pipeline stage.<\/p>\n<p>With this approach, you limit configuration drift and variance by ensuring<br \/>\nthat the exact same image is re-used for each stage of the pipeline.<br \/>\nFurthermore, each container varies only by the environment-specific data or<br \/>\nartifacts injected into them, reducing the burden of maintaining multiple<br \/>\nversions or per-environment architectures.<\/p>\n<h3>But What about External Systems?<\/h3>\n<p>The previous simulation didn&#8217;t really connect to any services outside the<br \/>\nDocker container. How well would this work if you needed to connect your<br \/>\ncontainers to environment-specific things outside the container itself?<\/p>\n<p>Next, I simulate a Docker container moving from development through other stages<br \/>\nof the DevOps pipeline, using a different database with its own data in each<br \/>\nenvironment. This requires a little prep work first.<\/p>\n<p>First, create a workspace for the example files. You can do this by cloning<br \/>\nthe examples from GitHub or by making a directory. As an example:<\/p>\n<p># Clone the examples from GitHub.<br \/>\ngit clone<br \/>\nhttps:\/\/github.com\/CodeGnome\/SDCAPS-Examples<br \/>\ncd SDCAPS-Examples\/db<\/p>\n<p># Create a working directory yourself.<br \/>\nmkdir -p SDCAPS-Examples\/db<br \/>\ncd SDCAPS-Examples\/db<\/p>\n<p>The following SQL files should be in the db directory if you cloned the<br \/>\nexample repository. 
Otherwise, go ahead and create them now.<\/p>\n<p>db1.sql:<\/p>\n<p>-- Development Database<br \/>\nPRAGMA foreign_keys=OFF;<br \/>\nBEGIN TRANSACTION;<br \/>\nCREATE TABLE AppData (<br \/>\nlogin TEXT UNIQUE NOT NULL,<br \/>\nname TEXT,<br \/>\npassword TEXT<br \/>\n);<br \/>\nINSERT INTO AppData<br \/>\nVALUES ('root','developers','dev_password'),<br \/>\n('dev','developers','dev_password');<br \/>\nCOMMIT;<\/p>\n<p>db2.sql:<\/p>\n<p>-- Quality Assurance (QA) Database<br \/>\nPRAGMA foreign_keys=OFF;<br \/>\nBEGIN TRANSACTION;<br \/>\nCREATE TABLE AppData (<br \/>\nlogin TEXT UNIQUE NOT NULL,<br \/>\nname TEXT,<br \/>\npassword TEXT<br \/>\n);<br \/>\nINSERT INTO AppData<br \/>\nVALUES ('root','qa admins','admin_password'),<br \/>\n('test','qa testers','user_password');<br \/>\nCOMMIT;<\/p>\n<p>db3.sql:<\/p>\n<p>-- Production Database<br \/>\nPRAGMA foreign_keys=OFF;<br \/>\nBEGIN TRANSACTION;<br \/>\nCREATE TABLE AppData (<br \/>\nlogin TEXT UNIQUE NOT NULL,<br \/>\nname TEXT,<br \/>\npassword TEXT<br \/>\n);<br \/>\nINSERT INTO AppData<br \/>\nVALUES ('root','production',<br \/>\n'$1$Ax6DIG\/K$TDPdujixy5DDscpTWD5HU0'),<br \/>\n('deploy','devops deploy tools',<br \/>\n'$1$hgTsycNO$FmJInHWROtkX6q7eWiJ1p\/');<br \/>\nCOMMIT;<\/p>\n<p>Next, you need a small utility to create (or re-create) the various SQLite<br \/>\ndatabases. 
This is really just a convenience script, so if you prefer to<br \/>\ninitialize or load the SQL by hand or with another tool, go right ahead:<\/p>\n<p>#!\/usr\/bin\/env bash<\/p>\n<p># You assume the database files will be stored in an<br \/>\n# immediate subdirectory named \"db\" but you can<br \/>\n# override this using an environment variable.<br \/>\n: \"${DATABASE_DIR:=db}\"<br \/>\ncd \"$DATABASE_DIR\"<\/p>\n<p># Scan for the -f flag. If the flag is found, and if<br \/>\n# there are matching filenames, verbosely remove the<br \/>\n# existing database files.<br \/>\npattern='(^|[[:space:]])-f([[:space:]]|$)'<br \/>\nif [[ \"$*\" =~ $pattern ]] &amp;&amp;<br \/>\ncompgen -o filenames -G 'db?' &gt;&amp;-<br \/>\nthen<br \/>\necho \"Removing existing database files ...\"<br \/>\nrm -v db? 2&gt; \/dev\/null<br \/>\necho<br \/>\nfi<\/p>\n<p># Process each SQL dump in the current directory.<br \/>\necho \"Creating database files from SQL ...\"<br \/>\nfor sql_dump in *.sql; do<br \/>\ndb_filename=\"${sql_dump%.sql}\"<br \/>\nif [[ ! -f \"$db_filename\" ]]; then<br \/>\nsqlite3 \"$db_filename\" &lt; \"$sql_dump\" &amp;&amp;<br \/>\necho \"$db_filename created\"<br \/>\nelse<br \/>\necho \"$db_filename already exists\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>When you run .\/create_databases.sh, you should see:<\/p>\n<p>Creating database files from SQL ...<br \/>\ndb1 created<br \/>\ndb2 created<br \/>\ndb3 created<\/p>\n<p>If the utility script reports that the database files already exist, or if you<br \/>\nwant to reset the database files to their initial state, you can call<br \/>\nthe script again with the -f flag to re-create them from the associated .sql<br \/>\nfiles.<\/p>\n<h3>Creating a Linux Password<\/h3>\n<p>You probably noticed that some of the SQL files contained clear-text<br \/>\npasswords while others have valid Linux password hashes. 
For the<br \/>\npurposes of this article, that&#8217;s largely a contrivance to ensure that you have<br \/>\ndifferent data in each database and to make it easy to tell which<br \/>\ndatabase you&#8217;re looking at from the data itself.<\/p>\n<p>For security though, it&#8217;s usually best to ensure that you have a<br \/>\nproperly hashed password in any source files you may store. There are a<br \/>\nnumber of ways to generate such passwords, but the OpenSSL library makes<br \/>\nit easy to generate salted and hashed passwords from the command line.<\/p>\n<p><em>Tip: for optimum security, don&#8217;t include your desired password or<br \/>\npassphrase as an argument to OpenSSL on the command line, as it could<br \/>\nthen be seen in the process list. Instead, allow OpenSSL to prompt you<br \/>\nwith Password: and be sure to use a strong passphrase.<\/em><\/p>\n<p>To generate a salted MD5 password with OpenSSL:<\/p>\n<p>$ openssl passwd \\<br \/>\n-1 \\<br \/>\n-salt \"$(openssl rand -base64 6)\"<br \/>\nPassword:<\/p>\n<p>Then you can paste the salted hash into \/etc\/shadow, an SQL file, utility<br \/>\nscript or wherever else you may need it.<\/p>\n<h3>Simulating Deployment inside the Development Stage<\/h3>\n<p>Now that you have some external resources to experiment with, you&#8217;re ready to<br \/>\nsimulate a deployment. Let&#8217;s start by running a container in your development<br \/>\nenvironment. I follow some DevOps best practices here and use fixed image IDs<br \/>\nand defined gem versions.<\/p>\n<h3>DevOps Best Practices for Docker Image IDs<\/h3>\n<p>To ensure that you&#8217;re re-using the same image across pipeline stages,<br \/>\nalways use an image ID rather than a named tag or symbolic reference<br \/>\nwhen pulling images. 
For example, while the &#8220;latest&#8221; tag might point to<br \/>\ndifferent versions of a Docker image over time, the SHA-256 identifier<br \/>\nof an image version remains constant and also provides automatic<br \/>\nvalidation as a checksum for downloaded images.<\/p>\n<p>Furthermore, you always should use a fixed ID for assets you&#8217;re<br \/>\ninjecting into your containers. Note how you specify a specific version<br \/>\nof the SQLite3 Ruby gem to inject into the container at each stage. This<br \/>\nensures that each pipeline stage has the same version, regardless of<br \/>\nwhether the most current version of the gem from a RubyGems repository<br \/>\nchanges between one container deployment and the next.<\/p>\n<h3>Getting a Docker Image ID<\/h3>\n<p>When you pull a Docker image, such as ruby:latest, Docker will report<br \/>\nthe digest of the image on standard output:<\/p>\n<p>$ docker pull ruby:latest<br \/>\nlatest: Pulling from library\/ruby<br \/>\nDigest:<br \/>\nsha256:eed291437be80359321bf66a842d4d542a789e687b38c31bd1659065b2906778<br \/>\nStatus: Image is up to date for ruby:latest<\/p>\n<p>If you want to find the ID for an image you&#8217;ve already pulled, you can<br \/>\nuse the inspect sub-command to extract the digest from Docker&#8217;s JSON<br \/>\noutput\u2014for example:<\/p>\n<p>$ docker inspect \\<br \/>\n--format='{{index .RepoDigests 0}}' \\<br \/>\nruby:latest<br \/>\nruby@sha256:eed291437be80359321bf66a842d4d542a789e687b38c31bd1659065b2906778<\/p>\n<p>First, you export the appropriate environment variables for development. 
These<br \/>\nvalues will override the defaults set by your deployment script and affect the<br \/>\nbehavior of your sample application:<\/p>\n<p># Export values we want accessible inside the Docker<br \/>\n# container.<br \/>\nexport STAGE=\"dev\" DB=\"db1\"<\/p>\n<p>Next, implement a script called container_deploy.sh that will simulate deployment across multiple<br \/>\nenvironments. This is an example of the work that your deployment pipeline or<br \/>\norchestration engine should do when instantiating containers for each<br \/>\nstage:<\/p>\n<p>#!\/usr\/bin\/env bash<\/p>\n<p>set -e<\/p>\n<p>####################################################<br \/>\n# Default shell and environment variables.<br \/>\n####################################################<br \/>\n# Quick hack to build the 64-character image ID<br \/>\n# (which is really a SHA-256 hash) within a<br \/>\n# magazine&#8217;s line-length limitations.<br \/>\nhash_segments=(<br \/>\n\"eed291437be80359321bf66a842d4d54\"<br \/>\n\"2a789e687b38c31bd1659065b2906778\"<br \/>\n)<br \/>\nprintf -v id \"%s\" \"${hash_segments[@]}\"<\/p>\n<p># Default Ruby image ID to use if not overridden<br \/>\n# from the script&#8217;s environment.<br \/>\n: \"${IMAGE_ID:=$id}\"<\/p>\n<p># Fixed version of the SQLite3 gem.<br \/>\n: \"${SQLITE3_VERSION:=1.3.13}\"<\/p>\n<p># Default pipeline stage (e.g. dev, qa, prod).<br \/>\n: \"${STAGE:=dev}\"<\/p>\n<p># Default database to use (e.g. 
db1, db2, db3).<br \/>\n: \"${DB:=db1}\"<\/p>\n<p># Export values that should be visible inside the<br \/>\n# container.<br \/>\nexport STAGE DB<\/p>\n<p>####################################################<br \/>\n# Setup and run Docker container.<br \/>\n####################################################<br \/>\n# Remove the Ruby container when script exits,<br \/>\n# regardless of exit status unless DEBUG is set.<br \/>\ncleanup () {<br \/>\nlocal id msg1 msg2 msg3<br \/>\nid=\"$container_id\"<br \/>\nif [[ ! -v DEBUG ]]; then<br \/>\ndocker rm --force \"$id\" &gt;&amp;-<br \/>\nelse<br \/>\nmsg1=\"DEBUG was set.\"<br \/>\nmsg2=\"Debug the container with:\"<br \/>\nmsg3=\" docker exec -it $id bash\"<br \/>\nprintf \"\\n%s\\n%s\\n%s\\n\" \\<br \/>\n\"$msg1\" \\<br \/>\n\"$msg2\" \\<br \/>\n\"$msg3\" \\<br \/>\n&gt; \/dev\/stderr<br \/>\nfi<br \/>\n}<br \/>\ntrap \"cleanup\" EXIT<\/p>\n<p># Set up a container, including environment<br \/>\n# variables and volumes mounted from the local host.<br \/>\ndocker run \\<br \/>\n-d \\<br \/>\n-e STAGE \\<br \/>\n-e DB \\<br \/>\n-v \"${DATABASE_DIR:-${PWD}\/db}\":\/srv\/db \\<br \/>\n--init \\<br \/>\n\"ruby@sha256:$IMAGE_ID\" \\<br \/>\ntail -f \/dev\/null &gt;&amp;-<\/p>\n<p># Capture the container ID of the last container<br \/>\n# started.<br \/>\ncontainer_id=$(docker ps -ql)<\/p>\n<p># Inject a fixed version of the database gem into<br \/>\n# the running container.<br \/>\necho \"Injecting gem into container...\"<br \/>\ndocker exec \"$container_id\" \\<br \/>\ngem install sqlite3 -v \"$SQLITE3_VERSION\" &amp;&amp;<br \/>\necho<\/p>\n<p># Define a Ruby script to run inside our container.<br \/>\n#<br \/>\n# The script will output the environment variables<br \/>\n# we&#8217;ve set, and then display contents of the<br \/>\n# database defined in the DB environment variable.<br \/>\nruby_script='<br \/>\nrequire 
\"sqlite3\"<\/p>\n<p>puts %Q(DevOps pipeline stage: #{ENV[\"STAGE\"]})<br \/>\nputs %Q(Database for this stage: #{ENV[\"DB\"]})<br \/>\nputs<br \/>\nputs \"Data stored in this database:\"<\/p>\n<p>Dir.chdir \"\/srv\/db\"<br \/>\ndb = SQLite3::Database.open ENV[\"DB\"]<br \/>\nquery = \"SELECT rowid, * FROM AppData\"<br \/>\ndb.execute(query) do |row|<br \/>\nprint \" \" * 4<br \/>\nputs row.join(\", \")<br \/>\nend<br \/>\n'<\/p>\n<p># Execute the Ruby script inside the running<br \/>\n# container.<br \/>\ndocker exec \"$container_id\" ruby -e \"$ruby_script\"<\/p>\n<p>There are a few things to note about this script. First and foremost, your<br \/>\nreal-world needs may be either simpler or more complex than this script<br \/>\nprovides for. Nevertheless, it provides a reasonable baseline on which you can<br \/>\nbuild.<\/p>\n<p>Second, you may have noticed the use of the tail command when creating the<br \/>\nDocker container. This is a common trick used for building containers that<br \/>\ndon&#8217;t have a long-running application to keep the container in a running<br \/>\nstate. Because you are re-entering the container using multiple<br \/>\nexec commands,<br \/>\nand because your example Ruby application runs once and exits,<br \/>\ntail sidesteps a<br \/>\nlot of ugly hacks needed to restart the container continually or keep it<br \/>\nrunning while debugging.<\/p>\n<p>Go ahead and run the script now. You should see the same output as listed<br \/>\nbelow:<\/p>\n<p>$ .\/container_deploy.sh<br \/>\nBuilding native extensions. 
This could take a while...<br \/>\nSuccessfully installed sqlite3-1.3.13<br \/>\n1 gem installed<\/p>\n<p>DevOps pipeline stage: dev<br \/>\nDatabase for this stage: db1<\/p>\n<p>Data stored in this database:<br \/>\n1, root, developers, dev_password<br \/>\n2, dev, developers, dev_password<\/p>\n<h3>Simulating Deployment across Environments<\/h3>\n<p>Now you&#8217;re ready to move on to something more ambitious. In the preceding<br \/>\nexample, you deployed a container to the development environment. The Ruby<br \/>\napplication running inside the container used the development database. The<br \/>\npower of this approach is that the exact same process can be re-used for each<br \/>\npipeline stage, and the only thing you need to change is the database to<br \/>\nwhich the<br \/>\napplication points.<\/p>\n<p>In actual usage, your DevOps configuration management or orchestration engine<br \/>\nwould handle setting up the correct environment variables for each stage of<br \/>\nthe pipeline. To simulate deployment to multiple environments, populate an<br \/>\nassociative array in Bash with the values each stage will need and then run<br \/>\nthe script in a for loop:<\/p>\n<p>declare -A env_db<br \/>\nenv_db=([dev]=db1 [qa]=db2 [prod]=db3)<\/p>\n<p>for env in dev qa prod; do<br \/>\nexport STAGE=\"$env\" DB=\"${env_db[$env]}\"<br \/>\nprintf \"%s\\n\" \"Deploying to ${env} ...\"<br \/>\n.\/container_deploy.sh<br \/>\ndone<\/p>\n<p>This stage-specific approach has a number of benefits from a DevOps point of<br \/>\nview. 
That&#8217;s because:<\/p>\n<ol>\n<li>The image ID deployed is identical across all pipeline stages.<\/li>\n<li>A more complex application can &#8220;do the right thing&#8221; based on the value of<br \/>\nSTAGE and DB (or other values) injected into the container at runtime.<\/li>\n<li>The container is connected to the host filesystem the same way at each<br \/>\nstage, so you can re-use source code or versioned artifacts pulled from Git,<br \/>\nNexus or other repositories without making changes to the image or<br \/>\ncontainer.<\/li>\n<li>The switcheroo magic for pointing to the right external resources is<br \/>\nhandled by your deployment script (in this case, container_deploy.sh) rather<br \/>\nthan by making changes to your image, application or<br \/>\ninfrastructure.<\/li>\n<\/ol>\n<p>This solution is great if your goal is to trap most of the complexity in your<br \/>\ndeployment tools or pipeline orchestration engine. However, a small refinement<br \/>\nwould allow you to push the remaining complexity onto the pipeline<br \/>\ninfrastructure instead.<\/p>\n<p>Imagine for a moment that you have a more complex application than the one<br \/>\nyou&#8217;ve been working with here. Maybe your QA or staging environments have large<br \/>\ndata sets that you don&#8217;t want to re-create on local hosts, or maybe you need to point<br \/>\nat a network resource that may move around at runtime. You can handle this by<br \/>\nusing a well-known name that is resolved by an external resource instead.<\/p>\n<p>You can show this at the filesystem level by using a symlink. The benefit of<br \/>\nthis approach is that the application and container no longer need to know<br \/>\nanything about which database is present, because the database is always named<br \/>\n&#8220;db&#8221;. 
Consider the following:<\/p>\n<p>declare -A env_db<br \/>\nenv_db=([dev]=db1 [qa]=db2 [prod]=db3)<br \/>\nfor env in dev qa prod; do<br \/>\nprintf \"%s\\n\" \"Deploying to ${env} ...\"<br \/>\n(cd db; ln -fs \"${env_db[$env]}\" db)<br \/>\nexport STAGE=\"$env\" DB=\"db\"<br \/>\n.\/container_deploy.sh<br \/>\ndone<\/p>\n<p>Likewise, you can configure your Domain Name Service (DNS) or a Virtual IP<br \/>\n(VIP) on your network to ensure that the right database host or cluster is<br \/>\nused for each stage. As an example, you might ensure that db.example.com<br \/>\nresolves to a different IP address at each pipeline stage.<\/p>\n<p>Sadly, the complexity of managing multiple environments never truly goes<br \/>\naway\u2014it just hopefully gets abstracted to the right level for your<br \/>\norganization. Think of your objective as similar to some object-oriented<br \/>\nprogramming (OOP) best practices: you&#8217;re looking to create pipelines that<br \/>\nminimize things that change and to allow applications and tools to rely on a<br \/>\nstable interface. When changes are unavoidable, the goal is to keep the scope<br \/>\nof what might change as small as possible and to hide the ugly details from<br \/>\nyour tools to the greatest extent that you can.<\/p>\n<p>If you have thousands or tens of thousands of servers, it&#8217;s often better to<br \/>\nchange a couple of DNS entries without downtime rather than rebuild or<br \/>\nredeploy 10,000 application containers. Of course, there are always<br \/>\ncounter-examples, so consider the trade-offs and make the best decisions you<br \/>\ncan to encapsulate any unavoidable complexity.<\/p>\n<h3>Developing inside Your Container<\/h3>\n<p>I&#8217;ve spent a lot of time explaining how to ensure that your development<br \/>\ncontainers look like the containers in use in other stages of the pipeline.<br \/>\nBut have I really described how to develop inside these<br \/>\ncontainers? 
It turns out I&#8217;ve actually covered the essentials, but you need to<br \/>\nshift your perspective a little to put it all together.<\/p>\n<p>The same processes used to deploy containers in the previous sections also<br \/>\nallow you to work inside a container. In particular, the previous examples have<br \/>\ntouched on how to bind-mount code and artifacts from the host&#8217;s filesystem<br \/>\ninside a container using the -v or --volume flags. That&#8217;s how<br \/>\nthe container_deploy.sh script mounts database files on \/srv\/db inside the container. The<br \/>\nsame mechanism can be used to mount source code, and the Docker<br \/>\nexec command<br \/>\nthen can be used to start a shell, editor or other development process inside<br \/>\nthe container.<\/p>\n<p>The develop.sh utility script is designed to showcase this ability. When you<br \/>\nrun it, the script creates a Docker container and drops you into a Ruby shell<br \/>\ninside the container. Go ahead and run .\/develop.sh now:<\/p>\n<p>#!\/usr\/bin\/env bash<\/p>\n<p>id=\"eed291437be80359321bf66a842d4d54\"<br \/>\nid+=\"2a789e687b38c31bd1659065b2906778\"<br \/>\n: \"${IMAGE_ID:=$id}\"<br \/>\n: \"${SQLITE3_VERSION:=1.3.13}\"<br \/>\n: \"${STAGE:=dev}\"<br \/>\n: \"${DB:=db1}\"<\/p>\n<p>export DB STAGE<\/p>\n<p>echo \"Launching '$STAGE' container...\"<br \/>\ndocker run \\<br \/>\n-d \\<br \/>\n-e DB \\<br \/>\n-e STAGE \\<br \/>\n-v \"${PWD}\":\/usr\/local\/src \\<br \/>\n-v \"${DATABASE_DIR:-${PWD}\/db}\":\/srv\/db \\<br \/>\n--init \\<br \/>\n\"ruby@sha256:$IMAGE_ID\" \\<br \/>\ntail -f \/dev\/null &gt;&amp;-<\/p>\n<p>container_id=$(docker ps -ql)<\/p>\n<p>show_cmd () {<br \/>\nenter=\"docker exec -it $container_id bash\"<br \/>\nclean=\"docker rm --force $container_id\"<br \/>\necho -ne \\<br \/>\n\"\\nRe-enter container with:\\n\\t${enter}\"<br \/>\necho -ne \\<br \/>\n\"\\nClean up container with:\\n\\t${clean}\\n\"<br \/>\n}<br \/>\ntrap 'show_cmd' 
EXIT<\/p>\n<p>docker exec \"$container_id\" \\<br \/>\ngem install sqlite3 -v \"$SQLITE3_VERSION\" &gt;&amp;-<\/p>\n<p>docker exec \\<br \/>\n-e DB \\<br \/>\n-e STAGE \\<br \/>\n-it \"$container_id\" \\<br \/>\nirb -I \/usr\/local\/src -r sqlite3<\/p>\n<p>Once inside the container&#8217;s Ruby read-evaluate-print loop (REPL), you can<br \/>\ndevelop your source code as you normally would from outside the container. Any<br \/>\nsource code changes will be seen immediately from inside the container at the<br \/>\ndefined mountpoint of \/usr\/local\/src. You then can test your code using the<br \/>\nsame runtime that will be available later in your pipeline.<\/p>\n<p>Let&#8217;s try a few basic things just to get a feel for how this works. Ensure<br \/>\nthat you<br \/>\nhave the sample Ruby files installed in the same directory as develop.sh. You<br \/>\ndon&#8217;t actually have to know (or care) about Ruby programming for this exercise<br \/>\nto have value. The point is to show how your containerized applications can<br \/>\ninteract with your host&#8217;s development environment.<\/p>\n<p>example_query.rb:<\/p>\n<p># Ruby module to query the table name via SQL.<br \/>\nmodule ExampleQuery<br \/>\ndef self.table_name<br \/>\npath = \"\/srv\/db\/#{ENV['DB']}\"<br \/>\ndb = SQLite3::Database.new path<br \/>\nsql = &lt;&lt;-'SQL'<br \/>\nSELECT name FROM sqlite_master<br \/>\nWHERE type='table'<br \/>\nLIMIT 1;<br \/>\nSQL<br \/>\ndb.get_first_value sql<br \/>\nend<br \/>\nend<\/p>\n<p>source_list.rb:<\/p>\n<p># Ruby module to list files in the source directory<br \/>\n# that&#8217;s mounted inside your container.<br \/>\nmodule SourceList<br \/>\ndef self.array<br \/>\nDir['\/usr\/local\/src\/*']<br \/>\nend<\/p>\n<p>def self.print<br \/>\nputs self.array<br \/>\nend<br \/>\nend<\/p>\n<p>At the IRB prompt (irb(main):001:0&gt;), try the following code to make<br \/>\nsure everything is working as expected:<\/p>\n<p># 
returns \"AppData\"<br \/>\nload 'example_query.rb'; ExampleQuery.table_name<\/p>\n<p># prints file list to standard output; returns nil<br \/>\nload 'source_list.rb'; SourceList.print<\/p>\n<p>In both cases, Ruby source code is being read from \/usr\/local\/src, which is<br \/>\nbound to the current working directory of the develop.sh script. While working<br \/>\nin development, you could edit those files in any fashion you chose and then<br \/>\nload them again into IRB. It&#8217;s practically magic!<\/p>\n<p>It works the other way too. From inside the container, you can use any tool<br \/>\nor feature of the container to interact with your source directory on the host<br \/>\nsystem. For example, you can download the familiar Docker whale logo and make<br \/>\nit available to your development environment from the container&#8217;s Ruby<br \/>\nREPL:<\/p>\n<p>Dir.chdir '\/usr\/local\/src'<br \/>\ncmd =<br \/>\n\"curl -sLO \" &lt;&lt;<br \/>\n\"https:\/\/www.docker.com\" &lt;&lt;<br \/>\n\"\/sites\/default\/files\" &lt;&lt;<br \/>\n\"\/vertical_large.png\"<br \/>\nsystem cmd<\/p>\n<p>Both \/usr\/local\/src and the matching host directory now contain the<br \/>\nvertical_large.png graphic file. You&#8217;ve added a file to your source tree from<br \/>\ninside the Docker container!<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.linuxjournal.com\/sites\/default\/files\/styles\/max_650x650\/public\/u%5Buid%5D\/12282f3.png\" alt=\"&quot;&quot;\" width=\"286\" height=\"237\" \/><\/p>\n<p><em>Figure 3.<br \/>\nDocker Logo on the Host Filesystem <em>and<\/em> inside the Container<\/em><\/p>\n<p>When you press Ctrl-D to exit the REPL, the develop.sh script informs you how to<br \/>\nreconnect to the still-running container, as well as how to delete the<br \/>\ncontainer when you&#8217;re done with it. 
Output will look similar to the following:<\/p>\n<p>Re-enter container with:<br \/>\ndocker exec -it 9a2c94ebdee8 bash<br \/>\nClean up container with:<br \/>\ndocker rm --force 9a2c94ebdee8<\/p>\n<p>As a practical matter, remember that the develop.sh script is setting Ruby&#8217;s<br \/>\nLOAD_PATH and requiring the sqlite3 gem for you when launching the first<br \/>\ninstance of IRB. If you exit that process, launching another instance of IRB<br \/>\nwith docker exec or from a Bash shell inside the container may not do what<br \/>\nyou expect. Be sure to run irb -I \/usr\/local\/src -r sqlite3 to re-create that<br \/>\nfirst smooth experience!<\/p>\n<h3>Wrapping Up<\/h3>\n<p>I covered how Docker containers typically flow through the DevOps pipeline,<br \/>\nfrom development all the way to production. I looked at some common practices<br \/>\nfor managing the differences between pipeline stages and how to use<br \/>\nstage-specific data and artifacts in a reproducible and automated fashion.<br \/>\nAlong the way, you also may have learned a little more about Docker commands,<br \/>\nBash scripting and the Ruby REPL.<\/p>\n<p>I hope it&#8217;s been an interesting journey. I know I&#8217;ve enjoyed sharing it with<br \/>\nyou, and I sincerely hope I&#8217;ve left your DevOps and containerization toolboxes<br \/>\njust a little bit larger in the process.<\/p>\n<p><a href=\"https:\/\/www.linuxjournal.com\/content\/sharing-docker-containers-across-devops-environments\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Docker provides a powerful tool for creating lightweight images and containerized processes, but did you know it can make your development environment part of the DevOps pipeline too?
Whether you&#8217;re managing tens of thousands of servers in the cloud or are a software engineer looking to incorporate Docker containers into the software development life cycle, &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw92\/index.php\/2018\/12\/19\/sharing-docker-containers-across-devops-environments\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Sharing Docker Containers across DevOps Environments&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-5951","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/5951","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/comments?post=5951"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/5951\/revisions"}],"predecessor-version":[{"id":6566,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/5951\/revisions\/6566"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/media?parent=5951"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/categories?post=5951"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com
\/paw92\/index.php\/wp-json\/wp\/v2\/tags?post=5951"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}