Docker is a popular platform for packaging apps as self-contained distributable artifacts. It creates images that contain everything you need to run certain software, such as the source code, third-party package dependencies, and required environment attributes.
Because Docker images run anywhere Docker is installed, they are a useful format for distributing CLI applications. The Docker ecosystem includes Docker Hub as a readily available public registry, giving you a complete toolchain for publishing, updating, and documenting your tools.
Here’s how to use Docker to package CLI apps as an alternative to traditional OS package managers and standalone binary downloads.
Why use Docker for CLI apps?
Docker can make it faster and easier for users to install your new utility. They can run your app through Docker instead of having to hunt down platform-specific installation instructions. There is no manual extraction of tar archives, copying of files to system folders, or PATH editing involved.
Dockerized software also makes it easy for users to select different versions, perform updates, and initiate rollbacks. Each individual release you make should have its own immutable tag that uniquely identifies the Docker image. Unlike regular package managers, users can easily run two versions of your software side by side by launching containers based on different image tags.
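The tag names below are hypothetical, but they illustrate the pattern: assuming each release is published under an immutable version tag, two versions can run side by side, each in its own container:

```shell
# Each release gets its own immutable tag, so versions never conflict
docker run demo-app-image:1.2.0 --version
docker run demo-app-image:1.3.0 --version
```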
Another benefit is how easily users can safely try out your app without making a long-term commitment. People may be hesitant to add new packages to their machines because software often fails to clean up completely after itself when removed. Docker containers have their own private file system; deleting a container leaves no trace of its existence on the host. This can encourage more users to try your app.
A natural consequence of Dockerized distribution is the requirement that users already have Docker on their computers. Today, many developers will take it as a matter of course, so it’s a pretty safe choice to make. If you’re concerned about excluding users who don’t want to use Docker, you can still provide alternative options through your existing distribution channels.
Create a Docker image for a CLI app
Docker images for CLI apps differ little from those for other types of software. The goal is to provide the lightest possible image while still bundling everything your application needs to function.
It’s usually best to start with a minimal base image running a streamlined OS like Alpine. Only add the packages your software needs, such as the programming language, framework, and dependencies.
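As a sketch (the package names are illustrative, not prescriptive), an Alpine base with only the runtime your tool needs keeps the image small; apk’s --no-cache flag avoids baking the package index into an image layer:

```dockerfile
FROM alpine:3.16
# Install only the runtime the tool needs; --no-cache skips storing
# the local package index in the image layer
RUN apk add --no-cache nodejs npm
```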
Two essential Dockerfile statements for CLI tools are ENTRYPOINT and CMD. Together, these define the foreground process that runs when containers are launched from your image. Most base images start a shell by default. You should change this so your app runs automatically and users don’t have to invoke it manually inside the container.
The ENTRYPOINT Dockerfile statement defines the container’s foreground process. Set this to your application’s executable:
ENTRYPOINT ["demo-app"]
The CMD statement works in conjunction with ENTRYPOINT. It provides default arguments for the command set in the ENTRYPOINT. Arguments supplied by the user when starting the container with docker run override the CMD set in the Dockerfile.
A good use of CMD is when you want to show some basic help or version information when users omit a specific command:
ENTRYPOINT ["demo-app"]
CMD ["--version"]
Here are a few examples that show how these two statements cause different commands to run when containers are created:
# Start a new container from the "demo-app-image:latest" image
# Runs "demo-app --version"
docker run demo-app-image:latest

# Runs "demo-app demo --foo bar"
docker run demo-app-image:latest demo --foo bar
None of the examples require the user to type the name of the demo app’s executable. It runs automatically as the foreground process because it is the configured ENTRYPOINT. The command receives any arguments the user supplies to docker run after the image name. If no arguments are provided, the default --version is used.
These two statements are the fundamental building blocks of Docker images that house CLI tools. You want your application’s main executable to be the default process in the foreground so that users don’t have to call it themselves.
Putting it all together
Here’s a Docker image running a simple Node.js application:
hello-world.js:

#!/usr/local/bin/node
console.log("Hello world");

Dockerfile:

FROM node:16-alpine
WORKDIR /hello-world
COPY ./ .
RUN npm install
ENTRYPOINT ["hello-world.js"]
The Alpine-based variant of the Node base image is used to reduce the overall size of your image. The application source code is copied to the image file system via the COPY statement. The project’s npm dependencies are installed and the hello-world.js script is set as the image entry point.
Build the image using docker build:
docker build -t demo-app-image:latest .
Now you can run the image to see Hello World printed to your terminal:
docker run demo-app-image:latest
At this point you are ready to push your image to Docker Hub or some other registry where it can be downloaded by users. Anyone with access to the image can launch your software using just the Docker CLI.
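Publishing is a matter of tagging the image under your registry namespace and pushing it. The account name below is a placeholder for your own Docker Hub username:

```shell
# Log in, tag the image under your Docker Hub namespace, then push it
docker login
docker tag demo-app-image:latest example-user/demo-app:1.0.0
docker push example-user/demo-app:1.0.0
```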
Manage persistent data
Dockerizing a CLI application presents some challenges. The most prominent of these is how to deal with data persistence. Data created in a container is lost when that container stops unless it is stored on an external Docker volume.
You must write data to clearly defined paths that users can attach volumes to. It is good practice to group all your persistent data into one folder, such as /data. Avoid using too many locations that require multiple volumes to be mounted. Your Getting Started guide should document the volumes your application needs so that users can set up persistence when they create their container.
# Run demo-app with a volume mounted to /data
docker run -v demo-app-data:/data demo-app-image:latest
Other possible challenges
The mounting issue reappears when your command needs to interact with files on the host’s file system. Here’s a simple example of a file upload tool:
docker run file-uploader cp example.txt demo-server:/example.txt
This ends up looking for example.txt inside the container. In this situation, users need to mount their working directory so that its contents are available to the container:
docker run -v $PWD:/file-uploader file-uploader cp example.txt demo-server:/example.txt
It is also important to think about how users will provide configuration values to your application. If you normally read from a configuration file, keep in mind that users must mount one into every container they create. Providing alternative options, such as command-line flags and environment variables, streamlines the experience for simple usage scenarios:
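On the application side, this layering can be a small lookup: a command-line flag takes priority over an environment variable, which takes priority over a built-in default. Here’s a minimal Node.js sketch; the flag name, variable name, and default value are all illustrative:

```javascript
#!/usr/bin/env node
// Hypothetical config lookup: a CLI flag wins, then an environment
// variable, then a built-in default value.
function getLoggingDriver(argv = process.argv.slice(2), env = process.env) {
  const flag = argv.find((arg) => arg.startsWith("--logging-driver="));
  if (flag) return flag.split("=")[1];
  if (env.LOGGING_DRIVER) return env.LOGGING_DRIVER;
  return "plain"; // fallback when nothing is configured
}

console.log(getLoggingDriver());
```

With this in place, docker run -e LOGGING_DRIVER=json and a --logging-driver=json argument both work, and neither requires mounting a config file.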
# Set the LOGGING_DRIVER environment variable in the container
docker run -e LOGGING_DRIVER=json demo-app-image:latest
Another challenge concerns interactive applications that require user input. Users must pass the -it flags to docker run to enable interactive mode and allocate a pseudo-TTY:
docker run -it demo-app-image:latest
Users must remember to set these flags when necessary; otherwise, your program will not be able to collect input. You should document the commands that require a TTY so users aren’t caught off guard by unexpected errors.
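A defensive pattern (a sketch, not something from the original tool) is to detect the missing TTY up front and print a hint instead of hanging on input that will never arrive:

```javascript
#!/usr/bin/env node
// Sketch: refuse to prompt when stdin has no TTY attached, and tell
// the user which docker run flags fix it.
function requireTty(isTty = process.stdin.isTTY) {
  if (!isTty) {
    return "This command is interactive; re-run the container with docker run -it";
  }
  return null; // TTY present, safe to prompt for input
}

const error = requireTty();
if (error) console.error(error);
```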
These friction points mean that Dockerized applications can become impractical if they are not designed with containerization in mind. Users get the best experience when your commands are pure, don’t interact with the file system, and require minimal configuration. When that’s possible, a single docker run command covers installation and use without friction. You can still containerize more complex software, but you become increasingly dependent on users having a good working knowledge of the Docker CLI and its concepts.
Summary
Docker isn’t just for cloud deployments and background services. It is also gaining popularity as a distribution mechanism for regular console applications. You can publish, consume, run, and maintain software using just the docker CLI, which many software practitioners already use daily.
Providing a ready-to-use Docker image for your application gives users more choice. Newcomers can get started with a single command that creates a preconfigured environment with all dependencies satisfied. There’s no risk of contaminating the host’s file system or environment, conflicts with other packages are avoided, and users can revert to a clean slate whenever they want.
Building a Docker image usually takes no more effort than the routines you already use to submit builds to various OS package managers. The most important considerations are keeping your image as small as possible and ensuring the entry point and command suit your application. This gives users the best possible experience with your Dockerized software.
This post, "Using Docker to package CLI applications", was originally published at CloudSavvy IT: https://www.cloudsavvyit.com/15713/how-to-use-docker-to-package-cli-applications/