Build Golang Docker images with GitLab CI Pipelines

Akriotis Kyriakos
7 min read · Jul 8, 2022

Use GitLab CI Pipelines to build Docker images for your Golang projects and push them to Docker Hub.

In this article we are going to set up a GitLab CI Pipeline from scratch for a Golang project and investigate the following topics:

  • Assess the syntactical integrity of your code
  • Run unit tests
  • Build binaries from your source code
  • Package them as a Docker image
  • Push this image to Docker Hub

As a baseline I am going to use a very simple API written in Golang, which I am building as the backend for a CarPlay-enabled app I am developing for my car. Dissecting the source code of the API itself is out of the scope of this article.

What are we going to need

We are going to need access to a GitLab server: either a local installation, a public cloud offering, or one provided by your employer. This GitLab server has to have GitLab Runners in place that are available for this lab. If you have an on-premises GitLab instance but haven't provisioned any Runner yet, you can follow this guide in order to deploy one:

Additionally you are going to need a Docker Hub account in order to be able to push the generated image to it.

More or less that’s all, so let’s get started!

Create the Pipeline

First and foremost we have to add a .gitlab-ci.yml file in the root folder of our repository.

GitLab CI uses this file to configure your project’s pipeline. It lives in the root folder of your repo and contains the definition of your Pipelines, Jobs, and Environments.

Then paste the following content into it:
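As a sketch, the global section of such a file could look like this (the `OUTPUT_NAME` variable and its path are assumptions; the individual jobs are dissected in the sections below):

```yaml
# .gitlab-ci.yml — global section: stage order and shared variables
stages:
  - lint
  - test
  - build
  - release

variables:
  # path where the compiled binary will be stored as a job artifact
  OUTPUT_NAME: __bin__/app
```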

Let’s now dissect this file step by step, though not strictly in sequential order. Pipelines consist of jobs and stages. The former define what to do (compile code, run unit tests, build binaries etc.) and the latter define when to run which jobs. The jobs, in turn, are executed by runners.

Analyze the Pipeline

In this pipeline I have defined 4 stages (lint, test, build and release) and 4 jobs respectively: lint, test, build and build_image.

I am not really interested here in a deployment stage, as I am planning to do this later with ArgoCD, but you can of course expand this pipeline to include one as you see fit.

There are a lot of tools in the Go landscape to assist you in writing code that is cleaner and conforms better to the standards. Go is an opinionated language and strictly enforces a lot of formatting and naming convention standards. You can see in the figure below a collection of the most common tools used in Go. As they are not mutually exclusive, using one of them doesn’t rule out the others, and we are going to use them all in this example.

Walking down the stairs of the prominent Go tools that check and apply coding standards, from less to more critical.

lint/lint:

Linters are intended to check and enforce, or just recommend, the usage of coding and naming conventions in order to promote a sense of aesthetic uniformity in the Go ecosystem. Linters mostly deal with stylistic issues, and their warnings should be treated more as recommendations than as unbendable standards. Here we are going to use golangci-lint, a very robust and fast linter runner that comes pre-configured with a bunch of linters, including golint.

With image, we declare the Docker image that the job will run in. We can use a specific one for every job, or we can define a global one that covers the whole pipeline if the individual jobs do not specify otherwise. For this job we are going to use the image golangci/golangci-lint:latest; with allow_failure: false we make sure a failing lint run fails the pipeline instead of being treated as a mere recommendation; and under script we specify the command(s) that will be executed by the runner while processing this job.

The lint job runs in a specific Docker image, declared with image.
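Putting those keywords together, the lint job could look like the following sketch (the linter's image tag and flags are as described above; `-v` is an assumption for more verbose output):

```yaml
lint:
  stage: lint
  image: golangci/golangci-lint:latest
  # a failing lint run should fail the pipeline, not just warn
  allow_failure: false
  script:
    - golangci-lint run -v
```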

You can speed up the process by caching dependencies and reusing this cache in your jobs with extends.
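One common way to do that is a hidden job holding the cache definition, which the real jobs then pull in via `extends`. A sketch, assuming the module cache lives under the project directory (the `.go-cache` name and `GOPATH` override are assumptions):

```yaml
# hidden job (leading dot) that only carries the cache configuration
.go-cache:
  variables:
    GOPATH: ${CI_PROJECT_DIR}/.go
  cache:
    paths:
      - .go/pkg/mod/

test:
  extends: .go-cache   # reuse the cached Go module downloads
  stage: test
  image: golang:1.18
  script:
    - go test -race -v ./...
```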

test/test:

go fmt is used to improve our code’s readability by keeping it visually consistent, so that developers who join the project later face the same visually consistent codebase they see throughout the Golang ecosystem. It defines and imposes the Go formatting standards and automatically applies them to your code without affecting its compilation or execution flow.

go vet, on the other hand, identifies subtle issues in the codebase. It is intended to pick up code that wouldn’t behave as we expect it to, e.g. an unreachable region of code.

The go test command executes unit test functions: those whose names begin with Test and which are placed inside test files named with the suffix _test.go. By adding the -v flag you get more verbose output that lists all of the tests and their outcomes. All of the tests should pass successfully.
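As a reminder of that naming convention, here is a minimal, hypothetical example; in a real project the function under test would live in its own file, but it is inlined here so the snippet is self-contained:

```go
package main

import "testing"

// Add would normally live in main.go; it is inlined here so the
// example is self-contained.
func Add(a, b int) int {
	return a + b
}

// TestAdd is discovered by `go test` because its name starts with Test,
// it takes *testing.T, and it lives in a file ending in _test.go.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got)
	}
}
```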

Use the -race flag to activate the race detector — the race detector only finds races that happen at runtime, so it can’t spot races in code paths that are not executed. If your tests do not have complete coverage, you may find more races by running a binary built with -race under a realistic workload.
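A test job combining all three commands could be sketched as follows (the golang:1.18 image tag and the vendor-directory filter are assumptions):

```yaml
test:
  stage: test
  image: golang:1.18
  script:
    # keep formatting consistent across the codebase
    - go fmt $(go list ./... | grep -v /vendor/)
    # catch subtle issues such as unreachable code
    - go vet $(go list ./... | grep -v /vendor/)
    # run unit tests verbosely, with the race detector enabled
    - go test -race -v $(go list ./... | grep -v /vendor/)
```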

build/build:

This job is more or less self-explanatory. Here we compile and build our project, and the created binary is stored in the artifacts path that we specified at the beginning of our pipeline as a variable (if the binary is not needed downstream, you can safely omit exporting it as an artifact):
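A sketch of that job, assuming the binary path is held in an `OUTPUT_NAME` pipeline variable (the variable name is an assumption):

```yaml
build:
  stage: build
  image: golang:1.18
  script:
    - mkdir -p $(dirname ${OUTPUT_NAME})
    - go build -o ${OUTPUT_NAME} .
  artifacts:
    # expose the compiled binary to later stages / for download
    paths:
      - ${OUTPUT_NAME}
```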

You can safely introduce the cache directive and reuse the existing cache we used earlier in this job as well, in order to speed things up!

release/build_image:

That’s the last piece of our pipeline, and practically our end-product. With image we declare docker:stable as the Docker image; I used to have some hiccups with docker:latest, so I tend to use the former. The before_script defines preparation step(s) that our actual job script depends upon; here we log in to our Docker Hub account. All the necessary variables have to be declared beforehand in the CI/CD Settings of our repo, under the Variables section.

The script itself will run after every commit, in any branch that contains a Dockerfile, and this is defined by the rules directive:
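The whole release job could be sketched as follows; the CI/CD variable names `DOCKER_HUB_USER` and `DOCKER_HUB_PASSWORD` are assumptions and must match whatever you declared under Settings → CI/CD → Variables:

```yaml
build_image:
  stage: release
  image: docker:stable
  services:
    - docker:dind          # Docker-in-Docker, so `docker build` works inside the job
  before_script:
    - docker login -u "${DOCKER_HUB_USER}" -p "${DOCKER_HUB_PASSWORD}"
  rules:
    # only run when the commit's branch actually contains a Dockerfile
    - exists:
        - Dockerfile
  script:
    - |
      # tag `latest` on the default branch, the branch slug otherwise
      if [ "${CI_COMMIT_BRANCH}" = "${CI_DEFAULT_BRANCH}" ]; then
        TAG="latest"
      else
        TAG="${CI_COMMIT_REF_SLUG}"
      fi
      docker build -t "${DOCKER_HUB_USER}/${CI_PROJECT_NAME}:${TAG}" .
      docker push "${DOCKER_HUB_USER}/${CI_PROJECT_NAME}:${TAG}"
```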

If the rule is met and the pipeline is triggered from the default branch (usually main nowadays), it will assign the tag latest to the image; otherwise it will derive the tag from the value of $CI_COMMIT_REF_SLUG. It will then build the Docker image and push it to your Docker Hub repository.

And that was it! You now have an automated pipeline building a Docker image directly from your Go source code.

A small extra before you Go!

If you are starting out with Golang and are in search of a simple and straightforward Dockerfile for your project, here’s a sample that you could use as a starting point and build further:
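A minimal single-stage sketch (the Go image tag, working directory, binary name and exposed port are assumptions):

```dockerfile
FROM golang:1.18

WORKDIR /app

# copy module files first so dependency downloads are layer-cached
COPY go.mod go.sum ./
RUN go mod download

COPY . .

# disable cgo so the binary is statically linked and self-contained
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server .

EXPOSE 8080
ENTRYPOINT ["/app/server"]
```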

Is that the end of the road? Can I ship this container to production now? Well no, I wouldn’t recommend that! The size of the generated image is more than 300MB, as it includes all the Golang tooling, which at the end of the day is unnecessary to us: we instruct the compiler to disable cgo (CGO_ENABLED=0) and statically link any C bindings, which gives us a self-contained executable with no external framework or runtime dependencies. If you want to find out how you can reduce the size of your generated Golang image, please follow the instructions of my article below:

Have fun with your GitLab CI Pipelines. They are very flexible and robust and can save you a lot of time during your build process, eliminating human errors and inconsistencies.

