How to deploy a Docker container to Kubernetes using wercker and quay


In this post, I will introduce how to set up a cloud native development environment with wercker, Kubernetes, quay and Google Cloud.

This post evolved from an official tutorial (which is out of date — about two years old — and reads more like a quick guideline for an experienced developer than a tutorial for a beginner). That is why I wrote this post: to track my own learning, and hopefully to help you if you are also going to play with this stack.

I assume you have some background knowledge:

  • Docker.
  • Kubernetes. Install minikube on your local machine and play with it. You should have some concepts about clusters, pods, nodes, and replication controllers.
  • A CI/CD tool, such as Jenkins.


Google Cloud

Go to Google Cloud to register an account. They provide a 12-month trial with a $300 credit cap.

Log on to Google Cloud to create a cluster.


I have created a cluster ‘cluster-1’ by following the ‘CREATE CLUSTER’ wizard step by step.

Click the ‘Connect’ button,


On your laptop, install the Google Cloud SDK.

Then run the commands below (copied from the browser dialog above).
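The commands typically look like the sketch below. The cluster name matches the one created above, but the zone and project ID are placeholders — copy the exact command from your own ‘Connect’ dialog:

```shell
# Fetch cluster credentials so kubectl can talk to the cluster.
# Zone and project here are assumptions -- use your own values.
gcloud container clusters get-credentials cluster-1 \
    --zone us-central1-a --project my-gcp-project

# Start a local proxy to the cluster's API server; the Kubernetes
# dashboard is then reachable at http://localhost:8001/ui
kubectl proxy
```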


The kubectl proxy command lets you reach your Google Cloud Kubernetes cluster from your local desktop!


No replication controllers have been created yet. In the next steps, wercker will create them automatically.


You can see it only has the system’s kubernetes service.

You can also use kubectl to manage the cluster directly from the command line,
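For example, a few common inspection commands (run against the context that gcloud configured above) confirm the fresh state of the cluster:

```shell
# Inspect the new cluster: at this point there should be nodes,
# the default 'kubernetes' service, and no replication controllers.
kubectl get nodes
kubectl get replicationcontrollers
kubectl get services
kubectl get pods --all-namespaces
```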



Fork my repo


You need to update the information in wercker.yml with your own values.

(Do NOT fork it from wercker. Its wercker.yml is outdated.)

Go to quay to create an account and create a registry.


Take note of its registry path. You need to update the values in wercker.yml to match yours.


Bind your GitHub account.


Go to wercker to create the pipelines. To keep it simple, I created all of them as pipelines, as shown below,


The “build” pipeline is triggered by Git push, while the remaining pipelines are based on pipeline hooks. Since I don’t define a workflow here, I have to trigger those three pipelines manually on demand.

All of the pipelines require the registry and K8s parameters, so I created them as application environment variables. You can also add variables to specific pipelines.





OK, everything is ready to run.

I am going to commit a change to GitHub. Take note of the commit hash.



The build pipeline is triggered immediately by wercker, as I configured its hook type as ‘Git push’.


See more details,


It first pulls the latest code, then sets up the environment, runs the tests, builds, and stores the artifacts.



The build pipeline has built and stored the artifacts successfully. I am going to trigger push-to-registry manually (its hook type is not Git Push) to use Docker to push the image to quay.

After the pipeline completes, go to quay to verify the new image.



After the new image lands in quay, trigger the deploy-kubernetes pipeline.

It will run a kubectl command (create_cities-controller.json) to set up the Replication Controllers. See the Kubernetes console,


(You can see wercker created the ‘cities’ replication controller in my Google Cloud cluster.)

Then it will run a kubectl command (cities-service.json) to expose the service. See your Kubernetes console,


(You can see cities has been exposed as a service. We should be able to visit it from the internet via http://#external endpoints#/cities.json.)
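Under the hood, these two pipeline steps boil down to something like the sketch below. The file names follow the repo; treat this as a sketch of what wercker runs, not the exact step definitions:

```shell
# Create the replication controller from its spec file
kubectl create -f cities-controller.json

# Expose the controller's pods as a service
kubectl create -f cities-service.json

# Verify: the replicas, their pods, and the service's external endpoint
kubectl get rc,pods,services
```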

Let’s see more details from the Google Cloud console. You can choose one of the three replica instances and check its log.


(“Listening on the host: xxx” is the log line printed by the application.)

You can also visit the service via browser:


From the console,


You only need to run ‘deploy-kubernetes’ once. The next time you have code changes and want to deploy, you don’t need to run ‘deploy-kubernetes’ again, because the replication controllers and all related settings are already in place. Instead, you should run ‘rolling-update’.

rolling-update (this is a new push; it prints one more line, ‘Hello World’)


The ‘build’ is triggered successfully and after it completes, I triggered the ‘push-to-registry’.

OK, now I will NOT run ‘deploy-kubernetes’; instead, I will just run ‘rolling-update’ to ask the replication controller to update ALL three pods in my Google cluster to use image #db3a58bdee726e0769962023840211568086a64f
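The pipeline essentially performs a kubectl rolling update. A minimal sketch, where the quay repository path is a placeholder — substitute your own:

```shell
# Replace the pods one at a time with the new image.
# 'cities' is the replication controller created by deploy-kubernetes;
# the tag is the Git commit hash that wercker built and pushed.
kubectl rolling-update cities \
    --image=quay.io/<your-user>/cities:db3a58bdee726e0769962023840211568086a64f
```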

Look at the log from rolling-update:


I recommend you read the documentation about how Kubernetes uses replication controllers to keep pods in sync, and how Kubernetes updates applications across pods.

Now go to Kubernetes console,


See, I just ran ‘rolling-update’ and it updated all three pods for me. Go to each of the three pods,


‘Hello World’ is printed in all of them.


In this post, you can see how to set up cloud native development by deploying a Docker container to Kubernetes using wercker and quay on Google Cloud.

I think this is a pretty cool DX (developer experience) for cloud native development:

  1. Dev pushes code to GitHub and then goes to have a coffee.
  2. wercker triggers a build to pull the code and uses Docker to generate an image.
  3. The image is published to quay.
    1. If the dev wants to test it locally — for example, with minikube installed on his machine — he can deploy the new image to minikube for testing.
  4. The image is deployed from quay to the Google Kubernetes cluster.

It is a cool experience: the dev pushes code with a single click of ‘GO’, and everything else happens in the cloud.

kubectl and minikube setup

1# I installed minikube and tried to use kubectl to interact with it. It gave me an error: “Unable to connect to the server: x509: cannot validate certificate for because it doesn’t contain any IP SANs”. The problem was that my laptop sits behind a firewall and I had set up http(s) proxies. I had to set no_proxy for the local IPs before running kubectl to avoid this issue.
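A minimal sketch of the workaround — the minikube IP below is an assumption (192.168.99.100 is a common default); check yours with `minikube ip`:

```shell
# Tell proxy-aware tools to bypass the proxy for local addresses,
# including the minikube VM's IP, so kubectl connects directly.
export no_proxy="localhost,127.0.0.1,192.168.99.100"
export NO_PROXY="$no_proxy"
```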

2# I had a standalone Docker installed on my Mac OS X 10.12 and then installed minikube. minikube has its own Docker. I ran ‘eval $(minikube docker-env)’ to use minikube’s Docker to build something before deploying it to minikube. However, minikube’s Docker always failed to connect no matter what I tried, while my standalone Docker worked perfectly. I had to uninstall the standalone Docker and then reinstall minikube to fix this problem.
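For reference, pointing your shell’s docker client at minikube’s daemon looks like the sketch below (the image name is a placeholder):

```shell
# Point the docker CLI at the Docker daemon inside the minikube VM
eval $(minikube docker-env)

# Build directly into minikube's image cache, so no registry push
# is needed before deploying to minikube for local testing
docker build -t cities:dev .

# To switch back to the host's Docker daemon later:
# eval $(minikube docker-env -u)
```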

3# After I fixed 2#, I found there were no docker or docker-compose executables under /usr/local/bin (the symbolic links were broken), though the docker-machine executable did exist. I had to download a standalone Docker installer and reinstall it. After that, there was no need to launch Docker from the Applications panel, as I still use the Docker from minikube.

After 1#, 2# and 3#, minikube is fully set up on my Mac OS X 10.12.