Docker and the private repo nightmare

If you've ever had to include a private repository in your Docker build, then you know it's a nightmare. At least in the beginning, when you've never done it before. * Spoiler alert: use the vendor folder (Daaaaa) *

A lot of people think they can just share or bind-mount their SSH folder with the keys and be able to access the repositories using go get, dep, or even glide.

What they don't realize is that the mounted folder won't be available during docker build, which runs without access to host mounts. And you most definitely don't want to bake your keys into the image.

I've spent the last several days reading articles on Medium, digging deep on Stack Overflow, and combing various other sources trying to figure this out.

In the end, I solved the problem. I don't love the way I solved it, but it works and it's secure. I never had to touch my private keys or make them available to the build, and I was able to use public and several different private repositories. So, how did I do it?

Well, I used dep (glide works as well) to create a vendor folder. But instead of telling .gitignore and .dockerignore to ignore that folder, I actually added it to the repository. This is the part I don't like: now all of those vendored packages live in my repository.

But if you stop and think about it, any developer with access to the project can add and remove packages as needed, and all authentication is handled by the host machine as normal.

Now, when you use Docker to build a new image, your entire vendor folder is there with all packages. No authentication is needed because they’re already part of the repository.

So you're thinking, "You made the repository bigger, so now my container is going to be bigger with all those packages." That's true if you do a simple single-stage build, base your image on golang, and just run your project. A multi-stage build avoids this: the first stage compiles the binary, and the final stage copies only that binary into a minimal image.

The resulting image from docker build is a mere 6 MB, down from 295 MB. The purpose of this article is not to teach you how to use Docker, so I'm not going to go through everything line by line. I'll just provide the examples, because if you're reading this article you're probably having the same problem I was having and already have an understanding of the files and tools being used.
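As a starting point, here is a minimal sketch of what such a .dockerignore can look like; the specific entries are illustrative, not an exact file:

```
# .dockerignore (illustrative sketch)
.git
.idea
.vscode
*.swp
*.log
# Deliberately NOT listed: vendor/ (it must ship with the build context)
```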

You can modify this as you see fit. Add and remove items, but it's an ignore file, so I see no reason to remove anything: just because you don't use a particular tool or editor doesn't mean somebody on your development team doesn't, or won't in the future.

Our next step is to build the vendor folder. As I stated before, I decided on dep. So, change into your project root folder and run this command.
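Assuming a standard GOPATH layout, the commands look something like this (the project path is a placeholder):

```
cd $GOPATH/src/github.com/you/yourproject
dep init     # first run: creates Gopkg.toml, Gopkg.lock, and the vendor folder
dep ensure   # later runs: syncs the vendor folder with Gopkg.toml
```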

It's going to take a while on the first run, depending on the size of your project. When it's complete, you should see that you now have a vendor folder. Now let's add it to our repository.
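Committing the vendor folder is just a normal git add; something along these lines, with the branch name as a placeholder:

```
git add vendor Gopkg.toml Gopkg.lock
git commit -m "Add vendored dependencies"
git push origin <your-branch>
```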

That's it. Hopefully, you created a branch before starting your update, and don't forget to push to the repository. Now it's time to build.
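Here is a sketch of what a multi-stage Dockerfile for this setup can look like; the Go version, paths, and project name are placeholders, not an exact file:

```
# Stage 1: compile inside the full golang image
FROM golang:1.11 AS builder
WORKDIR /go/src/github.com/you/yourproject
COPY . .
# The committed vendor folder is picked up automatically under GOPATH,
# so no credentials are needed here.
RUN CGO_ENABLED=0 GOOS=linux go build -o /app .

# Stage 2: copy only the compiled binary into an empty image
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

One caveat: if your binary makes outbound TLS calls, you will also need to copy CA certificates into the scratch image.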

Now, I will explain some of the parameters here to save you a hunt through the Docker documentation.
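For reference, the build command looks something like this (the image name is a placeholder):

```
docker build --rm --squash -t yourproject:latest .
```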

The --rm flag removes intermediate containers after a successful build. This leaves just the final image.

The --squash flag squashes the newly built layers into a single new layer. This is an experimental feature, so you will have to enable experimental features manually in your Docker Desktop configuration.
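In Docker Desktop, that means adding the experimental flag to the daemon configuration (Preferences, then the daemon/engine settings), roughly:

```
{
  "experimental": true
}
```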

The rest is self-explanatory, and you should know it already. Now that we have a successful build, here is the run command I use when testing locally.
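Something along these lines, with the port and names as placeholders:

```
docker run --rm -p 8080:8080 --name yourproject yourproject:latest
```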

Obviously, if you’re using different ports or not using any ports at all you can remove that part. Other than that, just change the names for both commands to suit your project and you should be good to go. Also, keep in mind, these are the commands I use for local Docker Desktop building and running. The production uses slightly different commands and port values.

Please feel free to comment and give me feedback on this, or on anything I may be doing wrong, or at the very least doing the hard way. Maybe you have an improvement? Great, send it my way.

For further reading on multi-stage Dockerfiles, check out the excellent story by Kynan Rylee: Fun with Multi-stage Dockerfiles.
