All paths lead to Docker(Con)

DockerCon Europe 17 is already behind us, and I would like to share the path that led me to attend it. The journey was far from a straight line, and it might even give you some ideas on how to request your own trip next year.

How it all started

It all started with an anecdote that I was lucky enough to share with the ‘main victim’ during DockerCon:

Around two years ago, my Geek team decided to try learning Go(lang). The community was already well established, and how-tos, video trainings and examples were widely available.
Still, for whatever reason, we ended up downloading a pirated copy of a certain Nigel Poulton Go course.
We instantly became fans of both the person and the language. Throughout his course, however, he kept mentioning that Go was production ready because it was already the power behind Docker, this “new” container technology that was taking the world by storm.

As I always try to give back to Caesar when possible, I managed to have my company order me a one-year Pluralsight license (sorry for the initial steal), and when I looked up the trainings Mr Poulton had created, there were already three of them about Docker (fun fact: the Docker Learning Path could be called the Nigel Path to Docker Wisdom, as only his courses are listed).

While I saw a lot of potential in Docker, being in a company that runs exclusively on Microsoft technologies and applications made it a difficult sell at first.
Luckily for me, with Microsoft’s new direction and vision under Satya Nadella, the wait was not long: Microsoft embraced the Docker train instead of creating a similar (and compatible) technology of its own.
I could then give a first presentation of this technology to my CIO, and he was simply blown away by it.

The next logical step was to learn more about Docker in the industry (pharmaceuticals, if possible), and DockerCon seemed the right place to do so, as I knew there would be customer success stories and I could certainly learn about best practices for implementing it.

My CIO approved; however, as I am part of the Business Applications team, he told me I would get his sponsorship only if someone from IT Operations joined me.
Side note: for those who attended Lee Namba’s talk (MTA with Docker EE: From PoC to Production), my journey at that point was already in the pit. I didn’t even have the luxury of rejoicing at the top of the initial curve.

After a few discussions with the Head of Deployment, he appointed one of his direct reports to go with me, and that’s how I finally ended up at DockerCon.

There is still much to say, but I would prefer to discuss it in person over a good burger.

My final words/advice on this journey would be: your Docker journey can start in many ways, but it will (hopefully) always end in one of the greatest communities I’ve seen in a long time.
I have four words for you: Welcome to Docker(Con), Friend!

>>> Nunix out <<<

WSL: mount drives the docker way

So, I have this shiny Surface Pro 4 with a nice 64 GB SD card and, as you may know if you’re reading this blog, WSL does not (yet) mount SD cards or USB sticks.

I tried two “normal” ways, CIFS and SSHFS; however, when trying to mount the filesystem, I got some not-so-nice errors.

CIFS:

ref: https://www.swiftbyte.com/articles/linux/mounting-a-windows-share-in-ubuntu-on-boot

SSHFS:

ref: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
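
For context, and in case you want to reproduce the failures, these are roughly the commands those two guides boil down to. The host address, share name, credentials and mount points below are placeholders (not my actual setup); both approaches failed for me inside WSL at the mount step, since WSL did not support CIFS or FUSE mounts at the time.

    $ # CIFS: mount a Windows/Samba share (placeholder host, share and credentials)
    $ sudo apt-get install cifs-utils
    $ sudo mkdir -p /mnt/share
    $ sudo mount -t cifs //192.168.1.10/myshare /mnt/share -o username=myuser,password=mypass

    $ # SSHFS: mount a remote path over SSH (placeholder host and path)
    $ sudo apt-get install sshfs
    $ sudo mkdir -p /mnt/remote
    $ sshfs myuser@192.168.1.10:/remote/path /mnt/remote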

Then, while playing with Docker and WSL, I realized I could actually mount my SD card into a (normal) Docker container and then access it from the “normal/Linux” Docker client:

OK, so far everything is working as expected. A quick inspect shows us the configuration of the mount:
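
For those reading without the screenshots, here is roughly what that looked like. The container name is whatever Docker generated for my run, so treat it as a placeholder; the --format filter just narrows the inspect output down to the Mounts section.

    PS> # run a container with the SD card (D:) mounted at /mnt
    PS> docker run -it -v d:/:/mnt ubuntu bash

    PS> # from another console: show only the mount configuration of that container
    PS> docker inspect --format '{{ json .Mounts }}' <container-name>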

At this point, since my *nix skills are a bit rusty, I looked for an alternative to NFS, as my memories of it are not great; don’t blame me, blame my Masta instead.
Also, the high risk of (once again) not being able to mount it didn’t motivate me much (still, I will try it in another blog post if no one else does).

“Et la lumière fut!” (And then there was light!) I had already gone to the extent of running an SSH server inside a container (by the way, you don’t need one just to access the mount), so I could try RSYNC and… IT WORKED.

So here is the setup needed (a consolidated sketch of the whole flow follows the list):

  1. Have Docker for Windows installed
  2. Pull the Ubuntu image (I chose this one for “standardization” purposes)
  3. Create a new container from PowerShell (!) with the following options, following the configuration stated in Docker’s own Dockerfile for the SSH service:
    PS> docker run -it -p 2222:22 -v d:/:/mnt ubuntu bash

    • The reason is simple: PowerShell “sees” your external drive
    • This command will publish the container’s SSH port on your local port 2222 and mount your external drive at “/mnt”
  4. Install RSYNC in the container
    $ apt-get update && apt-get install -y rsync

    • You can exit your session without killing your container by pressing “CTRL+P” and then “CTRL+Q”
  5. Open a WSL Bash console
    • You can already connect with your WSL Docker client and see your drive mounted in “/mnt”
  6. Install RSYNC (if not already done) in WSL
    $ sudo apt-get update && sudo apt-get install -y rsync
  7. Try to sync a file from WSL to your external drive
    $ rsync -avh --progress -e "ssh -p2222" bigfile.txt root@192.168.192.55:/mnt/

    • Please note that you need your “local IP” here, as we mapped local port 2222 to the SSH port of the container
    • For this first test, I created a 1 GB file and synced it
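
Putting it all together, here is a condensed sketch of the whole flow under the same assumptions as above: D: is the external drive, 2222 is the published SSH port, and the SSH server inside the container is configured following Docker’s own sshd Dockerfile example (root password, PermitRootLogin), which I don’t repeat in full here. Replace <your-local-IP> with your machine’s address.

    PS> # steps 1-3, on Windows: Ubuntu container with D: at /mnt and SSH published on port 2222
    PS> docker run -it -p 2222:22 -v d:/:/mnt ubuntu bash

    $ # step 4, inside the container: install rsync plus an SSH server and start it
    $ # (root login/password configuration as per Docker's sshd Dockerfile example, not shown)
    $ apt-get update && apt-get install -y rsync openssh-server
    $ mkdir -p /var/run/sshd && /usr/sbin/sshd

    $ # steps 5-7, from WSL: install rsync and push a file to the external drive via the container
    $ sudo apt-get update && sudo apt-get install -y rsync
    $ rsync -avh --progress -e "ssh -p 2222" bigfile.txt root@<your-local-IP>:/mnt/

The trick is simply that Docker for Windows can mount the external drive where WSL cannot, and rsync over the published SSH port bridges the two worlds.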

And there you have it: you can now sync your files to your external drives.
I will do a follow-up blog post with another sync tool: Unison (two-way sync for the win).

>>> Nunix out <<<