The incomparable Jessica Kerr drops by with a grab-bag of amazing topics. Understanding software systems, transferring knowledge between devs, building relationships, using VS Code & Docker to code together, observability as a logical extension of TDD, and a whole lot more.
Build your own using these as a reference, or simply pull them as-is from Docker Hub.
If you’re using Docker, the next natural step seems to be Kubernetes, aka K8s. Or is it? If you’re part of a small team, Kubernetes probably isn’t for you: it’s a lot of pain for very little benefit.
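For a sense of the simpler alternative many small teams land on: plain Docker Compose. This is a minimal sketch (service names and images are illustrative, not from the article):

```yaml
# docker-compose.yml — hypothetical two-service stack
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped   # survives reboots without an orchestrator
  db:
    image: postgres:12
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

A `docker-compose up -d` gets you declarative config and automatic restarts with none of the cluster overhead.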
If you’ve been following along in the open source news cycle lately, you’ve probably heard that Red Hat has dropped the docker container runtime engine from both its Red Hat Enterprise Linux (RHEL) and CentOS Linux distributions.
I must not be following along, because that’s news to me.
That being the case, what do you do when you need to deploy containers? Fortunately, Red Hat has created a near drop-in replacement for docker, called Podman.
Podman is a rename from kpod, sorta. The new thing is actually called libpod, and Podman exists as the CLI for that library. It’s all a bit confusing, but what’s cool is none of this requires a daemon like the Docker Engine.
If you’d like to give it a go, this walk-through by The New Stack will get you started.
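To give a flavour of how close the CLIs are, here’s a sketch (assumes Podman is installed; exact behaviour may vary by version):

```shell
# Podman mirrors most of the docker CLI, so a simple alias
# covers many day-to-day workflows.
alias docker=podman

# These now run daemonlessly via libpod instead of talking to dockerd:
docker pull alpine:latest
docker run --rm alpine echo "hello from podman"
docker ps -a
```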
Gives you access to a virtualised ARM-based Raspberry Pi machine running the Raspbian operating system. This is not just a Raspbian Docker image; it’s a full ARM-based Raspberry Pi virtual machine environment.
How it does its thing:
A full ARM environment is created by using Docker to bootstrap a QEMU virtual machine. The Docker QEMU process virtualises a machine with a single core ARM11 CPU and 256MB RAM, just like the Raspberry Pi. The official Raspbian image is mounted and booted along with a modified QEMU compatible kernel.
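In practice that boils down to a single docker run — sketched below with a placeholder image name (the project’s actual published image isn’t named in this blurb):

```shell
# Boot the emulated Pi (image name is a placeholder, not verified).
# The container launches QEMU, which boots Raspbian on an emulated
# single-core ARM11 CPU with 256MB RAM.
docker run -it --rm <project-image>

# The emulated serial console appears in your terminal; log in with
# the default Raspbian credentials (pi / raspberry).
```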
DockerSlim promises a lot:
docker-slim will optimize and secure your containers by understanding your application and what it needs, using various analysis techniques. It will throw away what you don’t need, reducing the attack surface of your container. What if you need some of those extra things to debug your container? You can use dedicated debugging side-car containers for that.
Their minification examples are impressive…
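Basic usage is a single command — a sketch based on the project’s documented CLI (the image name is illustrative):

```shell
# Analyze the image, trace what the app actually uses at runtime,
# and emit a minified variant (typically tagged <image>.slim).
docker-slim build my-app:latest

# Compare sizes before and after:
docker images my-app
```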
A new cryptojacking worm, named Graboid, has spread to more than 2,000 Docker hosts, according to the Unit 42 researchers from Palo Alto Networks. This is the first time such a piece of malware has spread via containers within the Docker Engine (specifically docker-ce).
Scary stuff, and (at the moment) difficult to detect & prevent:
We’ve reached a point with containers where security must be constantly on the front burner. Antivirus and anti-malware applications currently have no means of analyzing and cleaning containers and container images. That’s the heart of the issue.
Graboid may be the first malware to target containers, but it certainly won’t be the last.
Memorising docker commands is hard. Memorising aliases is slightly less hard. Keeping track of your containers across multiple terminal windows is near impossible. What if you had all the information you needed in one terminal window, with every common command living one keypress away (and the ability to add custom commands as well)? Lazydocker’s goal is to make that dream a reality.
If you’d like to follow along with someone who “has no idea what they’re doing” to learn how to take a base Docker image made with a single-line Dockerfile (FROM debian:latest) and convert it to something launchable, then read on…
…messing about with things like this is the only way to gain extra knowledge of any system internals. We are going to speak Docker and Linux here. What if we want to take a base Docker image, I mean really base, just an image made with a single-line Dockerfile like FROM debian:latest, and convert it to something launchable on a real or virtual machine? In other words, can we create a disk image having exactly the same Linux userland a running container has, and then boot from it?
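The starting point of that experiment can be sketched with standard docker commands — getting the container’s userland out as a plain file tree (the disk-formatting, kernel, and bootloader steps are the article’s territory and are elided here):

```shell
# Create (but don't start) a container from the bare image, then dump
# its filesystem as a tar stream.
docker create --name rootfs debian:latest
docker export rootfs > rootfs.tar

# Unpack onto a mounted, formatted disk image; with a kernel and
# bootloader added, this tree can be made bootable.
mkdir -p /mnt/target
tar -xf rootfs.tar -C /mnt/target
```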
Docker builds can be slow, so you want to use Docker’s layer caching, reusing previous builds to speed up the current one. But there’s a down-side: caching can lead to insecure images. Read on to learn why, and what you can do about it.
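The core problem is that a cached layer freezes whatever packages existed when it was first built, so a cached `apt-get upgrade` layer never picks up new security patches. One common mitigation (a sketch, not the article’s only recommendation) is a periodic full rebuild:

```shell
# Rebuild from scratch on a schedule, re-pulling the base image too,
# so security updates actually land in the final image.
docker build --pull --no-cache -t my-app:latest .
```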
Run a secure DoT (DNS-over-TLS) and DoH (DNS-over-HTTPS) DNS server that can do ad blocking and hide your DNS query from your ISP.
Developers, often lacking insight into the intricacies of Docker, may set out to build their Node.js-based Docker images by following naive tutorials that lack good security practices for how an image is built. One such nuance is the use of proper permissions when building Docker images.
To minimize exposure, opt to create a dedicated user and a dedicated group in the Docker image for the application; use the USER directive in the Dockerfile to ensure the container runs the application with the least privileged access possible.
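A minimal sketch of that advice for a Node.js image (names are illustrative; the addgroup/adduser flags shown are BusyBox/Alpine style):

```dockerfile
FROM node:12-alpine

# Create an unprivileged group and user for the app.
RUN addgroup -S app && adduser -S -G app app

WORKDIR /app
COPY --chown=app:app . .

# Everything from here on runs without root.
USER app
CMD ["node", "server.js"]
```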
We’re talking with Gerhard Lazu, our resident ops and infrastructure expert, about the setup we’ve rolled out for 2019. Late 2016 we relaunched Changelog.com as a new Phoenix/Elixir application and that included a brand new infrastructure and deployment process. 2019’s infrastructure update includes Linode, CoreOS, Docker, CircleCI, Rollbar, Fastly, Netdata, and more — and we talk through all the details on this show.
Attention Docker Hub users — Docker Hub has been hacked, so check your email to read the report from Kent Lamb, Director of Docker Support, and take appropriate action. Here are the details…
During a brief period of unauthorized access to a Docker Hub database, sensitive data from approximately 190,000 accounts may have been exposed (less than 5% of Hub users). Data includes usernames and hashed passwords for a small percentage of these users, as well as GitHub and Bitbucket tokens for Docker autobuilds.
From lugg on Hacker News:
If you got an email you should:
Containerization technologies are one of the trendiest topics in the cloud economy and the IT ecosystem. The container ecosystem can be confusing at times; this post may help you understand some of its more confusing concepts. We are also going to see how the containerization ecosystem evolved, and the state of containerization in 2019.
Put on your swimming suit, because this is a deep dive. 🏊‍♀️🏊
Instantbox spins up a temporary Linux system with instant webshell access from any browser. Great for presentations, demos at schools and user groups, testing out random ideas, and more.
Distros supported include Ubuntu, CentOS, Arch Linux, Debian, Fedora, and Alpine.
The new changelog.com setup for 2019 is packed with exciting features that are too good to keep to ourselves. Since the infrastructure code is already public and has been running changelog.com for a few months now, what we’re sharing is proven, not theoretical.
Today containerd graduated within the CNCF to join the ranks of Kubernetes, Prometheus, Envoy, and CoreDNS as a “graduated” project in the CNCF. From Michael Crosby on the Docker blog:
We are happy to announce that as of today, containerd, an industry-standard runtime for building container solutions, graduates within the CNCF.
From Docker’s initial announcement that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the last two years. The primary goal of Docker’s donation was to foster further innovation in the container ecosystem by providing a core container runtime that could be leveraged by container system vendors and orchestration projects such as Kubernetes, Swarm, etc.
The adoption of application container technology is increasing at a remarkable rate and is expected to grow by a further 40% in 2020, according to 451 Research. System libraries are present in many Docker images, as those images rely on a parent image that commonly uses a Linux distribution as its base.
In many cases, remediation is as simple as rebuilding the image or swapping out the base image, but it’s not always that easy. Click through for more analysis and advice.
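As a sketch of what “swapping out the base image” can look like in practice, it’s often just the first line of the Dockerfile — moving to a slimmer base that ships far fewer system libraries (image tags here are illustrative):

```dockerfile
# Before: full Debian base, hundreds of system packages to patch
# FROM node:10

# After: slimmer base, far fewer libraries showing up in scan results
FROM node:10-slim
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
```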
Testing code that talks to the database can be slow. Fakes are fast but unrealistic. What to do? With a little help from Docker, you can write tests that run fast, use the real database, are easy to write and run.
I tried Itamar’s technique on changelog.com’s test suite and the 679 tests complete in ~17 seconds. The same tests run directly against Postgres complete in ~12 seconds.
A net loss for me, but that may have something to do with how Docker for Mac works? I’d love to hear other people’s experiences.
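If you want to try something similar, the usual trick is trading durability for speed — fine for a throwaway test database. A sketch using the official postgres image (flags assumed, check your version):

```shell
# Keep Postgres' data directory in RAM (tmpfs) and disable fsync —
# both unsafe for production, both big wins for test latency.
docker run -d --rm --name test-db \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  --tmpfs /var/lib/postgresql/data \
  postgres:12 -c fsync=off -c full_page_writes=off
```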
Discover ways to shrink your Docker image size by exploring its contents broken down by layer.
In this highly visual and scroll-friendly post from Daniele, you’ll follow the evolution of monolith, to components, to VMs, to today’s world of Kubernetes and cloud. Daniele writes:
Kubernetes and Docker? What is the difference? Is it just a fad, or are these two technologies here to stay? If you’ve heard about Docker and Kubernetes but aren’t sold on the idea and don’t see the point in migrating, this article is for you. Learn how you can leverage Kubernetes to reduce infrastructure costs and accelerate your software delivery.
Nick Parsons writes on Hacker Noon:
This article will be your one-stop shop for Docker, going over some of the best practices and must-know commands that any user should know.