Modernizing my network with Docker

A recent project I’ve completed involved some fun Docker usage.

Docker has become a fun industry buzzword whenever someone wants to build a modern web application – and for good reason, I might add. Docker allows software to be containerized in isolated sections of an operating system. The best part is that everything is self-contained, which makes updating and administering these containers a breeze.

However, to actually reap the benefits of Docker, you have to use it properly. For starters, if volumes are not set up, you can lose data when updating a container. If containers are not configured with a restart policy, they won't come back up automatically after the server reboots.
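As a sketch of what I mean (the volume and container names here are just examples), a container can be started with a named volume and a restart policy so its data and availability survive updates and reboots:

```shell
# Create a named volume so the container's data lives outside the container itself
docker volume create wordpress-data

# --restart unless-stopped brings the container back up after a reboot;
# -v mounts the named volume so data survives container updates
docker run -d \
  --name some-wordpress \
  --restart unless-stopped \
  -v wordpress-data:/var/www/html \
  wordpress
```

With this in place, replacing the container (for example to update the image) leaves everything under /var/www/html intact in the volume.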

These are some of the key features I skipped over when I first started with Docker. As a result, migrating my data over to a proper Docker setup was a royal pain.

Another factor that plays into Docker is the security model behind the scenes. Did you know that the Docker daemon on Linux runs as the root user? Did you also know that the core design of Docker goes against the philosophy of Linux daemons? Linux is supposed to be a collection of small tools that each do one thing well – small things that work 100% of the time. There isn't supposed to be a large single point of failure running the show (most of the time).

So, considering these factors, I’ve also been playing with a Docker alternative called Podman.

Creating a Podman server?

Podman is an independent containerization tool like Docker, built by Red Hat. The idea behind Podman is that it is simply a more secure and stable take on Docker. Without getting into a massive deep dive, the main differences are as follows:

Podman can run containers under regular user accounts (rootless mode), not just the root account.

Podman uses a daemonless architecture. It is composed of multiple smaller pieces and applications working together rather than being an all-in-one solution like Docker, which arguably makes for a more stable platform.

Docker and Podman both have plenty of pros and cons. My primary homelab direction has always been overkill security, so I’m tending to lean towards Podman for my containers.
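A quick way to see the rootless difference for yourself (a sketch, assuming Podman is installed and you're logged in as a regular user; the container name is just an example):

```shell
# Run a throwaway container as a normal, unprivileged user
podman run --rm -d --name rootless-test docker.io/library/nginx

# The container's main process shows up under your own UID, not root
ps -o user= -p "$(podman inspect --format '{{.State.Pid}}' rootless-test)"

# Clean up
podman stop rootless-test
```

The equivalent container under Docker would be a child of the root-owned Docker daemon.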

And use I shall:

I ended up building 3 containerization servers, all running Rocky Linux as the core operating system, since Rocky Linux is the community replacement for CentOS.

DOCKER_JAGUAR is my main Docker server.

DOCKER_LION and DOCKER_TIGER are my Podman servers.

All 3 of them have Cockpit enabled, and I use my ROCKY_COCKPIT server as the jump point into them.

Cockpit has proved to be an amazing feature built into Rocky Linux. Each box is set up with an SSH key and a randomly generated password. I love being able to access all of my servers from a web GUI:

One of the best features of Podman is the default integration with Cockpit on Rocky Linux:
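On Rocky Linux that integration comes from the cockpit-podman package. If it isn't already present, the setup is roughly this (a sketch; package names are as shipped in the Rocky/RHEL repositories):

```shell
# Install Cockpit and its Podman plugin
sudo dnf install -y cockpit cockpit-podman

# Enable the Cockpit web console (listens on port 9090 by default)
sudo systemctl enable --now cockpit.socket

# Enable the Podman API socket so Cockpit can manage containers
sudo systemctl enable --now podman.socket
```

After that, the Podman page appears in Cockpit's left-hand menu.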

When signing into a server, the Podman option in the left menu lets us directly manage the containers running on that server:

This native functionality is great, and makes managing these containers so much easier.

Podman replaces the docker command when managing containers. For example, to create a container from the command line, you’d enter:

podman run --name some-wordpress --network some-network -d wordpress

Instead of 

docker run --name some-wordpress --network some-network -d wordpress

This extends to other commands as well: “docker container ls -a” becomes “podman container ls -a”. It makes switching between the two platforms easy, since you only need to remember a single command syntax.

The biggest issue I’ve had with Podman is that it doesn’t seem to have native support for Docker’s Compose functionality. The biggest benefit of Docker Compose is that a single .yml file can define multiple containers, or a complex single container, and bring them all up with a simple “docker compose up” command.
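To illustrate what Compose buys you, here's a minimal hypothetical example: one .yml file describing a WordPress container and its database, brought up together with one command (image names, ports, and the placeholder password are all just for illustration):

```shell
# Write a docker-compose.yml describing two linked containers
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: example      # placeholder password
      MYSQL_DATABASE: wordpress
    volumes:
      - db-data:/var/lib/mysql
  wordpress:
    image: wordpress
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example    # placeholder password
    depends_on:
      - db
volumes:
  db-data:
EOF

# One command creates the network, volumes, and both containers
docker compose up -d
```

Tearing the whole stack down again is just as simple: docker compose down.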

There are scripts you can install and integrate with Podman, but I hesitate to use them since they are not officially baked into the software. Sure, they’ll probably work most of the time – but as I’ve learned in IT: we never want to depend on third-party fixes or solutions, unless we’re paying for them.

Docker server setup

As Podman doesn’t support Compose, I opted to place a lot of my modern services on my DOCKER_JAGUAR server.

The DOCKER_JAGUAR server again uses Rocky Linux with Cockpit as its core operating system. However, I installed the latest version of Docker from the repositories hosted on Docker’s main website. The install process was relatively simple. Sadly, Cockpit does not have native support for Docker, as the project has chosen Podman as its main container software. There are some alternative plugins available, but again, I’m looking for the most production-like out-of-box experience (I’m sure the alternative plugins are awesome though).

So, having been spoiled by a GUI experience, I wanted to keep that option. A perfect Docker container for the job: Portainer.
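Portainer deploys as a container itself; from memory, the quick-start for the Community Edition looks roughly like this (treat it as a sketch and check Portainer's docs for the current tag and ports):

```shell
# Persistent volume for Portainer's own data
docker volume create portainer_data

# Portainer needs the Docker socket mounted so it can manage the host's containers
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Once it's running, the web UI is available over HTTPS on port 9443.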

Expanding my homelab with some servers and RAM.

 

New additions

A few months ago I took a trip up to Courtenay to pick up some servers I won via BCAuction.

Sadly, the servers I picked up were not filled to the brim with precious DDR4 RAM as I had originally hoped. Ultimately, I was able to collect 64 GB between the 6 servers I picked up. Which is pretty great, to be honest!

I picked up these servers from Courtenay:

3x ProLiant DL360 Gen9

2x ProLiant DL360p Gen8

1x ProLiant DL380p Gen8

The goal was to replace my existing Dell R710’s with some newer hardware that would be more power efficient and support later versions of ESXi.

Sadly, the final tally of my Courtenay trip only really resulted in 1 fully spec’d-out server, and even then it had only 64 GB of RAM.

So, naturally I had to either make a choice of butchering my Dell R710’s for their RAM, or find some other source of RAM…

And so I did!

I logged into BCAuction again and placed a healthy bid on some freshly pulled RAM sticks.

Yes! I won a lovely auction for 48 sticks of server RAM:

That’s right! My homelab just got a huge upgrade – 768 GB worth of an upgrade!

2 weeks later I had a nice package of RAM on my doorstep. After a bunch of testing, it looks like 6 of the modules they sent are dead, which leaves me with 42 sticks of functional RAM that I’ve been able to insert into my servers.

 

Did someone say RAM?

After another little while, I’ve completed a bit of a crazy upgrade to my homelab. I think the next thing I purchase should probably be a server rack…

Here is a gallery of the new servers: 

Upgrading the networking equipment of my homelab.

I have been using very basic and very cheap TP-Link 1 Gb 8-port switches in my homelab for probably 4 years now. These little switches have actually been pretty amazing: zero performance problems, and really no issues of any kind. They have been running non-stop that entire time – impressive!

As part of my personal project to overhaul my homelab, I realized I was reaching my maximum switching capacity. So, it was time to do a bit of upgrading.

I’ve been part of the Ubiquiti ecosystem for a few years now, albeit on a much smaller scale. I’ve owned and operated a UniFi Wi-Fi AP for years as an introduction to enterprise networking. While I haven’t had a lot of opportunity to use Ubiquiti gear, I am familiar with their controller software and general ecosystem, and I’ve been fairly impressed with their offerings.

Naturally, I figured it would make sense to keep them in mind when purchasing new networking equipment.

I bought 2x US-24-G1 switches from Ubiquiti to help manage my homelab network. The biggest selling points for these switches were their integration with the UniFi controller and the fact that they are managed switches for around $200.

I particularly love that I can now split up traffic between my out-of-band host management NICs, iSCSI storage, and aggregated connections between the switches.

Here’s a bit of a sneak peek at my latest project… cable management:

Really needing a rack mount…

Both switches in the Unifi controller