
I am so, so sad… last November, they killed off the Heroku free tier. Woe! I probably owe my career and a vast amount of my learning to the fact that I was able to deploy a whole frickin Rails monolith back then with a simple git push heroku master.
Git push heroku master…
Git push heroku master…
Has a beautiful ring to it, doesn’t it? Boom, you deploy, and it’s online in seconds. How beautiful is that? And it doesn’t cost a dime, son. Sure, your app goes to “sleep” after a while and there’s a plethora of restrictions, but it doesn’t cost a dime! Oh! Did I also mention that you get a free managed PostgreSQL database? OH MY GOODNESS, Heroku I love you.
It’s perfect to teach and learn, to run labs, proof-of-concepts, exercises, tutorials, etc. I taught people how to build a whole-ass imageboard with it back then. Thank you, Rails and Heroku, for your amazing generosity. I owe my career to you :)
Alas, we’re grumpy old men who have to worry about endless scalability™ to ~~keep Moloch happy~~ ~~not get the ax~~ make our goals and we don’t get Heroku free anymore, and this means my Mario Tarot is not online anymore.
RIP https://smbdxfortunes.herokuapp.com/
Even worse, this means that now we have to use hacks to run our apps on shit like AWS Lambda ~~because the solutions architect hypnotized the CTO and now we are tied to the Amazon ecosystem~~ to try to run things on the cloud and not break the bank for Jeffrey B., and we end up with bullshit like this to run fucking WordPress:

And… no! As much as I love Heroku, I don’t want to pay the $5 a month for the eco dyno. I want it my way because I’m a stupid fool! I learn the Hard Way because I’m a fucking idiot!
I tried mounting Mario Tarot on Lambda without success (I was perfectly aware that this wasn’t the use case, but I’ve seen people hack AWS Lambda a lot for things like this).
You know what? And I quote, “fuck this”. I’m not going to pay AWS for this. FORGET ABOUT THE CLOUD. Let’s mount this on-prem (or “edge” cloud computing as people will tell you, which is kind of a fancy marketing buzzword for “a local server”, but, sure, “cloud on the edge”) with the Raspberry Pi cluster I set up the other time. I’ve got a lot of compute there which I can repurpose for my Homebridge server, etc.
- Convert the app to SQLite:
- This is a personal choice. I could actually persist all of the fortunes in a Ruby module, under a trie-like hashmap-of-hashmaps, but I’m lazy, so SQLite it is, because I want to reuse the SQL script I used with Heroku (RIP, liek if u cri everytiem). I can just package the database file into the container image and we’re off to the races.
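For the curious, the hashmap-of-hashmaps alternative I was too lazy for would look something like this. This is a purely hypothetical sketch — the fortune text, keys, and the `fortune_for` method name are all made up for illustration:

```ruby
# Hypothetical sketch: fortunes keyed by character and mood,
# as a hash-of-hashes in a plain Ruby module instead of a SQLite table.
FORTUNES = {
  "mario" => {
    "lucky"   => "A Super Star crosses your path today.",
    "unlucky" => "Beware of Lakitu dropping Spinies on your plans."
  },
  "luigi" => {
    "lucky" => "Year of Luigi energy: long-overdue recognition arrives."
  }
}.freeze

# Look up a fortune; Hash#dig returns nil on any missing key,
# so we fall back to a default instead of raising.
def fortune_for(character, mood)
  FORTUNES.dig(character, mood) || "The Mushroom Kingdom is silent on this one."
end

puts fortune_for("mario", "lucky")
puts fortune_for("waluigi", "any")
```

No SQL script reuse, no extra gem, but also no ad-hoc queries — which is exactly why I went with SQLite instead.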
Dockerize app:
Add a Dockerfile. Notice that we’re binding the Rack server to 0.0.0.0 instead of the loopback (aka localhost), so it’s reachable from outside the container:
FROM ruby:2.7.7
WORKDIR /app
COPY . /app
RUN gem install bundler:1.16.0
RUN bundle install --system
EXPOSE 4567
CMD ["ruby", "fortune.rb", "-o", "0.0.0.0"]
- Build the image for ARM v7, because Raspberry Pis run on ARM! Then push it to a repository (don’t forget the tag!):
docker buildx build --platform linux/arm/v7 -t nullset2/smbdxfortunes:latest --push .
If all goes well you can test with docker run -p 4567:4567 nullset2/smbdxfortunes:latest and you can hit localhost:4567.
Deploy on Kubernetes:
- We are going to design a Pod consisting of an application container exposing port 4567 (which is what Sinatra is listening on) and a separate container which updates our dynamic DNS on no-ip.
- Why use no-ip? I am running this from the nullhouse, and like most everyone else, I do not have a static IP at home (for most intents and purposes as a consumer you don’t need one: you’re behind NAT, and ISPs charge you extra if you want a static IP).
- Thus I need dynamic DNS, so the dynamic public IP of my router is exposed through a static hostname (smbdxfortunes.zapto.org) and I can reliably access it from anywhere on the Internet.
- One way to achieve this is with the no-ip DUC, which I could run on a cron on my cluster. The only downside is that you have to sign in every 30 days to confirm that you still want your zapto.org subdomain (sigh… eh, it’s not that bad…).
- Instead, I will integrate dynamic DNS with the docker image aanousakis/no-ip:latest. This will run at certain intervals, getting the public IP of the router and feeding it to no-ip, so we don’t have to.
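As an aside, if you’d rather skip the container and drive the update yourself, No-IP exposes a dynamic update endpoint you can hit from a cron job. A rough sketch of the crontab entry — the credential variables are placeholders, and you should double-check the endpoint and parameters against No-IP’s own docs:

```
# Update no-ip every 30 minutes with this machine's current public IP
# (no-ip infers the IP from the request if you omit the myip parameter)
*/30 * * * * curl -s -u "$NOIP_USER:$NOIP_PASS" "https://dynupdate.no-ip.com/nic/update?hostname=smbdxfortunes.zapto.org"
```

The container does essentially the same thing, minus me having to remember cron exists.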
Besides the Pod, I need an Ingress and a Service.
- Ingresses are how Kubernetes allows things from outside the cluster to connect to the internal networking of the cluster, which is virtual, simulated.
- What we are interested in, the Sinatra app, resides in a pod, in a container exposing port 4567.
- A pod can contain many containers which expose different ports. These containers can be scaled up or down at will and will change IP addresses (in the internal kubernetes virtual networking) over time.
- So, a Service must be mounted between the Ingress and the Pod, so the Pod exposes a uniform interface.
- An Ingress requires an Ingress Controller. Lucky for us, we get traefik out of the box with k3s, covering both the Ingress Controller and the Ingress part.
- Look at your Kubernetes services and you should notice that there’s an EXTERNAL-IP for the traefik LoadBalancer. This is what you’re going to feed to your router in a later step.
- So we just need to configure an Ingress on port 80, which reaches the Pod through a Service on port 80, which connects to the Sinatra container on port 4567.
- This is the full manifest; save it as deploy.yaml:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smbdxfortunes
  namespace: default
  labels:
    app: smbdxfortunes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smbdxfortunes
  template:
    metadata:
      labels:
        app: smbdxfortunes
    spec:
      volumes:
        - name: tz-config
          hostPath:
            path: /etc/localtime
      containers:
        - name: smbdxfortunes
          imagePullPolicy: IfNotPresent
          image: nullset2/smbdxfortunes:latest
          ports:
            - name: web
              containerPort: 4567
        - name: no-ip
          image: aanousakis/no-ip:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: USERNAME
              value: {{REDACTED}}
            - name: PASSWORD
              value: {{REDACTED}}
            - name: DOMAINS
              value: smbdxfortunes.zapto.org
          volumeMounts:
            - name: tz-config
              readOnly: true
              mountPath: /etc/localtime
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
---
apiVersion: v1
kind: Service
metadata:
  name: smbdxfortunes
spec:
  ports:
    - name: web
      port: 80
      targetPort: web
  selector:
    app: smbdxfortunes
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: smbdxfortunes-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: smbdxfortunes
              servicePort: 80
- NOTE: I had to use the networking.k8s.io/v1beta1 version of Ingress because my meek, poor k3s cluster is fixed to Kubernetes 1.17; otherwise the master node would just blow up because of etcd. Alas!
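For anyone on a less cursed cluster: the v1beta1 Ingress API was removed in Kubernetes 1.22, so on 1.19+ you’d write the same Ingress against networking.k8s.io/v1 instead, roughly like this:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: smbdxfortunes-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: smbdxfortunes
                port:
                  number: 80
```

Same intent, just with the mandatory pathType field and the backend restructured into service.name/service.port.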
- Enable port forwarding on my router:
- When your router gets requests from the outside world through Dynamic DNS, it will not know what to do with them unless explicitly told, through a technique called “Port Forwarding”.
- Depending on your ISP, Port Forwarding may actually be explicitly forbidden, so you may want to double-check on this; personally I’m using Comcast and I know they allow for this in the admin interface. I actually don’t use the Xfinity router, though; instead, I set the ISP box to what’s known as “bridge mode”, which makes it act as a passthrough to my personal router running FreshTomato custom firmware, so I need to set up the port forwarding rules in FreshTomato instead.
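The rule itself is a one-liner, whatever your firmware calls the fields: forward external TCP port 80 to the traefik LoadBalancer’s EXTERNAL-IP on port 80. Roughly like this — the internal address here is a made-up example; use the EXTERNAL-IP you saw in your Kubernetes services:

```
Proto  Ext Ports  Int Port  Int Address    Description
TCP    80         80        192.168.1.240  smbdxfortunes -> traefik LoadBalancer
```

If your firmware only has an “internal port” field, leaving it blank usually means “same as external”.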
- This is how my port forwarding looks:

- Get it all in:
kubectl apply -f deploy.yaml
It’s done. Check out http://smbdxfortunes.zapto.org/. To conclude, I present an Architecture Diagram for your enjoyment:
