My Tech Stack

[Photo: Precipice Peak at sunset, San Juan Mountains]

Be Opinionated

I take all "my OS is better than yours" banter with a grain of salt.

(And I give a lot of "Linux is better" banter, lol.)

It's true, if all you have is a hammer, everything looks like a nail.

We all think "my tool is the best tool for all the jobs."

And that's ok. As long as it's informed.

Don't be the kid who just learned his first language and thinks that language is the world's best at all things.

But do be opinionated about what tools you use and, more importantly, why.

I love meeting people who have a tech stack opinion different from mine, are passionate about it, and can back it up with reason.

I walk away having learned something new.

We need more of them.

The world is full of people who have opinions and are passionate about them, but often all that passion exists precisely because there's no reasoning behind it.

The more fact-based reasons you have behind your tech opinions, the less you'll feel like you need to argue about them, especially with someone on the internet.

'Nuff said.

Just The List

  • Dev Environment
    • vscode
    • Linux (Ubuntu)
  • App Stack
    • Nodejs
      • NextJs
      • NextAuth
      • Socket.io
    • MongoDB
    • Redis
  • Cloud Infrastructure
    • Kubernetes
    • gitOps / ArgoCD
  • git Repo and CI/CD
    • GitLab
      • Bash
      • Docker
  • Package/Container Publishing
    • npmjs.com
    • Docker Hub

Dev Environment

vscode

Way back in the day, I was a VC++ kid. C++ has always been my favorite language, even though I don't write much in it these days.

Before VC++, I used the DOS edit command to edit files (I wrote BASIC, my first lang), so the early Visual Studio experience was amazing to me: integrated debugging, quick variable/function reference, one-key compile, etc. It was so much easier.

So after I left the dark side and went open source (linux, and mostly vim for editing), I always missed how easy Visual Studio made development.

I bounced around between things like eclipse and a couple others I can't remember (probably traumatic repression).

When M$ decided to release vscode for linux, I immediately switched and haven't looked back.

I've dabbled with things like cursor (a vscode clone that integrates AI), but have never really found a strong enough reason to leave vscode.

Drop me a line if you think your IDE has something I should have FOMO about.

Linux

Are you kidding? Do I even need to write this up in today's day and age?

I remember in the late 90's when linux was starting to warm up. I was a big M$ fan (VC++, remember?).

I was writing code that ran in ring0 of Windows NT - multithreading, IO completion ports (early days of async IO), and was quite the windows snob.

I worked with these unix guys (this was at Nortel) who were always talking about running linux on their home computers.

It boggled my mind. I had no idea why anyone would want to run home and put a version of unix on their home computer.

So naturally I made fun of them.

It didn't help that I basically rewrote their entire search engine in C++ on NT, using multithreading and IO completion ports.

Theirs was written in C and running on some big unix systems, and my rewrite was much faster than theirs (3x+).

It was also at Nortel that I began working from home, which had a cascade effect of me getting into linux.

When I first started working from home Nortel set me up with an ISDN line. Fastest of the fast back then.

Within a year or two, cable modems came out and I switched over to using @Home (I think that later became Comcast).

@Home had a major tech problem - they couldn't keep their email servers up.

My email would be inaccessible for days at a time.

So I set up my own domain, NTProgrammer.com, and started hosting my own site and email on a windows NT EE server at my house.

Yep, I was bootlegging the Interwebs.

It was completely against the terms of service w/ @Home - but I felt pretty justified since they couldn't keep their email servers up.

I even wrote to their support so often that I eventually got an email from their CTO saying he knew it was a problem and they were working on it.

I honestly don't know if they ever got it figured out.

In 2001, after I had my site up for maybe a couple years, nimda (an early internet worm that attacked windows servers - admin spelled backwards) came out.

I quickly realized I was super vulnerable and that I'd need to start thinking about security.

My first thought was I'd write a LSP (Layered Service Provider - code you can run in the windows TCP/IP stack to directly intercept packets as they come off the wire), but it was pretty complicated.

Anytime you write something that runs in the kernel you always run the risk of your code breaking and locking the entire box up.

That means debugging would be "fun" at times.

And, the code to handle the packets would be complicated all by itself, so I started hunting around for other solutions.

And here's where my relationship with linux started...

I found out I could write a basic firewall in linux, it would be easy to debug, and super easy to maintain.

I had a spare computer, my first 386 from nearly a decade prior, and thought it would be perfect for linux.

And it was, except for one small thing - all the distros you could buy off the shelf came w/ code that was compiled for 486 and 586 ("pentium" ooOOoohh) and up. So the install disks (3.5" floppy) literally would not run on my old 386.

So I hunted around and found slackware (slackware.com - the site's still there at the time of this writing, and it looks exactly the same).

With slack you could download the installer, write it to nine 3.5" floppies, and install on a 386.

I want to say that old boat anchor had 64M, maybe 128M of RAM.

I slapped 3 NICs into it (public, DMZ, and private), and was off to the races.

I won't regale you with all the stories about compiling the kernel by hand or that time I deleted /var/lib (because, hey, I needed space and I didn't put any of those files there...).

Suffice it to say, over the next 10 years my internal network grew to over 20 linux servers, including a couple RAIDs, mostly housed in an enclosed server rack, but also sprawled across the floor against one wall of my office.

I can't remember what caused me to finally switch from windows to linux as my personal OS. I think it was probably cost and that my work had finally shifted completely to linux-based security work.

At any rate, I remember in 2005 making the decision to go linux-only for my laptop and I never looked back. I think in those days I was mostly using RedHat/CentOS. (I remember version 8 having a big infinity symbol, so whichever one that was, I'm too lazy to do the search.)

So, for nearly 20 years I've used only linux. I think it was around 2010 that I switched to Ubuntu; I loved the community support and have never looked back.

Those early years of only having ssh to my linux firewall, doing everything in bash and only using vim as the editor were pretty formative and have paid off big time for my productivity today.

I was always lightyears ahead of coworkers when it came to using linux, crafting big long "one liners" that piped multiple commands together in while and for loops.

I became (in)famous for saying "just hit enter." A coworker would be working on a linux server problem, I'd give them a big one-liner, and they'd sit there and stare at it, trying to decipher it. I already knew that, at worst, if it didn't give us what we wanted, it wouldn't break anything. Hitting enter would give us forward-momentum results, and trying to figure it out first was only delaying the feedback.

"Just hit enter" was the original "ship it."

App Stack

Nodejs

Languages I've used (all things "programming", so shells and SQL too), chronologically:

  • BASIC
  • C64asm
  • C
  • C++
  • Bash
  • PHP
  • MySQL
  • Javascript / ajax, then eventually angular
  • Nodejs

There are other languages I can write in, like python, kotlin, and java, but I don't use them enough to maintain fluency. That's not to say I'm still fluent in all those languages above, but I used them enough at the time that I was very fluent in them.

I landed on Node back in 2020/2021. I was working on a startup idea and had been trying to cobble things together in my then-most-current web language, php.

I kept hitting brick walls, and it looked like it was really going to be ugly as homemade soap if I ever got it to the point of releasing it.

So after a lot of evaluation, I landed on nodejs.

Here's a short list of the criteria I used when I was evaluating what direction to take:

  • Active community (so if I needed some tool or package, it would likely already be written)
  • Web-native (although some components would be backend-only, the hardest part was going to be writing the front end)
  • Full-stack (I didn't want to learn one language for back-end and another for the front-end, and if I could write both together, that would greatly reduce my complexity)
  • Cloud technologies either supported natively or by the community (I didn't want to find something like redis and then discover I couldn't use it without writing my own client implementation)

PHP didn't fit the bill. I'd have to upgrade my web/js/css knowledge and I already had a lot of pain trying to use angular alongside php. Maybe it was the pre-made web apps I had (one core app was a pdf editor), but the ajax/javascript/jquery/angular was pretty painful and I always felt I was writing hacky code.

Node was the clear winner, and after looking at all the frameworks, I landed on... NextJs

NextJs

I chose nextjs because express was great for backend, react was great for frontend, and nextjs was the integration of the two.

I was going to have to learn 3 things all at once: (1) how to nodejs, (2) react, and (3) express.

Why not at least combine #2 and #3 together and minimize my onboarding ramp?

And, since there is a lot of support for nextjs, most of my "how to node" would be included in "how to nextjs."

After an 80-episode youtube series on using nextjs (I think the maker was ninja-something, can't remember), I was good to go.

And I didn't just watch them. I followed along and did all the code myself.

NextAuth

Another no-brainer. NextAuth ties all the auth mechanisms into one: integrations to log in through social media, or by email links.

I don't know that there's another auth for nodejs/react/next that works as well or as easily.

I did have some fun implementing it; I had to discover some undocumented features.

Namely, I wanted other domains to be able to auth through my domain, so when pages on those domains called my API, they would be logged in to my API.

It was fun figuring it out. I learned a lot more about browser cookie policies than I ever wanted.
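
I won't walk through the whole thing here, but to give a flavor of it, the cookie side looks roughly like this (a sketch, not my exact config; the domain, provider, and env var names are placeholders):

    // pages/api/auth/[...nextauth].js
    import NextAuth from "next-auth";
    import GoogleProvider from "next-auth/providers/google";

    export default NextAuth({
      providers: [
        GoogleProvider({
          clientId: process.env.GOOGLE_ID,
          clientSecret: process.env.GOOGLE_SECRET,
        }),
      ],
      // NextAuth's advanced cookies option: override where the session cookie lives
      cookies: {
        sessionToken: {
          name: "__Secure-next-auth.session-token",
          options: {
            domain: ".example.com", // leading dot shares the session across subdomains
            sameSite: "none",       // let the cookie ride along on cross-site requests
            secure: true,           // SameSite=None is only honored over https
            httpOnly: true,
            path: "/",
          },
        },
      },
    });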

Socket.io

From 2010 to 2012 I was already doing livestreaming.

I used googleMeet for the stream. I would start the meet, get its youtube URL, put that into my back-end page running on wordpress, and click "go live." Everyone sitting on the "live" page of my site would get the youtube stream, plus a chat where they could send me questions and comments.

Pretty hacky, but those were the fun days before websockets.

I had to shut that business down in 2014(ish?), so when websockets came along later I was excited to see them.

My pre-websocket implementation basically had the client pages POSTing a call to the API (in wordpress, no less, lol) to "get the latest" which could be the youtube URL, other chat messages, etc.

It was super hacky and heavy on the network and the server.

Websockets make it much nicer: the clients all hold an open connection (no more connection creation and tear-down for every POST request, each of which spun up a new instance of php on the server just to find out there weren't any new messages), and there's only network and server activity when an event actually happens.

Naturally that's now part of my webstack.
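
For contrast with that old polling hack, here's a minimal sketch of the push model (port and event names are arbitrary):

    // server: every client holds one open connection; we push instead of them polling
    const { Server } = require("socket.io");
    const io = new Server(3000, { cors: { origin: "*" } });

    io.on("connection", (socket) => {
      socket.on("chat", (msg) => io.emit("chat", msg)); // fan each message out to everyone
    });

    // client:
    //   const { io } = require("socket.io-client");
    //   const socket = io("http://localhost:3000");
    //   socket.on("chat", (msg) => render(msg));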

MongoDB

OK, I gotta admit, this was a tough one and I'm still not married to it.

I'm a MySQL guy from way back. I can craft super complex MySQL queries and I know how to optimize both queries and the server itself.

I went the mongo route for two reasons:

  1. I had a consultant who was supposed to be a mongo guru, and he extolled all the reasons for using mongo. As I started implementing it, I kept running into things he didn't know how to do that seemed basic to me (guid vs int primary keys and static functions in mongoose, as examples). I had to ditch him, but I was already sold on mongo over mysql for the next reason.
  2. Schema migrations. In my previous iterations of my app in php, I was using phoenix to do mysql schema migrations. Any time you change the schema, you run the risk of not being able to roll back the database (like if the code release is broken and you need to roll the DB back), or screwing up the schema with your schema migration.

Mongo auto-migrates the schema as you change the definition in code.

Which was another plus. The schema definition is in your code, it's not something managed through the database.

With mysql/phoenix you had to make the changes in the database, then grab a snapshot and build the migration from the previous snapshot.

In mongo, you just change your code and when you push your app up, the mongodb automatically matches the new schema.
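
To make "schema in code" concrete, here's roughly what one of those definitions looks like in mongoose (the model and fields are made up, including the kind of static function I mentioned above):

    const mongoose = require("mongoose");

    // the schema lives in code: change it here and redeploy; no migration scripts
    const userSchema = new mongoose.Schema({
      email: { type: String, required: true, unique: true }, // unique index built on startup
      name: String,
    });

    // a static function on the model (one of the things my consultant couldn't figure out)
    userSchema.statics.findByEmail = function (email) {
      return this.findOne({ email });
    };

    module.exports = mongoose.model("User", userSchema);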

Caveat: there is one condition where it doesn't pick up the changes, and that's when you remove keys. I had a couple instances where a unique key was blocking me from writing a new record, and although I made the schema change in the code, I had to go delete the key from the database directly.
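
When that bit me, the fix was deleting the index straight from the database, something along these lines (the index name here is mongo's default for a unique email field; check getIndexes() for the real name):

    // one-off, in any async context: mongoose won't drop the stale unique index for you
    await User.collection.dropIndex("email_1");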

Redis

I love redis. I first started using it when I wrote my nodejs server to handle all my websockets (part of socket.io implementation).

But then I built on top of it to do a couple more nice things (all three sketched in code after this list):

  • Active connection count. As websocket clients connect, I write out a key with a prefix that is for a specific "room" combined with the connecting user id. When they disconnect, I delete the key. Later when I want a room count I can just get a total of all the keys with that prefix.
  • Total connection count. I increment a couple different redis keys for each new connection that comes in - so I have a running total of all the clients my service has ever hosted. Yeah, I geek out on that kinda stuff.
  • Rate limiting. Every incoming request gets written to a key with that user's ID, and it auto-expires in a set period of time. Then, before I service the request, I count all the keys with that prefix, and if the count is above my threshold, I don't service the request. Since this is redis-wide, if I have a bad actor who's connected multiple times, every one of his connections gets throttled as if they were just one.
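
Here's a rough sketch of all three patterns together (node-redis v4 style; the key names, 60-second window, and threshold are all made up for illustration):

    const { createClient } = require("redis");

    const redis = createClient(); // assumes a local redis; pass { url } for anything else
    // call `await redis.connect()` once at startup

    const WINDOW_SECS = 60; // made-up rate-limit window
    const LIMIT = 30;       // made-up threshold

    async function onConnect(roomId, userId) {
      await redis.set(`room:${roomId}:conn:${userId}`, "1"); // active-connection marker
      await redis.incr("stats:connections:total");           // all-time counter, never decremented
    }

    async function onDisconnect(roomId, userId) {
      await redis.del(`room:${roomId}:conn:${userId}`);
    }

    async function roomCount(roomId) {
      // room count = how many connection markers share the prefix
      return (await redis.keys(`room:${roomId}:conn:*`)).length;
    }

    async function allowRequest(userId) {
      // one self-expiring key per request; count the survivors
      await redis.set(`rate:${userId}:${Date.now()}`, "1", { EX: WINDOW_SECS });
      const recent = await redis.keys(`rate:${userId}:*`);
      return recent.length <= LIMIT; // redis-wide, so all of a bad actor's connections throttle together
    }

One design note: KEYS walks the whole keyspace, which is fine at my scale; if you're bigger, you'd swap in SCAN or keep a counter/set per prefix.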

Cloud Infrastructure

I scoped out the cost of running all my services as containers on AWS. It would be in the thousands per month.

I decided to hunt around and see how I could cut down on my costs and landed on straight kubernetes.

Yes, AWS does offer startup credits, but the problem with that is they get you locked in and one day when those credits run out you have a crisis. You're either moving off AWS at an incredible tech-debt expense, or you're staying on AWS and getting crushed by thousands a month of run rate.

I decided to avoid all of that up front.

Kubernetes

I run 35+ apps on a K8S stack for less than $200/mo.

I can scale the nodes up at any time, both in adding more compute (larger nodes) as well as more nodes.

I can easily add new K8S clusters to handle my existing workload.

And I can have it all autoscale.

AWS has some fancy-shmancy stuff (I use it at my day job), but for the smart tech founder, hosting your own can't be beat.

Some would say you have to learn more - and I get why they'd say that, but I counter that with this: you have to learn something.

AWS doesn't just deploy itself. At some point you have to figure out how to manage your AWS.

And you're either going to learn AWS management, or you're going to learn K8S management (or some other deployment strategy).

gitOps / ArgoCD

I love me some ArgoCD.

Deployments are straightforward. I build and publish a container (below) and then call the ArgoCD API to update the deployment.
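
The call itself is tiny. Something like this (host, app name, and token handling are placeholders; the sync endpoint is the one in ArgoCD's REST API docs):

    // trigger an ArgoCD sync for one application (fetch is built into node 18+)
    async function syncApp(appName, token) {
      const res = await fetch(`https://argocd.example.com/api/v1/applications/${appName}/sync`, {
        method: "POST",
        headers: { Authorization: `Bearer ${token}` },
      });
      if (!res.ok) throw new Error(`sync failed: ${res.status}`);
      return res.json();
    }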

Any time I need to rollback, I just update a repo and commit.

It doesn't get easier.

git Repo and CI/CD

First, my pipeline looks like this:

  1. From the bash prompt in vscode I type commit_and_tag_next_[minor_|major_]release automated <insert my commit message here> (there's a rough node sketch of this step after the list).
    That figures out the next version (based on minor, major, or just "release"), updates the appropriate files (package.json, etc), commits all the outstanding changes, and git pushes them all up to the repo.
    Then it creates a new tag using that version and pushes the tag up to the repo.
  2. The tag triggers the build process, which a runner picks up and performs the build.
  3. If the build is a new package (npm type package, for example), the build process ends after publishing the package.
  4. If the build is a new container, the build process proceeds to push the container up to docker hub and tag it with the version.
  5. Then it updates the deployment repo for the dev instance (git pull, change the file, git commit, git push).
  6. Then it triggers ArgoCD to check for the update, and it waits until the new container is deployed before exiting.
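
The real script is bash, but here's the gist of step 1 sketched in node so you can see the moving parts (names simplified; the real thing handles more cases):

    // bump the version, commit everything, and push a tag that kicks off CI
    const { execSync } = require("child_process");
    const fs = require("fs");

    function commitAndTagNextRelease(kind, message) {
      const pkg = JSON.parse(fs.readFileSync("package.json", "utf8"));
      let [major, minor, patch] = pkg.version.split(".").map(Number);
      if (kind === "major") { major++; minor = 0; patch = 0; }
      else if (kind === "minor") { minor++; patch = 0; }
      else patch++; // plain "release" just bumps the patch number
      pkg.version = `${major}.${minor}.${patch}`;
      fs.writeFileSync("package.json", JSON.stringify(pkg, null, 2) + "\n");

      execSync("git add -A");
      execSync(`git commit -m ${JSON.stringify(message)}`);
      execSync("git push");
      execSync(`git tag v${pkg.version}`);
      execSync(`git push origin v${pkg.version}`); // the tag push is what triggers the build
    }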

GitLab

OK, ok. Let's hit this nail on the head next.

I went with gitLab and not gitHub because originally gitHub charged developers $5/mo to use them.

I thought that was antithetical to the ethos of the indie/oss community and for that reason alone I didn't use gitHub.

GitLab was the original CI/CD.

Yeah. They rolled out built-in pipeline automation long before gitHub did, and they did it for all free plans.

Speaking of free, I'm still using them for free today. For all my code. For all my pipelines.

I do have my own runners, but that's just a security measure - I don't want my code running on infrastructure that may be compromised.

At my day job we use gitHub, and I do like the way one repo can run another repo's action. So there are some features of gitHub I like, but yeah, I'm not going back.

SorryNotSorry.

"But Richard, everyone will go look on GitHub for your activity/commits and that's where you build your reputation."

Hate to break it to you, that's a vanity metric.

No great product person got there by wanting to make more commits and making sure others see them.

Either you're developing a great app, or you're not.

Don't be an IndieFlounder.

Bash

As I said above, bash comes naturally to me, so my entire pipeline automation is orchestrated in bash.

"But why not python?"

Every modern linux distro has bash built in. Trimmed-down containers may fall back to just /bin/sh (which in many cases is actually bash with a flag to play dumb, btw), but 99% of the scripting you write for bash will also run in sh just fine.

If you have your orchestration done in python, you have to install python first.

(BTW, what are you installing python with? Bash, of course.)

"But what about _____ tool?"

Pipe all of the above through sed 's/python/type-your-other-tool-here/g'.

Docker

Not to be outdone - consistency in builds.

I use docker in two different ways across all of my pipelines:

  1. When I'm building a container, I have a Dockerfile that always builds exactly the same way: same base image, same tools added, same compile process, then everything copied to the same slim image, same production tools added, and the application copied into the same place. I always use at least a two-stage approach: the first stage copies the files in, installs build tools if needed, and does the build; the second stage copies the final build into a slim image that gets deployed. That way none of the build artifacts end up in my released container.
  2. When I'm building a package (like npm), the Dockerfile gives the same build consistency, but in a single stage (because the container isn't published), and at the end of the build it runs npm publish on the package.

One more note - I have an npm package that actually builds the Dockerfile, etc. This way all my releases work exactly the same. If I ever need to update my package build process, I go to that core package, add the new functionality there, then go use it in my app. That way all the other builds can have the same functionality if they need it.

Keyword: consistency in every build process.

Package/Container Publishing

So, I guess I actually covered these two in what I was saying above in "Docker."

So if you want to read about my consistency in my builds, you can scroll up there.

Here I'll just put some miscellaneous details.

npmjs.com

I have both private and public packages I've released on npmjs. The private ones are those that are just built-in to other apps I release. I could opt for running my own package server, but figure $5/mo is worth it to not have to do that.

Docker Hub

I think I only have one container on Docker Hub that I publish that's public (mysql w/ some lock-file correcting logic on boot up).

All the rest are private because they are meant to run exclusively in my stack.

Again, I could publish my containers to my own container repo, but honestly Docker Hub is just too easy.