Day 4 – Adding Tech Debt

Yesterday, I was following along with the book Zero to Production to set up a web server in Rust. Today, I went rogue and added a background job system. All code quality metrics have dropped as a result.

Queuing Work for Later

When I was working as a Ruby on Rails developer a few years ago, it was common practice to queue jobs for later. The motivation was simple: keep the HTTP request cycle as short as possible. Complex tasks and work that could be done asynchronously were pushed into a job queue. One reason for this was that it improved the user experience. Another was that Sidekiq, the background job system, was just so good and reliable. And when a job failed due to a bug, it was possible to fix the bug and try again.


Maybe it is a premature optimization, but I am approaching my current project in the same way. Incoming webhooks get serialized and pushed into a queue for later processing. A background worker dequeues the events and checks if more work needs to be done. If so, it can either create more jobs in the queue or perform the work itself.
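The enqueue/dequeue split described above can be sketched with a plain channel standing in for the real job queue. This is only an illustration of the shape, not the actual implementation: in the real system the payload would be a serialized webhook pushed into a persistent queue like Faktory, and the worker would run as a separate process. The `Job` struct and `run_demo` function are hypothetical names for this sketch.

```rust
use std::sync::mpsc;
use std::thread;

/// A queued unit of work. In the real system this would be a
/// serialized webhook payload in a persistent queue, not a channel.
struct Job {
    payload: String,
}

/// The web handler enqueues two jobs and returns immediately;
/// the background worker drains the queue in sequence.
fn run_demo() -> usize {
    // The channel stands in for the job queue: the web server owns
    // the sender, the background worker owns the receiver.
    let (queue, jobs) = mpsc::channel::<Job>();

    // Background worker: no HTTP timeout here, so processing jobs
    // one after another is perfectly fine.
    let worker = thread::spawn(move || {
        let mut processed = 0;
        for job in jobs {
            // ... handle the event, possibly enqueueing follow-up jobs ...
            let _ = job.payload;
            processed += 1;
        }
        processed
    });

    // Web handler side: push work into the queue and return at once.
    queue.send(Job { payload: "pull_request.opened".into() }).unwrap();
    queue.send(Job { payload: "pull_request.closed".into() }).unwrap();
    drop(queue); // close the queue so the worker can drain and exit

    worker.join().unwrap()
}

fn main() {
    let processed = run_demo();
    println!("worker processed {} jobs", processed);
}
```

The key property is visible even in this toy version: the handler's only job is to enqueue, so its latency is independent of how long the actual work takes.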

The reason that I like this design is that it creates flexibility and resiliency. First of all, the background worker does not have any time constraints. If the web server were to perform the work, the HTTP request might time out while it was busy. This makes it possible to choose less performant, but more maintainable algorithms. For example, work can be done in sequence instead of in parallel. This avoids a whole group of concurrency issues that can occur when parallelizing data access. Second, if something fails, it can easily be retried. This is particularly useful if the work requires interfacing with third-party systems. These systems might be unavailable, in which case the job can simply be retried at a later time. And sometimes we make mistakes. When a job fails due to a bug, it is still in the queue. A developer can fix the bug, deploy a new version of the application, and the job can succeed.
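The retry behavior can be boiled down to a small sketch. This is not how Sidekiq or Faktory implement it (they persist jobs and schedule retries with backoff delays); `run_with_retry` is a hypothetical helper that just shows the control flow of retrying a fallible job:

```rust
/// Retry a fallible job up to `max_attempts` times, keeping the last
/// error. A real job queue would also persist the job between attempts
/// and add an exponential backoff delay; this only shows the shape.
fn run_with_retry<F>(mut job: F, max_attempts: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::from("never attempted");
    for _attempt in 0..max_attempts {
        match job() {
            Ok(()) => return Ok(()),
            Err(e) => last_err = e, // e.g. a third-party API was unavailable
        }
    }
    Err(last_err)
}

fn main() {
    // A flaky job that fails twice before succeeding, simulating a
    // temporarily unavailable third-party system.
    let mut calls = 0;
    let result = run_with_retry(
        || {
            calls += 1;
            if calls < 3 {
                Err("service unavailable".into())
            } else {
                Ok(())
            }
        },
        5,
    );
    println!("succeeded after {} calls: {}", calls, result.is_ok());
}
```

The "fix the bug and retry" case works the same way, just with a much longer gap between attempts: the job stays in the queue until a new deployment makes it succeed.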

The downside of this approach is of course that it requires a background job system and a worker. Thus more moving pieces and more orchestration. But I have had great experiences with Sidekiq, and I am sure that its polyglot brother Faktory will work just as well.

Decreasing Code Quality

When I was working on the web server yesterday, I had the advantage of following along with the book Zero to Production by Luca Palmieri. The book provided a reference implementation for many common patterns, for example integration tests and observability. As someone who is building their first web application in Rust, I appreciate the guidance.


Today, I went off the beaten path to set up the background job system. This had two particular challenges:

  1. Finding the right abstraction for the background jobs
  2. Mocking the job system in integration tests

I probably spent most of my time dabbling with the interface between the web server and the background worker. The simplest solution would have been to throw the whole webhook into the queue. The benefit of this approach would have been that it kept the web server very simple. But it would have forced the background worker to know implementation details about GitHub's webhooks. This felt like tech debt to me.

So instead, the web server deserializes the incoming webhook and extracts the necessary information from it. It creates an Event object and pushes it into the queue. This keeps the implementation details of GitHub's webhook events in a single place. And it makes it easier to support other platforms in the future, since the core library does not need to be changed for that. Instead, it is enough to add a new endpoint that can deserialize, for example, GitLab's webhooks.
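The boundary between the two layers can be sketched as a conversion from a platform-specific payload into a platform-agnostic event. The field names below are invented for illustration; the real types in my project look different, and the actual deserialization from JSON would happen with serde before this step:

```rust
/// GitHub-specific payload shape, known only to the web server.
/// (Hypothetical fields; a real webhook carries far more data.)
struct PullRequestWebhook {
    action: String,
    number: u64,
    repo_full_name: String,
}

/// Platform-agnostic event, the only thing the core library and the
/// background worker ever see.
#[derive(Debug, PartialEq)]
enum Event {
    PullRequestOpened { number: u64, repository: String },
    Unsupported,
}

impl From<PullRequestWebhook> for Event {
    fn from(hook: PullRequestWebhook) -> Self {
        // All knowledge about GitHub's action strings stays here.
        match hook.action.as_str() {
            "opened" => Event::PullRequestOpened {
                number: hook.number,
                repository: hook.repo_full_name,
            },
            _ => Event::Unsupported,
        }
    }
}

fn main() {
    let hook = PullRequestWebhook {
        action: "opened".into(),
        number: 42,
        repo_full_name: "me/repo".into(),
    };
    // The web server converts and enqueues; the worker only sees `Event`.
    let event = Event::from(hook);
    println!("{:?}", event);
}
```

Supporting GitLab later would mean adding a second webhook type with its own `From` impl, while `Event` and everything downstream stays untouched.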

The code is still a mess. There is considerable overlap between the core library and the web server. A quick search reveals that there are roughly 8-12 types that deal with pull request events. There is the PullRequestWebhook from GitHub, and then the PullRequestEvent that is used internally. Both have similar, but still different fields. While I don't like the duplication, changing the internal types to work with GitHub's data structures felt even more wrong.

My hope is that this will improve over time and as I become more proficient with Rust and GitHub's API. Right now, I have only implemented a single event type. Once there are more, patterns might start to emerge that make it obvious how to clean up the code. But the code works, and that is the most important thing for today.

Next Steps

The next step is obviously to create the background worker. But this will have to wait until next week. The routine that I am trying in January is to work four days on the code itself. The fifth day will be spent on shaping the next week. I also want to start recording my personal podcast again, and leave time to write technical blog posts. My hypothesis is that a day of reflecting and planning will make the other four days much more productive and valuable.

Here is the first episode from my podcast, recorded last year. You can find it in iTunes, Spotify, Overcast, and pretty much everywhere else.