[Image: Hens being hand-fed by a person. Adobe Stock (Fair Use)]

I love reading blogs about technical documentation, so much so that I created a collection and aggregated feed of them plus podcasts and newsletters. Here’s how I did it (and how you can contribute).

What I’ve Done

I’ve collected a number of my favorite blogs, podcasts, and newsletters about technical documentation and presented them in these forms:

I’ve also added a proper RSS feed for the DocOps Lab Blog, which is also included in the above collection and aggregator.

If all you’re interested in is the content, check out the links above and subscribe to any or all of it.

Instead of a clunky, opaque database backend, I’ve sourced all the data for the collection as a YAML file in the repository.

If you wish to submit a feed, check out Contributing Your Favorite Resources below.

The Execution

A few readers may be interested in how I did this without using an application server, a database, or JavaScript, so I thought I’d lay it out.

Feel free to copy any of this and make it your own, with a modified or totally alternate listing of the serialized content of your choice.

Background

Back in the early and mid-2000s, when blogs were all the rage and podcasts were emergent, I created a few different tools that collected all or some of the output from their attendant “feeds” in order to create a one-stop shop for content from multiple outlets.

Such “aggregators” were pretty popular at the time, and there was lots of software to produce them with. The only reason to make your own, which many developers did, was to make it fit your specific technological circumstances.

This bespoke development wasn’t so much about customizing the feeds or the look and feel, as other emergent technologies made this possible for some of the open-source and commercial aggregators that existed. It was more about the fact that I wanted to use the tools I was already using for serving application-driven websites. Basically, I wanted them to use my database backend, my app server, and my HTML templating system.

For similar reasons, I have harked back to my industrious days of coding feeds and aggregators in order to create and publish a collection of blogs about technical documentation, and an aggregated feed of them — all because I want to do it my way, which I fully realize might be nobody else’s way.

Goals

I wanted to do this project in a way that demonstrates the power of DocOps — a solution consistent with the practices I advocate for in my work on documentation operations.

That means:

  • No relational databases (YAML instead)

  • Everything in Git

  • No unnecessary JavaScript (see note below)

  • No application server

  • Static-site generation (SSG)

  • CI/CD automation

The aggregator is the only really tricky part of this project. The standard way to pull the latest posts from dozens of feeds together would be one of:

  1. Dynamic web application with a database backend that collects posts centrally, or

  2. Client-side JavaScript that forces every visiting browser to request every feed and construct the page on the fly

I wanted the power of the first option without the overhead of a database or application server. Neither of these opaque maintenance headaches is in use for any of my other offerings, and I surely did not want to start running them now.

The JavaScript option (2) was a non-starter, as it would be a terrible user experience, obscene overkill, and prone to breaking. I’ve seen this done, and it’s ugly.

The Tools

Since this site already runs on Jekyll SSG and GitHub Actions, I used these resources to collect and build the aggregator.

The collection listing itself is a simple YAML file that contains the metadata for each feed, including the URL of the feed itself.
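To illustrate, an entry in that YAML file might look something like the following. The field names and values here are my invention for demonstration, not the repository’s actual schema:

```yaml
# One entry per resource; field names are illustrative only.
- title: Example Docs Blog
  url: https://example.com/blog
  feed: https://example.com/blog/feed.xml
  type: blog            # blog | podcast | newsletter
  description: A hypothetical blog about technical documentation.
```

Because the listing is plain YAML in Git, adding a resource is just another diff, reviewable in a pull request like any other change.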

The aggregator is where the real work is done, since it must be kept up to date with the latest content from scores of feeds. Here’s how it works:

  1. A single YAML file gets maintained with metadata for each service.

  2. A GitHub Action runs on a schedule to pull the latest content from each service and combine it into a single feed source.

  3. The Action commits the resulting JSON source file to the repository.

  4. Then the Jekyll SSG builds the aggregator page and feed from that source file, which is committed back to the repository and published on GitHub Pages.
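The “combine into a single feed source” step above can be sketched roughly as follows. This is a minimal illustration, not the actual script behind the site: the field names, output path, and entry shapes are all assumptions, and in practice each feed would first be fetched and parsed (for example with the feedparser library).

```python
import json

def normalize(entry, source):
    """Reduce one parsed feed entry to the fields the aggregator needs."""
    return {
        "source": source,
        "title": entry["title"],
        "url": entry["link"],
        "published": entry["published"],  # ISO 8601 timestamps assumed
    }

def merge(feeds, limit=50):
    """Combine entries from all feeds into one list, newest first."""
    items = [
        normalize(entry, name)
        for name, entries in feeds.items()
        for entry in entries
    ]
    # ISO 8601 strings in UTC sort correctly as plain strings.
    items.sort(key=lambda item: item["published"], reverse=True)
    return items[:limit]

if __name__ == "__main__":
    # Hypothetical parsed-feed data standing in for real fetch results.
    feeds = {
        "Example Blog": [
            {"title": "Post A", "link": "https://example.com/a",
             "published": "2024-05-01T12:00:00Z"},
        ],
    }
    with open("aggregated.json", "w") as f:
        json.dump(merge(feeds), f, indent=2)
```

The resulting JSON file is what gets committed back to the repository for the SSG to read as a data source.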

This flow is somewhat unconventional, as it uses a cron-scheduled job to pull content from external sources and commit it to the repository, which is probably an uncommon use of GitHub Actions. It runs only hourly, as I think running it every minute, or however often truly breaking news might demand, would get costly (well, start costing anything at all).
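A scheduled workflow along these lines could drive that pull-and-commit cycle. This is a hedged sketch of the pattern, not the site’s actual workflow; the script name, file paths, and commit identity are placeholders:

```yaml
# .github/workflows/aggregate.yml -- illustrative only
name: Aggregate feeds
on:
  schedule:
    - cron: "0 * * * *"   # hourly
  workflow_dispatch: {}    # allow manual runs
jobs:
  aggregate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull latest posts and build the combined source
        run: python scripts/aggregate.py   # placeholder script name
      - name: Commit the combined JSON if it changed
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add _data/aggregated.json
          git diff --cached --quiet || git commit -m "Update aggregated feed"
          git push
```

The `git diff --cached --quiet ||` guard keeps the job from creating empty commits on hours when no feed has new content.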

I did end up using JavaScript for the browseable collections page. This is merely a superficial layer to reorganize the generated content — it does not perform any content gathering. This front-end feature was fully vibe coded and will probably be replaced by a multi-page solution in the future.

Contributing Your Favorite Resources

If you have a favorite blog, podcast, or newsletter about technical documentation that is not included in the collection, please submit it for consideration.

Open a pull request to the repository with the feed’s metadata added to the YAML file that contains the collection listing.