I spent my run today thinking about the "work of giants."

In the eighth century, an anonymous poet, writing in Old English, wandered through the ruins of Roman Bath. He looked at the shattered wall-stone and the tumbled towers and called them the work of giants. He wasn't just talking about the scale of the buildings; he was talking about the gap in understanding. The people of his time could see the ruins, but they had lost the knowledge of how they were built. The underfloor heating, the massive aqueducts, the complex urban planning—the infrastructure remained, but the idea had vanished.

It is a terrifying thought: the possibility of living among the ruins of a world we no longer understand.

Lately, I've been feeling a digital version of this.

It feels like we're somewhere in the middle of one of the biggest infrastructure shifts in living memory. We've moved from writing code to prompting agents. We've moved from "searching" for information to "synthesizing" it. And while it feels like we're gaining superpowers, there is a hidden risk: we are becoming the anonymous poet.

If we rely entirely on the "giants" (the LLMs) to build our tools and write our code, without maintaining a map of how it all works, we are just building a more sophisticated set of ruins. We are delegating the execution, but we are also accidentally delegating the understanding.

The Shift from Implementation to Intent

I've been diving deep into Andrej Karpathy's work recently, and he hit on something that shifted my perspective: the concept of the "Idea File."

For a long time, the way we shared knowledge was through implementation. You shared a GitHub repo, a snippet of code, or a finished app. But in the era of LLM agents, implementation is becoming a commodity. If you have a powerful enough agent, the specific code matters less than the intent behind it.

It seems like the new unit of value isn't the code anymore; it's the idea.

To make it concrete: instead of sending someone a GitHub repo, you send them a description of what the system does, what problem it solves, what constraints matter. Their agent builds the implementation for their environment. The intent travels; the code gets rebuilt from scratch each time, customised to whoever's building it.
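As a purely hypothetical illustration, an "idea file" reads less like code and more like a spec. The project and constraints below are invented for the example:

```markdown
# Idea: reading-queue triager

What it does: watches a folder of saved articles and sorts them
into "read next", "skim", and "archive".

Problem it solves: the queue grows faster than I can read it.

Constraints that matter:
- must run locally; no article text leaves the machine
- plain files in, plain files out; no database
- triage rules should live in one editable config file

Everything else — language, libraries, structure — is up to the
agent building it for your environment.
```

Notice what's absent: no implementation detail at all. That's the point; the receiving agent fills it in.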

Building the Living Wiki

This realization changed how I think about my Second Brain.

For years, I've used Obsidian to collect notes, highlights, and clippings. But if I'm honest, most Second Brains are just digital attics. We put things in, and then they sit there, gathering digital dust, waiting for a search query that may never come. It's a static archive.

I want something that isn't just a warehouse, but a living wiki.

The pattern I'm experimenting with comes directly from Karpathy's tweet. The structure is simple but the implications aren't: a raw/ directory holds all your source material — articles, papers, transcripts, images, whatever you're researching. An LLM then incrementally "compiles" that into a wiki: a collection of .md files organised into concepts, with summaries, backlinks, and cross-references maintained by the agent. Not by you. The key detail is that you rarely touch the wiki directly — the LLM writes it and keeps it updated.
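To make the compile step concrete, here is a minimal sketch of how I picture the incremental loop. This is my framing, not Karpathy's: `compile_wiki` and the `summarise` callback are names I made up, and `summarise` stands in for the actual LLM call.

```python
from pathlib import Path

def compile_wiki(raw_dir: Path, wiki_dir: Path, summarise) -> list[Path]:
    """Incrementally 'compile' raw sources into wiki notes.

    summarise is a stand-in for the LLM call (hypothetical here):
    it takes source text and returns markdown for the wiki note.
    Files that already have a note are skipped, so reruns only
    process new material.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(raw_dir.glob("*.txt")):
        note = wiki_dir / f"{src.stem}.md"
        if note.exists():
            continue  # already compiled on a previous run
        body = summarise(src.read_text())
        # Keep a backlink to the raw source so provenance survives.
        note.write_text(f"{body}\n\n[source: raw/{src.name}]\n")
        written.append(note)
    return written
```

The exists-check is the detail that matters: the compile is incremental, so the wiki accretes as you add raw material instead of being rebuilt from scratch.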

Obsidian is the frontend. You use it to view the raw data, browse the compiled wiki, and read any outputs the LLM generates — slides, diagrams, summaries. When you ask a question and get an answer, that answer gets filed back into the wiki. So every query you run makes the base richer. Karpathy notes that once the wiki reaches around 100 articles and 400K words, you can ask genuinely complex research questions across the whole thing — and the LLM navigates it through its own index files, no fancy RAG required.
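The "answers get filed back" step might look something like this. Again, a hedged sketch: `file_answer`, the `qa-` prefix, and the note layout are all my own assumptions, using Obsidian-style [[wikilinks]] so the new note joins the graph.

```python
from datetime import date
from pathlib import Path

def file_answer(wiki_dir: Path, question: str, answer: str,
                sources: list[str]) -> Path:
    """Write a Q&A back into the wiki so each query enriches the base.

    sources are the wiki note names the answer drew on; they become
    [[wikilinks]] that Obsidian renders as graph edges.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    # Slugify the question into a filename-safe note name.
    slug = "".join(c if c.isalnum() else "-" for c in question.lower())
    note = wiki_dir / f"qa-{slug.strip('-')[:60]}.md"
    links = "\n".join(f"- [[{s}]]" for s in sources)
    note.write_text(
        f"# {question}\n\n{answer}\n\n"
        f"## Drawn from\n{links}\n\n"
        f"_answered: {date.today().isoformat()}_\n"
    )
    return note
```

The side effect is the interesting part: querying the wiki is also writing to it, which is what makes it living rather than archival.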

I'm still in the early stages of this. I've been testing the capture layer — working out how to feed it consistently, what's worth indexing and what isn't. The full loop isn't running yet.

But even just thinking about it as a pattern has changed how I collect things. I'm not organising notes anymore. I'm feeding raw material to something that organises itself.

The Anti-Ruin

The goal here isn't to build a more efficient tool. The goal is to make sure my own intellectual infrastructure doesn't quietly crumble.

By using agents to maintain a living map of my learning—from the collapse of Roman Britain to the inner workings of deep learning—I'm trying to build something where the "work of giants" stays accessible. Not just relying on the LLM to give me an answer, but using it to help me build a running record of understanding.

The ground is still moving. The tools we use today will probably be ruins in five years.

