From Static Textbooks to Living Systems: How I Tried to Turn My Brain into AI Agents

For years I watched good learners get trapped inside bad containers.

A static textbook. A fixed syllabus. A teacher repeating the same five examples because that’s what the book has on page 37. You can be smart, motivated, and disciplined — but if the container is rigid, your learning is slow.

I wanted the opposite: something alive. Something that meets you where you are, uses what you already know, and then builds the shortest possible bridge to what you need next.

This is the story of how I turned the cornerstones and bridges from the previous piece into a living system — and, yes, how I tried to turn bits of my brain into agents that can teach.


Why static doesn’t cut it anymore

Textbooks are snapshots. They freeze a moment of knowledge and force every learner through the same tunnel —
Day 1: Greetings – Bill meets Thitiporn in the library
Day 2: Colours
Day 3: How to respond to “You speak Thai so well”

That made sense when printing and shipping were the bottlenecks. Today the bottlenecks are different: personalisation, precision, and time. You shouldn’t spend months climbing the wrong wall when a bridge could have carried you straight across.

That’s what I set out to build.


The spark: cornerstones, bridges… and a conductor

In the last post I talked about cornerstones (what you already have in your toolkit) and bridges (direct mappings into the target language).

The leap was this: if I can map my own mental models into code and solid, scalable, extensible data structures, then agents can recognise your cornerstones, choose the right bridges, and assemble a lesson on the fly.

  • You type a prompt in the Arena (my Discord-like workspace).
  • That prompt travels with your profile: languages you speak, your progress, preferences, strengths and gaps.
  • A Master Agent bundles that context and hands it to the CLF Teacher Agent — the conductor — which decides which tools, data, and examples to bring in.

It feels like a well-rehearsed orchestra responding to a new piece in real time.


RAG without the jargon

People throw around acronyms. Here’s what I actually use, in normal English.

RAG stands for Retrieval-Augmented Generation. Instead of asking an AI to invent everything from nothing (expensive, and often wrong), I built a library of my own material: my 700-page book, course notes, articles that go back to 2005, and thousands of YouTube videos and transcripts — all chunked (split into meaningful blocks) and tagged with retrievable fields.

When you ask a question, the system retrieves the most relevant chunks, then uses an AI model to weave them, together with live data, into a response that fits you. It’s grounded in what I actually teach, but shaped for your context.
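To make “retrieve, then generate” concrete, here is a deliberately tiny sketch of the retrieval half. It uses a toy bag-of-words similarity in place of a real embedding model, and the chunk records, field names, and corpus are all illustrative, not CLF’s actual data:

```python
import math
import re
from collections import Counter

# Toy stand-in for a chunked, tagged library of course material.
CHUNKS = [
    {"id": 1, "tags": ["tones"], "text": "Thai has five tones: mid, low, falling, high, rising."},
    {"id": 2, "tags": ["script"], "text": "Thai script is an abugida: consonants carry an inherent vowel."},
    {"id": 3, "tags": ["greetings"], "text": "Sawatdee is the standard Thai greeting."},
]

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[dict]:
    q = bag_of_words(query)
    return sorted(CHUNKS, key=lambda c: cosine(q, bag_of_words(c["text"])), reverse=True)[:k]

top = retrieve("how many tones does Thai have?")
# The top-ranked chunks would then be placed in the model prompt as grounding context.
```

A production pipeline would swap the bag-of-words scoring for vector embeddings and filter on the tagged fields, but the shape of the flow — rank chunks against the query, hand the winners to the model — is the same.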

Result: better accuracy, better teaching, far cheaper. My total AI token spend to get the Arena to soft launch? Under USD $5.


When AI needs help: hand-built linguistic engines

LLMs are brilliant, but there are things they still muddle — especially the finicky bits of phonology, tone class logic, and abugida behaviour. Rather than throw tokens at the problem, I wrote code that does the heavy lifting deterministically.

One example: my Thai syllable analyser. From a single syllable it can derive tone class, dead/alive status, vowel length, legal finals, tone outcomes, and articulation hints — and then generate graphics, drills, and lessons from that analysis. I spent months getting these thousands of lines of code right, and I’m still tweaking them, but because of that I save money by not spending AI tokens on calls that could hallucinate or return questionable data. My algorithms ensure that what’s returned is correct, stable, and consistent.
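To give a flavour of why this logic belongs in deterministic code, here is a drastically simplified sketch of one slice of it: deriving the tone of an unmarked Thai syllable from consonant class, dead/alive status, and vowel length. These are the standard textbook tone rules; the function name and inputs are illustrative, not CLF’s actual API, and the real analyser handles far more (tone marks, clusters, finals, exceptions):

```python
def tone_of(consonant_class: str, dead: bool, long_vowel: bool) -> str:
    """Standard Thai tone rules for syllables WITHOUT a tone mark (simplified)."""
    if consonant_class == "mid":
        return "low" if dead else "mid"        # e.g. กัด is low, กา is mid
    if consonant_class == "high":
        return "low" if dead else "rising"     # e.g. ขัด is low, ขา is rising
    if consonant_class == "low":
        if not dead:
            return "mid"                       # e.g. มา is mid
        return "falling" if long_vowel else "high"  # e.g. มาก falling, คะ high
    raise ValueError(f"unknown consonant class: {consonant_class}")

print(tone_of("low", dead=True, long_vowel=True))  # falling
```

A lookup like this never hallucinates: the same syllable always yields the same answer, at zero token cost.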

Think of AI as electricity. Useful — but you still need to wire the house correctly. These code engines I built in advance are the plumbing and wiring.


The Tool Bus: Lego for learning

Inside CLF every tool can talk to every other tool. I built a communication Tool Bus for everything living in my site — a lightweight pub/sub network — so data only needs to be fetched or computed once.

  • Look up a Thai word? The analyser, tone trainer, quiz builder, flashcards, and sentence tools all get the result instantly.
  • Ask a follow-up? The new data streams through the same bus. No repetition. No retyping. It feels like interactive Lego.
  • Often you don’t even need a repeat follow-up call: because my back-end code does so much of the heavy lifting, it anticipates what you may want next and, at no extra cost, has those results ready for you to explore.
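The pattern underneath those bullets is plain publish/subscribe: one tool publishes a result on a topic, and every subscribed tool receives it without a second fetch. A minimal sketch, with illustrative class and topic names rather than CLF’s actual ones:

```python
from collections import defaultdict
from typing import Any, Callable

class ToolBus:
    """Minimal in-process pub/sub bus: fetch once, fan out to every tool."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._subs[topic]:
            handler(payload)

bus = ToolBus()
received = []
bus.subscribe("word.analysed", lambda d: received.append(("flashcards", d)))
bus.subscribe("word.analysed", lambda d: received.append(("quiz", d)))

# One lookup, computed once, delivered to every interested tool.
bus.publish("word.analysed", {"word": "มาก", "tone": "falling"})
```

Because subscribers register independently, adding a new tool to the site means subscribing it to the topics it cares about — no other tool needs to change.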

This is where learning starts to feel alive rather than linear.


The flow: from prompt to dynamic lesson

  1. You type a prompt in Arena (free text, or a /slash command like /tone-chart).
  2. Profile & context are attached (your languages, recent sessions, wins and struggles).
  3. Master Agent triages; CLF Teacher Agent chooses:
    • which cornerstone functions to run (my hand-built engines),
    • which chunks to pull from the library (RAG),
    • which model to call (or none at all),
    • which tools to inject back into Arena.
  4. You get a tailored lesson: markdown + interactive tools (ToneBox, AI-generated practice, quizzes), plus follow-ups that chain intelligently.
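The four steps above can be sketched as a triage function. Everything here — the field names, the slash-command handling, the decision rules — is a hypothetical simplification of what the Master Agent and CLF Teacher Agent actually do:

```python
def build_lesson(prompt: str, profile: dict) -> dict:
    """Sketch of the triage step: decide which pieces to run for this prompt."""
    plan = {"engines": [], "rag": False, "model": None, "tools": []}

    # Deterministic engines and tools first: cheap, correct, token-free.
    if prompt.startswith("/tone-chart") or "tone" in prompt.lower():
        plan["engines"].append("syllable_analyser")
        plan["tools"].append("ToneBox")

    # Free text gets grounded in the library and routed to the learner's model (BYOK).
    if not prompt.startswith("/"):
        plan["rag"] = True
        plan["model"] = profile.get("preferred_model")

    return plan

plan = build_lesson("/tone-chart", {"preferred_model": "any-byok-model"})
```

Note what the slash-command path skips: a `/tone-chart` request is answered entirely by hand-built code, with no retrieval and no model call at all.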

No two learners see the same path, but everyone is steered toward mastery.


Why BYOK (Bring Your Own Key)

I didn’t take VC money. If I had, I’d be nudged toward shallow gamification and daily-streak theatre.

Instead I funded and coded every line myself (along with my army of agents), so I chose BYOK. Learners can use whichever LLM provider they want, and their choice of text-to-speech voices (Google’s, in my opinion, are the best, which is why I enabled them first).

That means the choice of AI provider, AI model, and how much you use is entirely in the learner’s hands. You just plug in your API keys — encrypted end-to-end — and it works.

Even if you don’t have AI keys, much of the site and the learning Arena still works, because it’s driven by my own code — the engine room is primarily code and tools, not token burn.
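One way to picture the BYOK plumbing: each request resolves to the learner’s own provider key, and falls back to the key-free code paths when none is configured. The provider names, fields, and fallback label below are all hypothetical, not CLF’s real schema:

```python
def resolve_backend(user: dict) -> str:
    """Pick the learner's preferred provider they hold a key for, else code-only."""
    keys = user.get("api_keys", {})              # stored encrypted at rest
    for provider in user.get("provider_preference", []):
        if provider in keys:
            return provider                      # spend the learner's own tokens
    return "local-engines"                       # no key: deterministic code only

backend = resolve_backend({
    "provider_preference": ["openai", "google"],
    "api_keys": {"google": "<encrypted>"},
})
```

The fallback branch is the whole point of the previous paragraph: when no key resolves, the request is still served by the hand-built engines.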

This lets me build what I know serious learners need, not what looks good on a pitch deck.


Textbooks come alive

Here’s the philosophical shift.

A “textbook” in CLF isn’t a stack of pages. I see textbooks as a collection of algorithms + variables: explanations, theories, functions, examples, and live data that recombine based on you. Pipe live data through them and magic happens.

  • Teachers can see exactly where a student is stuck and spin up targeted practice.
  • Learners get instant diagnostics and next steps that make sense for their cornerstones.
  • Costs stay sane because the heavy lifting happens in code, not in the model.

Education stops being something you passively receive. It becomes something that adapts to you in real time.


What this means for you

  • If you’ve felt that traditional courses don’t fit, you’re not broken — the container is.
  • If you already carry pieces of the puzzle (tones from Cantonese, abugida from Hindi, Sino-lexicon from Chinese), the system recognises it and uses it.
  • If you teach, this gives you superpowers: precise diagnostics, living materials, and tools that do the grunt work while you coach.

I built what I wish I’d always had

I didn’t set out to build a tech showcase. I built what I wish I’d had as a learner and what my students needed as a teacher. If parts of my brain are now running as agents, it’s only so yours can learn faster.

Education should meet you where you are, then take you where you need to go. That’s the whole point.


Next time: I’ll crack open one cornerstone — the Indic abugida — and show why it’s one of the most elegant “maps of the mouth” humans ever designed, and how CLF uses it, in one way or another, to unlock tone, spelling, and pronunciation across languages.