AI Is Making Us Poor and Dumb (Unless We Learn to Do It Right)

Here’s a truth: my entire AI “token burn” while building CLF up to soft-launch level came to USD $5.47. Yet I regularly see tech companies burning tens of thousands of dollars per month on similar work. How is that even possible?

It comes down to how—and why—we use AI.


Story One: Degrees That Deteriorate

A certain institution I know delivers Western university degrees across Africa and Asia through local partners. They’re under immense pressure to keep pass rates high—if too many fail, the franchise unravels.

What surprised me most was discovering that students were barely writing. Some left exam questions blank. Others just rephrased the question as their answer, or posted incoherent text. Upon investigation, it turned out that many were incapable of writing any kind of extended response without AI tools like ChatGPT.

They were even using ChatGPT to reply in WhatsApp groups—even for one-to-one personal messages. Copy, paste, return. Often without understanding what they were posting. In some cases, ChatGPT was effectively deciding what they thought, rather than serving as a translator.

Meanwhile, the English-language requirements (like IELTS) were lowered, with promises that students would “catch up” later. Standards are being tweaked not to raise students up, but simply to sustain a broken model.

The truth: AI is being used to permit—even perpetuate—academic collapse.


Story Two: Devs—Fast or Fragile?

Then there’s coding. The hype is real: “vibe coding” lets a ten-person team feel like a hundred-person team, as Y Combinator’s Garry Tan says. Describe the “vibe” you want, and an LLM drafts the whole application for you. But it has a dark side.

  • Developers using vibe coding have reported subscription bills in the tens of thousands per month. One user racked up $35,000 of compute while only paying $200 for a “flat-rate” plan (ref: Business Insider).
  • Though AI models are cheaper per token, token consumption is exploding. Reasoning models now generate thousands of internal tokens just to return a short answer (ref: IKANGAI) (ref: Venturebeat).
  • Open-source models? They guzzle 1.5 to 10× more tokens than closed ones, often negating any cost benefit (ref: Venturebeat).
  • And “vibe coding” skips code review entirely, inviting hidden bugs and security holes (ref: Financial Times).
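The token-inflation point above is easy to see with some back-of-envelope arithmetic. The prices and token counts below are illustrative assumptions, not quotes from any real provider, but the shape of the math holds: reasoning tokens usually bill as output tokens, so a short answer can cost many times what it appears to.

```python
# Back-of-envelope cost estimate: why "cheaper per token" can still cost more.
# All prices and token counts here are illustrative assumptions.

def request_cost(prompt_tokens, completion_tokens, reasoning_tokens,
                 price_in_per_m, price_out_per_m):
    """USD cost of one request; hidden reasoning tokens bill as output."""
    billed_out = completion_tokens + reasoning_tokens
    return (prompt_tokens * price_in_per_m +
            billed_out * price_out_per_m) / 1_000_000

# A short answer with no hidden reasoning...
plain = request_cost(500, 200, 0, price_in_per_m=1.0, price_out_per_m=4.0)

# ...versus the same short answer preceded by thousands of reasoning tokens.
reasoning = request_cost(500, 200, 8_000, price_in_per_m=1.0, price_out_per_m=4.0)

print(f"plain: ${plain:.4f}  reasoning: ${reasoning:.4f}  "
      f"ratio: {reasoning / plain:.1f}x")
```

Under these made-up numbers the answer looks identical to the user, but the bill is roughly 25× higher, which is exactly how a “$200 flat-rate plan” can quietly turn into $35,000 of compute.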

The result: AI is powerful—but dangerously wasteful and undisciplined in careless hands.


How I Built Something Smart—and Cheap

So how did I spend just USD $5.47 to build a full AI-powered language learning platform?

  • Heavy lifting done by me, not the AI. I wrote the code that handles complex linguistic logic—tone systems, abugida structure, dialect mapping—so AI only needs to generate prompts or follow-ups, not reinvent the wheel.
  • AI as conductor, not brute force. I built a rich content library: my book, lessons, YouTube transcripts, blog posts. Everything’s chunked, tagged, and ready for RAG (Retrieval-Augmented Generation). AI retrieves and recombines, rather than invents (ref: IKANGAI) (ref: Venturebeat).
  • Tools over token extravagance. Detailed linguistic analysis happens through my code. AI just adds what it does best—filtering, editing, interacting.
  • BYOK (Bring Your Own Key). Users plug in their own AI keys and choose the model. Meanwhile, CLF runs on its own smart plumbing.
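The retrieve-and-recombine step can be sketched as follows. This is a minimal illustration of the pattern, not CLF’s actual pipeline: the chunks and tags are hypothetical placeholders, and simple tag overlap stands in for real embedding similarity. The point is the shape of it — the model only ever sees retrieved context, so it recombines existing material instead of inventing.

```python
# Minimal retrieve-then-generate sketch: score pre-chunked, tagged content
# against a query, keep the best matches, and hand only those to the model.
# The chunks below are hypothetical placeholders, not CLF's real library.

CHUNKS = [
    {"tags": {"tone", "register"}, "text": "High tone marks questions in ..."},
    {"tags": {"abugida", "script"}, "text": "Each consonant sign carries ..."},
    {"tags": {"dialect", "mapping"}, "text": "Northern dialects merge ..."},
]

def retrieve(query_tags, chunks, k=2):
    """Rank chunks by tag overlap with the query; return the top k matches."""
    scored = sorted(chunks,
                    key=lambda c: len(c["tags"] & query_tags),
                    reverse=True)
    return [c for c in scored[:k] if c["tags"] & query_tags]

def build_prompt(question, query_tags):
    """The model sees only retrieved context, so it recombines, not invents."""
    context = "\n".join(c["text"] for c in retrieve(query_tags, CHUNKS))
    return f"Answer using ONLY this context:\n{context}\n\nQ: {question}"

prompt = build_prompt("How do tones work here?", {"tone"})
```

Because retrieval happens in plain code, the expensive model call receives a short, targeted prompt — which is also where most of the $5.47 savings come from.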

The result? AI helps, but doesn’t dominate. It’s an amplifier of human intelligence, not a crutch.
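The BYOK point above can be sketched, too. This assumes an OpenAI-compatible chat-completions endpoint; the key, model name, and URL are placeholders, and the request is only constructed here, never sent. The platform supplies the plumbing, but the usage is billed to the user’s own key.

```python
# BYOK sketch: the platform builds the request, but the key and model come
# from the user. Assumes an OpenAI-compatible chat-completions endpoint.
# The request is only constructed here, not sent.
import json
import urllib.request

def build_llm_request(user_key, user_model, messages,
                      base_url="https://api.openai.com/v1"):
    """Construct a chat request billed to the user's key, not the platform's."""
    payload = json.dumps({"model": user_model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {user_key}",  # the user's own key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_llm_request("sk-user-owned-key", "gpt-4o-mini",
                        [{"role": "user", "content": "Hello"}])
```

The design choice matters: because the key never belongs to the platform, runaway token bills land on a meter the user can see and control, rather than on the service operator.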


The Takeaway: Use AI to Empower, Not Enslave

We have a choice:

  • Let AI make us poorer (token bills that wreck budgets) and dumber (kids who can’t write, or engineers who can’t debug).
  • Or use AI mindfully, to enhance what we already do well—our ideas, systems, and understanding.

I’ve spent years crafting the infrastructure so AI doesn’t replace me—it makes me better. That’s the hope. That’s what learners deserve.

AI is here. That’s not the question.
The question is: Will we use it to stay shallow, or to build something truly smart?


Next time: I’ll unpack RAG—the retrieval backbone of CLF—and show how it keeps learners grounded in real knowledge, not hallucination.
