AI · 16 min read · May 11, 2026

How I Built AccelaStudy AI

Today I launched AccelaStudy AI: what I believe is the most advanced, most capable adaptive learning platform ever created. That's a bold claim, but one I expect will quickly be proven as people start using it to study.

The technology behind AccelaStudy AI is called AVIAN — Adaptive Vector Intelligence and Network — and is protected by 33 patent filings describing 192 distinct inventions. The filings run nearly 1,000 pages of documentation, with 263 technical figures and 733 claims, grouped into 36 branded platform clusters spanning a 13-tier pipeline architecture. No competitor has anything remotely like it.

I built all of this in 80 days. Solo. Bootstrapped. $0 raised, no team, no co-founders. My only collaborator was Anthropic's Claude.

This post is the story of how that happened.

The Problem

I've worn many hats in my career but the one I wear most often these days is "Solution Architect," which is a somewhat generic term that means I build infrastructure in the cloud, usually the Amazon Web Services (AWS) cloud. I have passed most of the AWS certification exams, some multiple times, but in September 2025 I was preparing to study for the Advanced Networking Specialty (ANS) exam. ANS is widely considered the most difficult of the AWS certifications to pass.

For other certifications in the past, I've used A Cloud Guru (acquired by Pluralsight), Udemy, and other sites that are supposed to help you prepare for the exam. I hate these sites. They are all the same. An exam has a syllabus and most of the topics have videos and transcripts of the videos and simple, static quizzes at the end of each topic. After slogging through all of this, there are usually 1–3 practice exams that, assuming you pass, indicate you are ready for the real exam.

Garbage.

The first issue I have is the "one size fits all" curriculum model. Every class treats every student the same. And since they have to teach to the lowest common denominator, they assume you are coming at the exam with minimal prior knowledge. So they all start with refreshers on prerequisite material. You can skip these usually, but maybe I want a refresher and just don't need the WHOLE thing — just some of the more esoteric details. No way to get a refresher on just the details you need refreshed.

The primary course material is grouped into fairly broad topics. This means the course itself is largely like the refreshers: new material coupled with basic material many students already know. So you end up watching a 30-minute video to get 2 minutes of new knowledge that you need for the exam. It's not possible to skip around or you might miss the new material. To help with this, the video can often be watched at 1.5x or 2x speed. That's an awesome experience: having to focus intently on someone speaking super fast to make sure you don't miss the new material. Exhausting. The transcripts aren't much better. They are usually just blobs of text dumped out by a speech-to-text utility with zero formatting, no headers, nothing.

Some topics have practice "quizzes," which are essentially a handful of multiple-choice questions. There is only one practice quiz per topic and it never changes, so once you've taken it, that's it. You can take it again, but it's the same questions with, maybe, the answers sorted into a different order than the first attempt. Woo!

Some topics have "labs," where they give you some instructions and then you go log into your own live cloud account, muck around following the instructions, and hope you don't mess anything up or accidentally run up a bunch of charges. I've never done a lab. I understand the value of doing things for real, but I'm not messing around in my own cloud account. Forget it.

And the practice exams — these are arguably the most useful feature of these online courses. A good one simulates the format of the exam and its duration. I thought the A Cloud Guru (Pluralsight) ones were pretty good until I passed all three available exams with near-perfect scores and then went on to fail the real exam. $300 down the drain and a serious shot to my confidence. The main problem is that these exams use a fixed battery of questions and you end up learning their practice exam and not the real material being tested.

I was not looking forward to studying for ANS with any of these sites.

The Idea

I had been thinking about building my own certification prep site for a while. I figured if I was frustrated with the existing options, others were too. I was using Sonnet 4.5 regularly to write code and was able to have it put together a basic site in a few hours. There were two major obstacles to launching a real site, though.

One, how do I make mine better and truly useful? It wouldn't be sufficient to just put out a site that was the same as the competition. It had to be measurably better. Really, it had to be revolutionary.

Two, how do I create all of that content for users to study? Even one exam required a massive amount of content, and while I like writing, no way I had the free time to write the code AND write the content. And I didn't know all of it, either. I needed content for exams I hadn't passed yet.

Fortunately, I already knew all about creating educational software. The original AccelaStudy was the first flashcard app in the App Store when it opened in July 2008. That AccelaStudy was basically just foreign-language vocabulary flashcards: "Hello" on one side, "Hola" on the other. But I didn't know all of the languages (Spanish, French, German, Italian, and Turkish on opening day), so how did I generate the translations? I didn't. I hired professors at the premier foreign-language university in the world — Brigham Young University in Utah — to do the translations. Then I simply imported them into the app. For the native speaker audio files, I hired professional voiceover artists who spoke each language natively. That was a lot of fun, actually. The voice for Japanese was done by the same actor who does voiceovers in TV commercials for Mercedes-Benz.

But this content was on a different scale. Pluralsight has over 2,500 expert authors creating their technical courses. Of course, keeping 2,500 authors around is very expensive, and probably part of the reason Pluralsight is struggling financially. I had no money for content authors, so I needed a different solution.

Content Galore

For quite a while, my professional colleagues and I had been using ChatGPT for infrastructure questions. For example: "What are the options for encrypting an S3 bucket?" or "I'm getting a 502 error on a new web service I'm running in Fargate. What could be the problem?" I realized that the LLM's training data included every possible detail about every resource, every service that you could use in the AWS cloud.

Or be tested on in an AWS certification exam.

A few test prompts later — "Tell me everything I need to know about S3 buckets to pass the Solutions Architect Professional exam" — and I knew that AI had all the knowledge I needed to generate content for the site.

But how to handle hallucinations? How to make sure the content is accurate? These are tough problems with LLMs today. The solution to these issues is quite complicated but achievable. The solution that evolved became part of the AVIAN Origin and AVIAN Preflight patents, two of the 33 AVIAN patent filings, in the Content Creation architectural tier. AVIAN can generate the entire content of an AWS certification course in about 8 hours for around $100. And if the exam changes? A new version can be ready in 30 minutes.

But I'm getting ahead of myself.

Adaptive Learning, Solved

For over 10 years, I had been working on an adaptive learning patent. It started out as an idea to improve on the Leitner spaced-repetition algorithm. That improvement proved unpatentable but it was a real improvement, and it shipped in AccelaStudy years ago. So I kept working on it. By 2020 or so, I had a draft of The AccelaStudy Method, which captured most of the ideas I had around adaptive learning. Alas, that document was heavy on the concepts and light on the technical implementation. Not patentable.
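For context, the classic Leitner rule that the original idea set out to improve is tiny: a correct answer promotes a card to a less frequently reviewed box, and any miss demotes it back to the start. A minimal sketch (the class name and box count are illustrative, not AccelaStudy's actual implementation):

```python
from collections import deque

class LeitnerDeck:
    """Classic Leitner spaced repetition: correct answers promote a card
    to the next box (reviewed less often); a miss sends it back to box 1."""

    def __init__(self, cards, num_boxes=3):
        # All cards start in the first box, reviewed most frequently.
        self.boxes = [deque(cards)] + [deque() for _ in range(num_boxes - 1)]

    def review(self, box_index, is_correct):
        """Review the front card of a box; return the box it moves to."""
        card = self.boxes[box_index].popleft()
        if is_correct:
            dest = min(box_index + 1, len(self.boxes) - 1)  # promote
        else:
            dest = 0  # any miss goes back to the first box
        self.boxes[dest].append(card)
        return dest

deck = LeitnerDeck(["Hello/Hola", "Cat/Gato"])
deck.review(0, is_correct=True)   # "Hello/Hola" promoted to the second box
deck.review(0, is_correct=False)  # "Cat/Gato" stays in the first box
```

The rule's appeal is its simplicity; its weakness, and the opening for improvement, is that it adapts only to right/wrong on a single card, not to why the learner missed it.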

Then, last September, when I was getting started on a proof of concept for what would eventually become AccelaStudy AI, I entered a fateful prompt:

I'm working on an educational site and I've got some ideas in this document, accelastudy_method.md. What would it take to make this a real patent?

And so it began. What started off as a single Markdown file describing an array of ideas for making online learning adaptive and personalized became 33 separate patents, not just the one I thought I had. The first patent was filed in October 2025, another 25 in March and April 2026, and 7 more in early May.

One of the key aspects of the patent portfolio is that it applies to ANYTHING that can be learned. As long as the AI has a deep knowledge of the subject, curriculum can be created. And given that the training data for OpenAI and Anthropic models (and Grok and Gemini and others) includes essentially every document ever written by humans, the AI has far deeper knowledge than even the most experienced content author.

Code Warrior

On February 16, 2026, it was time to build it. The patents were mostly done, but I wanted to ensure they worked before I went to all the trouble and expense of filing them.

The first task was to build the AVIAN engine itself. This meant taking all of that patent documentation and extracting a system architecture, and then an implementation and testing plan. That work was done in an afternoon.

The next several weeks were a sustained sprint of building, in roughly this order: the engine, the content synthesis pipeline, the web application, the API, the admin tooling, the marketing site, the press kit, the iOS app, the desktop apps for macOS / Windows / Linux, and the entire supporting infrastructure to run all of it. Then, in parallel with the customer-facing product, I built out a fleet of internal tools to actually operate the company: a CMS, an email client, a CRM, an accounting system, a calendar, an analytics platform, a service-health monitor, a leverage-metrics tracker, and more than a dozen others. Each one is a real production application. Each one was 100% built with Claude Code.

I'll write a longer technical post about the architecture choices that made this pace possible. But the single biggest workflow unlock was something simple and structural: I used 57 nested CLAUDE.md constraint files as a per-repo knowledge graph that Claude Code walks before any edit. Plan mode and parallel sub-agents rode on top of that. It felt like handing Claude a map of the entire monorepo. Every constraint I would have wanted to enforce as a code reviewer — coding style, architectural rules, naming conventions, testing requirements, what NOT to touch — lives in those files. The agent reads them. The agent respects them.

I ran 2–3 concurrent Claude Max subscriptions for most of the build window so I could fan out work across multiple repos at once. I typically had 10–12 terminals up, each doing work in a different repo. Through the API, the content-synthesis pipeline ran independently — various Anthropic models orchestrated in sequence to yield the most accurate and comprehensive course material. That synthesis spend lives in a separate stack of credit-recharge invoices: 80+ at roughly $50 each, $4,000+ documented. The coding spend through Claude Code lives in Fulcrum, the leverage tracker, which is itself one of the 19 internal tools I built along the way.

By the Numbers

Eighty days. Solo. The tracker captured every non-trivial task as a row: estimated human-equivalent hours, actual Claude wall-clock minutes, tokens consumed, leverage factor, supervisory leverage. Here is what 80 days of compressed work looks like:

  • Days of build: 80 (Feb 23 → May 13, 2026)
  • Measured tasks: 2,115
  • Human-equivalent work hours: ~50,319
  • Human-equivalent work-years: 24.2
  • Claude wall-clock: ~1,061 hours
  • My supervisory time (writing prompts): ~148 hours
  • Average task leverage: 51.5×
  • Average supervisory leverage (personal ROI): 432.4×
  • Maximum single-task leverage: 240×
  • Claude Code tokens consumed: ~360 million

The full record set has been published daily since early April at charlessieg.com/leverage/all. Every task, every estimate, every minute of Claude wall-clock. Nothing redacted. Each day's post also includes an analytical writeup of which task patterns produced the highest leverage and which were still gated by human review.

And here is what those 24 work-years of compressed effort produced:

  • AccelaStudy AI — the customer product. Over 900 certifications, standardized tests, and other courses covered, 1.4 million synthesized questions, sub-2-millisecond knowledge updates, root-cause prerequisite-gap detection, pass-probability forecasting before you spend hundreds of dollars on an exam voucher. Live on the web today at accelastudy.ai; native iOS / iPadOS / macOS / Windows / Linux apps follow on June 1.
  • AVIAN — the patent portfolio behind it. 33 USPTO filings, 192 distinct inventions, 733 claims (68 independent + 665 dependent), 263 technical figures, organized into 36 platform clusters across 13 pipeline tiers. avian.renkara.com, also built by Claude.
  • 74 repositories, 1.27 million lines of code, 25,000+ automated tests.
  • 19 production Renkara internal tools — listed publicly at renkara.com/tools, with each tool's page tagged "100% Built by Claude" alongside the commercial SaaS category it replaces: Narrative (static site generator), Courier (email client), Tribe (CRM), Trellis (cloud accounting), Vigil (uptime monitoring), Cadence (calendar), Pulse (web analytics), Fulcrum (leverage tracker), Docket (issue tracking), Chronicle (observability), Beacon (marketing automation), Herald (newsletter platform), and seven more. Together they expose 800+ MCP tools to any Claude session — so the entire fleet is agent-addressable through Anthropic's own protocol, not just human-addressable. That fleet is the operational backbone that lets one person run a 74-repo monorepo.
  • 21 production websites — 16 AVIAN/Renkara properties plus four fictional in-world sites and the book's own site for the novel below, all generated by Narrative.
  • 19,000+ pages of Markdown documentation — 3,513 files, 4.85 million words. Including the 57 nested CLAUDE.md constraint files.

Fulcrum, and Other Side Quests

Fulcrum, the leverage tracker, deserves its own paragraph. As I was starting the build I realized that nobody had ever produced a longitudinal dataset on a single solo developer's actual productivity with an AI coding agent. Most "AI productivity" claims are marketing. I wanted real data — task by task, hour by hour, dollar by dollar — and I wanted it public. So I built Fulcrum. It records every non-trivial task as a row, computes leverage factor and supervisory ROI per task, and publishes a daily blog post with analytical commentary. As of today: 2,115 records, 51.5× weighted leverage, 432.4× supervisory ROI, 24.2 work-years compressed into 80 calendar days. If anyone wants to challenge the numbers, the records are there.
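The two headline metrics reduce to simple ratios. A sketch with a hypothetical task; the function names are mine, not Fulcrum's actual schema, and Fulcrum's published averages are weighted across all 2,115 tasks rather than computed per task like this:

```python
def leverage(human_equiv_hours: float, agent_wallclock_hours: float) -> float:
    """Leverage factor: human-equivalent hours of output produced per
    hour of agent wall-clock time."""
    return human_equiv_hours / agent_wallclock_hours

def supervisory_roi(human_equiv_hours: float, supervisor_hours: float) -> float:
    """Supervisory ROI: human-equivalent hours of output per hour the
    operator spent writing prompts and reviewing results."""
    return human_equiv_hours / supervisor_hours

# Hypothetical task: estimated at 16 human hours, completed in 20 minutes
# of Claude wall-clock with 3 minutes of prompting and review.
print(leverage(16, 20 / 60))        # ~48x
print(supervisory_roi(16, 3 / 60))  # ~320x
```

The gap between the two ratios is the whole story of the workflow: the agent's time is cheap, and the operator's attention is the scarce input being amplified.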

The other side quest is a novel.

In parallel with the AVIAN build, I co-wrote a 67,000-word literary novel with Claude called The Deferral. As part of the world-building, Claude designed and built four in-world fictional company websites — Strataforge Robotics, Luthan Dynamics, Elysium Atelier, and MIDAS — each with its own brand identity and full marketing copy, plus the book's own site at the-deferral.com. We even wrote a fake patent to deepen the world. The novel announcement and a behind-the-scenes writeup live here. The total time cost: a side hobby on weekends. The point: this isn't just about code. Working with Claude expands what one person can attempt across every creative discipline at once.

Accessibility

Most software fails accessibility. I didn't want AccelaStudy AI to be most software.

In the final weeks before launch I ran a series of WCAG 2.1 AA audits across the web client and all 16 marketing-site properties — a deterministic Python checker plus a parallel LLM-judgment phase. The first deep audit found 123 findings, with 13 P0 blockers. I then dispatched eight parallel Claude Code sub-agents to fix them in the order an accessibility consultant would prioritize them: token contrast, focus management, ARIA wiring, keyboard navigation, focus traps, animation guards, touch targets, document titles, modal labelling, custom tablists, FAQ semantic structure, and the long tail of smaller issues. Across the fleet of 56 UI repos, the final sweep cleared 2,460 HIGH findings, 2,553 MEDIUM, and a long tail of LOW findings.
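The deterministic side of such a checker is mostly WCAG arithmetic. Here is a minimal sketch of the text-contrast check; the formulas are straight from WCAG 2.1, while the function names are mine and a real checker covers far more than contrast:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.1 relative luminance of an sRGB color (0-255 channels)."""
    def channel(c: int) -> float:
        c = c / 255
        # Piecewise sRGB-to-linear transfer function from WCAG 2.1.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio between two colors; always >= 1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text: bool = False) -> bool:
    """WCAG 2.1 AA text thresholds: 4.5:1 for normal text, 3:1 for large."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(passes_aa((0, 0, 0), (255, 255, 255)))        # black on white: 21:1
print(passes_aa((119, 119, 119), (255, 255, 255)))  # #777 on white fails AA
```

Checks like this are cheap to run on every design token, which is why the token-contrast pass could lead the remediation order.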

This work is invisible to most users. But it is the entire experience for users who depend on screen readers, who navigate by keyboard only, who need reduced motion, who use voice control. There is no chance I could have manually audited 16 marketing sites + a complex React SPA + a Swift iOS app + four desktop builds for full WCAG 2.1 AA compliance in a week. With Claude Code, it was tightly scoped, parallelizable, and verifiable. The deterministic checker is itself open-source, lives in the monorepo, and runs on every CI build.

That last detail matters. The audits are reproducible. Anyone can rerun them.

Built with Claude

I want to be honest about what this actually was.

I didn't write a single line of production code in 80 days. I wrote prompts, I wrote CLAUDE.md constraint files, I wrote architecture decision records, I reviewed pull requests, I made judgment calls about what to build next and what to defer. Claude wrote the code. Claude helped me turn my ideas into patents and did the grunt work of hardening the language, working examples, constructing diagrams, and checking the math. Claude wrote the marketing copy (with my voice). Claude wrote the documentation. Claude designed the UIs. Claude wrote the synthesis pipeline that wrote the learning content. Claude wrote the leverage tracker that documented Claude writing everything else.

A few specific observations from the 80 days, for anyone curious about what working at this scale with Claude is actually like:

  • Plan mode is the highest-leverage feature for any change touching more than three files. It surfaces dependency cycles and forces explicit reasoning about ordering. Twice it caught a circular import my own static analysis had missed.
  • CLAUDE.md constraint files are dramatically underused. 57 of them across 74 repos formed a knowledge graph the agent navigated before any edit. The agent's adherence to nuanced architectural rules tracked almost perfectly with whether those rules were written down. If a rule wasn't in a CLAUDE.md file, it might as well not have existed.
  • Parallel sub-agents change the work model. For the synthesis pipeline, three or four sub-agents could fan out across distinct learning domains and produce independent drafts in 10 minutes. The bottleneck moves from "writing the content" to "specifying what the content should be."
  • Hooks reduce approval-cycle friction more than any other optimization. A small settings.json hook that runs my test suite after every edit saved an enormous amount of manual cycling.
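For the curious, a hook of the kind described in that last bullet is a few lines of settings.json. This fragment follows the shape of Claude Code's published hooks schema, but treat the matcher and command as placeholders for your own setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q" }
        ]
      }
    ]
  }
}
```

With this in place, every file edit the agent makes is immediately followed by a test run, so regressions surface in the same turn instead of at review time.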

AccelaStudy AI is, in the end, an incredible product, and I didn't write a single line of its code. It is Claude's masterpiece. I am the operator who pointed the model at the target.

"Create like a god; command like a king; work like a slave."

This philosophy comes from the famous Romanian sculptor Constantin Brâncuși and is what I now live by.

Claude Code has given me the power of creation, to transform world-changing ideas into stunning reality.

Claude followed command after command after command, over 2,000 of them, tirelessly working to execute my vision.

However, I did work like a slave.

In my favorite scene from Jurassic Park, John Hammond memorably says that "creation is an act of sheer will." Delivering AccelaStudy AI, even with the work being done almost entirely by Claude Code, required the mental resolve and determination to sit at my desk an average of 120+ hours a week for almost 12 weeks, prompting Claude along and reviewing the work. That left only a handful of hours a day for sleep, eating, exercise, and time with family and friends. I should mention that I also worked a full-time job during 8 of those daily hours.

The launch date was my own deadline, optimistically set early on when it seemed like I'd be done in no time at Claude Code pace. But, like any project that has to go to production, the 80/20 rule applied, and this effort was no exception. It's the kind of ballooning that happens when the "user sign up" feature expands to include social media sign-ups, forgot-password and MFA flows, and regulatory account-closure requirements. In the end, even with all the hours, I still had to move the launch by 3 weeks. But it did launch.

Giving Back

Middle school and high school curriculum is free. For students. For schools. For homeschoolers. For anyone teaching kids who deserve adaptive, personalized learning without a paywall. The K-12 curriculum rolls out across summer and fall 2026, available to any student, school, or family. Pass-probability forecasting, root-cause gap detection, real adaptive sequencing — at no cost, ever, full stop.

Adaptive learning shouldn't be a luxury good. The kids whose families can afford $4,000 tutors have always had the edge over the kids whose families can't. AccelaStudy AI doesn't know what a family's bank balance looks like, and that's the point.

The paid products fund the free K-12 work. We are launching with professional certifications to kickstart revenue. The AP catalog, AccelaStudy AI Languages, AccelaStudy AI English (IELTS + TOEFL, coming this summer), and the graduate-and-professional tests (GRE, GMAT, MCAT, and LSAT, coming in October) are all paid products. The college-entrance tests (SAT, ACT, PSAT) may also go free — that call is still open.

A solo founder, working with Claude, can build all of this in 80 days. The implication for what the rest of us — teachers, students, families — can attempt is what I want people to take from this story.

The ceiling moved. Look up.


Charles Sieg is the founder of Renkara Media Group. AccelaStudy AI is live at accelastudy.ai. The full daily leverage dataset is public at charlessieg.com/leverage. The 19 internal Renkara tools, each tagged "100% Built by Claude," are listed at renkara.com/tools. The AVIAN patent portfolio summary lives at avian.renkara.com.