Leverage Record: March 15, 2026

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Two tasks on a spring break Saturday. I carved out about an hour and a half of computer time to get the content synthesis pipeline running and generate interactive labs for the free tier. Minimal supervisory effort: eight minutes of prompting produced a week and a half of human-equivalent output.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is therefore substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude Time | Sup. Time | Leverage Factor | Sup. Factor |
|---|------|------------|-------------|-----------|-----------------|-------------|
| 1 | Generate 66 interactive labs across 35 educational domains | 40h | 35m | 3m | 68.6x | 800.0x |
| 2 | Set up content synthesis pipeline and synthesize 35 educational domains (environment fixes + batch processing + quality review + lesson generation) | 20h | 50m | 5m | 24.0x | 240.0x |

Aggregate Stats

| Metric | Value |
|--------|-------|
| Total tasks | 2 |
| Human-equivalent hours | 60h (7.5 working days) |
| Claude wall-clock time | 85m (1.4h) |
| Supervisory time | 8m |
| Tokens consumed | ~595,000 |
| Weighted avg leverage factor | 42.4x |
| Weighted avg supervisory factor | 450.0x |
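
If you want to check the math, every factor here is a simple ratio: leverage is human-equivalent time divided by Claude wall-clock time, and the supervisory factor divides by my supervisory time instead. The weighted averages are the same ratios taken over the totals. A minimal sketch that reproduces the table values:

```python
# Reproduce the leverage and supervisory factors from the task log.
# Each factor is human-equivalent minutes divided by AI (or supervisory) minutes.

tasks = [
    # (human_estimate_min, claude_wall_clock_min, supervisory_min)
    (40 * 60, 35, 3),  # Task 1: lab generation
    (20 * 60, 50, 5),  # Task 2: synthesis pipeline
]

for human_min, claude_min, sup_min in tasks:
    print(f"leverage {human_min / claude_min:.1f}x, "
          f"supervisory {human_min / sup_min:.1f}x")
# -> leverage 68.6x, supervisory 800.0x
# -> leverage 24.0x, supervisory 240.0x

total_human = sum(t[0] for t in tasks)   # 3600 min = 60h
total_claude = sum(t[1] for t in tasks)  # 85 min
total_sup = sum(t[2] for t in tasks)     # 8 min

# The weighted averages are the same ratios over the totals.
print(f"weighted leverage {total_human / total_claude:.1f}x")   # 42.4x
print(f"weighted supervisory {total_human / total_sup:.1f}x")   # 450.0x
```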

Analysis

The lab generation task at 68.6x carried most of the leverage. Once the domain specifications and lab framework exist, generating 66 labs across 35 domains is pure content stamping. The AI reads each domain's goal hierarchy and produces labs that exercise the leaf nodes. Three minutes of prompting, 35 minutes of execution, and the result would take a curriculum developer a full work week.
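
To make "content stamping" concrete, here is a hypothetical sketch of that pass: walk each domain's goal hierarchy and emit one lab per leaf node. The `Goal` structure and `generate_lab()` call are illustrative stand-ins, not the actual pipeline.

```python
# Hypothetical sketch: generate one lab per leaf node of a goal hierarchy.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    children: list["Goal"] = field(default_factory=list)

def leaf_goals(goal: Goal):
    """Yield the leaf nodes of a goal hierarchy (the skills labs exercise)."""
    if not goal.children:
        yield goal
    else:
        for child in goal.children:
            yield from leaf_goals(child)

def generate_lab(domain: str, goal: Goal) -> str:
    # Stand-in for the model call that turns a leaf goal into an interactive lab.
    return f"[{domain}] Lab: exercise '{goal.name}'"

# Example domain (invented for illustration).
domain = Goal("Networking", [
    Goal("Addressing", [Goal("Subnet a /24"), Goal("Read a routing table")]),
    Goal("Diagnostics", [Goal("Trace a failed DNS lookup")]),
])

for leaf in leaf_goals(domain):
    print(generate_lab(domain.name, leaf))
```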

The pipeline setup task at 24.0x was lower because it involved debugging: fixing a virtual environment, resolving dependency conflicts, patching a batch script, and then running the actual synthesis and quality review passes. Environment troubleshooting is where AI leverage drops. The machine is fast at generating code but still has to iterate through build-fix-retry cycles just like a human would, only faster.
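
The loop itself looks the same for a human and an AI agent: run a step, read the error, apply a fix, run again. A rough sketch of that cycle, with hypothetical command and script names rather than the actual batch script:

```python
# Illustrative build-fix-retry loop; commands and script names are hypothetical.
import subprocess

def run_step(cmd: list[str], max_attempts: int = 3) -> bool:
    """Run a pipeline step, retrying after each failed attempt."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        print(f"attempt {attempt} failed:\n{result.stderr}")
        # In the real workflow, the fix between attempts comes from reading
        # stderr and patching the environment or the script before rerunning.
    return False

steps = [
    ["python", "-m", "venv", ".venv"],       # rebuild the broken environment
    ["python", "synthesize.py", "--batch"],  # batch synthesis (hypothetical)
    ["python", "quality_review.py"],         # quality review pass (hypothetical)
]
for cmd in steps:
    if not run_step(cmd):
        break  # stop the pipeline on an unrecoverable step
```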

The supervisory leverage of 450.0x is the standout number. Eight minutes of my time on a Saturday afternoon turned into seven and a half days of engineering output. Spring break barely noticed.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.