
Leverage Record: March 2, 2026


About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. This was a marathon day dominated by structured document authoring at scale: 95 domain specification documents across seven certification families, plus CMS infrastructure work, content moderation, and article writing.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

The Numbers

| # | Task | Human Est. | Claude | Leverage |
|---|------|------------|--------|----------|
| 1 | ML pipeline configuration: batch runner updates, pool sizing, and deployment automation | 4 hours | 19 min | 12.6x |
| 2 | Technical documentation audit and cross-reference correction across 9 documents | 6 hours | 25 min | 14.4x |
| 3 | Integrate infrastructure provisioning into Narrative CMS (7 modules, API key auth, Flask Blueprint, 74 tests) | 16 hours | 18 min | 53.3x |
| 4 | Education platform course structure and provider branding implementation with tests | 40 hours | 25 min | 96x |
| 5 | LLM instructions endpoint and MCP server for Narrative CMS | 16 hours | 12 min | 80x |
| 6 | Standardized test domain specs: 7 families (18 documents, 590 leaf goals) | 90 hours | 105 min | 51.4x |
| 7 | IT certification domain specs: 9 CompTIA documents with 289+ leaf goals | 64 hours | 49 min | 78.4x |
| 8 | Cybersecurity certification domain specs: 9 ISC2 documents with 424 leaf goals | 104 hours | 34 min | 183.5x |
| 9 | Project management certification domain specs: 9 PMI documents with 405 leaf goals | 64 hours | 30 min | 128x |
| 10 | Networking certification domain specs: 16 Cisco documents with 372+ leaf goals | 152 hours | 59 min | 154.6x |
| 11 | Cybersecurity certification domain specs: 25 GIAC documents with 320+ leaf goals | 168 hours | 98 min | 102.9x |
| 12 | Batch validation, cross-referencing, and README generation for 95 domain specs | 240 hours | 110 min | 130.9x |
| 13 | Architecture article authoring (4,000 words, 6 tables, 2 diagrams) with AI detection and staging deploy | 8 hours | 20 min | 24x |
| 14 | Port onboarding UI component between application codebases | 4 hours | 4 min | 60x |
| 15 | Content moderation and review workflow across 3 repositories (30+ files) | 120 hours | 35 min | 205.7x |

Aggregate

| Metric | Value |
|--------|-------|
| Tasks completed | 35 (grouped into 15 categories above) |
| Human equivalent | 1,096 hours (~27.4 work weeks) |
| Claude wall-clock | 643 minutes (~10.7 hours) |
| Tokens consumed | ~2,095,000 |
| Weighted leverage factor | 102.3x |
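The weighted leverage factor is simply total human-equivalent minutes divided by total Claude wall-clock minutes. A quick sketch of the arithmetic, using the per-task figures from the table above:

```python
# Per-task figures copied from the table above (tasks 1-15).
human_hours = [4, 6, 16, 40, 16, 90, 64, 104, 64, 152, 168, 240, 8, 4, 120]
claude_minutes = [19, 25, 18, 25, 12, 105, 49, 34, 30, 59, 98, 110, 20, 4, 35]

total_human_min = sum(human_hours) * 60   # 1,096 hours -> 65,760 minutes
total_claude_min = sum(claude_minutes)    # 643 minutes

# Weighting falls out naturally: long tasks contribute more minutes to
# both numerator and denominator, so no explicit weights are needed.
weighted_leverage = total_human_min / total_claude_min
print(f"{weighted_leverage:.1f}x")  # 102.3x
```

Note that this is a minute-weighted average, not a mean of the per-task leverage column, which is why a single 205.7x task doesn't dominate the figure.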

Analysis

This was the first day to break the 100x weighted average. The primary driver was structured document authoring at scale: 95 domain specification documents spanning CompTIA, ISC2, PMI, Cisco, and GIAC certifications plus seven standardized test formats. Each document followed a strict JSON schema with hierarchical goal taxonomies, prerequisite graphs, and cross-domain references. A human would need to research each certification's exam objectives, decompose them into leaf-level learning goals, establish prerequisite relationships, and validate the entire structure. Claude treated each certification family as a pattern after the first example and produced subsequent documents with minimal correction.
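The actual schema isn't published here, so the field names below are hypothetical, but the structure described — leaf-level goals linked by a prerequisite graph that must be internally consistent — might look roughly like this, along with the kind of check the batch validation pass would run:

```python
# Illustrative sketch only: field names are hypothetical, not the real schema.
# Each spec pairs a flat list of leaf goals with prerequisite links that
# must reference only goals defined in the same document.
spec = {
    "certification": "Example-Cert-101",
    "goals": [
        {"id": "1.1.a", "text": "Explain the CIA triad", "prereqs": []},
        {"id": "1.1.b", "text": "Classify threat actors", "prereqs": ["1.1.a"]},
    ],
}

def dangling_prereqs(spec):
    """Return prerequisite IDs that point at goals not defined in this spec."""
    ids = {goal["id"] for goal in spec["goals"]}
    return [p for goal in spec["goals"] for p in goal["prereqs"] if p not in ids]

print(dangling_prereqs(spec))  # [] -> the prerequisite graph is closed
```

A check like this is what makes batch validation across 95 documents cheap: the pattern is fixed, so verification reduces to mechanical graph and schema checks rather than re-reading each document.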

The highest individual leverage came from the content moderation workflow at 205.7x and the ISC2 cybersecurity certification specs at 183.5x. Both share the same characteristic: well-defined structural patterns applied repeatedly across large document sets. Once Claude learns the pattern from the first document, the marginal cost of each additional document drops to near zero. A human does not get that same compounding efficiency because fatigue, context-switching, and boredom accumulate across repetitions.

The lowest leverage came from the ML pipeline configuration at 12.6x and the documentation audit at 14.4x. Both involved operational tasks gated by external system interactions (deploying services, verifying cross-references against live documents) rather than pure generation work.

The CMS work landed in the middle: infrastructure provisioning at 53.3x and the MCP server at 80x. These were greenfield implementations with clear specifications, which is Claude's sweet spot. The 74 unit tests for the infrastructure module took longer to generate than the module itself, but they caught three edge cases during implementation that would have surfaced as production bugs otherwise.

Twenty-seven work weeks of output in a single day. At 102.3x leverage, every minute of Claude time replaced 1.7 hours of human engineering. The constraint was not compute or context windows. It was my ability to review output and write prompts fast enough to keep the sessions fed.


See all records under the Time Record tag.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.