
Leverage Record: March 2, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. This was a marathon day dominated by structured document authoring at scale: 95 domain specification documents spanning certification families from CompTIA, ISC2, PMI, Cisco, and GIAC, plus seven standardized test formats, alongside CMS infrastructure work, content moderation, and article writing.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

The Numbers

| # | Task | Human Est. | Claude | Leverage |
|---|------|------------|--------|----------|
| 1 | ML pipeline configuration: batch runner updates, pool sizing, and deployment automation | 4 hours | 19 min | 12.6x |
| 2 | Technical documentation audit and cross-reference correction across 9 documents | 6 hours | 25 min | 14.4x |
| 3 | Integrate infrastructure provisioning into Narrative CMS (7 modules, API key auth, Flask Blueprint, 74 tests) | 16 hours | 18 min | 53.3x |
| 4 | Education platform course structure and provider branding implementation with tests | 40 hours | 25 min | 96x |
| 5 | LLM instructions endpoint and MCP server for Narrative CMS | 16 hours | 12 min | 80x |
| 6 | Standardized test domain specs: 7 families (18 documents, 590 leaf goals) | 90 hours | 105 min | 51.4x |
| 7 | IT certification domain specs: 9 CompTIA documents with 289+ leaf goals | 64 hours | 49 min | 78.4x |
| 8 | Cybersecurity certification domain specs: 9 ISC2 documents with 424 leaf goals | 104 hours | 34 min | 183.5x |
| 9 | Project management certification domain specs: 9 PMI documents with 405 leaf goals | 64 hours | 30 min | 128x |
| 10 | Networking certification domain specs: 16 Cisco documents with 372+ leaf goals | 152 hours | 59 min | 154.6x |
| 11 | Cybersecurity certification domain specs: 25 GIAC documents with 320+ leaf goals | 168 hours | 98 min | 102.9x |
| 12 | Batch validation, cross-referencing, and README generation for 95 domain specs | 240 hours | 110 min | 130.9x |
| 13 | Architecture article authoring (4,000 words, 6 tables, 2 diagrams) with AI detection and staging deploy | 8 hours | 20 min | 24x |
| 14 | Port onboarding UI component between application codebases | 4 hours | 4 min | 60x |
| 15 | Content moderation and review workflow across 3 repositories (30+ files) | 120 hours | 35 min | 205.7x |

Aggregate

| Metric | Value |
|--------|-------|
| Tasks completed | 35 (grouped into 15 categories above) |
| Human equivalent | 1,096 hours (~27.4 work weeks) |
| Claude wall-clock | 643 minutes (~10.7 hours) |
| Tokens consumed | ~2,095,000 |
| Weighted leverage factor | 102.3x |
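The aggregate figures follow directly from the per-task table. A minimal sketch of the arithmetic, using the hours and minutes listed above:

```python
# Per-task figures from the table above (human estimates in hours,
# Claude wall-clock in minutes), in row order.
human_hours = [4, 6, 16, 40, 16, 90, 64, 104, 64, 152, 168, 240, 8, 4, 120]
claude_minutes = [19, 25, 18, 25, 12, 105, 49, 34, 30, 59, 98, 110, 20, 4, 35]

total_human_min = sum(h * 60 for h in human_hours)  # 1,096 hours -> 65,760 min
total_claude_min = sum(claude_minutes)              # 643 min (~10.7 hours)

# Weighted leverage: total human-equivalent time over total Claude time,
# so long tasks count proportionally more than short ones.
weighted_leverage = total_human_min / total_claude_min
print(f"{weighted_leverage:.1f}x")  # -> 102.3x
```

This is why the weighted figure (102.3x) sits below the best single task (205.7x): the slower, operations-gated tasks drag the pooled ratio down.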

Analysis

This was the first day to break the 100x weighted average. The primary driver was structured document authoring at scale: 95 domain specification documents spanning certification families from CompTIA, ISC2, PMI, Cisco, and GIAC, plus seven standardized test formats. Each document followed a strict JSON schema with hierarchical goal taxonomies, prerequisite graphs, and cross-domain references. A human would need to research each certification's exam objectives, decompose them into leaf-level learning goals, establish prerequisite relationships, and validate the entire structure. After the first example in each certification family, Claude treated the structure as a pattern and produced subsequent documents with minimal correction.
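The actual schema isn't reproduced in this post, but the validation step is worth illustrating. A sketch of the kind of prerequisite-graph check involved, with hypothetical field names (`domains`, `goals`, `prerequisites` are assumptions, not the real schema):

```python
# Hypothetical shape of one domain spec: domains containing leaf goals,
# where each goal may reference other goal ids as prerequisites.
spec = {
    "certification": "Example-Cert-101",
    "domains": [
        {
            "id": "1.0",
            "title": "Core Concepts",
            "goals": [
                {"id": "1.1", "title": "Define terms", "prerequisites": []},
                {"id": "1.2", "title": "Apply terms", "prerequisites": ["1.1"]},
            ],
        }
    ],
}

def validate_prerequisites(spec: dict) -> list[str]:
    """Return error strings for prerequisites that point at unknown goal ids."""
    known = {g["id"] for d in spec["domains"] for g in d["goals"]}
    errors = []
    for d in spec["domains"]:
        for g in d["goals"]:
            for p in g["prerequisites"]:
                if p not in known:
                    errors.append(f"goal {g['id']} references unknown prerequisite {p}")
    return errors

print(validate_prerequisites(spec))  # -> [] (no dangling references)
```

Running a check like this across all 95 documents is what task 12 (batch validation and cross-referencing) amounts to structurally: the same traversal, repeated per file.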

The highest individual leverage came from the ISC2 cybersecurity certification specs at 183.5x and the content moderation workflow at 205.7x. Both share the same characteristic: well-defined structural patterns applied repeatedly across large document sets. Once Claude learns the pattern from the first document, the marginal cost of each additional document drops to near zero. A human does not get that same compounding efficiency because fatigue, context-switching, and boredom accumulate across repetitions.

The lowest leverage came from the ML pipeline configuration at 12.6x and the documentation audit at 14.4x. Both involved operational tasks gated by external system interactions (deploying services, verifying cross-references against live documents) rather than pure generation work.

The CMS work landed in the middle: infrastructure provisioning at 53.3x and the MCP server at 80x. These were greenfield implementations with clear specifications, which is Claude's sweet spot. The 74 unit tests for the infrastructure module took longer to generate than the module itself, but they caught three edge cases during implementation that would have surfaced as production bugs otherwise.

Twenty-seven work weeks of output in a single day. At 102.3x leverage, every minute of Claude time replaced 1.7 hours of human engineering. The constraint was not compute or context windows. It was my ability to review output and write prompts fast enough to keep the sessions fed.


See all records under the Time Record tag.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.