
Leverage Record: February 27, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Nineteen tasks today across three distinct workstreams: patent figure generation, cloud certification tooling, and technical writing. The patent work dominated in both volume (11 tasks) and peak leverage, with the consolidated figure generation task hitting 192x; the infrastructure design document led everything else at 120x.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| Task | Human Est. | Claude Time | Tokens | Leverage |
| --- | --- | --- | --- | --- |
| LLM knowledge benchmark: repo scaffolding, test harness, 106 questions, 3 model benchmark runs | 40h | 30min | 150k | 80x |
| Hand-craft 6 patent figure SVGs for application E | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application D | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application C | 8h | 15min | 25k | 32x |
| Hand-craft 9 patent figure SVGs for application G | 6h | 12min | 25k | 30x |
| Hand-craft 6 patent figure SVGs for application A | 8h | 12min | 45k | 40x |
| Hand-craft 7 patent figure SVGs for application I | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application J | 8h | 8min | 25k | 60x |
| Hand-craft 9 patent figure SVGs for application F | 8h | 12min | 35k | 40x |
| Hand-craft 7 patent figure SVGs for application H | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application K | 8h | 15min | 45k | 32x |
| Generate 78 patent figure SVGs + 11 compiled figure PDFs | 80h | 25min | 850k | 192x |
| Add validation model integration + synthesize cloud certification content | 4h | 15min | 50k | 16x |
| Implement challenge generator with edge-case filtering and hybrid model config | 8h | 25min | 150k | 19.2x |
| Infrastructure design document (16 sections: VPC, ECS, S3, IAM, EventBridge, CloudWatch, CI/CD, SNS, Terraform, costs, security) | 16h | 8min | 116k | 120x |
| Legacy answer support: schema, scorers, question files, reporter, formatter, tests, 4 benchmark reruns | 6h | 20min | 80k | 18x |
| Draft 3,800-word technical article + AI detection scan + site CTA redesign + staging/production deploy | 8h | 25min | 120k | 19.2x |
| Stage-isolated pipeline with per-stage model override | 8h | 12min | | 40x |
| Standalone re-validation tool with batch mode and shared model instances | 6h | 15min | 80k | 24x |

Aggregate Stats

| Metric | Value |
| --- | --- |
| Total tasks | 19 |
| Total human-equivalent hours | 246h |
| Total Claude minutes | 297min (4h 57min) |
| Total tokens | ~1.87M |
| Weighted average leverage | 49.7x |
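The weighted average leverage is total human-equivalent time divided by total Claude time, which is equivalent to averaging each task's leverage weighted by the Claude minutes it consumed. A minimal check using the totals above:

```python
# Weighted average leverage = total human minutes / total Claude minutes.
human_hours = 246       # total human-equivalent hours
claude_minutes = 297    # total Claude wall-clock minutes (4h 57min)

weighted_leverage = human_hours * 60 / claude_minutes
print(round(weighted_leverage, 1))  # → 49.7
```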

Analysis

The 192x leverage factor on the consolidated patent figure generation task stands out. That task involved converting 78 Mermaid diagrams into hand-crafted SVGs formatted for patent submission and compiling them into 11 separate PDF documents. A human patent illustrator would spend two full working weeks on that volume. Claude completed it in 25 minutes.
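Compiling 78 SVGs into 11 per-application PDFs starts with a batching step: group every figure by the application it belongs to. A minimal sketch of that grouping, assuming a hypothetical filename scheme like `application_A_fig_01.svg` (not the actual pipeline's naming):

```python
from collections import defaultdict
from pathlib import Path

def plan_pdf_batches(svg_paths):
    """Group figure SVGs by application ID so each application's
    figures compile into a single PDF. Assumes hypothetical filenames
    of the form 'application_<ID>_fig_<NN>.svg'."""
    batches = defaultdict(list)
    for path in svg_paths:
        app_id = Path(path).stem.split("_")[1]  # e.g. 'A'
        batches[app_id].append(path)
    # Sort figures within each batch so PDF page order is stable.
    return {app: sorted(figs) for app, figs in batches.items()}

figs = ["application_A_fig_02.svg", "application_A_fig_01.svg",
        "application_B_fig_01.svg"]
print(plan_pdf_batches(figs))
# → {'A': ['application_A_fig_01.svg', 'application_A_fig_02.svg'],
#    'B': ['application_B_fig_01.svg']}
```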

The infrastructure design document hit 120x. Sixteen sections covering every layer of a cloud-native export pipeline, from VPC topology to Terraform module structure to cost projections. Writing that document from scratch with proper architecture diagrams takes a senior engineer two full days. Claude produced it in 8 minutes.

The knowledge benchmark build (80x) involved creating a complete evaluation framework: Pydantic schemas, three scoring mechanisms (exact match, multiple choice extraction, LLM-as-judge), provider abstraction for multiple APIs, 106 AWS questions across 7 categories, and three full benchmark runs with result reporting. That is a week-long project compressed into 30 minutes.

The lower-leverage tasks (16x to 19x) involved more iterative work: challenge generation with tuning, article drafting with AI detection compliance, and benchmark refinement with reruns. These tasks require more back-and-forth judgment calls, and that iteration loop compresses less dramatically.

The stage-isolated pipeline task (40x) added per-stage model overrides to the certification synthesis system, allowing different AI models to be configured for each pipeline stage. Clean greenfield infrastructure work with a well-defined scope.
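The override pattern itself is straightforward: each stage resolves its model from a per-stage mapping, falling back to a pipeline-wide default. A minimal sketch with illustrative names, not the actual config schema:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineConfig:
    """Per-stage model overrides with a pipeline-wide fallback."""
    default_model: str
    stage_models: dict[str, str] = field(default_factory=dict)

    def model_for(self, stage: str) -> str:
        return self.stage_models.get(stage, self.default_model)

config = PipelineConfig(
    default_model="model-fast",
    stage_models={"validation": "model-strong"},  # override one stage
)
print(config.model_for("generation"))  # → model-fast
print(config.model_for("validation"))  # → model-strong
```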

The standalone re-validation tool (24x) extracted validation logic into an independent script with batch mode and shared model instances, allowing targeted re-runs without full pipeline execution.
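Sharing model instances across a batch avoids re-constructing an expensive client for every item. A sketch of the caching pattern, using a stand-in client class rather than the real tool's API:

```python
class FakeClient:
    """Stand-in for a model client that is expensive to construct."""
    instances_created = 0

    def __init__(self, model: str):
        FakeClient.instances_created += 1
        self.model = model

_clients: dict[str, "FakeClient"] = {}

def get_client(model: str) -> FakeClient:
    """Return one shared client per model name for the whole batch."""
    if model not in _clients:
        _clients[model] = FakeClient(model)
    return _clients[model]

def revalidate_batch(items, model="model-strong"):
    client = get_client(model)  # constructed once, reused per item
    return [(item, client.model) for item in items]

revalidate_batch(["q1", "q2", "q3"])
print(FakeClient.instances_created)  # → 1
```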

A 49.7x weighted average means roughly six weeks of senior engineering output in under five hours of wall-clock time across nineteen tasks.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.