Leverage Record: March 3, 2026

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. This was a day dominated by education platform development, engineering metrics tooling, and build infrastructure improvements.

About These Records

These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is therefore substantially higher than what appears here.

The Numbers

| # | Task | Human Est. | Claude | Leverage |
| --- | --- | --- | --- | --- |
| 1 | Cross-platform desktop app Phase 2: shared UI library, activity components, demo migration (8 screens, 50+ files) | 120 hours | 45 min | 160x |
| 2 | Engineering metrics dashboard: core package, CLI, DB migrations, CSV import (160 records), 4 design docs, 25 Python files | 80 hours | 22 min | 218.2x |
| 3 | Engineering metrics dashboard: API server, React dashboard with interactive tooltips | 80 hours | 35 min | 137x |
| 4 | Mobile app requirements document (18-section comprehensive spec) | 24 hours | 25 min | 57.6x |
| 5 | Education platform: course mode + exam simulator (lesson synthesis engine, runtime personalization API, course UI, exam simulator, documentation; ~50 files across 5 repos) | 480 hours | 120 min | 240x |
| 6 | Mobile app lesson/exam feature integration (6 tasks: types, API, persistence, 3 screens, course structure, app routing) | 16 hours | 10 min | 96x |
| 7 | Runtime lesson personalization engine + documentation + certification lesson content generation | 12 hours | 30 min | 24x |
| 8 | Engineering metrics dashboard: user and team management expansion | 24 hours | 16 min | 90x |
| 9 | Lesson caching, content chunking, and adaptive learning toggle (11 files across 4 repos) | 16 hours | 6 min | 160x |
| 10 | Simplify 85 technical specification diagrams (config fix, mechanical simplification, figure splitting, full validation) | 24 hours | 19 min | 75.8x |
| 11 | visionOS immersive 3D experience (3 new files, 10 modified) | 40 hours | 45 min | 53.3x |
| 12 | ML evaluation pipeline repair, documentation, and standardized test domain run | 40 hours | 53 min | 45.3x |
| 13 | Build-time diagram rendering pipeline for technical specification figures | 16 hours | 25 min | 38.4x |
| 14 | ML evaluation calibration: automated judge ordering (6 files) | 8 hours | 12 min | 40x |
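Each row's leverage column follows a single rule that every entry in the table satisfies: the human estimate converted to minutes, divided by Claude's wall-clock minutes. A minimal sketch of that per-row calculation (function name is mine, not from any tooling mentioned in this post):

```python
def leverage(human_hours: float, claude_minutes: float) -> float:
    """Human estimate in minutes divided by Claude wall-clock minutes."""
    return human_hours * 60 / claude_minutes

# Task 1: 120 human hours delivered in 45 minutes of Claude time
print(round(leverage(120, 45), 1))  # → 160.0
# Task 2: 80 human hours in 22 minutes
print(round(leverage(80, 22), 1))   # → 218.2
```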

Aggregate

| Metric | Value |
| --- | --- |
| Tasks completed | 14 |
| Human equivalent | 980 hours (~24.5 work weeks) |
| Claude wall-clock | 463 minutes (~7.7 hours) |
| Tokens consumed | ~3,270,000 |
| Weighted leverage factor | 127.0x |
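The weighted leverage factor is total human hours over total Claude wall-clock time, which weights each task by its size rather than averaging the per-task ratios. A short sketch reproducing the aggregate figures from the per-task table above:

```python
# (human hours, Claude minutes) for each of the 14 tasks, from the table
tasks = [
    (120, 45), (80, 22), (80, 35), (24, 25), (480, 120), (16, 10), (12, 30),
    (24, 16), (16, 6), (24, 19), (40, 45), (40, 53), (16, 25), (8, 12),
]

human_hours = sum(h for h, _ in tasks)      # 980 hours
claude_minutes = sum(m for _, m in tasks)   # 463 minutes

# Total human minutes divided by total Claude minutes
weighted_leverage = human_hours * 60 / claude_minutes

print(human_hours, claude_minutes, round(weighted_leverage, 1))  # → 980 463 127.0
```

Note that a simple mean of the 14 per-task leverage figures would differ; the weighted form lets large tasks like the 480-hour education platform build dominate, which is why the day lands at 127.0x.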

Analysis

This was the second consecutive day above a 100x weighted average. The primary driver was education platform development: the course mode and exam simulator implementation, at 240x, was the single largest task, producing a lesson synthesis engine, runtime personalization API, and full exam simulator with UI across five repositories. A human building this system from scratch, even with clear specs, would need twelve work weeks. Claude delivered it in two hours because each component followed patterns established by earlier components in the session.

The engineering metrics dashboard work clustered between 90x and 218x. The highest leverage came from the core package buildout at 218.2x: a CLI, database migrations, CSV import for 160 historical records, four design documents, and 25 Python files. This is the pattern that consistently drives high leverage: well-structured, repetitive work with clear schemas. Once the data model and API patterns are established, each additional endpoint and migration is incremental.

The lowest leverage came from the runtime lesson personalization work at 24x and the diagram rendering pipeline at 38.4x. The personalization engine required iterative testing against actual lesson content, which added wall-clock time. The rendering pipeline involved integrating a new JavaScript library (beautiful-mermaid) into a Python build system, which required debugging cross-runtime issues.

The visionOS immersive experience at 53.3x was notable for being a spatial computing project. Building 3D environments and interaction models is inherently slower for AI because the feedback loop requires visual inspection that text-based agents cannot perform. The human estimate is also lower than for comparable 2D work because visionOS projects carry less boilerplate.

Twenty-four and a half work weeks of output in a single day at 127x leverage: every minute of Claude time replaced just over two hours of senior engineering work.


See all records under the Time Record tag.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.