Leverage Record: February 26, 2026

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. Twenty tasks across six projects. The day was split among building a custom patent diagram renderer from scratch, standing up an interactive learning frontend with multiple activity modes, implementing a server-side scoring engine, writing three architecture articles, and iterating on layout engine improvements. The patent diagrammer hit the session's highest leverage at 200x.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

The Numbers

| # | Task | Human Est. | Claude | Leverage |
|---|------|-----------|--------|----------|
| 1 | Custom patent diagram renderer: domain-specific syntax, Mermaid translation layer, and USPTO-compliant SVG output for 78 diagrams | 40 hours | 12 min | 200x |
| 2 | Server-side scoring engine with ELO algorithm and dashboard focus area analytics across two projects (15+ files) | 40 hours | 15 min | 160x |
| 3 | Interactive learning frontend with multiple-choice engine, practice exams, scoring system, and analytics dashboard (7 phases, 24 files) | 40 hours | 25 min | 96x |
| 4 | Three new interactive activity modes (flashcard, timed recall, drag-and-drop) with behavioral progress visualization | 16 hours | 12 min | 80x |
| 5 | Fix blank-screen bugs, add onboarding flow, wire test attributes, write 11 end-to-end Playwright specs, add error boundary | 8 hours | 14 min | 34x |
| 6 | Architecture article on RDS/Aurora cost optimization strategies (~3,500 words, 9 tables, 2 diagrams) | 8 hours | 15 min | 32x |
| 7 | Architecture article on the CAP theorem and distributed consistency models (~4,500 words, 9 tables, 2 diagrams) | 8 hours | 15 min | 32x |
| 8 | Architecture article on real-time messaging protocols: WebSockets, SSE, gRPC, MQTT (~3,500 words, 8 tables, 2 diagrams) | 8 hours | 15 min | 32x |
| 9 | Diagram layout engine: six improvements including node ranking, coordinate assignment, aspect ratio handling, group boundaries, crossing reduction, and edge spacing | 16 hours | 30 min | 32x |
| 10 | Fix reference numeral mismatches across two application diagram sets and specification text | 4 hours | 8 min | 30x |
| 11 | Wire distractor generation types, dynamic answer feedback, header and footer components, scoring modal | 6 hours | 12 min | 30x |
| 12 | Light mode CSS overrides across 7 files with default theme switch | 4 hours | 8 min | 30x |
| 13 | Learning app persistence layer, calibration randomization, changelog, font rendering fix, and LAN development server | 4 hours | 8 min | 30x |
| 14 | Target-aware arrow routing with side entry points and scale calibration fix | 12 hours | 25 min | 29x |
| 15 | Theme toggle implementation with dev server restart | 1.5 hours | 4 min | 23x |
| 16 | Draft and apply specification amendment: 5 new sections, 3 pseudocode blocks, worked example, 3 claims, and one new figure | 8 hours | 25 min | 19x |
| 17 | Layout engine: minimum arrow gap enforcement, layer center alignment, chain re-alignment, and documentation update | 8 hours | 25 min | 19x |
| 18 | Fix diagonal edge segments with orthogonalization pass | 2 hours | 8 min | 15x |
| 19 | Layout alignment rules and README documentation | 6 hours | 25 min | 14x |
| 20 | Auto-load domain packages at engine startup and fix NumPy key mismatch | 2 hours | 12 min | 10x |

Aggregate Stats

| Metric | Value |
|--------|-------|
| Total tasks | 20 |
| Total human-equivalent hours | 241.5 |
| Total Claude minutes | 313 |
| Total tokens (approximate) | 1,610,000 |
| Weighted average leverage factor | 46.3x |

Analysis

The patent diagrammer at 200x is the highest single-task leverage factor recorded to date: a domain-specific language parser, a translation layer to Mermaid, and a USPTO-compliant SVG renderer for 78 patent application diagrams, all built in 12 minutes. That project would have taken a senior engineer a full work week. The syntax design, the rendering pipeline, the compliance requirements for patent figure formatting: each phase is cognitively dense with clear specifications. Exactly the profile that produces extreme leverage.
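The post doesn't show the DSL itself, but the translation layer's shape is easy to imagine. The sketch below is entirely hypothetical: it invents a one-line-per-element syntax (reference numeral, quoted label, optional arrow) and emits Mermaid flowchart source, since USPTO figures label every element with its reference numeral.

```python
import re

def to_mermaid(dsl_lines):
    """Translate hypothetical lines like '102 "Processor" -> 104' into Mermaid flowchart syntax."""
    out = ["flowchart TD"]
    for line in dsl_lines:
        m = re.match(r'(\d+)\s+"([^"]+)"(?:\s*->\s*(\d+))?', line)
        if not m:
            continue  # skip lines that don't match the toy grammar
        numeral, label, target = m.groups()
        # Patent figures show the reference numeral alongside the element label.
        out.append(f'    n{numeral}["{label} ({numeral})"]')
        if target:
            out.append(f'    n{numeral} --> n{target}')
    return "\n".join(out)
```

The real renderer goes a step further, turning the Mermaid intermediate form into compliance-checked SVG; none of that pipeline is shown here.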

The server-side ELO scoring engine (task 2) came in at 160x, the second-highest factor of the day: a full ELO rating algorithm with server-side persistence and dashboard analytics, implemented across two interconnected projects in 15 minutes. The scope covered rating calculations, focus area identification, progress tracking, and the UI to surface it all. Dense, well-specified, multi-file greenfield work.
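The core of any ELO implementation is two small functions. This is a minimal sketch using the standard logistic formula; the engine's actual K-factor, rating scale, and persistence layer aren't described in the post, so those are assumptions.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B (or a learner answers a question correctly),
    per the standard logistic ELO curve with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, opponent: float, actual: float, k: float = 32.0) -> float:
    """Move `rating` toward the observed outcome: actual is 1.0 (win/correct)
    or 0.0 (loss/miss). K controls how fast ratings react."""
    return rating + k * (actual - expected_score(rating, opponent))
```

Rating both the learner and each question this way is what makes "focus area" analytics fall out naturally: areas where the learner's rating lags the question pool are the weak spots.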

The interactive learning frontend (tasks 3, 4, 5, 11, 13) accounts for 74 human-equivalent hours across five related tasks. The initial build at 96x set up the full stack: React components, scoring engine, practice exam flow, and dashboard. Follow-on tasks added activity modes (80x), distractor logic and feedback (30x), test coverage with bug fixes (34x), and persistence with calibration improvements (30x). The pattern is consistent: greenfield implementation runs at high leverage, and iteration stays above 30x as long as the scope is well-defined.

Three architecture articles shipped in a combined 45 minutes of Claude time, replacing 24 hours of human writing. All three scored 0.00 on AI detection. The articles covered AWS cost optimization, distributed systems theory, and real-time messaging protocols. Writing three deep technical articles in under an hour while maintaining the voice and structure of existing site content is the kind of batch output that makes the leverage metric meaningful.

The layout engine improvements (tasks 9, 14, 17, 18, 19) represent the day's lowest-leverage cluster, averaging 22x. Layout algorithms involve iterative visual tuning: implement a change, render output, evaluate visually, adjust. Each cycle takes real time even for an AI agent. The bottleneck is the evaluation loop, not the implementation.
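To make the cluster concrete, one of those passes (task 18's orthogonalization) can be sketched in a few lines: replace a diagonal edge segment with an axis-aligned L-shaped pair. This is a guess at the simplest form of the technique; the real engine's bend-point selection and collision handling aren't shown in the post.

```python
def orthogonalize(p1, p2):
    """Split a diagonal segment (x1, y1) -> (x2, y2) into axis-aligned segments.
    Returns a list of (start, end) segment tuples."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 or y1 == y2:
        return [(p1, p2)]        # already horizontal or vertical
    bend = (x2, y1)              # route horizontally first, then vertically
    return [(p1, bend), (bend, p2)]
```

The code is trivial; the slow part is exactly what the paragraph describes: rendering the result, looking at it, and deciding whether the bend should go horizontal-first or vertical-first for each edge.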

The weighted average of 46.3x means the day's 313 minutes of agent time replaced 241.5 human-equivalent hours of focused solo engineering: just over six 40-hour work weeks compressed into a single day.
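The weighted average is just the ratio of total human minutes to total agent minutes, which the aggregate table lets anyone check:

```python
total_human_hours = 241.5     # sum of the Human Est. column
total_claude_minutes = 313    # sum of the Claude column

leverage = total_human_hours * 60 / total_claude_minutes
work_weeks = total_human_hours / 40

print(round(leverage, 1))     # 46.3
print(round(work_weeks, 2))   # 6.04
```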

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.