Leverage Record: March 30, 2026

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Twenty-two tasks. March 30 was the most diagram-intensive day of the entire project: a full patent diagram audit that found and fixed 219 issues across 96 figures, five sessions of deep work on the Mermaid rendering library (collision resolution, diamond centering, exit ports, blocker avoidance, overlap detection), and a content audit validating 845 specifications, 144 packages, and 696,000 questions.

The weighted average leverage factor was 21.1x, pulled down by the rendering library sessions, which averaged 12-13x due to genuine algorithmic complexity. The supervisory leverage held at 175.7x.

About These Records

These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor |
|---|------|------------|--------|------|--------|-------------|
| 1 | Full diagram audit + fix 219 issues across 96 figures in 12 applications | 80h | 45m | 5m | 106.7x | 960.0x |
| 2 | Rendering library: collision resolution + layer/port override system (ancestor filtering + group awareness) | 32h | 50m | 5m | 38.4x | 384.0x |
| 3 | Full diagram audit + fix 17 findings across 96 figures | 16h | 35m | 5m | 27.4x | 192.0x |
| 4 | Rendering library session 3: route optimizer + layout refinements for edge collision resolution | 40h | 120m | 10m | 20.0x | 240.0x |
| 5 | Fix renderer regressions: phase guards + cascade logic + bezier direction + back-edge routing | 8h | 30m | 5m | 16.0x | 96.0x |
| 6 | Fix numeral collisions in diagram files for 2 applications | 2h | 8m | 3m | 15.0x | 40.0x |
| 7 | Fix 6 diagram visual issues: subgroup extension + micro-dogleg straightener + figure restructuring | 6h | 25m | 5m | 14.4x | 72.0x |
| 8 | Full diagram audit: 207 figures across 25 applications + numeral collision and back-edge fixes | 8h | 35m | 5m | 13.7x | 96.0x |
| 9 | Rendering library session 4: diamond centering, exit ports, blocker avoidance, overlap resolution, snapshot tests | 40h | 180m | 10m | 13.3x | 240.0x |
| 10 | Content audit: 57 checks across 845 specs, 144 packages, 696K questions, 1894 labs | 4h | 20m | 2m | 12.0x | 120.0x |
| 11 | Rendering library session 5: edge/back-edge fixes, reversed edges, subgroup termination, layout heuristics | 24h | 120m | 5m | 12.0x | 288.0x |
| 12 | Fix diagram issues in application (7 figures) | 1.5h | 8m | 3m | 11.2x | 30.0x |
| 13 | Fix duplicate reference numeral issues across 9 figures | 1.5h | 8m | 3m | 11.2x | 30.0x |
| 14 | Debug and fix rendering library edge staggering for multi-exit fan-out nodes | 8h | 45m | 5m | 10.7x | 96.0x |
| 15 | Fix diagram warnings: move components inside subgraph, remove spurious numerals | 0.75h | 5m | 4m | 9.0x | 11.2x |
| 16 | Fix diagram warnings across 5 figures | 1.5h | 10m | 3m | 9.0x | 30.0x |
| 17 | Fix diagram issues (duplicate numerals + dotted edge) | 0.5h | 4m | 3m | 7.5x | 10.0x |
| 18 | Fix diagram issues across 6 figures | 1.5h | 12m | 3m | 7.5x | 30.0x |
| 19 | Fix diagram warnings across 9 figures | 1.5h | 12m | 3m | 7.5x | 30.0x |
| 20 | Fix diagram issues across application | 0.5h | 5m | 2m | 6.0x | 15.0x |
| 21 | Fix diagram warnings across 5 figures | 0.5h | 8m | 3m | 3.8x | 10.0x |
| 22 | Fix diagram warnings across 6 figures | 0.5h | 8m | 3m | 3.8x | 10.0x |

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 22 |
| Total human-equivalent hours | 278.2 |
| Total Claude minutes | 793 |
| Total supervisory minutes | 95 |
| Total tokens | 3,499,000 |
| Weighted average leverage factor | 21.1x |
| Weighted average supervisory leverage factor | 175.7x |
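The weighted averages are total output over total input, not a mean of the per-task factors, so the long tasks dominate. A minimal sketch of the arithmetic, with the (human hours, Claude minutes, supervisory minutes) tuples abbreviated from the task log above:

```python
# (human-equivalent hours, Claude minutes, supervisory minutes) per task,
# copied from the task log.
tasks = [
    (80, 45, 5), (32, 50, 5), (16, 35, 5), (40, 120, 10), (8, 30, 5),
    (2, 8, 3), (6, 25, 5), (8, 35, 5), (40, 180, 10), (4, 20, 2),
    (24, 120, 5), (1.5, 8, 3), (1.5, 8, 3), (8, 45, 5), (0.75, 5, 4),
    (1.5, 10, 3), (0.5, 4, 3), (1.5, 12, 3), (1.5, 12, 3), (0.5, 5, 2),
    (0.5, 8, 3), (0.5, 8, 3),
]

human_minutes = sum(h * 60 for h, _, _ in tasks)
claude_minutes = sum(c for _, c, _ in tasks)
sup_minutes = sum(s for _, _, s in tasks)

leverage = human_minutes / claude_minutes   # ~21x
sup_leverage = human_minutes / sup_minutes  # ~175x
```

The table's rounded per-task entries reproduce the published aggregates to within rounding.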

Analysis

The full diagram audit at 106.7x (task 1) was the standout: Claude scanned 96 figures across 12 applications, identified 219 issues (missing numerals, duplicate references, incorrect edge styles, logic errors), and fixed all of them in 45 minutes. A human doing this work would spend two full weeks cross-referencing each figure against its specification, checking every reference numeral, and verifying every edge direction. The AI handles the cross-referencing without fatigue or drift.
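The audit tooling itself isn't shown here, but one of the checks it describes, duplicate reference numerals, reduces to a scan like this hypothetical sketch (the pair format and rule are my assumptions, not the actual implementation):

```python
from collections import defaultdict

def find_conflicting_numerals(elements: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Given (numeral, element-label) pairs collected from all figures,
    report numerals that point at more than one distinct element."""
    labels = defaultdict(set)
    for numeral, label in elements:
        labels[numeral].add(label)
    return {n: names for n, names in labels.items() if len(names) > 1}

issues = find_conflicting_numerals([
    ("102", "network interface"),
    ("104", "scheduler"),
    ("104", "task queue"),  # same numeral, different element: a finding
])
# issues == {"104": {"scheduler", "task queue"}}
```

The same shape of scan covers the missing-numeral and duplicate-reference cases with different grouping keys.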

The five rendering library sessions (tasks 2, 4, 9, 11, 14) consumed 515 of the 793 total Claude minutes and averaged 13.5x leverage. This is the lowest per-task leverage in the entire month, and it makes sense. Debugging visual rendering algorithms is fundamentally harder than generating code or running audits. The AI cannot see the visual output; it works from textual descriptions of pixel-level issues ("the arrow is 6 pixels short of the box"). Each fix risks regressions in other diagrams. The work is iterative, exploratory, and requires deep understanding of computational geometry.
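None of the library's internals appear in this log, but the "overlap detection" and "collision resolution" steps it names typically start from axis-aligned bounding-box tests. A generic sketch under that assumption (the `Box` type and push strategy are illustrative, not the actual code):

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # left edge
    y: float  # top edge
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    # Axis-aligned boxes intersect iff they overlap on both axes.
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def resolve(a: Box, b: Box, gap: float = 8.0) -> None:
    # One crude strategy: push b right or down, whichever move is smaller.
    # Real layout engines score many candidate moves and re-check cascades,
    # which is exactly where the regression risk comes from.
    if not overlaps(a, b):
        return
    dx = a.x + a.w + gap - b.x  # push needed to clear a horizontally
    dy = a.y + a.h + gap - b.y  # push needed to clear a vertically
    if dx <= dy:
        b.x += dx
    else:
        b.y += dy

a, b = Box(0, 0, 100, 40), Box(90, 10, 100, 40)
resolve(a, b)
# b now sits at x=108: clear of a, plus the 8px gap
```

Every such nudge can create a new collision elsewhere, which is why these sessions were iterative rather than one-shot.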

The content audit (task 10, 12.0x) validated 845 domain specifications, 144 synthesized packages, 696,000 generated questions, and 1,894 lab definitions in 20 minutes. This is exhaustive verification, not sampling: every spec was checked against its schema, every package validated for completeness, and every question bank verified for count consistency.
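The count-consistency part of such an audit is easy to picture: the declared counts must match what is actually on disk. A hypothetical sketch (the manifest shape is an assumption):

```python
def check_counts(declared: dict[str, int], observed: dict[str, int]) -> list[str]:
    """Compare declared counts against observed counts; report mismatches."""
    findings = []
    for name, want in declared.items():
        got = observed.get(name, 0)
        if got != want:
            findings.append(f"{name}: declared {want}, found {got}")
    return findings

findings = check_counts(
    {"specs": 845, "packages": 144, "questions": 696_000},
    {"specs": 845, "packages": 143, "questions": 696_000},
)
# findings == ["packages: declared 144, found 143"]
```

Checks like this are why the AI's leverage on audits stays high: the per-item work is mechanical and the item count is enormous.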

The diagram fix tasks (12-22) represent the cleanup pass after the audit and library work. Each task fixes specific findings in specific figures. The low leverage factors (3.8x to 11.2x) reflect the precision of the work: small, targeted edits where the overhead of reading the spec, understanding the context, and making the edit dominates the actual change.

The supervisory leverage of 175.7x held despite the library debugging sessions. Ninety-five minutes of prompt-writing time produced 278 hours of output. The rendering library sessions required longer prompts (5-10 minutes each) to describe the visual issues precisely, which is why the per-task supervisory leverage on those is lower than usual.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.