Leverage Record: April 04, 2026

AITime Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Twenty-nine tasks. April 4 was dominated by full-stack rewrites: an accounting platform rewritten from Node.js to Python (252 files, 27.7K LOC), a time tracking tool refitted from Flask to FastAPI, a list management app rebuilt from scratch, and a comprehensive auth architecture overhaul covering 13 OIDC clients. Testing was also heavy, with three separate test suites generated across different services. The day also included 12 new structured content specifications for AI/ML topics, a cross-application integration feature, and several infrastructure tasks.

The weighted average leverage factor was 54.1x, with a supervisory leverage factor of 574.2x. In human terms, this was 1,426 human-equivalent hours, or about 35.6 forty-hour weeks of work.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being done primarily with Claude Code. The actual total of AI-assisted output for any given day is therefore substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude Time | Sup. Time | Factor | Sup. Factor |
|---|------|------------|-------------|-----------|--------|-------------|
| 1 | Full-stack accounting platform rewrite: Node.js to Python, 252 files, 27.7K LOC | 240h | 22m | 5m | 654.5x | 2880.0x |
| 2 | Full refit of time tracking service: Flask to FastAPI, JS to TypeScript, auth/migrations/CI-CD, 93 files | 80h | 15m | 3m | 320.0x | 1600.0x |
| 3 | Full rewrite of list management app: new framework stack, 36 divergences resolved, 38 files | 120h | 25m | 5m | 288.0x | 1440.0x |
| 4 | Accounting platform React 19 frontend: 77 files, 15 page sections, all routes, CSS modules | 40h | 12m | 10m | 200.0x | 240.0x |
| 5 | Time tracking frontend rewrite: 23 files, 5015 LOC, design system CSS | 24h | 8m | 5m | 180.0x | 288.0x |
| 6 | OIDC client registration (13 clients), email config, login integration for 7 tool frontends, privacy policy | 200h | 90m | 20m | 133.3x | 600.0x |
| 7 | Time tracking comprehensive test suite: strategy doc, 142 tests (80 unit + 62 integration), Playwright specs | 40h | 18m | 2m | 133.3x | 1200.0x |
| 8 | Dual auth architecture: browse-before-auth UX, site key gate, SMS verification, registration modes | 120h | 55m | 15m | 130.9x | 480.0x |
| 9 | 17 API route files (105 endpoints) with full CRUD, pagination, auth, and validation | 24h | 12m | 8m | 120.0x | 180.0x |
| 10 | Marketing platform test suite: 288 unit tests, 3 integration test files, 8 E2E specs | 80h | 45m | 3m | 106.7x | 1600.0x |
| 11 | Cross-app integration: list-to-task sync with API key generation, DB-backed auth | 16h | 10m | 3m | 96.0x | 320.0x |
| 12 | 6 accounting service files (ledger, invoicing, banking, reports, recurring, tax) with full SQL | 12h | 8m | 5m | 90.0x | 144.0x |
| 13 | Launch plan reconciliation + press kit + marketing feature gap analysis | 24h | 18m | 5m | 80.0x | 288.0x |
| 14 | Bidirectional cross-app integration: pull-from-source, export-full endpoint, MCP tools | 12h | 12m | 2m | 60.0x | 360.0x |
| 15 | MCP server config fix (5 tools) + 4 marketing launch features: scheduled campaigns, CSV import | 20h | 22m | 3m | 54.5x | 400.0x |
| 16 | 18 backend unit test files (98 tests), SQLite compatibility fixes | 16h | 18m | 5m | 53.3x | 192.0x |
| 17 | Task tracker comprehensive test suite: 293 tests, 83%/80% backend/frontend coverage | 16h | 20m | 3m | 48.0x | 320.0x |
| 18 | Accounting backend core skeleton: 21 files (factory, config, database, auth, dependencies) | 4h | 5m | 5m | 48.0x | 48.0x |
| 19 | Terraform infrastructure (ECR/CodePipeline/ALB/DNS/SSM) for 2 tool services | 3h | 4m | 5m | 45.0x | 36.0x |
| 20 | Critical proficiency scoring bug: scores stuck at 0.0 after 500 correct answers | 16h | 22m | 5m | 43.6x | 192.0x |
| 21 | Certification marketplace frontend: API client, catalog page, detail page, routes, sidebar | 4h | 6m | 5m | 40.0x | 48.0x |
| 22 | Reconcile task tracker with fleet conventions: 15 divergences fixed | 8h | 12m | 2m | 40.0x | 240.0x |
| 23 | Backend unit test suite: conftest, 10 test files, 80 tests covering all service layers | 8h | 12m | 5m | 40.0x | 96.0x |
| 24 | Production deployment: Terraform ECR/ALB/Route53/SSM/S3/CloudFront/CodePipeline + DB + Docker | 16h | 25m | 2m | 38.4x | 480.0x |
| 25 | Infrastructure rename migration: 4 tool renames across Terraform, CI/CD, DNS, SSM | 28h | 45m | 5m | 37.3x | 336.0x |
| 26 | Newsletter platform testing strategy + 99 new tests (246 total, 76% coverage) | 12h | 35m | 3m | 20.6x | 240.0x |
| 27 | Rename task tracker (GitHub repo, local dir, 13 source files) + comprehensive README | 2h | 7m | 3m | 17.1x | 40.0x |
| 28 | Write 12 new structured content specifications for AI/ML/data topics | 240h | 990m | 5m | 14.5x | 2880.0x |
| 29 | Update product website patent portfolio numbers | 1h | 8m | 2m | 7.5x | 30.0x |

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 29 |
| Total human-equivalent hours | 1,426.0 |
| Total Claude minutes | 1,581 |
| Total supervisory minutes | 149 |
| Total tokens | 11,636,500 |
| Weighted average leverage factor | 54.1x |
| Weighted average supervisory leverage factor | 574.2x |
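The aggregate figures can be recomputed from the totals. A minimal sketch, assuming (as the per-task columns suggest) that each factor is human-equivalent minutes divided by Claude or supervisory minutes, so the human-hours-weighted average reduces to a ratio of column totals:

```python
# Recompute the day's aggregate leverage figures from the column totals.
# Assumption: factor = human-equivalent minutes / (Claude or supervisory) minutes.
total_human_hours = 1426.0  # sum of the Human Est. column
total_claude_min = 1581     # sum of the Claude Time column
total_sup_min = 149         # sum of the Sup. Time column

human_min = total_human_hours * 60

leverage = human_min / total_claude_min   # weighted average leverage factor
sup_leverage = human_min / total_sup_min  # weighted average supervisory leverage
work_weeks = total_human_hours / 40       # 40-hour human work weeks

print(f"{leverage:.1f}x, {sup_leverage:.1f}x, {work_weeks:.1f} weeks")
# → 54.1x, 574.2x, 35.6 weeks
```

Running this reproduces the 54.1x, 574.2x, and 35.6-week figures quoted above.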

Analysis

The Trellis accounting rewrite (654.5x) was the standout. Rewriting an entire full-stack application from one language and framework to another, producing 252 files and 27.7K lines of code in 22 minutes, is the kind of task where AI leverage is most extreme. A human would spend weeks understanding the existing codebase, planning the migration, writing the new code, and debugging integration issues. The AI has the entire context in its window and generates the replacement in a single pass.

Three other rewrites followed the same pattern: the time tracking refit (320x), the list management rebuild (288x), and the accounting frontend (200x). All four shared one characteristic: a well-understood target architecture with a clear specification. When the destination is unambiguous, the AI's generation speed creates massive leverage; when a task requires iterative design decisions, leverage drops.

The 12 structured content specifications (14.5x) sit at the opposite end. At 990 minutes of Claude time, this was the longest single task. Content generation at this scale involves extensive validation loops; each specification requires domain knowledge verification, structural consistency checks, and quality gates. The leverage is still meaningful (240 human hours compressed into 16.5 hours of Claude time), but the per-minute yield is lower than that of code-generation tasks.

The supervisory leverage (574.2x) reflects the extreme delegation possible on a day like this. Most tasks required under 5 minutes of prompting. The auth architecture overhaul was the exception at 15 minutes of supervisory time, reflecting the architectural complexity of designing a dual-mode authentication system. Even so, 120 human hours for 15 minutes of direction is a 480x supervisory ratio.
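The per-task ratio works the same way. A hypothetical helper (the log itself defines no such function), assuming supervisory factor = human-equivalent minutes per minute of prompting:

```python
def supervisory_factor(human_hours: float, sup_minutes: float) -> float:
    """Human-equivalent minutes of output per minute of supervision.

    Hypothetical helper for illustration, assuming the per-task
    Sup. Factor column is human minutes / supervisory minutes.
    """
    return human_hours * 60 / sup_minutes

# Task 8, the dual auth architecture overhaul:
# 120 human-equivalent hours directed with 15 minutes of prompting.
print(f"{supervisory_factor(120, 15):.0f}x")  # → 480x
```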

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.