
Leverage Record: April 19, 2026


About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Forty-seven tasks. April 19 was dominated by infrastructure provisioning: the bulk of the day went toward building a cloud infrastructure provisioner from scratch across nine sequential build phases, bringing the tool from initial scaffold through a production-ready deployment with 58 provisionable resource types, 43 MCP tools, 97 compliance rules across 11 compliance packs, 49 advisor checks, 145 tests, and a full WebSocket-only frontend. Running alongside that were the from-scratch build of a new real-time messaging tool, a major design system migration across a desktop client, MCP coverage sweeps across a dozen fleet tools, and calibration work on a synthetic student simulator. The weighted average leverage factor was 63.7x with a supervisory leverage of 865.7x, representing 2,135.5 human-equivalent hours.

April 18 clocked in at 121.1x weighted leverage and 1,297.7x supervisory. April 19 is lower on both axes for a straightforward reason: the day included a large number of sub-10x tasks. Calibration iteration tasks (4.7x, 6.7x, 8x) ran long in AI minutes because debugging a predictive model requires exploratory back-and-forth that doesn't compress the way greenfield builds do. A deployment and signing session (9.6x) spent 75 AI minutes chasing Vite chunk-splitting breakage and native module rebuild issues for an Electron build. An admin backend standup (8.8x) ran 75 minutes against a build pipeline that had a pre-existing CORS bug. These tasks drag the weighted average down substantially relative to a day where every task is new construction. The supervisory leverage also steps back because several multi-phase tasks required more detailed briefs -- 8-minute prompts instead of 1-minute ones.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

# | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor
1 | Infrastructure tool build, Phases 0-3: scaffold backend (FastAPI + WebSocket + MCP + database migrations + observability log drain) and frontend (React 19 + single WS client + pages), implement resource library with 12 AWS-shaped types, build dependency graph + planner + applier + destroyer + end-to-end stack lifecycle tests against mocked AWS, wire WebSocket dispatcher + 11 MCP tools + skill. Close onboarding doc gaps. 35 tests pass, 67% coverage. | 160h | 55m | 3m | 174.5x | 3200.0x
2 | Infrastructure tool build, continuation 2: cost service (pricing API + fallback pricebook, 8 resource estimators), IP space service (CIDR math + subnet availability + per-stack ENI footprint), compliance framework (declarative rule engine + 3 bundled packs covering 11 rules + tables + MCP tools), 6 more advisor checks bringing total to 12, 6 more resource types bringing total to 31, 36 MCP tools. 80 tests pass. | 160h | 55m | 1m | 174.5x | 9600.0x
3 | Infrastructure tool build, continuation 3: +20 production-critical resource types (load balancers, target groups, listeners, container cluster services, managed database instances, cache clusters, CDN distributions, build projects, pipelines, artifact repositories, identity pool, launch template, autoscaling group, state machine, event bus -- total 51), multi-account org discovery via org walk + STS assume-role chain + 3 MCP tools, system stack bootstrap with 10 logical resources + 2 MCP tools. 100 tests pass. 41 MCP tools. | 140h | 50m | 1m | 168.0x | 8400.0x
4 | Infrastructure tool build, continuation 4: +6 gap-filler resource types (EC2 instance provisionable, instance profile, managed policy, web ACL, bucket encryption, bucket lifecycle -- total 57), parallel fleet scanner with asyncio bounded-concurrency fan-out across accounts and regions, import hook REST route, +8 advisor checks bringing total to 20, +3 compliance packs bringing total to 6 packs / 36 rules, end-to-end system-stack integration test against mocked AWS. 107 tests pass. 42 MCP tools. | 120h | 45m | 1m | 160.0x | 7200.0x
5 | Infrastructure tool build, Phases 4-7 foundations: 12 more resource types (route tables, EIP, NAT gateway, ACM certificate, DNS hosted zone + record set, container registry, secrets, SNS topic, events rule), stack importer with tag-based discoverers + MCP tool, stack versioning (auto-snapshot on apply + rollback), inventory foundation (account/inventory/scan tables + 12 per-type discoverers + collector with soft-delete tombstoning + MCP tools), advisor scaffold (check framework + registry + 6 initial checks). 53 tests pass. | 120h | 45m | 1m | 160.0x | 7200.0x
6 | Infrastructure tool build, continuation 5: +27 inventory discoverers (total 39 types), real-time event ingest with 44 event patterns + ingest endpoint, landing zone reader (handles unenrolled accounts gracefully), monitoring registration + observability dashboard JSON + sync script, +7 advisor checks bringing total to 27, 4 frontend pages wired to WebSocket. 123 tests pass. 43 MCP tools. | 160h | 60m | 1m | 160.0x | 9600.0x
7 | Infrastructure tool build, continuation 9: +44 event patterns (total 88 -- covers EC2 tag + security group changes, IAM, managed database, object storage, container, load balancer, CDN, event rules, state machine, secrets, pipelines, audit trail, identity pool), +12 inventory discoverers (analytics workgroups, data integration, data warehouse, streaming, document DB, archive, config, messaging, virtual desktops, file transfer -- total 56), +8 advisor checks (total 49), +1 compliance pack (total 11 packs / 97 rules). 145 tests pass. | 100h | 38m | 1m | 157.9x | 6000.0x
8 | Infrastructure tool build, continuation 8: +8 resource types (budgets, health checks, event source mapping, layer version, topic policy, queue policy, DB cluster, events target -- total 74), +5 inventory discoverers (backup vault, REST API gateway v1, threat detector, config recorder, security hub -- total 44), +8 advisor checks (total 41), +2 compliance packs (SOC 2-lite, FedRAMP moderate-lite -- total 10 packs / 82 rules). 144 tests pass. | 90h | 35m | 1m | 154.3x | 5400.0x
9 | Infrastructure tool build, Phase 9: boto3 production deployment stack (11 resources) + bootstrap script with plan/apply/destroy, +1 resource type, deploy script + prod buildspec. Issue tracker board seed script. Monitoring registration + observability dashboard JSON + sync script. Auth OIDC client + CORS SSM registration script. Bug reporter wired. Port collision fix: bumped infrastructure tool to ports 3442/3443, swept every reference. Port registry moved to supporting-services as single source of truth + live filter tab added to dashboard (91 rows). Corporate site tool page, card, footer, blog post, icon mapping. Supporting-services dashboard entry + MCP catalog (43 tools). 127 tests pass. 58 provisionable types. | 140h | 55m | 3m | 152.7x | 2800.0x
10 | Infrastructure tool build, continuation 7: client cert issued from internal CA (key + cert + PKCS12, 5yr validity), +8 resource types (web ACL association, identity pool, standalone IAM policy, backup vault + plan, search domain, SNS subscription, bucket website config -- total 66), metric helper (14-day CPU average + max) retrofitted into advisor check, +5 cost estimators (total 13), +6 advisor checks (total 33), +2 compliance packs (cloud security best practices, well-architected security -- total 8 packs / 58 rules). 137 tests pass. | 100h | 40m | 1m | 150.0x | 6000.0x
11 | Messaging tool build, end-to-end: walked through 14 build phases (~5,500 LOC across Python, TypeScript, Shell), all 10 tests pass, provisioned full AWS infrastructure, bootstrapped database via SSM, created public source repo, pushed 7 commits, debugged and fixed 3 deploy issues (security group port, package lock registry, npm config), production live with HTTP 200 on both frontend and API health endpoints. | 200h | 80m | 1m | 150.0x | 12000.0x
12 | Infrastructure tool design: scoped 3 legacy apps to replace, audited existing Terraform and boto3 resources, produced full design document (WebSocket-only frontend, boto3 provisioner with CloudFormation-shaped resources, system stack, org scan, conformance packs, advisor parity, cost/IP space), scaffolded repo with CLAUDE.md, README, and four canonical docs. | 36h | 18m | 8m | 120.0x | 270.0x
13 | Messaging tool build, Phases 0-8: full backend (workspaces, channels, messages, threads, reactions, agent sessions, WebSocket gateway), React 19 + Tailwind frontend (sidebar, channel view, agent thread view, tasks dashboard, composer, themes), CLI with 5 hook scripts, end-to-end bridge (start session, stream events, human reply, inbox poll, ack, complete), MCP stdio server with 15 tools, 10 passing integration tests, initial commit. ~4,350 LOC. | 120h | 60m | 2m | 120.0x | 3600.0x
14 | Desktop client design system migration: 42 screens/components converted from legacy CSS Modules + icon library to Tailwind + shared design system primitives; all TypeScript errors fixed; universal macOS DMG built and signed. | 60h | 55m | 8m | 65.5x | 450.0x
15 | Command-center app fleet-onboarding cross-walk: README, compliance section in requirements (native bug reporter, diagnostics + identity endpoints), design additions (metrics, monitoring registration, auto-update + distribution topology, observability dashboard artifact, orchestrator MCP server 14A/14B/14C), Phase 15.5 onboarding completion plan, living 94-item checklist tracking doc. | 16h | 20m | 2m | 48.0x | 480.0x
16 | 62-spec free-tier expansion authored, validated, and seed-chained + lesson-asset pipeline foundation (new asset repo, object storage buckets provisioned, asset resolver and static mount in engine, topological synthesis batch). | 120h | 160m | 25m | 45.0x | 288.0x
17 | Expand MCP coverage to full observability surface (metrics, logs, traces, alerts, incidents, SLOs, dashboards). | 10h | 14m | 3m | 42.9x | 200.0x
18 | Messaging tool end-to-end onboarding: added monitoring, observability, and bug reporter specs to all 4 tool docs; AWS infra phase added to plan; codified full new-tool onboarding checklist in tools CLAUDE.md; shipped tool page, blog post, footer, tools list card, 10 screenshots to corporate site production; added tool to supporting-services dashboard catalog; created issue tracker board with 5 columns and 19 phase cards. | 32h | 45m | 8m | 42.7x | 240.0x
19 | Expand MCP coverage across pipelines, deals, activities, tasks, forecasts, and bulk operations in the CRM tool. | 9h | 13m | 2m | 41.5x | 270.0x
20 | Web client: fix 3 MCP-modal bugs -- (1) timer extension now skips answered questions, (2) engine grading integrated so correct answers score correctly and trigger auto-advance while wrong answers show server-authored explanation, (3) new context-aware adaptive hint endpoint with 3-level escalation driven by proficiency gap + session accuracy + prior hint count, plus frontend hint panel replacing the old generic toast. | 16h | 25m | 5m | 38.4x | 192.0x
21 | Replace SSE with WebSockets for real-time list updates in the list tool. | 6h | 10m | 4m | 36.0x | 90.0x
22 | Desktop client parity cleanup: deep-link dispatch, TypeScript error fixes (3 type definitions), competitive-mock dev-gate, login screen removal, tray stats push, router adoption with history stack. | 10h | 18m | 2m | 33.3x | 300.0x
23 | Replace SSE with WebSockets for real-time list updates in the task tracker. | 6h | 11m | 3m | 32.7x | 120.0x
24 | Fill MCP coverage gaps in the email tool: filters, push subscriptions, accounts, identities, signatures, contacts, and more. | 8h | 15m | 2m | 32.0x | 240.0x
25 | Replace SSE with WebSockets for AI chat token streaming in the patent browser tool. | 4h | 8m | 2m | 30.0x | 120.0x
26 | Fix page click in wiki tool (add by-slug + breadcrumbs routes); messaging tool channels/API keys (auto-enroll users, upsert on key mint); leverage dashboard leaderboard timeframe selector + All option. | 6h | 14m | 3m | 25.7x | 120.0x
27 | Wiki tool Phase 11 complete: linked repo service + bulk create, git sync worker with glob filter/frontmatter/link rewriter, HMAC-signed webhook, vector embeddings + chunker, AI chatbot with prompt caching, 17 MCP tools, frontend admin UIs + streaming chat, database migration + README walkthrough. | 60h | 140m | 4m | 25.7x | 900.0x
28 | Wiki tool Phase 11.1+11.2: ORM models (linked repo, SSH key, page embedding, chat session, chat turn + extended page) + SSH key service with secrets manager/filesystem backends + REST routes. | 10h | 28m | 2m | 21.4x | 300.0x
29 | Labs CDN: provision S3 bucket + CDN distribution + DNS + CORS, upload 2,048 labs + manifest, boto3 upload script, Terraform stub, desktop client lab-CDN module + labs browser screen + nav + local cache, unbundle 14 MB labs from web client + wire CDN fetch + local storage cache. | 24h | 90m | 3m | 16.0x | 480.0x
30 | Calendar tool MCP coverage 46 to 62 tools (100%+ endpoint coverage): added calendar sets, accounts, proposals, integrations, fleet status, feeds, export, weather, event parsing. | 3h | 12m | 1m | 15.0x | 180.0x
31 | Wiki tool linked-repos design: extend requirements, design, plan, and testing docs with linked repo + SSH key management + vector RAG + AI chatbot (Phase 11). | 8h | 35m | 6m | 13.7x | 80.0x
32 | Root-cause and fix desktop client console simulator type resolution (deleted obsolete ambient type declarations that shadowed real exports with any); fix 3 pre-existing errors hidden by any types; implement main-process code executor with sandboxed tmpdir + timeout + output caps for JS and Python; remove insecure renderer-side fallback. | 8h | 35m | 2m | 13.7x | 240.0x
33 | Marketing tool: wire missing social engagement WebSocket backend handler + reach 100% MCP coverage (lifecycle-rules CRUD). | 4h | 18m | 1m | 13.3x | 240.0x
34 | Desktop client: migrate 9 renderer screens (lobby, waiting room, competitive game, results, autopilot, release notes, provider mastery, lab console, onboarding) from legacy CSS Modules + icon library to shared design system primitives + Tailwind + Lucide icons. | 6h | 28m | 5m | 12.9x | 72.0x
35 | Close MCP coverage gap in analytics tool: add 3 missing tools to reach 100%+. | 2h | 12m | 1m | 10.0x | 120.0x
36 | Close MCP coverage gap in monitoring tool from 32/44 to 36/35 tools (100%+): add 4 new MCP tools. | 2h | 12m | 1m | 10.0x | 120.0x
37 | Test and deploy web client + build desktop DMG: run full test suites, fix test timeouts, pin transitive dependencies to artifact-cached versions, root-cause buildspec bug that published empty library (tsc --noEmit failing on test files with post_build still firing publish), rewrite lib buildspec to gate publish on dist existence + republish, trigger web pipeline, resolve Vite 6 chunk-split/for-loop breakage on code editor in desktop (esnext target + inlineDynamicImports + esbuild minify), native module rebuild for Electron 41, 477 MB universal DMG signed. | 12h | 75m | 5m | 9.6x | 144.0x
38 | Stand up admin backend as an engine proxy: JWT auth, server-side API key, full Terraform stack (load balancer rule, container registry, pipeline), frontend rewired to proxy, CORS JSON-format bug fix, end-to-end health check green. | 11h | 75m | 8m | 8.8x | 82.5x
39 | Format-matched practice endpoint (4-option MCQ): cuts predictor calibration gap 58 to 37 percentage points. | 8h | 60m | 3m | 8.0x | 160.0x
40 | Close list tool MCP coverage gap: add 4 missing tools for suggestions and invite-collaborator endpoints (40 to 44 tools). | 1h | 8m | 1m | 7.5x | 60.0x
41 | Close predictor calibration gap from 31pp to 10pp via uniform-rotation practice + tighter priors + aligned exam retrieval (v24-v26 iterations). | 10h | 90m | 2m | 6.7x | 300.0x
42 | Complete synthetic student calibration: 65-question exam generator + multi-select answering fix; student v29 passes 65q at day 90 with predicted 0.901 / actual 83.3% (8pp gap, naturally gated). | 14h | 180m | 4m | 4.7x | 210.0x
43 | Close task tracker MCP coverage gap: add 3 missing collaborator endpoint wrappers to reach 100%+. | 1h | 15m | 1m | 4.0x | 60.0x
44 | Close CRM tool MCP coverage gap: add 3 public quote wrappers (71 to 74 tools). | 1h | 15m | 1m | 4.0x | 60.0x
45 | Close static site generator MCP coverage gap to 100%: add identity tool for /api/v1/me. | 0.5h | 8m | 1m | 3.8x | 30.0x
46 | Accounting tool: add identity and health-check MCP tools to reach 111/111 endpoint coverage. | 0.5h | 12m | 1m | 2.5x | 30.0x
47 | Close newsletter tool MCP coverage gap from 55 to 57 tools (100%+): add ping and identity tools. | 0.5h | 18m | 1m | 1.7x | 30.0x
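Task 4's "parallel fleet scanner with asyncio bounded-concurrency fan-out" names a standard pattern: fan out one coroutine per account/region pair, but cap simultaneous work with a semaphore so a large fleet doesn't exhaust API rate limits. A minimal sketch of that shape, using invented account IDs and a stubbed scan body rather than the tool's actual code:

```python
import asyncio

async def scan_target(sem: asyncio.Semaphore, account: str, region: str) -> tuple[str, str, int]:
    # The semaphore caps how many scans run at once; waiters queue here.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the real per-account/region API calls
        return account, region, 0  # (account, region, resource_count)

async def scan_fleet(accounts: list[str], regions: list[str], limit: int = 8):
    sem = asyncio.Semaphore(limit)
    tasks = [scan_target(sem, a, r) for a in accounts for r in regions]
    # gather preserves submission order, so results line up with the fan-out list
    return await asyncio.gather(*tasks)

results = asyncio.run(scan_fleet(["111111111111", "222222222222"],
                                 ["us-east-1", "eu-west-1"]))
```

The semaphore is the whole trick: the number of in-flight tasks never exceeds `limit`, while every pair still gets scheduled up front.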

Aggregate Statistics

Metric | Value
Total tasks | 47
Total human-equivalent hours | 2,135.5
Total Claude minutes | 2,010
Total supervisory minutes | 148
Total tokens | 9,487,000
Weighted average leverage factor | 63.7x
Weighted average supervisory leverage factor | 865.7x
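The two headline metrics follow directly from the totals above. A quick sanity check in Python (the formulas are inferred from the published numbers, not from a stated methodology):

```python
human_hours = 2135.5       # total human-equivalent hours
claude_minutes = 2010      # total AI wall-clock minutes
supervisory_minutes = 148  # total human prompt/review minutes

# Weighted leverage: human-equivalent minutes delivered per AI minute.
weighted_leverage = human_hours * 60 / claude_minutes
# Supervisory leverage: human-equivalent minutes delivered per supervisory minute.
supervisory_leverage = human_hours * 60 / supervisory_minutes

print(round(weighted_leverage, 1), round(supervisory_leverage, 1))  # 63.7 865.7
```

Both reported aggregates reproduce exactly, which suggests the daily figures are time-weighted over tasks rather than simple averages of per-task factors.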

Analysis

The infrastructure provisioner build defines April 19. Ten sequential sessions built the tool from a blank repository to a production-deployed system spanning 58 provisionable AWS resource types, 39 inventory discoverers, 49 advisor checks, 11 compliance packs covering 97 rules, 88 real-time event patterns, and 43 MCP tools -- with 145 passing tests. Each session ran 35-60 AI minutes and produced a human-equivalent estimate of 90-160 hours, yielding leverage factors in the 150-175x range. The cumulative estimate across the ten infrastructure sessions alone is 1,290 human-hours, delivered in roughly 8 hours of AI wall-clock time. At a 1-minute supervisory cost per continuation session (the user passed a single brief continuing the previous phase), supervisory factors ranged from 5,400x to 9,600x -- the upper end of anything recorded this year. This is what happens when a well-specified design document exists before the build starts: each continuation can be initiated with a minimal prompt because the prior session's state is unambiguous.
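The compliance framework that grew to 11 packs and 97 rules across these sessions is described in the task log as a declarative rule engine. A minimal sketch of what that shape typically looks like, with invented rule IDs and resource fields (the tool's real schema is not shown in this log):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    rule_id: str
    description: str
    applies_to: str                # resource type this rule targets
    check: Callable[[dict], bool]  # returns True when the resource is compliant

# Rules are data, not control flow: packs are just lists of these records.
RULES = [
    Rule("BUCKET-ENCRYPTION", "Buckets must have encryption enabled",
         "bucket", lambda r: bool(r.get("encryption"))),
    Rule("SG-NO-OPEN-SSH", "Security groups must not open port 22 to the world",
         "security_group", lambda r: "0.0.0.0/0:22" not in r.get("ingress", [])),
]

def evaluate(resources: list[dict]) -> list[tuple[str, str, bool]]:
    # Run every rule against every resource of its declared type.
    findings = []
    for rule in RULES:
        for res in resources:
            if res["type"] == rule.applies_to:
                findings.append((rule.rule_id, res["name"], rule.check(res)))
    return findings

findings = evaluate([
    {"type": "bucket", "name": "logs", "encryption": True},
    {"type": "security_group", "name": "web", "ingress": ["0.0.0.0/0:22"]},
])
```

Because rules are plain data, adding a pack is additive work with no changes to the engine, which is consistent with how quickly the pack count grew session over session.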

The messaging tool (two tasks, at 150x and 120x) followed the same pattern: spec documents were written first, then the build proceeded through 14 phases in two sessions. The 200-hour human estimate on the second session reflects what it would actually cost a human engineer to take a real-time messaging platform from spec to production, including AWS infrastructure, database bootstrap, debugging, and deployment verification -- all in 80 AI minutes. That is a genuinely striking number by any standard. The 1-minute supervisory cost and 12,000x supervisory factor are correct: the entire session was initiated with a single short directive referencing the existing spec.

The mid-range tasks (25x-65x) are mostly design system work and protocol migrations. The desktop client design system migration (65.5x, 55 AI minutes, 42 screens) is a good example of AI-suited work: the transformation pattern is identical for every screen -- swap icon library, replace CSS Module classes with Tailwind utilities, swap banner component with design system primitive -- and the only variable is which screen-specific props and layout need adjusting. The desktop client parity cleanup task (33.3x) handled TypeScript strictness issues and routing work that required reading existing code carefully before modifying it, which explains the smaller human estimate. The three SSE-to-WebSocket migrations (30x-36x) are textbook refactors: the transport changes but the data model and business logic do not, so the scope is bounded and predictable.
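The reason the SSE-to-WebSocket migrations are so bounded is that only the wire framing changes: SSE wraps each event in a `data: ...` text block delimited by a blank line, while a WebSocket send is already a discrete frame, so the JSON payload itself is untouched. A small sketch of the two framings (the event shape is illustrative, not from any of these tools):

```python
import json

def sse_frame(event: dict) -> str:
    # Server-Sent Events: one-way text stream; each event is a "data:" line
    # terminated by a blank line.
    return f"data: {json.dumps(event)}\n\n"

def ws_message(event: dict) -> str:
    # WebSocket: bidirectional; each send is its own frame, so the JSON
    # payload needs no extra delimiter protocol.
    return json.dumps(event)

def parse_sse(stream: str) -> list[dict]:
    # Client side of SSE: split on blank lines, strip the "data: " prefix.
    return [json.loads(chunk[len("data: "):])
            for chunk in stream.strip().split("\n\n") if chunk]

event = {"type": "list_updated", "id": 42}
assert parse_sse(sse_frame(event)) == [event]
assert json.loads(ws_message(event)) == event
```

Everything downstream of parsing -- the data model, reducers, UI updates -- consumes the same dict either way, which is why the migration scope is predictable.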

The lowest-factor tasks fall into two categories. First, iterative ML calibration: three tasks (4.7x, 6.7x, 8x) worked on a synthetic student simulator's predictor accuracy. This work is empirical. Each iteration requires running a simulation, reading numeric output, forming a hypothesis about what to change (prior weights, practice rotation, exam retrieval alignment), changing it, and running again. The AI minutes accumulate because the work is fundamentally experimental rather than compositional. You cannot predict how many iterations it takes to reduce a calibration gap from 31 percentage points to 8. The synthetic student calibration task at 4.7x was 180 AI minutes against a 14-hour human estimate: the leverage is still real (a human would need days of iteration), but it is constrained by the nature of empirical debugging. Second, deployment and build debugging (9.6x, 8.8x): these tasks ran long because they were chasing pre-existing infrastructure issues -- a buildspec that published empty packages, a Vite bundler breaking on a specific code editor library with the Electron target, a CORS format bug in an existing backend. The root cause was unknown at the start of each session. Exploratory debugging does not compress as well as known-scope construction.
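The calibration work is a measure-adjust-rerun loop, and its cost is the unknown iteration count. A toy sketch of that loop's shape -- the simulator, its parameters, and the gap model here are entirely invented for illustration, not taken from the actual tool:

```python
def run_simulation(prior_weight: float) -> tuple[float, float]:
    # Stand-in for the real student simulator: returns (predicted, actual) pass rates.
    # Toy model: the calibration gap shrinks linearly as the prior tightens.
    predicted = 0.90
    actual = 0.90 - 0.31 * (1.0 - prior_weight)
    return predicted, actual

def calibrate(target_gap_pp: float = 10.0, step: float = 0.1) -> tuple[float, int]:
    prior_weight, iterations = 0.0, 0
    while True:
        iterations += 1
        predicted, actual = run_simulation(prior_weight)
        gap_pp = abs(predicted - actual) * 100  # gap in percentage points
        if gap_pp <= target_gap_pp:
            return gap_pp, iterations
        prior_weight = min(1.0, prior_weight + step)  # adjust one knob, rerun

gap, iters = calibrate()
```

In this toy the gap closes monotonically, so the loop terminates quickly; in the real work each "run_simulation" is a full simulation whose response to a knob change is unknown in advance, which is exactly why the AI minutes accumulate.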

The supervisory leverage of 865.7x is the highest of any day on record, and the primary driver is the infrastructure provisioner build. Nineteen of the 47 tasks had a 1-minute supervisory cost. Many of those were MCP coverage sweep tasks -- add 3-4 missing tools to reach 100% endpoint coverage in a given fleet tool -- where the entire brief was a single sentence. The aggregate 148 supervisory minutes for 2,135.5 human-hours of output represents a ratio of roughly 14.4 human-hours delivered per supervisory minute spent. At the other end, the 8-minute prompts (infrastructure tool design, desktop client migration, admin backend standup) reflect tasks where the scope needed explicit enumeration before the work could begin without ambiguity.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.