
Leverage Record: April 17, 2026


About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Eleven tasks. April 17 had one dominant theme: a shared design system. The day started with scaffolding a unified component library, proceeded through migrating individual tools onto it in phases, and ended with all 16 fleet tools running on the shared token set. Running alongside that thread were two non-design-system tasks: extracting four independent test-prep marketing sites with full AWS infrastructure, and an innovation sweep across 52 repositories. The weighted average leverage factor was 19.9x with a supervisory leverage of 549.5x, representing 348 human-equivalent hours.

The 19.9x weighted average is the lowest of this three-day stretch, and the reason is the fleet-wide migration task (600 minutes, 12x). That single task consumed 57% of the day's AI time and pulled the weighted average down substantially. The supervisory leverage at 549.5x is the highest of the three days because the human prompt time per task was minimal even when Claude time was high.
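The arithmetic behind both aggregates is simple: weighted average leverage is total human-equivalent hours over total Claude hours, and supervisory leverage is total human-equivalent minutes over total prompt minutes. A minimal sketch using the day's totals (the `Task` shape and field names are mine, not a published schema):

```typescript
// Illustrative leverage math; field names are invented for this sketch.
type Task = { humanHours: number; claudeMinutes: number; supervisoryMinutes: number };

// Weighted average leverage = total human-equivalent hours / total Claude hours.
function weightedLeverage(tasks: Task[]): number {
  const humanHours = tasks.reduce((s, t) => s + t.humanHours, 0);
  const claudeHours = tasks.reduce((s, t) => s + t.claudeMinutes, 0) / 60;
  return humanHours / claudeHours;
}

// Supervisory leverage = total human-equivalent minutes / total supervisory minutes.
function supervisoryLeverage(tasks: Task[]): number {
  const humanMinutes = tasks.reduce((s, t) => s + t.humanHours, 0) * 60;
  const supMinutes = tasks.reduce((s, t) => s + t.supervisoryMinutes, 0);
  return humanMinutes / supMinutes;
}

// The day's totals from the log: 348 human-equivalent hours,
// 1,050 Claude minutes, 38 supervisory minutes.
const day: Task[] = [{ humanHours: 348, claudeMinutes: 1050, supervisoryMinutes: 38 }];
```

Running the numbers reproduces the headline figures: 348 / 17.5 Claude hours ≈ 19.9x, and 20,880 / 38 supervisory minutes ≈ 549.5x.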

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor |
|---|------|------------|--------|------|--------|-------------|
| 1 | Innovation sweep across 52 repos with parallel exploration agents, per-repo proposals and master index; plus macOS command-center design vision integrating all tools with LLM orchestration | 48h | 35m | 2m | 82.3x | 1440.0x |
| 2 | Design system checkpoint: publish private package to artifact registry, unify animation versions, remove third-party toast library, rebuild kanban UI (board/column/card) with Tailwind, 4 CSS module files deleted | 11h | 14m | 4m | 47.1x | 165.0x |
| 3 | Scaffold 2 new tool repos with full design docs: requirements, design, implementation plan, and testing strategy per repo; updated root repo map and port table | 32h | 45m | 4m | 42.7x | 480.0x |
| 4 | Extract 4 independent test-prep marketing sites from monolith with shared-content overlay; full AWS infra per site: S3, ACM SAN cert, CloudFront, Route53, CodeBuild, CodePipeline; end-to-end deploy | 60h | 95m | 4m | 37.9x | 900.0x |
| 5 | Issue tracker Phase 2 finish: tailwind-ify all 26 CSS Modules; rewrote 17 component files and 9 page files; CSS bundle reduced from 75 KB to 34 KB | 18h | 34m | 1m | 31.8x | 1080.0x |
| 6 | Phase 3 list tool migration: tailwind-ify all 23 CSS Modules, swap header to library TopNav, provision CodeBuild and CodePipeline, publish design system 0.1.3 with configurable theme storage key; all unit and backend tests pass | 22h | 68m | 3m | 19.4x | 440.0x |
| 7 | Migrate web client to consume design system via shims covering all 411 imports; add issue tracker to monorepo workspace; wire Tailwind and shared tokens with dual dark-mode bridge | 16h | 65m | 3m | 14.8x | 320.0x |
| 8 | Design system scaffold: tokens.css with brand accent variable, Tailwind preset, generic shell and navigation components, 30 components ported, type-checks clean | 12h | 50m | 6m | 14.4x | 120.0x |
| 9 | Rewrite canonical frontend standards to match current conventions (Tailwind, design system, dual dark-mode, brand accent per tool); swap issue tracker layout header to library TopNav with brand icon | 5h | 22m | 2m | 13.6x | 150.0x |
| 10 | Publish design system 0.1.0 and 0.1.1 to artifact registry; unblock CI; fix peer dependency conflict by widening version range; republish 0.1.1; verified live CSS bundle contains unique animation markers | 4h | 22m | 1m | 10.9x | 240.0x |
| 11 | Migrate all 16 fleet tools onto shared design system: Tailwind, tokens.css, unified theme provider, publish library 0.1.4, fix React context and theme toggle bugs, push all 16 repos | 120h | 600m | 8m | 12.0x | 900.0x |
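Task 8's scaffold pairs tokens.css with a Tailwind preset so that utility classes resolve to shared CSS variables each tool can override. A minimal sketch of that pattern, with invented token and key names (the actual library's names aren't published here):

```typescript
// Hypothetical slice of a shared Tailwind preset. Utilities resolve to CSS custom
// properties defined in tokens.css, so each tool can set its own --brand-accent
// without forking the preset.
const designSystemPreset = {
  theme: {
    extend: {
      colors: {
        // tokens.css would define e.g. :root { --brand-accent: #4f46e5; }
        accent: "var(--brand-accent)",
        surface: "var(--surface)",
      },
    },
  },
  // Class-based dark mode supports the dual dark-mode bridge mentioned above.
  darkMode: "class" as const,
};
```

A consuming tool would then list the shared preset under `presets` in its own tailwind.config rather than redefining the tokens.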

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 11 |
| Total human-equivalent hours | 348.0 |
| Total Claude minutes | 1,050 |
| Total supervisory minutes | 38 |
| Total tokens | 6,610,000 |
| Weighted average leverage factor | 19.9x |
| Weighted average supervisory leverage factor | 549.5x |

Analysis

The fleet-wide design system migration (12x, 600 minutes) defines the day's character. Migrating 16 separate tools onto a shared component library requires touching each tool's dependency configuration, wiring the theme provider, replacing local component imports with library equivalents, and verifying the build and dark-mode behavior in each. The work is highly repetitive once the pattern is established for the first tool, but each repo has its own quirks: peer dependency conflicts, module resolution paths, existing CSS that conflicts with Tailwind utilities. At 120 human-equivalent hours, the estimate reflects the realistic cost of a human engineer doing this across 16 codebases sequentially. The 12x factor is lower than the day's other tasks, but 600 minutes of AI time for what would otherwise be several weeks of migration work is still the correct trade.
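The per-repo routine described above can be sketched as a checklist generator; the repo slugs, package name, and step wording are illustrative, not the actual fleet's:

```typescript
// Hypothetical shared library name; the real package name isn't published here.
const LIBRARY = "@fleet/design-system";

// The same ordered steps repeat for every tool; only per-repo quirks vary.
function migrationSteps(repo: string, version: string): string[] {
  return [
    `${repo}: add ${LIBRARY}@${version} to dependencies`,
    `${repo}: extend tailwind.config with the shared preset`,
    `${repo}: wrap the app root in the unified theme provider`,
    `${repo}: replace local component imports with ${LIBRARY} equivalents`,
    `${repo}: build, run tests, and verify dark-mode behavior`,
  ];
}

// Two invented repo slugs standing in for the 16 fleet tools.
const plan = ["issue-tracker", "list-tool"].flatMap((r) => migrationSteps(r, "0.1.4"));
```

Sixteen repos times five steps is the shape of the 600 minutes: a fixed pattern applied repeatedly, with the variance coming from each repo's dependency and CSS quirks.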

The innovation sweep (82.3x) ran parallel exploration agents across 52 repositories, producing per-repo innovation proposals, a master index, and a design vision for a macOS command-center overlay. The 35-minute runtime and 2-minute supervisory cost make the supervisory factor 1,440x, the highest single-task supervisory leverage of the day. This is the pattern where AI leverage compounds: the human writes a short directive, and the AI fans out across an entire codebase landscape to do the exploratory work that would otherwise require a week of architectural review sessions.
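The fan-out itself is a familiar concurrency shape. A toy sketch, with a stub standing in for the agent invocation (nothing here reflects the actual agent tooling):

```typescript
// Stand-in for launching an exploration agent against one repo; in reality this
// would be an agent invocation, not a pure function.
async function exploreRepo(repo: string): Promise<{ repo: string; proposal: string }> {
  return { repo, proposal: `innovation proposal for ${repo}` };
}

// One directive fans out over every repo concurrently; results are collated
// into per-repo proposals plus a master index.
async function sweep(repos: string[]) {
  const proposals = await Promise.all(repos.map(exploreRepo));
  const index = proposals.map((p) => p.repo);
  return { proposals, index };
}
```

The supervisory cost is constant regardless of repo count, which is why this task posts the day's best supervisory factor.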

The test-prep site extraction (37.9x) is notable for the infrastructure scope. Extracting four sites from a monolith while provisioning independent AWS stacks per site (S3, ACM, CloudFront, Route53, CodeBuild, CodePipeline) is the kind of work that normally requires a dedicated infrastructure sprint. The 95-minute runtime included the full deploy and verification cycle.

The CSS module migrations (31.8x for the issue tracker, 19.4x for the list tool) tell a consistent story about refactoring leverage. Large CSS-to-Tailwind migrations are well-suited to AI because the transformation is mechanical: each CSS class maps to one or more Tailwind utilities, the output is deterministic, and the acceptance gate is a clean build plus visual equivalence. The issue tracker's CSS bundle dropped from 75 KB to 34 KB as a side effect of removing unused utility definitions that had accumulated in module files.
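The mechanical nature of the transformation is easy to see in miniature; the class name and mapping table below are invented for illustration:

```typescript
// Before: a CSS Module class like
//   .card { display: flex; flex-direction: column; gap: 8px; padding: 16px; }
// After: the equivalent Tailwind utility string applied to the element itself.
const cardClasses = "flex flex-col gap-2 p-4";

// A tiny declaration-to-utility table of the kind the migration applies repeatedly.
const cssToTailwind: Record<string, string> = {
  "display: flex": "flex",
  "flex-direction: column": "flex-col",
  "gap: 8px": "gap-2",    // 8px = spacing step 2 on Tailwind's default 4px scale
  "padding: 16px": "p-4", // 16px = spacing step 4
};

function translate(declarations: string[]): string {
  return declarations.map((d) => cssToTailwind[d]).join(" ");
}
```

Because each declaration maps deterministically, review reduces to the acceptance gate the paragraph describes: a clean build plus visual equivalence.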

The supervisory leverage of 549.5x reflects a day where most tasks were initiated with short, specific prompts. Nine of eleven tasks had supervisory times of 4 minutes or less. The 8-minute prompt for the fleet migration is the outlier and is justified: specifying which 16 repos to touch, what version of the library to target, and how to handle the theme toggle bugs required a more detailed brief than a typical task.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.