
Leverage Record: April 21, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Nineteen tasks. April 21 was a UI polish and infrastructure remediation day. The majority of AI time went into iterative web client work: retiring the side navigation, wiring analytics charts to real activity data, fixing a silent grading bug where engine-backed answers always scored wrong, repairing a NaN propagation in the readiness gauge, and sweeping 400+ accessibility fixes across 17 sites covering 2,400+ pages. The day also included infrastructure housekeeping: stopping 11 zombie containers, pruning 4.5GB of unused images, renaming cloud resources, and retagging approximately 234 AWS resources across 4 project buckets. The weighted average leverage factor was 18.1x with a supervisory leverage of 129.2x, representing 158.25 human-equivalent hours.

The 18.1x weighted average is notably lower than April 20 (31.3x / 271.0x). The gap has a clear structural explanation. April 20 was anchored by two high-leverage tasks -- a cloud infrastructure tool's final build phase (165.0x) and its initial commit and integration (100.0x) -- that compressed enormous greenfield output into 52 minutes. April 21 had no equivalent spike. The highest single factor was 27.3x on the question-bank grading rewrite, and the next two tasks tied at 24.0x. The bulk of the day consisted of 11 web client tasks in the 12x to 18x range, each targeting well-defined but incremental UI changes. The template hygiene sweep (21.8x, 165 minutes) is the only large-block task and pulls the weighted average toward the mid-range by sheer AI-time volume. The supervisory leverage drop to 129.2x is partly because the UI sessions required more iterative specification -- each session addressed a list of specific visual and behavioral changes rather than a single high-level directive.
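The record doesn't spell out the formula, but the stated aggregates are consistent with weighting each task's leverage by its AI time -- that is, dividing total human-equivalent minutes by total Claude minutes, and likewise for supervisory minutes. A minimal sketch using the day's published totals from the aggregate statistics later in this record:

```python
# Weighted leverage as implied by the published aggregates.
# Figures are taken directly from this record's Aggregate Statistics.
human_equiv_hours = 158.25
claude_minutes = 524
supervisory_minutes = 73.5

# Weighting each task's factor by its AI time reduces to a single division.
weighted_leverage = human_equiv_hours * 60 / claude_minutes
supervisory_leverage = human_equiv_hours * 60 / supervisory_minutes

print(round(weighted_leverage, 1))     # 18.1
print(round(supervisory_leverage, 1))  # 129.2
```

Both rounded values match the published 18.1x and 129.2x, which is why a single high-AI-time task like the 165-minute template sweep pulls the average so strongly.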

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is primarily done with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

Figures in brackets: human estimate / Claude time / supervisory time / leverage factor / supervisory leverage factor.

1. Learning platform question-bank session rewrite: submit each answer to the engine grader, render per-option inline feedback (selected explanation and full explanation), lock options after grading, color navigator tiles by correctness, and update proficiency per question via the engine response; fixes the silent local-only grading bug where every answer would have scored wrong once engine-backed. [5h / 11m / 2m / 27.3x / 150.0x]

2. Analytics ledger and chart wiring: connect daily activity rings and the this-week chart to real study activity via a new weekly study ledger; session summary and question-bank answer paths now record actual minutes; the engine-generated study plan gains collapsible week sections with a week-1-open default, activity count, total minutes, and a completion pill. [4h / 10m / 3m / 24.0x / 80.0x]

3. Leverage record CSV reconciliation with the cloud database: deduplicated 2 true duplicates, confirmed the local CSV is a strict subset, restructured the CSV into a full cloud backup plus an empty upload queue; generated and published 4 backfilled leverage blog posts (Feb 22 and Apr 18/19/20; 106 records sanitized across 3 parallel writing agents); deployed to staging and production. [18h / 45m / 2m / 24.0x / 540.0x]

4. Dashboard greeting punctuation sweep (comma after the greeting word, terminal punctuation on every variant, updated regression tests); relabeled the course list header; extended the engine readiness endpoint with per-domain panel data (overall proficiency, pass probability via Poisson binomial, weak areas, estimated hours remaining, velocity) when a domain filter is supplied, keeping the backward-compatible thin shape otherwise; 3 regression tests added. [7h / 18m / 5m / 23.3x / 84.0x]

5. Website template hygiene sweep across 17 sites: purged 20 macOS metadata files and 5 build caches (146MB total); established global and site-level ignore rules; consolidated 25 per-page duplicate templates into 3 shared frontmatter-driven templates (industries -5, applications -12, features -8); fixed a template variable access bug in the existing industry template; WCAG 2.1 AA audit and fixes across all 214 remaining templates via 4 parallel sub-agents (400+ fixes: decorative SVG aria-hidden, icon-only button labels, form label pairing, breadcrumb aria-current, landmark aria-label, dialog roles, skip links, button type attributes); rewrote the static site generator CLAUDE.md accessibility section as a mandatory WCAG 2.1 AA checklist and template variable reference; all 16 sites built clean with 0 errors across 2,400+ pages. [60h / 165m / 8m / 21.8x / 450.0x]

6. Fixed readiness gauge NaN propagation and a hardcoded pass probability: the root cause was a client type expecting rich engine fields that the engine does not return, so undefined values flowed through Monte Carlo as NaN; hardened the readiness hook to detect thin engine responses and fall back to a predictor seeded by entry proficiency; added safe-value guards throughout the probability module; replaced the hardcoded 35% pass probability with a seeded Monte Carlo value; added an assistant feature flag defaulting to off. [5h / 14m / 4m / 21.4x / 75.0x]

7. Web client UI restructure: retired the side navigation and mobile tab bar; set the button primitive default to rounded rectangle; swept 58 pill-style buttons across 26 files; added a recruiter certifications dashboard section; prepended a back-to-dashboard link on 13 off-dashboard pages; merged activity tabs (study modes removed, labs and assistant tiles added); added a sign-out icon to the top navigation; fixed the recommendations zero-state filter. [6h / 20m / 4m / 18.0x / 90.0x]

8. Design system and web client polish: hid the confirm dialog close button and matched the primary button style; branded date picker for autopilot setup; study plan day separators (today, tomorrow, day N) and an info modal; feature-flagged the recruiter section off by default; dropped the assistant from activities; colored lab difficulty pill with a longer description and spacing; web audio chime and a branded update-available modal replacing the native browser confirm dialog. [5h / 17m / 7m / 17.6x / 42.9x]

9. Web client minor UI fixes: course card vertical centering; moved the exam date CTA to the course dashboard; added info buttons and modals to 4 behavioral analytics cards; added a footer link matching the main marketing site style. [2.5h / 9m / 6m / 16.7x / 25.0x]

10. Web client: restored the proficiency ring on no-plan course cards; dropped the main dashboard autopilot CTA banner; merged the autopilot tab into a renamed overview tab; moved daily minutes and reminder settings to a global schedule store; removed at-a-glance and per-course trademark cards; added a trademarks page linked from the help legal section. [4h / 15m / 4m / 16.0x / 60.0x]

11. Web client: fixed a test mode modal prop type crash; sign-out confirm dialog with a real post-logout redirect to the marketing site; exam info modal on the course header; native date picker trigger with dark mode indicator CSS; removed labs from activities; renamed the multiple choice activity and dropped stale counts; improved activity descriptions; added per-activity icons to the study plan matching the activities tab; the recruiter dashboard section now shows any enrolled course. [4h / 15m / 6m / 16.0x / 40.0x]

12. April 21 web client consolidated push: dashboard analytics parity and info modals on analytics cards; retired side nav and rounded-rect buttons; recruiter section on the dashboard behind a feature flag; test mode, sign-out confirm, and real logout; exam info; activities polish; study plan day separators, icons, and week collapse; branded date picker; lab polish; brand rename sweep; greeting punctuation; readiness NaN fix; per-course calibration trigger; stale-tab auto-recover; bug reporter restricted to desktop; weakest-area text wrapping; daily rings and weekly chart wiring; engine-graded questions with inline explanations; labs promotion; simplified onboarding. [20h / 75m / 3m / 16.0x / 400.0x]

13. Fixed a billing service CORS error on the admin page: JWT verification crashed because the local public key file was absent; switched to the JWKS HTTP endpoint for key resolution. [3h / 12m / 3m / 15.0x / 60.0x]

14. Team-messaging tool first-login sidebar race fix: await the user profile fetch before listing workspaces. [1h / 5m / 2m / 12.0x / 30.0x]

15. Web client branding sweep: hid the info modal close button; beefed up analytics empty states; added a this-week info button; bar chart accent hover fade; course header cleanup (dropped the exam code, used the long-form date); study plan tab split; trademark holder links; fixed pre-dev and pre-build scripts for lab manifest generation. [5h / 25m / 8m / 12.0x / 37.5x]

16. Identified and stopped 11 zombie containers on the production services host, verified via load balancer target group membership and deployment buildspec targets; 46% RAM reduction and 98% idle CPU freed. [1.5h / 10m / 1m / 9.0x / 90.0x]

17. Removed 11 zombie containers and pruned 13 unused images (4.5GB reclaimed); corrected instance name drift in the project reference documentation. [0.75h / 5m / 0.5m / 9.0x / 90.0x]

18. Renamed a cloud compute instance, security group, target group, and log group with a zero-downtime load balancer cutover; retagged approximately 234 AWS resources into 4 project buckets; updated infrastructure configuration across 22 service directories; documented 5 deferred items. [6h / 45m / 4m / 8.0x / 90.0x]

19. Team-messaging tool CORS fix: the SSM parameter was stored as a JSON array while the backend splits on commas; converted to comma-separated and redeployed. [0.5h / 8m / 1m / 3.8x / 30.0x]
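The CORS fix in task 19 is worth a concrete illustration, because the failure mode is so common with parameter stores: the backend splits the allowed-origins value on commas, so a value stored as a JSON array yields mangled origins. A minimal sketch of the mismatch and the applied fix -- the origin URLs and function names are invented for illustration, not the actual service's code:

```python
import json

def parse_origins(raw: str) -> list[str]:
    # The backend's behavior as described in task 19: a naive comma split.
    return [o.strip() for o in raw.split(",") if o.strip()]

# Stored as a JSON array, the split produces garbage origins
# (note the stray brackets and quotes):
bad = parse_origins('["https://app.example.com", "https://admin.example.com"]')
# → ['["https://app.example.com"', '"https://admin.example.com"]']

# The fix: store the SSM parameter as a comma-separated string instead.
fixed_value = ",".join(
    json.loads('["https://app.example.com", "https://admin.example.com"]')
)
good = parse_origins(fixed_value)
# → ['https://app.example.com', 'https://admin.example.com']
```

The bad origins never match a browser's Origin header, so every preflight fails -- which is why the symptom surfaces as a CORS error rather than a configuration error.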

Aggregate Statistics

Total tasks: 19
Total human-equivalent hours: 158.25
Total Claude minutes: 524
Total supervisory minutes: 73.5
Total tokens: 2,064,000
Weighted average leverage factor: 18.1x
Weighted average supervisory leverage factor: 129.2x

Analysis

The question-bank grading rewrite (task 1, 27.3x) is the day's sharpest example of the AI catching a systemic bug and fixing it completely. The root cause: every answer the user submitted was being evaluated locally rather than by the engine grader, meaning the correctness signal was always wrong once the engine was live. The fix required rewriting the session component to submit each answer to the engine, wire the per-option explanation response back to the UI, lock options post-submission, and update proficiency state from the graded response. That is a multi-component change touching network calls, state management, and UI rendering -- done in 11 minutes.

The readiness gauge NaN fix (task 6, 21.4x) is a related class of bug: a type contract mismatch between what the client expected from the engine and what the engine actually returned. When the rich fields were absent, the client performed arithmetic on undefined values, which propagated through Monte Carlo sampling as NaN and produced a 0% display. The fix required detecting the thin response shape, falling back to a seeded estimator, and adding guards throughout the probability module. The hardcoded 35% pass probability was a separate but adjacent issue resolved in the same session. Both bugs would have been difficult to diagnose in a traditional debugger because the failures were silent until the UI rendered.
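NaN propagation of this kind is silent by design: every arithmetic operation involving NaN yields NaN, and nothing throws until the value reaches the UI. A minimal sketch of the failure mode and the kind of safe-value guard the fix describes -- the function names are illustrative, not the actual client code:

```python
import math

def safe_value(x, fallback: float) -> float:
    # Guard of the kind added throughout the probability module:
    # reject None and NaN, substituting a seeded fallback value.
    if x is None or (isinstance(x, float) and math.isnan(x)):
        return fallback
    return float(x)

def pass_probability(engine_proficiency, entry_proficiency: float) -> float:
    # A missing engine field arrives as None; without the guard, the
    # arithmetic below would silently produce NaN and render as 0%.
    p = safe_value(engine_proficiency, fallback=entry_proficiency)
    # Stand-in for the Monte Carlo step; any arithmetic propagates NaN
    # identically, which is why the guard must sit before it.
    return min(1.0, max(0.0, p * 1.5))

print(pass_probability(None, 0.5))  # 0.75 -- falls back instead of NaN
```

Note that the guard handles both absence (None) and contamination (a NaN that already leaked in upstream); checking only one of the two leaves the bug half-fixed.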

The template hygiene sweep (task 5, 21.8x, 165 minutes) is the day's largest single block by AI time and the clearest demonstration of where parallelism earns its cost. Running 4 parallel sub-agents across 214 templates to apply 400+ accessibility fixes -- decorative SVG aria-hidden attributes, icon-only button labels, form label pairing, breadcrumb aria-current, landmark labels, dialog roles, skip links -- is work that scales linearly with template count for a human. A human accessibility reviewer spending 15 minutes per template across 214 templates would need 53 hours before writing a fix. At 165 minutes total AI time and 8 supervisory minutes, the cost is one focused session and a coffee.
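The per-template fixes are mechanical enough to express as source transforms, which is exactly what makes them parallelizable. A minimal sketch of two of the listed fixes (decorative SVG aria-hidden and icon-only button labels) as regex passes over template source -- purely illustrative; the actual sweep was performed by 4 sub-agents editing templates directly, and a blanket button-labeling pass like this would need per-site review:

```python
import re

def hide_decorative_svgs(html: str) -> str:
    # Add aria-hidden="true" to <svg> tags that don't already declare it,
    # so screen readers skip decorative icons.
    return re.sub(r'<svg(?![^>]*aria-hidden)', '<svg aria-hidden="true"', html)

def label_icon_only_buttons(html: str, label: str) -> str:
    # Add an aria-label to <button> tags lacking one (label chosen per site).
    return re.sub(r'<button(?![^>]*aria-label)', f'<button aria-label="{label}"', html)

src = '<button><svg viewBox="0 0 24 24"></svg></button>'
out = label_icon_only_buttons(hide_decorative_svgs(src), "Close")
# → '<button aria-label="Close"><svg aria-hidden="true" viewBox="0 0 24 24"></svg></button>'
```

The negative lookaheads make both passes idempotent: re-running the sweep over an already-fixed template changes nothing, which matters when 214 templates are being processed by independent agents.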

The consolidated push record (task 12, 16.0x) is the day's highest AI-time task at 75 minutes, but it is also an accounting artifact: it captures the full session work as a single record alongside the individual granular records that describe the same changes. Taken at face value, 20 human-equivalent hours of web client work committed and pushed in a 75-minute AI session is consistent with the granular records that surround it, which collectively total 116 AI minutes for approximately 52 human-equivalent hours of web client work.

The infrastructure retagging task (task 18, 8.0x, 45 minutes) sits at the bottom of the non-housekeeping range for a predictable reason: retagging 234 AWS resources across 4 project buckets requires reading existing resource configurations, cross-referencing a tagging scheme, executing API calls one resource at a time, and updating Terraform state for 22 service directories. The work is highly mechanical but also highly sequential -- each resource must be identified, tagged correctly, and verified before moving to the next. The AI compresses the research and execution phases but cannot fully parallelize the AWS API calls. The team-messaging tool CORS fix (task 19, 3.8x) is the only sub-4x task in the log: the problem was a configuration format mismatch (JSON array versus comma-separated string) that took 8 minutes to diagnose and redeploy. That is the diagnostic overhead floor for a production config issue regardless of AI assistance.
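The sequential shape of task 18 can be sketched: each resource is classified into a project bucket, tagged, and verified before moving on, and the only safely parallelizable part is building the plan. A minimal sketch of the classification step -- bucket names and prefixes are invented for illustration; the real sweep worked against the AWS APIs and Terraform state:

```python
# Hypothetical mapping from resource-name prefix to project tag bucket.
BUCKETS = {
    "lp-": "learning-platform",
    "msg-": "team-messaging",
    "web-": "marketing-sites",
}
DEFAULT_BUCKET = "shared-infra"

def bucket_for(resource_name: str) -> str:
    for prefix, bucket in BUCKETS.items():
        if resource_name.startswith(prefix):
            return bucket
    return DEFAULT_BUCKET

def tag_plan(resource_names: list[str]) -> dict[str, list[str]]:
    # Dry-run plan: group resources by target Project tag before any
    # API call, so the sequential tag-and-verify loop has a fixed worklist.
    plan: dict[str, list[str]] = {}
    for name in resource_names:
        plan.setdefault(bucket_for(name), []).append(name)
    return plan

print(tag_plan(["lp-api", "msg-gateway", "billing-db"]))
# {'learning-platform': ['lp-api'], 'team-messaging': ['msg-gateway'], 'shared-infra': ['billing-db']}
```

Separating the plan from the execution is what lets a reviewer sanity-check all 234 assignments in one pass while the tagging itself stays one-resource-at-a-time.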

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.