$ cat README.md

A system for professional developers who tried AI coding and were underwhelmed.

You'll get a production-ready harness — CLAUDE.md, scoped rules, compile hooks, PR review templates — that stays in your repo and keeps working long after the course is done.

evidence.log
275 sessions
2,305 commits
469 MB of transcripts
87% success rate
Get the Harness Kit — $97 → one-time payment · lifetime access · early access — modules ship weekly
// the problem

You tried it. It didn't work.

You gave Claude a real task. It generated something that looked right — until it didn't compile, added dependencies you didn't ask for, refactored code you told it not to touch, or produced a diff so large your reviewer sent it back.

You spent more time fixing Claude's output than you would have spent writing it yourself.

edx-workforce-survey-2025.log
# edX workforce survey, May 2025 (n=1,002)
Likely to pursue AI training: 64%
Currently doing it: 4%
The gap: 60 points
# Not because AI can't code.
# Because most devs try it without guardrails and quit.
// the insight

The skill isn't prompting. It's supervision.

Over 275 Claude Code sessions, the data reveals a pattern. The developers who succeed don't write better prompts — they build better rails.

session-analysis.log
Tool rejections (active steering): 220
Messages with constraints: 21%
Messages showing context: 29%
Success rate: 87%
Positive feedback sent: 84
Direct corrections sent: 8

The roughly 10:1 ratio of positive-to-corrective feedback isn't politeness. It's a pattern: steer early with constraints, supervise the output, and course-correct before problems accumulate.

// the system

Eight modules. Eight artifacts in your repo.

THE HARNESS KIT
Memory Stack
Constraint Envelope
Context Before Instructions
Persona Selector
Tests & Hooks
Ship Checklist
Supervision > Prompting

Each module teaches a pattern and leaves a working artifact in your codebase.

// before & after

Same developer. Same AI. Different system.

✗ WITHOUT HARNESS
you: "Add validation to the order service."
Claude adds 3 new dependencies, creates a custom framework, modifies controller + service + repo layers, changes 14 files
you: "...that's not what I meant."
repeat 3×, give up, write it yourself
✓ WITH HARNESS
you: "Add input validation to
  OrderService.createOrder().
  Follow existing pattern in UserService.
  Do not add new dependencies.
  Only modify the service layer.
  Ensure all existing tests pass."
2 files changed, clean diff. Ship it.
Four constraint types. 30 seconds to write. 30 minutes of rework saved.
// what stays in your repo

Files, not advice.

your-project/
├── CLAUDE.md                  ← Project conventions
├── .claude/
│   ├── rules/
│   │   ├── testing.md         ← Test conventions
│   │   ├── security.md        ← Security requirements
│   │   └── spring-data-jpa.md ← JPA guardrails
│   └── settings.json          ← Hooks + permissions
├── .git/hooks/
│   └── pre-push               ← Local build gate
├── PLAN.md                    ← Spec template
└── SHIP_CHECKLIST.md          ← PR readiness gate
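What goes inside these files depends on your project. As a hedged illustration only (the conventions below are invented examples, not the course's actual kit), the top of a CLAUDE.md for a Spring Boot codebase might read:

```markdown
# Project conventions

- Java 17, Spring Boot 3. Run `./mvnw -q compile` before committing.
- Follow the existing service-layer pattern. Do not add new
  dependencies without asking first.
- Every change needs tests. Run `./mvnw test` and keep the suite green.
- Keep diffs small: one concern per commit, no drive-by refactors.
```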

These files keep working after the course is done. Every Claude Code session reads CLAUDE.md. Every edit triggers the compile hook. Every PR goes through the checklist. The harness is self-reinforcing.
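The compile hook described above might be wired up in .claude/settings.json along these lines. This is a sketch based on Claude Code's hook configuration; the matcher and the Maven command are assumptions, so verify the exact field names against the current docs:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./mvnw -q compile" }
        ]
      }
    ]
  }
}
```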

// your mission

Ship something you've been avoiding.

Every codebase has one. That ticket that's been sitting in the backlog for months because it's too big, too tangled, or touches too many things. The kind of change you wouldn't trust AI with — not without guardrails.

That's your mission. By the end of this course, you'll have taken that change from backlog to main — reviewed, tested, and merged. Not on a demo project. On your actual codebase, with your actual team's standards.

the mission
Pick the scariest ticket in your backlog. Ship it with the harness. Merge to main before you finish the course.
The modules build toward this. Module 0 gets you a quick win. Modules 1–5 give you the rails. Module 6 is where you put it all together and ship. Module 7 pushes further: an agent that investigates production bugs on its own.
// the modules

~5 hours. Your codebase. Real artifacts.

MODULE 0 · available now · 20 min
The First Win
Open your terminal. Find the oldest TODO in your codebase. Fix it with Claude Code. Commit. No theory — just a result.
→ one commit in your git log
MODULE 1 · available now · 45 min
The Memory Stack
Set up the three-layer memory system: CLAUDE.md (project conventions), auto memory (silent learner), scoped rules (path-specific instructions).
→ CLAUDE.md + .claude/rules/*.md
MODULE 2 · shipping soon · 45 min
The Constraint Envelope
Learn the four constraint types. Practice the Constraint Envelope. Use the Steering Wheel — tool rejections as active course correction.
→ constraint patterns in your workflow
MODULE 3 · shipping soon · 45 min
Context as a Skill
The bug report formula. The Verification Loop. Plan mode vs. direct mode. Context hygiene for long sessions.
→ PLAN.md template
MODULE 4 · shipping soon · 30 min
The Persona Selection Effect
Why your tone determines AI output quality. Backed by Anthropic's research. The Spine Test — using /spar to prevent Claude from capitulating.
→ persona selection checklist
MODULE 5 · shipping soon · 45 min
Tests & Hooks
TDD with Claude. Auto-compile hooks. Pre-push gates. Custom slash commands. GitHub Actions PR review. Permission playbook.
→ settings.json + pre-push hook
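Module 5's pre-push gate can be sketched in a few lines of shell. BUILD_CMD is a placeholder that defaults to a no-op so the sketch runs anywhere; point it at your real build command:

```shell
#!/bin/sh
# .git/hooks/pre-push (sketch): refuse the push when the build is broken.
# BUILD_CMD is a placeholder; set it to your real build command,
# e.g. BUILD_CMD="./mvnw -q compile". It defaults to a no-op here.
BUILD_CMD="${BUILD_CMD:-true}"

if $BUILD_CMD; then
  echo "pre-push: build passed"
else
  echo "pre-push: build failed, push aborted" >&2
  exit 1
fi
```

Make it executable with `chmod +x .git/hooks/pre-push` and git runs it before every push.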
MODULE 6 · shipping soon · 60 min
Ship It
The full harness in action: end-to-end case study. Team rollout playbook. The 7-day PR challenge.
→ SHIP_CHECKLIST.md + your first harnessed PR
MODULE 7 · shipping soon · 45 min
Autonomous Bug Investigation
When a stack trace hits production, an agent investigates the root cause, writes a fix, and opens a PR. You'll build the pipeline: error → context gathering → Claude Code session → reviewed PR.
→ automated investigation pipeline
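Module 7's pipeline can be sketched as a small driver script. Everything here is an assumption about your setup: `claude -p` (Claude Code's headless print mode) and GitHub's `gh` CLI are the assumed tools, the stack trace is a stand-in, and DRY_RUN keeps the sketch runnable without either tool installed:

```shell
#!/bin/sh
# Hypothetical pipeline sketch: stack trace -> context -> Claude Code -> PR.
# TRACE would normally come from your error tracker; this default is a stand-in.
TRACE="${TRACE:-NullPointerException at OrderService.createOrder}"
PROMPT="Investigate this production stack trace, find the root cause,
write a minimal fix with a test, and stop before committing:

$TRACE"

# DRY_RUN=1 (default) just prints the prompt; set DRY_RUN=0 to launch a
# headless Claude Code session and open a PR for review.
DRY_RUN="${DRY_RUN:-1}"
if [ "$DRY_RUN" = "1" ]; then
  echo "$PROMPT"
else
  claude -p "$PROMPT" || exit 1   # assumed: Claude Code headless mode
  gh pr create --fill             # assumed: GitHub CLI opens the reviewed PR
fi
```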
// audience

Who this is for

✓ THIS IS FOR YOU IF
You're a professional developer in a production codebase
You've tried AI coding and found it more work than it's worth
You want a systematic approach, not tips and tricks
You prefer concrete artifacts over abstract advice
You're comfortable with terminal, git, and code review
✗ NOT FOR YOU IF
You've never programmed before
You want a Claude Code feature walkthrough
You're looking for "vibe coding"
You expect AI to replace understanding your codebase
// objections

Fair questions

"There's a free Anthropic course. Why pay $97?"
Anthropic's course teaches what Claude Code is — features, hooks, MCP. About 2.5 hours, beginner level. It teaches you to drive the car. This course teaches you to win the race. You get a production workflow system with artifacts that stay in your repo.
"AI-generated code isn't reliable enough for production."
Correct — without guardrails. That's the whole point. The harness adds automated compilation checks, test-driven development, scoped rules, and a review workflow. The 87% success rate comes from the guardrails, not from better prompting.
"I've been coding for 15 years. I don't need AI."
You don't need it. But 29 PRs in 10 days, with production-quality code, reviewed and tested — that's a force multiplier. The harness is designed for experienced developers. It requires more engineering judgment, not less.
"$97 is expensive for a course."
It's less than your hourly rate. If the harness saves you one afternoon of rework, it's paid for itself.
"Can't I just figure this out myself?"
Yes. I did — over 275 sessions and 14 months. The course compresses that into a weekend.
// author

Sten Morten Misund-Asphaug

15+ years building payment systems, mobile apps, and SaaS products. Java/Spring Boot at Giant Leap Technologies (part of Visma). Co-founder of Omumu Solutions, where most of the 2,305 commits happened.

This isn't theory from someone who read about AI coding. It's a system extracted from 469 MB of real production transcripts.

275 sessions
12 projects
14 months
2,305 commits
469 MB data
87% success rate
// risk

60-day money-back guarantee

Work through the modules. Set up the harness. Use it on real work. If after 60 days you didn't learn anything useful — or it just wasn't worth it for you — email me and I'll refund the full amount. No form, no questionnaire, no "but have you tried module 4?"

The harness either works in your codebase or it doesn't. You'll know.

$97
The AI Coding Harness
8 modules with named patterns · Exercises in your own codebase · Complete artifact kit · The 7-day PR challenge · Lifetime access
Early access — Modules 0 and 1 are available now. The rest ship weekly. Buy once, get everything as it lands.
Get the Harness Kit →
No countdown timers. No fake scarcity. No "only X spots left."

You've read this far because something about AI coding frustrated you — and something about this approach made sense.

The harness isn't magic. It's the same engineering discipline you already practice — scoping work, reviewing code, writing tests — applied to a new kind of collaboration.

The question isn't whether AI coding works.
It's whether you have the rails to make it work.