CARE is the proprietary framework aTeam uses to build AI-powered software. It's how we ship products that work in production — across healthcare, finance, retail, and manufacturing.
Every team building with AI today is figuring it out as they go. Some teams move fast and ship broken software. Other teams move carefully and ship six months late. Both groups are using AI the wrong way. Over the last two years, aTeam built the right way — by writing it down. The result is CARE: a tool-agnostic framework of 14 versioned files, 8 hard rules, and a 3-phase discipline that we now use on every AI project we deliver.
AI projects don't fail because AI is bad. They fail because the tooling is moving faster than the discipline. Engineers paste prompts into Claude or Cursor, get something that runs, ship it, and hope for the best. That works for demos. It does not work for production software in regulated industries. Three failure patterns show up on almost every project we audit — and you've probably felt all three.
Every time an engineer opens Claude Code, Cursor, or Cowork, the AI starts from zero. No memory of your tech stack. No memory of last week's decisions. No memory of your security rules. Engineers re-explain the same things every day — or they forget to, and the AI quietly does the wrong thing.
fetch() again?"The longer a session runs, the more the model "loses the middle." Past about 50% of its memory window, accuracy starts dropping. Most teams have no protocol for it. They just keep pushing — and the AI keeps producing code that looks right but isn't.
Move from Cowork to Claude Code, from Sonnet to Opus, from AI to a human reviewer — and the context resets. Decisions made in one tool are invisible to the next. The handoff is the moment things get dropped, and nobody owns it.
If you're building a marketing site, a hallucinated function name costs you an hour. If you're building a hospital pharmacy module, a tax invoice engine, or a banking workflow — the same mistake costs you a regulatory event, a patient incident, or a six-figure fine. Most teams don't notice the difference until something goes wrong.
AI invents a function, an endpoint, or a database column. Looks real. Compiles. Fails in front of a customer.
Sensitive patient data ends up in a prompt sent to an external model. Nobody audited it. The regulator finds out later.
The original engineer leaves. AI-generated code has no comments, no decisions log, no context. The next team can't touch it.
CARE is the way aTeam now builds every AI-powered product we deliver. It started as an internal playbook for our own engineers — a way to make sure every project, every session, every commit followed the same rules. The whole framework lives in your repository as 14 markdown files. Engineers read them. AI reads them. Everyone stays on the same page.
Every session starts by reading the canonical project file. Tech stack, security posture, naming conventions, what's been decided already. The AI is never working from scratch.
PII scrubbing. Output validation. Audit logging. Every guardrail is a named pattern wired into your build pipeline. Skipping one isn't an option — the build fails.
If a rule isn't written down, it doesn't exist. Period. That means the AI can find it, the next engineer can find it, and the auditor can find it. No "ask Suresh, he knows."
Why did we pick this model? Why does this prompt look this way? What went wrong last month? All recorded, all in the repo, all version-controlled. An audit takes a day, not a quarter.
Drop your context, your AI tools, and your engineers in on the left. CARE — four disciplined layers — sits in the middle. Out the other side comes code that ships, decisions that are written down, and evidence that survives an audit. The whole loop lives in your repository.
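None of this needs special tooling. The sketch below shows what the first layer described above — every session starting from the canonical project file — looks like in practice: the files are read from the repository root and handed to the AI tool as its opening context. It is a minimal Python illustration; the file list is a hypothetical selection drawn from the files named on this page, and how the resulting block reaches Claude Code, Cursor, or any other tool is up to the team.

```python
from pathlib import Path

# Files read at the start of every session. The names are the ones this
# page mentions; a real project may carry more (hypothetical selection).
CONTEXT_FILES = ["CLAUDE.md", "decisions.md", "architecture.md", "handoff.md"]

def build_session_context(repo_root: str = ".") -> str:
    """Concatenate whichever canonical files exist into one context block."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(repo_root) / name
        if path.is_file():
            sections.append(f"## {name}\n\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # The returned block is what gets handed to the AI tool as its
    # system context before the first task prompt of the session.
    print(build_session_context())
```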
Every CARE engagement starts the same way: we install these 14 files at the root of your repository. They're versioned, they're owned by named people on your team, and the AI reads them automatically every session. This is what makes the framework real — it lives in your code, not in our slides.
Every project gets these. CI gates enforce them. Skipping one is a documented exception, signed off by a lead — not a default.
Domain vocabulary so the AI doesn't invent terms. One reference file per module of your system. Strongly recommended for any regulated project.
Add as the project grows. The framework is built to extend, not to box you in. Most large projects end up with 20–25 files by year two.
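To make the files enforceable rather than aspirational, the simplest gate is a CI script that refuses to pass while any core file is missing or empty. The sketch below is illustrative only; the file list is a subset of the names that appear on this page, not the full set of 14.

```python
import sys
from pathlib import Path

# Core files every project gets. This list is a subset taken from the
# files named on this page; the full CARE set of 14 is larger.
REQUIRED = [
    "CLAUDE.md", "architecture.md", "decisions.md", "prompt_log.md",
    "data_classification.md", "compliance.md", "code_review.md",
    "incident_log.md", "handoff.md", "gap_analysis.md",
]

def main(repo_root: str = ".") -> int:
    failures = []
    for name in REQUIRED:
        path = Path(repo_root) / name
        if not path.is_file():
            failures.append(f"{name} is missing from the repository root")
        elif not path.read_text(encoding="utf-8").strip():
            failures.append(f"{name} exists but is empty")
    for message in failures:
        print(f"FAIL: {message}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```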
Before, during, after. That's the whole loop. Every engineer runs it every time they sit down with AI — a short read at the start, a quiet discipline in the middle, a clean handoff at the end. After a sprint, it stops feeling like overhead and starts feeling like the only sensible way to work.
Before you start: read the relevant .md files for the task. While you work: log every prompt in prompt_log.md and record decisions in decisions.md as you make them. When you stop: update CLAUDE.md in the same PR, write handoff.md if you're continuing tomorrow, note in gap_analysis.md where you actually got to, complete the code_review.md checklist, and log anything that broke in incident_log.md.

Each rule is wired into your build pipeline as a CI check on day one. A pull request that breaks any of them simply doesn't merge. We don't rely on engineers to remember — we rely on the system to enforce. That's the contract behind every CARE engagement.
Synthetic, anonymised data only. Sensitive fields get scrubbed before they leave your servers. Logged in data_classification.md.
AI-generated or human-written, every controller and route uses the project's standard auth pattern. Lint-checked, never bypassed.
The reviewer pulls the branch, runs the tests, and verifies no APIs were hallucinated. The code_review.md checklist is non-negotiable.
Every library, model, or service goes into decisions.md with the trade-offs documented. No silent additions.
The schema is a first-class artefact. Changes need an updated architecture.md, a reversible migration, and a security check.
Prompts are code. Versioned, reviewed, evaluation-tested. A prompt missing from prompt_log.md can't be called from production. Lint-enforced.
At 50% the wind-down begins. At 75% the handoff is drafted. At 90% the session stops. handoff.md is the bridge to the next session.
PII scrubbing, output validation, audit logs, region pinning — every compliance control in compliance.md is a hard CI gate.
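As an example of how a rule becomes a hard gate rather than a guideline, here is a sketch of the prompt-registry check. It assumes two hypothetical conventions: production code references prompts through a `use_prompt("<id>")` helper, and prompt_log.md lists each registered id as a `## <id>` heading. The real conventions are set per project; the shape of the check is the point.

```python
import re
import sys
from pathlib import Path

# Hypothetical convention: code calls use_prompt("invoice_extract_v3") and
# prompt_log.md registers each prompt under a "## invoice_extract_v3" heading.
PROMPT_CALL = re.compile(r'use_prompt\(\s*["\']([\w.-]+)["\']')

def registered_prompts(log_path: Path) -> set[str]:
    """Collect every prompt id registered as a '## <id>' heading."""
    if not log_path.is_file():
        return set()
    text = log_path.read_text(encoding="utf-8")
    return set(re.findall(r"^##\s+([\w.-]+)\s*$", text, flags=re.MULTILINE))

def main(src_dir: str = "src", log_file: str = "prompt_log.md") -> int:
    known = registered_prompts(Path(log_file))
    failures = []
    for path in Path(src_dir).rglob("*.py"):
        for prompt_id in PROMPT_CALL.findall(path.read_text(encoding="utf-8")):
            if prompt_id not in known:
                failures.append((path, prompt_id))
    for path, prompt_id in failures:
        print(f"FAIL: {path} calls unregistered prompt '{prompt_id}'")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```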
Vibe-coding, copilot-assisted, generative-driven — or CARE. Here's the side-by-side, on the dimensions that decide whether your AI software ships safely or quietly accumulates risk.
| What we compare | CARE · aTeam | Vibe-coding | AI-assisted (Copilot) | Generative-driven |
|---|---|---|---|---|
| Where project memory lives | 14 versioned files in your repo | In someone's head | In the prompt, every time | In a process loop |
| If you switch AI tools | Same files, any tool | Start over | Per-tool config | Tied to one vendor's method |
| How rules are enforced | CI checks block the build | "Be careful" | Code review, sometimes | Process gates |
| When the session gets too long | 50/75/90 protocol with logged handoff | Keep going until it breaks | Start a new chat | Reset between phases |
| Audit trail | Decisions, prompts, incidents — all versioned | Chat history | Git log | Loop checkpoints |
| Fit for regulated industries | NPHIES, HIPAA, ZATCA, FHIR-aware | No | Generic | Generic enterprise |
| The engineer's role | Owner of the rules and the evidence | Prompt operator | Reviewer of AI output | Architect and validator |
| Onboarding a new engineer | Read 14 files in 2 hours · ready to ship | Shadow someone | "Read the codebase" | Workshop |
The biggest risk in adopting any AI methodology today is locking yourself to a tool that won't be the best one in twelve months. CARE solves this by sitting above the tools, not inside them. The same 14 files work with whatever your team uses — today, and for whatever's released next quarter.
Whether you're auditing an existing AI initiative or building something brand new, we slot in at the right level. Engineers talking to engineers — not slides talking to procurement.
Two-week review of your current AI initiative against the 14 files, the 8 rules, and every regulation that applies to your domain. Delivered as a prioritised gap report.
Four to six weeks installing CARE end-to-end on one production feature. 14 files filled in, CI gates wired, prompt registry set up — running in your repository by sprint end.
Senior aTeam engineers integrated into your team. They deliver under CARE and transfer the practice to your engineers. Starts with a 14-day no-cost trial.
Greenfield build of an AI-native product on CARE — from architecture through prompt registry, guardrails, launch, and ongoing observability. Outcome-based engagement.
A Saudi pharmaceutical group ran NUPCO portal reconciliation manually across 450+ SKUs. Hours of human time, every day. We rebuilt the workflow on CARE: bounded extraction agents, every prompt registered, schema-validated outputs into their ERP, every patient and supplier identifier classified at the highest tier with a server-side scrubber.
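The two controls doing the heavy lifting there, schema-validated outputs and a server-side scrubber, look roughly like the sketch below. The field names, the restricted tier, and the pseudonymisation choice are hypothetical; on a real engagement they are defined in the project schema and data_classification.md.

```python
import hashlib

# Hypothetical extraction record; the real field names and tiers live in
# the project's schema and data_classification.md.
REQUIRED_FIELDS = {"sku": str, "quantity": int, "supplier_id": str, "batch_no": str}

# Fields classified at the highest tier: they never leave the server raw.
RESTRICTED_FIELDS = {"supplier_id"}

def validate(record: dict) -> dict:
    """Reject any AI-extracted record that does not match the schema exactly."""
    unknown = set(record) - set(REQUIRED_FIELDS)
    if unknown:
        raise ValueError(f"unexpected fields from extraction: {sorted(unknown)}")
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(record.get(field), expected):
            raise ValueError(f"{field!r} is missing or not a {expected.__name__}")
    return record

def scrub(record: dict) -> dict:
    """Swap restricted identifiers for a stable pseudonym (one-way hash) so
    the raw value never crosses the server boundary."""
    clean = dict(record)
    for field in RESTRICTED_FIELDS:
        digest = hashlib.sha256(str(clean[field]).encode("utf-8")).hexdigest()[:12]
        clean[field] = f"anon_{digest}"
    return clean

if __name__ == "__main__":
    extracted = {"sku": "NUP-00417", "quantity": 24,
                 "supplier_id": "SUP-9931", "batch_no": "B240115"}
    print(scrub(validate(extracted)))  # only validated, scrubbed data moves on
```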
Where's your CLAUDE.md? Tell us where your AI work is today. We'll show you which of the 14 files you already have, which you don't, and what it takes to close the gap. Engineers talking to engineers.