CARE
14 files · 8 rules · 3 phases · 0 lock-in
2026-04 · 11 chapters
Introducing the CARE Framework™ · aTeam's proprietary method

CARE, our proprietary framework
for building AI-powered software.

CARE is the proprietary framework aTeam uses to build AI-powered software. It's how we ship products that work in production — across healthcare, finance, retail, and manufacturing.

Built for
Claude Code · Cowork · Cursor · Gemini · Copilot · MCP servers
Context-driven · AI-Guarded · Reference-based · Evidence-backed · 14 mandatory files · 8 hard rules · 3 disciplined phases · Tool-agnostic · Audit-ready · Production-grade

Every team building with AI today is figuring it out as they go. Some teams move fast and ship broken software. Other teams move carefully and ship six months late. Both groups are using AI the wrong way. Over the last two years, aTeam built the right way — by writing it down. The result is CARE: a tool-agnostic framework of 14 versioned files, 8 hard rules, and a 3-phase discipline that we now use on every AI project we deliver.

14 · Mandatory .md files in every project
8 · Hard rules wired as CI gates
3 · Phases per session: before, during, after
0 · Lock-in to any AI tool or vendor
The Problem

The way most teams use AI
is fundamentally broken.

It's not because AI is bad. It's because the tooling is moving faster than the discipline. Engineers paste prompts into Claude or Cursor, get something that runs, ship it, and hope for the best. That works for demos. It does not work for production software in regulated industries. Three failure patterns show up on almost every project we audit — and you've probably felt all three.

Failure pattern nº 01

AI forgets your project every single session.

Every time an engineer opens Claude Code, Cursor, or Cowork, the AI starts from zero. No memory of your tech stack. No memory of last week's decisions. No memory of your security rules. Engineers re-explain the same things every day — or they forget to, and the AI quietly does the wrong thing.

What it sounds like: "I told Claude this last week. Why is it suggesting fetch() again?"
Failure pattern nº 02

Long AI sessions get measurably worse over time.

The longer a session runs, the more the model "loses the middle." Past about 50% of its memory window, accuracy starts dropping. Most teams have no protocol for it. They just keep pushing — and the AI keeps producing code that looks right but isn't.

What it sounds like: "It was great this morning. By afternoon it forgot the auth pattern we agreed on at 10am."
Failure pattern nº 03

Switching tools loses everything you decided.

Move from Cowork to Claude Code, from Sonnet to Opus, from AI to a human reviewer — and the context resets. Decisions made in one tool are invisible to the next. The handoff is the moment things get dropped, and nobody owns it.

What it sounds like: "The next dev opened the branch and asked: what was the plan again?"
Why this matters

For most software, this is annoying.
For regulated software, it's a business risk.

If you're building a marketing site, a hallucinated function name costs you an hour. If you're building a hospital pharmacy module, a tax invoice engine, or a banking workflow — the same mistake costs you a regulatory event, a patient incident, or a six-figure fine. Most teams don't notice the difference until something goes wrong.

— Tax 01

The hallucination tax

AI invents a function, an endpoint, or a database column. Looks real. Compiles. Fails in front of a customer.

— Tax 02

The compliance gap

Sensitive patient data ends up in a prompt sent to an external model. Nobody audited it. The regulator finds out later.

— Tax 03

The maintenance cliff

The original engineer leaves. AI-generated code has no comments, no decisions log, no context. The next team can't touch it.

Our Answer

So we built our own framework.
We call it CARE.

CARE is the way aTeam now builds every AI-powered product we deliver. It started as an internal playbook for our own engineers — a way to make sure every project, every session, every commit followed the same rules. The whole framework lives in your repository as 14 markdown files. Engineers read them. AI reads them. Everyone stays on the same page.

Context
Context-driven

The AI knows your project before it writes a line.

Every session starts by reading the canonical project file. Tech stack, security posture, naming conventions, what's been decided already. The AI is never working from scratch.

AI
AI-Guarded

Guardrails are real code, not good intentions.

PII scrubbing. Output validation. Audit logging. Every guardrail is a named pattern wired into your build pipeline. Skipping one isn't an option — the build fails.

Ref.
Reference-based

Every rule lives in a file. No tribal knowledge.

If a rule isn't written down, it doesn't exist. Period. That means the AI can find it, the next engineer can find it, and the auditor can find it. No "ask Suresh, he knows."

Evidence
Evidence-backed

Every decision, prompt, and incident is logged.

Why did we pick this model? Why does this prompt look this way? What went wrong last month? All recorded, all in the repo, all version-controlled. An audit takes a day, not a quarter.

The CARE flow

From your team's chaos
to audit-ready software.

Drop your context, your AI tools, and your engineers in on the left. CARE — four disciplined layers — sits in the middle. Out the other side comes code that ships, decisions that are written down, and evidence that survives an audit. The whole loop lives in your repository.

— Inputs —
Project Context: Architecture · Compliance · Security · Data classes
AI Tools: Claude Code · Cowork · Cursor · Gemini · Copilot · MCP
Engineers: Senior · Mid · Reviewers · Auditors

— CARE · The Framework —
Context-driven: 14 .md files in your repo
AI-Guarded: PII scrub · validation · audit log
Reference-based: No tribal knowledge
Evidence-backed: Decisions · prompts · incidents

— Outputs —
Audit-ready code: CI gates · lint · review
Versioned decisions: decisions.md · append-only
Registered prompts: prompt_log.md
Compliance evidence: audit in a day, not a quarter

If your engineers are figuring out AI as they go, you already have the problem CARE solves. Let's talk about what your project needs.

Book a Free Audit
The framework, in your repository

CARE is 14 markdown files.
We drop them into your project on day one.

Every CARE engagement starts the same way: we install these 14 files at the root of your repository. They're versioned, they're owned by named people on your team, and the AI reads them automatically every session. This is what makes the framework real — it lives in your code, not in our slides.

/your-project/docs/care/
├── README.md · how to adopt CARE in your repo
├── CLAUDE.md · the rules every AI session reads first
├── architecture.md · what the system is — modules, data flow, integration points
├── compliance.md · every regulation that applies, with controls
├── security.md · threat model and named mitigation patterns
├── data_classification.md · which data is sensitive, and what to do about it
├── tool_policy.md · which AI tool, which model, for which task
├── prompt_log.md · every production prompt, registered and versioned
├── testing_instructions.md · how to test — commands, coverage targets
├── unittest.md · unit-test patterns — and AI test mistakes to block
├── code_review.md · checklist for reviewing AI-generated code
├── handoff.md · how to switch sessions, tools, or people without losing context
├── gap_analysis.md · where the project actually is vs. the plan
├── decisions.md · why we did what we did — append-only
├── incident_log.md · things that went wrong and what we learned
├── prompts/ · production prompts, version-controlled like code
└── modules/ · one reference file per module of your system
Mandatory · 14 files

The non-negotiable floor.

Every project gets these. CI gates enforce them. Skipping one is a documented exception, signed off by a lead — not a default.
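As an illustration, a gate of this kind can be a few lines of script run as the first CI job. The sketch below is ours, not the canonical implementation: the `docs/care` path comes from the tree above, and the 14-name roster is an assumption (the tree lists more entries than the mandatory count, so the exact floor would be pinned per engagement).

```python
# Hypothetical CARE CI gate: block the build if a mandatory file is
# missing. Path and roster are illustrative, not the canonical list.
from pathlib import Path

MANDATORY = [
    "CLAUDE.md", "architecture.md", "compliance.md", "security.md",
    "data_classification.md", "tool_policy.md", "prompt_log.md",
    "testing_instructions.md", "unittest.md", "code_review.md",
    "handoff.md", "gap_analysis.md", "decisions.md", "incident_log.md",
]

def missing_files(care_dir: str) -> list[str]:
    """Return the mandatory CARE files absent from the given directory."""
    root = Path(care_dir)
    return [name for name in MANDATORY if not (root / name).is_file()]

def gate(care_dir: str = "docs/care") -> int:
    """Exit code for CI: non-zero blocks the merge."""
    gaps = missing_files(care_dir)
    for name in gaps:
        print(f"CARE gate: missing {name}")
    return 1 if gaps else 0
```

Wired as the first stage of the pipeline, a non-zero exit stops every later stage, which is what turns the 14 files from a suggestion into a floor.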

Recommended

Glossary and per-module references.

Domain vocabulary so the AI doesn't invent terms. One reference file per module of your system. Strongly recommended for any regulated project.

Optional

Runbooks, onboarding, experiments.

Add as the project grows. The framework is built to extend, not to box you in. Most large projects end up with 20–25 files by year two.

Three phases. Every session, every day.

How CARE runs in practice.

Before, during, after. That's the whole loop. Every engineer runs it every time they sit down with AI — a short read at the start, a quiet discipline in the middle, a clean handoff at the end. After a sprint, it stops feeling like overhead and starts feeling like the only sensible way to work.

Set the table before you start prompting.

7 actions
  • Read the relevant .md files for the task.
  • Classify the data — public, internal, sensitive, restricted.
  • Pick the right model — light, default, or heavy.
  • Register the prompt in prompt_log.md.
  • Wire the guardrails before you call the model.
  • Start fresh if the session is more than half full.
  • Decide how you'll know the answer is correct.

Stay inside the rails while you work.

6 actions
  • Watch context usage: 50% warn, 75% prepare handoff, 90% stop.
  • Don't trust "done" — run the test command and read the output.
  • Reject hallucinations as soon as you spot them.
  • Log decisions in decisions.md as you make them.
  • Update prompts in the same pull request, never separately.
  • Found a new rule? Update CLAUDE.md in the same PR.
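The context-usage thresholds in the first item above can be sketched as a tiny helper. This is a sketch under assumptions: token accounting and window sizes vary by tool, and the action names are illustrative, not part of any vendor API.

```python
# Sketch of the CARE 50/75/90 context protocol. The thresholds come
# from the checklist above; token accounting is tool-specific.
def session_action(used_tokens: int, window_tokens: int) -> str:
    """Map context-window usage to the next CARE action."""
    pct = 100 * used_tokens / window_tokens
    if pct >= 90:
        return "stop"             # end the session now
    if pct >= 75:
        return "prepare-handoff"  # draft the handoff.md block
    if pct >= 50:
        return "warn"             # wind down, start nothing large
    return "continue"
```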

Close the loop before you walk away.

6 actions
  • Append a handoff block to handoff.md if you're continuing tomorrow.
  • One-line update to gap_analysis.md — where you actually got to.
  • Update architecture, compliance, and data classification if anything moved.
  • Tick the PR attestation per code_review.md.
  • Mistakes or near-misses? Log them in incident_log.md.
  • Verify all CI gates are green before requesting review.
The eight rules nobody breaks

These are the rules every
CARE project signs up to.

Each rule is wired into your build pipeline as a CI check on day one. A pull request that breaks any of them simply doesn't merge. We don't rely on engineers to remember — we rely on the system to enforce. That's the contract behind every CARE engagement.

01

No real production data ever goes into a prompt.

Synthetic, anonymised data only. Sensitive fields get scrubbed before they leave your servers. Logged in data_classification.md.

CI gated
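A server-side scrubber for this rule can be sketched as below. The regex patterns and placeholder labels are illustrative assumptions; in practice the pattern set would be generated from data_classification.md rather than hard-coded.

```python
import re

# Illustrative patterns only; a production scrubber is driven by
# data_classification.md, not a hard-coded dictionary.
PATTERNS = {
    "NATIONAL_ID": re.compile(r"\b\d{10}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace sensitive fields with typed placeholders before a
    prompt leaves the server."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanking) keep the sentence structure intact, so the model can still reason about the field without ever seeing the value.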
02

Every write endpoint passes through auth and access control.

AI-generated or human-written, every controller and route uses the project's standard auth pattern. Lint-checked, never bypassed.

Lint enforced
03

A human reviews every line of AI-generated code before it merges.

The reviewer pulls the branch, runs the tests, and verifies no APIs were hallucinated. The code_review.md checklist is non-negotiable.

PR blocked
04

No new dependency without a written reason.

Every library, model, or service goes into decisions.md with the trade-offs documented. No silent additions.

Logged
05

No schema change without updating the architecture file.

The schema is a first-class artefact. Changes need an updated architecture.md, a reversible migration, and a security check.

CI gated
06

No production prompt without a registry entry.

Prompts are code. Versioned, reviewed, evaluation-tested. A prompt missing from prompt_log.md can't be called from production. Lint-enforced.

Lint enforced
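One way such a lint can work is sketched below. The PRM-style prompt IDs and the registry format are hypothetical, used only to illustrate the check; a real rule would match whatever ID convention the project's prompt_log.md defines.

```python
import re

# Hypothetical ID convention, e.g. call_model(prompt_id="PRM-12").
PROMPT_ID = re.compile(r"PRM-\d+")

def unregistered_prompts(source_code: str, prompt_log: str) -> set[str]:
    """Prompt IDs referenced in code but missing from prompt_log.md."""
    used = set(PROMPT_ID.findall(source_code))
    registered = set(PROMPT_ID.findall(prompt_log))
    return used - registered
```

Run over the diff in CI, a non-empty result fails the lint stage, so an unregistered prompt never reaches production.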
07

Discipline around the AI's memory window: 50, 75, 90.

At 50% the wind-down begins. At 75% the handoff is drafted. At 90% the session stops. handoff.md is the bridge to the next session.

Protocol
08

Compliance gates block the build. They don't warn.

PII scrubbing, output validation, audit logs, region pinning — every compliance control in compliance.md is a hard CI gate.

CI gated
CARE vs. how teams build today

Most teams build AI four ways.
Only one holds up under audit.

Vibe-coding, copilot-assisted, generative-driven — or CARE. Here's the side-by-side, on the dimensions that decide whether your AI software ships safely or quietly accumulates risk.

| What we compare | CARE · aTeam | Vibe-coding | AI-assisted (Copilot) | Generative-driven |
|---|---|---|---|---|
| Where project memory lives | 14 versioned files in your repo | In someone's head | In the prompt, every time | In a process loop |
| If you switch AI tools | Same files, any tool | Start over | Per-tool config | Tied to one vendor's method |
| How rules are enforced | CI checks block the build | "Be careful" | Code review, sometimes | Process gates |
| When the session gets too long | 50/75/90 protocol with logged handoff | Keep going until it breaks | Start a new chat | Reset between phases |
| Audit trail | Decisions, prompts, incidents — all versioned | Chat history | Git log | Loop checkpoints |
| Fit for regulated industries | NPHIES, HIPAA, ZATCA, FHIR-aware | No | Generic | Generic enterprise |
| The engineer's role | Owner of the rules and the evidence | Prompt operator | Reviewer of AI output | Architect and validator |
| Onboarding a new engineer | Read 14 files in 2 hours · ready to ship | Shadow someone | "Read the codebase" | Workshop |
Built for the stack you actually use

One framework. Every AI tool
you'll ever need. No vendor lock-in.

The biggest risk in adopting any AI methodology today is locking yourself to a tool that won't be the best one in twelve months. CARE solves this by sitting above the tools, not inside them. The same 14 files work with whatever your team uses — today, and for whatever's released next quarter.

Claude Code
multi-file refactors · default backend
Cowork
documents · plans · multi-format deliverables
Cursor
in-IDE inline assistance · refactors
Gemini
approved per project · same rules apply
GitHub Copilot
inline completions · same review rules
Windsurf
agent-style coding · same context
MCP Servers
allowlisted · least-privilege scopes
Direct API
embedded in product · via gateway
How aTeam delivers CARE

Four ways to bring CARE into
your project. Pick the level you need.

Whether you're auditing an existing AI initiative or building something brand new, we slot in at the right level. Engineers talking to engineers — not slides talking to procurement.

01 · Audit

CARE Audit

When you have AI in production and you're not sure if it's safe.

Two-week review of your current AI initiative against the 14 files, the 8 rules, and every regulation that applies to your domain. Delivered as a prioritised gap report.

2 weeks · Time to value
Gap report · Deliverable
03 · Embed

Engineering Enablement

When you want senior engineers in your team and your engineers up-skilled.

Senior aTeam engineers integrated into your team. They deliver under CARE and transfer the practice to your engineers. Starts with a 14-day no-cost trial.

3+ months · Time to value
14-day · No-cost trial
04 · Build

Production AI Build

When you want a new AI product built right, end to end.

Greenfield build of an AI-native product on CARE — from architecture through prompt registry, guardrails, launch, and ongoing observability. Outcome-based engagement.

Outcome-based · Engagement
CARE-native · From day 1
Not sure which fits? Talk to our engineers · Get a quote
CARE in production

What CARE looks like
on a real project.

A Saudi pharmaceutical group ran NUPCO portal reconciliation manually across 450+ SKUs. Hours of human time, every day. We rebuilt the workflow on CARE: bounded extraction agents, every prompt registered, schema-validated outputs into their ERP, every patient and supplier identifier classified at the highest tier with a server-side scrubber.

87%
Less manual reconciliation time
450+
SKUs auto-tracked, daily
0
Compliance incidents in 6 months
"The CARE files weren't there for marketing. They were the reason a junior engineer could pick up the work mid-sprint and not break anything. The reason an auditor could review six months of AI calls in a day. The reason we slept well during quarter-end close."

Al Jamjoom Group · SupplyTrack ETD · CARE Build engagement, 2025–2026
Questions we get asked a lot

Honest answers to
the questions every CTO eventually asks.

Is CARE a product, or a methodology?

It's a methodology with templates. We don't sell software. We deliver the 14 files, the CI gates, and the practice. You own all of it after the engagement — it lives in your repo, not on our servers.

Will CARE slow my engineers down?

For the first sprint, slightly. Reading the rules takes time. After that, it speeds them up — because the AI stops re-asking the same questions, decisions stop getting re-litigated, and onboarding new engineers takes hours instead of weeks.

What if our AI tools change next year?

That's exactly what CARE is designed for. The framework sits above the tools. When you switch from Cursor to whatever comes next, the 14 files stay the same. The AI tool is replaceable. The rules aren't.

Do we need to be in a regulated industry to benefit?

No. Regulated industries are where CARE pays for itself fastest, because the cost of non-compliance is highest. But the same discipline makes any engineering team faster and any codebase more maintainable.

How is this different from just using CLAUDE.md?

One file is a starting point. CARE is the full surface area: data classification, compliance, prompt registry, handoff protocol, incident logging, decision history. The single file misses the seven hardest problems.

Can we run a small pilot before committing?

Yes. Most clients start with a CARE Audit (two weeks) or a Pilot Sprint (four to six weeks on one feature). You see the framework in action on your own code before any larger commitment.

If you're shipping AI software,
you need a framework.
We have the one we trust.

Tell us where your AI work is today. We'll show you which of the 14 files you already have, which you don't, and what it takes to close the gap. Engineers talking to engineers.

Senior AI engineers · 14-day no-cost trial · Tool-agnostic · Audit-ready by construction

Let's Talk
