Mise en Place for AI-Assisted Work: How to Turn Prompt Engineering Into Shared Infrastructure

Pepper is a B2B food distribution solution. We streamline and simplify the business interactions between restaurants and suppliers; our stack spans a Django backend, React Native and Next.js frontends, AWS Lambda integrations, and a FastAPI EDI ingestion service.
When we adopted Claude Code across product, engineering, and ops, the immediate effect was speed. The second-order effect was complexity.
The Problem
Everyone was writing prompts from scratch. Every PM generating a PRD re-explained the same codebase context, the same product frameworks, the same output template. Every engineer tracing a feature through our service architecture rediscovered the same file paths. Quality depended entirely on who was prompting and how much institutional knowledge they remembered to inject that session.
The core issue wasn't the tool; it was that we had no shared infrastructure around it. One person's hard-won prompt refinement died in their terminal. No mechanism existed for it to benefit anyone else. We were getting leverage from AI, but leverage without structure creates complexity that's hard to see until it's everywhere.
What We Built
We built pepper-mise: a git repo of shared slash commands for Claude Code.
If you're unfamiliar: Claude Code supports custom slash commands. Drop a markdown file into ~/.claude/commands/ and it becomes invokable as /command-name. Commands can take user input, instruct the model to search files, run shell commands, and execute multi-step workflows. Scope them under an org folder and they render as /org:command-name.
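To make that concrete, here is a minimal sketch of creating a custom command. The folder name, file name, and prompt body are illustrative, not taken from the actual pepper-mise repo; only the ~/.claude/commands/ location and the $ARGUMENTS placeholder come from Claude Code's slash-command mechanism.

```shell
# Hypothetical example: a minimal org-scoped slash command.
# "pepper" and "where-used" are made-up names for illustration.
mkdir -p ~/.claude/commands/pepper
cat > ~/.claude/commands/pepper/where-used.md <<'EOF'
Search the codebase for every usage of the symbol named in $ARGUMENTS.
Report findings grouped by service layer, with file paths for each hit.
EOF
# Claude Code now exposes this as: /pepper:where-used <symbol>
```

The command body is ordinary markdown; $ARGUMENTS is replaced with whatever the user types after the command name.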
pepper-mise organizes these commands by function: product, engineering, design, ops. A one-line installer (setup.sh) asks which team you belong to, then symlinks the relevant command folders into your Claude Code config. Because they're symlinks, not copies, a git pull updates every user's commands immediately. No reinstall, no coordination, no "did you grab the latest prompt?"
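A minimal sketch of what such an installer might do, assuming the repo is cloned locally; the variable names, default team, and repo layout are assumptions, not the real setup.sh:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a team-aware installer like setup.sh.
# Assumes the repo lives at $HOME/pepper-mise with one folder per team.
REPO="${REPO:-$HOME/pepper-mise}"
TEAM="${1:-engineering}"        # e.g. product | engineering | design | ops
mkdir -p "$HOME/.claude/commands"
# Symlink rather than copy: a later `git pull` inside $REPO updates
# every user's commands in place, with no reinstall step.
ln -sfn "$REPO/$TEAM" "$HOME/.claude/commands/$TEAM"
```

Linking the team folder directly under ~/.claude/commands/ is what makes the commands render with the team scope, e.g. /engineering:command-name.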
What Makes It More Than a Prompt Folder
The distinction matters. A prompt folder is a collection of text you paste in. pepper-mise commands are workflow scripts that do real work.
Take /product:prd. It doesn't just say "write me a PRD." It runs a six-phase pipeline: parse the user's input; ask targeted discovery questions grounded in JTBD and Cagan's risk frameworks; search all five of our repos for relevant models, APIs, and frontend components; map user workflows based on what it finds in the code; draft to a structured template with milestones and a codebase appendix; then run quality gates that check for missing sections and validate that every codebase reference points to a real file. The output reads like it was written by someone who actually knows our system, because the command encodes that knowledge.
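Under the hood, a multi-phase command like this is still just one markdown file whose body walks the model through the phases in order. A heavily abridged, hypothetical skeleton, not the real /product:prd prompt:

```shell
# Hypothetical, heavily abridged skeleton of a multi-phase command file;
# the real /product:prd prompt is far longer and more specific.
mkdir -p ~/.claude/commands/product
cat > ~/.claude/commands/product/prd.md <<'EOF'
You are drafting a PRD for: $ARGUMENTS

Phase 1 - Parse: restate the request and list open questions.
Phase 2 - Discover: ask the PM JTBD and risk questions; wait for answers.
Phase 3 - Search: scan the repos for relevant models, APIs, components.
Phase 4 - Map: derive user workflows from what the code actually does.
Phase 5 - Draft: fill the PRD template, with milestones and a codebase
          appendix.
Phase 6 - Gate: flag missing sections; verify every file reference
          points to a real file.
EOF
```

Because the phases live in the file rather than in anyone's head, the workflow runs the same way regardless of who invokes it.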
Critically, the commands don't just pull from the codebase; they pull from the human. Before /product:prd starts searching repos or drafting anything, it asks the PM structured questions: what's the job-to-be-done, what's the riskiest assumption, who are the stakeholders, what does success look like. These aren't generic icebreakers; they're the specific discovery questions that produce better product thinking, baked into the workflow so they can't be skipped. The model handles what it's good at: reading code, mapping dependencies, maintaining structure. But the workflow forces the human to contribute what only they can: the judgment, the customer context, the strategic framing. The result is a collaboration where each side does what it's best at, rather than the model guessing at intent or the PM forgetting to provide it.
/product:trace-feature searches our repos layer by layer (Django models, GraphQL schema, Hasura metadata, frontend hooks, Lambda handlers, EDI ingestion) and reports findings per layer. /product:rollout reads the codebase to infer whether a rollout is config-only, code-required, or hybrid, then generates the correct doc format for each type. /product:product-eval applies five named frameworks to evaluate a product investment area.
These aren't prompts. They're institutional knowledge made executable.
What Changed
Internal product announcements went from roughly twenty minutes to five. The command researches git history and config flags, then drafts a Slack-ready announcement in a consistent format.
Reviewed, edit-ready PRDs take less than an hour. They automatically account for cross-service interactions (models, GraphQL, frontend, Lambda integrations) that a manual draft would miss or require a separate investigation to map.
Outputs are less generic. Because commands take structured inputs and weave in the institutional context embedded in pepper-mise, the results reflect how we build, not how a general-purpose model guesses someone might build.
New team members get productive faster. The commands encode "how we do things here" in a way that onboarding docs never quite manage; because the commands actually do the thing, not just describe it.
And prompt improvements compound. When someone refines a command, it goes through a PR like any code change. It gets reviewed. If it makes things worse, we revert it. Once it merges, every user's environment picks it up on their next git pull. The whole team gets better at once.
The Side Effect: Fewer Wasted Tokens
There's a cost story here that's easy to miss. Without pepper-mise, every session started with a human manually re-explaining the codebase, the architecture, the desired output format. That's tokens spent on setup that produces nothing. A command like /product:prd front-loads all of that context; you skip the back-and-forth of "actually, our backend is called shishito, and the GraphQL layer sits here, and format this for Slack."
Ad hoc prompts also tend to produce generic first drafts that need two or three rounds of correction. Each round is another full generation. A well-structured command encodes those requirements from the start, so the first output is closer to done. The quality gates in /product:prd catch missing sections before the human even reads it; that's a refinement loop that used to cost real tokens and real time.
Then there's the compounding effect again. Without shared commands, ten people independently discover they need to tell Claude to check the EDI ingestion layer; that's the same corrective tokens spent ten times. With pepper-mise, one person adds it to the command and the waste disappears for everyone. The efficiency scales with team size.
The Meta-Lesson
The highest-ROI investment we made in AI tooling wasn't a better model or a new feature. It was encoding our team's knowledge into reusable, shared, version-controlled prompts.
Prompts are code. They deserve version control, code review, and shared ownership. Treating them as throwaway text is how institutional knowledge evaporates. Treating them as infrastructure is how it compounds.
Mise en place: everything in its place before service begins. It's how a kitchen line operates under pressure. It's how we think about AI-assisted work now. Not "use the tool harder," but make sure the prep is done before you fire the first ticket.
(Image Credit: Don LaVange)
