Where people actually keep their prompts
Watch ten people who use LLMs every day. In rough order of frequency, their prompts live in:
- A note in Apple Notes or Obsidian titled "prompts."
- Scrollback in the model's chat history.
- A file called `prompts.md` in a random folder.
- System Prompts / Custom Instructions inside ChatGPT or Claude.
- Nowhere. Rewritten from memory every time.
Each of these has a failure mode. Notes and scratch files get disorganized. Scrollback is unsearchable after a few days. Custom Instructions are locked inside one vendor. Rewriting from memory produces a slightly different prompt every time, which is the definition of non-reproducible.
What to actually optimize for
Three things matter when picking a home for your prompts:
- Retrieval speed. You reach for a prompt many times a day. If getting to it takes more than a few seconds, you won't do it. Menu bar beats app-switching beats file-hunting.
- Portability. LLM providers come and go. Pricing changes. Your best prompts are intellectual property worth keeping outside any single vendor's UI.
- Easy editing. Prompts evolve. A good home lets you tweak a paragraph without ceremony.
A system that actually holds up
Here's the approach I'd recommend, in rough priority:
1. One file, plain text
Keep every reusable LLM prompt in a single markdown file. Markdown because headings and code blocks are useful. One file because the minute you have multiple, you'll forget which one has the good prompt.
If you want sync across machines, put the file in iCloud Drive, Dropbox, or a git repo. If you work alone, local is fine.
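The file itself can be as simple as one heading per prompt. A minimal sketch, assuming the menu-bar tool treats each top-level heading as a prompt name (check Refrain's documentation for the format it actually expects; the prompt text here is illustrative):

```markdown
# make-comprehensive-plan

Before writing any code, produce a step-by-step plan. List the files
you will touch, the order of changes, and the risks. Ask clarifying
questions if any requirement is ambiguous.

# rewrite-in-plain-english

Rewrite the following text in plain English. Short sentences, no
jargon, no hedging. Preserve every technical claim exactly.
```

Headings double as the short names from step 3, so the file stays scannable even in a plain text editor.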
2. A menu-bar tool that reads from that file
This is where Refrain fits. Install it, point it at the markdown file, and your prompts are now one click away from the clipboard, anywhere on the Mac. Edit the file externally, Refrain reloads.
Without a menu-bar surface, the file is still just a file. You have to open an editor, search, copy, switch windows, paste. Too much friction.
3. Flat list, short names, aggressive deletion
Don't build a folder hierarchy. Don't tag. Don't categorize until you're over 40 prompts, which you probably won't be. Keep the list flat and name each prompt in kebab-case that describes what it does:
- make-comprehensive-plan
- review-pass-nuances
- check-specifics-schemas
- rewrite-in-plain-english
Any prompt you haven't used in a month, delete it. A prompt library stays useful because it's short, not because it's comprehensive.
4. Write prompts long; paste long
The prompts that produce good output are usually long. Two or three paragraphs of specific instructions about what to do, what to avoid, what format to return. Store them intact. Don't try to abbreviate.
5. Version the file, eventually
When you're a few months in and your prompts are load-bearing, put the markdown file in a git repo. You'll thank yourself the first time you change a prompt and regret it. `git log` on a prompt file is an underrated quality-of-life feature.
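Setting this up is a handful of commands. The path and commit messages below are illustrative; substitute wherever your file actually lives:

```shell
# One-time setup: turn the folder holding prompts.md into a git repo.
cd ~/Documents/prompts          # illustrative path
git init
git add prompts.md
git commit -m "Initial prompt library"

# After each meaningful edit:
git commit -am "Tighten review-pass prompt"

# When a change backfires, inspect the history and roll the file back:
git log -p prompts.md           # show every revision of the file
git checkout HEAD~1 -- prompts.md
```

`git checkout HEAD~1 -- prompts.md` restores the previous committed version of just that file, leaving the rest of the repo untouched.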
What to stop doing
Common anti-patterns:
- Keeping prompts inside ChatGPT's Custom Instructions only. Vendor-locked. Breaks if you try a different model.
- Elaborate tagging systems. You'll stop using the tags inside two weeks.
- A "prompt engineering" notebook with theories and examples mixed in. Keep reference material separate from the actual paste-ready prompts.
- Manually copying prompts into every new chat thread from memory. The whole point of this exercise is to stop doing that.
Summary
Plain markdown file, menu-bar app to read it, flat list with short names, delete what you don't use. Refrain handles the menu-bar piece; everything else is just habit.
Prompts you've refined over weeks are worth more than most people realize. They're how you get reproducible output from an LLM. Keep them somewhere you can actually reach.