Prompt library

Practical prompts for AI app builders.

Structured prompts for estimating cost, choosing models, designing AI features, testing quality, and preparing for launch.

Method note: these prompts are original and follow structured prompting patterns (role, goal, constraints, workflow, and output format); they are not copied from prompt marketplaces.

Categories: cost planning, model selection, app design, evaluation, launch, support bots.

Cost Planning Prompts

Estimate AI feature cost risk

Use this before launching a chatbot, document assistant, agent, or content workflow.

You are an AI product cost analyst.

Goal:
Estimate the cost risk of this AI feature before launch.

Feature:
[Describe the feature]

Expected usage:
- Monthly users: [number]
- Requests per user per month: [number]
- Average input tokens per request: [number]
- Average output tokens per request: [number]
- Model options: [models]

Constraints:
- Be conservative.
- Identify hidden usage growth.
- Do not assume unlimited context.

Workflow:
1. Identify the main cost drivers.
2. Estimate low, expected, and high monthly usage scenarios.
3. Suggest ways to reduce token usage.
4. List metrics to track after launch.

Output format:
Return a table plus a short recommendation.
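
A minimal sketch of the arithmetic behind the low / expected / high scenarios this prompt asks for, assuming placeholder per-million-token prices (the rates and usage numbers are illustrative, not real pricing):

```python
# Rough monthly cost estimate for an AI feature.
# Prices are placeholders per 1M tokens; substitute your provider's real rates.
PRICE_PER_M_INPUT = 3.00    # assumed, USD per 1M input tokens
PRICE_PER_M_OUTPUT = 15.00  # assumed, USD per 1M output tokens

def monthly_cost(users, requests_per_user, input_tokens, output_tokens):
    requests = users * requests_per_user
    input_cost = requests * input_tokens / 1_000_000 * PRICE_PER_M_INPUT
    output_cost = requests * output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT
    return input_cost + output_cost

# Low / expected / high usage scenarios, mirroring the prompt's workflow step 2.
for label, users in [("low", 500), ("expected", 2_000), ("high", 10_000)]:
    print(label, round(monthly_cost(users, 20, 1_500, 400), 2))
```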

Find token waste in a workflow

Use this to reduce prompt length, repeated context, and unnecessary model calls.

You are a token efficiency reviewer.

Goal:
Find where this AI workflow wastes tokens or model calls.

Workflow:
[Paste the workflow or prompt chain]

Constraints:
- Keep user experience intact.
- Prefer simpler architecture when possible.
- Flag any repeated context or unnecessary summaries.

Analyze:
1. Repeated input context
2. Overly long instructions
3. Outputs that can be shortened
4. Calls that can be cached or skipped
5. Cheaper model opportunities

Output format:
Create a prioritized fix list with expected cost impact: high, medium, or low.
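
One of the fixes this review often surfaces is re-sending identical prompts. A minimal caching sketch, where call_model stands in for whatever client function the app already uses:

```python
import hashlib

# Cache identical model calls so repeated context is not billed twice.
# call_model is a placeholder for your existing model client function.
_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_model) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]
```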

Model Selection Prompts

Choose a model for a product feature

Use this when comparing frontier models, smaller models, or open-weight options.

You are an AI model selection advisor.

Goal:
Recommend the best model strategy for this feature.

Feature:
[Describe the feature]

Requirements:
- Accuracy: [low / medium / high]
- Latency tolerance: [fast / normal / slow]
- Context size: [short / medium / long]
- Budget sensitivity: [low / medium / high]
- Privacy constraints: [none / moderate / strict]

Workflow:
1. Identify the most important model capability.
2. Compare model categories, not just brand names.
3. Recommend a primary model and fallback option.
4. Explain what to test before committing.

Output format:
Return a decision table and one final recommendation.
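
One way to turn the requirements above into a decision table is a simple weighted score per model category. The weights and 1-5 scores below are placeholders that show the method, not benchmarks:

```python
# Illustrative weighted scoring of model categories against feature requirements.
requirements = {"accuracy": 0.4, "latency": 0.2, "context": 0.1, "budget": 0.2, "privacy": 0.1}

categories = {
    "frontier":     {"accuracy": 5, "latency": 3, "context": 5, "budget": 2, "privacy": 3},
    "small_hosted": {"accuracy": 3, "latency": 5, "context": 3, "budget": 5, "privacy": 3},
    "open_weight":  {"accuracy": 3, "latency": 4, "context": 3, "budget": 4, "privacy": 5},
}

for name, scores in categories.items():
    total = sum(requirements[k] * scores[k] for k in requirements)
    print(f"{name}: {total:.2f}")
```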

Design a model fallback plan

Use this to avoid outages, runaway costs, or quality drops after launch.

You are designing a model fallback plan.

Primary model:
[Model]

Feature:
[Feature]

Failure modes to consider:
- Provider outage
- Rate limit
- High latency
- Cost spike
- Low-quality response

Create:
1. Fallback model choice
2. When to switch
3. What quality tradeoff users may notice
4. What to log
5. What alert should trigger human review

Output format:
Return a fallback policy with clear if/then rules.
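
A minimal sketch of what the resulting if/then rules can look like in code, assuming primary_call and fallback_call are placeholders for your existing client functions and the latency budget is illustrative:

```python
import time

# Fallback policy: try the primary model, switch to the fallback on
# outages, rate limits, or timeouts, and log slow responses for review.
LATENCY_BUDGET_S = 10.0  # assumed threshold

def generate_with_fallback(prompt, primary_call, fallback_call, logger):
    start = time.monotonic()
    try:
        reply = primary_call(prompt, timeout=LATENCY_BUDGET_S)
        if time.monotonic() - start > LATENCY_BUDGET_S:
            logger.warning("primary model exceeded latency budget")
        return reply, "primary"
    except Exception as exc:  # provider outage, rate limit, timeout
        logger.error("primary model failed: %s", exc)
        return fallback_call(prompt), "fallback"
```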

AI App Design Prompts

Turn an AI idea into an MVP workflow

Use this before building a feature that is still vague.

You are an AI product designer.

Goal:
Turn this AI feature idea into a testable MVP workflow.

Idea:
[Describe the idea]

Target user:
[Who uses it]

Constraints:
- Keep the MVP small.
- Avoid unnecessary automation.
- Make success measurable.

Workflow:
1. Define the user job.
2. Break the AI workflow into steps.
3. Identify required inputs and outputs.
4. Decide where human review is needed.
5. Define one measurable success metric.

Output format:
Return a workflow diagram in text plus an MVP scope table.

Design a RAG feature safely

Use this for document assistants, knowledge base bots, and internal search tools.

You are a RAG system design reviewer.

Goal:
Design a retrieval-augmented generation feature that is useful and safe.

Use case:
[Describe the use case]

Documents:
[Describe document types]

Constraints:
- Cite sources when possible.
- Do not invent facts outside retrieved context.
- Handle missing or conflicting context.

Review:
1. Document ingestion plan
2. Chunking strategy
3. Retrieval filters
4. Answer format
5. Failure behavior
6. Evaluation cases

Output format:
Return a practical implementation checklist.
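
For the chunking step, a simple fixed-size strategy with overlap is a common baseline the checklist can compare against alternatives. The sizes below are illustrative, not recommendations:

```python
# Fixed-size chunking with overlap, one of several strategies worth evaluating.
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks
```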

Evaluation Prompts

Create an AI feature test set

Use this to create realistic evaluation examples before shipping.

You are an AI evaluation designer.

Goal:
Create a small but useful test set for this AI feature.

Feature:
[Describe the feature]

Expected behavior:
[Describe what good output looks like]

Create test cases for:
1. Common successful requests
2. Ambiguous requests
3. Missing information
4. Edge cases
5. Unsafe or out-of-scope requests
6. Expensive long-context requests

Output format:
Return a table with: input, expected behavior, failure risk, and pass criteria.
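
A sketch of how the resulting table can be stored so it is replayable against the feature; field names mirror the requested columns, and judge is a placeholder for a human review step or an automated grader:

```python
# Example test set structure; the cases shown are illustrative.
test_cases = [
    {
        "input": "Summarize this 2-page contract",
        "expected_behavior": "Accurate summary citing section numbers",
        "failure_risk": "Hallucinated clauses",
        "pass_criteria": "No claims absent from the source document",
    },
    {
        "input": "What is the meaning of life?",
        "expected_behavior": "Politely declines as out of scope",
        "failure_risk": "Off-topic answer",
        "pass_criteria": "Response redirects to supported tasks",
    },
]

def run_suite(call_feature, judge):
    # call_feature and judge are placeholders for your app and grading step.
    return [(case["input"], judge(call_feature(case["input"]), case)) for case in test_cases]
```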

Review model output quality

Use this to compare multiple model outputs against the same task.

You are an AI output evaluator.

Task:
[Original user task]

Output A:
[Paste output A]

Output B:
[Paste output B]

Evaluation criteria:
- Correctness
- Completeness
- Clarity
- Safety
- Usefulness
- Cost efficiency

Output format:
Score each output from 1-5, explain the difference, and choose the better output for production use.
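
If the 1-5 scores are recorded in code, comparisons stay consistent across review rounds. A minimal sketch, with example scores filled in purely for illustration:

```python
# Record rubric scores so A/B comparisons are repeatable.
CRITERIA = ["correctness", "completeness", "clarity", "safety", "usefulness", "cost_efficiency"]

def average_score(scores: dict[str, int]) -> float:
    assert set(scores) == set(CRITERIA) and all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values()) / len(scores)

output_a = {"correctness": 4, "completeness": 3, "clarity": 5, "safety": 5, "usefulness": 4, "cost_efficiency": 3}
output_b = {"correctness": 5, "completeness": 4, "clarity": 4, "safety": 5, "usefulness": 4, "cost_efficiency": 2}
print("A:", average_score(output_a), "B:", average_score(output_b))
```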

Launch Checklist Prompts

Pre-launch AI feature review

Use this as a final review before sending real users to an AI feature.

You are an AI product launch reviewer.

Goal:
Review whether this AI feature is ready for public users.

Feature:
[Describe feature]

Current implementation:
[Describe implementation]

Review areas:
1. User value
2. Model choice
3. Cost controls
4. Error handling
5. Privacy and data handling
6. Logging and analytics
7. Support plan
8. Clear user expectations

Output format:
Return: launch ready / needs fixes / do not launch, followed by the top 10 fixes.

Write AI feature limitations

Use this to create honest user-facing boundaries without sounding defensive.

You are writing user-facing AI feature limitations.

Feature:
[Describe feature]

Known limitations:
[List limitations]

Tone:
Clear, calm, helpful, and non-technical.

Write:
1. A short limitation notice
2. What the AI can help with
3. What users should verify themselves
4. When to contact support or use a human workflow

Output format:
Return concise website copy suitable for a product page or help center.

Support Bot Prompts

Design a support bot system prompt

Use this when creating a customer support assistant from docs or FAQs.

You are a customer support assistant for [product].

Goal:
Help users solve product questions using only approved support information.

Rules:
- Ask a clarifying question if the request is unclear.
- Do not invent policies, prices, refunds, legal terms, or technical guarantees.
- If the answer is not in the provided context, say what is missing and suggest the next step.
- Keep answers concise and actionable.

Workflow:
1. Identify the user issue.
2. Search the provided context.
3. Answer with steps.
4. Include links or document references when available.
5. Escalate when confidence is low.

Output format:
Return a helpful support reply and an internal confidence note.
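
A sketch of how a system prompt like the one above can be combined with retrieved support context before each model call; retrieve_docs and call_model are placeholders for your search index and model client:

```python
# Wire the support system prompt to retrieved context on every request.
SYSTEM_PROMPT = "You are a customer support assistant for [product]. ..."

def answer_ticket(question: str, retrieve_docs, call_model) -> str:
    docs = retrieve_docs(question, top_k=3)  # placeholder retrieval function
    context = "\n\n".join(d["text"] for d in docs)
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT + "\n\nApproved context:\n" + context},
        {"role": "user", "content": question},
    ]
    return call_model(messages)  # placeholder model client
```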

Turn support tickets into FAQ entries

Use this to build useful support content from repeated customer issues.

You are a support knowledge base editor.

Goal:
Turn repeated support tickets into clear FAQ entries.

Tickets:
[Paste anonymized tickets]

Constraints:
- Remove personal information.
- Group similar problems.
- Write answers users can follow without contacting support.
- Flag missing product documentation.

Output format:
Return FAQ entries with: question, short answer, step-by-step answer, related tags, and documentation gaps.