Prompt Genie: Can you help fix and improve this tricky prompt?

I’ve been struggling to craft a clear, effective prompt that gets consistent, useful responses from an AI assistant. No matter how I tweak it, the output is either too vague or misses key details I need. I’m looking for advice on how to rewrite and optimize this prompt so it’s more understandable, specific, and reliable for complex tasks. Any tips, examples, or best practices to turn this into a strong, high-performing AI prompt would really help.

You are fighting two things at once: vague goals and vague instructions. Here is a structure that tends to produce consistent answers.

  1. Start with role and goal
    Example:
“You are a domain expert AI. Your goal is to help me design a step-by-step plan for X that I can apply in real life.”

  2. Define audience and output format
    Example:
    “Assume I am a beginner with basic knowledge.
    Write the answer as:

  1. Short summary
  2. Key steps list
  3. Example
  4. Common mistakes”

  3. Set constraints and style
    Example:
    “Use clear, concrete language.
    Avoid buzzwords.
    Use numbered lists for steps.
    Keep each step under 3 sentences.
    If you lack data, say so and explain the assumption.”

  4. Tell it what to focus on and what to ignore
    Example for “too vague” responses:
    “Focus on practical actions I can do in a week.
    Skip history, philosophy, and generic motivation.
    Give at least 3 concrete examples with numbers.”

  5. Force it to check its own work
    Example:
    “At the end, add a section called ‘Sanity check’.
    List 3 ways your advice might fail or not fit my situation.
    Then suggest what I should ask next.”

  6. Provide a mini example of what “good” looks like
    Even a small sample helps a lot.
    Example:
    “A good answer looks like this pattern:
    Step 1, define X with 1 sentence.
    Step 2, give 3 numbered options with pros and cons.
    Step 3, short action plan for 7 days.”

Put it all together into one reusable prompt:

“You are an expert in [TOPIC]. Your goal is to help me build a practical plan for [MY GOAL].
Assume I am [LEVEL].
Output format:

  1. One sentence summary
  2. Numbered action steps
  3. Concrete example with numbers
  4. Common mistakes
  5. Sanity check and what I should ask next

Constraints:
Use clear language.
Avoid vague phrases like ‘it depends’.
If you lack information, state the assumption.
Focus on what I can do within [TIMEFRAME].”

Then you tweak only the brackets instead of rewriting from scratch every time.
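If you keep that template in a file or string, filling the brackets can be made mechanical. A minimal Python sketch (the slot names and example values are just illustrations, and the template is abbreviated):

```python
# Abbreviated copy of the reusable prompt above; slots use the same [NAME] style.
TEMPLATE = """You are an expert in [TOPIC]. Your goal is to help me build a practical plan for [MY GOAL].
Assume I am [LEVEL].
Focus on what I can do within [TIMEFRAME]."""

def fill(template: str, slots: dict) -> str:
    """Replace each [NAME] placeholder with its value."""
    for name, value in slots.items():
        template = template.replace(f"[{name}]", value)
    return template

prompt = fill(TEMPLATE, {
    "TOPIC": "strength training",
    "MY GOAL": "a beginner gym routine",
    "LEVEL": "a complete beginner",
    "TIMEFRAME": "4 weeks",
})
```

This way the template file stays untouched and only the dictionary changes per task.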

If you share your current prompt, people here can point at the exact parts that confuse the model.

You’re not just fighting “vague goals + vague instructions” like @sternenwanderer said. You’re also probably fighting vague inputs from yourself each time. Even a perfect meta‑prompt won’t save you if the task prompt you feed into it keeps changing shape.

A different angle: separate your system into 3 layers and lock two of them.

1. Create a fixed “meta shell” you almost never touch

Instead of rewriting instructions every time, build one stable wrapper the model always sees, and treat your actual question as data.

Example meta shell (you keep this in a text file and reuse it):

You are an AI assistant.
Your objective: produce specific, actionable, falsifiable advice.
If the user’s request is underspecified, ask up to 3 clarifying questions before answering.
Always:
• Prioritize concrete details over general theory
• Prefer examples with numbers or explicit scenarios
• Flag any uncertainty and state what extra info would improve the answer
• Check the response for missing steps or hidden assumptions

Notice:
No roles like “world‑class expert in X.” They usually add more fluff than quality. The key is the behavior constraints: ask questions, be concrete, surface assumptions.

2. Standardize how you describe your task

Most people lose consistency here. Each time, they phrase the goal slightly differently, so the outputs drift.

Make yourself a “task template” you fill in each time:

Goal:
I want to achieve: [clear, measurable outcome]

Context:
My situation / constraints: [3 to 7 bullet points]

Success criteria:
The answer is useful if it:

  1. [e.g., gives a 7‑day plan]
  2. [e.g., avoids generic tips like “be consistent”]
  3. [e.g., includes at least one worked example]

Off‑limits:
Please skip: [e.g., long theory, history, mindset pep talk]

You paste this into the “user” part every time and just edit the bracketed parts. That alone will eliminate a lot of variance.
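The three-layer split (fixed shell, task template, your question) can also live in code, so the shell and template literally cannot drift between runs. A rough sketch, with the shell abbreviated and the message format written as generic chat-style dicts you would adapt to whatever API you actually call:

```python
# Layer 1: the fixed shell, never edited per task (abbreviated here).
SHELL = """You are an AI assistant.
Your objective: produce specific, actionable, falsifiable advice.
If the user's request is underspecified, ask up to 3 clarifying questions before answering."""

# Layer 2: the task template; only the fields change per task.
TASK_TEMPLATE = """Goal:
I want to achieve: {goal}

Context:
My situation / constraints:
{context}

Success criteria:
The answer is useful if it:
{criteria}

Off-limits:
Please skip: {off_limits}"""

def build_messages(goal, context, criteria, off_limits):
    """Layer 3: render the filled template as a chat message list."""
    task = TASK_TEMPLATE.format(
        goal=goal,
        context="\n".join(f"- {c}" for c in context),
        criteria="\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1)),
        off_limits=off_limits,
    )
    return [
        {"role": "system", "content": SHELL},
        {"role": "user", "content": task},
    ]
```

Locking layers 1 and 2 in code means the only variance left is the bracketed content itself.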

3. Force a “clarity handshake”

This is where I slightly disagree with the idea of always giving a full structure up front. For tricky prompts, it’s often better to force the model to reflect your request back to you before it answers.

Add this to your fixed shell:

Before giving the final answer:

  1. Briefly restate what you think the user is asking for.
  2. List any ambiguities.
  3. If ambiguities are minor, make explicit assumptions and proceed.
  4. If ambiguities are major, ask targeted questions.

That “restatement step” is massively underrated. It lets you quickly see “oh, the model thinks I care about X when I actually care about Y” and you can correct it.

4. Use “forbidden patterns” instead of only “requested patterns”

Instead of just telling it what to do, also tell it what not to do, based on the failures you keep seeing.

Example section you can reuse:

Avoid:
• Generic phrases like “it depends” without then stating what it depends on
• Advice that applies to literally any person in any situation
• Repeating the question back as filler
• Steps that are just rephrased versions of “do research” or “be consistent”

Replace them with:
• Conditional answers: “If A, then X; if B, then Y”
• At least one concrete worked scenario

This hits the “too vague” pattern directly.
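You can even lint responses for these patterns automatically before you bother reading them. A small sketch; the phrase list is illustrative and should grow as you spot new failure modes:

```python
import re

# Regexes for forbidden patterns; extend this list as failures recur.
FORBIDDEN = [
    r"\bit depends\b",
    r"\bbe consistent\b",
    r"\bdo (your |some )?research\b",
]

def lint(response: str) -> list:
    """Return the forbidden patterns found in a response (case-insensitive)."""
    return [p for p in FORBIDDEN if re.search(p, response, re.IGNORECASE)]

hits = lint("Well, it depends on your goals. Just be consistent.")
```

Any non-empty result is a signal to regenerate or to add another rule to the shell.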

5. Turn your current “tricky prompt” into a test case

Take the messy prompt you’re struggling with and do this:

  1. Run it through your new shell as-is.
  2. Highlight every sentence in the answer that feels:
    • Too generic
    • Off-target
    • Missing a precondition
  3. For each problem, write a short “anti-pattern” rule and add it to your shell.

Example:
If you keep getting “learn about your audience” as advice, add:

Do not give advice of the form “learn about X” without specifying:
• What exactly to learn
• How to gather that information
• What to do differently based on two possible findings

This is how you iteratively “train” your prompt instead of just tweaking random lines.
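That loop is easier to sustain if each new anti-pattern rule gets appended to a persistent shell file instead of living in your head. A sketch, with the file path purely illustrative:

```python
from pathlib import Path

# The fixed shell lives in one file; rules only ever get appended.
SHELL_PATH = Path("shell.txt")

def add_rule(rule: str) -> None:
    """Append an anti-pattern rule to the shell file, skipping duplicates."""
    existing = SHELL_PATH.read_text() if SHELL_PATH.exists() else ""
    if rule not in existing:
        with SHELL_PATH.open("a") as f:
            f.write(rule.rstrip() + "\n")

add_rule("Do not give advice of the form 'learn about X' without specifying what to learn.")
```

Deduplication matters here: you will rediscover the same failure more than once, and the shell should not bloat with repeats.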

6. Quick generic template you can adapt

Here’s a full prompt you can copy and modify. It’s designed to be a wrapper where your real question goes in the middle:

You are an AI assistant. Provide specific, practical, and testable recommendations.

Behavior rules:
• If the request is unclear, ask up to 3 focused clarifying questions first.
• Favor concrete actions over abstract principles.
• When giving advice, use “If [condition], then [action]” patterns where relevant.
• Make assumptions explicit when needed.

Forbidden patterns:
• Do not use vague phrases like “it depends” without listing the key variables.
• Do not give generic productivity or motivation tips unless explicitly requested.
• Do not explain basic definitions the user likely knows unless they ask.

Before answering:

  1. Restate the user’s request in 2–3 sentences.
  2. List any important ambiguities.
  3. If ambiguities are minor, state your assumptions and proceed.

Now here is the user’s request and context:
[PASTE YOUR TASK TEMPLATE + QUESTION HERE]

Tweak only the bracketed parts and your own “forbidden patterns” list as you notice recurring issues.

If you want, paste your exact tricky prompt next time and we can dissect it line by line and turn every failure into one more rule in your shell.