How to PROMPT

Posted December 5, 2025

Read this guide. It will make you better at working with our AI chat interface. Notice this message does not ask "Would you like to read this guide?" — that distinction matters.

Instruct, Don't Ask

Use command language. Tell the model what to do.

When you INSTRUCT, you remove ambiguity. You are not giving the model agency to interpret, hesitate, or decline. If you leave the model room to say no, it may treat declining as a valid end to the task.

āŒ Weak (Asking) āœ… Strong (Instructing)
"Could you maybe pull some data on downsells?" "Pull all downsell transactions from Q4."
"Would it be possible to see our top customers?" "List the top 50 customers by annual revenue."
"Do you think we should segment by region?" "Segment the output by region."

The model is not human. It does not need politeness to perform. It needs clarity. If you want something, instruct the model to achieve it. Do not ask if something is "better" unless you are genuinely unsure and want analysis.

When in doubt: instruct > ask.

Be Explicit

The model cannot read your mind — it reads your prompt. Vague input yields vague output. State your definitions, constraints, and expectations directly.

Business logic example:

Terms like "downsell," "churn," or "high-value" mean different things to different teams.

āŒ Vague āœ… Explicit
"Show me downsells" "Show me customers who moved from a higher pricing tier to a lower pricing tier. This is what we call a 'downsell.'"
"Find high-value customers" "Find customers with annual contract value over $50,000."
"Who's at risk of churn?" "Find customers who have not logged in for 60+ days AND whose contract renews in the next 90 days."

Coding example:

Technical requests need the same precision. Don't assume the model knows your stack, conventions, or constraints.

āŒ Vague āœ… Explicit
"Make it faster" "Optimize for runtime. The function MUST handle 100k rows in under 2 seconds."
"Add error handling" "Catch connection timeouts and retry up to 3 times with exponential backoff."
"Write a function to process this" "Write a Python function that accepts a list of dicts, filters to records where status == 'active', and returns a pandas DataFrame."

Explicit requirements consistently yield better outputs. If something has a specific meaning or constraint, you MUST define it.
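For instance, the explicit retry requirement above is concrete enough to implement and to verify. A minimal sketch of what it should produce (the exception type, delay values, and function names here are assumptions, not a prescribed implementation):

```python
import time

def fetch_with_retry(fetch, max_retries=3, base_delay=1.0):
    """Call fetch(), retrying on timeout with exponential backoff.

    Retries up to max_retries times, waiting base_delay * 2**attempt
    seconds between attempts, as the explicit prompt specified.
    """
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except TimeoutError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error
            time.sleep(base_delay * 2 ** attempt)
```

"Add error handling" could mean a dozen things. "Retry up to 3 times with exponential backoff" means exactly one.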

Use Lists

Put individual tasks in bullet points. This gives the model a set of itemized objectives, and it will attempt to satisfy each one explicitly.

Example prompt:

Build a report on customer health. Requirements:

  • Include only customers with contracts over $10,000
  • Define "at-risk" as: no login in 45+ days OR support tickets marked "escalation"
  • Group customers by account manager
  • Sort by contract renewal date, soonest first
  • The output MUST include last login date and NPS score

Each bullet becomes a checklist item the model works to satisfy.
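To see why itemizing works, note that each bullet above maps onto one discrete, checkable operation. A minimal pandas sketch, with hypothetical column names, of the checklist the model is being handed:

```python
import pandas as pd

# A sketch of the checklist the bullets above define. All column names
# (contract_value, days_since_login, etc.) are assumptions.
def customer_health_report(df: pd.DataFrame) -> pd.DataFrame:
    report = df[df["contract_value"] > 10_000]                  # bullet 1
    report = report.assign(                                     # bullet 2
        at_risk=(report["days_since_login"] >= 45)
        | (report["ticket_status"] == "escalation")
    )
    report = report.sort_values(                                # bullets 3-4
        ["account_manager", "renewal_date"]
    )
    return report[["account_manager", "customer", "at_risk",    # bullet 5
                   "last_login_date", "nps_score", "renewal_date"]]
```

A vague paragraph asking for "a customer health report" would leave every one of those five decisions to the model.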

Provide Examples (One-Shot Learning)

Show the model what you want. A single example teaches format, tone, and structure faster than paragraphs of description.

Example prompt:

Summarize each account's status. Follow this format:

Acme Corp — $125,000 ACV
Health: 🟢 Good — logged in 3 days ago, NPS 9
Next step: Upsell conversation scheduled for Dec 12

Now do this for all accounts in the Enterprise segment.

This is called one-shot prompting. Providing examples helps the model learn your expectations immediately.
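Mechanically, a one-shot prompt is just the instruction, a single worked example, and the new task, concatenated in that order. A minimal sketch (the account details are the hypothetical example from above):

```python
# The worked example teaches format, tone, and structure in one shot.
EXAMPLE = (
    "Acme Corp — $125,000 ACV\n"
    "Health: 🟢 Good — logged in 3 days ago, NPS 9\n"
    "Next step: Upsell conversation scheduled for Dec 12"
)

def one_shot_prompt(task: str) -> str:
    # instruction + one worked example + the actual task
    return (
        "Summarize each account's status. Follow this format:\n\n"
        + EXAMPLE + "\n\n"
        + task
    )

print(one_shot_prompt("Now do this for all accounts in the Enterprise segment."))
```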

Edit Your First Prompt — Don't Just Iterate Forward

One-shot performance is typically better than stacking follow-up messages. Here's why:

  • Context provided up front shapes the model's initial reasoning
  • Follow-up messages create layered, sometimes conflicting context
  • Revising the original changes the model's output from its inception, instead of patching it after the fact

When to revise your original prompt:

  • The output missed core requirements
  • You realized you forgot to define a key term (like "downsell")
  • You want a fundamentally different approach

When to iterate in the chat:

  • The output is 80%+ correct
  • You want to tweak specific sections (e.g., "Add a column for last contact date")
  • You're exploring variations

Observe the output. Identify what's missing. Go back and update your instructions until you're satisfied.

Use Formatting to Signal Importance

This guide uses bold, italics, ALL CAPS, and lists intentionally. These are signals.

  • Bold = key concepts the model should not miss
  • Italics = emphasis or nuance
  • ALL CAPS = use sparingly for critical terms only (e.g., MUST, NEVER). Overusing caps dilutes their impact — if everything is loud, nothing is.
  • Words like "must," "always," "never" = hard requirements

The model parses markdown. Use it.

Consider Making a Plan First

For complex tasks, ask the model to create a plan before executing. This gives you a checkpoint to review and course-correct before investing in the full output.

How to use plans effectively:

  • Break plans into phases. Ask for discrete steps, not one monolithic block. Example: "Create a plan to build this dashboard. Break it into phases."
  • Specify what success looks like. Tell the model what outputs to expect and how to validate them. If you don't define success, the model can't know when it's achieved it.
  • Provide verification commands. Give the model specific commands it can run to check its work. Example: "Test your implementation by running bun run dry-run" or "Validate the changes using bun run tsc."
  • Iterate until satisfied. Review the plan. If a phase is unclear or missing, instruct the model to revise. The plan is cheap — the execution is expensive.
  • Throw it away if it's wrong. If the plan fundamentally misses the mark, don't try to salvage it. Start fresh with a clearer prompt. A bad plan leads to bad output.
  • Embed learnings from failures. If execution fails, don't just tell the model to fix the current broken state. Go back and update your original prompt with what you learned. Then rerun.

When to add more context:

If the output is far off from what you want, the model needs more information, not more corrections. Write tests it can execute. Write an example file showing what you're trying to do. Then try again.
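For example, an executable test pins down the target behavior more precisely than prose can. A sketch, assuming a hypothetical retention_rate function the model is being asked to write:

```python
# test_retention.py: a hypothetical, executable spec for the model.
# `retention_rate` does not exist yet; the model is being asked to write it.
from retention import retention_rate

def test_basic_rate():
    # 80 of 100 period-start customers are still active at period end.
    assert retention_rate(start_customers=100, retained_customers=80) == 0.8

def test_empty_cohort():
    # Edge case: an empty starting cohort must not divide by zero.
    assert retention_rate(start_customers=0, retained_customers=0) == 0.0
```

A failing test tells the model exactly what is wrong; a frustrated follow-up message does not.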

Example prompt:

I need to build a customer retention report. Before writing any code:

  • Create a phased plan for how you'd approach this
  • Phase 1 should focus on data identification
  • Phase 2 should cover transformations
  • Phase 3 should define the final output format
  • Each phase MUST include how you'll verify success

Do not execute until I approve the plan.

Plans give you control. You steer the direction before the model commits to an approach.

TL;DR

  • Instruct. Don't ask.
  • Be explicit. Define your terms and constraints — whether it's business logic or code requirements.
  • Use lists. Itemize your objectives.
  • Provide examples. Show the format you want.
  • Edit your first prompt. One-shot > iteration chains.
  • Use formatting. Bold, caps, and "must" signal priority — but use caps sparingly or they lose their power.
  • Consider a plan. Break it into phases, define success criteria, provide verification commands, iterate until it's right, throw it away if it's not.

This message did not ask if you found it helpful. It is helpful. It told you it was.