Prompting Is a Literacy Skill

Writing a good prompt has more in common with clear writing than with programming. Here's what that means for how you should approach it.

Writing a good prompt has more in common with clear writing than with programming. The engineers figured this out early; the rest of us are catching up.

Clarity First

The model cannot infer what you meant but didn't say. If your mental model of the task has ambiguity in it, the prompt will inherit that ambiguity. The first discipline of prompting is the same as the first discipline of writing: know what you're trying to say before you say it.

Context Is Load-Bearing

LLMs have no memory of your previous conversations (in most default configurations). Every prompt is, from the model's perspective, the beginning of the world. The context you provide isn't decoration — it's the frame through which the model interprets your instruction.

"Write me a summary" produces something very different from "Write me a 3-paragraph summary for a general audience of this technical paper about transformer architectures, focusing on practical implications rather than mathematical details."
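One way to internalize this is to treat the prompt's context as explicit fields rather than an afterthought. A minimal sketch, using a hypothetical `build_summary_prompt` helper (the function name and parameters are illustrative, not from any library):

```python
def build_summary_prompt(text, *, paragraphs, audience, focus):
    """Assemble a summary prompt with explicit constraints.

    Hypothetical helper: each keyword argument forces the writer to
    decide a detail the model would otherwise have to guess.
    """
    return (
        f"Write a {paragraphs}-paragraph summary for {audience} "
        f"of the following text, focusing on {focus}.\n\n{text}"
    )


# The vague version leaves every decision to the model.
vague = "Write me a summary"

# The specific version pins down length, audience, and emphasis.
specific = build_summary_prompt(
    "<paper text here>",
    paragraphs=3,
    audience="a general audience",
    focus="practical implications rather than mathematical details",
)
```

The point isn't the helper itself; it's that naming the fields (length, audience, focus) makes you notice which decisions you were silently delegating.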

Role and Tone Are Real Levers

Asking the model to adopt a persona or perspective genuinely changes output quality for tasks where expertise matters. "As an experienced copy editor reviewing this paragraph" produces better editing feedback than just "edit this."

This isn't a trick. It's giving the model additional context about what kind of output is appropriate.
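In chat-style APIs, persona context typically travels as a system message alongside the user's task. A sketch of that pattern, using the common role-tagged message list (the `with_persona` helper is hypothetical; the message shape follows the widely used system/user convention, not any one provider's API):

```python
def with_persona(persona, task):
    """Wrap a task in a role-tagged message list, persona first.

    Hypothetical helper: puts the persona in the system slot so it
    frames how the user's task is interpreted.
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]


messages = with_persona(
    "You are an experienced copy editor reviewing this paragraph.",
    "Edit this: The quick brown fox jump over the lazy dog.",
)
```

Structurally, the persona is just more context, delivered in the slot most models weight as framing for everything that follows.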

Iteration Is Normal

The best prompters don't write perfect prompts on the first try. They iterate. They read the output, identify where the model's interpretation diverged from their intent, and refine accordingly.

Treat the first response as a draft, not a verdict.
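The iterate-and-refine loop can even be treated as data: keep the original prompt and append the clarifications each draft surfaces. A sketch under that assumption (the `refine` helper is hypothetical, and the feedback notes here stand in for what you'd notice while reading a real draft):

```python
def refine(prompt, feedback_notes):
    """Append clarifications discovered by reading a draft response.

    Hypothetical helper: each note records a spot where the model's
    interpretation diverged from the writer's intent.
    """
    clarifications = "\n".join(f"- {note}" for note in feedback_notes)
    return f"{prompt}\n\nClarifications:\n{clarifications}"


v1 = "Summarize this report."

# After reading the first draft, two gaps in the original prompt emerge.
v2 = refine(v1, [
    "keep it under 200 words",
    "skip the methodology section",
])
```

Keeping each version around makes the refinement visible: the diff between v1 and v2 is a record of what you failed to say the first time.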