PART 02 Workshop Manual

The single skill
that changes
everything

Every AI tool you use — ChatGPT, Claude, Gemini, Perplexity — lives or dies by the quality of what you put in. Prompt engineering isn't about memorising formulas. It's about learning to communicate clearly with a system that takes you completely literally.

Youthwork.AI · Prompt Engineering · Part 02
Why prompting matters

Same tool.
Same question.
Wildly different results.

Two youth workers. One tool. The difference between a generic 5-point list and a full 90-minute workshop plan with icebreakers, activities, and facilitator notes. The difference is entirely the prompt.

When most people start using AI, they treat it like a search engine — type a few keywords and hope for the best. The results are mediocre, they conclude the tool isn't that useful, and they move on. This is the most common and most avoidable mistake in AI adoption.

AI language models are not search engines. They don't rank existing content — they generate new responses based entirely on the context you provide. The richer, more specific, and better-structured your input, the better your output, often dramatically so. This relationship is more direct than with almost any other tool you use: the quality of your thinking going in is reflected almost exactly in the quality of what comes back.

In practice this means that two youth workers asking ChatGPT the same question — "help me plan a session on digital citizenship" — can get results so different they feel like different tools entirely. One gets a generic 5-point list. The other gets a structured 90-minute workshop plan with icebreakers, differentiated activities for different learning styles, reflection prompts, and facilitator notes. The difference is not the tool. It is entirely the prompt.

"Prompt engineering is the new literacy. Just as learning to write a clear brief transformed what you could get from a freelancer, learning to write a clear prompt transforms what you can get from AI."

The good news: this skill is learnable in hours, not months. It follows consistent principles that transfer across every AI tool. You don't need to be technical. You need to be clear, specific, and willing to iterate. Everything in this section shows you exactly how.

The anatomy of a great prompt

Five ingredients.
One powerful prompt.

Every strong prompt contains some combination of these five elements. You don't always need all five — but knowing each one and when to use it will transform your results immediately.

example-prompt.txt — annotated
[ROLE] You are an experienced youth worker and trainer writing for a professional development workshop. [CONTEXT] The participants are 15 youth workers from 6 different countries, mixed English proficiency, working in non-formal education settings. [TASK] Create a 45-minute session plan on recognising signs of online radicalisation in young people. [FORMAT] Structure it as: Learning objectives (3 bullets), Warm-up activity (10 min), Main activity (25 min), Reflection (10 min). Include facilitator notes. [CONSTRAINTS] No jargon. Sensitive and trauma-informed tone. Avoid naming specific extremist groups.
Role · Context · Task · Format · Constraints
01 Role

Tell the AI who it is. The role instruction is the fastest way to shift the register, expertise level, and perspective of any response. "You are a youth worker" produces completely different output to "You are an academic researcher" or "You are writing for a 14-year-old". The model draws on different parts of its training data depending on the role you assign.

Be specific: "an experienced Erasmus+ project coordinator" outperforms "a professional". Roles work especially well for writing tasks, where voice and audience awareness matter enormously.

Example roles for youth work
  • "You are a safeguarding lead reviewing this policy for gaps"
  • "You are a funder reading this grant application for the first time"
  • "You are a 16-year-old encountering this information online"
  • "You are a youth work trainer designing non-formal education activities"
02 Context

Give the AI your situation. AI models have no knowledge of your organisation, your participants, your country's context, or your constraints — unless you tell them. Context is the ingredient most beginners leave out, and its absence is responsible for most generic, unhelpful AI output.

Think of it as the briefing you'd give a skilled freelancer on their first day: who are the people involved, what's the setting, what has happened before, what do they already know?

Context to regularly include
  • Who the audience is (age, background, language, prior knowledge)
  • What the setting is (online workshop, face-to-face, large group, 1:1)
  • What you've already tried or what exists
  • Any relevant constraints (time, resources, cultural sensitivities)
  • Your organisation's name and mission if relevant
03 Task

Be precise about what you want. This sounds obvious but is where most prompts fail. "Help me with my project report" is not a task — it's a gesture. "Write a 400-word executive summary of the attached project report, focusing on outcomes for young people and impact evidence" is a task.

Use action verbs: write, summarise, compare, identify, rewrite, translate, structure, explain, critique.

Weak vs strong task instructions
  • ✗ "Tell me about digital literacy"
    ✓ "Explain three ways digital literacy is different from digital safety, in plain language, for a parent information evening"
  • ✗ "Help with my CV"
    ✓ "Rewrite the experience section of this CV to emphasise transferable skills for a youth work role, keeping the same factual content"
  • ✗ "Write a social media post"
    ✓ "Write 3 Instagram caption options for a photo from our youth exchange, each under 150 characters, that convey inclusion and international friendship without using the word 'amazing'"
04 Format

Tell the AI how to structure the output. Without format instructions, AI defaults to whatever structure it thinks is appropriate — which is often not what you need. Specify: length, structure, tone, and any layout requirements.

Format instructions are especially important for outputs you plan to use directly — a section heading structure you didn't expect means editing time you didn't budget for.

Useful format instructions
  • "No bullet points — write in flowing paragraphs"
  • "Maximum 200 words"
  • "Use these exact section headings: [list them]"
  • "Output as a table with columns: Activity | Time | Materials | Notes"
  • "Give me 5 options, each one sentence, so I can choose"
05 Constraints

Tell the AI what NOT to do. Constraints are the most underused element in prompting. They prevent the model from making assumptions that waste your time. Negative instructions are just as powerful as positive ones, and often more efficient.

Practical constraints for youth workers
  • "Do not include statistics without noting they need verification"
  • "Avoid any language that assumes a Western cultural context"
  • "Do not suggest activities that require expensive materials"
  • "Write as if the reader has never used AI before"
  • "Don't start with 'Certainly!' or 'Of course!' — just begin the content"
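For readers who end up scripting their prompts — for example, to reuse one structure across many sessions — the five ingredients above can be sketched as a simple template. This is an illustrative sketch only; the `build_prompt` function and its field names are our own, not part of any AI tool.

```python
def build_prompt(role, context, task, fmt, constraints):
    """Assemble the five ingredients into a single prompt string.

    Empty ingredients are skipped, mirroring the advice that you
    don't always need all five.
    """
    parts = [
        ("ROLE", role),
        ("CONTEXT", context),
        ("TASK", task),
        ("FORMAT", fmt),
        ("CONSTRAINTS", constraints),
    ]
    return " ".join(f"[{label}] {text}" for label, text in parts if text)

prompt = build_prompt(
    role="You are an experienced youth worker and trainer.",
    context="Participants are 15 youth workers from 6 countries, mixed English proficiency.",
    task="Create a 45-minute session plan on recognising signs of online radicalisation.",
    fmt="Structure it as: objectives, warm-up, main activity, reflection.",
    constraints="No jargon. Sensitive, trauma-informed tone.",
)
```

The same structure works typed by hand, of course — the point is that the five labels travel together.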
Prompt library

Ready-made prompts for
real youth work situations

Copy any of these directly, or use them as starting points. Every prompt follows the five-ingredient structure from above. Replace text in [BRACKETS] with your own details.
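If you reuse these templates often, the [BRACKETS] convention lends itself to a small helper that fills in your details and flags anything you forgot. A minimal sketch, assuming Python; the `fill_template` helper is hypothetical, not part of any tool mentioned here.

```python
import re

def fill_template(template, values):
    """Fill [PLACEHOLDER] markers with your own details and report
    any placeholders that are still missing."""
    def sub(match):
        # Leave the marker untouched if no value was supplied.
        return values.get(match.group(1), match.group(0))
    filled = re.sub(r"\[([A-Z][A-Z /]*)\]", sub, template)
    missing = re.findall(r"\[([A-Z][A-Z /]*)\]", filled)
    return filled, missing

template = "Design a workshop for [NUMBER] participants aged [AGE RANGE] on [TOPIC]."
filled, missing = fill_template(template, {"NUMBER": "12", "AGE RANGE": "15-17"})
# 'missing' still lists TOPIC, reminding you to supply it before sending.
```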

Workshop planning
Session plan — from scratch
You are an experienced non-formal education facilitator. Design a 90-minute workshop for [NUMBER] participants aged [AGE RANGE] on the topic of [TOPIC]. The group has [PRIOR KNOWLEDGE LEVEL] familiarity with this topic. Structure it as: Objectives (3 bullets), Energiser (10 min), Main activity (50 min) broken into stages, Reflection (20 min), Closing (10 min). Include facilitation notes and any materials needed. Use non-formal, participatory methods throughout. Avoid lecture-style delivery.
Sets a clear role (facilitator), specifies exact time and structure, requires participatory methods, and prevents lecture delivery — all constraints that push the model toward genuinely usable output rather than generic content.
Workshop planning
Icebreaker generator
Generate 5 icebreaker activities for a group of [NUMBER] participants from [NUMBER] different countries, ages [AGE RANGE]. The group has just met. Each activity should take 5–10 minutes, require no materials, work with mixed language levels, and be culturally neutral. For each: name, instructions, purpose (what it builds), and one facilitator tip.
The no-materials and culturally-neutral constraints prevent the most common icebreaker problems. The structured output format (name, instructions, purpose, tip) means results are immediately usable without reformatting.
Grant writing
Theory of change paragraph
You are an experienced Erasmus+ grant writer. Write a 200-word theory of change paragraph for a project called "[PROJECT NAME]" that [BRIEF DESCRIPTION OF WHAT IT DOES]. The target group is [TARGET GROUP]. The expected change is [WHAT YOU WANT TO CHANGE]. Use the structure: current problem → our intervention → immediate outcomes → longer-term impact. Write in active voice, avoid jargon, and be specific about evidence of need. This will be read by a European funder.
The explicit logic chain structure (problem → intervention → outcomes → impact) matches how funders think. Specifying "European funder" adjusts the register. Word count and voice constraints prevent the over-padded output that AI often defaults to.
Grant writing
Project summary for non-specialists
You are a communications specialist. Rewrite the following project description for a general audience — someone with no knowledge of youth work or EU funding programmes. Maximum 150 words. Start with the problem, explain the solution clearly, and end with the impact. Avoid all acronyms. [PASTE YOUR ORIGINAL DESCRIPTION HERE]
Specifying a non-specialist reader prevents the model from defaulting to sector language. The problem → solution → impact structure ensures narrative flow. The acronym constraint catches jargon the model might not recognise as jargon.
Communication
Difficult email — setting limits
You are a professional and direct communicator. Help me write an email to [RECIPIENT TYPE] regarding [SITUATION]. The message needs to: [MAIN POINT 1], [MAIN POINT 2]. The tone should be firm but respectful. Maximum 150 words. No unnecessary softening or excessive apology. End with a clear next step or request.
"No unnecessary softening or excessive apology" is critical — AI often over-hedges in difficult communications. The word limit and required clear next step prevent the vague, long-winded output that makes difficult emails worse.
Communication
Multilingual participant information
Rewrite the following participant information text so that it is clear and accessible to someone whose first language is not English, reading at approximately B1 level. Use short sentences. Replace idioms with literal equivalents. Define any specialist terms the first time they appear. Keep all factual content exactly the same. [PASTE TEXT HERE]
The B1 level is a specific CEFR reference that the model understands well. "Replace idioms with literal equivalents" catches what generic simplification misses. The constraint to keep factual content identical prevents the model from summarising rather than simplifying.
Research
Evidence base builder
I am writing a funding application for a project addressing [ISSUE] among young people in [REGION/CONTEXT]. Summarise the current evidence base for this issue: key statistics, trends, and research findings from the last 5 years. Present as 4–5 bullet points suitable for a grant application. Note any figures that will need independent verification before use.
Asking the model to flag statistics needing verification is essential — it surfaces where hallucination risk is highest. Best used with Perplexity (which cites live sources) or with Claude/ChatGPT followed by manual verification of every figure.
Research
Policy briefing
You are a policy analyst summarising for a non-specialist youth work audience. Explain the key points of [POLICY/DOCUMENT NAME] in plain language. Focus on: what it means for youth work organisations, any funding or participation opportunities, key deadlines or requirements, and what action is recommended. Maximum 300 words. Use subheadings.
Specifying "what action is recommended" forces a practical conclusion rather than just description. The non-specialist audience constraint prevents policy jargon from appearing unchanged. Paste the policy document text directly into the chat for best results.
Content creation
Social media campaign
You are a social media manager for a youth organisation. Create a 5-post Instagram series about [TOPIC] aimed at young people aged [AGE RANGE]. For each post: caption (max 150 characters), 3 hashtag suggestions, and a brief description of what the accompanying visual should show. Tone: [TONE]. Do not use corporate language or clichés like "empowering" or "transformative".
Naming specific clichés to avoid ("empowering", "transformative") is much more effective than asking for "authentic" language — the model knows exactly what to avoid. The visual description output makes this immediately useful for briefing a designer or photographer.
Content creation
Newsletter section
Write a 200-word newsletter section announcing [WHAT YOU'RE ANNOUNCING] to our network of youth work professionals. Context: [BRIEF CONTEXT]. Tone: warm and direct, not overly formal. Include: what it is, why it matters, what readers should do next. End with a single clear call to action.
The what/why/next structure mirrors how good announcement writing works. "Single clear call to action" prevents the model adding multiple competing asks. Specify your organisation's voice in the tone field for better brand alignment.
Evaluation
Reflection questions generator
You are a non-formal education evaluator. Generate 8 reflection questions for participants at the end of [TYPE OF ACTIVITY/TRAINING]. The questions should cover: what they learned, how they will apply it, what surprised them, and what they would change. Mix closed questions (for quick surveys) and open questions (for deeper reflection). Suitable for mixed language levels. Avoid leading questions.
Specifying both closed and open question types produces a set that works for both quick pulse checks and deeper reflection. "Avoid leading questions" is critical for evaluation validity — AI often generates questions that assume positive outcomes.
Evaluation
Impact summary for funders
You are a grant writer summarising project outcomes. Using the following data and participant feedback, write a 250-word impact narrative for our funder report. Focus on: what changed for participants, evidence of that change, and one or two specific stories or quotes that illustrate the impact. Write in past tense, active voice. [PASTE DATA AND FEEDBACK HERE]
The evidence + story combination is what funders actually want. Specifying past tense prevents the common error of mixing tenses in reports. Paste real participant quotes and data directly — the model is excellent at weaving these into coherent narrative.
Working with youth
Explaining complex topics
Explain [COMPLEX TOPIC] to a group of 15-year-olds who have no background in the subject. Use one clear analogy, two real-world examples from everyday life, and avoid all technical terminology. Maximum 200 words. End with one question that would spark a good group discussion.
Requiring a specific analogy and two real-world examples forces concrete rather than abstract explanation. The discussion question at the end makes this immediately usable in a session rather than just informational.
Working with youth
Activity adaptation
I have the following activity designed for adults: [DESCRIBE ACTIVITY]. Adapt it for young people aged [AGE RANGE] with [ANY SPECIFIC CHARACTERISTICS]. Keep the core learning objective the same. Adjust: the language used in instructions, the time required, the materials, and any facilitation approach. Note any parts that need particular sensitivity with this age group.
Specifying exactly what to adjust (language, time, materials, approach) prevents the model from making superficial changes. "Note any parts needing sensitivity" surfaces safeguarding or developmental considerations that you might miss when adapting quickly.
Common mistakes & how to fix them

Five mistakes.
Five easy fixes.

These are the patterns that hold most people back. They're all fixable once you know what to look for — and every one of them has a reliable solution.

01
Mistake
Being vague about audience
"Write something for young people"

"Young people" spans a 10-year age range and dozens of different contexts. The model cannot calibrate tone, vocabulary, or content without this information.

Fix
Specify age, background, and knowledge
"...for a group of 15–17 year olds in secondary school with no prior knowledge of this topic"

Always include age range, background, and what they already know. This single change dramatically improves tone and pitch accuracy.

02
Mistake
Asking for everything at once
"Write me a full workshop programme, evaluation forms, participant handbook, and social media posts"

The model handles depth much better than breadth. Asking for everything produces shallow results across all items.

Fix
One task per prompt — then chain
Start with the session plan, then ask for the evaluation form referencing it, then the social media posts referencing both.

Each follow-up builds on the previous output. The model remembers the whole conversation — use that context.

03
Mistake
Accepting the first response
Taking the first output, editing it manually, and moving on.

Manual editing is slower and produces worse results than iterating with the model. You're doing the model's job.

Fix
Iterate in conversation
"This is good but the tone is too formal — rewrite for a less academic audience." / "Keep the structure but cut to 150 words."

Each refinement takes seconds. Three rounds of iteration is faster than one round of manual editing and produces stronger output.

04
Mistake
Forgetting format instructions
Getting a 600-word essay when you needed 3 bullet points for a slide.

AI defaults to whatever structure feels "complete" to it — which is often not what you need for the context you're working in.

Fix
Specify format before you write
Think about how the output will be used before you write the prompt — not after you read the result.

Length, structure, headings, tone. Include all four in every prompt you write for anything you'll use directly.

05
Mistake
Trusting outputs without review
Copying AI output directly into a grant application or report without review.

AI output is a first draft, not a final product. Statistics, citations, and specific claims need verification.

Fix
Review every factual claim
Check statistics independently. Verify the tone fits your organisation. Ensure the voice sounds like you.

The efficiency gain comes from starting further ahead — not from skipping the review. You're editing, not writing from scratch.

Advanced techniques

Go deeper.
Go further.

These techniques separate confident users from power users. None require technical knowledge — just a willingness to experiment with how you structure your thinking before you type.

01 Chain prompting
Rather than writing one long prompt, break your work into a chain of connected prompts, each building on the previous output. Ask the model to first generate an outline, then develop each section, then review and strengthen the whole. This gives you control at each stage and produces significantly better results on complex documents like grant applications or evaluation reports than trying to generate everything in one go. Chain prompts work especially well in Claude, which remembers the full conversation context.
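For the technically inclined, a chain can be pictured as a growing conversation history in the role/content message format most chat tools use internally. The `call_model` function below is a stand-in we invented so the sketch runs without any API access; in practice the tool's chat window does this for you.

```python
def call_model(messages):
    # Stand-in for a real AI tool: just echoes the latest prompt.
    return f"(response to: {messages[-1]['content'][:40]}...)"

def chain(steps):
    """Run each prompt in sequence, keeping the full history so
    every step can build on the previous output."""
    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        messages.append({"role": "assistant", "content": call_model(messages)})
    return messages

history = chain([
    "Outline a grant application for a digital literacy project.",
    "Develop section 1 of that outline in full.",
    "Review the whole draft and strengthen the weakest part.",
])
```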
02 Few-shot examples
Instead of describing what you want, show the model an example of it. "Here is a reflection question I consider good: [EXAMPLE]. Now generate 10 more in the same style and at the same difficulty level." The model learns from the example far more precisely than from a description. This is particularly powerful for matching an organisational voice or maintaining consistency across a document series.
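If you keep a bank of good examples, the few-shot pattern can be assembled mechanically. A minimal sketch; the `few_shot_prompt` helper is our own illustration, not a feature of any AI tool.

```python
def few_shot_prompt(instruction, examples, request):
    """Show the model what 'good' looks like before asking for more
    in the same style."""
    lines = [instruction]
    for i, example in enumerate(examples, 1):
        lines.append(f"Example {i}: {example}")
    lines.append(request)
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Here are reflection questions I consider good:",
    ["What is one thing you will do differently next week?",
     "What surprised you most today, and why?"],
    "Now generate 10 more in the same style and at the same difficulty level.",
)
```

Two or three examples is usually enough; more examples sharpen the style match but eat into the space left for the model's answer.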
03 Persona establishment
At the start of a complex project, spend one prompt establishing the full context: who you are, what your organisation does, who your audience is, your values, your writing style. Save this as a "context prompt" and paste it at the start of every new session. This is especially valuable in tools without cross-session memory (most free tiers). With Claude Pro, you can save this in a Project and it persists automatically.
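If you prefer keeping your context prompt in a file, a few lines of scripting make the paste-at-the-start habit automatic. Everything here — the organisation named, the filename, the `start_session` helper — is a hypothetical illustration.

```python
from pathlib import Path

# Save your organisation's context once, then prepend it to every
# new session. "Horizons Youth Network" is a made-up example.
CONTEXT = """You are writing for Horizons Youth Network, a small NGO
running non-formal education exchanges. Audience: youth workers,
mixed English levels. Voice: warm, plain, no jargon."""

path = Path("context-prompt.txt")
path.write_text(CONTEXT)

def start_session(task):
    """Begin every new chat with the saved context in front of the task."""
    return path.read_text() + "\n\n" + task

prompt = start_session("Draft a welcome email for new volunteers.")
```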
04 Critique and improve
Once you have a draft — AI-generated or your own — ask the model to critique it before improving it. "You are a critical reader reviewing this grant application. Identify the three weakest points and explain why." Then: "Now rewrite those three sections addressing the weaknesses you identified." This two-step process produces stronger revisions than asking for direct improvement, because the model surfaces issues you might not have noticed.
05 Role reversal
Ask the model to ask you questions before it writes. "Before writing this, ask me the 5 questions you most need answered to do this well." The questions the model generates are often excellent prompts in themselves — they help you clarify your own thinking and surface information you hadn't thought to include. This technique is particularly valuable at the start of complex projects where you know what you want but aren't sure how to communicate it.
"The best prompt isn't the cleverest one. It's the one that would make a skilled human colleague understand exactly what you need."

Up next in this toolkit
Part 03 — Content Creation