PART 06 Ethics & Awareness

Powerful enough
to help.
Powerful enough
to harm.

The same tools this magazine has been celebrating — the ones that save hours of grant writing, generate podcasts from research papers, produce videos from text prompts — are also being used to spread political misinformation at scale, generate non-consensual imagery, automate discrimination, and erode the line between what is real and what is fabricated. This section does not ask you to stop using AI. It asks you to use it with your eyes open — and to help young people do the same.


Every tool carries a toll

This is not a future concern. It is happening now.
The risks of AI are not hypothetical scenarios from science fiction. They are documented, measurable, and accelerating. Understanding them is part of using AI responsibly — and part of preparing young people for the world they are already living in.
8M
deepfakes projected to be shared in 2025, up from 500,000 in 2023 (European Parliament, 2025)

90%
of online content could be synthetically generated by 2026 (Europol estimate)

$40B
projected US fraud losses driven by generative AI by 2027 (Deloitte)

AI-generated misinformation is no longer an edge case. In Ireland's 2025 presidential election, a deepfake video falsely depicted the eventual winner withdrawing from the race — and included fabricated footage of national broadcasters "confirming" the news, released days before polling day. In India, AI-generated content on WhatsApp and short-form video platforms sparked real-world violence. In the Netherlands, roughly 400 AI-generated images were used to attack political candidates in a single election cycle. In the United States, AI-generated images falsely depicting a celebrity endorsing a presidential candidate circulated before legal action was taken.

These are not fringe incidents. The World Economic Forum ranks AI-driven misinformation among the world's top risks. Deepfakes have crossed what researchers call a "critical threshold": they have shed the tell-tale glitches of earlier generations and are now accessible to anyone with a smartphone. Detection is increasingly difficult even for trained experts. Research confirms that humans cannot consistently identify AI-generated voices, often perceiving them as indistinguishable from real ones.

What makes this especially relevant for youth workers is the demographic reality: young people are disproportionately exposed — and disproportionately affected. A 2025 study found that young voters on TikTok were regularly shown misleading political content, including AI-generated and fabricated videos of political leaders; with synthetic content sitting alongside genuine posts in fast-moving, short-form feeds, parody becomes harder to distinguish from fact. Young people also use generative AI tools more intensively than adults — and surveys consistently show that only a small minority feel confident in their ability to identify deepfakes.

The speed of the problem outpaces the speed of the solutions. Detection tools exist, but they are a step behind creation tools. Platform moderation exists, but it is outpaced by volume. Media literacy education exists, but it has not yet reached the scale or depth needed. This is not a reason for despair — it is a reason for urgency, and for making AI literacy a genuine priority in youth work practice.

Prior exposure to deepfakes actually increases belief in misinformation — a counterintuitive finding confirmed across eight countries. The more deepfakes people see, the more susceptible they become to believing false content. This is the "illusory truth effect": repeated exposure makes information feel more credible, regardless of accuracy.

The risks that matter most for youth workers and young people

01
Deepfakes and synthetic media
AI-generated video, audio, and images that are indistinguishable from reality

Deepfakes are no longer a specialist concern. Any sufficiently motivated actor with a smartphone and a free tool can now generate convincing synthetic media — fake videos of people saying things they never said, fake audio of voices they don't have, fake images in situations they were never in. The harms are documented across multiple categories: political manipulation (false candidate endorsements, fabricated news coverage), financial fraud (voice cloning to impersonate executives or family members), non-consensual intimate imagery (affecting predominantly women and girls — a documented safeguarding issue for young people), and reputational damage.

For youth workers specifically: young people in your care may encounter deepfakes as targets or as consumers. The safeguarding implications of AI-generated non-consensual imagery involving young people are severe and legally significant. Understanding that this technology exists — and knowing how to respond when a young person discloses exposure to it — is now part of effective youth work practice.

What to watch for: Unnatural blinking patterns, lip-sync inconsistencies, odd lighting at face edges, audio that doesn't quite match mouth movement. These cues are becoming less reliable as technology improves. For consequential decisions, verify via independent sources — not just the video itself.
02
Algorithmic bias and discrimination
AI systems that encode and amplify existing social inequalities

AI systems learn from human-generated data — which reflects historical discrimination, cultural assumptions, and structural inequalities. The result is that AI can perpetuate and amplify the biases already present in society, often in ways that are invisible and difficult to challenge. Amazon's AI hiring tool — which systematically downgraded CVs from women — is the most cited example. But the problem is systemic: facial recognition systems have consistently misidentified people of colour at significantly higher rates than white people. Loan approval algorithms have replicated redlining patterns. Sentiment analysis tools perform worse on African American Vernacular English. Medical AI tools have shown systematic under-performance for patients from minority groups.

For youth workers: if AI tools are used to make or inform decisions about young people — assessing applications, evaluating participation, recommending support — those decisions carry the risk of encoding bias. The responsibility is to maintain human oversight, interrogate outputs that seem surprising or concerning, and never allow AI to make consequential decisions about individual young people without human review.
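
A concrete way to practise "interrogating outputs": the short sketch below (Python, with entirely made-up numbers) compares how often a hypothetical screening tool selects applicants from different groups, using the "four-fifths" rule of thumb from US employment practice as a trigger for human review. The data, group names, and threshold are illustrative assumptions, not a validated audit method.

```python
# Illustrative only: compare selection rates of a hypothetical AI
# screening tool across two groups. All numbers are invented.
from collections import defaultdict

outcomes = [  # (group, selected_by_tool) pairs
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected  # True counts as 1, False as 0

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)  # group_a: 0.75, group_b: 0.25

# "Four-fifths" rule of thumb: a group selected at under 80% of the
# best-performing group's rate is a prompt for human scrutiny.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Flag: {group} selected at {rate:.0%} vs {best:.0%}. Review before relying on this tool.")
```

The point is the habit, not the maths: when an AI-informed decision pattern looks skewed, that is the moment for human review.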

03
Misinformation at scale
The industrialisation of false content

Generative AI has fundamentally changed the economics of misinformation. Creating a convincing false news article, a fabricated academic reference, a fake social media campaign, or a synthetic "expert" opinion once required significant resources and skill. It now requires minutes and no technical knowledge. The result is an information environment where the volume of false content has increased dramatically while the cost of producing it has collapsed. This is not a technology problem with a technology solution — it is a social problem requiring social and educational responses.

The specific challenge for young people is navigating social media feeds where authentic and AI-generated content are mixed indistinguishably. Research shows that repeated exposure to misinformation — even exposure the reader knows is potentially false — increases the likelihood of believing it over time. The defence is not detection skill alone. It is the habit of slowing down before sharing, checking primary sources, and building a default scepticism toward emotionally triggering content.

04
Privacy and surveillance
Data collection, profiling, and the erosion of private space

Many AI tools — including ones recommended in this magazine — process the text you input to improve their models, unless you are on a paid or enterprise tier with stronger data protections. Every query, every document you paste, every question you ask leaves a trace. For most personal use this is a manageable risk. For youth workers it has concrete implications: personally identifiable information about young people, case notes, safeguarding concerns, or details of vulnerable situations should never be pasted into AI tools without first understanding the data handling terms of that specific tool and tier.
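
One practical habit that follows from this: strip the obvious identifiers before any text goes near an AI tool. The minimal Python sketch below blanks out emails, phone-like numbers, and dates using simple patterns. The patterns and placeholders are assumptions for illustration; pattern-matching catches only the easy cases, and names still slip through, so it complements rather than replaces reading the tool's terms and following organisational policy.

```python
import re

# Minimal sketch: blank out easy-to-spot identifiers before text is
# pasted into an AI tool. Illustrative, not a complete anonymiser.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),             # phone-like numbers
    (re.compile(r"\b\d{1,2}[/.]\d{1,2}[/.]\d{2,4}\b"), "[DATE]"),  # dates, e.g. birthdays
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Follow-up with Anna, anna.k@example.org, +356 2134 5678, DOB 03/04/2008."
print(redact(note))
# -> "Follow-up with Anna, [EMAIL], [PHONE], DOB [DATE]."
# The name still leaks: a human read-through remains essential.
```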

Beyond individual data use: AI enables a new category of surveillance — behavioural profiling from passive data, emotion detection from voice and video, predictive scoring based on aggregated patterns. The EU AI Act explicitly bans several of these applications in the context of education and public services, including emotion recognition systems in educational settings and social scoring by public authorities. Understanding these bans matters because it tells you something about what is genuinely dangerous — and because tools that do these things may circulate regardless of their legal status.

05
Dependency and deskilling
The risk of outsourcing thinking to tools

The efficiency gains from AI are real. So is the risk of over-reliance. When AI drafts your communications, plans your sessions, and synthesises your research, the question arises: what happens to the underlying skills? The concern is not abstract — cognitive science research on GPS navigation shows measurable decline in spatial reasoning among people who outsource navigation entirely. The same pattern may apply to writing, critical analysis, and problem-solving. For young people especially, the risk is bypassing the development of foundational skills in favour of AI outputs that deliver the result without the learning process. The fact that a tool can do something better and faster than a beginner does not mean the beginner should always use the tool.

This is not an argument against AI use — it is an argument for intentionality. Use AI to do things faster that you already know how to do. Be more cautious about using AI to bypass learning processes that are valuable in themselves. As a youth worker, distinguish between using AI to support young people's goals and using AI in ways that substitute for their development.

06
Environmental cost
The carbon footprint of intelligence at scale

Training large AI models requires enormous computational resources and significant energy consumption. GPT-4's training run was estimated to have emitted roughly 500 tonnes of CO₂. Running inference — every query you make — also carries a carbon cost, though a much smaller one per query. These costs are largely invisible to end users and are rarely discussed in AI adoption conversations. For organisations with sustainability commitments — and for Erasmus+ projects that explicitly address climate awareness — this is an honest reckoning worth having.
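
For scale, a rough back-of-envelope equivalence: using the US EPA's commonly cited figure of about 4.6 tonnes of CO₂ per typical passenger car per year, 500 ÷ 4.6 ≈ 110, so one training run on that estimate equals roughly 110 cars driven for a full year. Estimates of training emissions vary widely, so treat this as an order of magnitude, not a precise figure.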

It does not mean stopping AI use. It means including environmental cost in the honest accounting of AI's benefits and trade-offs. Choosing tools with published commitments to renewable energy, batching similar queries together, and being intentional about which tasks genuinely require AI are all ways to reduce the footprint without abandoning the benefits.

The same technology.
In different hands.

These are documented applications of AI being used in service of genuine social good — in fields directly relevant to youth work and community development.

✓ Verified
Early detection of child exploitation online

AI systems trained to identify patterns in online communication associated with grooming and exploitation are being deployed by law enforcement and child safety organisations across Europe. Thorn, a non-profit, reports that its AI tools have helped identify millions of child sexual abuse images and accelerated investigations dramatically. The same generative AI capabilities that create risks for young people online are being deployed to detect and disrupt those risks.

✓ Verified
AI literacy tools for underserved communities

Organisations across Africa, South Asia, and Latin America are using AI-powered translation, text-to-speech, and simplified language tools to make educational content accessible to communities with limited literacy or in languages underserved by mainstream technology. AI translation tools are enabling non-formal education organisations to run bilingual programmes at a fraction of the previous cost, reaching young people in their own languages without requiring human interpreters for every session.

✓ Verified
Mental health support at scale

Crisis Text Line and similar services have deployed AI to triage incoming contacts, identify the highest-risk messages, and route them to human counsellors faster. The AI does not replace the counsellor — it ensures the most urgent contacts reach a human first. That ethical design principle — AI as triage, human as support — is the model that applies across youth work contexts, and in regions with severe shortages of mental health professionals these tools are meaningfully extending limited human capacity.
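
The pattern itself is simple enough to sketch. The toy Python below orders incoming messages so the highest-scoring ones reach a human first. The keyword weights are placeholder assumptions for illustration only: real services use trained classifiers under clinical oversight, not word lists.

```python
import heapq

# Toy sketch of "AI as triage, human as support". Keyword scoring is a
# placeholder: real systems use trained risk classifiers, not word lists.
URGENT_TERMS = {"hurt myself": 10, "can't go on": 8, "scared": 3}  # illustrative weights

def risk_score(message: str) -> int:
    text = message.lower()
    return sum(weight for term, weight in URGENT_TERMS.items() if term in text)

def triage(messages: list[str]) -> list[str]:
    # Negative score so the highest-risk message pops first from the min-heap.
    heap = [(-risk_score(m), i, m) for i, m in enumerate(messages)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queue = triage([
    "What time does the session start?",
    "I'm scared and I don't know who to talk to",
])
print(queue[0])  # the higher-risk message is routed to a counsellor first
```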

✓ Verified
Accessibility and inclusion

Microsoft Immersive Reader, Google's translation tools, and AI captioning systems have meaningfully reduced barriers for students with dyslexia, hearing impairments, vision impairments, and language barriers. Studies consistently show improved reading comprehension and engagement when AI accessibility tools are used by students with learning differences. For youth workers: these are not experimental tools — they are proven, widely available, and significantly underused in non-formal education settings.

✓ Verified
Environmental monitoring

AI systems are being used to track deforestation in real time, monitor ocean plastic levels, predict extreme weather events, and identify poaching activity in protected areas. For Erasmus+ projects with a sustainability or climate dimension: these applications demonstrate that AI and environmental responsibility are not inherently in tension — the technology can actively serve ecological goals when deployed with those goals in mind.

✓ Verified
Supporting refugee communities

AI translation and document analysis tools are being used by NGOs working with refugee communities to speed up asylum application processing, provide legal information in multiple languages, and help caseworkers manage higher caseloads without sacrificing individual attention. Several organisations working in Erasmus+ partner countries have reported significant improvements in their capacity to serve newly arrived communities using freely available AI tools.

EU AI Act

What youth workers need to know

The world's first comprehensive AI regulation is now in force. These are the key dates and the provisions that apply directly to organisations working with young people.

Feb 2025: Banned practices enforced. AI literacy obligations begin.
Aug 2025: Rules for general-purpose AI (GPAI) models apply — ChatGPT, Claude, Gemini, and similar.
Aug 2026: Full enforcement for most organisations, including education and youth work. AI-generated content must be labelled.
Aug 2027: Rules for high-risk AI embedded in products — medical devices and similar.

What it is

The EU AI Act — which entered into force in August 2024 and becomes fully applicable in August 2026 — is the world's first comprehensive AI regulation. It categorises AI systems by risk level and applies requirements proportionally. For youth workers, two aspects are immediately relevant: what is banned, and what obligations exist around transparency and AI literacy.

What is banned — relevant to youth work contexts

Several AI applications were prohibited from February 2025 — and some are applications that could plausibly be encountered in youth and education contexts. Banned practices include: AI systems that manipulate people through subliminal techniques they are not aware of; AI that exploits vulnerabilities of specific groups including age-related vulnerabilities; social scoring systems that evaluate people based on behaviour; and emotion recognition systems in workplaces and educational institutions.

This last prohibition is directly relevant: AI tools that claim to assess engagement, attention, or emotional state of students or participants in educational settings are now illegal in the EU. If you encounter a tool that offers these functions, it cannot be legally deployed in your context.

What it requires from organisations using AI

From February 2025, the AI Act's AI literacy obligations apply. This means organisations that deploy AI systems — including youth organisations using AI tools in their work with young people — must take appropriate steps to ensure staff and users have sufficient AI literacy to understand the tools they are using.

This is not a theoretical requirement. It is a legal one. Running sessions on AI literacy with your team is not optional good practice — it is organisational compliance. By August 2026, AI-generated content must be clearly labelled as such, including deepfakes and AI-generated text published for public information purposes.

The AI literacy requirement under the EU AI Act is directly relevant to Erasmus+ projects. Organisations that participated in the Youthwork.AI training in Malta are already better positioned for compliance than most — because developing AI literacy in staff and beneficiaries is exactly what this toolkit is designed to support.

Before you use AI in your work — five questions worth asking

Not as a barrier. As a habit.

01
Is this the right tool for this task?

AI is powerful, but not everything benefits from it. Anything that requires knowing a specific person deeply, making a judgment call about a vulnerable situation, or expressing genuine human care — these are not AI tasks. Identify what requires you specifically, and protect that space.

02
Whose data am I handling?

Before pasting anything into an AI tool, ask: does this include personal information about young people, colleagues, or stakeholders? Have I checked the data handling terms of this specific tool and tier? Does my organisation have a policy on what can and cannot be shared with AI tools?

03
Have I verified what it told me?

AI can state false information with complete confidence. Before using a statistic, citing a study, referencing a policy, or making a claim based on AI output — verify it from a primary source. This is especially important for anything appearing in a funded report or formal communication.

04
Am I being transparent about AI use?

From August 2026, EU law requires AI-generated content to be labelled. Beyond the legal requirement, there is a professional and ethical one: if AI substantially drafted a document, relevant stakeholders have a right to know. "AI-assisted" is a straightforward, honest, and increasingly normal disclosure — for example, a line such as "Drafted with AI assistance; reviewed and edited by the project team."

05
Could this harm someone?

Who could be harmed by this output — directly or indirectly? Could it reinforce a stereotype? Could it produce discriminatory content? Could it depersonalise someone who deserves personal attention? Not every AI use carries meaningful harm risk. But the habit of asking catches the cases that do.


Bringing these conversations into your practice

These four activities are ready to use in sessions with young people. Each explores a different dimension of AI ethics through experience rather than lecture.

01 · 20 minutes

Real or Fake?

Learning objective: Build scepticism as a reflex, not a burden.

Materials: Screen or projector. A collection of 10 images — 5 real, 5 AI-generated. Easily assembled using a Google image search for AI-generated examples alongside genuine news photographs.
  1. Show each image for 15 seconds. Participants vote real or fake — hands, cards, or a quick phone poll.
  2. Reveal the answer after each vote. Do not lecture — just reveal and watch the reaction.
  3. After all 10 images, discuss: What cues did you use? Were you more wrong or more right? What changed as you went through more images?
  4. Key facilitation point: the goal is not to teach detection skills (these become outdated quickly) — it is to build the habit of pausing before assuming something is real.
Discussion questions: Why does it matter if a video is real or AI-generated? Have you ever shared something that turned out to be fake? What would you do differently? Who benefits from creating fake content?
02 · 30 minutes

Who Gets Hurt?

Learning objective: Apply ethical reasoning to concrete AI scenarios.

Materials: Five scenario cards — printed or displayed digitally. Small groups of 3–4 participants.
  1. Each group receives a scenario describing an AI application: a school predicting dropout risk, an employer screening CVs, a government identifying benefit fraud, a social media platform curating feeds, or a news site generating articles without human editors.
  2. Groups discuss: Who benefits? Who might be harmed? Is the harm intentional? What would make it more ethical? Who should decide?
  3. Each group presents their scenario and conclusions. Facilitator connects themes across groups — noting where different scenarios share underlying patterns.
Discussion questions: Is it the AI that causes harm, or the people who design and deploy it? Can you think of AI tools you use that might affect other people? What rights should people have when AI is making decisions about them?
03 · 25 minutes

Write a Rule

Learning objective: Move from identifying problems to proposing solutions.

Materials: Large paper or whiteboard, markers. Stakeholder role cards optional.
  1. Brief opening: why rules around AI might be needed. Reference the EU AI Act — it exists because of exactly this problem, and it was written by people, not by AI.
  2. Divide participants into groups representing different stakeholders: young people, teachers and youth workers, companies that build AI, governments, parents.
  3. Each group has 10 minutes to write 3 rules for AI they think are most important from their stakeholder's perspective.
  4. Groups share their rules. Facilitator maps agreements and conflicts: where do different groups want the same things? Where do they disagree — and why?
Discussion questions: Whose rules matter most? Who currently makes these decisions? How would you know if a rule was being followed? What should happen if it isn't?
04 · 20 minutes

The AI Mirror

Learning objective: Develop personal awareness of AI use and its implications.

Materials: Paper and pens. No screens required — the point is to reflect without prompting from a device.
  1. Participants individually write down every AI tool or AI-powered feature they have used in the last 24 hours. Prompt them: phone autocomplete, recommendation algorithms, voice assistants, filters on photos, navigation, music playlists, search engines.
  2. Share and compare lists — the variety usually surprises participants, and the total is almost always larger than expected.
  3. Ask: which of these did you choose? Which were chosen for you? Did you know AI was involved? Does it matter?
  4. Close with a brief discussion about the difference between AI you use and AI that is used on you.
Discussion questions: At what point does AI use become a problem rather than a convenience? Who decides what AI knows about you? If AI is shaping what you see, hear, and read — what is it not showing you?
"

AI literacy is not about knowing how to use the tools. It is about knowing when not to, why it matters, and how to help the young people in your care navigate a world where the line between real and generated is disappearing.

This toolkit was created as the final output of the Youthwork.AI Erasmus+ training, held in Bugibba, Malta, November 2025. Participants from Malta, Latvia, Romania, Azerbaijan, Spain, and Türkiye contributed to its development. Pricing information and model capabilities were accurate as of March 2026. The AI landscape changes rapidly — verify current details on official sites before budgeting or publishing. Created with the assistance of AI tools, reviewed and edited by human youth workers.
Up next in this toolkit
About — The project & the team