The same tools this magazine has been celebrating — the ones that save hours of grant writing, generate podcasts from research papers, produce videos from text prompts — are also being used to spread political misinformation at scale, generate non-consensual imagery, automate discrimination, and erode the line between what is real and what is fabricated. This section does not ask you to stop using AI. It asks you to use it with your eyes open — and to help young people do the same.
Every tool carries a toll
AI-generated misinformation is no longer an edge case. In Ireland's 2025 presidential election, a deepfake video released days before polling day falsely depicted the eventual winner withdrawing her candidacy — complete with fabricated footage of national broadcasters "confirming" the news. In India, AI-generated content on WhatsApp and short-form video platforms sparked real-world violence. In the Netherlands, roughly 400 AI-generated synthetic images were used to attack political candidates in a single election cycle. In the United States, AI-generated images falsely depicting a celebrity endorsing a presidential candidate circulated before legal action was taken.
These are not fringe incidents. The World Economic Forum ranks AI-driven misinformation among the world's top risks. Deepfakes have crossed what researchers call a "critical threshold": they have shed the tell-tale glitches of earlier generations and are now accessible to anyone with a smartphone. Detection is increasingly difficult even for trained experts. Research shows that humans cannot consistently identify AI-generated voices, often perceiving them as indistinguishable from real ones.
What makes this especially relevant for youth workers is the demographic reality. Young people are disproportionately exposed — and disproportionately affected. A 2025 study found that young voters on TikTok were regularly exposed to misleading political content, including AI-generated and fabricated videos of political leaders; with synthetic content mixed in among genuine posts, fast-moving short-form feeds make it harder to distinguish parody from fact. Young people also use generative AI tools more intensively than adults — and surveys consistently show that only a small minority feel confident in their ability to identify deepfakes.
The speed of the problem outpaces the speed of the solutions. Detection tools exist, but they are a step behind creation tools. Platform moderation exists, but it is outpaced by volume. Media literacy education exists, but it has not yet reached the scale or depth needed. This is not a reason for despair — it is a reason for urgency, and for making AI literacy a genuine priority in youth work practice.
Deepfakes are no longer a specialist concern. Any sufficiently motivated actor with a smartphone and a free tool can now generate convincing synthetic media: fake videos of people saying things they never said, cloned voices speaking words they never spoke, fake images placing them in situations they were never in. The harms are documented across multiple categories: political manipulation (false candidate endorsements, fabricated news coverage), financial fraud (voice cloning to impersonate executives or family members), non-consensual intimate imagery (affecting predominantly women and girls — a documented safeguarding issue for young people), and reputational damage.
For youth workers specifically: young people in your care may encounter deepfakes as targets or as consumers. The safeguarding implications of AI-generated non-consensual imagery involving young people are severe and legally significant. Understanding that this technology exists — and knowing how to respond when a young person discloses exposure to it — is now part of effective youth work practice.
AI systems learn from human-generated data — which reflects historical discrimination, cultural assumptions, and structural inequalities. The result is that AI can perpetuate and amplify the biases already present in society, often in ways that are invisible and difficult to challenge. Amazon's AI hiring tool — which systematically downgraded CVs from women — is the most cited example. But the problem is systemic: facial recognition systems have consistently misidentified the faces of people of colour at significantly higher rates than white faces. Loan approval algorithms have replicated redlining patterns. Sentiment analysis tools perform worse on African American Vernacular English. Medical AI tools have shown systematic under-performance for patients from minority groups.
For youth workers: if AI tools are used to make or inform decisions about young people — assessing applications, evaluating participation, recommending support — those decisions carry the risk of encoding bias. The responsibility is to maintain human oversight, interrogate outputs that seem surprising or concerning, and never allow AI to make consequential decisions about individual young people without human review.
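What does "interrogate outputs" look like in practice? One concrete, low-tech starting point is a fairness screen on any AI-informed decisions you record. The sketch below is illustrative only: the records are hypothetical, and the "four-fifths" ratio it computes is a first-pass signal for human review, not an audit.

```python
# Illustrative sketch: a first-pass disparate impact check on
# AI-informed decisions. All data below is hypothetical.
from collections import defaultdict

# Hypothetical records: (group, accepted) pairs for one programme intake.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, accepted = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    accepted[group] += ok

# Selection rate per group, then the ratio of lowest to highest rate.
rates = {g: accepted[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")

# A ratio below 0.8 (the "four-fifths rule") is a widely used signal
# that the outcomes deserve human scrutiny, not proof of bias.
if ratio < 0.8:
    print("Flag for human review: outcomes differ markedly across groups.")
```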
Generative AI has fundamentally changed the economics of misinformation. Creating a convincing false news article, a fabricated academic reference, a fake social media campaign, or a synthetic "expert" opinion once required significant resources and skill. It now requires minutes and no technical knowledge. The result is an information environment where the volume of false content has increased dramatically while the cost of producing it has collapsed. This is not a technology problem with a technology solution — it is a social problem requiring social and educational responses.
The specific challenge for young people is navigating social media feeds where authentic and AI-generated content are mixed indistinguishably. Research shows that repeated exposure to misinformation — even exposure the reader knows is potentially false — increases the likelihood of believing it over time. The defence is not detection skill alone. It is the habit of slowing down before sharing, checking primary sources, and building a default scepticism toward emotionally triggering content.
Many AI tools — including ones recommended in this magazine — process the text you input to improve their models, unless you are on a paid or enterprise tier with stronger data protections. Every query, every document you paste, every question you ask leaves a trace. For most personal use this is a manageable risk. For youth workers it has concrete implications: personally identifiable information about young people, case notes, safeguarding concerns, or details of vulnerable situations should never be pasted into AI tools without first understanding the data handling terms of that specific tool and tier.
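One practical guard rail is a simple pre-check that scans a draft for obvious personal data before it is pasted anywhere. The sketch below is a minimal illustration with placeholder patterns; real safeguarding data needs far more than a few regexes, so treat it as a conversation starter for policy, not a substitute for one.

```python
# Minimal, illustrative pre-check before pasting text into an AI tool.
# The patterns are deliberately simple examples, not a complete PII scanner.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
    "dob":   re.compile(r"\b\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4}\b"),
}

def check_before_sharing(text: str) -> list[str]:
    """Return the kinds of likely personal data found in the text."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

draft = "Session notes: contact anna@example.org, born 04/03/2009, re: housing worry."
found = check_before_sharing(draft)
if found:
    # Stop: names, case details and context can identify a young person
    # even after emails and dates are stripped.
    print(f"Do not paste: possible personal data detected ({', '.join(found)})")
```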
Beyond individual data use: AI enables a new category of surveillance — behavioural profiling from passive data, emotion detection from voice and video, predictive scoring based on aggregated patterns. The EU AI Act explicitly bans several of these applications in the context of education and public services, including emotion recognition systems in educational settings and social scoring by public authorities. Understanding these bans matters because it tells you something about what is genuinely dangerous — and because tools that do these things may circulate regardless of their legal status.
The efficiency gains from AI are real. So is the risk of over-reliance. When AI drafts your communications, plans your sessions, and synthesises your research, the question arises: what happens to the underlying skills? The concern is not abstract — cognitive science research on GPS navigation shows measurable decline in spatial reasoning among people who outsource navigation entirely. The same pattern may apply to writing, critical analysis, and problem-solving. For young people especially, the risk is bypassing the development of foundational skills in favour of AI outputs that deliver the result without the learning process. The fact that a tool can do something better and faster than a beginner does not mean the beginner should always use the tool.
This is not an argument against AI use — it is an argument for intentionality. Use AI to do things faster that you already know how to do. Be more cautious about using AI to bypass learning processes that are valuable in themselves. As a youth worker, distinguish between using AI to support young people's goals and using AI in ways that substitute for their development.
Training large AI models requires enormous computational resources and significant energy consumption. Training a model of GPT-3's scale was estimated to emit roughly 500 tonnes of CO₂, and estimates for larger successors run far higher. Running inference — every query you make — also has a carbon cost, though it is smaller per query. These costs are largely invisible to end users and are rarely discussed in AI adoption conversations. For organisations with sustainability commitments — and for Erasmus+ projects that explicitly address climate awareness — this is an honest reckoning worth having.
It does not mean stopping AI use. It means including environmental cost in the honest accounting of AI's benefits and trade-offs. Choosing tools with published commitments to renewable energy, batching similar queries together, and being intentional about which tasks genuinely require AI are all ways to reduce the footprint without abandoning the benefits.
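To make that accounting tangible, here is a back-of-envelope estimate. Every figure in it is a placeholder assumption; published per-query energy estimates vary widely by model and provider, so substitute numbers you trust.

```python
# Back-of-envelope footprint estimate for AI use in one organisation.
# ALL figures below are placeholder assumptions for illustration;
# published per-query estimates vary widely by model and provider.

WH_PER_QUERY = 0.3        # assumed energy per chat query, in watt-hours
GRID_G_CO2_PER_KWH = 250  # assumed grid carbon intensity, g CO2 per kWh

def yearly_footprint_g(queries_per_day: float, workdays: int = 220) -> float:
    """Rough grams of CO2 per year for a given daily query habit."""
    kwh = queries_per_day * workdays * WH_PER_QUERY / 1000
    return kwh * GRID_G_CO2_PER_KWH

team_of_ten = 10 * yearly_footprint_g(queries_per_day=40)
print(f"~{team_of_ten / 1000:.1f} kg CO2/year under these assumptions")

# The point is not the exact number: batching queries and skipping
# tasks that do not need AI shows up directly in the estimate.
```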
These are documented applications of AI being used in service of genuine social good — in fields directly relevant to youth work and community development.
AI systems trained to identify patterns in online communication associated with grooming and exploitation are being deployed by law enforcement and child safety organisations across Europe. Thorn, a non-profit, reports that its AI tools have helped identify millions of child sexual abuse images and accelerated investigations dramatically. The same generative AI capabilities that create risks for young people online are being deployed to detect and disrupt those risks.
Organisations across Africa, South Asia, and Latin America are using AI-powered translation, text-to-speech, and simplified language tools to make educational content accessible to communities with limited literacy or in languages underserved by mainstream technology. AI translation tools are enabling non-formal education organisations to run bilingual programmes at a fraction of the previous cost, reaching young people in their own languages without requiring human interpreters for every session.
Crisis Text Line and similar services have deployed AI to triage incoming contacts, identify the highest-risk messages, and route them to human counsellors faster. The AI does not replace the counsellor — it ensures the most urgent contacts reach a human first. This ethical design principle — AI as triage, human as support — applies across youth work contexts, and in regions with severe shortages of mental health professionals these tools are meaningfully extending limited human capacity.
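The triage principle can be pictured as a priority queue: a model scores each incoming message for risk, and a human counsellor always takes the highest-scoring contact next. The toy sketch below illustrates that architecture with a placeholder keyword scorer; it is nothing like a real crisis system.

```python
# Toy illustration of "AI as triage, human as support": a model assigns
# a risk score, a priority queue hands the riskiest contact to a human
# first. The scorer here is a placeholder, not a real risk model.
import heapq
import itertools

HIGH_RISK_TERMS = {"hurt", "unsafe", "emergency"}  # placeholder signal list

def risk_score(message: str) -> int:
    """Stand-in for a trained classifier: counts high-risk terms."""
    words = message.lower().split()
    return sum(word.strip(".,!") in HIGH_RISK_TERMS for word in words)

queue, counter = [], itertools.count()
for msg in [
    "Can I get the session times for Thursday?",
    "I feel unsafe at home right now",
    "Thanks for yesterday!",
]:
    # Negative score because heapq pops the smallest item first.
    heapq.heappush(queue, (-risk_score(msg), next(counter), msg))

while queue:
    _, _, msg = heapq.heappop(queue)
    print("Human counsellor takes:", msg)  # riskiest message comes out first
```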
Microsoft Immersive Reader, Google's translation tools, and AI captioning systems have meaningfully reduced barriers for students with dyslexia, hearing impairments, vision impairments, and language barriers. Studies consistently show improved reading comprehension and engagement when AI accessibility tools are used by students with learning differences. For youth workers: these are not experimental tools — they are proven, widely available, and significantly underused in non-formal education settings.
AI systems are being used to track deforestation in real time, monitor ocean plastic levels, predict extreme weather events, and identify poaching activity in protected areas. For Erasmus+ projects with a sustainability or climate dimension: these applications demonstrate that AI and environmental responsibility are not inherently in tension — the technology can actively serve ecological goals when deployed with those goals in mind.
AI translation and document analysis tools are being used by NGOs working with refugee communities to speed up asylum application processing, provide legal information in multiple languages, and help caseworkers manage higher caseloads without sacrificing individual attention. Several organisations working in Erasmus+ partner countries have reported significant improvements in their capacity to serve newly arrived communities using freely available AI tools.
The world's first comprehensive AI regulation is now in force. These are the key dates and the provisions that apply directly to organisations working with young people.
The EU AI Act — which entered into force in August 2024 and becomes fully applicable in August 2026 — is the world's first comprehensive AI regulation. It categorises AI systems by risk level and applies requirements proportionally. For youth workers, two aspects are immediately relevant: what is banned, and what obligations exist around transparency and AI literacy.
Several AI applications were prohibited from February 2025 — and some are applications that could plausibly be encountered in youth and education contexts. Banned practices include: AI systems that manipulate people through subliminal techniques they are not aware of; AI that exploits vulnerabilities of specific groups including age-related vulnerabilities; social scoring systems that evaluate people based on behaviour; and emotion recognition systems in workplaces and educational institutions.
This last prohibition is directly relevant: AI tools that claim to assess engagement, attention, or emotional state of students or participants in educational settings are now illegal in the EU. If you encounter a tool that offers these functions, it cannot be legally deployed in your context.
From February 2025, the AI Act's AI literacy obligations apply. This means organisations that deploy AI systems — including youth organisations using AI tools in their work with young people — must take appropriate steps to ensure staff and users have sufficient AI literacy to understand the tools they are using.
This is not a theoretical requirement. It is a legal one. Running sessions on AI literacy with your team is not optional good practice — it is organisational compliance. From August 2026, AI-generated content must be clearly labelled as such, including deepfakes and AI-generated text published for public information purposes.
Not as a barrier. As a habit.
AI is powerful, but not everything benefits from it. Anything that requires knowing a specific person deeply, making a judgment call about a vulnerable situation, or expressing genuine human care — these are not AI tasks. Identify what requires you specifically, and protect that space.
Before pasting anything into an AI tool, ask: does this include personal information about young people, colleagues, or stakeholders? Have I checked the data handling terms of this specific tool and tier? Does my organisation have a policy on what can and cannot be shared with AI tools?
AI can state false information with complete confidence. Before using a statistic, citing a study, referencing a policy, or making a claim based on AI output — verify it from a primary source. This is especially important for anything appearing in a funded report or formal communication.
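For references in particular, part of the verification can be mechanical. The sketch below assumes the public Crossref API and checks whether a DOI an AI tool has cited resolves to a real record; a miss is a strong hint the citation was hallucinated, while a hit still means reading the actual source.

```python
# Check whether a DOI cited by an AI tool resolves to a real record.
# Uses the public Crossref API (api.crossref.org); network access required.
import requests

def doi_exists(doi: str) -> bool:
    """True if Crossref has a record for this DOI. A hit only proves the
    DOI is real; you still have to read the paper to confirm the claim."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

suspect = "10.1000/fake-doi-from-a-chatbot"  # hypothetical example
if not doi_exists(suspect):
    print("No Crossref record: treat this citation as unverified.")
```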
From August 2026, EU law requires AI-generated content to be labelled. Beyond the legal requirement, there is a professional and ethical one: if AI substantially drafted a document, relevant stakeholders have a right to know. "AI-assisted" is a straightforward, honest, and increasingly normal disclosure.
Who could be harmed by this output — directly or indirectly? Could it reinforce a stereotype? Could it produce discriminatory content? Could it depersonalise someone who deserves personal attention? Not every AI use carries meaningful harm risk. But the habit of asking catches the cases that do.
These four activities are ready to use in sessions with young people. Each explores a different dimension of AI ethics through experience rather than lecture.
Learning objective: Build scepticism as a reflex, not a burden.
Learning objective: Apply ethical reasoning to concrete AI scenarios.
Learning objective: Move from identifying problems to proposing solutions.
Learning objective: Develop personal awareness of AI use and its implications.
AI literacy is not about knowing how to use the tools. It is about knowing when not to, why it matters, and how to help the young people in your care navigate a world where the line between real and generated is disappearing.