Artificial Intelligence Against Real Prejudices: How Ukraine Teaches AI to Be Inclusive
Last week, Oleksandr conducted another prompt engineering lesson for the LGBTQ+ community. Instead of the standard ‘Write a text about coffee’, his students now formulate requests like: ‘Create a short promotional text about arabica coffee for Instagram, style – playful, length – up to 300 characters.’ It seems like a small detail, but the quality of the results changes dramatically.
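The difference is easier to see when the request is broken into its components. Below is a minimal sketch in Python of how a vague prompt can be expanded into a structured one; the field names, template, and example values are illustrative, not taken from the course materials.

```python
# Illustrative sketch: turning a vague request into a structured prompt.
# The fields and template below are hypothetical, not from the course.

from dataclasses import dataclass


@dataclass
class PromptSpec:
    task: str        # what the model should produce
    topic: str       # subject matter
    platform: str    # where the text will be published
    style: str       # desired tone of voice
    max_chars: int   # hard length limit


def build_prompt(spec: PromptSpec) -> str:
    """Assemble a specific, constraint-rich prompt from its parts."""
    return (
        f"{spec.task} about {spec.topic} for {spec.platform}. "
        f"Style: {spec.style}. Length: up to {spec.max_chars} characters."
    )


vague = "Write a text about coffee."
specific = build_prompt(PromptSpec(
    task="Create a short promotional text",
    topic="arabica coffee",
    platform="Instagram",
    style="playful",
    max_chars=300,
))

print(vague)     # underspecified: the model must guess audience, tone, length
print(specific)  # every constraint the model needs is stated explicitly
```

The same request, sent to any chat model, now carries the audience, tone, and length constraints that the vague version leaves the model to guess.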
Oleksandr is one of the trainers in the Google AI Essentials programme, which ALLIANCE.GLOBAL implements in Ukraine. He teaches people from the community to work with artificial intelligence not merely as a technological curiosity, but as a fully-fledged tool for work and creativity.
Artificial intelligence is becoming more accessible, but is it accessible to everyone?
DeepSeek R1 – a Chinese AI model that matched Google Gemini’s results whilst spending 32 times less money – took the technology community by storm last month. ChatGPT reached 800 million users faster than any other technology. The figures are impressive, but for Oleksandr, what matters more is what lies behind them.
‘This drop in the cost of AI opens up the technology for small businesses, education, and start-ups. Models that were previously available only to corporations can now be run even on a home PC. The main thing is learning to work with them.’
The problem lies elsewhere. Physical access to the technology exists, but social access does not always follow. People who don’t fit into the classic paradigm of ‘normality’ often find themselves excluded from these processes.
‘Discrimination, pressure, absence of a safe environment – all this has an impact,’ notes Oleksandr. ‘My task as a mentor is to create a space where identity doesn’t hinder development.’
When AI itself has biases
Oleksandr and his students analysed a case where AI moderation automatically blocked a post about a pride parade, classifying it as ‘unacceptable content’. Facial recognition systems work worse for gender non-conforming individuals. AI systems reproduce the biases embedded in their training data and algorithms.
‘We examine cases where AI censors or distorts information on political topics – automatically blocks mentions of protest movements or changes the tone of statements about authoritarian regimes. I provide tools for analysing models and teach how to report problems.’
Healthy scepticism, the trainer believes, is a necessary component of AI literacy.
‘I explain to students: AI isn’t magic and it isn’t evil. It’s a tool. We automate tasks, but simultaneously analyse how AI influences elections, the economy, and education. Critical thinking is woven into every module.’
Five tools that change the rules of the game
Among the multitude of AI platforms, Oleksandr highlights a fundamental five: ChatGPT for working with text, Midjourney or DALL·E for visuals, Gamma for presentations, Perplexity for information search, and Whisper for audio processing.
‘This is the foundation for any profession: text, images, voice, data.’
The real magic begins when these tools work together. Recently, his students created small projects combining different AI tools: GPT wrote the texts, DALL·E drew the visuals, Whisper processed the voice recordings. The result is comprehensive solutions built on several tools.
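As a rough illustration of such a pipeline, here is a minimal sketch using the OpenAI Python SDK: Whisper transcribes a voice note, a GPT model turns the transcript into a short post, and DALL·E generates a matching visual. The model names, input file, and overall flow are assumptions for illustration, not the students’ actual projects.

```python
# Illustrative multimodal pipeline: voice -> text -> post -> image.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; model names and the input file are placeholders.

from openai import OpenAI

client = OpenAI()

# 1. Whisper: transcribe a recorded voice note into text.
with open("voice_note.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    ).text

# 2. GPT: turn the transcript into a short social media post.
post = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Turn this voice note into a playful Instagram post "
                   f"of up to 300 characters: {transcript}",
    }],
).choices[0].message.content

# 3. DALL·E: generate a visual to accompany the post.
image_url = client.images.generate(
    model="dall-e-3",
    prompt=f"A bright, friendly illustration for this post: {post}",
    n=1,
    size="1024x1024",
).data[0].url

print(post)
print(image_url)
```

Each step could equally be swapped for another provider’s model; the point is that the output of one tool becomes the input of the next.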
This multimodal approach opens doors to professions that didn’t exist five years ago. AI prompt specialists earn up to $180,000 annually. AI ethics specialists have become a separate profession. AI model trainers are in demand like never before.
‘Among my students are those who already demonstrate ethical analysis skills or brilliantly formulate prompts. What’s needed is deep understanding of tools, systematic thinking, and a desire to get to the heart of matters rather than skimming the surface.’
Ukrainian AI: hope or utopia?
By December 2025, Ukraine plans to launch its own Ukrainian-language AI system – a sort of ChatGPT analogue adapted for local needs. Oleksandr approaches this initiative with cautious optimism.
‘This is very much needed. It’s important to make the model genuinely Ukrainian-language, transparent in its data sources, and not a copy of Western analogues. It could become a serious step towards digital independence.’
In a world where control over AI technologies is concentrated in the hands of several global corporations, the question of technological sovereignty becomes particularly acute. Especially for countries that know from their own experience the price of independence.
From prejudices to possibilities
Oleksandr’s approach extends far beyond technical skills. He views education as a tool for creating equal opportunities, where technologies serve inclusivity instead of deepening existing gaps.
The Google AI Essentials programme, implemented by ALLIANCE.GLOBAL, demonstrates that artificial intelligence can become a means of expanding opportunities for traditionally marginalised groups.
Artificial intelligence ceases to be the privilege of the chosen few. It becomes a tool for everyone ready to master it. This process requires not only technical knowledge, but also critical thinking, ethical awareness, and understanding of the social context of technologies.
Registration for the next cohorts of the Google AI Essentials programme is available on the ALLIANCE.GLOBAL website.
P.S. Oleksandr admits: the most challenging thing in teaching AI is convincing people that they won’t become dependent on technologies, but on the contrary, will gain more control over their digital lives. And so far, he’s managing to do just that 😊