GLR September-October 2025
ESSAY
Tech v. Hate: Can AI Make Us Safer?
SHAHEENA SHEIKH
AS ARTIFICIAL INTELLIGENCE quietly reshapes how interactions within online communities are policed, LGBT users find themselves at the center of a complex question: Can technology keep us safe without compromising who we are?

Online spaces can serve as lifelines for LGBT people, providing platforms for connection, self-expression, and community support. For many, especially those in regions where physical safe spaces are scarce or nonexistent, digital communities offer the only avenue to engage with others who share similar experiences. Research conducted by the HRC Foundation shows that LGBT people rely upon online platforms at higher rates than do other users to access support networks, resources, and affirming interactions.

However, these spaces are frequently sites of harassment, hate speech, and discrimination. Researchers have found that LGBT people face disproportionate levels of online abuse, with platforms struggling to mitigate these issues. This persistent hostility not only deters participation but also can affect users' mental health, forcing many to limit their engagement or leave platforms entirely.

AI-powered moderation has emerged as a potential tool with which to address these challenges. By detecting and filtering harmful content in real time, AI could help create safer environments for LGBT users. Platforms have already begun implementing AI-driven moderation to curb hate speech and harassment, demonstrating its potential effectiveness in improving digital safety. But this technological turn raises larger questions: What does it mean to have machines decide what's safe or offensive in queer spaces? If AI can filter out harassment, can it also end up filtering out dissent, satire, or nonconformity? And if every community builds its own protected digital enclave, do we risk further fragmenting the online world into sealed-off silos?

There are also significant limitations to AI moderation of LGBT spaces. Algorithmic biases, cultural context misinterpretations, and a lack of LGBT-specific data sets pose challenges to ensuring fair and effective moderation. While AI has the potential to be a tool in safeguarding digital spaces, its implementation must be carefully designed to account for the complexities of LGBT discourse and identity.

Tech companies, still trying to reduce online abuse, increasingly rely on algorithms to detect and remove it. This shift could be a double-edged sword. While it offers new protections, it also risks misjudging the language of queer communities or enforcing opaque moderation decisions without human sensitivity. When misinterpreting the context of a discussion, AI systems often act in binary terms, flagging, deleting, or banning anything that's new or nuanced. This raises questions not only about safety, but about who controls queer visibility in digital spaces, and how.

While no AI tool has been built specifically for LGBT safety, a few tech partnerships and moderation experiments hint at how automated systems have been used, sometimes effectively, sometimes clumsily, to respond to abuse. AI moderation tools have been implemented across various platforms to combat harassment, including specific applications that benefit LGBT users. The gay dating app Grindr has partnered with Spectrum Labs to deploy AI that flags hate speech and abuse before messages are delivered, with a stated goal of reducing harm and creating a more respectful dating environment. Bodyguard.ai was used during the French Open tennis tournament to shield athletes from targeted abuse. While not queer-specific, it showed how real-time filtering can blunt large-scale harassment, something queer users regularly face online. Shhor AI, created by a queer engineer in India, moderates pages on the online forum Reddit using South Asian linguistic cues to detect hate. Its community-informed model constitutes a rare but important example of queer-led tech moderation.

While existing AI moderation tools have demonstrated their ability to combat some online abuse, there remains significant potential for refining these systems to better serve LGBT communities. Many AI moderation models struggle with identifying and mitigating anti-LGBT abuse, in part due to algorithmic biases, gaps in LGBT-specific training data, and challenges in distinguishing harmful content from reclaimed language or community discourse. To address these shortcomings, AI moderation systems must incorporate more diverse linguistic datasets that accurately reflect the ways in which we communicate online. Studies have shown that AI tools trained on mainstream datasets often fail to detect nuanced slurs or coded harassment tactics targeting queer users. Enhancing dataset diversity and improving contextual analysis could allow AI tools to flag harmful content more accurately without silencing legitimate discussions.

Additionally, partnerships between LGBT advocacy organizations, tech companies, and AI researchers could strengthen moderation efforts. Collaborative initiatives could help refine detection algorithms, ensuring they account for cultural and regional variations in LGBT discourse. Research-backed improvements in sentiment analysis and context-aware filtering could further reduce false positives and make automated protections more effective.

Many platforms, such as X (formerly Twitter), YouTube, and TikTok, have used real-time filtering systems to flag and remove offensive language, slurs, and harmful phrases (though
Shaheena Sheikh is a freelance writer based in India.