GLR September-October 2025

discussions about identity, activism, or lived experiences can be mistakenly flagged as inappropriate, leading to unjust content removal. In addition, AI moderation tools are typically developed using mainstream linguistic patterns, which may not account for the evolving nature of LGBT language. The lack of diverse, representative data sets contributes to inaccurate moderation outcomes, disproportionately affecting queer users and limiting their ability to engage freely in online spaces. AI moderation also struggles with the complexity of cultural and contextual differences, especially when moderating LGBT content across different regions and languages. Expressions of identity and queerness vary globally, and what is considered neutral or affirming in one context may be misinterpreted as offensive or inappropriate in another.

Transparency is another concern. The decision-making process behind AI moderation is often opaque, leaving users without clear explanations when content is flagged or removed. Many platforms don't disclose the criteria AI uses to evaluate posts, making it difficult for LGBT people to understand or contest moderation decisions. While AI can efficiently identify obviously harmful content, it lacks human judgment and can misinterpret satire, irony, or nuanced discussions about discrimination. And there are concerns about getting permission for personal data to be used in training the AI model, and about keeping the data set secure and unavailable for other AI models to copy.

As AI moderation becomes more embedded in digital platforms, making it work effectively for LGBT communities demands a combination of ethical intention, technical precision, and community collaboration. Improving AI's accuracy starts with diversifying training data and including LGBT voices in its development. When community members contribute to shaping moderation tools, they help encode lived experience into algorithms that often misunderstand them. Still, AI alone is insufficient; human oversight is essential. Content flagged by machines must be reviewable by trained moderators who understand cultural context, evolving language, and community dynamics. Platforms must ensure that appeals processes are accessible and that decisions can be reversed when mistakes occur.

Advocacy organizations like GLAAD and OutRight International have already contributed to shaping industry standards and can play a continued role in setting ethical benchmarks for AI moderation. Their involvement ensures that queer safety online is not treated as a technical afterthought, but as a core design principle.

Transparency must also be central. Platforms should clearly communicate how AI moderation works, what data it draws from, and how users can contest decisions. Feedback loops in which users report misclassifications and offer corrections are especially important in LGBT spaces, where language evolves rapidly. Models that learn from user feedback grow more accurate over time, building trust while improving functionality. Ultimately, AI moderation for queer communities is most effective when it's accountable, participatory, and context-aware. By centering the needs and expertise of LGBT users, platforms can move beyond generic safety mechanisms toward systems that actively protect vulnerable groups without silencing them.

The development of AI moderation tools cannot happen in isolation. Cross-industry collaboration between tech companies, LGBT advocacy groups, researchers, and policymakers is necessary to create fair and inclusive moderation systems. By sharing best practices, research findings, and AI training improvements, platforms can reduce bias and enhance their ability to protect marginalized communities. Open-source initiatives, in which AI models are continuously refined with input from diverse communities, may also play a role in shaping better moderation practices.

Ensuring that AI moderation serves LGBT users effectively requires ongoing evaluation, transparency, and collaboration. While AI has the potential to foster safer online spaces, its success depends on how well it is designed, implemented, and continuously improved through real-world feedback. As AI continues to shape how we interact online, it must be guided by the voices of those most affected. Queer communities have always been innovators in digital culture—now they must also be collaborators in building the systems that protect it. AI alone cannot guarantee safety, but when paired with community insight, advocacy, and intentional design, it can help carve out spaces that are not only safer but more affirming.

Argentina

In Buenos Aires
The thunderstorms
Roll in off the pampas
Like crashing surf
The city is a sand castle
Pretty
Insubstantial
Illusory

I fell in love there
With a poor boy
From "la provincial"
He was beautiful like a seashell
We gathered trash
Along the beach
At Punta del Diablo
For free beer
But I lost him
In the froth of surf
That battered the foundations
Of the great dream city

Later there was a rich boy
From Recoleta
A vase
An objet d'art
But unlike the seashell boy
When you put your ear to the vase boy's
Beautifully scripted heart
There was no sound
Like lightning
But with no sound
Of thunder
After I went away
Out across the pampas
To where the thunderheads rose up
There I met a gaucho boy
Who told me:
"They pop up like toadstools
Proud and towering
And go away too soon"

TREBOR HEALEY
