
AI FOR THE PUBLIC GOOD

Despite ethical landmines, innovations in artificial intelligence can be leveraged to address society’s problems

By Connor Mokrzycki

In the 15 months since ChatGPT was released to the public, a slurry of new artificial intelligence products—like computer programs that can write in the style of your favorite author and software that can generate images mimicking any artist’s brushstroke—have exploded onto the scene, spurring fears and debates over how AI will shape the future of work, education, arts and culture, and nearly every other aspect of our lives.

Originating in the 1950s, AI is not a new concept, says Kerstin Haring, assistant professor of computer science in the Daniel Felix Ritchie School of Engineering and Computer Science. “The math was solved a long time ago,” Haring says. “But we needed the computational power. And for a long time, we didn’t have the amount of data necessary to make these large models work.”

And while consumer-facing products and services like ChatGPT feel like shockingly new inventions, the underlying technology has played a behind-the-scenes role in our daily lives for years. Everything from credit card fraud monitoring and airline ticket pricing to Netflix recommendations and social media feeds is powered by AI, says Stephen Haag, professor of the practice in the Daniels College of Business. Self-driving cars, AI image generators, bookkeeping and other AI-driven software suites are just the first stage of services enabled by recent improvements in computing power and data infrastructure. Though scary for some, recent developments in AI provide a suite of new tools for researchers across disciplines.

ETHICS AND BIAS IN MACHINE LEARNING

Artificial intelligence describes a broad set of fields, but machine learning—computer programs that recognize patterns in data, build statistical models from them, and then find patterns in new data, make predictions or generate new data accordingly—is the most prominent.

In previous research, Haring and fellow researchers developed Build-A-Bot, an interactive platform that lets users design robots. Haring is planning to train a machine learning system on data from the user-generated designs to build robots that humans can more comfortably and efficiently recognize and interact with.

There are different approaches to training AI: supervised, with humans assisting an AI system to recognize patterns; reinforcement, with an AI system being scored on how right or wrong it is; and unsupervised, where the AI system is given huge amounts of data to process on its own. All three vary in function and purpose, but according to Haring, they share serious ethical implications if not designed carefully. “It’s hard to retrofit ethics into a system,” she says. And for a computer, recognizing a new pattern is no easy task, requiring massive amounts of data to train on.
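For readers who want to see the distinction in practice, here is a minimal sketch of the first and third approaches. It is an illustration of my own, not from the article: it assumes Python’s scikit-learn library and its bundled iris dataset, and it omits reinforcement learning, which requires an environment to score the agent.

```python
# A minimal sketch (assumed example, not the article's) contrasting
# supervised and unsupervised learning with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: humans supply the right answers (the labels y), and the
# model is fitted until its predictions match them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: the system gets only the raw measurements X and must
# find structure (here, three clusters) on its own, with no labels.
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised clusters found:", sorted(set(km.labels_)))
```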

Data used for training is often scraped from publicly accessible web pages or acquired without the knowledge of copyright holders, leading to AI-generated text and images that bear a striking resemblance to the manmade works they were trained on and raising questions about theft and copyright violations. While litigation is underway—and more will certainly follow—there are yet to be substantive regulatory or legislative guidelines on AI data sources.

And, Haring adds, AI is trained to recognize and reflect patterns from real-world data, raising further ethical concerns. “We live in a biased system, so the data that we create is already biased,” Haring says. “By learning the patterns in that data, it can perpetuate and reinforce certain biases—which is a problem.” Like any technology, AI does not exist in a vacuum. Navigating AI’s complex and ever-changing ethical landscape requires diverse perspectives.
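Haring’s point about learned bias can be made concrete with a small sketch. This is my own toy illustration with invented numbers, not data or code from the article: if historical records treat two equally qualified groups differently, a system that simply learns the patterns in those records will reproduce the disparity.

```python
# Toy illustration of bias perpetuation (invented data, not the article's).
# Hiring records are skewed: group A and group B are equally qualified,
# but A was hired far more often. "Learning the pattern" means estimating
# P(hired | group), so the skew becomes the model's prediction.
from collections import Counter

# (group, hired) pairs: identical qualifications, different outcomes.
history = [("A", 1)] * 80 + [("A", 0)] * 20 \
        + [("B", 1)] * 40 + [("B", 0)] * 60

counts = Counter(history)
for group in ("A", "B"):
    hired = counts[(group, 1)]
    total = hired + counts[(group, 0)]
    print(f"group {group}: learned hire rate = {hired / total:.0%}")

# Prints 80% for A and 40% for B: the model faithfully mirrors the
# biased data, reinforcing the very pattern Haring warns about.
```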

“If this technology is used wisely, I think it can radically change our lives for the better.”

