Newswise — Professors Melville and Hassan join this episode of Business and Society to discuss the challenges of artificial intelligence. They explore the varied and emergent risks associated with AI, the difficulty of governing a rapidly developing technology, layoffs, human agency, systemic biases, and the environmental toll of data centers. The panel focuses on the need for transparency around AI’s implementation and the importance of arriving at a shared societal understanding and vision for AI’s positive integration into the business world.
Responding to Biases in AI
Much of the discussion centers on the role of humans in creating, regulating, and reacting to AI. One particularly fraught interaction is the handoff of human biases to supposedly objective technology. Hassan explains that while mitigating learned bias in AI systems is important and valuable, eradicating it completely is impossible. More important is creating a framework for responding to these biases through public policy and regulation.
“Bias is going to always be there. I don't think we can really remove it. I think what's more important for us is to think about how we can have an infrastructure, in terms of policies, regulations, deliberations... that can actually help us deal with the biases. They are going to happen. But when they do happen, how can we handle that?”
Energy Transparency
The panel explores the possibility of sustainable AI, discussing the energy needed to power AI’s data centers. Hassan touches on communities directly affected by the construction of local data centers and their ability to negotiate water usage and data governance. Melville stresses the importance of transparency around energy usage so that communities can hold businesses accountable.
“I think one of the best things that could be done, whether it comes about through regulation or self-governance, is transparency. Right now, the major infrastructure providers, Microsoft, Google, etc., are not transparent about so-called ‘local emissions and energy use.’ The lack of transparency means that we — as consumers, institutional investors, governments, and other important stakeholders — have no idea how bad it is.”
Without clear data, it is impossible to decide which companies to engage with based on environmental sustainability standards. Melville also notes that self-regulating energy use benefits AI developers themselves, since no company wants higher electric bills raising its fixed costs.
Finding a Shared Understanding and Vision
To close the conversation, Melville and Hassan discuss how business leaders, students, and policymakers can positively shape the integration of AI into our daily lives.
Hassan returns to the importance of democratic deliberation on the role people want AI to have in our society.
“AI is not really good or bad. But it's also not neutral, meaning that we can shape the technology. Society has the ability to do that. But that's going to require some hard efforts… A lot of actors and interest groups are making these decisions on our behalf, which is not a good way to have responsible, sustainable innovation. So, in a lot of ways, the focus on democratic deliberation around AI and what society wants to get out of it is so important. And we're really at a time where it's becoming more and more urgent,” shared Hassan.
Melville agrees, focusing on the role that university community members can take.
“We need to bring together artists, historians, political scientists, economists, and we need to come to a shared understanding. What is this AI thing? What does it mean to us? What are the possibilities? What are the risks… a shared understanding and a shared vision of what we can do, grapple with, and develop solutions to these very vexing problems. It's a responsibility and a great opportunity for a great university like the University of Michigan,” shared Melville.