Think Responsibly 

The realities of responsible AI—and what it would take for our society to actually build it

If you're wondering how fast technology is progressing these days, you should know that just this year, a nostalgic man used artificial intelligence to resurrect an interactive version of his childhood imaginary friend: the family microwave. That microwave, named Magnetron, promptly accused him of abandonment and tried to kill him.

Look, it's a long story/Twitter thread, but it's also quite illustrative of the moment we find ourselves in as a society: Things are moving very quickly, fueled in equal parts by the exponential pace of technological innovation and our own voracious appetite for smarter and more performant applications. Measured purely on the basis of continually audacious achievement, our society is positively thriving. 

But progress never lacks consequence, does it? The more complete reality is that the growing intensity of consumer expectations (and the business pressure to quickly address them) leaves executives, scientists, and developers feeling as if there is dwindling room for technological conscientiousness—especially when dealing with tools and tech they don’t always fully understand. To wit: Some large language models trained to predict the next word in a sentence are somehow also able to multiply two-digit numbers—but we don’t fully understand why or how.

Artificial intelligence almost certainly sits at the crux of this societal conundrum, and not just because we don’t always fully understand it. Nowadays, many organizations are constantly innovating with AI, leveraging it as the central tool in a dead sprint to create the next groundbreaking “thing” that wins market share and solidifies competitive positioning. That reality—our ability to innovate with and within AI—also puts such technology in exceedingly rarefied air: It’s one of the few advancements in human history that would qualify as both a GPT (general-purpose technology) and an IMI (invention of a method of inventing).

But, as weirdly evidenced by a suddenly homicidal microwave, AI still requires that we govern it. It’s not cognition, no matter how much we may erroneously humanize it. At the end of the day, it’s just a tool that can be applied to a vast array of problems. 

The burden of AI’s potential, then, is also squarely on us: to use it to create transformative impact, while still doing so responsibly. That means we need frameworks, principles, conferences, education, and everything in between, all in pursuit of one succinct, shared societal creed: responsible AI.

For most academics and researchers, that’s the immediate goal. But the experts at Boston Consulting Group (BCG) are already thinking past that horizon—to a day when responsible AI is so ingrained in society that we no longer need to talk about it explicitly.

The people problem with AI

First, responsible AI might be a bit of a misnomer. It’s not about trying to make artificial intelligence act “ethically”; that would be humanizing a non-cognitive tool. Rather, it’s focused on how leadership and development teams approach the tasks of designing, building, deploying, and using AI systems that adhere to organizational values while achieving their business objectives.

When you put it that way, it sounds kind of easy. But take it from Steven Mills, BCG GAMMA’s Chief AI Ethics Officer: It’s not. 

“Evaluating the risks and benefits of AI systems is devilishly hard. The challenge is that there often isn’t a clear right or wrong answer,” Mills says. “Decisions need to be values-based, but there isn’t a shared set of values among all the stakeholders involved. That’s why it’s so important to anchor on an organization’s purpose and values in a way that is transparent to employees, customers, and the broader public.”

One of the challenges is that, broadly speaking, risk and benefit assessments haven’t always been humanity’s core competency. We struggle with them even in simple situations, like choosing healthy foods to eat; the more complex the risk assessment, the more the problem compounds. Even in the rare instances where a colossal risk is clear, we can’t always be trusted to act toward a positive outcome. Climate change comes to mind. If you haven’t heard of the gray rhino effect, now is a good time to investigate. 

But to Mills’s point, making values-based decisions based on how we assess a risk is frequently more complex than assessing the risk itself: Values are inherently subjective, meaning two individuals’ “values” could lead them to two different risk responses. Let’s stick with healthy eating: While one person may view personal health as a responsibility to the society around them (in lowering healthcare costs, for example), another may champion the personal freedom of lifestyle choice. Neither approach is inherently wrong, but they can easily find themselves in conflict. 

In our society’s defense, making values-based decisions in response to risk—whether as buyers or AI developers—is becoming more complex by the day, largely because our world is becoming more complex by the day. Domino and multi-order effects—always notoriously tricky to plan for—are becoming increasingly difficult to manage, as the number of variables in an increasingly digitally connected society proliferates faster than we can account for them.

That word—“society”—is important here, because many of the short- and long-term risks of AI development intertwine with deep-rooted sociological problems that are extremely difficult to solve for. An AI system that helps assess loan applications, for example, is often powerless to fully account for the systemic socioeconomic issues that put some populations at an inherent disadvantage in qualifying for a loan and subsequently repaying it. Those issues are baked into the system in which the AI functions; this is where things like systemic racism come into play, where entire societal structures have been wired to operate in a fundamentally biased way. AI systems are inherently socio-technical, and that means they require socio-technical approaches to mitigate some of the harms that can arise from their irresponsible design, development, and deployment.
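To make that idea slightly more concrete, here is a minimal, hypothetical sketch of one narrow socio-technical check a lending team might run before deployment: comparing a model’s approval rates across demographic groups. The group labels, sample data, and the four-fifths-style threshold are illustrative assumptions, not a BCG method or a complete fairness audit.

```python
# Hypothetical sketch only: a simple approval-rate disparity check.
# Group labels, sample data, and the 0.8 threshold are illustrative
# assumptions; a real audit needs far more context and stakeholder input.

from collections import defaultdict

# Each record: (demographic_group, model_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Share of approvals per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: round(approved[g] / totals[g], 2) for g in totals}

def flag_disparity(rates, min_ratio=0.8):
    """Flag groups whose approval rate falls below min_ratio of the
    highest group's rate (a rough 'four-fifths' style heuristic)."""
    best = max(rates.values())
    return {g: (r / best) < min_ratio for g, r in rates.items()}

rates = approval_rates(decisions)
print(rates)                  # {'group_a': 0.67, 'group_b': 0.33}
print(flag_disparity(rates))  # {'group_a': False, 'group_b': True}
```

Even a check like this only touches the technical slice of the problem; the systemic disadvantages described above still require policy, process, and human judgment that no model audit can supply on its own.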

To round out the accounting, there’s another dimension that humans aren't great at: recognizing our own cognitive biases. Such biases have a profound impact not just on how an executive or developer perceives an AI tool’s risks or benefits, but also on how we mitigate those risks and how the AI performs. After all, artificial intelligence tools learn from data steeped in our biases and reflect the values of their developers. AI can’t solve that problem.

It's worth noting that when we think about bias, we usually do so at the individual level. That’s the kind of bias that makes the news, and it’s where most of the formal study of fairness in AI has been conducted. Where bias gets significantly less attention, however, is at the community level—despite the fact that such bias is arguably more complicated and usually has massive ramifications. For example, when a company uses AI to decide where to open and close retail locations, it is affecting every community where those stores sit. There are winners—job creation, tax revenue, property values—and losers. Systematically moving locations away from socioeconomically disadvantaged communities perpetuates—and accelerates—long-standing issues.

Add up all of those factors—risk assessment, societal complexities, and overlapping layers of cognitive and community bias—and the obstacles to instilling a responsible AI mindset across the world can feel insurmountable.

And that’s precisely the kind of work that Mills and the BCG team are here for. 

Operationalizing the theoretical

All these obstacles, both at an individual and societal level, mean we’re a long way off from that ideal state where responsible AI is so ingrained in our thinking that we don’t actually have to think about it much at all. That may be the endgame, but right now there’s still a journey ahead. 

Currently, Mills sees most companies relying on two primary tools to manage the ethics behind their AI development: federal and state regulations and corporate principles. It’s a good start, but neither covers all the bases. For Mills, regulations should be the minimum bar, one that organizations committed to a truly responsible AI approach clear without a second thought. Too many companies, however, are still focused on doing just enough to meet emerging regulations, adhering strictly to the letter of the law as opposed to also considering the spirit of it—and thus essentially outsourcing their moral obligations to regulatory frameworks.

And while establishing corporate principles can be helpful as a broad guiding light, most sets that Mills has come across are too high-level to be actionable. These documents often speak of concepts like transparency, fairness, or explainability, but not of tactical product-development steps. Principles don’t actually create action unless explicit steps are taken to operationalize them across an organization. That distinction between talk and action is not lost on discerning customers.

“There was a perception two or three years ago that if you made principles, everyone would follow them, and the responsible AI problem would be solved,” says Abhishek Gupta, Senior RAI Leader and Expert at BCG, and Founder and Principal Researcher at the Montreal AI Ethics Institute. “But that’s not enough; it’s just words on paper. It’s not that people are malicious or ignoring them; they just don’t know how to translate them into their day-to-day work.”

The BCG GAMMA team is helping dozens of organizations do precisely that. But for Mills, the deeper need is for something far more foundational: a broader cultural change in how our society approaches AI development and deployment, one focused on possible inadvertent outcomes and risk mitigation. And that's where responsible AI evolves into something much bigger than simply committing to a particular checklist or process. “You have to think about it as catalyzing cultural transformation,” Mills says. “We have to put in place governance and processes and new tools, but those are pieces. We’re ultimately driving to broader cultural change so that responsible AI is in everyone’s subconscious.”

Here’s the million—billion? trillion?—dollar question: What does it take to achieve this kind of cultural change? 

Education, conversation, and a commitment to such change are all critical. But it’s also going to require bringing specific schools of thought to the countless AI-related decisions made every single day. The most fundamental principle for any developer or executive to internalize may come down to intent: Before you even start building an AI tool, you’ve got to truly understand what you’re optimizing for, and why. Then you’ve got to consider the risks or downsides of what you’re building and come up with mitigation plans accordingly.
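One way to picture that intent-first discipline is a lightweight pre-build record a team fills out before any model code is written: what the system optimizes for, why, and which risks get which mitigations. The sketch below is hypothetical, with invented field names and example values; it is not a BCG framework, just an illustration of forcing those questions up front.

```python
# Hypothetical sketch: a lightweight "intent and risk" record completed
# before building an AI system. Field names and example values are
# illustrative assumptions, not an official framework.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str  # what could go wrong
    affected: str     # who would bear the harm
    mitigation: str   # concrete step planned to reduce it

@dataclass
class AIUseCaseIntent:
    name: str
    optimizing_for: str   # the metric the system actually pursues
    why: str              # the business and user rationale
    out_of_scope: list = field(default_factory=list)
    risks: list = field(default_factory=list)

    def ready_to_build(self) -> bool:
        """Crude gate: at least one risk identified, each with a mitigation."""
        return bool(self.risks) and all(r.mitigation for r in self.risks)

# Example with invented values, echoing the retail-location scenario above
intent = AIUseCaseIntent(
    name="store-location recommender",
    optimizing_for="projected revenue per site",
    why="prioritize limited expansion capital",
    out_of_scope=["automated closures without human review"],
    risks=[Risk(
        description="shifts investment away from disadvantaged communities",
        affected="residents of deprioritized neighborhoods",
        mitigation="community-impact review before any final decision",
    )],
)
print(intent.ready_to_build())  # True only because every risk has a mitigation
```

The specific fields matter less than the habit: no build starts until the team has written down what it is optimizing for and how each foreseeable downside will be handled.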

And at an organizational level, the entire process has to be underpinned by an approach to building AI that embodies the company’s existing values. This is where Mills and BCG often come in. BCG works with a wide range of companies—from humble small businesses to industry leaders like Microsoft—to develop resources and protocols that create a culture of responsible AI usage. The team avoids adding new processes unless they’re necessary, instead embedding concrete steps within each organization’s existing operations, philosophies, and risk governance.

Now add it all up: When an organization focuses on intent, considers risks and mitigation plans, and aligns work to company purpose and values, things like regulations are already the minimum bar. Societally, that’s a huge step in the right direction. 

The benefits to the bottom line

Still, few organizations have fully adopted this way of thinking. To push this cultural shift further, Mills and BCG believe many companies need to see more tangible benefits of responsible AI emerge. But to Mills, that case is already clear. “I actually think you should measure on the bottom line,” he says. “I don’t agree that operating unethically is more profitable. There are real business benefits stemming from responsible AI.”

In other words, building AI irresponsibly isn’t really a business advantage; it’s more like a shortcut that doesn’t pay off. According to a recent joint BCG and MIT-SMR report, companies with responsible AI frameworks in place experience failure rates of just 23 percent, compared with 32 percent for those without them. From a dollars-and-cents perspective, that gap is hard to ignore. And organizations with mature responsible AI programs are already seeing faster innovation, stronger customer trust, higher profitability, a culture of responsible innovation, and improved employee recruiting and retention.

The harsh reality is that unethical AI also poses massive organizational risks—in brand perception, regulation, and litigation. The world may still be struggling to learn how to build AI correctly, but governments and legal entities are rapidly improving their ability and willingness to sanction those who don’t. What was once the Wild West is becoming increasingly scrutinized, and with good reason. In some ways, brand-perception risks could be the most substantial: Companies may have a business imperative to build AI, but they need customer trust to use it. And when a company violates public trust, it can be very difficult to win it back. 

“Failures happen,” Gupta says. “But what erodes trust is if you’re not transparent about it, try to hide it, and pretend it’s not a problem. If you’re open and say, ‘We got it wrong, here’s how,’ or ‘Here’s what we’ll do to fix it,’ it builds more trust.” 

So how, ultimately, do you measure success? Sure, the number of failures caught is a good marker; the sophistication and seamlessness of the conversations around those catches are even better. But the real test comes back to the vision Mills and his team have been striving for from the start: the one where responsible AI is no longer something we need to talk about in these kinds of conversations, because it’s as natural as breathing. When we get there, that’s when we’ll know the approach has become truly foundational.

The responsibility for change

For now, BCG’s immediate task remains helping more organizations begin or continue embedding responsible AI into their operations. It’s a long row to hoe, but the fact that these kinds of conversations are multiplying is evidence that we’re on the right path.

The upside of causing a genuine culture shift toward responsible AI is clear to Mills and Gupta, and it’s a responsibility they take seriously. They want to see a society where AI exists as a powerful tool, where it’s used in a responsible way by default, and where the technology is paired with humans in order to leverage the strengths of both—combining AI’s pattern recognition with humanity’s creativity and ability to intuit nuance. 

Most importantly, they want “responsible AI” to become a redundant term: If a company uses AI, it has long since embraced the responsibilities that come along with it. If BCG can help make that a widespread reality, it might finally become possible to tackle some of the thornier issues of the day, from equal housing to fair credit and unbiased hiring. And in that sense, what these responsible AI advocates are trying to build isn’t just more conscientious organizations—it’s a more inclusive world, one in which companies earn a social license from their stakeholders to operate AI systems.

“This is why it’s important to have bigger conversations about responsible AI. The issues we could tackle aren’t always black-and-white with a clear right and wrong,” Mills says. “The more we can have a dialogue with consumers and companies and stakeholders to start normalizing this, the more we can focus on things like the systemic problems policymakers can help with and how we land on decisions in the gray zone.

“As a society, it’s important to have these discussions,” he says. “We as consumers have more power than we realize.” 

This article was produced by WIRED Brand Lab for Boston Consulting Group.