AI stakeholders benefit from faster action on risk, not slower innovation

Written by Natalia Smalyuk

Powerful AI systems are here. It’s only human to freeze like a deer in the headlights and try to pause or ban the march of the machines, a big black box of the unknown. In hindsight, though, technology breakthroughs teach us that kicking the can down the road comes with its own risk. Patchwork fixes applied after problems emerge – not before – make organizations fragile.

There is an alternative – instead of slowing down innovation, go faster on learning how to mold the unknown into something that benefits humanity.

Imagine this fictional future scene. A wearable device not unlike a Fitbit or an Apple Watch alerts you, in a Siri-like voice, to a red flag in your blood pressure, oxygen levels or heart rhythm. Urging you to get it checked out, your personal AI assistant puts you in touch with a health provider. The early diagnosis saves your life.

Even if something like this sounds like a far-off utopia, the integration of AI into products and services is not. Burying organizational heads in the sand is simply not a viable choice.

The bad news is that technology always has a dark side. When nuclear energy arrived in the world, it brought both radiation therapy that treats cancer and nuclear weapons that haunt humanity.

Technology does not care how it is used, misused or abused. Left to their own devices, innovators may be more concerned with being first to market than with bettering the world. Facebook – now Meta – was created to build community, but it also became a vehicle for cyberbullying, hate speech and fake news.

The good news is that many risks can be unpacked before they wreak havoc on the world if we are intentional and systematic about them. There are great tools in the arsenal of strategic planning, crisis management and enterprise resilience for dealing with the unknown, flipping hindsight into foresight. What if the future chatbot that flags an issue in one’s oxygen levels is hacked by a criminal? Possible options: pause the technology, ban it altogether or regain control by building safeguards against cyber risk.

Even if generative AI is still terra incognita, there are lessons to learn from the past – technology crises in other times, industries and countries. Why did they happen? What could have made them less likely? How could their negative impacts have been reduced?

The question is: who’s going to think about the risks?

It’s not unreasonable to expect policy-makers to take much longer than the six months requested in the much-debated moratorium petition to set the right governance, ethical and safety rules around AI. When governments do intervene, they won’t have a sense of how the new technology may affect a specific organization – say, a health provider. Only its stakeholders, such as physicians and patients, can grasp the benefits, risks and trade-offs of AI applications in their unique environment. In the meantime, uncertainty should not bottleneck decisions that save lives.

A culture of foresight starts with boards, whose primary responsibilities are strategy and risk oversight. They steered organizations through the worst of the pandemic, and, in the next disruption, their guidance and oversight will matter just as much.

While CEOs may be preoccupied with the here and now, boards should champion future thinking. For example, they can recommend that management co-explore with data stakeholders how AI will operate in the real world.

If developers asked patients what they want, aside from a cure, I bet patients would say they crave a kindness no machine can replicate. In fact, a lack of empathy could be the biggest risk of AI innovation cocooned in the tech silo.

I have had the pleasure of working on both strategic and crisis planning engagements, and what strikes me is that, in many organizations, these are separate processes, although there are clear benefits to integrating them. For example, planning across scenarios accelerates both innovation and resilience. It’s not about producing neat-looking documents, but about pressure-testing emerging capabilities to uncover what can go wrong and how to get it right.

Scenario simulations are a powerful learning tool. They don’t need to be stressful; in fact, a little play helps. How about an offsite team-building war game where AI is the adversary? The goal is to anticipate the moves it could make to poke holes in strategic plans – before the enemy does.

The time for a clean sheet is now. Prioritizing foresight over hindsight, boards can steer organizations to co-create, with their stakeholders, solutions that serve human progress. Instead of waiting for AI to happen to us, we can imagine the future we want and make it happen.

NBAU Consulting offers strategic communication and crisis leadership services to guide organizations through change, facilitate understanding and build resilience.
