The AI trust imperative

I spent the first part of this week at HumanX in Las Vegas, a conference focused on AI — what else? In his kickoff keynote on Monday morning, conference co-founder and CEO Stefan Weitz talked about the paradox that is AI today. While we are surely in the middle of a hype bubble, that doesn’t mean the technology isn’t real or that substantial companies won’t remain when the dust settles — just as we saw with the internet bubble in the late 1990s.
But as we move through this massive change, we are going to have to believe in this software. People simply won’t be as forgiving of AI making mistakes as they are of their human counterparts, and that’s something the industry has to reckon with.
In a compelling February 2024 essay called ‘It’s easier to forgive a human than a robot,’ Basecamp co-founder and CTO David Heinemeier Hansson wrote that we tend to judge AI much more harshly than humans, holding it to higher expectations for the outcomes.
Weitz obviously recognized this too, even as he was launching a conference dedicated to the subject. “Real progress will not occur without trust and responsibility built from day one, because without trust, all we're doing is building a high tech house of cards,” he said. “If we don't embed trust into AI systems, we're building on an unstable foundation, and scaling on untrustworthy systems will lead to regulatory backlash, consumer rejection and systemic failures,” he added, pulling no punches.
In an industry moving as quickly as this one, with new announcements coming at us daily, mistakes seem inevitable; in fact, a certain error rate is inherent to the technology. That means figuring out an acceptable error rate will have to be part of the AI calculus for every company.
Hansson used the high-stakes example of a self-driving car crash that results in a death — a scenario that has actually happened. As we rely more and more on this technology, it raises serious policy questions for companies and lawmakers. All it takes is one death from a self-driving car to bring cries for tighter regulatory enforcement. Yet human drivers kill people in huge numbers, and there are no calls to take cars off the road.
Even when we’re dealing with much less serious outcomes, like a customer service scenario where nobody dies from bad information, humans may judge the software severely if it gives a wrong answer. A mistake can still result in financial loss or a reputational hit. As Hansson points out, you can’t dismiss human emotional responses to negative outcomes as you build artificial intelligence systems.
Weitz suggested that, as we speed into this AI future, we need to be thinking about this right now. “Embedding trust into AI from the start is crucial to avoid costly mistakes and regulatory backlash,” he said.
Ultimately, this is going to be both a moral and business imperative, especially if we expect this software to scale in line with the vision of moving it into just about every aspect of our lives.
Over time, as the technology improves (assuming it does), we will probably come to see it in a more forgiving light. But at this point in its history, mistakes are likely to be judged harshly, and every company needs to keep that in mind as it navigates this technological shift.
-Ron
Photo of Stefan Weitz by Big Event Media/Getty Images for HumanX Conference