The age of AI-first mandates is here

Shopify CEO Tobi Lütke turned heads last month with his company-wide AI-first mandate, and it didn’t take long before other companies like Fiverr, Box, Duolingo, HubSpot, Meta and Google followed suit with similar pronouncements. Although these directives vary somewhat, they all share the same core goal: pushing employees to use AI to improve how they work and how efficiently they do it.
I didn’t get a mandate from my employer, but as the sole person responsible for producing this publication every week, I’m looking for any edge I can get, and I figured AI should be able to help, right? And yes, it can help with some of my tasks.
I’ve used AI for research, to query across multiple sources and as my editor. It’s worth noting that I still send my commentaries and profiles to a human editor for final review, and she always finds something the AI missed. The human difference, I guess.
Then there’s the matter of AI accuracy. Let’s just say I’ve had mixed results. When AI works, it works great. As I was preparing to write this commentary, I wanted to collect the list of companies I referenced in my opening. I went into Perplexity, and after some back and forth with the model, I asked it to compile the results in a table with the company, the name of the CEO, the source and the result, if available. It produced this:

Pretty great, I have to admit, but that’s not always the case. The other day, I dropped a PDF of an interview transcript into ChatGPT and asked for a summary of a particular part of the transcript, with quotes and timestamps to back up what it found, and it did something entirely bonkers: it made up several quotes. Even with the actual transcript to work from, it invented material that wasn’t in it.
The trust factor
But even if AI worked flawlessly, there are deeper issues at play, like trust and fear. As I wrote in this space several weeks ago, people are much more willing to forgive human mistakes than bot errors. When AI gives us bad information, it makes us question the accuracy of everything we get from these models.
But it’s more than that. While I agree that we should all be exploring how these tools work, there is a fundamental resistance to AI among some employees right now, mandates be damned, as Sol Rashidi, a former AWS exec, noted while speaking at an RSA event last week. And in her view, why wouldn’t there be?

“How are we going to market with artificial intelligence right now?” she asked. “It’s the new digital workforce. We’re going to increase productivity and capacity, which means, what are we going to do? Reduce headcount, decrease teams, be more lean – and you’re starting to see a lot of that happen,” she said.
It’s a blunt truth. In her view, workers may see AI not as a helper but as a threat. “Well, why on earth would any knowledge worker document what they know into a PDF and push it into a large language model? You’re threatening their longevity and their future. Why would they?” Why indeed. Self-preservation might suggest resisting AI, but if AI is the future, then survival may depend on understanding it. And that brings us to the training problem.
The training problem
Even folks who want to learn AI with the best of intentions may have trouble mastering these tools. I consider myself tech savvy. I’m willing to try new technologies, experiment and see how this all works for me, but not everybody is like that. A lot of people know what they know when it comes to tech, and they get flustered when they’re pushed outside their comfort zones. That’s perfectly normal.
Yet these companies are asking, and in some cases requiring, their workers to embrace AI fully, even though most of them are not preparing their employees for this new reality. It requires a new way of working. People who are not comfortable with technology are not going to just suddenly “figure it out.” They’ll need support: peer mentors who are actively exploring the tools, formal help desks to answer questions and structured training programs. Unfortunately, in many cases, that support just isn’t there.

Consider that a recent survey conducted by AWS (which I wrote about the other day) found that just 56% of respondents had a training plan in place. That leaves a lot of companies kicking training down the road (along with security). Yet we are seeing companies include AI skills in job postings while requiring their current employees to start using it. There’s the AI. Have at it, folks.
I get it. AI has the potential to be a game-changing technology. I’m not denying that. But for companies to simply expect that their employees can suddenly (magically) start using it to increase their productivity, well, it’s not that simple. They still have their day jobs to do while they are supposed to be learning this new way of working. Will leaders demanding an AI-first approach tolerate a short-term dip in productivity while employees figure out how to use AI tools in their jobs?
And as I have learned from personal experience, this is not easy. There are so many tools with new functionality coming at us almost every day. Mandates are easy. Transformation isn’t. Without training, support and time, these mandates will just be empty words.
~Ron
Featured image by Igor Omilaev on Unsplash