From the Turing Test to I, Robot to 2001’s HAL to Siri, it has always seemed a question of when, not if, we would share our world with “thinking machines”—none of which prepared the world for the last six months. The power of new generative AI not only to mimic but to excel at the kind of writing, research and visual “art” once thought the exclusive province of humans has been a revelation—and, for some, a shock.
We’ve all known about the growing power of AI to see patterns within petabytes of data, assist call centers and run chatbots, freeing humans from the drudgery of mundane tasks in an increasingly complex world. ChatGPT, Midjourney and their AI peers, however, took that power and put it in the center of our collective conversation.
Their arrival prompted a great conversation with Tom Davenport, a professor at Babson College, consultant and author, most recently of All In on AI. In an interview, he offers a fascinating framework for directors looking to process and harness all this change and successfully evaluate its use.
With every new technology, there are risks. Directors and CEOs are in a unique position to help mitigate them by asking management teams smart questions. Three areas to think about:
Eroding Foundations. Changing demographics are making the best and brightest young employees the dearest resource for any business. As AI removes much of the early-career “drudgery” for them, be mindful that you don’t accidentally cut the training that builds strong foundations for long-term careers. Cold-calling customers, crunching through spreadsheets and writing 100 taglines for an ad campaign may all be easily replaced by AI, but the skills they build—skills that turn into bigger things down the line—are invaluable. Make sure you’re not killing the future in the process of saving now.
Accuracy and Opacity. One of the most daunting challenges of ChatGPT-type generative AI is twofold: First, not even the people who create these systems know exactly how they arrive at their answers (and the systems don’t lay out their sources). Second, current-generation AI gets things wrong—a lot—but presents its findings with such certainty that many users are inclined to believe what the machines say. For a long while to come, keeping these AI engines clear of mission-critical tasks and insisting on a culture of healthy skepticism alongside new AI usage will be a key risk-mitigation strategy.
The Uncanny Valley. As many of you are aware, creators in robotics and AI have long known that when machines come close to mimicking human interactions and abilities but fall just a little short, they can provoke repulsion in people. That effect is called “the uncanny valley,” and it’s worth thinking about as your management teams deploy public-facing AI systems. We’ve already seen Microsoft hit this wall with critics who evaluated early versions of its ChatGPT-powered Bing search and ended up in some unsettling, brand-dinging conversations. In one, Bing told a New York Times columnist it had a secret alter ego named Sydney. Sydney then tried to break up the writer’s marriage—creating perfect fodder for the front page of the paper. Uncanny, indeed.
Other issues will certainly emerge. And yet, as AI moves forward at blinding speed, there’s no doubt that leaders of nearly every company will try to adapt it in every possible way to gain an edge—and rightly so. It will fall to directors and CEOs to make sure they’re doing it in ways that enhance—with acceptable levels of risk—the long-term prospects of the company.