AI is already disrupting how we value and interpret human skills at work. From generating ideas to organizing information to telling jokes, AI's trajectory aims to mimic the human mind.
But what does this mean for the way we relate to AI now and in the near future? As AI simulates cognitive tasks with ever-improving skill, how can CEOs predict or design helpful use cases for it? How can leaders decide how to incorporate AI into their businesses? And how can they accurately judge when AI is a better fit for a task than its human counterparts?
This inflection point in our journey with AI is a good time to ask some questions to uncover a few ways we can use AI to its fullest capacity while staying aligned with our business goals and objectives — and with our ambitions as human thinkers.
In a recent survey of business leaders and executives, 73% of respondents said they believe ethical AI guidelines are important, but only 6% had developed such guidelines for their firms. Leading a team of people toward a common goal is difficult enough without a constant game-changer like AI getting in the way. That's why leaders should pause and ask themselves some fundamental questions before diving into AI:
1. What do ethics mean in an AI world?
The most important consideration for CEOs and other decision makers may be ethics. As AI’s impact deepens, leadership teams will need to make ethical judgment calls. According to a study of business leaders, only 29% reported feeling very confident that machine learning tools are being handled ethically.
As AI’s prediction skills increase, the value of human prediction decreases. And yet, as this happens, the value of human judgment increases; we need humans to play arbiter and attend to ethics. AI may be able to tell you what is likely to occur, but it cannot judge how you should feel about that occurrence.
2. How fast are things changing?
AI is evolving too rapidly for business leaders to grasp it clearly before it shifts again. According to IBM's CEO, Arvind Krishna, 7,800 of the company's non-customer-facing jobs could be "replaced" by AI in the next five years. AI is evolving even faster than many experts thought it would. There's a flywheel effect at work, one reminiscent of Moore's Law: computers are now being improved by AI itself. AI is speeding up AI, and speeding up the rate of change in the bargain.
CEOs and CTOs are—as we speak—trying to establish the use cases for AI to improve their companies. If they can understand the exponential nature of AI’s growth, they will be more likely to choose helpful use cases.
3. Will AI replace human jobs?
This may be the most commonly asked question about AI in relation to work, but it is worth asking again (although perhaps in a new way).
People often answer this question by suggesting a human-AI compromise. Some tasks—creative and empathetic ones, for example—are seen as being irreplaceable. Others—the more menial or digital tasks—are often expected to be taken over by AI while still falling under the remit of human managers and overseers.
The reality is that we often put creativity on a pedestal and prefer to think of AI as collaborating with us rather than replacing us. We may need to throw out this pacifier. The future will be so transformed and shaped by AI’s influence that a job with no potential application of AI will be an exceedingly rare one. AI will enter every door.
4. What will AI mean for our organizational culture?
AI is going to get interesting when it reaches the realm of cultural difference. Cultural relativism is a fascinating test case because machine learning works by mathematically encoding information, while moral information is abstract and relative, depending on where you are, who you are, and what day it is.
Abstractions like culture and morality come with doses of faith and can be inherently contradictory, even within a single individual's framework. The judgments we make on faith are not necessarily logically consistent, and AI will likely find this practically impossible to handle in the near term.
5. Will there be any work exclusively reserved for humans?
Pretty much every job that involves a cognitive task can be replaced or augmented by AI. Some physical or emotional tasks will surely remain in the human wheelhouse for longer, and it may take a very long time for AI to learn to complete a task with as many real-time variables as piloting a plane from gate to gate, for example. But there will be one kind of cognitive task that humans continue to dominate: value judgment.
Think about how decisions are made in your organization. You identify which decisions need making, gather data, make a prediction, and lastly, assign a value to that prediction. Humans are uniquely able to prefer one outcome over another. AI might be able to advise you. It might be able to say that, according to Judeo-Christian culture, you are likely to value one outcome over another. But AI will never be able to "prefer" for you. We don't yet realize how unique and special that ability to prefer is.
6. Do we have an AI approval process that can and will reject AI proposals when they violate our company values?
The idea here is that the final go/no-go decision on AI use cases should be made by a diverse team of senior executives, one that includes legal, HR, technology, and business representatives, who hold the ultimate "D" (decision) authority to approve or reject AI systems.
For CEOs, it may be reassuring to remember that AI is moving so fast that no one is truly an expert. Asking questions and deciding how you and your team want and prefer to use AI are the strongest steps you can take to find your own way in an AI-driven world.