In recent months, the buzz around AI has become ever-present, constantly dominating headlines that proclaim an “AI revolution” is under way. Companies are already racing to launch their own AI services, some hiring and working around the clock to adopt the technology as quickly as possible. Reminiscent of the dotcom boom, the “AI revolution” has companies, investors and workers in every field and industry clamoring for a piece of the action.
Having been around for various iterations of tech bubbles, I have to wonder, are companies moving a little too fast on AI? The dotcom bubble infamously burst, and while AI might not meet the same exact fate, if businesses don’t tread carefully, they could find themselves in troubled waters. Ethical, legal and reputational crises are all of particular concern. As we’re already seeing, with all the AI fervor comes a whole lot of controversy.
Take a look at the disaster studios and streamers have created for themselves, having forced lengthy duels with the WGA and SAG-AFTRA unions in large part because of their efforts to lean into AI. Writers spent months on the picket line until an agreement was reached, and actors are still striking, fighting an existential threat to their professions out of concern that AI may be used to devour their livelihoods. There’s also the controversy Zoom sparked after updating its terms of service in a way that suggested the company would use customer data for AI training, and the Columbus Dispatch’s AI sports reporting, which was put on hold after it produced problematic and nonsensical AI-generated articles.
AI is a technology evolving at rapid speed, and companies understandably want to stay ahead of the curve. However, it’s critical for executives to keep in mind the brand risks that come with adopting a new technology too quickly. With that in mind, let’s delve into some insights and best practices for avoiding an AI-generated crisis.
Do your research. With any change or launch, it’s crucial to understand your audience and their perceptions. AI has drawn an onslaught of public disdain for a variety of reasons, including replacing jobs, spreading false information and stealing content from artists, creators, entertainment companies and news organizations to train AI models. AI isn’t simply a new technology; it’s a new technology with a lot of unknowns, marred by controversy and surrounded by ethical, legal, copyright, privacy and labor concerns.
With that in mind, if you lead a company looking to adopt AI, or if you want to start an AI-based company, it’s vital to know what people are saying about the technology before any sort of rollout. Few things are more dangerous to a brand than embracing something without fully understanding it; do so and you might find yourself and your company embroiled in controversy, backlash, brand damage and legal trouble. For example, Prosecraft, a small literary analytics project that used AI to analyze large catalogs of novels, was abruptly shut down because it never got consent from the authors to use their work. Prosecraft and its leadership would have benefited from understanding the ongoing debate over lifting other people’s work for AI use cases, especially the legal ramifications. Perhaps things would have been different had the project obtained proper permission from authors and outlined and implemented stringent guardrails for how the data collected from their work could be used.
Before jumping in, know what’s being said about AI, both positively and negatively, in your industry and at large. Make yourself aware of the potential risks, protect yourself and your brand from public scrutiny, and operate with due diligence and respect for creators’ IP. It may seem like the Wild West, but the court of public opinion is always issuing rulings, so don’t expect to get away with highway robbery.
Be transparent. All organizations, big or small, should do their best to be transparent with their customers, clients, employees and other stakeholders. Transparency lessens the potential for a crisis and, in some cases, may help you avoid litigation. In a world increasingly shaped by AI, transparency has taken on a new level of importance: suddenly, using anyone’s data to train AI models could put a company in hot water.
Take Zoom, for example. A change to the video conferencing platform’s terms of service asked users to agree to share data to train its AI. The update led to a public outcry over privacy concerns, and Zoom ended up revising its terms of service, saying it wouldn’t use customer data to train AI without consent.
Zoom could have avoided the whole fiasco by simply being more transparent about leveraging user data in the first place. The company should have given users the option to opt out of data sharing from the start. At the very least, it could have issued a press release, email or social media post flagging the possible change before updating its terms of service. This also underscores the importance of conducting extensive research prior to adopting AI: Netflix’s popular show Black Mirror had recently released an episode highlighting some of the privacy and consent concerns surrounding AI, specifically the potential consequences of agreeing to hand over your personal data for AI-generated content. The episode stoked fears about what lurks in a company’s terms of service, especially where personal data and AI are concerned. If Zoom had been transparent from the beginning and done its research, the crisis the company found itself in might never have happened.
Leaders must make sure their companies are open about all changes that affect users. Brand integrity has become even more critical in the age of social media, where information often travels faster than it can be controlled. If you and your company are transparent from the start, you’ll eliminate at least one element of risk from an AI launch.
Be wary, but don’t avoid AI altogether. Leaders should instinctively tread carefully around AI, given that it’s a new technology. There are plenty of unknowns, and that’s always something to consider when running a business. On the other hand, every new technology comes with a level of risk, and that doesn’t mean we should avoid them altogether. Instead, go in with open, well-researched eyes, an understanding of the potential risks and crisis plans for handling any number of possible contingencies.
Being a business leader often involves a certain degree of risk. What’s important is weighing that risk against the reward in a thoughtful way. The best way to mitigate risk, particularly with something as new as AI, is to be prepared. If you’re leading a company, you should have plans in place for all possible crises well before they might arise. In our highly connected digital world, brand-damaging stories can go viral instantaneously. Having a crisis team, statements and a strategy ready in advance will ensure you’re ahead of the news, not chasing after it as it runs away from you.
Regardless of your views on it, AI is becoming part of our world. This technology has the potential to spark much-needed change, and it could lead your company to overwhelming success. The key is to be prepared and consider all the factors well before launching into uncharted territory.