AI agents are changing the game for business in general and marketing in particular. While they bring automation and efficiency to a new level, they require strong governance. As AI agents get smarter and more embedded in daily operations, ensuring they’re used responsibly, securely and ethically isn’t just a nice-to-have—it’s a must.
“The question is the same, whether it is generative AI, traditional AI, machine learning or agentic,” said Kavitha Chennupati, senior director of global product management at SS&C Blue Prism. “Is the LLM returning the appropriate response or not?”
The very things that make AI useful — its ability to analyze large amounts of data and to scale and personalize customer experiences, to name just two — increase the stakes if it gets something wrong.
“The impact of not having good quality responses and good governance is magnitudes above where it used to be,” said Mani Gill, VP of product at Boomi.AI. “The scale of an agent asking for that data is way more than a human asking for that data. It multiplies by thousands and thousands.”
That’s why AI guardrails are essential. Because marketing is leading the way in AI adoption, marketers must know what guardrails are and how to develop them.
The first thing to know is that you don’t start by working on the AI. You start with people deciding what the rules are.
“We go with the philosophy of governance-first approach,” said Chennupati. “Lay the foundation before you start incorporating the technology.”
Any organization that implements AI must first create a governance council. The council consists of people from different business functions who set AI policy on everything from brand rules to what data the AI can access to when humans need to intervene.
Establishing guardrails: steering autonomous actions
Boomi AI Studio incorporates “built-in ethical guardrails” within its design environment. These are intended to guide the development and deployment of agents towards responsible actions. Beyond platform-specific features, Chennupati outlines several key mechanisms for establishing guardrails, including:
- Referencing decisions to trusted sources: Requiring agents to justify their actions by citing the data or logic they relied upon.
- Similarity-based checks: Employing multiple AI models to perform the same task and comparing their outputs to identify discrepancies or potential errors.
- Adversarial testing: Intentionally challenging agents with incorrect or misleading information during testing to assess their resilience and adherence to boundaries.
Together, these mechanisms help ensure agents act efficiently and reason soundly within acceptable parameters. The sketch below shows what a basic similarity check might look like.
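To make the similarity-based check concrete, here is a minimal sketch in Python. The two model-calling functions are placeholders standing in for whichever providers you use (an assumption, not a real API), and the comparison uses a simple text-similarity ratio; a production guardrail would call the vendors' own APIs and use a more robust comparison.

```python
# Minimal sketch of a similarity-based guardrail: the same prompt goes to two
# independent models and the answers are compared. A low similarity score
# flags the response for human review instead of sending it onward.
# model_a and model_b are placeholder stubs, not a real provider API.
from difflib import SequenceMatcher

REVIEW_THRESHOLD = 0.75  # below this, route the response to a human reviewer


def model_a(prompt: str) -> str:
    # Placeholder for a call to your primary LLM provider.
    return "Your order qualifies for free returns within 30 days."


def model_b(prompt: str) -> str:
    # Placeholder for a call to a second, independent model.
    return "Orders can be returned free of charge within 30 days."


def check_response(prompt: str) -> dict:
    answer_a = model_a(prompt)
    answer_b = model_b(prompt)
    similarity = SequenceMatcher(None, answer_a.lower(), answer_b.lower()).ratio()
    return {
        "answer": answer_a,
        "similarity": round(similarity, 2),
        "needs_human_review": similarity < REVIEW_THRESHOLD,
    }


if __name__ == "__main__":
    print(check_response("What is the return policy for online orders?"))
```

The same pattern extends to the other mechanisms Chennupati describes: the agent's cited sources can be checked against a trusted list, and adversarial test prompts can be run through the same comparison to see whether the agent stays inside its boundaries.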
Dig deeper: Are synthetic audiences the future of marketing testing?
Securing the keys: Data access and control
A primary concern in AI governance is data security and access control. The best practice is to apply the same role-based access controls to agents that you should already use for human users.
“Here’s a typical agent use case: Wouldn’t it be great if we allowed our employees to self-serve information about themselves and their teams?” said Gill. “Now it’s very easy to connect that information sitting in your human capital management system to your human resource system. Now, if this security policy isn’t right, all of a sudden, the CEO’s salary is showing up in that agent.”
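One way to avoid that scenario is to make every agent data request pass through the same role-based filter a human request would, so the agent only sees what the person asking could see directly. Here is a minimal sketch; the roles, field names and record are illustrative assumptions, not any particular HR system's schema.

```python
# Minimal sketch of role-based access control applied to an agent's data
# requests. The agent inherits the requesting employee's role, so a question
# about compensation never surfaces fields that person couldn't see directly.
ALLOWED_FIELDS = {
    "employee": {"name", "title", "manager", "pto_balance"},
    "manager": {"name", "title", "manager", "pto_balance", "team_roster"},
    "hr_admin": {"name", "title", "manager", "pto_balance", "team_roster", "salary"},
}


def filter_record(record: dict, requester_role: str) -> dict:
    """Return only the fields the requester's role is allowed to see."""
    allowed = ALLOWED_FIELDS.get(requester_role, set())
    return {key: value for key, value in record.items() if key in allowed}


ceo_record = {"name": "Pat Lee", "title": "CEO", "salary": 950_000, "pto_balance": 12}

# An employee-level query routed through the agent never sees the salary field.
print(filter_record(ceo_record, "employee"))   # name, title and PTO only
print(filter_record(ceo_record, "hr_admin"))   # includes salary
```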
This also applies to AI models outside the organization’s direct control.
“An Agentforce agent runs on a model that you don’t control,” said Chennupati. You can’t just pretend to read the terms and conditions like most of us do with technology. “You need to understand the data privacy aspects.”
Constant vigilance
Differing privacy laws across jurisdictions mean you must know where the data lives and where it may be transmitted. Otherwise, you are at risk of significant fines and penalties. You must also have a mechanism to stay up to date on changes in laws and regulations.
One of the things that makes AI so valuable is its ability to learn and apply those learnings. However, that means you must continuously monitor the AI to see if it still follows the rules. Fortunately, you can use AI to monitor AI. Separate systems check for anomalies to identify when an agent’s behavior deviates from expected norms.
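As a rough illustration of that kind of anomaly check, the sketch below tracks how many records an agent touches per run and flags a run that jumps far above its historical baseline. The metric, threshold and numbers are assumptions chosen for illustration, not a specific monitoring product.

```python
# Minimal sketch of "AI watching AI": a separate monitor keeps a baseline of
# how many records an agent accesses per run and flags runs that deviate
# sharply from the norm, so a human can investigate.
from statistics import mean, stdev


def is_anomalous(history: list[int], current: int, sigmas: float = 3.0) -> bool:
    """Flag the current run if it exceeds the historical average by more
    than `sigmas` standard deviations."""
    if len(history) < 5:          # not enough history to judge yet
        return False
    baseline, spread = mean(history), stdev(history)
    return current > baseline + sigmas * max(spread, 1)


runs = [120, 135, 110, 128, 140, 125]   # typical records accessed per run
print(is_anomalous(runs, 131))          # False: within the normal range
print(is_anomalous(runs, 4200))         # True: escalate for review
```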
However, you can’t leave it all up to the AI. Both Gill and Chennupati stress the continued need for human intervention.
“It’s not just about monitoring, but also defining the threshold for the metrics in terms of when you want to bring the human in the loop,” said Chennupati. “It starts at the design phase. The design must include details about how the LLM is arriving at a solution so a human can see what is happening.”
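A design-time threshold of the kind Chennupati describes might look something like the following minimal sketch: when the agent's confidence falls below a defined cutoff, the task is routed to a person along with the sources the agent relied on, so the reviewer can see how it arrived at the answer. The confidence score, threshold and routing labels are illustrative assumptions.

```python
# Minimal sketch of a human-in-the-loop threshold defined at design time.
# Low-confidence decisions are escalated with the sources the agent cited.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.80  # below this, a human must review the decision


@dataclass
class AgentDecision:
    answer: str
    confidence: float
    sources: list[str] = field(default_factory=list)


def route(decision: AgentDecision) -> str:
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_approve"
    # Below threshold: escalate along with the cited sources for human review.
    return f"human_review (sources: {', '.join(decision.sources)})"


print(route(AgentDecision("Ship a replacement at no cost.", 0.93, ["policy_doc_12"])))
print(route(AgentDecision("Offer a 40% refund.", 0.55, ["email_thread_88"])))
```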
AI is evolving at breathtaking speed and becoming increasingly enmeshed with all parts of business operations. We can now do what once took days, weeks or longer in seconds. Along with that great power comes — say it with me — great responsibility. As the saying goes, to err is human; to really mess up, you need a computer.
Dig deeper: Salesforce & Microsoft square off with new AI sales agents