AI Regulation is Complicated: Here’s How Companies Can Move Forward Responsibly
Like many others, I believe that artificial intelligence (AI) may create one of the most fundamental and far-reaching transformations we’ve experienced in our lifetimes. Leveraging AI can help us run our companies more efficiently, and deliver better and more timely customer experiences. For software companies, it’s becoming a necessity to develop products and services that tap into the power of AI. Ignore doing so and risk being left behind.
Even so, the breakneck pace of AI innovation has raised some red flags for regulators about the technology’s risks and implications for public safety. Some governments have been quick to act on AI regulation (for example, the U.S. AI executive order and Europe’s landmark AI regulation). I think that AI regulation will evolve, but it won't slow down. While regulation sets early standards, and may even over-restrict at times, it will find its water level over time. The protection of people and public safety will always be critical considerations.
While that story plays out, companies need to act now on fundamental principles to move forward with AI productively, responsibly, and safely. A “wait-and-see” strategy is not enough. At Acquia, we are here to make great technology, create value, and leave the world in a better place than we found it, which is why we developed our own responsible AI principles.
Responsible AI building blocks
Whether you’re using foundational resources like the trustworthiness in AI guidelines from the National Institute Standards and Technology (NIST) in the U.S., or you’ve developed your own policies, it’s important not to wait on regulation to make sure you’re doing right by customers, employees, and other stakeholders.
Acquia’s responsible AI principles drive our interactions with customers and employees alike. I suggest executives develop AI principles that align with your company’s core values. They need to become a part of the fabric of your organization and the way you work. On top of these principles, consider implementing a steering committee or task force to act on your policy and establish an ongoing governance model. Our own AI task force ensures that our responsible AI policy is enacted on a day-to-day basis.
Let’s explore some of the key areas of focus we’ve chosen for Acquia’s responsible AI policy, along with some straightforward tips for how to implement your own best practices.
Ensure AI systems are safe and meet ethical standards
First and foremost, all employees need to become AI literate to ensure the responsible use of AI. AI is no longer just technology for data scientists. There are features in our day-to-day business tools that we use, and every employee should understand how to leverage them safely. This approach is similar to how we coach employees to securely use email and other corporate systems. Ensuring every employee understands AI, especially generative AI, eliminates the fear in the unknown, mitigates risk, and unlocks innovation that you may have never expected.
Beyond employee education, your company should have a core set of ethical guidelines you follow as you develop AI systems. Based on these guidelines, your team should outline the risks, and then test out plans to mitigate each of these risks. According to research from Deloitte, 72% of executives have defined specific ethical principles for cognitive technologies like generative AI, and more than half of companies use a review board or process to review ethical standards for new technologies.
In addition, to ensure the safety of your own AI systems, it’s critical to thoroughly test AI systems for vulnerabilities before their release. This should include robust security testing techniques, such as red teaming (or using ethical hacking techniques internally as an outside attacker would) to ensure that potential vulnerabilities are discovered. When we’re talking about generative AI systems, security testing and red teaming looks much different. Conduct your plan to discover what might generate the most harmful results. For example, a red teaming technique such as code injection may be able to get a generative AI model to generate harmful outputs. From there, your team can refine the model to ensure that these outputs never become available for public consumption.
Embrace accountability and transparency with AI systems
A major concern around AI is a lack of transparency. Depending on the application, algorithms can make decisions that could impact people’s lives in a drastic way. As a result, you must be transparent with employees, customers, and users about how you apply AI, explaining how it functions and makes decisions.
Explainability in AI enables your customers to understand how AI systems work, reduce risk, and make more informed purchasing decisions. To increase transparency, you can disclose how models are trained, on which types of data, and how certain inputs impact results. In addition, you can clearly label which products use AI.
For example, our machine learning (ML) models for products like Acqua CDP provide users with transparency into how input variables are weighted, and allow users to also create their own algorithms if they want to adjust the impact and weighting of these inputs. We also label our AI interactions clearly, so users know they are interacting with AI. An example is labeling our chatbots as “Acquia Bot” or “AI Assistants.” We design our AI features with clear indicators of the inputs and outputs within our products.
If you embrace explainability in similar ways, your company and your employees can be accountable for the outcomes of AI systems, standing by both the ethical and business implications of their results.
Keep private information confidential
Another important aspect of establishing trust is ensuring that private data is handled appropriately. According to Acquia’s 2023 CX Trends Report, 84% of companies think that their customers trust their organization’s use of personal data more this year than last year. In reality, only 56% of consumers trust that brands will handle their personal data properly. That’s a bar that all companies must work hard to raise.
It’s a safe bet that future AI laws will align closely with current data privacy and data handling laws, including GDPR in Europe, and state-by-state regulations in the U.S. This will become even more important as AI systems are used in customer-facing applications (such as customer support), as well as for AI-generated coding. The risk of inadvertently exposing personally identifiable information (PII) or intellectual property (IP) is too great to ignore.
To prepare for this reality, ask yourself if you are exposing sensitive information to machine learning models. AI models learn from their training data. Therefore, it’s crucial to ensure you aren’t making personal or private info public. In addition, data masking PII is critical – if you are ingesting data from either your own systems or outside systems, data masking obscures confidential data with altered values before it's used for the AI model. Any PII — like names, addresses, phone numbers and other unique identifiers — needs to be masked before data is ingested by an AI system.
Ensure fairness and minimize bias
If you develop your own AI models, you must ensure fairness and minimize bias in the model’s output. AI models are inherently biased based on the data used for training. Biases can be computational/statistical (if training datasets are more heavily weighted to favor a certain outcome), human (if a dataset reflects human bias) or systemic (datasets reflecting deeper institutional issues that discriminate against certain groups). Models built on publicly available training datasets on the open internet have been shown to produce biased results. For example this recent Penn State study revealed how public models exhibit disability bias.
To fight bias, supervised learning is essential. A human in the loop can provide feedback to the algorithm, indicating which outputs are biased. This fine-tunes the model, reducing the future possibility for bias or error. It doesn’t always have to be someone at the company that is training AI algorithms. Regular feedback from customers or users can also help refine a model to ensure its fairness, reduce bias, and increase accuracy. We rely on regular feedback from our users for our AI-powered products like Acquia CDP, and allow users to see into the black box of our ML models to create custom models of their own.
Acting responsibly in the early stages of AI regulation
While we’re still in the early days of AI regulation, we need to be responsible with AI to maintain trust with both our employees and our customers. Acquia believes that technology can build a better and safer future for all people, and strives to make a positive impact on the world with all of our business decisions. AI is no exception.
Consider this a call to be proactive. Embrace a strong policy and ongoing governance with your own AI systems, if you haven’t already taken action. Start by establishing your own responsible AI best practices to keep your systems transparent, fair, safe, secure, and private. It’s not only the right thing to do; it’s also the best business decision. The more comfortable customers are with AI, the easier it will be for them to stand by your brand.