Modern organizations implement a wide array of policies and controls for their employees, aimed at educating them, preventing misconduct, and shielding against unintended harm. As Artificial Intelligence (AI) takes on more tasks and responsibilities, it's logical to set up similar safeguards for machines. Amid fluctuating standards and regulations in the marketplace, a foundational set of Responsible AI principles is taking shape, guiding early discussions, implementations, and applications.

In our earlier blog on operationalizing Responsible AI (RAI), we offered a glimpse into the strategies we and various organizations employ to implement the first four principles: accountability, reliability, fairness, and inclusion. Continuing the dialogue, this piece highlights methods and examples for ensuring transparency, privacy and security, sustainability, and governance. As you delve deeper into the nuances of artificial intelligence, we hope these real-world strategies and examples help move your responsible AI policy from theory to practice.

Note: Responsible AI is an evolving domain, spanning both theory and practical application. The breadth of stakeholders it affects, its role across the product lifecycle, and its still-evolving tools and methodologies shouldn't deter you from exploring and embracing AI. By highlighting select tools, examples, and contexts, we aim to show and affirm how you can implement AI responsibly.

Transparency in Responsible AI

Transparency is the bedrock of AI trust and understanding. The inner workings of an AI system (its design, behavior, limits, intended outcomes, and risks) should be laid bare to users and stakeholders. At its most basic level, transparency means disclosing, both internally and externally, how an AI system makes decisions and what risks come with it. Achieving this relies on a nuanced blend of explaining, assessing, and disclosing the use of AI, both during implementation and after.

Aligning AI teams on transparency practices

[Image: Risk registry table for a Responsible AI project, covering the transparency, sustainability, security and privacy, and fairness principles, with columns for system components, potential impacts, mitigation strategies, and ownership status.]

A simplified view of a Responsible AI risk registry.

A prominent challenge for most organizations is deciding the extent of explainability and disclosure needed to achieve transparency. Difficult questions, like "Should I explain how this system works?" and "Should we disclose we used AI for these projections?" can quickly become controversial or academic.

We've found it's effective to track these as questions, even if it's in a simple spreadsheet. By giving the team a central location to document transparency-related questions, we do more than practice transparency: we remove the cognitive burden of holding on to these questions and give project leads the information they need to decide when, where, how, and with whom to address them. Other stakeholders must take part in these decisions, depending on the nature of the use case. Once go-forward decisions have been made, a quick update to the spreadsheet helps align teams and also serves as a useful history of how the organization evaluates (and values) disclosure, making future decisions easier.
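To make this concrete, here is a minimal sketch of what a spreadsheet-style registry might look like when written as a CSV file. The column names mirror the registry image above, and the example row (including the owner and status values) is hypothetical.

```python
# A minimal sketch of a spreadsheet-style Responsible AI risk registry, written as CSV.
# Column names mirror the registry image above; the example row is hypothetical.
import csv

COLUMNS = ["principle", "question", "system_component", "potential_impact",
           "mitigation", "owner", "status"]

rows = [{
    "principle": "Transparency",
    "question": "Should we disclose we used AI for these projections?",
    "system_component": "Quarterly forecast model",
    "potential_impact": "Loss of stakeholder trust if use of AI goes undisclosed",
    "mitigation": "Add an AI-assistance note to the report footer",
    "owner": "Project lead",
    "status": "Open",
}]

with open("rai_risk_registry.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

Even this bare-bones structure gives project leads one place to see which questions are open and who owns the decision.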

Explaining AI

While there’s still much for us to learn about how foundation models work, tools and methods are emerging to help explain the factors behind AI responses. This is critical because trust in an AI system cannot be established or strengthened without it. Furthermore, the recently passed EU AI Act and other legislation circulating today will require it. Phoenix, from Arize AI, is one of many solutions that increase the observability of generative AI (genAI) models. Among its many capabilities, Phoenix can detect hallucinations or poorly performing prompts so that these issues do not snowball into problematic output, or even worse, actions based on flawed output. This tool can also be used to approximate a level of explainability that approaches the rigor of some Machine Learning (ML) tools. From there, deciding who needs to know about the system’s behavior becomes the critical question.
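As a rough illustration, here is how the open-source arize-phoenix package might be wired into an LLM application to collect traces. The exact import paths and instrumentation APIs vary across Phoenix versions, so treat this as a sketch rather than a drop-in recipe.

```python
# A minimal sketch of adding genAI observability with the open-source
# arize-phoenix package (pip install arize-phoenix). The instrumentation
# import path varies by Phoenix version; treat this as illustrative.
import phoenix as px
from phoenix.trace.openai import OpenAIInstrumentor  # assumption: path may differ in your version

# Launch the local Phoenix UI, which collects and visualizes traces.
session = px.launch_app()

# Instrument the OpenAI client so prompts, responses, latency, and token
# usage are captured as traces for inspection and evaluation.
OpenAIInstrumentor().instrument()

# From here, run your LLM calls as usual and review them in the Phoenix UI
# to flag poorly performing prompts or likely hallucinations.
```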

Disclosing AI for strategic transparency

The evolving belief that "true AI mastery is felt, not seen" is gaining popularity, yet it may downplay the risks involved. While using AI to draft an email to a colleague might not necessitate a disclosure, companies need to exercise caution when incorporating AI into their core or sensitive processes, products, and services. Concealing such use can result in substantial reputational or financial harm to the company. Notably, Sports Illustrated and Gannett faced criticism for undisclosed use of AI and for factually incorrect AI-generated content, damaging their credibility and team morale (following similar events, another publication’s writers unionized). Thus, determining what to disclose becomes a crucial business decision.

Disclosure, however, can enhance a brand’s AI story and use. For example, Hanover Research, a custom market research and analytics provider, has reaped the rewards of putting AI transparency into practice. The company partnered with Neudesic to build an AI system that sifts through decades of Hanover’s historical data and delivers insights 10x faster than their more manual analytic approaches. While some companies can be cagey about how they process data or who they partner with to build data processing tools, Hanover proudly highlights the role the Hanover Intelligence Virtual Engine (HIVE) now plays in their analytics and, more importantly, in supercharging their already brilliant researchers. By modeling AI transparency in their processes and products, Hanover distinguishes itself and its researchers as innovators in their field and ensures that current and future clients see how Hanover’s capabilities are on the rise.

Privacy and security in Responsible AI

In the digital age, privacy and security form the bedrock of trust in artificial intelligence (AI) systems. As organizations increasingly adopt AI, operationalizing these principles becomes crucial. This section outlines a strategic approach to implementing privacy and security, distinguishing between custom AI models and genAI, and offering actionable guidance for organizations.

Foundational strategies for privacy and security

The first step in safeguarding AI systems involves setting up foundational strategies that apply across all AI deployments. Two points of focus should be:

Data minimization and encryption: Adhering to regulations such as GDPR and CCPA, organizations must focus on collecting only the essential data and employing robust encryption for data at rest and in transit. This approach is particularly critical for custom AI models where direct handling of personal and sensitive information is often unavoidable.

Differential privacy: Incorporating techniques that add noise to data helps obfuscate individual identities, making it a pivotal strategy for enhancing privacy without compromising the utility of data sets. Tools like IBM's differential privacy library (diffprivlib), while built primarily for ML, offer practical ways to implement these techniques and also apply to certain genAI use cases; a minimal sketch follows this list.
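For instance, here is a minimal sketch of releasing a differentially private statistic with IBM's diffprivlib. The toy data, epsilon value, and bounds are illustrative assumptions; real deployments require a deliberate privacy budget.

```python
# A minimal sketch using IBM's diffprivlib (pip install diffprivlib).
# The toy data, epsilon, and bounds below are illustrative choices.
import numpy as np
from diffprivlib import tools as dp_tools

# Toy values standing in for a sensitive attribute (e.g., customer ages).
ages = np.array([34, 45, 29, 52, 41, 38, 47])

# Differentially private mean: calibrated noise is added so that no single
# individual's value can be inferred from the released statistic.
private_mean = dp_tools.mean(ages, epsilon=1.0, bounds=(18, 90))

print(f"DP mean: {private_mean:.1f} vs. true mean: {ages.mean():.1f}")
```

The lower the epsilon, the stronger the privacy guarantee and the noisier the result, which is exactly the privacy-utility trade-off this principle asks you to manage deliberately.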

Tailoring approaches to custom vs. generative AI models

While foundational strategies provide a starting point, the nuances of custom AI models and genAI necessitate tailored approaches:

GenAI: The privacy focus shifts toward how models interact with user inputs, such as prompts. Policies that keep business-account prompts out of model training, exemplified by OpenAI, and Microsoft's stance on not using customer prompts for AI training underscore the evolving privacy considerations in genAI.

Custom AI models: Prioritize the least invasive data collection methods and strong encryption to protect personal information. Implementing public key infrastructure (PKI) and ensuring compliance with data protection laws are essential steps in building a secure and trustworthy system.

Enforcement and guardrails: Securing AI systems

Beyond setting up privacy and security at the foundational level, enforcing these principles requires specific guardrails:

Access control and monitoring: Implement access control policies to ensure that only authorized personnel can access sensitive data, complemented by robust monitoring to log and audit data access, particularly privileged access. This not only helps prevent unauthorized access but also helps surface potential breaches early (a minimal sketch follows this list).

Education and policy development: Developing clear guidelines and policies on privacy and security best practices is vital. Educating users and stakeholders about their responsibilities and the importance of adhering to these practices ensures a unified approach to securing AI systems. Likewise, providing users with current thinking on best practices, such as effective prompting techniques, offers a helpful counterbalance to the rules about what not to do.
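To illustrate the access-control guardrail above, here is a minimal sketch of a role-based check paired with audit logging. The role names, dataset name, and logging setup are hypothetical; in production these checks would live in your identity provider and data platform rather than in application code.

```python
# A minimal sketch of role-based access control with audit logging.
# Role names, the dataset name, and the logging setup are illustrative.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

# Which roles may read which sensitive datasets.
ALLOWED_ROLES = {"sensitive_training_data": {"ml_engineer", "privacy_officer"}}

def read_dataset(user: str, role: str, dataset: str) -> bool:
    """Grant access only to authorized roles and record every attempt."""
    granted = role in ALLOWED_ROLES.get(dataset, set())
    audit_log.info(
        "%s | user=%s role=%s dataset=%s granted=%s",
        datetime.now(timezone.utc).isoformat(), user, role, dataset, granted,
    )
    return granted

read_dataset("avery", "marketing_analyst", "sensitive_training_data")  # denied, logged
read_dataset("jordan", "ml_engineer", "sensitive_training_data")       # granted, logged
```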

Amazon One illustrates a practical application of privacy and security principles in an AI-driven system. By converting palm prints into non-identifiable hashes and encrypting this data, Amazon One shows how sensitive information can be handled responsibly, offering a model for other organizations to emulate.
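The general pattern Amazon One describes, reducing raw biometric data to a non-identifiable hash and then encrypting it, can be sketched in a few lines. This is a generic illustration of the hash-then-encrypt idea, not Amazon's actual pipeline, and it uses the widely available cryptography package.

```python
# A generic sketch of the "hash, then encrypt" pattern for sensitive data;
# this is illustrative and not Amazon One's actual implementation.
# Requires the `cryptography` package (pip install cryptography).
import hashlib
import os
from cryptography.fernet import Fernet

def protect_template(template: bytes, salt: bytes, key: bytes) -> bytes:
    """Reduce a raw biometric template to a salted, non-reversible hash, then encrypt it."""
    digest = hashlib.sha256(salt + template).digest()  # non-identifiable hash
    return Fernet(key).encrypt(digest)                 # encrypted at rest

key = Fernet.generate_key()   # in practice, keys live in a KMS or HSM
salt = os.urandom(16)
token = protect_template(b"raw-palm-feature-vector", salt, key)
print(len(token), "bytes of protected data")
```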

Sustainability in Responsible AI

Sustainability in the context of AI extends beyond mere environmental considerations; it encapsulates a strategic approach to balancing long-term economic and social impacts with immediate technological advancements and demands. Unlike custom ML models, genAI’s complexities, ad hoc usage, and extensive compute requirements make its sustainability a critical focal point for companies striving for responsible innovation. Prioritizing sustainability minimizes the negative social and environmental impacts of AI systems while maximizing their longevity and ROI.

Both in AI and more broadly, environmental costs are being translated into financial costs. To make the connection clear: more compute means more money and more carbon emissions. You are already paying for your compute, and you can expect to pay for your project’s emissions soon, if you are not already.

Certain architectural decisions can have a big impact on sustainability, such as model selection. While OpenAI’s GPT-4 is the de facto standard, it is by no means alone in its performance capabilities. There are many other, smaller models that could reasonably do the job, sometimes even better than a larger model. The benefits of smaller models span the range of sustainability, from the environment, to cost, to social impacts. Smaller models simply don’t require the same level of compute to run, can be trained more efficiently, and some are open source, making them essentially free of licensing costs. Other architectural decisions, such as whether to use something like Semantic Kernel or LangChain, or which agent framework(s) to adopt, will also strongly affect these sustainability factors.

Measuring for sustainable AI implementation

The best path, then, is for organizations to measure and monitor the good and bad impacts associated with their AI deployments. Using the existing tools and dashboards provided by internal teams or your cloud provider is essential, but those reports tell only the financial side of the story. Free tools like CodeCarbon can translate your cloud consumption into emissions, helping your organization understand its carbon footprint and bringing a fuller picture to stakeholders.
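As an example, here is a minimal sketch of wrapping a workload with CodeCarbon's tracker to estimate its emissions. The project name and the placeholder workload are hypothetical.

```python
# A minimal sketch of estimating emissions with CodeCarbon
# (pip install codecarbon); the project name and workload are placeholders.
from codecarbon import EmissionsTracker

def run_training_job() -> None:
    # Stand-in workload; replace with your actual training or inference code.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="rai-sustainability-demo")
tracker.start()
try:
    run_training_job()
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Tracked over time, these estimates let you report emissions alongside the cost figures your cloud dashboards already provide.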

Most profitable companies have a low tolerance for wasted time and money and for disruption to strategic priorities. A strong sustainability element in your Responsible AI program will minimize risks associated with emerging regulatory requirements, budget cuts, and more.

Governance in Responsible AI

It could be argued that all Responsible AI is good governance. But given that the success of all the principles hinges on setting up a robust framework that unites them, we decided governance calls for its own section. Establishing a clear chain of authority, including executive sponsors and advisory panels, ensures that policies are actively followed. Just as you have done for the rest of your organization, you will need a comprehensive charter and structure for responsible AI, including roles and responsibilities that heighten awareness of, and adherence to, all of these principles. And though a dedicated responsible AI governance charter is needed, it can and should align with your existing governance frameworks.

Establishing a robust AI governance framework

A cornerstone of effective AI governance is the establishment of a clear chain of authority. This begins with executive sponsors who empower a dedicated governance committee. This committee, in turn, delegates to advisory panels and other entities tasked with translating policies into actionable procedures. A published governance charter, including a Decision Rights Framework, is vital. It clarifies decision levels and information flow, thereby enhancing policy adherence and awareness.

To bolster enforcement, policies should explicitly outline the consequences of non-compliance. Additionally, regular training sessions can reinforce the importance of these policies. Incorporating Human Resources and Legal departments ensures that governance measures align with broader organizational standards and legal requirements.

Measuring the impact of governance

How do you figure out the efficacy of your governance structures? Measure them. Your software development life cycle needs processes and gates, key performance indicators and service level agreements, just like every other initiative in your organization. These systems are designed to quantify the friction points governance might introduce and provide an estimate of how much time you should budget as you move through each stage of your project. They also assess the relationship between the potential value of a use case and its predicted risks and required resources. This balance of enforcement and measurement ingrains principles of responsible AI into your organizational fabric, driving innovation at a pace you can actually sustain.

JP Morgan Chase was able to integrate AI governance throughout the organization as part of their $15.3 billion technology investment in 2023. JP Morgan has explicit standards for data management, for aligning with current and future AI regulation, and for how to move through the software development lifecycle, to name just a few of their AI governance structures. But these dictates do not exist in isolation; they align directly with the protocols that govern other business units.

And given the way JP Morgan weaves AI into their existing functions, integration was the only way. Data scientists and AI/ML projects are embedded within their business units. This integration prevents their AI projects from spiraling into rudderless, under-governed, costly experimentation cycles and keeps their technology initiatives accountable to the same high business standards as their existing functions. By developing clear AI governance and embedding their projects into teams that are already well-governed, JP Morgan ensures that their technological innovation is simultaneously profitable and responsible.

Conclusion

In our discussion of transparency, privacy and security, sustainability, and governance, we have shown that these principles and the four principles highlighted in our earlier piece are both distinct and interconnected. As you define how each principle applies uniquely to your organization, you will gradually form a cohesive blueprint for how these pillars bolster your technology projects and enhance your organization. But one thing is true for all organizations: adopting these responsible AI principles allows you to navigate the rapidly evolving landscape of artificial intelligence with confidence and foresight.

Partnering with Neudesic to implement these AI principles can reveal a responsible and profitable path for artificial intelligence within your organization.