Credo AI, the governance company that powers Responsible AI, today announced the general availability of new scoring and reporting capabilities in its Responsible AI governance platform. These improvements will enable companies to meet new regulatory and customer demands for governance artifacts, reports and disclosures on their development and use of AI, with a focus on assessing and documenting responsible AI issues such as fairness and bias, explainability, robustness, security, and privacy.
This release is the latest addition to Credo AI’s intelligent SaaS platform, which helps organizations measure, monitor and manage AI risk and compliance at scale. The new feature set enables companies to standardize and automate the reporting of Responsible AI issues across all of their AI/ML applications.
These capabilities were developed in response to growing demands from regulators, customers, and consumers for transparency and documentation of AI systems. The world is increasingly demanding to know how AI systems behave, especially when it comes to issues like fairness and bias. Upcoming regulations such as the New York City Algorithmic Hiring Act and the EU AI Act will require organizations that develop, purchase and use AI to conduct periodic assessments or audits of their AI tools and to publish public usage reports. Recently, the White House also introduced a Blueprint for an AI Bill of Rights that provides guidance on the design, use, and deployment of AI. And last month, the House Science, Space and Technology Committee held a hearing on addressing the risks of AI, where technology leaders, including Credo AI founder and CEO Navrina Singh, stressed the need for contextual governance and transparent reporting.
Credo AI enables customers to comply with upcoming regulations and to address their customers’ questions and concerns about the AI systems they offer and implement. The platform is already deployed by Fortune 100 companies in the financial services, insurance, high-tech, and aerospace and defense sectors, which use it to generate governance artifacts and reports on the fairness, performance, and governance of their AI systems and to share them with customers and regulators.
The product update also includes improvements to the platform’s integration with Credo AI Lens, an open-source responsible AI assessment framework, which provides programmatic technical assessments of the fairness and bias, explainability, robustness, security, and privacy of ML models and datasets, enabling technical teams to produce responsible AI reporting and documentation without added burden.
“Credo AI builds the layer of governance that empowers organizations to ensure all of their internal and third-party AI meets business, regulatory and ethical requirements,” said Navrina Singh, Founder and CEO of Credo AI. “This product release is the next step in our journey to bring contextual governance and accountability to AI. This solution will not only help companies align their AI but also ensure that their AI operates in alignment with human-centric values.”