Equifax Continues to Set the Standard in Responsible, Explainable AI Use
Equifax has been driving artificial intelligence (AI) innovation for more than a decade for the benefit of the businesses and consumers served in 24 countries around the world. Whether for innovation, internal development, or operational improvements, Equifax ensures that its AI systems are used in a transparent, trustworthy, fair, explainable, and secure manner to provide benefits to consumers and customers.
“Explainability remains one of the central commitments of EFX.AI. It is important in ensuring that the decisions assisted by AI are fair, ethical and understandable,” said Harald Schneider, Global Chief Data & Analytics Officer at Equifax. “Ultimately, explainability provides accountability in the use of AI, providing specific detail on what informed a decision.”
In support of that, Equifax maintains an AI Governance Program, which sets strategic direction, provides global oversight of the use of AI systems, and defines the principles and practices that comprise Responsible AI at Equifax. The AI Governance Steering Committee helps ensure the company consistently and appropriately designs, implements, and uses AI systems for approved use cases. The organization believes that effective governance starts with the tone at the top, and the Equifax Board of Directors and senior leadership team are directly involved in shaping its AI Governance Program, which applies to the entire workforce.
Equifax also adopted the National Institute of Standards and Technology (NIST) AI Risk Management Framework (“NIST AI Framework”) as the foundation for managing AI risks across the company. The NIST AI Framework complements the NIST-aligned Security and Privacy Controls Framework and helps the organization define actionable requirements for managing AI risks. These AI Controls and Operational Requirements help ensure regulatory requirements are met for the use of AI systems and establish a unified set of requirements across the company. Where possible, Equifax leverages existing governance and oversight processes and Standard Operating Procedures to address the risks associated with using AI, further supporting end-to-end governance and oversight.
Equifax has led the way in setting the industry standard for explainable AI (xAI) for over a decade, recently marking a decade since the introduction of its first patented xAI solution, NeuroDecision® Technology, in 2015. Explainable AI is critical to ensuring that AI models and scores surface the specific data that informed a credit decision. To that end, Equifax has been granted, or has filed, patents for explainable AI techniques such as neural networks, gradient boosted machines, and random forests; as of January 2025, more than 300 of its pending and granted patents support the Equifax approach to AI.
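To illustrate the general idea behind explainable credit scoring, the sketch below shows one common, generic pattern: ranking a model's per-feature contributions to produce adverse-action "reason codes." This is a minimal illustration only, not NeuroDecision® Technology or any actual Equifax model; all feature names, weights, and baseline values here are hypothetical.

```python
# Illustrative sketch of reason-code generation from a simple linear
# scoring model. NOT Equifax's method; all names and numbers are
# hypothetical, chosen only to show the mechanic.

# Hypothetical model weights (positive weight = raises the score)
WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": -0.30,
    "account_age_years": 0.15,
    "recent_inquiries": -0.20,
}

# Hypothetical population-average feature values used as the baseline
BASELINE = {
    "payment_history": 0.9,
    "credit_utilization": 0.3,
    "account_age_years": 8.0,
    "recent_inquiries": 1.0,
}

def reason_codes(applicant, top_n=2):
    """Return the features that pulled this score furthest below baseline."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # The most negative contributions are the strongest adverse factors
    adverse = sorted(contributions.items(), key=lambda kv: kv[1])
    return [feature for feature, contrib in adverse[:top_n] if contrib < 0]

applicant = {
    "payment_history": 0.7,
    "credit_utilization": 0.8,
    "account_age_years": 2.0,
    "recent_inquiries": 4.0,
}
print(reason_codes(applicant))  # strongest adverse factors, e.g. short account age
```

The point of the pattern is accountability: because each feature's contribution is computed explicitly, the model can report exactly which inputs drove a decision rather than returning an opaque score.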
Critical to the acceleration of AI innovation at Equifax is the custom-built Equifax Cloud™, a top-tier global technology and security infrastructure backed by a more than $1.5 billion multi-year investment that continues to set the company apart in the industry. Central to the Equifax Cloud is its custom data fabric. The adaptable data platform unifies the enterprise's data (from over 100 siloed data sources) in a single, virtual structure, while enabling critical data governance measures, including data segregation, and maintaining compliance with regulatory requirements. The platform is powered by layering advanced analytics and AI with deep, proprietary, high-quality data.
Equifax understands the role and importance of AI to its customers' businesses and ultimately works to support their growth, agility, and speed at scale through its own use of the technology. As Equifax continues to innovate not just for today but for the future, the organization is laser-focused on the importance of ongoing governance and oversight to ensure that AI systems are used in a transparent, trustworthy, fair, explainable, and secure manner.
Learn more about how Equifax continues to drive AI Innovation here.