
For years, financial institutions have envisioned a future where artificial intelligence (AI) transforms credit decisioning, fraud detection, and operational efficiency. That future is here, but it comes with regulatory uncertainty and heightened risk, forcing lenders to rethink governance and compliance strategies.
As AI becomes embedded into operations, institutions must determine whether their defenses keep pace with emerging threats or merely meet the minimum standards to satisfy regulators.
AI models are redefining underwriting. By analyzing alternative data, such as rental payments, utility bills, and even digital signals, these models help individuals with thin credit profiles access credit. They can process in minutes information that would take humans hours, enabling faster approvals and portfolio growth. Vendors like Zest AI have attracted significant investment, signaling a market that’s heating up. Yet adoption still lags behind expectations.
Why the hesitation? Many financial institutions remain cautious about scaling AI, given ongoing doubts about its financial value. Concerns about costs, data breaches, model hallucinations (inaccurate outputs), insufficient validation, and model bias have not abated.
According to a recent McKinsey survey, most institutions are still in the piloting phase, with almost two-thirds reporting they have not deployed AI organization-wide. Progress is slower than expected because piloting is easy, but scaling is hard. While reskilling efforts are underway, cultural resistance and trust gaps remain. Without structural changes, institutions will struggle to move beyond pilots and achieve measurable results.
AI readiness will define tomorrow’s lending leaders. Institutions are embracing advanced AI to boost efficiency, but doing so carries greater risk. AI accelerates underwriting and fraud detection, enabling faster consumer loan approvals and freeing analysts to focus on higher-risk areas. Emerging tools like AI-driven chatbots that replicate analyst workflows illustrate the potential for efficiency and insight.
AI-driven lending offers the promise of broader credit access, but it also introduces significant compliance risks. Models often rely on alternative data signals such as utility payments, rental history, checking account behavior, and digital footprints (device type, email provider, browsing patterns, etc.). While these inputs can expand credit access for underserved borrowers, they also risk embedding socioeconomic bias into lending decisions.
Such correlations, though predictive, raise compliance concerns. Regulators are watching closely. The Consumer Financial Protection Bureau (CFPB) has flagged algorithmic discrimination as a major issue, warning that historical bias in training data can perpetuate redlining or disparate impact violations. AI models may infer race, ethnicity, or gender through proxies like ZIP codes or surnames, creating compliance landmines for lenders.
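One common first-pass screen for the disparate impact concerns described above is the four-fifths (80%) rule: comparing a protected group’s approval rate to the control group’s. The sketch below is illustrative only, with made-up approval counts, and a real fair lending review would use the institution’s own portfolio data and statistical testing, not this simple ratio alone.

```python
# Illustrative four-fifths (80%) rule screen for disparate impact.
# Counts are hypothetical, not real portfolio data.

def adverse_impact_ratio(approved_protected, total_protected,
                         approved_control, total_control):
    """Ratio of the protected group's approval rate to the control group's."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

# Hypothetical example: 280 of 500 protected-class applicants approved,
# versus 450 of 600 control-group applicants.
ratio = adverse_impact_ratio(280, 500, 450, 600)
print(f"Adverse impact ratio: {ratio:.2f}")
print("Flag for fair lending review" if ratio < 0.80 else "Passes 80% screen")
```

A ratio below 0.80 does not prove discrimination, but it is a widely used signal that a lending outcome warrants closer review.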
Recent regulatory shifts complicate the picture. Regulators are leveraging advanced AI and statistical models to assess fair lending risks. Federal guidance has moved away from disparate impact testing, though state laws may still apply. Institutions cannot assume reduced scrutiny equals reduced risk. Failing to conduct thorough internal reviews leaves financial institutions vulnerable to enforcement actions and reputational damage.
See our related article: Preparing for Increased Fair Lending Compliance Scrutiny
As AI models grow more complex, sound model validation practices become mission critical. Model risk arises when a model produces inaccurate results or is used incorrectly, leading to financial loss, poor decisions, or reputational harm. Institutions should manage this risk through rigorous oversight and independent review, especially when models have significant influence on business operations or risk management decisions.
Evaluating model risk starts with assessing the model’s conceptual soundness, including data quality and representativeness, potential bias, transparency of documentation, explainability, adoption of parameters and methods, and how the training set was chosen.
Frameworks like the Federal Reserve’s SR 11-7 guidance call for at least annual validation activities, including evaluation of conceptual soundness, ongoing monitoring and benchmarking, and outcomes analysis such as back-testing.
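A minimal sketch of what outcomes analysis can look like in practice: comparing a model’s predicted default probabilities against observed outcomes with a simple calibration metric. The scores and outcomes below are invented for illustration; an actual SR 11-7 validation would use the institution’s own portfolio data, multiple metrics, and documented thresholds.

```python
# Illustrative back-testing (outcomes analysis) for a probability-of-default
# model. All figures are hypothetical.

predicted_pd = [0.02, 0.10, 0.35, 0.05, 0.60, 0.08]  # model's predicted default probabilities
actual_default = [0, 0, 1, 0, 1, 0]                   # observed outcomes (1 = default)

# Brier score: mean squared gap between prediction and outcome (lower is better).
brier = sum((p - y) ** 2 for p, y in zip(predicted_pd, actual_default)) / len(actual_default)
print(f"Brier score: {brier:.3f}")
```

Tracking a metric like this over time, alongside benchmarking against a challenger model, gives validators evidence that a model is still performing as intended rather than drifting.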
While outsourced model validations can help, governance remains the institution’s responsibility. Regulators expect comprehensive documentation, ongoing oversight, and regular testing, especially as third-party partnerships blur accountability.
Vendor management remains a top regulatory focus as institutions increasingly rely on external service providers, particularly fintech firms. With digital offerings expanding, institutions are seeking ways to enhance their technology stack through strategic partnerships with third-party providers.
Fintech-driven customer acquisition introduces new challenges. If a fintech makes lending decisions on behalf of an institution, does the institution’s board-approved risk tolerance still apply? Updated policies, vendor due diligence, and risk assessments are essential.
For financial institutions, strong vendor management is a regulatory expectation. Examiners continue to scrutinize how well institutions assess, monitor, and manage third-party relationships. Concerns also extend to non-financial risks, such as reputational and compliance risks. Examiners can and will require institutions to shut down non-compliant models.
See our related article: U.S. Regulators Prioritize Vendor Management
As institutions deepen their reliance on vendors and fintech companies, regulators are reinforcing expectations for strong vendor oversight. Institutions that proactively assess, monitor, and manage vendor risks will be better positioned to pass regulatory exams and strengthen overall operations.
AI offers transformative potential, but success hinges on balancing innovation with governance.
The message is clear: AI can accelerate growth and improve efficiency, but without disciplined oversight, it can also amplify risk.
Contact Elliott Davis today to prepare for regulatory scrutiny and stay compliant.
The information provided in this communication is of a general nature and should not be considered professional advice. You should not act upon the information provided without obtaining specific professional advice. The information above is subject to change.