8.13.2024

Federal: Comment Letter to Treasury on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

Ms. Jeanette Quick

Deputy Assistant Secretary for Financial Institutions Policy

1500 Pennsylvania Avenue, NW

Washington, D.C. 20220

 

Re:       Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

 

Dear Ms. Quick,

On behalf of the American Fintech Council (AFC),[1] I am submitting this comment letter in response to the Department of the Treasury's (Treasury or the Department) Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector (RFI).

AFC’s mission is to promote an innovative, transparent, inclusive, and customer-centric financial system by fostering responsible innovation in financial services and encouraging sound public policy. AFC members are at the forefront of fostering competition in consumer finance and pioneering ways to better serve underserved consumer segments and geographies. Our members are also improving access to financial services and increasing overall competition in the financial services industry by supporting the responsible growth of lending and lowering the cost of financial transactions, allowing them to help meet demand for high-quality, affordable financial products.

AFC is encouraged by the pragmatic approach the Department has taken toward Artificial Intelligence (AI) over the past several years. Prior to the current RFI, AFC was heartened by the insights provided in the Department's report, "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector," which recognized the importance of responsible innovation across the financial services sector as well as the opportunities surrounding AI systems.[2] While AI has certainly received increasing attention, it is important to remember that many industry participants have been utilizing the technology for years to reach consumers often excluded from the traditional financial services ecosystem.

AFC believes in the pragmatic, context-specific regulation of AI within financial services. While AI is rapidly developing across markets, not all AI technologies are operationalized in the same manner within financial services. Importantly, when considering how to regulate a product or service that may leverage AI, AFC believes that when technology is enabling an existing financial product or service to reach more consumers, the existing regulatory framework for similarly situated products is generally sufficient. However, when technology is used to create a wholly new product or service, a distinct regulatory framework is required.

In our view, it is crucial that policymakers and regulators understand the relevant distinctions between each offering that leverages AI technologies and seek to craft the correct regulatory structure for each financial product. Further, when considering a regulatory framework for AI, it is important to recognize that AI technologies, when used responsibly, function as tools that expand and improve financial institutions' existing processes and practices. Responsibly deployed AI technologies are not a panacea meant to replace or supplant the processes or practices a financial institution already conducts. Therefore, we strongly advocate for the development of policies under a risk-based framework that recognizes the context-specific nature of a given use case for the AI technology deployed and provides the ability for innovative financial institutions and fintech companies to use AI as a tool to improve the products and services they offer for the benefit of consumers.

 

     I.        AFC Recommends Industry Participants Implement a Risk-Based Approach to their Assessment and Use of AI Technologies

AI tools and models have become integral to the financial services industry's innovation and efficiency. Machine learning is revolutionizing the lending industry by enabling more accurate and inclusive credit scoring. Traditional credit scoring models often rely on limited data and can be biased, excluding many potential borrowers. Modern machine learning algorithms, by contrast, can analyze vast amounts of alternative data to assess creditworthiness more comprehensively. This approach not only broadens access to credit for underserved populations but also improves the accuracy of risk assessments, leading to better loan performance and lower default rates. Industry leaders are also leveraging advanced AI algorithms to analyze complex data sets and generate insights that human analysts may miss; these tools assist financial institutions across functions such as fraud detection and investment strategy. Because AI systems can identify patterns and trends in real time, organizations can respond swiftly to emerging risks and opportunities, maintaining a competitive edge in the dynamic landscape of financial services. By continuously learning and adapting to new data, machine learning models enhance the efficiency and fairness of lending processes, ultimately benefiting both lenders and borrowers.

Additionally, chatbots are becoming an integral part of customer service in the financial sector, providing users with instant, personalized assistance. These chatbots can perform a wide range of tasks, from answering basic inquiries about account balances and recent transactions to offering financial advice and personalized budgeting tips. By simulating human conversation, chatbots provide a seamless user experience, available 24/7, which enhances customer satisfaction and engagement. They also help financial institutions reduce operational costs by automating routine customer interactions, allowing human representatives to focus on more complex issues. As AI technology continues to evolve, chatbots will play an increasingly significant role in delivering efficient and customized financial services.

In practice, AFC advocates that responsible industry participants who implement AI technologies into their processes, products, and services develop and tailor their risk management frameworks to appropriately characterize the potential risks associated with a given use case. Each of the AI technologies described above presents a decidedly different risk profile for the resilience of the financial institution as well as for the consumer's well-being should the use of the technology go awry. Therefore, it is prudent for financial institutions to use a risk-based approach to their assessment and monitoring of the potential risks posed by the application of a given AI technology.

 

  II.        AFC Recommends the Development of a Unified and Cohesive Federal Regulatory Framework that Effectively Encourages the Development and Deployment of AI Technologies

Despite the aforementioned technological advancements, several challenges hinder the widespread adoption of AI in the financial services industry. Integration with legacy systems remains a significant hurdle, as many institutions operate on outdated infrastructure that cannot support advanced AI technology. Additionally, the technical expertise required to implement and maintain these systems is often lacking, creating a barrier to entry for smaller institutions. The costs associated with deploying AI technologies, both in terms of investment and ongoing maintenance, can be quite prohibitive.

However, the most crucial challenge to widespread adoption of AI is navigating regulatory compliance and the costs associated with doing so. Simply put, the existing regulatory environment is not conducive to encouraging the growth of AI technologies in the financial services industry, and it fails to provide the clear rules of the road that responsible actors need. Competing and conflicting requirements, along with the risk of extraterritorial jurisdiction, pose compliance challenges and complicate harmonization efforts. A state-by-state approach is ill-suited for AI technologies because the myriad applications of AI within financial products and services exceed the regulatory jurisdiction of any single state. Subjecting these AI technologies to a patchwork, state-by-state regulatory framework would create significant difficulties in the proper deployment of the technology in the financial services industry and would also discourage new entrants from the market due to high regulatory costs.

For example, Colorado recently passed Senate Bill 24-205, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems," which presents an expansive definition of AI technologies, as well as significant and nuanced regulatory requirements that will substantially increase the regulatory costs associated with deploying AI technologies or using them to support financial products and services in the state.[3]

In any effort to pursue functional AI regulation, there is a critical need for federal preemption of the fragmented landscape of state-level AI regulations. The current patchwork of state laws creates significant compliance challenges for financial institutions operating across multiple jurisdictions. Each state's unique regulatory framework results in a complex, inconsistent, and often conflicting set of rules that can stifle innovation and create unnecessary barriers to entry for smaller institutions. A unified federal approach would streamline regulatory requirements, providing clarity and consistency that would enable financial institutions to leverage AI technologies more effectively. Federal preemption would not only reduce the regulatory burden but also ensure a level playing field, fostering innovation and competition while safeguarding consumer interests. By establishing a cohesive regulatory framework, the federal government can facilitate the responsible development and deployment of AI technologies, ultimately enhancing the efficiency and inclusivity of the financial services sector.

In addition to formal regulations, there are numerous voluntary guidelines and ethical frameworks developed by industry groups, academic institutions, and non-profit organizations. These can influence how AI is developed and used but do not have the force of law. Through this ever-evolving patchwork of standards and regulations, institutions are increasingly forced to adapt their practices to ensure the legal and ethical use of AI in their offerings. For example, in the international landscape, overarching but broad steps were taken by the Organisation for Economic Co-operation and Development (OECD), which established the first intergovernmental standards on AI. These standards address inclusive growth, sustainability and well-being, human-centered values, fairness, transparency, explainability, robustness, security, safety, and accountability.[4] Additionally, the General Data Protection Regulation (GDPR) has significantly impacted data practices and model development. While AI is not explicitly called out in the GDPR, its provisions on automated decision-making affect many functions.[5]

While these standards and laws may have a de facto impact on the use of AI technologies in financial services and on the industry's practices, they lack the force of law and the jurisdiction needed to regulate the use of AI technologies effectively, and they still leave many responsible industry participants in the dark as to the specific rules of the road across the country. Therefore, AFC believes federal preemption of the fragmented landscape of state-level AI regulations is needed.

 

III.        AFC Recommends Agencies Review and Modernize Existing Regulations and Guidance Documents to Provide Clear Requirements and Supervisory Expectations for the Responsible Use of AI Technologies in Financial Services

 

As discussed above, the financial sector is increasingly integrating these technologies and becoming more reliant on them for decision-making processes, customer service, and operational efficiency.

Governance structures for managing third-party risk, especially with the incorporation of AI, are already essential for ensuring effective oversight and control. These structures typically involve establishing policies, procedures, and oversight mechanisms to manage the risks associated with third-party relationships and AI integrations. Financial institutions should integrate these technologies into those existing structures to mitigate biases in AI models, ensuring decisions are equitable and justified. AFC believes it is prudent for financial institutions to use a risk-based approach to their assessment and monitoring of the potential risks posed by the application of a given AI technology. This pragmatic approach includes regular audits, model validation, and the establishment of clear accountability mechanisms.

Ensuring that AI systems respect user privacy and adhere to consumer protection standards is also crucial for maintaining trust and compliance with regulations. Institutions must prioritize the protection of consumer data, ensuring compliance with relevant privacy laws and regulations. This involves implementing data security measures and transparent data usage policies, fostering consumer trust and confidence in the use of AI technologies. This approach helps to ensure that personal data is managed securely and that consumer rights are protected from the outset. Moreover, transparency about how AI systems use consumer data and how these systems make decisions is essential for meeting privacy and consumer protection standards.

A responsible and pragmatic approach also involves a comprehensive assessment of third-party risks, incorporating criteria such as the vendor's data handling practices, compliance with regulatory standards, and the robustness of its AI models. Adopting a risk-based approach, as outlined by the National Institute of Standards and Technology (NIST), can help institutions manage these risks effectively. The NIST AI Risk Management Framework advocates for policies and resources to be prioritized based on the assessed risk level and potential impact.[6] However, these guidelines and frameworks lack the force of law and the impact on the regulatory system necessary to provide the clarity needed for increased adoption of AI technologies in financial services.

Further, there are opportunities to evaluate existing laws and regulations to improve the use of AI technologies for the benefit of the financial services industry and consumers. For example, the Bank Secrecy Act (BSA) imposes significant compliance requirements on financial institutions. AI systems, equipped with the capability to analyze vast amounts of data, can effectively monitor and flag suspicious activities in real time, enhancing compliance with BSA requirements. This synergy not only strengthens the global financial system's integrity but also fosters international cooperation in combating financial crimes. A unified approach that integrates these diverse regulatory frameworks is crucial for American companies to compete and win in the global financial system.

In addition, Less Discriminatory Alternatives (LDAs), as noted by the CFPB in its Fair Lending Report, represent an opportunity for modernizing the agency's regulatory approach to improve consumer protections through the use of AI technologies.[7] As previously noted, AFC believes that when AI is enabling an existing financial product or service to reach more consumers, the existing regulatory framework for similarly situated products is generally sufficient. However, when technology is used to create a wholly new product or service, a distinct regulatory framework is required. We echo the thoughts presented in a joint letter by Consumer Reports and the Consumer Federation of America that emphasizes "the urgent need for regulatory clarity and certainty regarding the expectation that financial institutions search for and implement less discriminatory algorithms (LDAs) in credit underwriting and pricing".[8] Through the development and encouragement of LDAs, where appropriate, the Department can help create a more equitable financial landscape while maintaining industry efficiency and innovation. Context-specific regulation of AI within financial services would be the most prudent course of action for Treasury. While AI is rapidly developing across markets, it is important to recognize that not all AI technologies are operationalized in the same manner, even within financial services. The use of AI as a tool can greatly improve the products and services institutions are able to offer, resulting in consumer benefit.

AFC believes that responsibly deployed AI technologies are not a panacea meant to replace or supplant the existing processes or practices conducted by a financial institution. When AI is used to enable an existing financial product or service to reach more consumers, the standing regulatory framework for similarly situated products is generally sufficient; when technology is used to create a wholly new product or service, a distinct regulatory framework is required. Further, it is crucial to provide regulatory clarity through means that will create a unified approach to regulating the use of AI technologies in financial services. To that end, AFC recommends that Treasury engage with its fellow financial services regulators to identify ways to encourage the use of AI technologies in financial services and to provide clear "rules of the road" for industry participants. Specifically, AFC recommends agencies review and modernize existing regulations and guidance documents to provide clear requirements and supervisory expectations for the responsible use of AI technologies in financial services.

*          *          *

 

As noted above, AFC strongly advocates for the development of policies that recognize the context-specific nature of AI and provide the ability for innovative financial institutions and fintech companies to use AI as a tool to improve the products and services offered for the benefit of consumers. We believe it is essential to foster responsible innovation and competition in the use of AI. In doing so, regulatory bodies should review and modernize their regulatory frameworks to provide clear guidance, particularly by promoting a risk-based approach to regulation and by ensuring that guidelines are specific enough to address the complexities of AI systems while remaining flexible enough to accommodate diverse applications. AFC appreciates the opportunity to comment on the Treasury's Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. We thank you for your consideration of our comments.

 

Sincerely,

 

Ian P. Moloney

SVP, Head of Policy and Regulatory Affairs

American Fintech Council

[1] The American Fintech Council's (AFC) membership spans EWA providers, lenders, banks, payments providers, loan servicers, credit bureaus, and personal financial management companies.

[2] U.S. Department of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, (Mar. 2024), available at https://home.treasury.gov/system/files/136/Treasury-AI-Report.pdf.

[3] Colorado General Assembly, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems," Senate Bill 24-205 (2024), available at https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf.

[4] Organisation for Economic Co-operation and Development, OECD AI Principles, available at https://www.oecd.org/en/topics/sub-issues/ai-principles.html.

[5] DLA Piper, Europe: The EU AI Act's Relationship with Data Protection Law - Key Takeaways, Privacy Matters (Apr. 2024), available at https://privacymatters.dlapiper.com/2024/04/europe-the-eu-ai-acts-relationship-with-data-protection-law-key-takeaways/.

[6] National Institute of Standards and Technology, NIST AI Risk Management Framework (NIST AI 100-1), (Jan. 2023), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[7] Consumer Financial Protection Bureau, Fair Lending Report of the Consumer Financial Protection Bureau, Fiscal Year 2023, (2023), available at https://files.consumerfinance.gov/f/documents/cfpb_fair-lending-report_fy-2023.pdf.

[8] Consumer Reports & Consumer Federation of America, Statement on Less Discriminatory Algorithms, (Jun. 26, 2024), available at https://advocacy.consumerreports.org/wp-content/uploads/2024/06/240626-CR-CFA-Statement-on-Less-Discriminatory-Algorithms-FINAL.pdf.

About the American Fintech Council: The mission of the American Fintech Council is to promote an innovative, responsible, inclusive, customer-centric financial system. You can learn more at www.fintechcouncil.org.