March 13, 2026
The Honorable Jorge Perez
Banking Commissioner
Connecticut Department of Banking
280 Trumbull Street
Hartford, CT 06103
Re: Response to Regulatory Guidance on the Governance of Artificial Intelligence
Dear Commissioner Perez,
On behalf of the American Fintech Council (AFC), I appreciate the opportunity to submit this comment letter in response to the Connecticut Department of Banking’s regulatory guidance regarding the governance of artificial intelligence systems (“The Guidance”) used by Connecticut-chartered financial institutions and other entities regulated by the Department.
AFC recognizes the Department’s efforts to engage stakeholders on this important issue and welcomes the opportunity to provide perspectives regarding the development of governance expectations for the responsible use of artificial intelligence within financial services. The Department’s guidance appropriately recognizes that artificial intelligence technologies (“AI”) may enhance operational efficiency, strengthen risk management capabilities, and improve customer service while also introducing potential cybersecurity, privacy, and consumer protection considerations that warrant thoughtful oversight.
As a standards-based organization, AFC is the largest and most diverse trade association representing financial technology (fintech) companies and innovative banks. On behalf of more than 150 member companies and partners, AFC promotes a transparent, inclusive, and customer-centric financial system by supporting responsible innovation in financial services and encouraging sound public policy. AFC members foster competition in consumer finance and pioneer products designed to better serve underserved consumer segments and geographies.
AFC’s membership includes a broad range of financial technology companies and banks that deploy advanced analytical tools and data-driven technologies across numerous aspects of financial services. These technologies support a wide variety of functions, including fraud detection, risk monitoring, compliance operations, customer service, underwriting, and transaction monitoring. As a result, AFC members have substantial practical experience implementing governance frameworks that ensure emerging technologies are deployed responsibly, securely, and in compliance with applicable regulatory obligations.
AFC has long advocated for a pragmatic approach to regulating AI. In our view, it is crucial that policymakers and regulators understand the relevant distinctions between each offering that leverages AI technologies and seek to craft the correct regulatory structure for each financial product. Further, when considering a regulatory framework for AI, it is important to recognize that AI technologies, when responsibly deployed, function as tools that expand and improve financial institutions' existing processes and practices; they are not a panacea that replaces or supplants the processes and practices the institution already conducts.
As such, we support regulatory efforts that promote responsible innovation while ensuring that financial institutions maintain effective risk management, consumer protection safeguards, and operational accountability when deploying new technologies. In particular, AFC believes that governance frameworks addressing the use of artificial intelligence should build upon the well-established risk management and compliance structures that financial institutions already maintain. Approaches grounded in proportionality, flexibility, and existing supervisory standards are best suited to ensuring that institutions can responsibly deploy emerging technologies while maintaining strong protections for consumers and the financial system.
The following comments outline AFC’s perspective on the Department’s guidance and offer recommendations intended to support balanced governance expectations that protect consumers, preserve institutional accountability, and allow banks and fintech companies to continue leveraging innovative technologies that improve the delivery of financial services.
I. AFC Supports a Risk-Based Governance Framework Predicated Upon the Specific Use Case of the AI Tool
AI tools and models have become integral to innovation and efficiency in the financial services industry. These tools support a myriad of functions, such as credit evaluation, fraud detection, risk monitoring, customer service, and operational automation. For many years, financial institutions have incorporated machine learning models, statistical modeling techniques, and natural language processing tools into many of the services they offer to consumers. Machine learning models and natural language processing systems are widely used to support activities such as credit scoring, fraud detection, algorithmic trading, and automated financial advice. These systems allow financial institutions to process large volumes of financial and behavioral data, identify patterns in financial behavior, and improve decision-making processes across a range of financial products and services.
In the credit context, AI-driven models can incorporate alternative datasets, including transaction histories and payment records, to generate more accurate assessments of consumer creditworthiness. These capabilities can help financial institutions better serve consumers who may lack extensive traditional credit histories, thereby expanding access to responsible financial services for historically underserved populations. As AI capabilities continue to expand, responsible governance frameworks are necessary to ensure that these systems are deployed in a manner that appropriately manages operational, cybersecurity, privacy, and consumer protection risks while preserving the substantial benefits that AI technologies can provide.
It is important to note that within each of these examples, the risk profile is driven by the underlying use case rather than by the mere application of an AI tool. While the AI tools discussed above contribute to that risk profile, responsible industry participants leveraging AI already maintain risk management frameworks for each underlying use case. Accordingly, regulatory frameworks governing the use of AI should recognize that applications of these AI tools represent an evolution of existing processes and techniques rather than an entirely new category of activity. Supervisory expectations should therefore build upon the well-established model risk management and enterprise risk governance frameworks that institutions already maintain.
A risk-based governance framework that is predicated upon the product or service in connection with which the AI tool is deployed represents the most effective regulatory approach for supervising the use of AI in financial services. Such a framework enables institutions and regulators to evaluate risks associated with specific use cases rather than applying uniform requirements to technologies that may differ substantially in function, complexity, and potential consumer impact. Financial institutions rely on a wide range of AI tools that vary in their degree of automation, decision-making authority, and operational significance. Consequently, governance frameworks should align oversight and risk management obligations with the characteristics of each AI application, including the sensitivity of the data involved, the materiality of the decision supported, and the extent to which outputs directly affect consumers.
Considering that AI technologies operate across diverse functions within financial institutions, governance requirements should be calibrated to the operational context in which the technology is deployed. Systems that support internal administrative processes may present significantly different risk profiles than systems that directly influence consumer-facing determinations such as underwriting or fraud decisions. A risk-based approach allows institutions to allocate oversight resources in proportion to the potential operational, legal, and consumer risks associated with a given application.
Importantly, a risk-based framework also aligns with the reality that financial institutions already operate within an extensive network of consumer protection, data privacy, and financial regulatory requirements. Federal statutes such as the Fair Credit Reporting Act and the Gramm-Leach-Bliley Act, as well as prohibitions against unfair, deceptive, or abusive acts or practices, continue to apply regardless of whether a financial service is supported by traditional software systems or by AI-enabled technologies. Rather than creating duplicative regulatory structures, policymakers should ensure that AI governance frameworks integrate with existing regulatory obligations.
In practice, this means that regulatory expectations should focus on the function and impact of a given AI application rather than the mere presence of artificial intelligence within a system. When AI technologies are used to support existing financial products or operational processes, the regulatory frameworks that already govern those activities typically remain appropriate. Conversely, where an AI use case materially alters the manner in which a regulated activity is performed or introduces novel consumer risk, additional governance expectations may be warranted. This context-specific approach ensures that oversight remains proportionate to the risks presented while avoiding unnecessary duplication of regulatory requirements.
For these reasons, regulatory guidance addressing AI governance should emphasize flexibility and proportionality. Institutions should be expected to implement governance frameworks that are predicated upon the products or services in connection with which the AI tools are deployed and that are appropriate to the size, complexity, and risk profile of each deployment. This approach ensures that supervisory expectations remain adaptable as technology evolves while preserving the ability of financial institutions to deploy innovative tools that improve operational efficiency and expand consumer access to financial services.
II. AFC Supports Governance Expectations that Emphasize Data Privacy, Transparency, and Human Oversight
Effective AI governance frameworks should prioritize three core principles: data privacy, transparency regarding how AI systems function within financial operations, and continued human oversight of material decision-making. Financial institutions routinely rely on large volumes of consumer and financial data when deploying technologies that support activities such as credit underwriting, transaction monitoring, and fraud detection. Governance expectations should therefore ensure that consumer information remains protected and that cybersecurity risks associated with data-intensive systems are mitigated, while allowing institutions to implement these protections through the risk management and security programs they already maintain.
Ensuring that AI systems respect consumer privacy and adhere to consumer protection standards is also crucial for maintaining trust and regulatory compliance. Institutions must prioritize the protection of consumer data and compliance with relevant privacy laws and regulations by implementing robust data security measures and transparent data usage policies, ensuring that personal data is managed securely and that consumer rights are protected from the outset. Moreover, transparency about how AI systems use consumer data and how those systems reach decisions is essential to meeting privacy and consumer protection standards and to fostering consumer trust and confidence in the use of AI technologies.
Transparency and institutional understanding of AI supported processes are equally important. Increased automation can introduce operational risk if institutions lose visibility into how systems generate outputs, what data sources inform those outputs, and how those outputs are incorporated into downstream decisions. Institutions should therefore maintain documentation, monitoring, and testing practices that allow them to understand, explain, and supervise the role AI systems play in operational and consumer-facing activities, particularly where such systems support underwriting, fraud detection, pricing, or other material determinations.
Finally, governance expectations should reinforce that the use of AI must continue to operate alongside meaningful human oversight. Financial institutions already incorporate human review into a wide range of activities, including credit decision-making, compliance monitoring, and enterprise risk management. AI governance policies should complement these existing practices by preserving clear lines of accountability and ensuring that institutions retain the ability to review outcomes, escalate concerns, and intervene when necessary. Framing governance in this manner promotes responsible use of AI while avoiding rigid requirements that could unnecessarily limit the ability of banks and fintech companies to deploy beneficial technologies.
Taken together, these principles would support a balanced governance approach in which institutions protect sensitive data, maintain clear visibility into how AI systems operate, and preserve meaningful human accountability for important outcomes. Such an approach promotes consumer protection and operational integrity while allowing financial institutions to continue leveraging AI tools that enhance efficiency, risk management, and access to financial services.
* * *
AFC remains committed to working collaboratively with the Connecticut Department of Banking as it develops guidance regarding the governance of artificial intelligence systems used by regulated financial institutions. As AI technologies continue to evolve and become increasingly integrated into financial services operations, establishing clear and pragmatic governance expectations will be important to ensuring that these technologies are deployed responsibly while maintaining strong consumer protections and operational safeguards.
As discussed throughout this comment letter, AFC believes that governance frameworks addressing the use of artificial intelligence should build upon the existing risk management, data privacy, and consumer protection obligations that already apply to financial institutions. A risk-based approach that emphasizes data protection, transparency, and meaningful human oversight, while allowing institutions to integrate these safeguards within existing compliance and risk management programs, will best support both responsible innovation and effective supervision.
AFC appreciates the opportunity to provide these comments on The Guidance and looks forward to continued engagement with the Department as it evaluates these important issues.
Sincerely,
Ian P. Moloney
Chief Policy Officer
American Fintech Council
[1] American Fintech Council’s (AFC) membership spans banks, non-bank lenders, payments providers, EWA providers, loan servicers, credit bureaus, and personal financial management companies.
[2] Connecticut Department of Banking, Regulatory Guidance: Governance of Artificial Intelligence Systems (Hartford, CT: Connecticut Department of Banking, February 17, 2026).
[3] IBM, “AI in Fintech,” IBM Think, accessed March 5, 2026, https://www.ibm.com/think/topics/ai-in-fintech.
[4] Worcester Polytechnic Institute, “AI in Financial Technology (Fintech), Explained,” WPI News, accessed March 5, 2026, https://www.wpi.edu/news/explainers/financial-technology-ai-fintech.
[5] American Fintech Council, Response to the U.S. Department of the Treasury Request for Information on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector (Washington, DC: American Fintech Council, August 2024), https://cdn.prod.website-files.com/65ffe2c368384b0aaeb608e2/66ba820ea08ad46f44ef6ef9_AFC%20Letter%20to%20Treasury%20on%20RFI%20on%20AI%20in%20the%20Financial%20Services.pdf.
[6] See generally 15 U.S.C. §§ 1681–1681x (Fair Credit Reporting Act); 15 U.S.C. §§ 6801–6809 (Gramm–Leach–Bliley Act); and federal prohibitions on unfair, deceptive, or abusive acts or practices under 12 U.S.C. §§ 5531–5536.
About the American Fintech Council: The mission of the American Fintech Council is to promote an innovative, responsible, inclusive, customer-centric financial system. You can learn more at www.fintechcouncil.org.