Partnership Testimony on the Use of Artificial Intelligence in Consequential Decisions
New York State Senate Committee on Internet and Technology
Public Hearing, January 15, 2026
Chair Gonzalez and members of the Committee, thank you for the opportunity to testify today on the regulation of high-risk uses of artificial intelligence in the private sector.
My name is Alex Pena, and I serve as Executive Vice President of the Partnership for New York City. The Partnership mobilizes private sector resources and expertise to advance New York City’s standing as a global center of economic opportunity, upward mobility, and innovation. We are a nonprofit membership organization of more than 300 corporate, investment, and entrepreneurial firms that together support nearly one million jobs in New York City and deliver approximately $263 billion in economic output.
What we bring to today’s discussion is a cross-industry view of how the NY AI Act would operate in practice, not only for the AI developers that build and sell systems, but also for the far larger set of deployers that use those systems inside real-world workflows.
Unlawful discrimination is unacceptable, and whether it comes from people or technology, it must be eliminated. Our members support responsible AI governance that reduces discrimination in practice and preserves New York’s ability to lead in innovation and job creation.
New York has outsized stakes in getting this right. The Empire State ranks second for AI venture capital, with more than $10 billion invested in 2025, and the New York metro area alone is the second-largest AI talent hub in North America, with more than 45,000 workers. New AI-driven infrastructure investment is also arriving in force, including major data center projects in the Hudson Valley and Buffalo that could generate substantial long-term tax revenue and sustained job creation.
Our concern is simple: as written, the NY AI Act could end up working against its own purpose. It casts too broad a net and then layers on requirements that many companies cannot reliably carry out across high-volume operations. When regulations are that broad and that hard to implement, the inevitable happens: attention shifts from fixing problems to managing process and preparing for lawsuits, and the New Yorkers this bill is meant to protect are not better served.
We see four structural issues.
First, the bill authorizes private suits and then instructs courts, at the motion-to-dismiss stage, to automatically presume both wrongdoing and causation unless a defendant can rebut those presumptions with clear and convincing evidence. This means defendants must meet a higher evidentiary bar than the usual preponderance-of-the-evidence standard, and must do so at the very start of the case, before the facts have been tested through discovery. Respectfully, this private right of action enforcement model will sharply increase settlement pressure and defensive compliance well before the bill’s standards are clear, and it risks further intensifying New York’s already challenging litigation climate.
Second, and on the topic of the bill’s unclear standards, a system is “high-risk” if it is a substantial factor in a consequential decision, and “substantial factor” includes any factor that is merely capable of altering an outcome. That definition reaches far into the larger universe of tools that assist, influence, or inform decisions across sectors, even when people remain meaningfully involved. It also makes compliance turn on subjective, after-the-fact assessments rather than clear lines that developers and deployers can plan around. For example, an organization using AI to help triage customer service tickets or flag fraud may not know in advance whether that tool will be treated as “high-risk,” whether it was a “substantial factor” in a particular decision, or what “reasonable care” requires when it supports staff judgment. That uncertainty invites inconsistent outcomes and expensive guesswork.
Third, the audit requirement is built on a foundation that does not yet exist. The NY AI Act mandates third-party audits on a fixed schedule for both developers and deployers, but it never clearly answers the basic question any operator will ask: what does it mean to pass? Without clear, objective criteria for what constitutes a satisfactory audit, the mandate becomes an uncertain and costly process that will vary by auditor and invite second-guessing in enforcement and litigation. And while the bill delays the effective date, it does not fix the core problem. Unless the state sets clear, consistent standards and the audit market develops in a structured way, we will still see uneven outcomes, too few qualified auditors, and uncertainty for responsible businesses and the people they serve. That uncertainty is not abstract: companies will have to build and rebuild compliance programs around a moving target, driving up costs and pulling resources away from the substantive work of finding and fixing problems.
Fourth, the combined reporting, public database, and end-user rights package creates significant operational and disclosure risk. Reports and audits would be posted in a public database, increasing exposure to sensitive operational details that can create security and competitive risks while offering limited consumer value. Separately, deployers must implement notice, opt-out, and appeal processes. In high-volume environments, those obligations can slow service, increase costs, and reduce access. Put simply, when requirements this consequential are paired with lingering uncertainty about what must be made public, what can be protected, and what actually helps consumers, they can inadvertently create the wrong incentives.
Beyond these structural issues, the Partnership believes there is a broader question about timing and unintended consequences.
This legislation, while well-intentioned, could unintentionally harm the very communities it aims to protect. Human decision-making is prone to well-documented biases that are often implicit, inconsistent, and unrecorded. Responsibly designed AI systems are not a perfect substitute for human judgment, but they can be a powerful supplement and force multiplier. By structuring analysis around defined, relevant criteria, these tools can help people make sounder, fairer, and more consistent decisions in high-stakes domains like hiring and credit. Critically, because the logic and inputs can be documented, results are reviewable internally, creating a pathway to identify and remedy bias that is simply unavailable in purely discretionary human processes.
For these reasons, we respectfully recommend that the Committee consider whether this bill, as drafted, is the right vehicle at this moment, and continue engaging stakeholders through constructive forums like today’s hearing.
A sensible first step would be a review, led by the Office of the Attorney General, of the protections New York already has in civil rights, labor, and consumer law, including how they apply today when decisions are supported by algorithmic and AI-enabled systems. We believe the foundation is strong. The question is not whether discrimination is illegal; it is whether AI is creating a real, measurable enforcement gap that current laws and current tools cannot reach. The Partnership respectfully recommends starting with that assessment so that any legislation that follows can be more targeted, better scoped, and focused on the specific vulnerabilities that existing law does not address.
The Partnership looks forward to continuing to work with this Committee and the Legislature, and to engaging with stakeholders across our great state, as this conversation moves forward. We fully share the goal of protecting New Yorkers and strengthening trust, and we are committed to being a productive voice in the work ahead.
Thank you, and I look forward to the discussion.
