Congress is considering a dangerous bill that would let big banks and other financial firms deploy risky artificial intelligence while dodging major civil rights, consumer protection, and financial stability laws. H.R. 4801, the deceptively named Unleashing AI Innovation in Financial Services Act, would turn unsuspecting customers and the economy into test subjects by granting financial firms so-called regulatory "sandboxes" that exempt them from federal laws and regulations. Companies could request exemptions from rules that protect people from discrimination, fraud, predatory lending, market manipulation, and other financial harm, and the bill would effectively force regulators to rubber-stamp these dangerous AI projects.

In other words, these firms want to deploy risky, untested technology at consumers' expense, with no regard for the damage to people or the economy when things go wrong. That is a reckless bargain. Financial firms already use AI across lending, customer service, fraud detection, debt collection, securities trading, and more, all while following civil rights, consumer protection, and financial stability laws. Big banks cannot be allowed to use AI systems while escaping the laws designed to protect the public. The harm will fall hardest on the same communities that have long faced discrimination from banks and lenders, including Black, Latine, Asian American, Indigenous, disabled, and women consumers. Congress should be strengthening AI oversight, not handing financial firms a legal escape hatch from accountability.
The bill does not even require financial firms to tell customers when AI is being used. It does not require customers to consent before companies collect and use their personal data. And it does not guarantee real remedies when AI systems rely on bad data, make inaccurate decisions, or cause serious harm. That should alarm everyone. AI systems can be unexplainable and difficult to challenge when they erroneously harm people. When a financial firm makes a wrong decision through a black-box system, the person harmed may have no meaningful way to understand what happened, correct the mistake, or hold anyone accountable.

The risks go beyond individual people. AI systems used in trading, risk management, and financial decision-making can undermine market integrity and financial stability. Because AI-powered business strategies often come from a few dominant AI vendors that create self-reinforcing feedback loops, they could push large parts of the financial system toward the same investment strategies, the same assumptions, and the same failures. Wall Street wants the upside of AI while offloading the risk onto consumers, investors, communities, and the broader economy.

Congress must stop this giveaway before it becomes law. Let's keep fighting for a financial system that protects people, not just Wall Street's profits.

-Patrick

Patrick Woodall (he/him)
