The Center for AI and Digital Policy (CAIDP), an offshoot of the Democrat-aligned Michael Dukakis Institute for Leadership and Innovation, is pressing the Federal Trade Commission (FTC) to put the brakes on OpenAI.
The policy group, which seeks to influence governments around the world on AI policy (in Europe, it has agitated for limits on technology used to track illegal migrants), is asking the FTC to bar OpenAI from any further commercial releases of GPT-4, the model behind its market-leading ChatGPT tool.
In a complaint filed with the agency on Thursday and posted on the group’s website, CAIDP called GPT-4 “biased, deceptive, and a risk to privacy and public safety.”
In its complaint, the group said OpenAI’s GPT-4 fails to meet the FTC’s standard of being “transparent, explainable, fair and empirically sound while fostering accountability.”
The group urged the FTC “to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
The involvement of the leftist organization, which in December told the UN High Commissioner on Human Rights that AI could be used to “spread disinformation” and “deepen inequality or exacerbate existing discrimination,” reveals political motivations behind recent efforts to constrain AI.
Much attention has focused on non-partisan concerns about AI’s potential to damage human interests, or to be used to facilitate scamming, hacking, and cyberwarfare. Others worry that a superintelligent AI could start killing humans.
The involvement of an overtly partisan, Democrat-linked organization like CAIDP reveals another agenda: that AI, like social media before it, must be prevented from undermining its policy goals.
Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election.