When studies are conducted on human beings, an "Institutional Review Board," or "IRB," is required to review the study and formally approve the research before it begins. This is not being done at present for federally funded work with AI/LLM programs, and experts warn the omission may be significantly harming U.S. citizens.
The requirement exists precisely because such studies are experiments on human beings.
Critics say that Artificial Intelligence platforms powered by "Large Language Models," such as "Claude" and "ChatGPT," are engaged in this kind of human research and should be subject to board review and approval. They also point out that current HHS policy appears to require IRB review for all federally funded research on human subjects, yet Big Tech companies have so far evaded such review.
IRB rules (45 C.F.R. § 46.109, part of "the Common Rule") require all federally funded human-subjects research to go through IRB approval, informed consent, and continuing oversight.
Some courts have recognized that failure to obtain IRB approval can itself be evidence of negligence or misconduct.
Even low-impact, otherwise innocuous research requires this kind of professional review to ensure that harmful effects are not inadvertently inflicted on the human participants. Even most modern surveys are required to undergo IRB review before they begin.
Scientists have already raised alarms about the mental and psychological impact of LLM use among the general population.
