Police drag away a tent from a pro-Palestinian encampment at the University of California, Irvine on May 15, 2024.
Framing dissent and poverty as a menace to public order can threaten fundamental rights, particularly when it’s used to justify the deployment of predictive technology.
Algorithms could serve as mirrors for you to check your biases.
People are better able to see and correct biases in algorithms’ decisions than in their own decisions, even when algorithms are trained on their decisions.
Using technology to screen job applicants might be faster than reading CVs and conducting face-to-face interviews, but the most suitable candidate could be overlooked.
The AI most likely to cause you harm is not some malevolent superintelligence, but the loan algorithm at your bank.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Vice President Kamala Harris held a meeting with civil rights leaders and consumer protection experts about the societal impact of AI on July 12, 2023.
Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
An increasing number of health care decisions rely on information from algorithms.
Biased algorithms in health care can lead to inaccurate diagnoses and delayed treatment. Deciding which variables to include to achieve fair health outcomes depends on how you approach fairness.
Markets are increasingly driven by decisions made by AI.
AI algorithms can reinforce existing biases. Before they are introduced as routine tools in clinical care, we must establish ethical guidelines to reduce the risk of harm.
The new generation of AI tools makes it a lot easier to produce convincing misinformation.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But regulating AI is easier said than done and could have unintended consequences.
Is a wildly popular social media app a threat to U.S. citizens?
James P. Jimirro Professor of Media Effects, Co-Director, Media Effects Research Laboratory, & Director, Center for Socially Responsible AI, Penn State
Professor, Computing and Information Systems, Pro Vice-Chancellor (Research Systems), and Pro Vice-Chancellor (Digital & Data), The University of Melbourne