When Technology Falls Short
Fall 2019 - Present
Description
Machine learning, and artificial intelligence more broadly, currently occupies an ambiguous space between liberating and anti-social technologies. This work examines socially constructive use cases as well as fixes for existing problems. On the constructive side, we are developing an AI early warning system to monitor how manipulated content online, such as altered photos in memes, can in some cases lead to violent conflict and societal instability, and can also interfere with democratic elections. Look no further than the 2019 Indonesian election to see how online disinformation can have an unfortunate impact on the real world. Our system may prove useful to journalists, peacekeepers, election monitors, and others who need to understand how manipulated content spreads online during elections and in other contexts.
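As a rough illustration of one low-level ingredient a monitoring pipeline of this kind might rely on, the sketch below groups near-identical images with a simple average hash so that copies and lightly edited variants of the same meme can be recognized as they spread. This is a hypothetical Python example, not the project's actual system; the file names and distance threshold are placeholder assumptions.

import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    # Shrink to a tiny grayscale grid and threshold each pixel on the grid's mean.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    # Count differing bits; a small distance suggests a near-duplicate image.
    return int(np.count_nonzero(hash_a != hash_b))

# Hypothetical usage (file names are placeholders):
# if hamming_distance(average_hash("meme_a.png"), average_hash("meme_b.png")) <= 5:
#     print("Likely the same underlying image, possibly lightly edited.")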
When it comes to existing problems, applied machine learning research has the potential to fuel further advances in data science, but it is greatly hindered by an ad hoc design process, poor data hygiene, and a lack of statistical rigor in model evaluation. These issues have recently begun to attract more attention as they have led to public and embarrassing failures in research and development, such as claims that machine learning can predict criminality from the appearance of a face. Drawing on our experience as machine learning researchers, we follow the applied machine learning process from algorithm design to data collection to model evaluation, calling attention to common pitfalls and offering practical recommendations for improvement. At each step, case studies illustrate how these pitfalls occur in practice and where things could be improved.
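As a minimal illustration of the kind of statistical rigor in model evaluation this work argues for, the sketch below reports accuracy with a confidence interval over repeated train/test splits rather than as a single number. It is a generic Python example; the dataset, model, and number of repeats are arbitrary placeholder choices, not code from the paper.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

accuracies = []
for seed in range(20):  # repeat the evaluation over different random splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    accuracies.append(model.score(X_te, y_te))

acc = np.array(accuracies)
# Normal-approximation 95% confidence interval over the repeated runs.
half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
print(f"accuracy = {acc.mean():.3f} +/- {half_width:.3f} (95% CI over {len(acc)} splits)")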
This work was supported by USAID under agreement number 7200AA18CA00054, DARPA and Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173, and the NVIDIA Corporation.
Publications
- "AI Misinformation Detectors Can’t Save Us From Tyranny — At Least Not Yet,",Bulletin of the Atomic Scientists,September 2024.
- "C-CLIP: Contrastive Image-Text Encoders to Close the Descriptive-Commentative, ,
Gap,"Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV),January 2024.[pdf][bibtex]@inproceedings{theisen2024c,
title={C-CLIP: Contrastive Image-Text Encoders
to Close the Descriptive-Commentative Gap},
author={Theisen, William and Scheirer, Walter J},
booktitle={Proceedings of the IEEE/CVF Winter Conference
on Applications of Computer Vision (WACV)},
pages={7241--7250},
year={2024}
}
- "Motif Mining: Finding and Summarizing Remixed Image Content,", , , ,
, ,Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV),January 2023.[pdf][bibtex]@inproceedings{Theisen_2023,
author = {Theisen, William and
Cedre, Daniel Gonzalez and
Carmichael, Zachariah and
Moreira, Daniel and
Weninger, Tim and
Scheirer, Walter},
title = {Motif Mining: Finding and Summarizing Remixed Image Content},
publisher = {arXiv},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on
Applications of Computer Vision (WACV)},
year = {2023},
}
- "Meme warfare: AI countermeasures to disinformation should focus on popular,, , ,
not perfect, fakes,"Bulletin of the Atomic Scientists,May 2021.[pdf][bibtex]@article{yankoski2021memes,
title={}Meme warfare: AI countermeasures to disinformation should focus on popular,
not perfect, fakes,
author={Yankoski, Michael, Scheirer, Walter and Weninger, Tim},
journal={Bulletin of the Atomic Scientists},
month={May},
year={2021},
publisher={Taylor \& Francis}
}
- "Automatic Discovery of Political Meme Genres with Diverse Appearances,", , , , , , ,Proceedings of the International AAAI Conference on Web and Social Media (ICWSM),June 2021.[pdf] [code] [data][bibtex]@inproceedings{theisen2021automatic,
title={Automatic Discovery of Political Meme Genres with Diverse Appearances},
author={William Theisen and Joel Brogan and Pamela Bilo Thomas and
Daniel Moreira and Pascal Phoa and Tim Weninger and Walter Scheirer},
booktitle={International AAAI Conference on Web and Social Media (ICWSM)}
year={2021},
}
- "The Criminality From Face Illusion,", , , ,IEEE Transactions on Technology and Society (T-TS),December 2020.[pdf][bibtex]@ARTICLE{9233349,
author={K. W. {Bowyer} and M. C. {King} and W. J. {Scheirer} and K. {Vangara}},
journal={IEEE Transactions on Technology and Society},
title={The "Criminality From Face" Illusion},
year={2020},
volume={1},
number={4},
pages={175-183},
doi={10.1109/TTS.2020.3032321}
}
- "Pitfalls in Machine Learning Research: Reexamining the Development Cycle,", ,Proceedings of the I Can't Believe It's Not Better! Workshop (ICBINB@NeurIPS 2020),December 2020.[pdf] [poster][bibtex]@inproceedings{Biderman20_ICBINB,
author = {Stella Biderman and
Walter J. Scheirer},
title = {Pitfalls in Machine Learning Research: Reexamining the Development Cycle},
booktitle = {Proceedings of the I Can't Believe It's Not Better! Workshop
(ICBINB@NeurIPS 2020)},
year = {2020}
}
- "A Pandemic of Bad Science,",Bulletin of the Atomic Scientists,July 2020.
- "An AI Early Warning System to Monitor Online Disinformation, Stop Violence,, , ,
and Protect Elections,"Bulletin of the Atomic Scientists,March 2020.[pdf][bibtex]@article{yankoski2020ai,
title={An AI early warning system to monitor online disinformation, stop
violence, and protect elections},
author={Yankoski, Michael and Weninger, Tim and Scheirer, Walter},
journal={Bulletin of the Atomic Scientists},
month={March},
year={2020},
publisher={Taylor \& Francis}
}
Press Coverage
- Throughline (NPR): "The Conspiracy Files"
- Yahoo News: "Telegram CEO Pavel Durov Reportedly Charged by French Authorities"
- The Hill: "Misinformation Floods Social Media in Wake of Breakneck News Cycle"
- The Guardian: "The Big Idea: The Simple Trick That Can Sabotage Your Critical Thinking"
- The Atlantic: "The End of the 'Photoshop Fail'"
- To the Best of Our Knowledge (NPR): "Does AI Dream?"
- Yahoo News: "Misinformation Took Over Social Media After the Key Bridge Collapse"
- Red Hat Research Quarterly: "Walter Scheirer. Future Vision: On the Internet, Technopanic, and the Limits of AI"
- Los Angeles Times: "Scammers Used AI to Tell the World I Was Dead. Why? I Had to Find Out"
- Financial Times, FT Magazine: "It's Only a Matter of Time Before Disinformation Leads to Disaster"
- The Media Show (BBC Radio 4): "Deepfakes v Democracy"
- Nautilus Magazine: "Stop Worrying About Deepfakes"
- IEEE Spectrum: "Fakes: Not an Internet Thing, a Human Thing"
- The Political Scene Podcast: "We've Been Wrong to Worry About Deepfakes (So Far)"
- Undark: "Limits to Growth: Can AI's Voracious Appetite for Data Be Tamed?"
- Bulletin of the Atomic Scientists: "How can the Biden administration reduce scientific disinformation?
Slow the high-pressure pace of scientific publishing" - Daily Beast: "Prof Said Jade Amulets May Block COVID — and Became a Science Supervillain"
- Bulletin of the Atomic Scientists: "How to Make AI Less Racist"
- VentureBeat: "AI Weekly: AI Phrenology is Racist Nonsense, so of Course it Doesn't Work"