Practical Technology Ethics
Fall 2019 - Present
Description
Machine learning, and artificial intelligence more broadly, currently occupies an ambiguous space between liberating and anti-social technologies. This project pursues two threads: socially constructive new use cases and fixes for existing problems. On the first thread, we are developing an AI early warning system that monitors how manipulated content online, such as altered photos in memes, can in some cases contribute to violent conflict, societal instability, and interference in democratic elections. Look no further than the 2019 Indonesian election to see how online disinformation can spill over into real-world harm. Our system may prove useful to journalists, peacekeepers, election monitors, and others who need to understand how manipulated content spreads online during elections and in other contexts.
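The image-tracking idea above can be illustrated, at a very small scale, with perceptual hashing: a lightly altered copy of an image hashes to a bit string close to the original's, while an unrelated image does not. The sketch below is purely illustrative, not the project's actual system; the synthetic gradient arrays stand in for real images, and the minimal `dhash` helper is our own simplification of the standard difference-hash technique.

```python
import numpy as np

def dhash(img, size=8):
    """Minimal difference hash: sample a (size x size+1) grid from a 2-D
    grayscale array, compare horizontally adjacent samples, and pack the
    resulting bits into a single integer."""
    h, w = img.shape
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size + 1).astype(int)
    small = img[np.ix_(rows, cols)]
    bits = (small[:, 1:] > small[:, :-1]).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Synthetic "images": a horizontal gradient, a lightly edited copy
# (a pasted patch, mimicking a meme text overlay), and an unrelated image.
base = np.tile(np.arange(64, dtype=float), (64, 1))
altered = base.copy()
altered[:10, :10] = 100.0   # small local edit
unrelated = base.T          # a different image entirely

d_alt = hamming(dhash(base), dhash(altered))
d_unrel = hamming(dhash(base), dhash(unrelated))
print(d_alt, d_unrel)  # the edited copy stays close; the unrelated image does not
```

In practice, clustering such hashes over millions of posts is one simple way to group near-duplicate variants of the same manipulated image as it spreads.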
On the second thread, applied machine learning research has the potential to fuel further advances in data science, but it is greatly hindered by an ad hoc design process, poor data hygiene, and a lack of statistical rigor in model evaluation. These issues have recently begun to attract more attention after causing public and embarrassing failures in research and development, such as claims that machine learning can predict criminality from the appearance of a face. Drawing on our experience as machine learning researchers, we follow the applied machine learning pipeline from algorithm design through data collection to model evaluation, highlighting common pitfalls and providing practical recommendations for improvement. At each step, case studies illustrate how these pitfalls occur in practice and where things could be improved.
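Two of those recommendations can be made concrete in a few lines. The hypothetical sketch below splits the data before computing any preprocessing statistics (avoiding test-set leakage, a common data-hygiene mistake) and reports a bootstrap confidence interval rather than a bare accuracy number; the synthetic dataset and nearest-centroid classifier are stand-ins chosen for brevity, not the project's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class dataset; a stand-in for any real problem.
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 5)),
               rng.normal(1.5, 1.0, (n // 2, 5))])
y = np.repeat([0, 1], n // 2)

# 1) Split BEFORE computing any preprocessing statistics. Standardizing on
#    the full dataset and splitting afterwards leaks test-set information.
idx = rng.permutation(n)
train, test = idx[:300], idx[300:]
mu, sigma = X[train].mean(axis=0), X[train].std(axis=0)
X_std = (X - mu) / sigma  # test points transformed with train statistics only

# A deliberately simple nearest-centroid classifier, just to get predictions.
centroids = np.stack([X_std[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = ((X_std[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
correct = np.argmin(dists, axis=1) == y[test]
acc = float(correct.mean())

# 2) Report uncertainty, not just a point estimate: bootstrap the test set.
boot = [rng.choice(correct, size=correct.size, replace=True).mean()
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"test accuracy = {acc:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

Reporting the interval rather than the point estimate makes clear how much of an apparent improvement over a baseline could be explained by test-set sampling noise alone.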
This work was supported by USAID under agreement number 7200AA18CA00054, DARPA and Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173, and the NVIDIA Corporation.
Publications
- "Motif Mining: Finding and Summarizing Remixed Image Content," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), January 2023. [pdf] [bibtex]
@inproceedings{Theisen_2023,
author = {Theisen, William and
Cedre, Daniel Gonzalez and
Carmichael, Zachariah and
Moreira, Daniel and
Weninger, Tim and
Scheirer, Walter},
title = {Motif Mining: Finding and Summarizing Remixed Image Content},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on
Applications of Computer Vision (WACV)},
year = {2023},
}
- "Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes," Bulletin of the Atomic Scientists, May 2021. [pdf] [bibtex]
@article{yankoski2021memes,
  title={Meme warfare: AI countermeasures to disinformation should focus on
         popular, not perfect, fakes},
  author={Yankoski, Michael and Scheirer, Walter and Weninger, Tim},
journal={Bulletin of the Atomic Scientists},
month={May},
year={2021},
publisher={Taylor \& Francis}
}
- "Automatic Discovery of Political Meme Genres with Diverse Appearances," Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), June 2021. [pdf] [code] [data] [bibtex]
@inproceedings{theisen2021automatic,
title={Automatic Discovery of Political Meme Genres with Diverse Appearances},
author={William Theisen and Joel Brogan and Pamela Bilo Thomas and
Daniel Moreira and Pascal Phoa and Tim Weninger and Walter Scheirer},
booktitle={International AAAI Conference on Web and Social Media (ICWSM)},
year={2021},
}
- "The 'Criminality From Face' Illusion," IEEE Transactions on Technology and Society (T-TS), December 2020. [pdf] [bibtex]
@ARTICLE{9233349,
author={K. W. {Bowyer} and M. C. {King} and W. J. {Scheirer} and K. {Vangara}},
journal={IEEE Transactions on Technology and Society},
title={The "Criminality From Face" Illusion},
year={2020},
volume={1},
number={4},
pages={175-183},
doi={10.1109/TTS.2020.3032321}
}
- "Pitfalls in Machine Learning Research: Reexamining the Development Cycle," Proceedings of the I Can't Believe It's Not Better! Workshop (ICBINB@NeurIPS 2020), December 2020. [pdf] [poster] [bibtex]
@inproceedings{Biderman20_ICBINB,
author = {Stella Biderman and
Walter J. Scheirer},
title = {Pitfalls in Machine Learning Research: Reexamining the Development Cycle},
booktitle = {Proceedings of the I Can't Believe It's Not Better! Workshop
(ICBINB@NeurIPS 2020)},
year = {2020}
}
- "A Pandemic of Bad Science," Bulletin of the Atomic Scientists, July 2020.
- "An AI Early Warning System to Monitor Online Disinformation, Stop Violence, and Protect Elections," Bulletin of the Atomic Scientists, March 2020. [pdf] [bibtex]
@article{yankoski2020ai,
title={An AI early warning system to monitor online disinformation, stop
violence, and protect elections},
author={Yankoski, Michael and Weninger, Tim and Scheirer, Walter},
journal={Bulletin of the Atomic Scientists},
month={March},
year={2020},
publisher={Taylor \& Francis}
}
Press Coverage
- Bulletin of the Atomic Scientists: "How can the Biden administration reduce scientific disinformation? Slow the high-pressure pace of scientific publishing"
- Daily Beast: "Prof Said Jade Amulets May Block COVID — and Became a Science Supervillain"
- Bulletin of the Atomic Scientists: "How to Make AI Less Racist"
- VentureBeat: "AI Weekly: AI Phrenology is Racist Nonsense, so of Course it Doesn't Work"