“[A] scientific and literary adventure – a form of experimental comprehension.”
“[The] point by point analysis of these unique documents is a task of the future. It shall require the efforts of more than one investigator and the collaboration of experts of most divergent scientific fields.”
David Boder, “The Tale of Anna Kovitzka,” pp. 2-3. (unpublished manuscript, c. 1948). UCLA Young Research Library Special Collections.
Directed by Professor Todd Presner (UCLA), the lab develops and applies computational methods of analysis to expand how we read, hear, and analyze Holocaust and genocide testimonies.
As part of my research and teaching in UCLA’s Digital Humanities program and Department of European Languages and Transcultural Studies, I direct an interdisciplinary digital humanities lab focused on exploring how computational methods and DH tools can be used to ask new questions about Holocaust history and memory. The visualizations published on this site, developed by lab members over the past few years, are part of the forthcoming book Ethics of the Algorithm: Digital Humanities and Holocaust Memory (Princeton University Press, 2024).
Our vertically integrated research team brings together undergraduate and graduate students, research interns, librarians, technology staff, and other faculty interested in these questions. Team members have backgrounds in fields such as history, literature, statistics, computer science, linguistics, digital humanities, and art & design.
Digital Humanities research: We apply and develop text-analysis tools from computational linguistics (spaCy and BERT for natural language processing), Praat and SPEK for phonetic and speech analysis, and network-visualization and data-analysis tools such as Tableau and Neo4j, along with custom, rule-based algorithms for “semantic triplet” extraction and experimental machine-learning processes for text disambiguation, indexing, and search.
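To illustrate the idea behind rule-based “semantic triplet” extraction, here is a minimal sketch. The lab’s actual pipeline runs over spaCy dependency parses; to keep the example self-contained, the dependency parse below is supplied by hand, and the `Token` class and sentence are illustrative placeholders, not the lab’s code.

```python
# Minimal sketch of rule-based subject-verb-object triplet extraction.
# A real pipeline would obtain `dep` and `head` from a spaCy parse;
# here the parse is hand-supplied so the example runs on its own.

from dataclasses import dataclass

@dataclass
class Token:
    text: str
    dep: str    # dependency label: "nsubj", "ROOT", "dobj", ...
    head: int   # index of this token's syntactic head

def extract_triplets(tokens):
    """Return (subject, verb, object) triplets from one parsed sentence."""
    triplets = []
    for i, tok in enumerate(tokens):
        if tok.dep == "ROOT":  # treat the root verb as the predicate
            subj = next((t.text for t in tokens
                         if t.dep == "nsubj" and t.head == i), None)
            obj = next((t.text for t in tokens
                        if t.dep == "dobj" and t.head == i), None)
            if subj and obj:
                triplets.append((subj, tok.text, obj))
    return triplets

# "The witness described the camp." (parse written out by hand)
sentence = [
    Token("The", "det", 1),
    Token("witness", "nsubj", 2),
    Token("described", "ROOT", 2),
    Token("the", "det", 4),
    Token("camp", "dobj", 2),
]
print(extract_triplets(sentence))  # [('witness', 'described', 'camp')]
```

Real testimony sentences are far messier (passives, clausal complements, coreference), which is why the lab pairs such rules with experimental machine-learning approaches.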
Cultural Heritage and AI research: Our current research investigates how AI (primarily large language models, or LLMs) can be used in ethical and responsible ways to analyze cultural repositories such as digitized museum collections and digital archives and libraries. This research includes using LLMs to enhance metadata and indexing, create contextual information, and build knowledge graphs. We are also using LLMs to reimagine the extraction and characterization of semantic triplets.
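The knowledge-graph step can be sketched in miniature: extracted triplets become directed edges from subjects to objects, labeled by their predicates. This is a toy in-memory version with invented placeholder data; in practice such triplets would be loaded into a graph database like Neo4j.

```python
# Hypothetical sketch: turning (subject, predicate, object) triplets into a
# simple in-memory knowledge graph. The triplet data below is invented for
# illustration; a production loader would write to Neo4j instead.

from collections import defaultdict

def build_graph(triplets):
    """Map each subject to a list of (predicate, object) edges."""
    graph = defaultdict(list)
    for subj, pred, obj in triplets:
        graph[subj].append((pred, obj))
    return graph

def neighbors(graph, node):
    """Entities reachable from `node` in one hop."""
    return [obj for _, obj in graph.get(node, [])]

triplets = [
    ("witness", "arrived_at", "city"),
    ("witness", "met", "interviewer"),
    ("interviewer", "recorded", "testimony"),
]
graph = build_graph(triplets)
print(neighbors(graph, "witness"))  # ['city', 'interviewer']
```

Keeping the predicate on each edge is what lets the graph answer relational questions (who met whom, who recorded what) rather than just co-occurrence queries.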
Learn more about our work: