Guest Lecture: Prof. Murtuza Jadliwala

Topic: Reality Check: Seeing Through the Lies and Deceit of Generative AI Models

2024/08/08 14:00-15:00

Location: TU Darmstadt, Pankratiusstraße 2 (S2|20, 121)

Organizer:
Inviting Professor: Ahmad-Reza Sadeghi


Abstract

Large Language Models (LLMs), such as GPT-4, Gemini, LLaMA, and Mistral, are foundational text generation systems capable of understanding and generating natural language and other forms of textual content, and they can be trained to perform a wide range of specialized tasks such as answering questions, summarizing documents, translating languages, and completing sentences. An emerging application of such models is to generate computer programs (e.g., code snippets and application/cryptographic libraries) from a natural language description of the desired functionality. This is typically accomplished by training or fine-tuning these foundational models on vast programming-related datasets, including code repositories, technical forums, and coding documentation. Such code-generating language models, however, are prone to outputting fictitious (hallucinated) packages/libraries and incorrect programming constructs, which can seriously undermine the correctness, security, and performance of the developed software. In this talk, I will outline how package/library name hallucinations by code-generating LLMs can enable package confusion attacks against open-source package repositories such as PyPI and npm, eventually resulting in the generation of malicious application/cryptographic code and libraries. I will characterize the package hallucinations produced by these models and discuss strategies that significantly reduce the occurrence of such hallucinations. Time permitting, I will also briefly discuss other challenges in the domain of generative AI security, specifically the problem of mitigating toxicity in conversational applications (e.g., chatbots) of LLMs and defending against fake or synthetically generated images produced by vision-language models.
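The attack vector sketched in the abstract works because a developer may install an LLM-suggested dependency whose name an attacker has squatted on the package index. A minimal defensive sketch is to vet suggested names against a trusted snapshot of known packages before installing. The package names and the `KNOWN_PACKAGES` set below are hypothetical illustrations; a real check would query the actual index (e.g., PyPI or npm) rather than a hard-coded set.

```python
# Sketch: vetting LLM-suggested dependencies against a trusted package list.
# KNOWN_PACKAGES is a hypothetical stand-in for a snapshot of a real index
# such as PyPI; a production defense would query the index itself.

KNOWN_PACKAGES = {
    "requests", "numpy", "cryptography", "flask",
}

def vet_suggestions(suggested):
    """Split LLM-suggested dependency names into known packages and
    potential hallucinations that an attacker could register (squat on)."""
    known = [name for name in suggested if name.lower() in KNOWN_PACKAGES]
    suspect = [name for name in suggested if name.lower() not in KNOWN_PACKAGES]
    return known, suspect

# Example: "crypto-utils-pro" is a made-up name of the kind an LLM might
# hallucinate; it should be flagged rather than installed blindly.
known, suspect = vet_suggestions(["requests", "crypto-utils-pro"])
```

Note that existence on the index alone is not sufficient once an attacker has published a malicious package under a hallucinated name, which is why the talk's focus on reducing hallucinations at the model level complements such client-side checks.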


Speaker Bio

Murtuza Jadliwala is an Associate Professor and Cloud Technology Endowed Fellow in the Department of Computer Science at the University of Texas at San Antonio (UTSA), where he directs the Security, Privacy, Trust and Ethics in Computing Research Lab (SPriTELab). He obtained his doctoral (PhD) degree in Computer Science from the University at Buffalo, State University of New York, in 2008. Prior to joining UTSA, he was a Post-doctoral Fellow at the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland (2008–2011) and an Assistant Professor in the Electrical Engineering and Computer Science department at Wichita State University in Wichita, Kansas (2012–2017). He served as a United States (US) Air Force Office of Scientific Research (AFOSR) Summer Faculty Fellow in 2015 and received the Dwayne and Velma Wallace Excellence in Teaching Award in 2017 and the US National Science Foundation's CAREER Award in 2020.