Presented by Trajectory Labs

Hallucinating Certificates: Using Generative Language Models for Testing TLS Software Parsing

About Event

In this talk, Talha Paracha will present insights from his latest research on using language models to improve software security ("Hallucinating Certificates", to appear at ICSE 2026).

Certificate validation is a crucial step in Transport Layer Security (TLS), the de facto standard network security protocol. Prior research has shown that differentially testing TLS implementations with synthetic certificates can reveal critical security issues, such as accidentally accepting untrusted certificates.

Paracha et al. introduce MLCerts, a new approach that leverages generative language models to generate synthetic certificates for more extensive testing of software implementations. These models have recently become (in)famous for generating content, writing code, and conversing with users, as well as for "hallucinating" syntactically correct yet semantically nonsensical output. The authors build on two novel insights: (a) TLS certificates can be expressed in natural-like language, namely in the human-readable textual form defined by the X.509 standard, and (b) differential testing can benefit from hallucinated, malformed test cases. MLCerts finds significantly more distinct discrepancies between the five TLS implementations OpenSSL, LibreSSL, GnuTLS, MbedTLS, and MatrixSSL than Transcert, the state-of-the-art benchmark.
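To make the differential-testing idea concrete, below is a minimal Python sketch (illustrative only, not the authors' actual MLCerts harness). It prints a certificate's human-readable X.509 text form via OpenSSL (insight a), then compares the accept/reject verdicts of two command-line verifiers on the same input and reports a discrepancy when they disagree (insight b). The file names cert.pem and ca.pem are assumptions for the example.

# Minimal differential-testing sketch (illustrative; not the MLCerts harness).
# Assumes a candidate certificate in cert.pem and a trust anchor in ca.pem.
import subprocess

# Insight (a): certificates have a natural-language-like textual form, which is
# the kind of representation a generative language model can read and emit.
text_form = subprocess.run(
    ["openssl", "x509", "-in", "cert.pem", "-text", "-noout"],
    capture_output=True, text=True,
).stdout
print(text_form)

def accepts(cmd):
    """Run a verifier; treat exit code 0 as 'certificate accepted'."""
    return subprocess.run(cmd, capture_output=True, text=True).returncode == 0

verdicts = {
    # OpenSSL: verify cert.pem against the trust anchor in ca.pem.
    "openssl": accepts(["openssl", "verify", "-CAfile", "ca.pem", "cert.pem"]),
    # GnuTLS certtool: verify the chain in cert.pem against ca.pem.
    "gnutls": accepts(["certtool", "--verify", "--load-ca-certificate",
                       "ca.pem", "--infile", "cert.pem"]),
}

# Insight (b): a hallucinated, malformed certificate is interesting precisely
# when implementations disagree about whether to accept it.
if len(set(verdicts.values())) > 1:
    print("Discrepancy found:", verdicts)
else:
    print("All verifiers agree:", verdicts)

In the paper's setting, many model-generated (and often deliberately malformed) certificates would be pushed through a harness along these lines across all five implementations, with each distinct pattern of disagreement counting as a discrepancy.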

Event Schedule
6:00 pm to 6:30 pm - Food and introductions
6:30 pm to 7:30 pm - Presentation and Q&A
7:30 pm to 9:00 pm - Open discussions

If you can't attend in person, join our live stream starting at 6:30 pm via this link.

This is part of our weekly AI Safety Thursdays series. Join us in examining questions like:

  • How do we ensure AI systems are aligned with human interests?

  • How do we measure and mitigate potential risks from advanced AI systems?

  • What does safer AI development look like?

Location
30 Adelaide St E
Toronto, ON M5C 3G8, Canada
Enter the main lobby of the building and let the security staff know you are here for the AI event. You may need to show your RSVP on your phone. You will be directed to the 12th floor where the meetup is held. If you have trouble getting in, give Georgia a call at 519-981-0360.