Dominik Jeurissen
Queen Mary University of London
Dominik Jeurissen is a PhD student at Queen Mary University of London, exploring new ways of playtesting complex games using large language models (LLMs). He collaborates with Creative Assembly to develop playtesting methods that can be applied to games such as Total War. His research has recently shifted towards LLMs because of their potential to enable playtesting methods that require no training and to produce more diverse playtesting agents. Before his PhD, Dominik worked for four years in commercial software development.
Dominik Jeurissen is speaking at the following session
LLM Agents for QA - Potential & Limitations
With tight deadlines and a constantly evolving game, thorough testing is challenging. Using AI agents to ease this work sounds promising, but training machine-learning agents is often too slow, and implementing agents by hand takes time. One particularly exciting application for QA is therefore to use Large Language Models (LLMs) as zero-shot game-playing agents. LLM-based agents can play games without pre-training, making them a valuable asset for testing a constantly changing game. But how well do they actually play? What are their strengths, and what do they struggle with?
In this session, we will review how to implement zero-shot agents with LLMs and show examples of existing LLM-based game-playing agents. We will also show that, despite their many limitations, these agents have the potential to become a valuable QA tool for automating repetitive tasks.
Session Takeaway
- An overview of cutting-edge research on LLM-based zero-shot game-playing agents.
- An understanding of what these agents do well and where their limitations lie.
- Practical tips on how to use LLM agents as QA tools.