Towards Autonomous Testing Agents via Conversational Large Language Models

by Robert Feldt, et al.
KAIST Department of Mathematical Sciences
Chalmers University of Technology

Software testing is an important part of the development cycle, yet adequately testing software requires specialized expertise and substantial developer effort. Recent discoveries about the capabilities of large language models (LLMs) suggest that they can serve as automated testing assistants, providing helpful information and even driving the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of an LLM as a testing assistant demonstrates how a conversational framework for testing can help developers. It also highlights how the often-criticized hallucination tendency of LLMs can be beneficial during testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss some potential limitations.
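To make the idea of a conversational testing assistant concrete, here is a minimal Python sketch. Note that `ask_llm`, `dedupe`, and the canned replies are hypothetical illustrations, not code from the paper; a real agent would replace the stub with a call to an actual LLM chat API.

```python
# Minimal, hypothetical sketch of a conversational testing assistant.
# ask_llm stands in for a call to a real LLM chat API; it is stubbed
# with canned replies here so the example runs without network access.

def ask_llm(prompt: str) -> str:
    """Stub for an LLM call (a real agent would query a chat model)."""
    if "edge cases" in prompt:
        return "Try: empty input, a single element, and repeated duplicates."
    return "assert dedupe([]) == []"

def dedupe(items):
    """Example function under test: drop duplicates, keep first-seen order."""
    seen, result = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# A two-turn "conversation": ask for edge cases, then for a concrete check.
suggestions = ask_llm("What edge cases should I test for dedupe()?")
proposed_check = ask_llm("Give me one concrete assertion for dedupe().")

print(suggestions)
exec(proposed_check)  # run the LLM-proposed check against dedupe
```

The key design point the paper argues for is the conversational loop: the developer can keep querying the assistant (for edge cases, concrete assertions, or explanations of failures) rather than receiving a single one-shot test suite.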


