Klaudia Dussa-Zieger

Dr Klaudia Dussa-Zieger is team leader for consulting at Imbus and a recognised expert in the field of software testing. She specialises in test management, the continuous improvement of test processes and testing of and with AI.

With a strong passion for testing and quality, she has been involved in customer projects for more than 25 years, is a trainer for the ISTQB® Certified Tester Foundation and Advanced Level, and has taught software testing for many years as a lecturer at the University of Erlangen-Nuremberg. She is also a frequent speaker at conferences on testing topics close to her heart. Dr Dussa-Zieger contributes her expertise to standardisation work: since March 2009 she has chaired the DIN working group ‘System and Software Engineering’, and she is actively involved in international standardisation, including ISO/IEC/IEEE 29119. She has been a member of the German Testing Board (GTB) for over ten years and is currently its deputy chairwoman. She is also President of the ISTQB® (International Software Testing Qualifications Board) and heads its AI Taskforce, which addresses the challenges and potential of AI in software testing.

Keynote
25 November
Using LLMs for Software Testing

This talk begins with a brief overview of Large Language Models (LLMs), explaining their general approach and underlying principles. The main focus is on the practical application of LLMs in the field of software testing. Various use cases across the software testing lifecycle will be outlined, highlighting how LLMs can support and enhance different testing activities. A detailed example will demonstrate a Retrieval-Augmented Generation (RAG) approach using open-source LLMs deployed in a private cloud environment.

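As a purely illustrative aside (not material from the talk), the sketch below shows the basic shape of such a RAG pipeline in Python: relevant project documents are retrieved for a question and then passed to an LLM as context. All names here (retrieve, generate, answer_with_rag) are hypothetical, and a naive keyword match stands in for a real vector search and a real model call.

# Minimal RAG sketch: retrieve context, then ask the LLM to answer from it.
# Hypothetical example only; not the implementation shown in the keynote.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by word overlap with the question (stand-in for vector search).
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def generate(prompt: str) -> str:
    # Placeholder for a call to an open-source LLM hosted in a private cloud.
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(question, documents))
    prompt = (f"Context:\n{context}\n\n"
              f"Question: {question}\n"
              "Answer using only the context above.")
    return generate(prompt)

if __name__ == "__main__":
    docs = [
        "Password reset links sent by e-mail expire after 24 hours.",
        "Release 2.4 added CSV export to the reporting module.",
        "Admin users must log in with two-factor authentication.",
    ]
    print(answer_with_rag("Which test cases are needed for password reset?", docs))
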
The session concludes with a summary of updates to the ISTQB portfolio, specifically those related to testing AI-based systems and leveraging AI tools to support the testing process.

Workshop Session
Half-Day
24 November

This workshop offers a comprehensive introduction to prompt engineering. Participants will explore the fundamental components of prompts, common prompt patterns, and effective prompting techniques. All concepts will be illustrated through examples specifically tailored to the domain of software testing.

In addition, the workshop includes a series of hands-on exercises that allow participants to observe first-hand how variations in prompt design influence the output of large language models (LLMs). By experimenting with multiple LLMs, attendees will also gain insight into how different models respond to the same prompts.

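As a small, hypothetical illustration of the kind of effect explored in these exercises (not the workshop material itself), the snippet below contrasts an unconstrained prompt with a structured variant of the same test-design task; the structured variant usually produces more usable output, though the exact effect differs from model to model.

# Two prompt variants for the same software-testing task.
# Illustrative only; both would be sent to one or more LLMs to compare responses.

FEATURE = "Password reset: the link e-mailed to the user expires after 24 hours."

# Variant 1: a bare instruction with no constraints.
basic_prompt = f"Write test cases for this feature:\n{FEATURE}"

# Variant 2: the same task with a role, an output format and explicit coverage hints
# (a common prompt pattern in introductions to prompt engineering).
structured_prompt = (
    "You are an experienced software tester.\n"
    f"Feature under test: {FEATURE}\n\n"
    "List five test cases as a numbered list. For each, give a title, preconditions, "
    "steps and the expected result. Include at least one negative and one boundary case."
)

if __name__ == "__main__":
    print(basic_prompt)
    print("---")
    print(structured_prompt)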