Michaël Pilaeten

Breaking the system, helping to rebuild it, and providing advice and guidance on how to avoid problems. That’s Michaël in a nutshell. With almost 20 years of experience in test consultancy across a variety of environments, he has seen the best (and worst) in software development. In his current role as Learning & Development Manager, he is responsible for guiding his consultants, partners, and customers on their personal and professional path towards quality and excellence. He is the chair of the ISTQB® Agile workgroup and Product Owner of the ISTQB® CTFL 4.0 syllabus. Furthermore, he is a member of the BNTQB (Belgium & The Netherlands Testing Qualification Board), an accredited trainer for most ISTQB® and IREB trainings, and an international keynote speaker and workshop facilitator.


28 November

The Data Bias

In our profession as testers, analysts, and developers, we try to base our decisions on objectivity. On data. Unfortunately, the information or data we base our decisions on is often biased. There are many reasons for these biases, but the most striking is a gender bias, one that is in fact reinforced by a data bias. There are numerous examples of products, both IT- and non-IT-related, whose design suffers from a data bias that results in a gender bias, with a negative effect on quality and uptake. Where does that leave us as testers? What can we do to avoid releasing products that are not designed for the audience we are targeting?

This track talk provides examples of the presence of data and gender biases, and how they result in negative consequences. I will also show you techniques to detect the biases, plus tools and best practices to avoid them.

27 November - Full Day

Collaborative user story writing, review discussions, pair testing, crowd testing, etc.
Testers are working more and more with others to achieve higher levels of quality for the products they help deliver, yet there is one activity in the test process that testers keep doing alone. Whether you are determining your equivalence classes, setting up your decision tables, or struggling with n-switch coverage for your state transition diagram, typically it is just you by yourself trying to uncover the required test cases for an item under test. Even if you are part of a team of testers, you still split up the work, with each tester responsible for his or her assigned piece of functionality. You might review or execute your colleagues’ work, yet that is not real collaboration. Everybody knows that when people work together, the whole is often greater than the sum of its parts, so why do we keep designing tests all alone? Traditional test design techniques seem to be created for solo use, so where do we find the tunes to sing a different song?

In this workshop, we want to answer the question of how to design great tests together with your peers. We present different techniques, which you will apply yourself to experience how well each one suits your needs. Since cooperation does not come out of a vending machine, we will focus not only on the results, but also on setting up the context required for collaborative test design. All by doing and experiencing it yourself!
So what are you waiting for? Now is your chance to expand your test design toolbox with techniques that you can practice with the whole team. Let’s make testing even more fun!