Dr. Stuart Reid is Chief Technology Officer at STA Consulting in Seoul, with 40 years’ experience in the IT industry spanning development, testing, and education. He currently concentrates on the testing of AI, with application areas ranging from safety-critical systems to finance and media.
Stuart supports the worldwide testing community in a number of roles. He is convener of the ISO Software Testing Working Group, which has published the ISO/IEC/IEEE 29119 series of software testing standards, and is co-convener of the ISO Joint Working Group on Testing AI. Stuart previously led the ISO project on autonomous systems for software and systems engineering. He was also co-founder and first president of the International Software Testing Qualifications Board (ISTQB), created to promote software testing qualifications globally, and he was one of the authors of the new ISTQB certification on the testing of AI-based systems.
Building Trust in AI Through Risk: An Emerging Test Specialism
Artificial intelligence (AI) has become a mainstream force, projected to contribute $15.7 trillion to the global economy by 2030. It’s reshaping industries, boosting efficiency, and affecting our daily lives through personalized recommendations and predictive analytics. Despite its potential, AI faces significant trust issues concerning bias, ethics, and transparency. The EU’s AI Act seeks to enhance trust by managing risk through regulation. However, addressing these new risks requires a concerted testing effort. AI and machine-learning (ML) systems differ from traditional IT systems, creating both new risks and the need for new testing practices to mitigate them. For instance, their probabilistic and non-deterministic nature poses a substantial test oracle challenge, which requires new approaches, such as adversarial testing and metamorphic testing.
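To see why metamorphic testing helps with the test oracle problem, consider a minimal sketch (a hypothetical nearest-centroid classifier, not material from the talk): instead of needing the "correct" label for each input, we check metamorphic relations, i.e. that related inputs produce consistently related outputs.

```python
# Metamorphic testing sketch. The classifier and the relations below are
# illustrative assumptions, chosen so the relations provably hold.
import random

def train_centroids(data):
    """Nearest-centroid 'model': mean feature vector per class label."""
    sums, counts = {}, {}
    for features, label in data:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def predict(centroids, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

training = [([1.0, 1.0], "A"), ([1.2, 0.9], "A"),
            ([5.0, 5.0], "B"), ([4.8, 5.2], "B")]
test_points = [[1.1, 1.0], [4.9, 5.1], [3.0, 3.0]]

model = train_centroids(training)
baseline = [predict(model, x) for x in test_points]

# MR1: permuting the training data must not change any prediction.
shuffled = training[:]
random.Random(0).shuffle(shuffled)
assert [predict(train_centroids(shuffled), x) for x in test_points] == baseline

# MR2: scaling every feature by the same positive factor preserves the
# nearest centroid, so predictions must also be unchanged.
scaled_model = train_centroids([([f * 10 for f in feats], lbl)
                                for feats, lbl in training])
assert [predict(scaled_model, [f * 10 for f in x]) for x in test_points] == baseline

print("all metamorphic relations held")
```

No oracle ever said which label was "right" for any test point; a violated relation would still reveal a defect, which is the essence of the technique.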
This talk proposes a new AI testing specialism, tailored to tackle AI and ML challenges, and grounded in risk-based testing fundamentals. As AI systems grow in complexity, AI test specialists will play a crucial role in ensuring quality, confidence, and trust. This keynote underscores the necessity of these specialists and provides insights into their future role.
27 November - Full Day
Tutorial: Testing Machine Learning Systems
Machine Learning (ML), by far the most popular form of AI, is now the top priority in IT investment. However, from the user’s perspective, AI has major trust issues. These can most easily be addressed through testing, but the large volume of new AI software and the lack of testers with the necessary specialist skills mean we are failing to satisfactorily address this growing problem.
This tutorial will provide testers with an insight into the fascinating world of ML development and testing. The tutorial is largely hands-on, but is backed up by the latest developments in ML testing and reflects the current progress of the ISO working group on testing AI systems. It leads attendees through the experience of building various ML models, such as neural networks and decision trees, and then testing those models using specialist techniques. These techniques address three principal areas: input data testing, ML framework testing, and ML model testing (which includes techniques such as metamorphic testing). Each area will be covered in this tutorial.
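As a taste of the first of those areas, input data testing, here is a hedged sketch of the kind of automated data-quality checks a tester might run before training. The function name, thresholds, and findings are illustrative assumptions, not the tutorial's actual exercises.

```python
# Hypothetical input-data checks: range violations, class imbalance,
# and duplicate examples in a (features, label) dataset.
from collections import Counter

def check_input_data(rows, feature_range=(0.0, 1.0), max_imbalance=0.8):
    """Return a list of data-quality findings for (features, label) rows."""
    findings = []
    lo, hi = feature_range
    # Range check: every feature value must fall inside the expected range.
    for i, (features, _label) in enumerate(rows):
        if any(not (lo <= f <= hi) for f in features):
            findings.append(f"row {i}: feature out of range {feature_range}")
    # Balance check: no single class may dominate beyond the threshold.
    labels = Counter(label for _, label in rows)
    if labels and max(labels.values()) / len(rows) > max_imbalance:
        findings.append("class imbalance exceeds threshold")
    # Duplicate check: identical (features, label) pairs inflate apparent accuracy.
    seen = set()
    for i, (features, label) in enumerate(rows):
        key = (tuple(features), label)
        if key in seen:
            findings.append(f"row {i}: duplicate example")
        seen.add(key)
    return findings

data = [([0.1, 0.2], "A"), ([0.1, 0.2], "A"),  # duplicate pair
        ([0.9, 1.5], "B")]                     # out-of-range feature
print(check_input_data(data))
```

Checks like these matter because an ML model trained on flawed data can fail silently; unlike a coding bug, the defect leaves no stack trace, only degraded predictions.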
So, if you’re at all interested in how ML works, and feel that this is an area you should know more about, this tutorial is an excellent place to start.