
Automated Testing of Bias, Fairness, and Robustness of Language Models in the Generative AI Lab

Testing and mitigating bias, fairness, and robustness issues in AI applications is now a legal requirement in the USA in regulated industries like healthcare, human resources, and financial services. This webinar presents new capabilities within the no-code Generative AI Lab, designed for building custom language models by non-technical domain experts, that enable compliance with such requirements and embody best practices for Responsible AI. We’ll cover how you can:

  1. Create, edit, and reuse test suites
  2. Automatically generate test cases for robustness & bias
  3. Manually review and edit tests when needed
  4. Run LLM test suites and see both summarized and drill-down results
  5. Run regression tests before certifying a new model version or comparing competing models
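
To make the idea of automated robustness testing concrete, here is a minimal, self-contained sketch of what such a test suite does under the hood. This is an illustration only, not the Generative AI Lab's actual API: each input is perturbed (e.g., uppercased or given a typo), and the test passes when the model's prediction is unchanged. The `toy_model` classifier and the perturbation names are hypothetical placeholders.

```python
# Hypothetical sketch of a perturbation-based robustness test suite.
# Not the Generative AI Lab's actual API; all names here are illustrative.

def uppercase(text: str) -> str:
    """Perturbation: convert the input to all caps."""
    return text.upper()

def swap_typo(text: str) -> str:
    """Perturbation: introduce a single adjacent-character swap."""
    if len(text) < 2:
        return text
    i = len(text) // 2
    return text[:i - 1] + text[i] + text[i - 1] + text[i + 1:]

PERTURBATIONS = {"uppercase": uppercase, "typo": swap_typo}

def toy_model(text: str) -> str:
    """Placeholder classifier standing in for a custom language model."""
    return "positive" if "good" in text.lower() else "negative"

def run_suite(model, samples):
    """Run every perturbation test and report a pass rate per test."""
    results = []
    for name, perturb in PERTURBATIONS.items():
        passed = sum(model(s) == model(perturb(s)) for s in samples)
        results.append({"test": name, "pass_rate": passed / len(samples)})
    return results

samples = ["The service was good", "A bad experience overall"]
for row in run_suite(toy_model, samples):
    print(f"{row['test']}: {row['pass_rate']:.0%}")
```

A real test suite would draw perturbations from many more categories (bias, fairness, typos, casing, context) and surface both the summarized pass rates and the individual failing cases for manual review, mirroring the summarized and drill-down views described above.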

This webinar is intended for anyone interested in testing, certifying, and mitigating bias issues in custom language models for real-world systems.

David Cecchini
Senior Data Scientist at John Snow Labs

Ph.D. at Tsinghua-Berkeley Shenzhen Institute | Data Scientist
