Responsible AI Blog

Builders and buyers of AI systems are required to test and show that their systems comply with legislation – on safety, discrimination, privacy, transparency, and accountability. This talk covers recent regulation in this space, the limitations of current Generative AI models, and an automated testing framework that mitigates them. We describe the open-source LangTest library, which can automate the generation and execution of more than 100 types of Responsible AI tests. We then introduce Pacific AI, which provides a no-code interface to this capability for domain experts and automates many of the best practices for how these tools should be used.
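To make the idea concrete, here is a minimal sketch of the kind of robustness test that frameworks like LangTest automate: perturb an input and check that the model's prediction stays stable. The `swap_chars` typo perturbation, the `toy_sentiment_model` stand-in classifier, and the `robustness_test` harness are all illustrative inventions for this sketch, not LangTest's actual API.

```python
import random

def swap_chars(text: str, n_swaps: int = 2, seed: int = 0) -> str:
    """Perturb text by swapping adjacent characters (a simple typo model)."""
    random.seed(seed)
    chars = list(text)
    for _ in range(n_swaps):
        i = random.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def toy_sentiment_model(text: str) -> str:
    """Stand-in classifier: counts positive vs. negative keywords."""
    positive = sum(w in text.lower() for w in ("good", "great", "love"))
    negative = sum(w in text.lower() for w in ("bad", "awful", "hate"))
    return "positive" if positive >= negative else "negative"

def robustness_test(model, inputs, perturb, min_pass_rate=0.8):
    """Compare predictions on original vs. perturbed inputs; report pass rate."""
    passed = sum(model(x) == model(perturb(x)) for x in inputs)
    rate = passed / len(inputs)
    return {"pass_rate": rate, "passed": rate >= min_pass_rate}

result = robustness_test(
    toy_sentiment_model,
    ["I love this product, it is great", "This was an awful experience"],
    swap_chars,
)
print(result)
```

A real testing library generalizes this pattern across many perturbation types (typos, casing, dialects, added context) and test categories (bias, fairness, toxicity), with configurable minimum pass rates per test.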

Blog
Grant will fund R&D of LLMs for automated entity recognition, relation extraction, and ontology metadata...

This talk presents new levels of accuracy that have very recently been achieved, on public and independently reproducible benchmarks, on the three most common use cases for language models in...

This webinar presents key findings from the 2024 Generative AI in Healthcare Survey, conducted in February & March of 2024 by Gradient Flow to assess the key use cases, priorities, and...

In the era of rapidly evolving Large Language Models (LLMs) and chatbot systems, we highlight the advantages of using LLM systems based on RAG (Retrieval Augmented Generation). These systems excel...
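The RAG pattern described above can be sketched in a few lines: retrieve the documents most relevant to a question, then assemble them into a prompt that grounds the model's answer. Retrieval here uses simple word-overlap scoring purely for illustration; a production system would use vector embeddings, and the final prompt would be sent to an LLM.

```python
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question; keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble a prompt instructing the model to answer from context only."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{joined}\n"
        f"Question: {question}\n"
    )

docs = [
    "LangTest automates Responsible AI testing for language models.",
    "RAG systems ground LLM answers in retrieved documents.",
    "Chatbots can be evaluated for safety and bias.",
]
question = "How do RAG systems ground answers?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

Grounding the answer in retrieved context is what lets RAG systems cite sources and reduce hallucination relative to a bare chatbot.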