
    John Snow Labs Medical LLMs are now available in Amazon SageMaker JumpStart

David Talby, Chief Technology Officer at John Snow Labs

Today, we are excited to announce that John Snow Labs’ Medical LLM – Small and Medical LLM – Medium large language models (LLMs) are now available on Amazon SageMaker JumpStart. Medical LLM is optimized for the following medical language understanding tasks:

    • Summarizing clinical encounters – Summarizing discharge notes, progress notes, radiology reports, pathology reports, and various other medical reports
    • Question answering on clinical notes or biomedical research – Answering questions about a clinical encounter’s principal diagnosis or tests ordered, or about a research abstract’s study design or main outcomes

    For medical doctors, this tool provides a rapid understanding of a patient’s medical journey, aiding in timely and informed decision-making from extensive documentation. This summarization capability not only boosts efficiency but also makes sure that no critical details are overlooked, thereby supporting optimal patient care and enhancing healthcare outcomes.

In a blind evaluation performed by the John Snow Labs research team, Medical LLM – Small outperformed GPT-4o in medical text summarization, being preferred by doctors 88% more often for factuality, 92% more for clinical relevance, and 68% more for conciseness. The model also excelled in clinical notes question answering, preferred 46% more for factuality, 50% more for relevance, and 44% more for conciseness. In biomedical research question answering, the preference was even more pronounced: 175% more for factuality, 300% more for relevance, and 356% more for conciseness. Notably, despite being smaller than competitive models by more than an order of magnitude, the small model performed comparably in open-ended medical question answering tasks.

    Medical LLM in SageMaker JumpStart is available in two sizes: Medical LLM – Small and Medical LLM – Medium. The models are deployable on commodity hardware, while still delivering state-of-the-art accuracy. This is significant for medical professionals who need to process millions to billions of patient notes without straining computing budgets.

    Both models support a context window of 32,000 tokens, which is roughly 50 pages of text. You can try out the models with SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML. In this post, we walk through how to discover and deploy Medical LLM – Small using SageMaker JumpStart.
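The "roughly 50 pages" figure can be sanity-checked with a back-of-the-envelope conversion. The ratios below (about 0.75 words per token, about 500 words per page) are common rules of thumb, not exact values:

```python
# Back-of-the-envelope check of "32,000 tokens is roughly 50 pages".
# WORDS_PER_TOKEN and WORDS_PER_PAGE are rough assumptions, not exact values.
TOKENS = 32_000
WORDS_PER_TOKEN = 0.75  # typical for English text with subword tokenizers
WORDS_PER_PAGE = 500    # typical single-spaced page

pages = TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE
print(round(pages))  # about 48 pages, in line with "roughly 50 pages"
```

Actual page counts will vary with the tokenizer and the density of the text, but the estimate lands in the right range.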

     

    About John Snow Labs

John Snow Labs, the AI for healthcare company, provides state-of-the-art software, models, and data to help healthcare and life science organizations put AI to good use. John Snow Labs is the developer behind Spark NLP, Healthcare NLP, and Medical LLMs. Its award-winning medical AI software powers the world’s leading pharmaceuticals, academic medical centers, and health technology companies. John Snow Labs’ Medical Language Models library is by far the most widely used natural language processing (NLP) library among practitioners in the healthcare space (Gradient Flow: The NLP Industry Survey 2022 and the Generative AI in Healthcare Survey 2024).

    John Snow Labs’ state-of-the-art AI models for clinical and biomedical language understanding include:

    • Medical language models, consisting of over 2,400 pre-trained models for analyzing clinical and biomedical text
    • Visual language models, focused on understanding visual documents and forms
    • Peer-reviewed, state-of-the-art accuracy on a variety of common medical language understanding tasks
    • Tested for robustness, fairness, and bias

     

    What is SageMaker JumpStart?

With SageMaker JumpStart, you can choose from a broad selection of publicly available foundation models (FMs). ML practitioners can deploy FMs to dedicated Amazon SageMaker instances from a network isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy the Medical LLM – Small model with a few clicks in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK, enabling you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, or container logs. The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. The Medical LLM – Small model is available today for deployment and inference in SageMaker Studio.
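The programmatic path mentioned above can be sketched as follows. The model ID and instance type here are placeholders (assumptions); copy the exact values from the model card in SageMaker Studio. The actual SDK calls are shown as comments because they require the sagemaker package and AWS credentials:

```python
# Sketch of a programmatic deployment with the SageMaker Python SDK.
# Both values below are hypothetical placeholders; the real ones are
# listed on the model card in SageMaker Studio.
deploy_config = {
    "model_id": "john-snow-labs-medical-llm-small",  # hypothetical model ID
    "instance_type": "ml.g5.12xlarge",               # hypothetical instance type
}

# With the real SDK (requires sagemaker and AWS credentials):
# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(model_id=deploy_config["model_id"])
# predictor = model.deploy(
#     instance_type=deploy_config["instance_type"],
#     accept_eula=True,  # marketplace models require accepting the EULA
# )
print(deploy_config["model_id"])
```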

     

    Discover the Medical LLM – Small model in SageMaker JumpStart

    You can access the FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.

    SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.

    In SageMaker Studio, you can access SageMaker JumpStart, which contains pre-trained models, notebooks, and prebuilt solutions, under Prebuilt and automated solutions.

    From the SageMaker JumpStart landing page, you can discover various models by browsing through different hubs, which are named after model providers. You can find the Medical LLM – Small model in the John Snow Labs hub (see the following screenshot). If you don’t see the Medical LLM – Small model, update your SageMaker Studio version by shutting down and restarting. For more information, refer to Shut down and Update Studio Classic Apps.

    You can also find the Medical LLM – Small model by searching for “John Snow Labs” in the search field.

You can choose the model card to view details about the model, such as the license, the data used to train it, and how to use it. You will also find two options, Deploy and Preview notebooks. Choosing Deploy deploys the model and creates an endpoint.

    Subscribe to the Medical LLM – Small model in AWS Marketplace

This model requires an AWS Marketplace subscription. When you choose Deploy in SageMaker Studio, you will be prompted to subscribe to the AWS Marketplace listing if you don’t already have an active subscription. If you are already subscribed, choose Deploy.

    If you don’t have an active AWS Marketplace subscription, choose Subscribe. You will be redirected to the listing on AWS Marketplace. Review the terms and conditions and choose Accept offer.

    After you’ve successfully subscribed to the model on AWS Marketplace, you can now deploy the model in SageMaker JumpStart.

     

    Deploy the Medical LLM – Small model in SageMaker JumpStart

    When you choose Deploy in SageMaker Studio, deployment will start.

    You can monitor the progress of the deployment on the endpoint details page that you’re redirected to.

On the same endpoint details page, on the Test inference tab, you can send a test inference request to the deployed model. This is useful if you want to verify that your endpoint responds to requests as expected. The following prompt asks the Medical LLM – Small model a question, providing the context followed by the question, and checks the resulting response. Performance metrics, such as execution time, are also included.
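The same test request can be sent programmatically. The payload field names, the clinical snippet, and the endpoint name below are illustrative assumptions; the exact request schema for your endpoint is shown on the Test inference tab:

```python
import json

# Shape of a question-answering request for the deployed endpoint.
# The field names ("inputs", "parameters") and all values are assumptions
# for illustration; check the Test inference tab for the actual schema.
context = (
    "Discharge summary: 62-year-old male admitted with chest pain. "
    "Troponin elevated; cardiac catheterization showed 90% LAD stenosis; "
    "drug-eluting stent placed."
)
question = "What was the principal diagnosis for this encounter?"

payload = {
    "inputs": f"Context: {context}\n\nQuestion: {question}",
    "parameters": {"max_new_tokens": 256, "temperature": 0.1},
}
body = json.dumps(payload)

# Sending it with boto3 (requires AWS credentials and the real endpoint name):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="medical-llm-small-endpoint",  # hypothetical name
#     ContentType="application/json",
#     Body=body,
# )
# print(response["Body"].read().decode())
```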

    You can also test out the medical text summarization response.

    Deploy the model and run inference through a notebook

    Alternatively, you can choose Open in JupyterLab to deploy the model through the example notebook. The example notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources. You can configure additional parameters as needed, but SageMaker JumpStart enables you to deploy and run inference out of the box with the included code.

    The notebook already has the necessary code to deploy the model on SageMaker with default configurations, including the default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. To learn more, refer to the API documentation.
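Overriding the defaults can be sketched as below. Every concrete value (instance type, subnet, security group) is a placeholder you would replace with your own; the commented calls show where those overrides plug into `JumpStartModel`:

```python
# Sketch of overriding JumpStart deployment defaults.
# All concrete values here are hypothetical placeholders.
overrides = {
    "instance_type": "ml.g5.48xlarge",  # hypothetical larger instance
    "vpc_config": {
        "Subnets": ["subnet-0123456789abcdef0"],       # placeholder
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
    },
}

# With the real SDK (requires sagemaker and AWS credentials):
# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(
#     model_id="john-snow-labs-medical-llm-small",  # hypothetical ID
#     vpc_config=overrides["vpc_config"],
# )
# predictor = model.deploy(instance_type=overrides["instance_type"])
print(overrides["instance_type"])
```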

    After you deploy the model, you can run real-time or batch inference against the deployed endpoint. The notebook includes example code and instructions for both.

     

    Clean up

    After you’re done running the notebook, delete all resources that you created in the process.

    When deploying the endpoint from the SageMaker Studio console, you can delete it by choosing Delete on the endpoint details page.
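If you prefer to clean up programmatically rather than through the console, a boto3 sketch follows. The endpoint name is a hypothetical placeholder; deleting the endpoint config alongside the endpoint assumes they share a name, which is the SageMaker default but may differ in your setup:

```python
# Sketch of programmatic cleanup; the endpoint name is a placeholder.
endpoint_name = "medical-llm-small-endpoint"  # hypothetical name

# With boto3 (requires AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# sm.delete_endpoint(EndpointName=endpoint_name)
# sm.delete_endpoint_config(EndpointConfigName=endpoint_name)  # assumes same name
print(endpoint_name)
```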

If you want to unsubscribe from the model package completely, you need to unsubscribe from the product in AWS Marketplace:

    1. Navigate to the Machine Learning tab on your software subscriptions page.
    2. Locate the listing that you want to cancel the subscription for, then choose Cancel Subscription to cancel the subscription.

    Complete these cleanup steps to avoid continued billing for the model.

    Conclusion

In this post, we showed you how to get started with the first healthcare-specific model available in SageMaker JumpStart. Check out SageMaker JumpStart in SageMaker Studio now to get started.

    About the Authors

    Art Tuazon is a Solutions Architect on the CSC team at AWS. She supports both AWS Partners and customers on technical best practices. In her free time, she enjoys running and cooking.

    Beau Tse is a Partner Solutions Architect at AWS. He focuses on supporting AWS Partners through their partner journey and is passionate about enabling them on AWS. In his free time, he enjoys traveling and dancing.

    David Talby is the Chief Technology Officer at John Snow Labs, helping companies apply artificial intelligence to solve real-world problems in healthcare and life science. He was named USA CTO of the Year by the Global 100 Awards and Game Changers Awards in 2022.
