NLP Summit Experts Share Perspectives on How Advanced NLP Technologies for Healthcare, Finance, and Legal Will Shape Their Industries and Deliver Better, Faster Results.
According to data collected by Forbes, over half (53.3%, to be precise) of data scientists and engineers plan to deploy Large Language Model (LLM) applications into production within the next 12 months or “as soon as possible.” The data also indicates that 8.3% of data science teams have already deployed LLM applications that are currently in use at their own or client companies.
The market for LLMs and generative AI is expected to reach $11.3 billion by the end of the year and is projected to grow to an estimated $76.8 billion by the end of 2030.
These statistics reflect the inherent versatility and adaptability of large language models. While this technology can address many business challenges today, the real task lies in identifying these challenges and determining the most effective applications and strategies.
Adapting to Domain-Specific Needs
Experts who understand how large language models work acknowledge their enhanced capabilities in daily operations. Supriya Raman, VP of Data Science at JPMorgan, comments, “Using LLMs on domain-specific knowledge bases ensures that we can fine-tune them on data specific to our organization or domain, improving search accuracy, automating tagging, and generating new content. This builds repositories with synthetic data for foundational case studies.”
“The most pivotal applications I see are in healthcare, particularly diagnostic assistance and patient engagement. NLP algorithms can sift through vast medical literature to aid diagnosis, while LLMs facilitate smoother patient-doctor interactions. The Amazon API’s ability to convert voice to text during medical interviews will be a game-changer in healthcare. Imagine a world without the need for a keyboard!” said Harvey Castro, MD, author of ChatGPT Healthcare and an upcoming speaker at the NLP Summit.
The adaptability and applicability of Large Language Models are spurring their adoption. “On a case-by-case basis, depending on the project, we might find that simple rule-based systems or static word embeddings yield better results than large language models. I’d view models fine-tuned on BioBERT, trained on biomedical articles, as an ideal starting point for projects concerning clinical data,” shares Pavithra Rajendran, PhD, NLP Technical Lead, Senior Data Scientist, and an upcoming speaker at the NLP Summit.
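Rajendran’s point that simple rule-based systems can sometimes beat large language models is easy to illustrate. The sketch below is a hypothetical, minimal rule-based extractor for symptom mentions in clinical notes; the term lists and function names are illustrative assumptions, not part of any quoted project, and a real system would need far richer lexicons and negation handling.

```python
import re

# Illustrative vocabularies (assumptions, not a clinical standard).
SYMPTOM_TERMS = {"fever", "cough", "fatigue", "nausea", "headache"}
NEGATION_CUES = {"denies", "without", "negative for", "no "}

def extract_symptoms(note: str) -> dict:
    """Return symptoms found in a note, split into present vs. negated.

    Crude by design: substring matching per sentence, so it shows why such
    baselines are cheap and transparent, and also where they break down.
    """
    present, negated = set(), set()
    for sentence in re.split(r"[.;]\s*", note.lower()):
        is_negated = any(cue in sentence for cue in NEGATION_CUES)
        for term in SYMPTOM_TERMS:
            if term in sentence:
                (negated if is_negated else present).add(term)
    return {"present": sorted(present), "negated": sorted(negated)}

result = extract_symptoms("Patient reports fever and cough. Denies nausea.")
# result -> {"present": ["cough", "fever"], "negated": ["nausea"]}
```

On a narrow, well-specified task like this, the rule-based baseline is auditable and runs anywhere, which is exactly the case-by-case trade-off the quote describes.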
Despite their versatility, the role of human oversight remains pivotal.
“I have used LLMs to help me quickly learn or remember coding syntax and use unfamiliar libraries. This has worked great. The moment the logic gets a little complicated, the LLMs’ code misses the mark. It takes longer to understand how and why it is failing and to try to fix the issue than it might to start from the beginning to come up with an algorithm myself,” says Sonali Tamhankar, Data Scientist in the Advanced Analytics department at Seattle Cancer Care Alliance and a speaker at the upcoming NLP Summit. “Occasionally, it might perform spectacularly well, but even for my own use, I have learned that the LLMs make ‘adroit assistants’, and I’ll be disappointed if I expect them to be omniscient oracles,” she adds.
Learning from Private Data
LLMs are renowned for their capability to process intricate queries and harness the vast information acquired during their training phase.
LLM-based tools are recognized for their ability to access databases and knowledge bases. Zain Hassan, Senior ML Developer Advocate at Weaviate, asserts, “The most significant current application of LLMs lies in chatbots that leverage external knowledge bases. By performing Retrieval Augmented Generation (RAG), they help generate appropriate responses. This capacity to retrieve from a knowledge base and consult source material before forming a response allows us to harness the LLMs as practical reasoning engines. It also enables LLMs to cite sources and answer questions beyond their training data’s scope.”
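The Retrieval Augmented Generation loop Hassan describes can be sketched in a few lines. Real deployments use a vector database such as Weaviate with learned embeddings; in this hypothetical sketch, retrieval is naive keyword overlap so the flow (retrieve, then ground the prompt in the sources) stays visible. All function names and the toy knowledge base are assumptions for illustration.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the best few."""
    q_words = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, sources: list[str]) -> str:
    """Ground the LLM: answer only from retrieved sources, with citations."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (f"Answer using only the sources below and cite them by number.\n"
            f"{numbered}\n\nQuestion: {query}")

kb = ["Metformin is a first-line treatment for type 2 diabetes.",
      "Aspirin is commonly used for pain relief.",
      "Insulin therapy is used when oral agents fail in type 2 diabetes."]
sources = retrieve("What treats type 2 diabetes?", kb)
prompt = build_prompt("What treats type 2 diabetes?", sources)
# `prompt` would then be sent to the LLM, which can cite [1] and [2].
```

Because the model sees only the retrieved excerpts, it can cite its sources and answer questions outside its training data, which is the capability the quote highlights.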
“I work in healthcare, so I am extremely conservative about these issues. I am starting out with projects like building ‘adroit assistants’ that help our regulatory teams identify relevant excerpts from documents that are in the public domain, thus bypassing privacy and security concerns almost entirely. For the next use cases, I am working closely with our cloud infrastructure and information security teams to ensure that the protected and proprietary information is treated appropriately,” comments Sonali Tamhankar.
Swagata Ashwani, Data Science Lead at Boomi, adds, “Domain-specific knowledge is pivotal. It enables customization of LLMs to cater to particular industry needs, magnifying their utility in specialized realms.”
“In my experience, models like GPT-3 and BERT have consistently excelled in diverse areas, from generating text to evaluating sentiments,” adds Dr. Fatema Nafa, Assistant Professor, Computer Science Department at Salem State University. “The versatility of NLP and LLMs has led to their use in analyzing sentiments in customer feedback, powering chatbots and virtual support assistants, offering translation and transcription solutions, and creating concise summaries, among others,” she adds.
Zain Hassan, Senior Developer Advocate, agrees: “I predominantly use Weaviate, an open-source vector database, alongside LLMs. I believe augmenting LLMs with external knowledge bases for RAG should be the standard approach to amplifying the power of LLMs.”
Extensive Testing & Monitoring
Given the significant benefits and the inevitable reliance on the system, companies must vigilantly manage the deployment process.
Nafiseh Mollaei, a Postdoc at Loyola University Chicago and a speaker at the upcoming NLP Summit, lists, “The development, testing, and deployment tools I value most include Hugging Face Transformers, TensorFlow, and PyTorch.”
“With the rise in popularity of large language models, testing becomes an important aspect when using these models in a production environment, and I’m very much interested in exploring open-source libraries for this purpose,” comments Pavithra Rajendran. “A very good example is the LangTest library released by John Snow Labs, which supports out-of-the-box testing for different NLP tasks,” she adds.
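The core idea behind robustness-testing libraries such as LangTest can be shown without any particular framework: perturb each input and flag cases where the model’s prediction changes. The sketch below is an illustrative assumption, not LangTest’s actual API; `toy_model` stands in for any text-in, label-out NLP pipeline.

```python
def uppercase_perturbation(text: str) -> str:
    """One simple perturbation; real test suites apply many (typos, names, etc.)."""
    return text.upper()

def robustness_report(model, samples: list[str]) -> dict:
    """Flag samples whose prediction changes under the perturbation."""
    failures = [s for s in samples
                if model(s) != model(uppercase_perturbation(s))]
    total = len(samples)
    return {"pass_rate": (total - len(failures)) / total, "failures": failures}

# Toy model: a case-sensitive keyword rule, deliberately fragile.
toy_model = lambda text: "positive" if "good" in text else "negative"

report = robustness_report(toy_model, ["good result", "bad result"])
# "good result" flips under uppercasing, so pass_rate is 0.5.
```

Running such checks before deployment surfaces exactly the brittle behaviors, casing sensitivity here, that production monitoring would otherwise discover the hard way.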
Shir Chorev, CTO and Co-Founder at Deepchecks, adds, “During the production phase, we put measures in place for both the inputs and outputs. Before sending data to the LLM application, we cleanse and structure it. Before returning data to the user, we filter and modify it as needed. This vital component is termed an ‘LLM Gateway’. In the pre-production phase, we establish continuous validation and testing processes. This encompasses evaluating application behavior in outlier scenarios and conducting stress tests.”
Itay Zitva, Software Engineering Team Lead (NLP) at Hyro and a speaker at the upcoming NLP Summit, remarks, “In the healthcare sector, I dedicate a significant amount of time to safeguarding our clients’ data. This involves using models that don’t retain the data sent to them, whether due to their proprietary nature or due to binding agreements, and rigorously monitoring LLM outputs.”
Learning More
LLMs are rapidly becoming essential drivers for contemporary businesses, offering transformative advantages through the creation of language assistants. The landscape is continuously evolving, introducing innovative features almost every day.
Acquiring up-to-date knowledge is crucial for businesses aiming to remain at the forefront of technological progress. Participating in events focused on lessons learned from applying these technologies in practice offers an efficient avenue for such enrichment. Don’t miss out on joining the upcoming NLP Summit!