Unpacking the NLP Summit: The Promise and Challenges of Large Language Models

Marketing Communications Lead at John Snow Labs

The recent NLP Summit, presented by John Snow Labs, an award-winning AI company that provides medical NLP, legal NLP, finance NLP solutions, and more, served as a vibrant platform for experts to delve into the many opportunities and challenges presented by large language models (LLMs). With the market for generative AI solutions poised to hit $51.8 billion by 2028, LLMs play a pivotal role in this growth trajectory.

McKinsey & Company’s findings underscore 2023 as a landmark year for generative AI, hinting at the transformative wave ahead. However, alongside the promise, various challenges emerge:

  • Budget Allocation: Top performers allocate a hefty 20% of their digital budget to AI solutions.
  • Implementation Hurdles: For these top performers, 24% see the models and tools as their primary challenge, followed by talent acquisition (20%) and scaling (19%).
  • Strategy and Data: Non-top-performers highlight strategizing (24%), talent availability (21%), and data scarcity (18%) as their leading challenges.

Large language models (LLMs) are a powerful new technology with the potential to revolutionize many industries. However, there are also a number of challenges that need to be addressed before LLMs can be widely adopted. At the recent NLP Summit, experts from academia and industry shared their insights. Here is a summary of some of the key takeaways:

Data and predictability:

“Despite the potential of LLMs, there are some challenges when working with them, including data availability issues, resource constraints for computation, ethical and bias concerns, and regulatory and compliance complexity.” – Carlos Rodriguez Abellan, Lead NLP Engineer at Fujitsu

“The main obstacles to applying LLMs in my current projects include the cost of training and deploying LLM models, lack of data for some tasks, and the difficulty of interpreting and explaining the results of LLM models.” – Nafiseh Mollaei, Postdoc at Loyola University Chicago

Bias and fairness:

“We need to differentiate LLM-generated content from human-generated content through technologies such as watermarking. I believe the potential to accelerate the spread of misinformation with LLMs is very real, and we need a way to clearly identify the source of content.” – Zain Hasan, Senior ML Developer Advocate at Weaviate

“An under-discussed yet crucial question is how to ensure that LLMs can be trained in a way that respects user privacy and does not rely on exploiting vast amounts of personal data.” – Swagata Ashwani, Data Science Lead at Boomi

Transparency and interpretability:

“I think currently the technique that works best in practice for this is to use LLMs along with vector databases, to perform RAG. Anytime you get the LLM to generate content, you also print and cite the source material that was provided to the LLM before it generated the response. This makes it so that we can compare and contrast the generated response to factual information. This factual information can be used to explain the generation and also verify the veracity of the response.” – Zain Hasan
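The retrieve-then-cite pattern Hasan describes can be sketched in a few lines. The sketch below is purely illustrative: it stands in a toy bag-of-words retriever for a real vector database with learned embeddings, and a stub function for the LLM call. All names (`retrieve`, `answer_with_citation`, the document fields) are hypothetical, not any particular library's API.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Rank source documents by similarity to the query; a vector
    # database performs this step at scale.
    q = embed(query)
    scored = sorted(documents, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return scored[:k]

def answer_with_citation(query, documents, generate):
    # Retrieve sources, pass them to the model as context, and return
    # the response together with citations to the source material.
    sources = retrieve(query, documents)
    context = "\n".join(d["text"] for d in sources)
    return {"response": generate(query, context),
            "sources": [d["id"] for d in sources]}

docs = [
    {"id": "doc-1", "text": "LLMs can hallucinate facts without grounding."},
    {"id": "doc-2", "text": "Vector databases store embeddings for retrieval."},
]
# Stand-in for an actual LLM call.
fake_llm = lambda q, ctx: f"Based on the sources: {ctx}"
result = answer_with_citation("What do vector databases store?", docs, fake_llm)
```

Because the cited sources travel with the response, a reader can check the generated text against the retrieved passages, which is exactly the verification step the quote emphasizes.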

“Before enterprises adopt LLMs for critical, real-world tasks, they need to trust that LLM responses are timely and accurate. Unstructured.IO solves this problem by extracting metadata during the data preparation process. File, page, and section metadata enable users to determine the source of a response, while document publication and modification dates allow retrieval systems to bias toward more recent data. Using this metadata builds confidence in LLM systems and spurs faster adoption.” – Matt Robinson, Head of Product at Unstructured.IO
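Robinson's point about recency metadata can be illustrated with a small ranking sketch. This is not Unstructured.IO's actual pipeline or API; the function names, chunk fields, and half-life weighting scheme are all assumptions chosen to show how modification-date metadata can bias retrieval toward newer documents while file and page metadata preserve source attribution.

```python
from datetime import date

def recency_weighted(doc, today, half_life_days=365):
    # Halve a match's weight for every `half_life_days` of age, using
    # the modification-date metadata attached during data preparation.
    age = (today - doc["modified"]).days
    return doc["relevance"] * 0.5 ** (age / half_life_days)

def rank(candidates, today, half_life_days=365):
    # Order retrieved chunks so that, at similar relevance, newer
    # documents come first; file/page metadata lets the final answer
    # cite exactly where each passage came from.
    return sorted(
        candidates,
        key=lambda d: recency_weighted(d, today, half_life_days),
        reverse=True,
    )

chunks = [
    {"file": "policy_v1.pdf", "page": 3, "relevance": 0.90,
     "modified": date(2021, 1, 10)},
    {"file": "policy_v2.pdf", "page": 1, "relevance": 0.85,
     "modified": date(2023, 6, 1)},
]
ordered = rank(chunks, today=date(2023, 9, 1))
```

Here the slightly less relevant but much fresher `policy_v2.pdf` chunk outranks the stale one, which is the "bias toward more recent data" behavior the quote describes.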

“In critical applications, subject matter experts may conduct a thorough review and validation of content generated by LLMs. This ensures that the content aligns with domain-specific knowledge and maintains accuracy,” adds Dishant Banga, Sr. Analyst at Bridgetree.

Cost and scalability:

“I think using LLMs as agents is overhyped – they have a lot of potential, but LLM technology and popular large language models seem to have some more fundamental problems that need to be solved prior to agents working well. Currently, getting LLMs to function as agents requires a lot of messy prompt engineering, and the technology seems quite finicky and unready for use in production.” – Zain Hasan

“In the usage of LLM/AI, I feel the most overhyped aspect is the usage itself. We need not use LLM just because everyone else is using it. We should leverage AI as a tool to enhance your product’s value proposition and meet the needs of your target audience. We don’t need to treat AI as a hammer and everything as a nail. Before integrating AI into my product, identifying specific areas or tasks where AI can add value, I should consider the pain points my users face and how AI can address those challenges effectively. Taking a problem-solving approach ensures that AI is applied in a purposeful and meaningful way.” – Supriya Raman, VP Data Science at JPMorgan

Despite the challenges, the experts at the NLP Summit were very optimistic about the future of LLMs. They believe that LLMs have the potential to revolutionize many industries; in healthcare, for example, LLMs can streamline many providers' processes. The experts are committed to developing new technologies and techniques to address the challenges and make LLMs more accessible and reliable.
