LLMs

This “smart coach” helps LLMs switch between text and code
Large language models (LLMs) excel at using textual reasoning to understand the context of a document and provide a logical answer about its contents. But these same LLMs often struggle to correctly answer even the simplest math problems. Textual reasoning is usually a less-than-ideal way to deliberate over computational or algorithmic tasks. While some LLMs…
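The teaser doesn't spell out the mechanism, but the general pattern of switching between textual reasoning and code execution can be sketched in a few lines of Python. Everything below is hypothetical: the `looks_computational` heuristic and the `llm` callable are illustrative stand-ins, not the paper's actual system.

```python
# Hypothetical sketch of text-vs-code switching; not the paper's actual method.
import re
import subprocess
import sys
from typing import Callable

def looks_computational(query: str) -> bool:
    """Crude illustrative heuristic: treat queries containing arithmetic as computational."""
    return bool(re.search(r"\d+\s*[-+*/]\s*\d+", query))

def answer(query: str, llm: Callable[[str], str]) -> str:
    """Route a query either to plain textual reasoning or to generated code."""
    if looks_computational(query):
        # Ask the model for a small program instead of a textual answer,
        # then execute it and return its stdout as the final answer.
        code = llm(f"Write a Python program that prints the answer to: {query}")
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True, timeout=10)
        return result.stdout.strip()
    return llm(query)  # non-computational queries stay in text
```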

Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems
A new study by researchers at Google DeepMind and University College London reveals how large language models (LLMs) form, maintain and lose confidence in their answers. The findings show…

Sakana AI’s TreeQuest: Deploy multi-model teams that outperform individual LLMs by 30%
Japanese AI lab Sakana AI has introduced a new technique that allows multiple large language models (LLMs) to cooperate on a single task, effectively creating a “dream team” of…

LLMs factor in unrelated information when recommending medical treatments
A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers. They found that making stylistic or grammatical changes to messages increases…
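The excerpt describes the perturbations only in prose; a hypothetical harness like the one below (not the MIT study's code) shows the kind of nonclinical edits involved, here an adjacent-character typo plus extra whitespace.

```python
# Hypothetical perturbation harness in the spirit of the study; not the MIT code.
import random

def perturb(msg: str, seed: int = 0) -> str:
    """Inject a simple adjacent-character typo and double the spaces between words."""
    rng = random.Random(seed)
    chars = list(msg)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap two adjacent characters
    return "  ".join("".join(chars).split(" "))      # widen the whitespace

original = "Patient reports chest pain after exertion."
print(perturb(original))
# A robustness check would feed both versions to the model and diff the recommendations.
```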

From LLMs to hallucinations, here’s a simple guide to common AI terms | TechCrunch
Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to…

LLMs Can Now Reason in Parallel: UC Berkeley and UCSF Researchers Introduce Adaptive Parallel Reasoning to Scale Inference Efficiently Without Exceeding Context Windows
Large language models (LLMs) have made significant strides in reasoning capabilities, exemplified by breakthrough systems like OpenAI o1 and DeepSeek-R1, which utilize test-time compute for search and reinforcement learning to optimize performance. Despite this progress, current methodologies face critical challenges that impede their effectiveness. Serialized chain-of-thought approaches generate excessively long output sequences, increasing latency and…

Swapping LLMs isn’t plug-and-play: Inside the hidden cost of model migration
Swapping large language models (LLMs) is supposed to be easy, isn’t it? After all, if they all speak “natural language,” switching from GPT-4o to Claude or Gemini should be as simple as changing an API key…
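As a rough illustration of why the swap isn't just a key change: even the request shape differs between providers. The sketch below uses the OpenAI and Anthropic Python SDKs as commonly documented (exact signatures may vary by version); notably, Anthropic takes the system prompt as a top-level field and requires `max_tokens`, while OpenAI folds the system prompt into the message list.

```python
# Rough illustration: the same request, shaped for two providers.
from openai import OpenAI
import anthropic

SYSTEM = "You are a terse assistant."
USER = "Summarize attention in one sentence."

# OpenAI: system prompt is just another message; max_tokens is optional.
openai_client = OpenAI()
oa = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": SYSTEM},
              {"role": "user", "content": USER}],
)
print(oa.choices[0].message.content)

# Anthropic: system prompt is a top-level field; max_tokens is required.
anthropic_client = anthropic.Anthropic()
an = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    system=SYSTEM,
    max_tokens=256,
    messages=[{"role": "user", "content": USER}],
)
print(an.content[0].text)
```

Prompt phrasing, tool-calling formats, and sampling defaults diverge in similar ways, which is the hidden migration cost the article points to.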

A Coding Implementation on Introduction to Weight Quantization: Key Aspect in Enhancing Efficiency in Deep Learning and LLMs
In today’s deep learning landscape, optimizing models for deployment in resource-constrained environments is more important than ever. Weight quantization addresses this need by reducing the precision of model parameters, typically from 32-bit floating point values to lower bit-width representations, thus yielding smaller models that can run faster on hardware with limited resources. This tutorial introduces…
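The tutorial itself isn't reproduced here, but the core idea it describes (mapping float32 weights to int8 with a scale factor) can be sketched in a few lines of NumPy. This is a generic symmetric quantizer, not necessarily the tutorial's exact code.

```python
# Minimal symmetric int8 weight quantization sketch (generic, not the tutorial's code).
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())  # small when weights are well scaled
```

The int8 tensor takes a quarter of the float32 storage, at the cost of the small reconstruction error printed above.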

Bigger isn’t always better: Examining the business case for multi-million token LLMs
The race to expand large language models (LLMs) beyond the million-token threshold has ignited a fierce debate in the AI community. Models like MiniMax-Text-01 boast 4-million-token capacity, and Gemini 1.5 Pro can process up to 2…

Could LLMs help design our next medicines and materials?
The process of discovering molecules that have the properties needed to create new medicines and materials is cumbersome and expensive, consuming vast computational resources and months of human labor to narrow down the enormous space of potential candidates. Large language models (LLMs) like ChatGPT could streamline this process, but enabling an LLM to understand and…