Do reasoning models really “think” or not? Apple research sparks lively debate, response

Apple’s machine-learning group set off a rhetorical firestorm earlier this month with its release of “The Illusion of Thinking,” a 53-page research paper arguing that so-called large reasoning models (LRMs) or reasoning…

Read More
ether0: A 24B LLM Trained with Reinforcement Learning (RL) for Advanced Chemical Reasoning Tasks

LLMs primarily enhance accuracy by scaling pre-training data and compute. Because pre-training data is finite, however, attention has shifted toward alternative scaling strategies such as test-time training and inference-time compute scaling. Reasoning models improve performance by emitting a thought process before answering, initially through chain-of-thought (CoT) prompting; more recently, reinforcement learning (RL) post-training has been used. Scientific…
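
The chain-of-thought prompting mentioned in this excerpt can be illustrated with a minimal sketch. The model name, system prompt, and chemistry question below are illustrative assumptions, not details from the ether0 work.

```python
# Minimal chain-of-thought (CoT) prompting sketch. The model name, system
# prompt, and question are illustrative placeholders, not taken from ether0.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "A reaction produces 2.5 g of product. The theoretical yield is 3.2 g. "
    "What is the percent yield?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Asking for step-by-step reasoning before the final answer is the
        # essence of CoT prompting.
        {"role": "system", "content": "Reason step by step, then give the final answer on its own line."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```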

Read More
DeepSeek-Prover-V2: Bridging the Gap Between Informal and Formal Mathematical Reasoning

While DeepSeek-R1 has significantly advanced AI’s capabilities in informal reasoning, formal mathematical reasoning has remained a challenging task for AI. This is primarily because producing verifiable mathematical proofs requires both deep conceptual understanding and the ability to construct precise, step-by-step logical arguments. Recently, however, significant advances have been made in this direction as researchers at DeepSeek-AI…
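
As a concrete illustration of what “formal” means here, a machine-checkable proof looks like the tiny Lean example below. This is a generic illustration, not output from DeepSeek-Prover-V2.

```lean
-- A minimal machine-checkable proof: commutativity of natural-number addition.
-- Provers like DeepSeek-Prover-V2 target statements in this style, where a
-- proof assistant verifies every step. (Illustrative only, not from the paper.)
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```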

Read More
LLMs Can Now Reason in Parallel: UC Berkeley and UCSF Researchers Introduce Adaptive Parallel Reasoning to Scale Inference Efficiently Without Exceeding Context Windows

Large language models (LLMs) have made significant strides in reasoning capabilities, exemplified by breakthrough systems like OpenAI o1 and DeepSeek-R1, which utilize test-time compute for search and reinforcement learning to optimize performance. Despite this progress, current methodologies face critical challenges that impede their effectiveness. Serialized chain-of-thought approaches generate excessively long output sequences, increasing latency and…
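
The general idea of parallelizing reasoning, as opposed to one long serialized chain, can be sketched as follows. This is a generic parallel-sampling illustration, not the Adaptive Parallel Reasoning method itself; `ask_model` is a hypothetical stand-in for an LLM call.

```python
# Rough sketch: sample several short reasoning traces concurrently and
# majority-vote their answers, instead of generating one very long
# serialized chain. Generic illustration only, not the APR algorithm.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_model(prompt: str, max_tokens: int) -> str:
    """Hypothetical LLM call returning a short reasoning trace that ends with an answer line."""
    raise NotImplementedError

def solve_parallel(question: str, n_threads: int = 4, budget_per_thread: int = 512) -> str:
    prompt = f"{question}\nThink briefly, then give the answer on the last line."
    # Each thread gets a small token budget, keeping individual traces short.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        traces = list(pool.map(lambda _: ask_model(prompt, budget_per_thread), range(n_threads)))
    # Take the final line of each trace as its answer and pick the most common one.
    answers = [trace.strip().splitlines()[-1] for trace in traces]
    return Counter(answers).most_common(1)[0][0]
```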

Read More
TxAgent: An AI Agent that Delivers Evidence-Grounded Treatment Recommendations by Combining Multi-Step Reasoning with Real-Time Biomedical Tool Integration

Precision therapy has emerged as a critical approach in healthcare, tailoring treatments to individual patient profiles to optimise outcomes while reducing risks. However, determining the appropriate medication involves a complex analysis of numerous factors: patient characteristics, comorbidities, potential drug interactions, contraindications, current clinical guidelines, drug mechanisms, and disease biology. While Large Language Models (LLMs) have…

Read More
This AI Paper Introduces R1-Onevision: A Cross-Modal Formalization Model for Advancing Multimodal Reasoning and Structured Visual Interpretation

Multimodal reasoning is an evolving field that integrates visual and textual data to enhance machine intelligence. Traditional artificial intelligence models excel at processing either text or images but often struggle when required to reason across both formats. Analyzing charts, graphs, mathematical symbols, and complex visual patterns alongside textual descriptions is crucial for applications in education,…

Read More
This AI Paper from Menlo Research Introduces AlphaMaze: A Two-Stage Training Framework for Enhancing Spatial Reasoning in Large Language Models

Artificial intelligence continues to advance in natural language processing but still faces challenges in spatial reasoning tasks. Visual-spatial reasoning is fundamental for robotics, autonomous navigation, and interactive problem-solving applications. AI systems must effectively interpret structured environments and execute sequential decisions to function in these domains. While traditional maze-solving algorithms, such as depth-first search and A*,…
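
For context on the classical baselines named in this excerpt, a minimal depth-first-search maze solver might look like the sketch below; the grid encoding (0 = open, 1 = wall) and function name are assumptions for illustration.

```python
# Minimal depth-first-search maze solver, illustrating the kind of classical
# baseline (DFS, A*) the excerpt contrasts with LLM-based spatial reasoning.
def solve_maze_dfs(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    stack, parent = [start], {start: None}
    while stack:
        cell = stack.pop()
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                stack.append(nxt)
    return None  # no path found

# Example usage on a tiny 3x3 maze.
maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze_dfs(maze, (0, 0), (0, 2)))
```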

Read More
Reinforcement Learning Meets Chain-of-Thought: Transforming LLMs into Autonomous Reasoning Agents

Large Language Models (LLMs) have significantly advanced natural language processing (NLP), excelling at text generation, translation, and summarization tasks. However, their ability to engage in logical reasoning remains a challenge. Traditional LLMs, designed to predict the next word, rely on statistical pattern recognition rather than structured reasoning. This limits their ability to solve complex problems…

Read More
Together AI’s $305M bet: Reasoning models like DeepSeek-R1 are increasing, not decreasing, GPU demand

When DeepSeek-R1 first emerged, the prevailing fear that shook the industry was that advanced reasoning could be achieved with less infrastructure. As it turns out, that’s not necessarily the case. At least, according to Together AI,…

Read More