Researchers from FutureHouse and ScienceMachine Introduce BixBench: A Benchmark Designed to Evaluate AI Agents on Real-World Bioinformatics Tasks

Modern bioinformatics research is characterized by the constant emergence of complex data sources and analytical challenges. Researchers routinely confront tasks that require the synthesis of diverse datasets, the execution of iterative analyses, and the interpretation of subtle biological signals. High-throughput sequencing, multi-dimensional imaging, and other advanced data collection techniques contribute to an environment where traditional,…

Read More
Google DeepMind researchers introduce new benchmark to improve LLM factuality, reduce hallucinations

Hallucinations, or factually inaccurate responses, continue to plague large language models (LLMs). Models falter particularly when they are given more complex tasks and when users are looking for specific and highly detailed responses. It’s a challenge…

Read More
Are We Ready for Multi-Image Reasoning? Launching VHs: The Visual Haystacks Benchmark!

Humans excel at processing vast arrays of visual information, a skill that is crucial for achieving artificial general intelligence (AGI). Over the decades, AI researchers have developed Visual Question Answering (VQA) systems to interpret scenes within single images and answer related questions. While recent advancements in foundation models have significantly closed the gap between human…

Read More
FACTS Grounding: A new benchmark for evaluating the factuality of large language models

Responsibility & Safety | Published 17 December 2024 | Authors: FACTS team. Our comprehensive benchmark and online leaderboard offer a much-needed measure of how accurately LLMs ground their responses in provided source material and avoid hallucinations. Large language models (LLMs) are transforming how we access information, yet their grip on factual accuracy remains imperfect. They can “hallucinate”…

Read More