AI maps how a new antibiotic targets gut bacteria

For patients with inflammatory bowel disease, antibiotics can be a double-edged sword. The broad-spectrum drugs often prescribed for gut flare-ups can kill helpful microbes alongside harmful ones, sometimes worsening symptoms over time. When fighting gut inflammation, you don’t want to bring a sledgehammer when a scalpel will do. Researchers at MIT’s Computer Science and Artificial…

Read More
Can a Small Language Model Predict Kernel Latency, Memory, and Model Accuracy from Code? A New Regression Language Model (RLM) Says Yes

Researchers from Cornell and Google introduce a unified Regression Language Model (RLM) that predicts numeric outcomes directly from code strings—covering GPU kernel latency, program memory usage, and even neural network accuracy and latency—without hand-engineered features. A 300M-parameter encoder–decoder initialized from T5-Gemma achieves strong rank correlations across heterogeneous tasks and languages,…
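The headline metric here is rank correlation: an RLM is judged less on absolute error than on whether it orders programs correctly by latency or memory. A minimal sketch of that evaluation, with invented latency numbers standing in for real measurements:

```python
# Hedged sketch: the RLM setting reduced to its evaluation metric. A
# regression language model reads a code string and emits a number;
# quality is judged by how well predictions *rank* programs.

def rank(values):
    """Return the 0-based rank of each value within the list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(pred, true):
    """Spearman rank correlation (no tie correction, for illustration)."""
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(pred), rank(true)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical: measured GPU kernel latencies vs. an RLM's predictions.
measured = [1.2, 3.4, 0.9, 2.1, 5.6]
predicted = [1.5, 3.0, 1.0, 2.5, 6.1]  # same ordering as measured
print(spearman(predicted, measured))   # → 1.0
```

A perfect ranking scores 1.0 even though no individual prediction is exact, which is why rank correlation suits downstream uses like picking the fastest kernel from a candidate set.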

Read More
Martin Trust Center for MIT Entrepreneurship welcomes Ana Bakshi as new executive director

The Martin Trust Center for MIT Entrepreneurship announced that Ana Bakshi has been named its new executive director. Bakshi stepped into the role at the start of the fall semester and will collaborate closely with the managing director, Ethernet Inventors Professor of the Practice Bill Aulet, to take the center to new heights. “Ana is uniquely…

Read More
The Role of Model Context Protocol (MCP) in Generative AI Security and Red Teaming

Overview Model Context Protocol (MCP) is an open, JSON-RPC–based standard that formalizes how AI clients (assistants, IDEs, web apps) connect to servers exposing three primitives—tools, resources, and prompts—over defined transports (primarily stdio for local and Streamable HTTP for remote). MCP’s value for security work is that it renders agent/tool interactions explicit and auditable, with normative…
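Because MCP messages are plain JSON-RPC 2.0, every agent/tool interaction is a concrete artifact a red team can log, replay, or fuzz. A minimal sketch of that framing follows; the method names `tools/list` and `tools/call` come from the MCP specification, while the tool name and its arguments are hypothetical:

```python
import json

# Minimal sketch of the JSON-RPC 2.0 framing MCP uses over its
# transports (stdio or Streamable HTTP).

def jsonrpc_request(req_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask the server which tools it exposes...
list_req = jsonrpc_request(1, "tools/list")

# ...then invoke one. For security review, this explicit message is
# exactly what gets audited: which tool, with which arguments.
call_req = jsonrpc_request(2, "tools/call", {
    "name": "read_file",                     # hypothetical tool
    "arguments": {"path": "/etc/hostname"},  # hypothetical argument
})
print(call_req)
```

The value for red teaming is that a tool invocation is never hidden inside model weights: it is a discrete, inspectable request that policy checks can sit in front of.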

Read More
MLPerf Inference v5.1 (2025): Results Explained for GPUs, CPUs, and AI Accelerators

What does MLPerf Inference actually measure? MLPerf Inference quantifies how fast a complete system (hardware + runtime + serving stack) executes fixed, pre-trained models under strict latency and accuracy constraints. Results are reported for the Datacenter and Edge suites with standardized request patterns (“scenarios”) generated by LoadGen, ensuring architectural neutrality and reproducibility. The Closed division fixes…
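The "strict latency constraints" are what separate a valid result from an invalid one: a submission's throughput only counts if tail latency stays under the benchmark's bound. A hedged sketch of that check, with invented latency samples and bound:

```python
# Hedged sketch of the MLPerf-style validity check: throughput is only
# reportable if a tail-latency percentile stays under the benchmark's
# bound. All numbers below are invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latencies (ms)."""
    s = sorted(samples)
    k = max(0, int(round(p / 100 * len(s))) - 1)
    return s[k]

def valid_run(latencies_ms, p99_bound_ms):
    """A run is valid only if p99 latency meets the constraint."""
    return percentile(latencies_ms, 99) <= p99_bound_ms

latencies = [8, 9, 10, 11, 12, 9, 10, 50]  # one slow outlier
print(valid_run(latencies, p99_bound_ms=15))  # → False
```

This is why two systems with identical average latency can report very different MLPerf results: one slow outlier in the tail can invalidate an otherwise fast run.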

Read More
Why Your Next MVP Should Be an AI Agent—Not a Full Stack App

Table of Contents
1. Introduction: The Costly Illusion of Building Big
2. Why Traditional MVP Thinking Breaks in Enterprise Contexts
3. Hitting the Enterprise Wall: When Full-Stack MVPs Fail
4. The Rise of AI Agents: Enterprise-Grade MVPs for Real Impact
5. Full-Stack MVP vs AI Agent MVP — Enterprise Edition
6. Core Benefits of AI Agent MVPs for Enterprises: Speed, Cost Efficiency, Scalability, Governance
7. When AI Agents Outperform Full Apps: Enterprise…

Read More
Responding to the climate impact of generative AI

In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint. The energy demands of generative AI are expected to continue increasing dramatically over the next decade. For instance, an April 2025 report from the International Energy Agency predicts…

Read More
Ensuring AI Safety in Production: A Developer’s Guide to OpenAI’s Moderation and Safety Checks

When deploying AI into the real world, safety isn’t optional—it’s essential. OpenAI places strong emphasis on ensuring that applications built on its models are secure, responsible, and aligned with policy. This article explains how OpenAI evaluates safety and what you can do to meet those standards. Beyond technical performance, responsible AI deployment requires anticipating potential…
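The core pattern such a guide describes is gate-before-generate: screen input with a moderation classifier before it ever reaches the model. A minimal sketch of that control flow, where `classify` is a toy stand-in for a call to a real moderation endpoint:

```python
# Hedged sketch of the gate-before-generate pattern: run a moderation
# check on user input and short-circuit flagged requests before any
# model call. `classify` is a stand-in for a real moderation API.

def classify(text):
    """Toy stand-in: flag text containing blocklisted terms."""
    blocklist = {"attack payload", "exploit"}
    flagged = any(term in text.lower() for term in blocklist)
    return {"flagged": flagged}

def safe_handle(user_input, respond):
    """Only call the (expensive, risky) model on inputs that pass."""
    if classify(user_input)["flagged"]:
        return "Request declined by safety policy."
    return respond(user_input)

print(safe_handle("How do I exploit this service?", lambda t: "..."))
# → Request declined by safety policy.
```

In production the same gate is typically applied twice, once to the user's input and once to the model's output, so that neither direction bypasses policy.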

Read More
Improving the workplace of the future

Whitney Zhang ’21 believes in the importance of valuing workers regardless of where they fit into an organizational chart. Zhang is a PhD student in MIT’s Department of Economics studying labor economics. She explores how the technological and managerial decisions companies make affect workers across the pay spectrum.  “I’ve been interested in economics, economic impacts, and…

Read More