
Combining technology, education, and human connection to improve online learning
MIT Morningside Academy for Design (MAD) Fellow Caitlin Morris is an architect, artist, researcher, and educator who has studied psychology and used online learning tools to teach herself coding and other skills. She’s a soft-spoken observer, with a keen interest in how people use space and respond to their environments. Combining her observational skills with active community engagement,…

Phonely’s new AI agents hit 99% accuracy—and customers can’t tell they’re not human
A three-way partnership between AI phone support company Phonely, inference optimization platform Maitai, and chip maker Groq has achieved a breakthrough that addresses one of conversational artificial intelligence’s most persistent problems: the awkward delays that immediately…

AI learns how vision and sound are connected, without human intervention
Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear. A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be…

With AI, researchers predict the location of virtually any protein within a human cell
A protein located in the wrong part of a cell can contribute to several diseases, such as Alzheimer’s, cystic fibrosis, and cancer. But there are about 70,000 different proteins and protein variants in a single human cell, and since scientists can typically only test for a handful in one experiment, it is extremely costly and…

The great cognitive migration: How AI is reshaping human purpose, work and meaning
Humans have always migrated to survive. When glaciers advanced, when rivers dried up, when cities fell, people moved. Their journeys were often painful, but necessary, whether across deserts, mountains or oceans. Today, we are entering a…

How can we build human values into AI?
Responsibility & Safety. Published 24 April 2023. Authors: Iason Gabriel and Kevin McKee. Drawing from philosophy to identify fair principles for ethical AI. As artificial intelligence (AI) becomes more powerful and more deeply integrated into our lives, the questions of how it is used and deployed are all the more important. What values guide AI?…

MIT’s McGovern Institute is shaping brain science and improving human lives on a global scale
In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision:…

Tufa Labs Introduced LADDER: A Recursive Learning Framework Enabling Large Language Models to Self-Improve without Human Intervention
Large Language Models (LLMs) benefit significantly from reinforcement learning techniques, which enable iterative improvements by learning from rewards. However, training these models efficiently remains challenging, as they often require extensive datasets and human supervision to enhance their capabilities. Developing methods that allow LLMs to self-improve autonomously without additional human input or large-scale architectural modifications has…

Like human brains, large language models reason about diverse data in a general way
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio. MIT researchers probed the inner workings of LLMs to better understand how they…

Aligning AI with human values
Senior Audrey Lorvo is researching AI safety, which seeks to ensure increasingly intelligent AI models are reliable and can benefit humanity. The growing field focuses on technical challenges like robustness and AI alignment with human values, as well as societal concerns like transparency and accountability. Practitioners are also concerned with the potential existential risks associated with…