LLMs factor in unrelated information when recommending medical treatments

A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, such as typos, extra white space, missing gender markers, or uncertain, dramatic, and informal language, according to a study by MIT researchers. They found that making stylistic or grammatical changes to messages increases…