LLMs easily exploited using run-on sentences, bad grammar, image scaling
A series of vulnerabilities recently revealed by several research labs indicates that, despite rigorous training, high benchmark scores, and claims that artificial general intelligence (AGI) is right around the corner, large language models (LLMs) remain quite naïve and easily confused in situations where human common sense and healthy suspicion would typically prevail. For example,…