
You Don’t Need to Share Data to Train a Language Model Anymore—FlexOlmo Demonstrates How
The development of large-scale language models (LLMs) has historically required centralized access to extensive datasets, many of which are sensitive, copyrighted, or governed by usage restrictions. This constraint severely limits the participation of data-rich organizations operating in regulated or proprietary environments. FlexOlmo—introduced by researchers at the Allen Institute for AI and collaborators—proposes a modular training…
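The excerpt cuts off before describing the approach, but the general idea of modular training can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not FlexOlmo's actual architecture: it uses a toy PyTorch mixture-of-experts layer in which each hypothetical data owner trains a small expert module on its own data and only the module weights, never the raw data, are merged into a shared model. All names, shapes, and the training loop are invented for illustration.

```python
# Illustrative sketch only (hypothetical names and shapes), not the FlexOlmo implementation.
# Idea: owners train expert modules locally; only weights are shared and merged.
import torch
import torch.nn as nn

DIM = 64  # hypothetical hidden size


class ExpertFFN(nn.Module):
    """A small feed-forward expert a data owner could train locally."""
    def __init__(self, dim: int = DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_locally(expert: ExpertFFN, private_batches) -> dict:
    """Runs on the owner's side; only the state_dict leaves the premises."""
    opt = torch.optim.Adam(expert.parameters(), lr=1e-3)
    for x, y in private_batches:                 # private data never shared
        opt.zero_grad()
        loss = nn.functional.mse_loss(expert(x), y)
        loss.backward()
        opt.step()
    return expert.state_dict()                   # shared artifact: weights only


class MergedMoELayer(nn.Module):
    """Combines independently trained experts behind a learned router."""
    def __init__(self, expert_states: list, dim: int = DIM):
        super().__init__()
        self.experts = nn.ModuleList()
        for state in expert_states:
            e = ExpertFFN(dim)
            e.load_state_dict(state)
            self.experts.append(e)
        self.router = nn.Linear(dim, len(self.experts))  # soft routing over experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.router(x), dim=-1)            # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, dim, n_experts)
        return torch.einsum("bdn,bn->bd", outs, weights)


if __name__ == "__main__":
    # Two hypothetical owners train on synthetic "private" data, then merge.
    owner_weights = []
    for _ in range(2):
        data = [(torch.randn(8, DIM), torch.randn(8, DIM)) for _ in range(3)]
        owner_weights.append(train_locally(ExpertFFN(), data))
    layer = MergedMoELayer(owner_weights)
    print(layer(torch.randn(4, DIM)).shape)  # torch.Size([4, 64])
```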

Update: Cybercriminals still not fully on board the AI train (yet)
In November 2023, Sophos X-Ops published research exploring threat actors’ attitudes towards generative AI, focusing on discussions on selected cybercrime forums. While we did note a limited amount of innovation and aspiration in these discussions, there was also a lot of skepticism. Given the pace at which generative AI is evolving, we thought we’d take…