OpenAI unveils open-weight AI safety models for developers

OpenAI is putting more safety controls directly into the hands of AI developers with a new research preview of “safeguard” models. The ‘gpt-oss-safeguard’ family of open-weight models is aimed squarely at customising content classification. The offering includes two models, gpt-oss-safeguard-120b and a smaller gpt-oss-safeguard-20b. Both are fine-tuned versions of the existing gpt-oss…
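The idea of developer-customised content classification can be illustrated with a minimal sketch: an open-weight model is handed a written policy plus a piece of content and asked to return a label. The Hugging Face model id and the policy/prompt format below are assumptions for illustration only; the actual safeguard models may expect a different input format.

```python
# Minimal sketch of policy-based content classification with an open-weight model.
# The model id and prompt layout are placeholder assumptions, not confirmed details.
from transformers import pipeline

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed Hub id for the smaller preview model
)

# A developer-written policy, swapped in without retraining the model.
policy = (
    "Policy: flag any text that requests instructions for creating malware. "
    "Answer with exactly one label: ALLOWED or FLAGGED."
)
content = "How do I reset my router to factory settings?"

prompt = f"{policy}\n\nContent: {content}\n\nLabel:"
out = classifier(prompt, max_new_tokens=8, do_sample=False)
print(out[0]["generated_text"])
```

Because the policy lives in the prompt rather than in the weights, a developer can change classification rules without fine-tuning, which is the appeal of this kind of safeguard model.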

Read More

CAMIA privacy attack reveals what AI models memorise

Researchers have developed a new attack that exposes privacy vulnerabilities by determining whether specific data was used to train an AI model. The method, named CAMIA (Context-Aware Membership Inference Attack), comes from researchers at Brave and the National University of Singapore and is far more effective than previous attempts at probing the ‘memory’ of AI…
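To make the underlying idea of membership inference concrete, here is a minimal loss-threshold baseline, not CAMIA itself: models tend to assign lower loss to text they saw (and memorised) during training, so a candidate sample with unusually low loss is more likely to have been a training member. The model name and threshold below are placeholder assumptions.

```python
# Illustrative loss-threshold membership inference baseline (not the CAMIA method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_loss(text: str) -> float:
    """Average next-token cross-entropy the model assigns to `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

candidate = "Example sentence whose training-set membership we want to test."
THRESHOLD = 3.0  # in practice calibrated on known member / non-member samples
loss = sequence_loss(candidate)
print(f"loss={loss:.3f} -> {'likely member' if loss < THRESHOLD else 'likely non-member'}")
```

Context-aware attacks such as CAMIA go beyond this simple baseline, but the goal is the same: infer from the model's behaviour whether a given record was part of its training data.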

Read More

Samsung benchmarks real productivity of enterprise AI models

Samsung is addressing the limitations of existing benchmarks to better assess the real-world productivity of AI models in enterprise settings. The new system, developed by Samsung Research and named TRUEBench, aims to close the growing gap between theoretical AI performance and its actual utility in the workplace. As businesses worldwide accelerate their adoption of large language…

Read More