Strategies Implemented by Anthropic to Reduce Risks of Users Creating Weapons

Anthropic on Thursday said it had activated tighter artificial intelligence safety controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that its team had not yet determined whether Opus 4 had crossed the benchmark that would require that protection.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the models' advanced ability to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Sonnet 4 did not require the tighter controls.