Anthropic CEO Dario Amodei said the company ran DeepSeek's R1 model through a safety evaluation, and the results were alarming. In the test, R1 generated rare and dangerous information about biological weapons, and, most concerningly, it did so without any blocking mechanisms in place: nothing stopped the model from producing such information, which Anthropic regards as a serious risk. Amodei explained that these tests are part of Anthropic's ongoing effort to assess the national security risks posed by various AI models, and that DeepSeek's performance raised significant concerns.
Amodei stressed that DeepSeek's models are not literally dangerous today, but that they could soon become capable of generating and spreading harmful information. While he praised DeepSeek's engineers as highly talented, he urged the company to take AI safety considerations more seriously.
The concern is not limited to Anthropic. Cisco's security researchers found that DeepSeek's R1 failed to block a single harmful prompt in their tests, yielding a 100% jailbreak success rate and leaving the door open to the generation of harmful content and assistance with illegal activities.
Despite these security issues, DeepSeek is being adopted rapidly by companies and government organizations. AWS and Microsoft, for instance, have begun integrating the model into their cloud platforms. At the same time, a growing number of organizations, including the U.S. Navy and the Pentagon, have banned its use.
Amodei acknowledged that DeepSeek has emerged as a genuine new competitor, on a par with the top U.S. AI companies, and said the development should be taken seriously.