Researchers found that DeepSeek's R1 AI "failed to block a single harmful prompt" after being tested against 50 jailbreaking prompts.