AppleInsider published a blog post on April 9 reporting that Apple has strengthened its security measures in iOS 26.4 and macOS 26.4, fixing a Prompt Injection vulnerability in Apple Intelligence on Apple devices.
At RSAC, the world’s largest cybersecurity event, researchers disclosed a security vulnerability in Apple’s on-device AI system and successfully bypassed Apple’s security protection mechanisms using the Prompt Injection technique. Test data showed an attack success rate of 76% in 100 experiments.
According to a blog post, attackers combined two innovative methods to carry out the intrusion. The first method, called “Neural Exec,” involves constructing adversarial inputs that appear meaningless to humans but can induce the model to perform specific actions.
The second method utilizes the right-to-left overlay capability of Unicode to hide malicious commands and bypass filters. The manipulated model can not only generate content controlled by the attacker, but may also directly influence application behavior or leak sensitive user data through system APIs.
Security researchers point out that Apple’s hybrid design, which places small models locally to protect privacy, but also increases the attack surface due to deep system integration, raises an estimated 100,000 to 1 million users who face the aforementioned security risks.
The study concludes that local models do not inherently possess higher security. While Apple’s privacy strategy is superior to cloud-based solutions, the key to system security lies in the model’s ability to effectively resist adversarial input. Apple has strengthened its protective measures in iOS 26.4 and macOS 26.4, and no actual attack cases have been reported to date.
