When Google introduced photo scanning capabilities on Android devices, it triggered a major backlash. The company was accused of silently installing monitoring tools without user consent.
At the time, Google clarified that the new framework, called SafetyCore, was only on-device infrastructure meant to help apps detect harmful content, not a system that scans user data by default. According to Google, SafetyCore would activate only when a feature that uses it is enabled, and it would remain fully under user control.
That moment has now arrived, starting with Google Messages. As reported by 9to5Google, the app is rolling out Sensitive Content Warnings, a feature that blurs explicit images such as nudes before they're shown. Users are then given the option to view the content, ignore it, or block the sender.
Importantly, all of this scanning happens entirely on the device. Google states that no data is sent back to its servers, and that claim is backed by GrapheneOS, the security-focused Android distribution. According to its developers, SafetyCore does not perform client-side scanning for reporting; instead, it provides local AI models that let apps identify and flag content such as spam, scams, and malware while keeping the user's data private and on the device.
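To make that on-device pattern concrete, here is a minimal Kotlin sketch of how an app might gate an incoming image on a local classifier. The names used here (Verdict, SensitiveContentClassifier, prepareForDisplay) are illustrative assumptions, not SafetyCore's actual API, which Google has not published:

```kotlin
// Illustrative sketch of on-device content gating; not Google's API.

enum class Verdict { SAFE, SENSITIVE }

// A local classifier: the image bytes never leave the process,
// so nothing is reported to a remote server.
fun interface SensitiveContentClassifier {
    fun classify(imageBytes: ByteArray): Verdict
}

// What a messaging app might check before rendering an incoming image.
// Returns true if the image should be blurred until the user opts in.
fun prepareForDisplay(
    imageBytes: ByteArray,
    classifier: SensitiveContentClassifier
): Boolean = when (classifier.classify(imageBytes)) {
    Verdict.SENSITIVE -> true  // blur first; user can choose to view
    Verdict.SAFE -> false      // display normally
}

fun main() {
    // Stub in place of a real local ML model, for illustration only.
    val stub = SensitiveContentClassifier { bytes ->
        if (bytes.size > 1_000_000) Verdict.SENSITIVE else Verdict.SAFE
    }
    println(prepareForDisplay(ByteArray(2_000_000), stub)) // true -> blur
}
```

The key design point is that both the classifier and the blur decision run in-process, which is why this approach can work without sending the image anywhere.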
However, GrapheneOS voiced concerns about transparency, noting that it's unfortunate SafetyCore is neither open source nor part of the Android Open Source Project. Its developers also pointed out that the AI models themselves are closed, and said they would welcome local neural network features as long as those were open source.

This brings us back to the ongoing debate around secrecy and control. While the rollout of content scanning in Google Messages was anticipated, it raises bigger questions about what’s coming next.
The timing is especially significant, as this capability emerges while end-to-end encryption and user privacy are increasingly under pressure from governments and regulatory bodies around the world.
Each time such tools are introduced, privacy advocates raise alarms, concerned about how far this kind of AI-driven content monitoring might go—and who gets to decide how it’s used.
For now, the feature is switched off by default for adult users, but it’s automatically enabled for children. Adults can choose to activate the feature manually by going to Google Messages Settings > Protection & Safety > Manage sensitive content warnings. For children, the ability to adjust these settings depends on their age and can only be changed through account settings or Google Family Link.

But this is only the beginning. As with Gmail and other Google services, the more than 3 billion users across Android, email, and beyond will soon need to decide how much AI-driven scanning and content analysis they're willing to accept. Although this scanning currently takes place on the device, future features may not offer the same level of privacy.
As Phone Arena notes, the feature also works proactively: if you attempt to send a potentially sensitive image, Google Messages will show a warning and require confirmation before the image is sent. AI monitoring is no longer a distant concept; it's becoming a standard part of the digital experience, and users will have to adapt.
We’re entering a new era—one shaped by AI oversight and blurred lines between protection and surveillance. Welcome to the age of digital gatekeepers.