Apple’s Proposed CSAM Scanning is a Mistake

If you’re unfamiliar with it, you can brush up on Apple’s CSAM proposal here (scroll down to the “CSAM detection” heading): https://www.apple.com/child-safety/

As an iPhone user since 2009, I find this move by Apple very concerning. I can’t say it any better than the good folks here have already: https://appleprivacyletter.com

But Apple Already Scans Photos Uploaded to iCloud…

Yes, they do. And they have every right to scan their servers for illegal and/or abusive content. But this is a new technology we’re talking about, one that moves scanning to users’ devices, so it deserves its own discussion.

But Apple Says It Won’t Let This System Be Abused…

Are we really supposed to believe that, long term, this functionality will just sit on our phones and do nothing, even though it could scan photos as they’re added to the library? Or messages as they’re being typed?

The problem is that once you add this functionality, the cat is out of the bag. Apple’s answer to requests to further invade users’ privacy on-device changes from “we can’t” to “we won’t”, a position it can’t possibly hope to maintain, especially against the likes of China or the US government.

But Don’t We Already Trust Apple, Especially with Data Uploaded to iCloud?

Yes, we do. And at some point you have to trust someone, whether it’s your phone’s manufacturer, your telco, or the developers of the apps you use.

But when there’s such flagrant disregard for users, and when a system like this has so much potential for abuse (see “Apple photo-scanning plan faces global backlash from 90 rights groups”), to me that crosses a line and means the company pushing such a policy is no longer trustworthy.

If a company actively screws its users in broad daylight, then what is it willing to do behind closed doors?

At least previously Apple had the veneer of being a privacy- and user-centric company. No more, if this goes through.

But the Technology is Solid…

No, it’s not. Experts who previously built a CSAM scanning system have said:

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.

We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides….

University Researchers Who Built a CSAM Scanning System Urge Apple to Not Use the ‘Dangerous’ Technology (https://www.macrumors.com/2021/08/20/university-researchers-csam-dangerous/)

Also, check out this post by respected technologist Bruce Schneier: Apple’s NeuralHash Algorithm Has Been Reverse-Engineered.
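To make the researchers’ false-positive and “gaming the system” concerns concrete, here is a minimal sketch of hash-based content matching in Python. It uses a toy 64-bit average hash rather than Apple’s actual NeuralHash, and the file names and match threshold are purely hypothetical; the point is only to show how near-duplicate matching treats any two images whose hashes land close together as “the same”, whether that closeness is accidental (a false positive) or deliberately engineered (a crafted collision).

```python
# Toy perceptual-hash sketch (an 8x8 "average hash"), NOT Apple's NeuralHash.
# Illustrates why hash-based content matching can produce false positives:
# any two images whose hashes fall within the match threshold are treated
# as "the same", even if they are visually unrelated.
from PIL import Image  # pip install Pillow

def average_hash(path: str) -> int:
    """Downscale to 8x8 grayscale and set a bit for each pixel above the mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, value in enumerate(pixels):
        if value > mean:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical file names and threshold, purely for illustration.
    reference = average_hash("flagged_reference.jpg")
    candidate = average_hash("innocent_photo.jpg")
    THRESHOLD = 10  # "close enough" cutoff; real systems tune this value
    if hamming(reference, candidate) <= THRESHOLD:
        print("MATCH reported (possible false positive or crafted collision)")
    else:
        print("No match")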

OK, but CSAM Scanning is Only for Photos Uploaded to iCloud, So Just Opt Out

This is where it starts, not where it ends.

The long-term goal here is probably to allow on-device scanning of Signal, WhatsApp, etc. messages. No need for encryption backdoors then. And better to have this capability device-wide rather than app-by-app, as China (WeChat) and others are currently doing.

But I Have Nothing to Hide…

The problem is that the definition of what is “illegal” can shift very quickly, especially in certain countries, so moving the ability to scan for “illegal” material onto users’ devices is absolutely disastrous for privacy and freedom.

This technology exposes all iOS users, and their past and present photo libraries, to the whims of the current definition of “legal”.

But We Have to Stop the Bad Guys…

If you add CSAM scanning to phones, then sure, bad guys probably won’t use those phones to do bad things; they’ll just move to other technology. Meanwhile, every user is now exposed to the potential abuses of the CSAM scanning system. And if history has taught us anything, it’s that things like this will be abused and/or co-opted.

In the end, aren’t we just trading away iPhone users’ privacy in exchange for the much greater abuses that the CSAM scanning system will enable?


These are my concerns. I respect the opposite position, though, and have no judgment for anyone who continues to use Apple devices should this feature be rolled out. I’m just not comfortable with where this path will take us…and it makes me sad because I freakin’ love my iPhone.

If you’re concerned too, please sign the EFF’s petition here: https://www.nospyphone.com

1 thought on “Apple’s Proposed CSAM Scanning is a Mistake”

  1. As you imply, this is “thin end of the wedge” stuff. It puts in place a mechanism to break or bypass any level of encryption by compromising the device itself. Because “think of the children” and “those filthy terrorists”. Yeah, right.

    And because they’re using a “think of the children” argument to promote this subjugation/abuse, anyone who opposes it can instantly be branded a supporter of child porn, or as someone who doesn’t care about children – which, as anyone with two brain cells to rub together should already know, is the classic textbook example of the “appeal to emotion” fallacy.

    My problem with the “nothing to hide, nothing to fear” argument is that it makes several assumptions that cannot be guaranteed, and in fact there are perverse incentives for the exact opposite to occur:
    1. The assumption that the current people in positions of power (whether governments or tech companies) are benign, altruistic, have no ulterior motives and have the “users” best interests at heart.
    2. The assumption that all future power-holders for the rest of time (or at least for the rest of the users’ lives) will be equally benign, altruistic, honest and transparent.
    3. The assumption that laws and cultural-norms will remain as they are today.
    4. The assumption that these monitoring systems cannot be hacked or hijacked by malevolent third parties (e.g. criminals, people with grudges).
    5. The assumption that the power-holders’ repositories of your information are of no use to enemies, identity-thieves etc., and that these repositories are unhackable, and that the information won’t be lost or leaked.
    6. The assumption that the algorithms driving these systems are flawless and won’t wrongly accuse (and automatically convict or tarnish the reputation of) any individual.
    7. The assumption that the record-systems used in the backend of these various systems will never develop errors/corruption, and will therefore never hold incorrect data on an individual.

    The biggest thing people don’t get is probably assumption 2.

    Suppose new “emergency anti-terrorist legislation” comes in as a knee-jerk reaction to an incident or attack on domestic soil. This legislation allows the current government to identify anyone who criticises them as a potential “terrorist”. Either the left wing or the right wing could do this, and depending on your political stance you might not see a problem with one side doing it. If you’re a “leftie”, you might think “I’m safe, as they’re just locking up those racist skinheads” – and if you’re a bit right-wing, you might think “Brilliant! It’s time they sorted out those horrible vandals, violent gangs, leftie bigots and Jew-haters”.

    But what about when the goalposts move? What about when they determine that 10 years ago, you texted someone to say, “Ugh, have you seen this government’s latest budget announcement? Cutting education funding? I wish I could string up the education minister by his balls!” Suddenly you’re identified as a potential terrorist, and you’re on record as inciting violence or issuing a death threat!

    And what if you’re simply not “left” or “right” enough? We see this within the general “woke” community and saw it within the BLM hysteria. People who broadly adhered to the original values of these movements were being demonised and publicly shamed by “mob rule” for not using the exact terminology that had been determined to be the mass consensus that day, or they were branded “racist” for not virtue-signalling prominently enough (ignoring the fact that they, y’know, weren’t racist at all). You might be glad that “your” party is in power, but can you be sure that nothing will ever happen to make them doubt that you are a dedicated, “true” supporter? And even if so, can you guarantee that they will maintain power for the rest of your life? Will the opposition never get in, and will viewpoints and motives never shift in the big corporations?

    I’m aware that “thin end of the wedge” and extrapolation arguments can also be abused to make a point, but I’m doing my best to keep this all within the realms of possibility and restrict it to behaviours and patterns I’ve already witnessed in the UK/EU/US since the millennium. It’s really to further illustrate the dangers that need to be mitigated against. I’ve not seen any proposal yet for any sort of blanket surveillance or censorship system that has suitable checks and balances in place to prevent the abuses described above. I’m not sure it would even be possible to have such a system with such checks and balances.

    It all seems to be a case of “We are your government/Apple/Google/Microsoft. We are your friends. Don’t be stupid, just trust us, we promise not to abuse you”. I don’t like any of their track-records regarding honesty and promises.

    Personally, I’ve never used commercial cloud storage or sync services, because I know that Apple and Google don’t care about my privacy or the safety of my data. They protect their cloud the minimal amount required to maintain a convenience for “most” people. That’s why you get naive celebrities’ nude photos leaked online every so often, etc. They actually believe that their stuff is “safe” in “the cloud”! All the provider (e.g. Apple) cares about is how they can lock you into their walled garden so you keep buying their products, and how they can find new ways of selling you to advertisers. They don’t give a hoot about your identity or your precious memories. Those are literally just a tool to get you hooked. There’s simply no incentive for them to make any of it “secure” beyond the annoying security theater you see whenever you try to log in to these systems.

    Using sync or “the cloud” is the equivalent of putting all your private details, photos and music in a cardboard box in the middle of a busy street, with a note on saying, “my private stuff, please don’t open”. If everyone does it, hackers might not pick yours. But if your box has bank details and nude selfies in it, or you’re a famous person, why would you risk it? And if the owner of the box decides to incinerate all the boxes they store, or accidentally loses your box, or even just decides you’re not allowed to look in your box anymore, you’re at their mercy.

    The only approach I can see, beyond self-hosted solutions or more ethical services such as /e/ and Disroot, would be to do it the old-fashioned way and save your photos and contacts to your computer and/or external offline storage media like USB disks. And buy a phone with enough storage (or get an SD card) so that you don’t even need “the cloud”.

    And if a company like Apple decides to abuse you with its devices, you can always vote with your feet! You don’t “need” an iPhone! You don’t even “need” a smartphone. I had an Android handset briefly, but ditched it years ago and haven’t missed it. Anyone you care about, or who cares about you, will contact you by whatever means you are available.
