Apple’s plan to find CSAM should have centered around scanning images on iCloud servers, not on users’ devices, where there is a greater expectation of privacy

the CyberTipline, or any successor to the CyberTipline operated by NCMEC.

There is no escaping this responsibility when and if CSAM is discovered:

(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—

(1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
(2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

What is not required is that companies actively seek out CSAM on their services:

(f) Protection of Privacy.—Nothing in this section shall be construed to require a provider to—

(1) monitor any user, subscriber, or customer of that provider;
(2) monitor the content of any communication of any person described in paragraph (1); or
(3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).

These two provisions get at why Facebook’s and Apple’s reported numbers have historically been so different: it’s not that there is somehow more CSAM on Facebook than on Apple devices, but that Facebook is scanning all of the images sent to and over its service, while Apple is not looking at what is on your phone or in its cloud. From there the numbers make much more sense: Facebook is reporting what it finds, while Apple, as the title of subsection (f) suggests, is protecting privacy and simply not looking at images at all.
Apple Protects Children
Last week Apple put up a special page on their website entitled Expanded Protections for Children:

At Apple, our goal is to create technology that empowers people and enriches their lives — while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM).
Apple is introducing new child safety features in three areas, developed in collaboration with child safety experts. First, new communication tools will enable parents to play a more informed role in helping their children navigate communication online. The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
Finally, updates to Siri and Search provide parents and children expanded information and help if they encounter unsafe situations. Siri and Search will also intervene when users try to search for CSAM-related topics.

John Gruber at Daring Fireball has a good overview of what are in fact three very different initiatives; what unites them, though, and continues to differentiate Apple’s approach from Facebook’s, is that Apple is scanning content on your device, while Facebook is doing it in the cloud. Apple emphasized repeatedly that this ensured that Apple does not get access to your content. From the “Communications Safety in Messages” section:

Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.

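To make concrete what “on-device” means in this context, here is a minimal sketch in Swift, with an invented classifier standing in for the real machine learning model: the image is scored locally, the only possible outcome is a local decision to blur and warn, and neither the image nor the verdict is sent anywhere.

```swift
// Minimal sketch of the on-device flow described above; not Apple's code.
// `sensitivityScore` is a hypothetical stand-in for the real ML model.
import Foundation

struct IncomingAttachment {
    let imageData: Data
}

enum LocalAction {
    case showNormally
    case blurAndWarn   // the warning is rendered locally; nothing is uploaded
}

// Placeholder classifier: in the real feature this would be an on-device model.
func sensitivityScore(for attachment: IncomingAttachment) -> Double {
    return 0.0
}

func handle(_ attachment: IncomingAttachment, warningThreshold: Double = 0.9) -> LocalAction {
    // The score is computed and consumed entirely on the device, which is how
    // "Apple does not get access to the messages" is preserved.
    return sensitivityScore(for: attachment) >= warningThreshold ? .blurAndWarn : .showNormally
}

print(handle(IncomingAttachment(imageData: Data())))  // showNormally with the placeholder score
```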
From the “CSAM Detection” section:

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations…This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.

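It is worth making the mechanism in that description concrete. Here is a minimal sketch in Swift of the basic idea: hashes of photos bound for iCloud are compared on the device against a database of known hashes, and nothing is flagged until the number of matches crosses a threshold. Every name and value is hypothetical, and the real system adds the “new applications of cryptography” Apple refers to, none of which is modeled here.

```swift
// Minimal sketch of threshold-based, on-device hash matching; not Apple's
// implementation, and the hash values are made up for illustration.
import Foundation

// Hypothetical stand-in for a perceptual image hash.
typealias ImageHash = String

struct OnDeviceMatcher {
    let knownHashes: Set<ImageHash>   // e.g. derived from the NCMEC-provided database
    let reportingThreshold: Int       // matches required before anything is surfaced

    func matches(in uploadHashes: [ImageHash]) -> [ImageHash] {
        uploadHashes.filter { knownHashes.contains($0) }
    }

    // Mirrors the claim that the provider "only learns about users' photos if
    // they have a collection of known CSAM": below the threshold, nothing is flagged.
    func shouldFlagAccount(uploadHashes: [ImageHash]) -> Bool {
        matches(in: uploadHashes).count >= reportingThreshold
    }
}

let matcher = OnDeviceMatcher(knownHashes: ["hashA", "hashB", "hashC"],
                              reportingThreshold: 2)
print(matcher.shouldFlagAccount(uploadHashes: ["hashX", "hashA"]))           // false: one match
print(matcher.shouldFlagAccount(uploadHashes: ["hashA", "hashB", "hashY"]))  // true: threshold reached
```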
There are three ways to think about Apple’s approach, both in isolation and relative to a service like Facebook: the idealized outcome, the worst case outcome, and the likely driver.
Capability Versus Policy
Apple’s idealized outcome solves a lot of seemingly intractable problems. On one hand, CSAM is horrific and Apple hasn’t been doing anything about it; on the other hand, the company has a longstanding commitment to ever-increasing amounts of encryption, ideally end-to-end. Apple’s system, if it works precisely as designed, preserves both goals: the company can not only keep end-to-end encryption in Messages, but also add it to iCloud Photos (which is not currently encrypted end-to-end), secure in the knowledge that it is doing its part to not only report CSAM but also help parents look after their children. And, from a business perspective, it means that Apple can continue to not make the massive investments that companies like Facebook have made in trust-and-safety teams; the algorithm will take care of it.
That, of course, is the rub: Apple controls the algorithm, both in terms of what it looks for and what bugs it may or may not have, as well as the input, which in the case of CSAM scanning is the database from NCMEC. Apple has certainly worked hard to be a company that users trust, but we already know that that trust doesn’t extend everywhere: Apple has, under Chinese government pressure, put Chinese user iCloud data on state-owned enterprise servers, along with the encryption keys necessary to access it. What happens when China announces its version of the NCMEC, which not only includes the horrific imagery Apple’s system is meant to capture, but also images and memes the government deems illegal?
The fundamental issue — and the first reason why I think Apple made a mistake here — is that there is a meaningful difference between capability and policy. One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case was that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments. In this case, though, Apple is building the capability, and the only thing holding the company back is policy.
Then again, Apple’s policy isn’t the only one that matters: both the UK and the EU are moving forward on bills that mandate online service companies proactively look for and report CSAM. Indeed, I wouldn’t be surprised if this were the most important factor behind Apple’s move: the company doesn’t want to give up on end-to-end encryption — and likely wants to expand it — which leaves on-device scanning as the only way to satisfy governments not (just) in China but also in the West.
Cloud Versus Device
I think that there is another solution to Apple’s conundrum; what is frustrating from my perspective is that I think the company is already mostly there. Consider the status quo: back in 2020 Reuters reported that Apple decided to not encrypt iCloud backups at the FBI’s request:

Apple Inc. dropped plans to let iPhone users fully encrypt backups of their devices in the company’s iCloud service after the FBI complained that the move would harm investigations, six sources familiar with the matter told Reuters. The tech giant’s reversal, about two years ago, has not previously been reported. It shows how much Apple has been willing to help U.S. law enforcement and intelligence agencies, despite taking a harder line in high-profile legal disputes with the government and casting itself as a defender of its customers’ information.

This has a number of significant implications for Apple’s security claims, and is why earlier this year I ranked iMessage as being less secure than Signal, WhatsApp, Telegram, and Facebook Messenger:

iMessage encrypts messages end-to-end by default; however, if you have iCloud backup turned on, your messages can be accessed by Apple (who has the keys for iCloud backups) and, by extension, law enforcement with a warrant. Unlike WhatsApp, though, this is both on by default and cannot be turned off on a granular basis.

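The distinction is easier to see written down. Below is a minimal model in Swift, with invented names and emphatically not Apple’s implementation, of who holds the decryption keys in each case; whoever holds a key is who a warrant can reach.

```swift
// Minimal model of the key-escrow distinction described above; illustrative only.
import Foundation

enum KeyHolder {
    case userDevice   // end-to-end: only the endpoints can decrypt
    case provider     // escrowed: Apple holds a key for iCloud Backup
}

struct StoredData {
    let label: String
    let keyHolders: [KeyHolder]

    // Anyone holding a key can decrypt, and can therefore be compelled to by a warrant.
    var reachableViaProvider: Bool {
        keyHolders.contains(.provider)
    }
}

let iMessageEndToEnd = StoredData(label: "iMessage in transit",
                                  keyHolders: [.userDevice])
let iCloudBackupCopy = StoredData(label: "iCloud Backup copy of the same messages",
                                  keyHolders: [.userDevice, .provider])

print(iMessageEndToEnd.reachableViaProvider)  // false
print(iCloudBackupCopy.reachableViaProvider)  // true
```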
This caveat applies to almost everything on your iPhone: if you give in to the never-ending prompts to sign in to iCloud and its on-by-default backup solution, your data is accessible to Apple and, by extension, law enforcement with a warrant. I actually think this is reasonable! I wrote this when that Reuters report came out:

Go back to what I said above: determined actors will have access to encryption and facial recognition. Anyone trying to argue whether or not these technologies should exist is not living in reality. It follows then, that we should take care to ensure that good actors have access to these technologies too. That means not making them illegal.
Second, though, legitimate societal concerns about the needs of law enforcement and the radicalizing nature of the Internet should be taken seriously. That means we should think very carefully about making encryption the default…This also splits the difference when it comes to principles: users have agency — they can ensure that everything they do is encrypted — while total privacy is available but not given by default.
I actually think that Apple does an excellent job of striking that balance today. When it comes to the iPhone itself, Apple is the only entity that can make it truly secure; no individual can build their own secure enclave that sits at the root of iPhone security. Therefore, they are right to do so: everyone has access to encryption.
From there it is possible to build a fully secure environment: use only encrypted communications, use encrypted backups to a computer secured by its own hardware-based authentication scheme, etc. Taking the slightly easier route, though — iCloud backups, Facebook messaging, and the like — means that your data is accessible to the platform and, by extension, law enforcement with a warrant.
