Simon's Megalomaniacal Legal Resources

(Ontario/Canada)


Civil and Administrative Litigation Opinions for Self-Reppers

Internet - Facebook

. Canada (Privacy Commissioner) v. Facebook, Inc. [failed to safeguard its users’ information]

In Canada (Privacy Commissioner) v. Facebook, Inc. (Fed CA, 2024) the Federal Court of Appeal allowed an appeal by the Privacy Commissioner from a decision of the Federal Court, which had dismissed an application alleging that the respondent Facebook "breached the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA) through its practice of sharing Facebook users’ personal information with third-party applications (apps) hosted on the Facebook platform".

Here the court considers whether Facebook "failed to safeguard its users’ information":
The safeguarding obligation

[109] An organization can be perfectly compliant with PIPEDA and still suffer a data breach. However, the unauthorized disclosures here were a direct result of Facebook’s policy and user design choices. Facebook invited millions of apps onto its platform and failed to adequately supervise them. The Federal Court failed to engage with the relevant evidence on this point, and this was an error of law.

[110] Facebook did not review the content of third-party apps’ privacy policies, despite these apps having access to downloading users’ data and the data of their friends. Since Facebook never reviewed these privacy policies, and since friends of downloading users could not have reviewed these privacy policies either, the policing of an app’s data use and disclosure was left in the hands of a small number of downloading users who may never have read the policies themselves.

[111] Facebook also did not act on TYDL’s 2014 request for unnecessary user information, despite this request being described as a “red flag” by Facebook’s affiant. While Facebook’s failure to review third-party apps’ privacy policies was a failure to take sufficient preventative action against unauthorized disclosure of user data, Facebook’s failure to take action upon seeing red flags amounted to Facebook turning a blind eye to its obligation to adequately safeguard user data.

[112] I would add that Facebook’s inaction here was part of a larger pattern: in December 2015, when Facebook became aware that TYDL had scraped and sold the data of users and users’ friends, contrary to Facebook’s own policies, it did not notify affected users and it did not ban Cambridge Analytica or Dr. Kogan from Platform. Facebook only banned Dr. Kogan and Cambridge Analytica in March 2018—two and a half years after the media reports emerged about TYDL’s scraping and selling of user data—when Facebook found out that Dr. Kogan and Cambridge Analytica may not have actually deleted the improperly obtained user data (Federal Court decision at para. 39; see also Facebook’s 2018 Partial Response to the Commissioner).

[113] To be clear, Facebook’s conduct following its disclosure of data to TYDL is not legally relevant. As held by the Federal Court, the safeguarding principle deals with an organization’s “internal handling” of data, not its post-disclosure monitoring of data. However, Facebook’s post-disclosure actions contextually support the finding that it did not take sufficient care to ensure the data in its possession prior to disclosure was safeguarded.

[114] Facebook argues that it would have been practically impossible to read all third-party apps’ privacy policies to ensure compliance, and that Facebook was entitled to rely on the good faith performance of the contracts it had in place.

[115] It may be true that reading all third-party apps’ privacy policies would have been practically impossible. But this is a problem of Facebook’s own making. It invited the apps onto its website and cannot limit the scope of its responsibilities under section 6.1 and Principle 3 of PIPEDA by a claim of impossibility.

[116] Despite its obvious limitations, there is a loose analogy here to the commercial host liability line of cases (beginning with Jordan House Ltd. v. Menow, [1974] S.C.R. 239, 1973 CanLII 16 (SCC) at 248): having invited customers in with a clear profit motive, the host cannot now argue that too many came and some behaved badly for it to meet its obligations. Admittedly, the question before this court is not one of negligence—but it is one of whether Facebook took reasonable steps to protect the data of users that it invited onto its site. This observation has even greater resonance when considered in the context of Facebook’s business model: the more apps, the more users, the more traffic, the more revenue. Having created the opportunity for the data breach, Facebook cannot contract itself out of its statutory obligations.

[117] Facebook is entitled to rely on the good faith performance of contracts, but only to a point. As discussed above, Mark Zuckerberg admitted that it would be difficult to guarantee that there were no “bad actors” using its Platform. It is incongruent to expect a bad actor to carry out a contract in good faith. Facebook therefore should have taken further measures to monitor third-party contractual compliance.

[118] I conclude that Facebook breached its safeguarding obligations during the relevant period by failing to adequately monitor and enforce the privacy practices of third-party apps operating on Platform.

. Canada (Privacy Commissioner) v. Facebook, Inc.

In Canada (Privacy Commissioner) v. Facebook, Inc. (Fed CA, 2024) the Federal Court of Appeal allowed an appeal by the Privacy Commissioner from a decision of the Federal Court, which had dismissed an application alleging that the respondent Facebook "breached the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA) through its practice of sharing Facebook users’ personal information with third-party applications (apps) hosted on the Facebook platform".

Here the court describes Facebook, and some of its information activities:
[1] The Privacy Commissioner of Canada commenced proceedings in the Federal Court alleging that Facebook, Inc. (now Meta Platforms Inc.) breached the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA) through its practice of sharing Facebook users’ personal information with third-party applications (apps) hosted on the Facebook platform. The proceeding arose from the Commissioner’s investigation into the scraping of Facebook user data by the app “thisisyourdigitallife” (TYDL) and its subsequent selling of the data to Cambridge Analytica Ltd. (Cambridge Analytica) for psychographic modeling purposes between November 2013 and December 2015.

[2] The Federal Court, per Manson J. (Canada (Privacy Commissioner) v. Facebook, Inc., 2023 FC 533, 2023 A.C.W.S. 1512), dismissed the Commissioner’s application, finding that the Commissioner had not shown that Facebook failed to obtain meaningful consent from users for disclosure of their data, nor that Facebook failed to adequately safeguard user data.

[3] I would allow the appeal. The Federal Court erred in its analysis of meaningful consent and safeguarding under PIPEDA. I conclude that Facebook breached PIPEDA’s requirement that it obtain meaningful consent from users prior to data disclosure and failed in its obligation to safeguard user data.

....

Facebook’s privacy measures

[4] Facebook is an online social media platform that allows users to share information. Facebook’s business model centres around attracting and maintaining users on its platform for the purpose of selling advertising. The greater the number of users and the more specific the information about users known to advertisers, the greater the revenue to Facebook. As will be discussed later, this is an important contextual fact which frames the legislative obligations at issue in this appeal.

[5] In 2007, Facebook launched “Platform”, a technology that enabled third parties to build apps that can run on Facebook and be installed by users. These apps offer users personalized social and entertainment experiences, such as playing games, sharing photos, or listening to music. By 2013, 41 million apps were available on Facebook.

[6] Facebook also deployed an application programming interface called “Graph API” which allows third-party apps to receive user information. Between 2013 and 2018, Graph API underwent two revisions. Under Version 1 (v1), apps could ask installing users for permission to access information about installing users and about installing users’ friends. Under Version 2 (v2), issued in April 2014, apps could no longer request permission to access information about installing users’ friends, subject to limited exceptions, all of which were removed by March 2018. Facebook also introduced “App Review” alongside v2, a process that was meant to require apps seeking access to user information beyond a user’s basic profile to show how the additional information would improve the user’s experience on the app.

[7] Although Graph API v2 took effect in April 2014, existing apps were given a one-year grace period to continue functioning under Graph API v1. The alleged breaches of PIPEDA that provided the impetus for these proceedings occurred under Graph API v1, and took place between November 2013, when TYDL was launched, and December 2015, when TYDL was removed from Facebook’s Platform.

[8] During this period, there were three layers to Facebook’s consent policies and practices: platform-wide policies, user controls, and educational resources. As these practices provide context to the inquiries into meaningful consent and safeguarding, they require some elaboration.

Facebook’s platform-wide policies

[9] Facebook had two user-facing policies in place at the relevant time: the Data Policy and the Terms of Service. While Facebook employed different versions of these policies over the relevant period, the policies “remained mostly consistent” (Federal Court decision at para. 15). When users signed up to Facebook, they had to agree with the Terms of Service and were told that in so doing, they were deemed to have read the Data Policy. Both policies were hyperlinked directly above Facebook’s “sign up” button.

[10] The Terms of Service explained users’ rights and responsibilities, including how users could control their information. The Terms of Service explained that “[apps] may ask for your permission to access your content and information as well as content and information that others have shared with you”; that “your agreement with that [app] will control how the [app] can use, store and transfer that content and information”; and that “[y]ou may also delete your account or disable your [app] at any time”.

[11] The Terms of Service were approximately 4,500 words in length.

[12] The Data Policy explained how information is shared on Facebook and included descriptions of the following:
a) The meaning of “public information” (namely, information that a user “choose[s] to make public, as well as information that is always publicly available”), and the consequences of making information public (including the information being “accessible to anyone who uses… [Facebook’s] Graph API”);

b) Facebook’s user controls and permissions for sharing user data; and

c) Information about users that is shared with third-party apps—including when their Facebook friends used third-party apps—and how users could control the information they wished to share.
[13] The Data Policy, which the user was deemed to have read by agreeing to the Terms of Service, was approximately 9,100 words in length.

Facebook’s user controls

[14] Facebook users could manipulate certain settings and permissions to choose the extent to which information was shared with third-party apps.

[15] In 2010, Facebook added the Granular Data Permissions (GDP) process to Platform. The GDP provided users installing an app with a notice about which categories of information that app sought to access, a hyperlink to the app’s privacy policy, and the choice to grant or deny the requested permissions. Facebook’s 2014 version of the GDP process gave users the ability to grant or deny apps permission to access specific categories of data.

[16] Facebook users also had access to an “App Settings” page that allowed them to view all apps in use, delete unwanted apps, or turn off Platform to prevent any apps from accessing any non-public information. After the launch of the GDP process, Facebook updated the App Settings page to display each app’s current permissions and to allow users to remove certain permissions.

[17] The App Settings page also had an “Information Accessible Through Your Friends” setting that enabled users to restrict information accessible to apps installed by their friends. The setting stated that “[p]eople on Facebook who can see your information can bring it with them when they use apps”.

[18] Finally, Facebook users had access to a “Privacy Settings” page, which allowed them to select a default audience for posts, but which also reminded users that “the people you share with can always share your information with others, including apps”. Facebook users could also opt out of Platform, preventing apps from accessing any of their information, or delete their account and ask relevant apps to delete their information.

Facebook’s educational resources

[19] Resources offered to Facebook users between 2013 and 2015 included a Help Center, which provided educational materials on privacy topics such as what information is shared when friends use third-party apps and how to control that information. Other tools available included “Privacy Tour”, “Privacy Checkup”, and “Privacy Basics”, through which users could inform themselves about Facebook’s privacy policies and review certain privacy settings; and “Privacy Shortcuts”, found next to Facebook’s “home” button, which provided information to users under the headings of “Who can see my stuff?”, “Who can contact me?”, and “How do I stop someone from bothering me?”.

Facebook’s contracts with third-party apps

[20] Facebook required third-party apps to agree to Facebook’s Platform Policy and Terms of Service before being granted access to Platform. The Platform Policy imposed contractual duties on apps, including that the app:
a) Only request user data necessary to operate their app, and only use a user’s friends’ data in the context of the user’s experience on the app;

b) Have a privacy policy telling users what data the app would use and how it would use or share that data;

c) Obtain explicit consent from a user before using any non-basic information for any other purpose aside from displaying it back to the user; and

d) Refrain from selling or purchasing data obtained from Facebook.
[21] Facebook admits that it did not assess or verify the actual content of apps’ privacy policies; it only verified that the hyperlink to an app’s privacy policy linked to a functioning web page.

[22] The Platform Policy also specified Facebook’s right to take enforcement action. While Facebook took approximately 6 million enforcement actions against apps between August 2012 and July 2018, the reasons for each enforcement action are unknown.


CC0

The author has waived all copyright and related or neighboring rights to this Isthatlegal.ca webpage.




Last modified: 19-09-24