Simon's Megalomaniacal Legal Resources

(Ontario/Canada)


Civil and Administrative Litigation Opinions for Self-Reppers



TOPICS


Contracts - Tilden v Clendenning

. Canada (Privacy Commissioner) v. Facebook, Inc. [consent]

In Canada (Privacy Commissioner) v. Facebook, Inc. (Fed CA, 2024) the Federal Court of Appeal allowed an appeal by the Privacy Commissioner from a decision of the Federal Court which dismissed an application that the respondent Facebook "breached the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA) through its practice of sharing Facebook users’ personal information with third-party applications (apps) hosted on the Facebook platform".

Here the court considers that PIPEDA consent is contextual, in passages that are also relevant to internet and fine-print consumer contracts:
Meaningful consent: the friends of users

[74] Clauses 4.3.4 and 4.3.6 of PIPEDA state that the form of consent sought by an organization and the way an organization seeks consent may vary depending on the circumstances. Here, the circumstances of consent differed between two groups of Facebook users whose data was disclosed: users that downloaded third-party apps, and friends of those users.

[75] Only those who installed the third-party apps, and not their friends, were given the opportunity to directly consent to TYDL’s (or other apps’) use of their data upon review of the app’s privacy policy. Direct users of third-party apps were able to use the GDP process, through which they were given notice about the information categories the app sought to access, a link to that app’s privacy policy, and provided the opportunity to grant or deny data permissions.

[76] This distinction between users and friends of users is fundamental to the analysis under PIPEDA. The friends of users could not access the GDP process on an app-by-app basis and could not know or understand the purposes for which their data would be used, as required by PIPEDA.

[77] The only conclusion open to the Federal Court on the evidence was that Facebook failed to obtain meaningful consent from friends of users to disclose their data, and thus breached PIPEDA. This finding hinges mainly on Facebook’s different consent practices for users of apps and those users’ friends, and Facebook’s user-facing data policies and practices with third-party apps more broadly. To the extent this evidence was acknowledged by the Federal Court, it made a palpable and overriding error in its conclusion that there was no breach of PIPEDA.

[78] Facebook did not afford friends of users who downloaded third-party apps the opportunity to meaningfully consent to the disclosure of their data, since friends of users were simply unable to review those apps’ data policies prior to disclosure. This is not in accordance with PIPEDA: clause 4.3.2 of PIPEDA requires that organizations "make a reasonable effort to ensure that the individual is advised of the purposes for which the information will be used".

[79] Facebook’s Platform Policy required third-party apps to inform users via privacy policies what data that app will use, and how it will use or share that data. Even if this were a sufficient practice to obtain meaningful consent of those that installed the app, it would only be sufficient for users able to access that policy at the time of disclosure, which would not include the friends of installing users.

[80] Friends of users were only informed at a high level through Facebook’s Data Policy that their information could be shared with third-party apps when their friends used these apps: the Data Policy noted that "if you share something on Facebook, anyone who can see it can share it with others, including the games, applications, and websites they use" and that "[i]f you have made [certain] information public, then the application can access it just like anyone else. But if you've shared your likes with just your friends, the application could ask your friend for permission to share them" (emphasis added).

[81] However, the Data Policy offers mundane examples of how those apps may use user data. The Policy does not contemplate large-scale data scraping, disconnected from the purpose of the app itself, which occurred in this case:
Your friends and the other people you share information with often want to share your information with applications to make their experiences on those applications more personalized and social. For example, one of your friends might want to use a music application that allows them to see what their friends are listening to. To get the full benefit of that application, your friend would want to give the application her friend list - which includes your User ID - so the application knows which of her friends is also using it. Your friend might also want to share the music you "like" on Facebook.

If an application asks permission from someone else to access your information, the application will be allowed to use that information only in connection with the person that gave the permission, and no one else.

For example, some apps use information such as your friends list, to personalize your experience or show you which of your friends use that particular app.

(Emphasis added.)
[82] This language is too broad to be effective. A user reading this could not sufficiently inform themself of the myriad ways that an app may use their data, and thus could not meaningfully consent to future disclosures to unknown third-party apps downloaded by their friends. Additionally, the language of the Data Policy suggests that there are limitations on an app’s use of a user’s friend’s data. Here, even if consent can be distilled from the circumstances, there was use beyond that which could have reasonably been contemplated.

[83] It should not be forgotten that meaningful consent under Principle 3 and section 6.1 of PIPEDA is based on a reasonable person’s understanding of the nature, use and consequences of the disclosure. Here, it was impossible for friends of users to inform themselves about the purposes for which each third-party app would be using their data at the time of disclosure, or even to know that their data was being shared with such apps. This was a privilege only afforded to direct users of that app. Friends of direct app users who read the Data Policy would have had, at best, a vague and rosy picture of how third-party apps may use their data. Upon signing up to Facebook, friends of direct app users were effectively agreeing to an unknown disclosure, to an unknown app, at an unknown time in the future of information that might be used for an unknown purpose. This is not meaningful consent.

Meaningful consent: the installers of TYDL

[84] I reach the same conclusion with respect to the users or installers of the apps: these users also did not provide meaningful consent. There are certain differences in the analysis given the contextual and factual differences between the two groups. Centrally, installing users were able to use the GDP process, while friends of users were not. However, an analysis of Facebook’s policies and the installing users’ expectations in light of these policies leads to the same conclusion on meaningful consent.

[85] The starting points are the Terms of Service and the Data Policy. Together they describe the types of user information collected by Facebook, what user information would be public, and how that information would be used. On a literal reading, the user could be understood to have been warned of the risks and to have consented. Whether this translates into meaningful consent is another matter.

[86] Terms that are on their face superficially clear do not necessarily translate into meaningful consent. Apparent clarity can be lost or obscured in the length and miasma of the document and the complexity of its terms. At the length of an Alice Munro short story, the Terms of Service and Data Policy—which Mark Zuckerberg, speaking to a U.S. Senate committee, speculated that few people likely ever read—do not amount to meaningful consent to the disclosures at issue in this case.

[87] The word "consent" has content, and in this case the content is legislatively prescribed. It includes an understanding of the nature, purpose and consequences of the disclosure. In this case, the question that the Federal Court was obligated to ask, therefore, was whether the reasonable person would have understood that in downloading a personality quiz (or any app), they were consenting to the risk that the app would scrape their data and the data of their friends, to be used in a manner contrary to Facebook’s own internal rules (i.e. sold to a corporation to develop metrics to target advertising in advance of the 2016 U.S. election). Had the question been asked of the reasonable person, they could have made an informed decision.

[88] Certain other contextual evidentiary points support this perspective of a reasonable person.

[89] First, the key provisions that Facebook relies on to establish consent are in the Data Policy and not the Terms of Service. Mark Zuckerberg speculated before the U.S. Senate that even Facebook itself may not expect all of its users to have read, let alone understood, the Terms of Service or the Data Policy: he stated that he "imagine[d] that probably most people do not read the whole [policies]". Worse, the consent of the Data Policy itself is obscured by the Terms of Service, as the Data Policy is incorporated by reference into the Terms of Service. By accepting the Terms of Service, the user is deemed to have consented to both. This is not the kind of active, positive and targeted consent contemplated by Principle 3 and section 6.1 of PIPEDA.

[90] Facebook did not warn users that bad actors could, and may likely, gain access to Facebook’s Platform and thus potentially access user data. As will be discussed further below, Mark Zuckerberg admitted in 2018 that it would be "difficult to… guarantee" that no bad actors could ever use Facebook’s Platform. Facebook’s response to this is to position itself as a neutral, passive intermediary; an interlocutor between members of the Facebook community, with no responsibility for what transpires on its platform.

[91] The consequence of viewing Facebook in this light is to diminish, if not efface, Facebook’s responsibilities under PIPEDA. While Facebook did warn users via its Data Policy that third-party apps were "not part of, or controlled by, Facebook", and cautioned users to "always make sure to read [apps’] terms of service and privacy policies to understand how they treat your data", it does not follow that users who read the Data Policy were aware that these third-party apps could be bad actors with intentions to ignore Facebook’s policies or local privacy laws, let alone sell their information to a third party.

[92] Importantly, the reasonable Facebook user would expect Facebook to have in place robust preventative measures to stop bad actors from misrepresenting their own privacy practices and accessing user data under false pretences. Organizations can rely on third-party consent to disclose data, but those organizations must still take reasonable measures to ensure that the consent obtained by the third party is meaningful (Federal Court decision at para. 65). It is difficult to see how Facebook can advance this defence in light of its own evidence that 46% of app developers did not read the pertinent policies since launching their apps.

[93] There was evidence before the Court which informed both the consent and safeguarding obligations. That evidence indicates that during the relevant period Facebook took a hands-off approach to policing the privacy-related conduct of third-party apps using Platform. Facebook did not review the content of third-party apps’ privacy policies as presented to users; Facebook only verified that the hyperlink led to a functioning website.

[94] In response, Facebook describes various types of enforcement systems, both human and automated, that it has in place to protect users’ privacy interests. It also notes that it took 6 million enforcement actions during the relevant period of time. The targets and purposes of these 6 million enforcement actions, their consequences and effectiveness were not disclosed by Facebook. Without more, this number reveals little; it is unknown how many enforcement actions Facebook took against any third-party apps for breaches of Facebook’s privacy policies.

[95] Finally, and tellingly, Facebook failed to act on TYDL’s 2014 request for unnecessary user information. Instead, Facebook allowed the app to continue collecting users’ friends’ data for an additional year (Federal Court decision at para. 43). Requests for unnecessary user information, such as that made by TYDL, were described by Facebook’s affiant as "red flags" for an app’s potential policy violations.

[96] I agree, and note that this begs the question of why Facebook made no further inquiries of TYDL and its privacy practices once this red flag was raised.

[97] These practices, taken together, lead only to the conclusion that Facebook did not adequately inform users of the risks to their data upon signing up to Facebook (risks that materialized in the case of TYDL and Cambridge Analytica). Therefore, meaningful consent was not obtained. As will be discussed below, these same practices and measures—or lack thereof—inform Facebook’s breach of its safeguarding duties.

[98] I conclude by noting that much of Facebook’s argument presumes that users read privacy policies presented to them on signing up to social networking websites. As I mentioned earlier, at the length of a short story, this is a dubious assumption; see, for example, Laurent Crépeau’s critiques of the effectiveness of social networking websites’ data policies in his article "Responding to Deficiencies in the Architecture of Privacy: Co-Regulation as the Path Forward for Data Protection on Social Networking Sites" (2022) 19 Can. J. L. & Tech. 411 at 446:
... consumers are in an extremely unbalanced relationship with [social networking websites]. Rarely are they aware of how their information is collected and used, and they are even less aware of the amount of information. Furthermore, information regarding a firm's data practices has usually been sanitized in documentation provided in help sections and privacy policies or is written with so much imprecision it is impossible to concretely grasp what is, in fact, being described.
[99] I agree. I also note that these comments align with Facebook’s own admissions as to the reach and effectiveness of its consent policies, which, in the context of this case, are admissions against interest. I add to this that many of Facebook’s privacy settings default to disclosure, and that this requires both an understanding on the part of the user as to the risks associated with these default settings and a positive step on the part of the user to vary their settings. Consent requires active, affirmative choice, not choice by default.

[100] Another important part of the context is that these are consumer contracts of adhesion. This places Facebook’s privacy and disclosure clauses in their contractual context. Consumer contracts of adhesion give the consumer no opportunity to negotiate contractual terms. To become a member of Facebook, one must accept all the terms stipulated in the Terms of Service and Data Policy. As Abella J., concurring, observed in Douez v. Facebook, Inc., 2017 SCC 33, [2017] 1 S.C.R. 751 [Douez]: "No bargaining, no choice, no adjustments" (at para. 98).

[101] There is a consequence to this. No negotiation and no bargaining enhances the likelihood of a divergence of expectations in what the contract entails. Again, as Abella J. wrote in Douez at para. 99:
Online contracts such as the one in this case put traditional contract principles to the test. What does “consent” mean when the agreement is said to be made by pressing a computer key? Can it realistically be said that the consumer turned his or her mind to all the terms and gave meaningful consent? In other words, it seems to me that some legal acknowledgment should be given to the automatic nature of the commitments made with this kind of contract, not for the purpose of invalidating the contract itself, but at the very least to intensify the scrutiny for clauses that have the effect of impairing a consumer’s access to possible remedies.
[102] This same heightened scrutiny should apply here, to the clauses in Facebook’s Data Policy that purport to authorize broad future disclosures of data, potentially to bad actors.

[103] Douez admittedly dealt with a different beast: a forum selection clause. There was no way a Facebook user could individually alter their litigation forum rights after signing up to Facebook. This stands in contrast to the inherent malleability of a user’s privacy settings on Facebook. However, as detailed above, it is not clear that any given user signing up to Facebook understood the intricacies of the Data Policy and the potential data disclosures they were agreeing to in the first place. Additionally, I do not suggest that the clauses at issue in this case would become unenforceable due to the fact that they are contained within a consumer contract of adhesion, as was the case in Douez (see majority judgment at paras. 52-57, and Abella J.’s concurring judgment at para. 104). Here, the nature of the contract rather acts as an interpretive prism that limits the effect of the relevant provisions.

[104] David Lie et al. acknowledge the importance of privacy policies in data transparency in their article "Automating Accountability? Privacy Policies, Data Transparency, and the Third-Party Problem" (2022) 72 U. Toronto L.J. 155. However, the authors go on to note that privacy policies "are widely thought to be a failure in relation to improving consumer understanding of data flows", as "[m]ost people do not read them, many find them difficult to understand, and, even if people were to read and understand the policies directly relevant to the services they use, it would take an unreasonable amount of time" (at 157-158).

[105] Lie et al. are also critical of privacy policies’ failure to "provide a clear picture of privacy ‘defaults’", noting that Facebook’s Data Policy itself states "[w]hen you share and communicate using our [Products], you choose the audience [for what you share]". This language does not "help the user… to analyse the initial default settings" (at 165; Data Policy text updated to reflect Facebook’s most recent Data Policy on the record before this Court). Default settings may also "nudge an individual to make a privacy choice that is not consistent with his or her privacy preferences or that raises issues of broader social concern" (at 165). Crépeau also notes that social networking websites are generally designed to induce disclosure of user information, with default settings "aimed towards allowing disclosures of information because people will seldom take the time to change them, let alone become aware that they can be changed" (at 420).

[106] In 2018, Mark Zuckerberg acknowledged before the U.S. Senate that Facebook had failed "the basic responsibility of protecting people’s information", that it had not done enough to "prevent [Facebook’s] tools for being used for harm", and that Mark Zuckerberg himself "imagine[d] that probably most people do not read the whole [Data Policy and Terms of Service of Facebook]". Additionally, Facebook’s Vice President and Chief Privacy Officer announced in a news release in 2018 that the Cambridge Analytica breach "showed how much more work we need to do to enforce our policies and help people understand how Facebook works and the choices they have over their data".

[107] No distinction is made in these admissions between the users of TYDL and their friends.

[108] Had the Federal Court considered all of the factors above, it would have concluded that no user provided meaningful consent to all data disclosures by Facebook in the relevant period.

. Deswal v. ADT LLC (ADT Security Services)

In Deswal v. ADT LLC (ADT Security Services) (Ont CA, 2021) the Court of Appeal considered when a commercial party had a duty to draw specific terms of a contract to a consumer's attention:
[7] In Fraser Jewellers (1982) Ltd. v. Dominion Electric Protection Co. (1997), 1997 CanLII 4452 (ON CA), 34 O.R. (3d) 1 (C.A.), this court emphasized that, in a commercial setting, in the absence of fraud or other improper conduct that induced a plaintiff to enter the alarm services contract, the plaintiff bore the onus to review the contract and satisfy itself of its advantages and disadvantages before signing it. As this court stated at para. 32, “[t]here is no justification for shifting the plaintiff’s responsibility to act with elementary prudence onto the defendant.”

. MacQuarie Equipment Finance Ltd. v. 2326695 Ontario Ltd. (Durham Drug Store)

In MacQuarie Equipment Finance Ltd. v. 2326695 Ontario Ltd. (Durham Drug Store) (Ont CA, 2020) the Court of Appeal considered when specific terms of a contract must be expressly notified by the party advancing the term:
[31] We do not dispute the ability of the contracting parties to agree to such a “no cancellation” provision in an adhesion contract such as the Lease between Macquarie and Durham Drug Store.

[32] Nor do we dispute the binding effect of a party’s assent to a contract’s terms by signing it, whether or not they read the contract with appropriate care or at all. As noted by Professor John D. McCamus in The Law of Contracts, 2nd ed. (Toronto: Irwin Law, 2012), at p. 193: “If an agreement is entered into on the basis of a document proffered by one party and signed by the other, it is clearly established that the agreement between the parties contains the terms expressed in the document, whether or not the signing party has read the documents.”

[33] However, Professor McCamus adds that sometimes, even with a signed agreement, inadequate notice of a particularly unfair term may render that term unenforceable, at p. 194:
In many contractual settings, it will not be expected that a signing party will take time to read the agreement. Even if the document is read, it may well be, especially in the context of consumer transactions, the purport of particular provisions of the agreement will not be understood by the signing party. Under traditional doctrine, then, although the fact of the signature appears to dispense with the notice issue, the opportunities for imposing harsh and oppressive terms on an unsuspecting party are, as a practical matter, as present in the context of signed documents as they are in the context of unsigned documents. Accordingly, it is perhaps not surprising that the recent jurisprudence indicates that notice requirements are migrating into the context of signed agreements.
[34] The leading Ontario case on this point remains this court’s decision in Clendenning. There, Dubin J.A. (as he then was) for the majority refused to enforce a limitation of liability provision in a car rental agreement that purported to exclude the rental company’s liability for a collision where the customer had driven the car after consuming alcohol. Before renting the car, the customer had chosen to pay an additional premium for “collision damage waiver”, which he had been led to understand provided comprehensive insurance for vehicle damage. He signed the rental agreement without reading it.

[35] In finding the exclusion clause unenforceable, Dubin J.A. highlighted that such a rental transaction was typically concluded in a “hurried, informal manner”, and that the liability exclusion provision was “[o]n the back of the contract in particularly small type and so faint in the customer’s copy as to be hardly legible”: at pp. 602, 606. The exclusion clause was also “inconsistent with the over-all purpose for which the transaction is entered into by the hirer”: at p. 606.

[36] In these circumstances, Dubin J.A. concluded that “something more should be done by the party submitting the contract for signature than merely handing it over to be signed” (at p. 606) — namely, reasonable measures must be taken to draw harsh and oppressive terms to the attention of the other party, at p. 609:
In modern commercial practice, many standard form printed documents are signed without being read or understood. In many cases the parties seeking to rely on the terms of the contract know or ought to know that the signature of a party to the contract does not represent the true intention of the signer, and that the party signing is unaware of the stringent and onerous provisions which the standard form contains. Under such circumstances, I am of the opinion that the party seeking to rely on such terms should not be able to do so in the absence of first having taken reasonable measures to draw such terms to the attention of the other party, and, in the absence of such reasonable measures, it is not necessary for the party denying knowledge of such terms to prove either fraud, misrepresentation or non est factum.
[37] In our view, the highly unusual circumstances of this case bring it within the principle in Clendenning. Without suggesting that there was any intention to mislead Ms. Abdulaziz, here, the no-cancellation provision should have been specifically brought to Ms. Abdulaziz’s attention. It should have been explained to her that she would remain obligated to pay for the telemedicine equipment under the Lease even if Medview defaulted on its obligations.




CC0

The author has waived all copyright and related or neighboring rights to this Isthatlegal.ca webpage.




Last modified: 19-09-24