Privacy Policy

Your Privacy, Our Priority

A.I. Data Privacy Principles
Executive summary: The Global Data Privacy Office (GDPO) created these A.I. Data Privacy Principles to help Publicis Groupe agencies understand the risks and responsibilities connected to the processing of personal data in A.I. systems developed and/or used within the company. Taking these principles into account will help Publicis agencies comply with applicable data privacy laws.

The data privacy principles agencies must consider depend on the role they have in the A.I. lifecycle. An agency that develops an A.I. system needs to consider different rules and principles than one that only uses an A.I. system. The applicable rules and principles also depend on whether Publicis is processing personal data in an A.I. system for its own purposes or does so on the explicit instructions of a client.

To differentiate between these circumstances, this document distinguishes four roles an agency may have:

1. Controlling Provider of an A.I. system;
2. Controlling Deployer of an A.I. system;
3. Processing Provider of an A.I. system; and
4. Processing Deployer of an A.I. system.

Where the agency acts in the capacity of a Controlling Provider, it is responsible for complying with the data privacy principles for processing personal data during the development, testing, and monitoring phases of the A.I. system.

Where the agency acts in the capacity of a Controlling Deployer, it is responsible for complying with the data privacy principles when processing personal data during the use of the A.I. system. During that use, the Controlling Deployer must apply those principles during the vendor due diligence process and to both the personal data it puts into the A.I. system and the data that is generated by the system.

Where the agency develops or uses an A.I. system on behalf of and in accordance with the instructions of the client, acting respectively as a Processing Provider or Processing Deployer, it is not responsible for complying with the data privacy principles; the client is.

The A.I. Data Privacy Principles are: Legal Basis, Purpose Limitation, Accountability, Data Minimization, Accuracy, Fairness, Retention, Transparency/Data Subjects' Rights, Confidentiality, and Security.

Introduction

In recent years the use of Artificial Intelligence ("A.I.") has grown exponentially. Within Publicis Groupe ("Publicis") this is no different. Many A.I. systems our agencies use process personal data; for example, to train these systems or to generate new products or services.

The use of A.I. systems can have many benefits, but it also carries risks. To limit the risks from a data privacy perspective, it is crucial to adopt a series of A.I. Data Privacy Principles ("the Principles"). These Principles help guarantee safe and compliant processing of personal data in the A.I. systems used and developed within Publicis.

The Principles aim to:
• minimize the risks associated with personal data processing in every stage of the A.I. lifecycle;
• help demonstrate compliance with applicable data privacy law; and
• ensure alignment of applicable rules for safe use of A.I. within the group from a data protection perspective.

This document describes Publicis Groupe's internal A.I. Data Privacy Principles and is shared with Heineken for transparency and reassurance purposes.

Where Publicis agencies develop and/or use Artificial Intelligence systems solely on behalf of Heineken and strictly in accordance with Heineken's documented instructions, Publicis acts exclusively as a data processor (Processing Provider and/or Processing Deployer, as defined in this document). In these circumstances:

• Heineken remains the data controller and retains full ownership, control, and responsibility for the personal data processed.
• Publicis does not determine independent purposes or means of processing and will not use Heineken personal data for its own purposes.
• Heineken personal data will not be reused, retained, or used to train or improve A.I. systems beyond what is expressly agreed and instructed by Heineken.
• All personal data is processed confidentially, securely, and on a strict need-to-know basis, in line with applicable data protection laws and Publicis Groupe Global Data Privacy & Security Policies.
• Any third-party A.I. tools or vendors involved are subject to appropriate due diligence and contractual safeguards, where required.

Nothing in these Principles:
• transfers data controller responsibilities from Heineken to Publicis;
• authorises Publicis to process personal data beyond Heineken's instructions; or
• alters the contractual allocation of roles, responsibilities, or liability agreed between Heineken and Publicis.

This assurance is intended to confirm that Publicis' internal A.I. governance framework is designed to protect client data, preserve client control, and prevent unintended data reuse.

Uncontrolled if printed. Classification: Restricted.

Applicability

The Principles are applicable when A.I. systems process personal data. The Principles are meant to help Publicis agencies comply with data privacy rules. The Principles do not cover compliance with any other area of law, like intellectual property, trade secrets, product safety, etc.

Definitions

A.I. system - means a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different A.I. systems vary in their levels of autonomy and adaptiveness after deployment.

Deployer - means the entity under whose authority an A.I. system is used.

Input Data - means data provided to or directly acquired by an A.I. system based on which the system produces an output.

Output Data - means new data an A.I. system creates or synthesizes based on input data and the A.I.'s algorithm.

Personal Data - means any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

Provider - means the entity that develops an A.I. system, or that has an A.I. system developed and places that system on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.

Support

If there are questions about the Principles or legal support is required to ensure compliance with the Principles, please reach out to the Global Data Privacy Office.

Role Determination

The principles agencies must consider depend on the role they have in the A.I. lifecycle. An agency that develops an A.I. system needs to consider different rules than one that uses an A.I. system. The applicable rules also depend on whether Publicis is processing personal data in an A.I. system for its own purposes or does so on the explicit instructions of a client.

We differentiate between four scenarios:

1. Publicis as Controlling Provider (see section 1): Publicis develops an A.I. system for its own purposes, and that A.I. system processes personal data.
2. Publicis as Controlling Deployer (see section 2): Publicis uses an A.I. system for its own purposes, and that use involves the processing of personal data.
3. Publicis as Processing Provider (see section 3): Publicis develops an A.I. system on behalf of and in accordance with the instructions of the client, and that A.I. system processes personal data.
4. Publicis as Processing Deployer (see section 3): Publicis uses an A.I. system on behalf of and in accordance with the instructions of the client, and that use involves the processing of personal data.

The Principles

1. Publicis as Controlling Provider

The agency that develops an A.I. system for its own purposes is considered a Controlling Provider of an A.I. system. Controlling Providers are responsible for the processing of personal data in their A.I. system. Controlling Providers of A.I. systems can process personal data for different purposes, like development, training, testing, security, Input Data offering, legal and/or monitoring purposes ("Developer Activities").

The agency acting as a Controlling Provider needs to consider the following Principles:

1.1 Legal Basis

Where required by applicable data privacy laws, like the GDPR or LGPD, the agency as Provider of the A.I. system must ensure a legal basis, like consent or legitimate interest, for each of the Developer Activities.

If consent is chosen as the legal basis, the agency as Provider must ensure the A.I. system has the capability to respond properly to the withdrawal of data subjects' consent. This requires an (automatic) review process of the withdrawal (data subject identification, extent of the withdrawal, etc.) and the personal data of that data subject to be (partially) erased by the A.I. system following a legitimate withdrawal.

If legitimate interest is chosen, a legitimate interest assessment (LIA) needs to be conducted before the personal data is processed in the A.I. system. In this LIA the agency's interest in conducting the Developer Activities needs to be weighed against the fundamental rights, freedoms, and interests of the individuals whose data is being processed.
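The withdrawal review process described above can be sketched in code. The sketch below is a minimal, hypothetical illustration only: the record layout, purpose names, and in-memory store are invented for the example and do not describe any actual Publicis system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentWithdrawal:
    subject_id: str     # identifier used to locate the data subject's records
    purposes: set       # the Developer Activities the withdrawal covers (its extent)
    received_at: datetime

@dataclass
class TrainingRecord:
    subject_id: str
    purpose: str
    payload: dict

class TrainingDataStore:
    """Toy in-memory store standing in for the A.I. system's data layer."""
    def __init__(self):
        self.records = []

    def erase_for_subject(self, subject_id, purposes):
        """Erase the subject's personal data for the withdrawn purposes only."""
        before = len(self.records)
        self.records = [
            r for r in self.records
            if not (r.subject_id == subject_id and r.purpose in purposes)
        ]
        return before - len(self.records)

def process_withdrawal(store, w):
    # 1. Identify the data subject (here: by a stable internal identifier).
    # 2. Determine the extent of the withdrawal (which purposes it covers).
    # 3. (Partially) erase the affected personal data.
    erased = store.erase_for_subject(w.subject_id, w.purposes)
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"erased {erased} record(s) for subject {w.subject_id}")
    return erased
```

Note that a withdrawal only removes data for the withdrawn purposes: data held for other purposes with their own legal basis is untouched, which mirrors the "(partially) be erased" wording above.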

1.2 Purpose Limitation

Before processing personal data in an A.I. system, the agency acting as Controlling Provider needs to define, register, and communicate the specific purposes for each of the Developer Activities. The purposes need to be communicated to the data subjects in clear and plain language, ensuring the target audience understands why their data is being processed. The agency also needs to align the functioning of the A.I. system with these purposes to avoid personal data being processed for any other purposes.

If additional purposes for processing the personal data are identified, these need to be compatible with and proportionate to the original purpose. It is essential to promptly inform data subjects about these new purposes and, if required by law, obtain consent from the data subject to use their personal data for the new purpose.

This approach ensures that we only process personal data for purposes that are transparent and well-defined, reflecting our commitment to responsible data processing.
1.3 Accountability

Accountability means that the agency acting as Controlling Provider is responsible for the compliance of the processing of personal data by its A.I. system with data privacy law, and that the agency should be able to demonstrate such compliance. To help demonstrate this compliance the agency should:

• Conduct a Data Protection Impact Assessment/Privacy Impact Assessment (DPIA/PIA).
• Implement privacy by design and by default.
• Implement Publicis' Privacy Policies (like these Principles and the Global Data Privacy & Security Policies).
• Enter into Data Processing Agreements with A.I. system vendors.
• Regularly audit the A.I. system.
• Register and (if necessary) report data breaches suffered by the A.I. system.
• Register the data processing for the Developer Activities in the Data Inventory.

1.4 Data Minimization

The personal data processed during the Developer Activities must be appropriate, relevant, and limited to what is necessary for the defined purposes. This does not mean limiting the data to the greatest extent possible; rather, it means that the agency needs to consider the right amount of data for the defined purpose.

To properly minimize the personal data processed in the A.I. system the agency should:

• Evaluate the quality, accuracy, source, and quantity of personal data used in the A.I. system. This includes minimizing any unnecessary, redundant, or marginal data.
• Consider the use of synthetic or anonymous data. This is a viable strategy to reduce the volume of personal data processed by A.I. systems, enhancing both efficiency and data privacy.
• Implement pseudonymization or encryption methods to safeguard the identity of data subjects, ensuring minimal impact on their privacy and data protection rights. This approach effectively balances data utility with privacy concerns.
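As a concrete illustration of the pseudonymization point above, direct identifiers can be replaced with keyed hashes before data enters the A.I. system. This is a hedged sketch, not a prescribed method: the field names and key handling are assumptions for the example, and in practice the key would come from a secrets manager, not source code.

```python
import hmac
import hashlib

# Hypothetical key; it must be stored separately from the pseudonymized
# dataset so that re-identification stays under the agency's control.
SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymize(identifier, key=SECRET_KEY):
    """Replace a direct identifier with a stable keyed hash (pseudonym)."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_training_row(row):
    """Strip or pseudonymize direct identifiers before the row enters the A.I. system."""
    out = dict(row)
    out["email"] = pseudonymize(row["email"])  # same input always yields the same pseudonym
    out.pop("name", None)                      # drop fields the purpose does not need at all
    return out
```

Because the same identifier always maps to the same pseudonym, records belonging to one data subject can still be linked (e.g. for later erasure), while the raw identifier never reaches the training data.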

1.5 Accuracy

The personal data processed by the A.I. system should be up to date and accurate. In the design phase of the A.I. system the agency will have to ensure that the A.I. system has a built-in mechanism for ensuring the personal data processed for the Developer Activities stays up to date and accurate. Part of this mechanism should be allowing data subjects access to their personal data in the A.I. system and the possibility of correcting that data on request.

1.6 Fairness

A.I. systems often suffer from biases due to historical data, incomplete datasets, or poor governance models. Such biases can lead to direct or indirect discrimination. To mitigate this, biases should be identified and removed during the development, training, and after-market testing phases of the A.I. system.

1.7 Retention

The personal data processed in the A.I. system should not be processed longer than necessary for the defined purposes. It is therefore essential to define the retention period for each specific purpose and clearly communicate the retention periods to the data subjects.

In the design phase of the A.I. system the agency should ensure that the A.I. system has a built-in mechanism for ensuring the personal data processed for the Developer Activities is not stored longer than necessary for the relevant purpose. The mechanism should also provide the option, where possible, to have the data deleted on request of the data subject. The agency should also periodically review the data it holds and erase or anonymise it when it is no longer needed.
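A built-in retention mechanism of the kind described above might take the shape of a periodic purge job keyed on purpose. The retention schedule and record layout below are hypothetical values chosen for the example, not periods this policy prescribes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention schedule: days each purpose may keep personal data.
RETENTION_DAYS = {"training": 365, "testing": 90, "monitoring": 30}

@dataclass
class StoredRecord:
    subject_id: str
    purpose: str
    stored_at: datetime

def purge_expired(records, now: Optional[datetime] = None):
    """Return only the records still within their purpose's retention period.

    Records whose purpose has no defined retention period are purged,
    which keeps the default behaviour conservative.
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for r in records:
        limit = timedelta(days=RETENTION_DAYS.get(r.purpose, 0))
        if now - r.stored_at <= limit:
            kept.append(r)
    return kept
```

Running such a job on a schedule (and on data subject deletion requests) gives the periodic review the section calls for a concrete, auditable form.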

1.8 Transparency

Transparent processing is about being clear, open, and honest with data subjects from the start about who the agency acting as Controlling Provider is, how and why its A.I. system processes their personal data, and what rights the data subjects have. This information should be concise, transparent, understandable, and easily accessible. In practice, this transparency is usually provided in a Privacy Notice that should be offered to the data subject either before or at the moment their personal data is collected/processed.

Data Subjects’ Rights
The agency needs to inform data subjects about their rights. The agency should also ensure that its A.I. system contains a mechanism that ensures these rights can be honoured. These rights include access, rectification, erasure, restriction, portability, and objection.
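One way such a mechanism can be structured is a dispatcher that routes each incoming request to the routine that honours the corresponding right. The class and handler names below are illustrative stubs invented for this sketch; a real system would call the A.I. system's actual export, erasure, and correction routines.

```python
from typing import Callable, Dict

class SubjectRightsDispatcher:
    """Route a data subject's request to the registered mechanism for that right."""

    def __init__(self):
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, right, handler):
        """Attach the routine that honours one right (access, erasure, ...)."""
        self._handlers[right] = handler

    def handle(self, right, subject_id):
        """Execute the mechanism for the requested right, or fail loudly."""
        if right not in self._handlers:
            raise ValueError(f"no mechanism registered for right: {right}")
        return self._handlers[right](subject_id)

# Illustrative wiring; each lambda stands in for a real routine in the A.I. system.
dispatcher = SubjectRightsDispatcher()
dispatcher.register("access", lambda s: f"compiled data export for {s}")
dispatcher.register("erasure", lambda s: f"erased personal data of {s}")
dispatcher.register("rectification", lambda s: f"corrected personal data of {s}")
```

Failing loudly on an unregistered right makes gaps in coverage visible early: if the system cannot yet honour, say, portability, the request surfaces as an error rather than being silently dropped.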

1.9 Confidentiality

The agency must implement effective controls and mechanisms to ensure that everyone involved in the processing of personal data during the Developer Activities only has access on a need-to-know basis and respects the confidentiality of such data.

1.10 Security

A.I. systems need to be robust, secure, and safe at every stage of their lifecycle. To achieve this, the agency acting as Controlling Provider should ensure the data security level of the A.I. system it develops is in line with Publicis' Global Security Policies and industry standards. The Global Security Office (GSO) should review the A.I. system from a data security perspective prior to its launch.

2. Publicis as Controlling Deployer

The agency that uses an A.I. system (either provided by Publicis or by a vendor) for its own purposes is considered a Controlling Deployer of an A.I. system. Controlling Deployers are responsible for the compliance of the personal data they determine to process in the A.I. systems they use. The personal data processed during the use of an A.I. system can, for example, be part of Input Data as well as Output Data.

Ensuring compliance during the deployment or use phase of the A.I. system has two components:

1. Ensuring that the A.I. system the agency uses, either provided by Publicis or by a vendor, allows the agency to comply with the Principles; and
2. Ensuring that the personal data the agency determines to process in the A.I. system, for example as Input and Output Data, complies with the Principles.

For the first component the agency acting as Controlling Deployer needs to assess whether the (use of an) A.I. system:

• Requires a DPIA/PIA; for example, if the A.I. system is processing large amounts of personal data (Accountability).
• Requires the agency to enter into a Data Processing Agreement; for example, with the third-party provider of the A.I. system (Accountability).
• Requires Vendor Due Diligence as outlined in the Data Handling Policy; for example, of the third-party provider of the A.I. system (Accountability).
• Is in line with Publicis' Privacy Policies (Accountability).
• Allows the agency to conduct audits on the A.I. system (Accountability).
• Needs to be registered in the Data Inventory (Accountability).
• Does not process more personal data than is required for the defined purposes (Data Minimization).
• Does not process the personal data the agency determined to process in the A.I. system for purposes other than those defined by the agency prior to the use of the A.I. system (Purpose Limitation).
• Contains mechanisms to prevent bias (Fairness).
• Does not store the personal data the agency determined to process in the A.I. system for a period longer than required for obtaining the defined purposes (Retention).
• Provides enough (access to) information required for the agency to comply with its transparency requirements (Transparency).
• Allows the agency to honour Data Subjects' Rights.
• Has effective controls and mechanisms to ensure that everyone involved in the processing of personal data in the A.I. system only has access on a need-to-know basis and respects the confidentiality of such data (Confidentiality).
• Has a GSO-approved Data Security program in place (Security).

For the second component the agency needs to assess whether the personal data it determines to process in the A.I. system:

• Has a legal basis for every defined purpose it is processed for (Legal Basis).
• Requires a DPIA/PIA; for example, where the personal data is sensitive (Accountability).
• Requires the agency to enter into a Data Processing Agreement; for example, with the third-party provider of the personal data (Accountability).
• Is in line with Publicis' Privacy Policies (Accountability).
• Does not go beyond the amount of personal data required for the defined purpose (Data Minimization).
• Was obtained/collected in a fair and legal way; for example, not scraped from the internet without properly informing the data subjects (Fairness and Transparency).
• Is only processed by Publicis employees confidentially and on a need-to-know basis (Confidentiality).

3. Publicis as Processing Provider and as Processing Deployer

The agency that develops an A.I. system on behalf of and in accordance with the instructions of the client, where that A.I. system processes personal data, is considered a Processing Provider of an A.I. system. The agency that uses an A.I. system on behalf of and in accordance with the instructions of the client, where that use involves the processing of personal data, is considered a Processing Deployer of an A.I. system.

In both scenarios mentioned above the agency is not the primary responsible party for the personal data processed in the A.I. system; the client is.

If the agency is either a Processing Provider or a Processing Deployer, it is crucial to enter into an agreement with the client. This agreement should clearly indicate, among other things, that:

• the agency is processing personal data in the A.I. system on behalf of and on the instruction of the client; and
• the client is responsible for ensuring the processing of personal data complies with applicable data privacy law.

***