Discuto
Description
Since artificial intelligence (AI) and algorithmic decisions increasingly influence our everyday lives, the European Parliament is currently working on several own-initiative reports – texts requesting that the Commission then present a legislative proposal. I am the rapporteur – in other words, the author of the report – in my Committee on the Internal Market and Consumer Protection. I have already published my draft opinion (see the discussion tab) and will present it in the committee on 18 May 2020 (livestream). The other political groups, as well as I myself, now have until 19 May 2020 to propose amendments to my draft. This is why I call on you to provide me with any comments or suggestions you have on the current text!
For further background information on the report and the subject in general, check out my blog post.
LATEST COMMENTS
I would add that there is a need to model operational accountability and impact assessment so as to deter public and private entities from implementing AI systems frivolously (as in Sweden, where "the Swedish data protection regulator recently banned facial recognition in schools, based on the principle of data minimization, which requires that the minimum amount of sensitive data should be collected to fulfill a defined purpose. Collecting facial data in schools that could be used for a number of unforeseen purposes, it ruled, was neither necessary nor a proportionate way to register attendance.").

I would want to empower the people who may be adversely affected by AI by recommending the mandatory creation of simple, transparent, time-boxed mitigation channels, such that humans can easily request human review of AI decisions that may have been made in error. See the CIGI summary here: "Policies around AI systems must focus on ensuring that those directly impacted have a meaningful say in whether these systems are used at all, and in whose interest."

The report also references: "The Algorithmic Accountability Act of 2019, a proposed bill in the United States Congress, attempts to regulate AI with an accountability framework. This legislation requires companies to evaluate the privacy and security of consumer data as well as the social impact of their technology, and includes specific requirements to assess discrimination, bias, fairness and safety. Similarly, the Canadian Treasury Board's 2019 Directive on Automated Decision-Making requires federal government agencies to conduct an algorithmic impact assessment of any AI tools they use. The assessment process includes ongoing testing and monitoring for 'unintended data biases and other factors that may unfairly impact the outcomes.'"

Source: Centre for International Governance Innovation, https://www.cigionline.org/articles/artificial-intelligence-policies-must-focus-impact-and-accountability
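To make the idea of a time-boxed mitigation channel concrete, here is a minimal Python sketch of what a human-review request could look like. Everything in it – the class names, fields and the 14-day deadline – is a hypothetical illustration, not an existing API or a requirement from any of the cited frameworks.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    # Hypothetical sketch of a time-boxed channel for contesting an AI
    # decision; names and the 14-day deadline are illustrative assumptions.

    @dataclass
    class ReviewRequest:
        decision_id: str                # the contested algorithmic decision
        reason: str                     # the consumer's grounds for contesting it
        submitted_at: datetime = field(default_factory=datetime.utcnow)
        reviewer: Optional[str] = None  # a human with decision-making powers
        outcome: Optional[str] = None   # e.g. "upheld" or "corrected"

        @property
        def deadline(self) -> datetime:
            # Time-boxed: a human must respond within 14 days (assumed limit).
            return self.submitted_at + timedelta(days=14)

        def is_overdue(self) -> bool:
            return self.outcome is None and datetime.utcnow() > self.deadline

    # Usage: a consumer contests an automated credit decision.
    request = ReviewRequest(decision_id="dec-4711", reason="income data was outdated")
    print(request.deadline, request.is_overdue())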
Dear Alexandra, dear team, all your suggestions are highly valuable; thank you very much for this initial report! For my MA thesis, I am currently analysing biometric and emotional AI. A point that struck me was the absence of GDPR data protection for emotional datasets (e.g. in voice assistants, facial recognition technology or sensors) that take the form of pseudonymous aggregate data. However, I am not sure to what extent the ePrivacy Directive would/will tackle this issue.

Below I include a paragraph from my draft (!) thesis to illustrate the point: "The large-scale collection of information on individuals' emotional states is one of the most concerning developments in relation to biometric and emotional AI technology. In fact, privacy protection of individuals is not always granted. Information about emotional states, gathered with biometric technology, can be highly valuable even if individuals cannot be singled out. Aggregate datasets on emotional behaviour, in particular, can be gathered without containing 'personal' or 'sensitive' information. Consider the case of a surveillance camera in a public space: the video material collected by the camera could be analysed by an algorithm in order to 'read' facial expressions and detect people's emotions. This example triggers a range of issues. First, it is not clear whether the data was given by consent. Second, although information about emotions is surely rather personal, the camera cannot link it to an individual, so those emotions are not considered 'personal' data. Third, if a certain person were regularly filmed and their emotions tracked frequently, their safety, consumer and/or fundamental rights would be at stake. The tracking of emotions through facial recognition is thus a particularly critical use case of AI technology, based on large-scale datasets and algorithmic infrastructures in combination with biometric technological artefacts."

The main idea/source is Andrew McStay's book "Emotional AI: The Rise of Empathic Media" (2018). If you have any questions, don't hesitate to contact me. Wishing you the best of success with the report! All the best, Rosanna
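Rosanna's surveillance-camera example can be made concrete with a short Python sketch of the aggregation pattern she describes: per-face emotion labels are folded into plain counts, so the stored dataset identifies nobody and yet captures everyone's emotional behaviour. The detect_emotion function is a hypothetical stand-in for any facial-expression classifier.

    from collections import Counter

    def detect_emotion(face_crop) -> str:
        # Hypothetical placeholder: a real system would run a facial-expression
        # model here; we return a fixed label so the sketch executes.
        return "neutral"

    def aggregate_emotions(face_crops) -> Counter:
        # Only the emotion label is kept; the face itself, and any link to an
        # individual, is discarded. The result singles out no one, which is
        # exactly the protection gap described above.
        counts = Counter()
        for crop in face_crops:
            counts[detect_emotion(crop)] += 1
        return counts

    print(aggregate_emotions(["frame1-face1", "frame1-face2", "frame2-face1"]))
    # Counter({'neutral': 3})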
P1
DRAFT OPINION
with recommendations to the Commission on the framework of ethical aspects of artificial intelligence, robotics and related technologies
SUGGESTIONS
The Committee on the Internal Market and Consumer Protection calls on the Committee on Legal Affairs, as the committee responsible:
– to incorporate the following suggestions into its motion for a resolution:
P3
1. Underlines the importance of an EU regulatory framework being applicable where consumers within the Union are users of or subject to an algorithmic system, irrespective of the place of establishment of the entities that develop, sell or employ the system;
P4
2. Notes that the framework should apply to algorithmic systems, including the fields of artificial intelligence, machine learning, deep learning, automated decision-making processes and robotics;
P5
3. Stresses that any future regulation should follow a differentiated risk-based approach, based on the potential harm for the individual as well as for society at large, taking into account the specific use context of the algorithmic system; legal obligations should gradually increase with the identified risk level; in the lowest risk category there should be no additional legal obligations; algorithmic systems that may harm an individual, impact an individual’s access to resources, or concern their participation in society shall not be deemed to be in the lowest risk category; this risk-based approach should follow clear and transparent rules;
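As an illustration of how such a differentiated, risk-based approach could be operationalised, here is a minimal Python sketch. The four tiers, the example obligations and the classification rules are assumptions made for illustration; the draft does not prescribe any particular number of categories.

    from enum import IntEnum

    class RiskLevel(IntEnum):
        # Illustrative tiers; the draft does not fix their number or names.
        MINIMAL = 0      # lowest category: no additional legal obligations
        LOW = 1
        HIGH = 2
        VERY_HIGH = 3

    # Legal obligations gradually increase with the identified risk level
    # (the obligations listed are examples, not requirements from the draft).
    OBLIGATIONS = {
        RiskLevel.MINIMAL:   [],
        RiskLevel.LOW:       ["transparency notice"],
        RiskLevel.HIGH:      ["transparency notice", "impact assessment"],
        RiskLevel.VERY_HIGH: ["transparency notice", "impact assessment",
                              "human oversight", "ex-ante conformity check"],
    }

    def classify(may_harm_individual: bool,
                 affects_access_to_resources: bool,
                 affects_participation_in_society: bool) -> RiskLevel:
        # Per paragraph 3, a system with any of these properties can never
        # fall into the lowest risk category.
        if may_harm_individual:
            return RiskLevel.VERY_HIGH
        if affects_access_to_resources or affects_participation_in_society:
            return RiskLevel.HIGH
        return RiskLevel.MINIMAL

    level = classify(False, True, False)
    print(level.name, OBLIGATIONS[level])
    # HIGH ['transparency notice', 'impact assessment']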
P7
4. Underlines the importance of an ethical and regulatory framework including, in particular, provisions on the quality of data sets used in algorithmic systems, especially regarding the representativeness of training data used, on the de-biasing of data sets, as well as on the algorithms themselves, and on data and aggregation standards;
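One concrete form a provision on the representativeness of training data could take is an automated check that compares the make-up of a training set against a reference population, as in this minimal Python sketch. The group labels and the 20% relative tolerance are illustrative assumptions, not values taken from the draft.

    from collections import Counter

    def representativeness_gaps(train_labels, reference_shares, tolerance=0.20):
        # Flag every demographic group whose share in the training set
        # deviates from its share in the reference population by more than
        # the given relative tolerance (20% is an assumed threshold).
        n = len(train_labels)
        train_shares = {g: c / n for g, c in Counter(train_labels).items()}
        gaps = {}
        for group, reference in reference_shares.items():
            share = train_shares.get(group, 0.0)
            if abs(share - reference) > tolerance * reference:
                gaps[group] = {"in_training_set": share, "in_population": reference}
        return gaps

    # Usage: group "b" is 30% of the population but only 10% of the sample,
    # so "a" is flagged as over-represented and "b" as under-represented.
    print(representativeness_gaps(["a"] * 90 + ["b"] * 10, {"a": 0.7, "b": 0.3}))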
P9
5. Believes that consumers should be adequately informed in a timely, impartial, easily readable, standardised and accessible manner about the existence, process, rationale, reasoning and possible outcome of algorithmic systems, about how to reach a human with decision-making powers, and about how the system’s decisions can be checked, meaningfully contested and corrected;
P10
6. Recalls the importance of ensuring the availability of effective remedies for consumers and calls on the Member States to ensure that accessible, affordable, independent and effective procedures are available to guarantee an impartial review of all claims of violations of consumer rights through the use of algorithmic systems, whether stemming from public or private sector actors;
P11
7. Stresses that where public money contributes to the development or implementation of an algorithmic system, the code, the generated data – as far as it is non-personal – and the trained model should be public by default, in order, among other goals, to enable transparency and reuse, to maximise the achievement of the Single Market and to avoid market fragmentation;
P13
8. Underlines the importance of ensuring that the interests of marginalised and vulnerable consumers and groups are adequately taken into account and represented in any future regulatory framework; notes that, for the purpose of analysing the impacts of algorithmic systems on consumers, access to data should be extended to appropriate parties, notably independent researchers, media and civil society organisations, while fully respecting Union data protection and privacy law; recalls the importance of training consumers and giving them basic skills for dealing with algorithmic systems, in order to protect them from potential risks and detriment to their rights;
P14
9. Underlines the importance of training highly skilled professionals in this area and ensuring the mutual recognition of such qualifications across the Union;
P16
10. Calls for the Union to establish a European market surveillance structure for algorithmic systems, issuing guidance, opinions and expertise to Member States’ authorities;
P17
11. Notes that it is essential for the software documentation, the algorithms and the data sets used to be fully accessible to market surveillance authorities, while respecting Union law; invites the Commission to assess whether additional prerogatives should be given to market surveillance authorities in this respect;
P18
12. Calls for the designation by each Member State of a competent national authority for monitoring the application of the provisions;
P19
13. Calls for the establishment of a European market surveillance board for algorithmic systems, to ensure a level playing field and to avoid fragmentation of the internal market; the board should decide by qualified majority and by secret vote in the event of differing decisions on algorithmic systems used in more than one Member State, as well as at the request of a majority of the national authorities;
P20
– to incorporate the following recommendations into the annex to its motion for a resolution: