Smart cities, artificial intelligence and the triangular nature of legitimate interest

  1. Introduction

Cities lie at the foundation of Western civilisation. From the Greek poleis to the Roman urbes, from medieval communes to the capitals and metropolitan areas of the modern world, cities have always been catalysts and incubators of technological, societal and political development, thanks to their unique ability to bring together people, ideas and cultures.

We should not expect anything different from the future. Development and innovation will flourish in and through cities. That is why promoting the growth and functionality of cities is in the general interest.

However, here is the problem: our cities are sick. Pollution and waste, traffic, crime and inequality[1], and the misallocation of social and welfare services affect their (and our) lives and prospects, jeopardising the viability of a sound urban model, fostering disease among the population, and obstructing that essential mobility of people and ideas on which their success depends.

We have powerful allies in this uphill struggle, though. AI systems and Big Data analytics can discover hidden patterns and correlations[2] and thereby formulate accurate predictions that even the cleverest human officer could not. We unmistakably need that help to manage ecosystems as complex as today’s cities, to prevent emergencies[3] and to allocate energy and resources efficiently.

Still, many regard AI, the IoT and Big Data with suspicion, and legislation and decisions by data protection authorities (at both national and EU level) seem to reflect that attitude.

Indeed, several problems are on the table[4], and one cannot omit to mention them as serious legal issues: lack of transparency; the potential migration of data to countries with inadequate standards of protection; the profiling and clustering of individuals and groups; increased risks of exploitation and abuse. Looming in the background is the possibility that private players may govern, influence and direct (in their own interest or on behalf of third parties) our personal and political liberties. Furthermore, there is the risk that algorithms act far from objectively, blurred and side-tracked by biases rooted in the inner and latent prejudices of their creators and developers[5] or caused by the inaccuracy of their data sets.

In light of all the foregoing, many have argued for a conservative approach that surrounds AI with limits and conditions, restrictions and controls. Yet that does not seem the right path.

While one cannot but recognise the grounds for the concerns expressed by many data protection authorities[6], I am convinced that the circulation and use of data need to be maximised, not minimised. A more thorough and freer exploitation of new technologies will not worsen the problems of our communities; it will be part of the solutions.

Indeed, discrimination, inequality and prejudice to civil and political freedoms (i.e., all the risks raised in the public discussion on AI) are not outcomes that necessarily spring from the use of AI and Big Data, such that refraining from AI, or firmly limiting it, would suffice to avoid them.

All those elements of unfairness and injustice already mark our life in today’s non-smart cities. Hence, avoiding or minimising AI and Big Data analytics will not keep us safe; it will only leave us as far from justice and equity as we already stand[7].

In fact, two points seem striking. First, the public debate on AI lacks a metric for benchmarking pros and cons. Risks are evoked, and those risks are well grounded; yet there is no economic analysis or metric to establish whether those risks are higher or lower in an AI-driven society than in our current one.

The second is that biases and prejudices are not AI-originated. They do not stem from the use of machines and technology. On the contrary, biases are inherent in individuals, and it is through humans that they contaminate AI.

If we avoid AI, human biases will unfortunately remain untouched in the minds and actions of humans, including decision-makers, judges and authorities.

Only, whereas in an algorithm those biases can be detected rather objectively and then removed, biases and prejudices in individuals remain far better hidden.
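
To make the point concrete: an algorithm’s decisions can be audited mechanically and at scale. Below is a minimal, purely illustrative sketch in Python (the group labels, data and function names are hypothetical; the 0.8 threshold echoes the well-known “four-fifths” rule of thumb). The same audit can be re-run on every batch of automated decisions, whereas no comparable test exists for the inner convictions of a human officer.

```python
# Minimal sketch: auditing a batch of algorithmic decisions for group-level
# disparities. All names and figures are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-treated group's rate (a simple, repeatable bias test)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical batch of automated decisions, one pair per citizen.
batch = [("uptown", True), ("uptown", True), ("downtown", True),
         ("downtown", False), ("periphery", False), ("periphery", False)]
print(disparate_impact(batch))  # {'downtown': 0.5, 'periphery': 0.0}
```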

Ultimately, citizens have the right to move, to receive medical treatment, to avoid diseases caused by pollution, to live decently, to work and to do business; and those are fundamental rights no less than the right to be free from third-party intrusions into private life and personal data.

That is why I hold the view that legitimate interest is triangular. To the interests of data subjects and those of data controllers we should always add the general interest of all those who live in a community: that is, the interest that data be collected, communicated and used so that the city and its citizens may thrive.

  2. Legal limitations

There is a wide literature on the clash between Big Data analytics and data protection legislation[8].

As mentioned above[9], AI can work only if it is nourished and trained with large data sets. That is per se difficult to reconcile with data minimisation (art. 5, para. 1, lett. c), of the General Data Protection Regulation (EU) 2016/679; hereinafter “GDPR”).

Moreover, manifest problems of transparency arise (art. 5, para. 1, lett. a), of the GDPR). The whole mechanics of the GDPR, with its detailed privacy notices to data subjects (artt. 12, 13 and 14 of the GDPR), hardly works where Big Data players have to collect millions of data items from a number of different sources, including through internet scraping.

Other issues then emerge when one considers the purpose limitation rule (art. 5, para. 1, lett. b), of the GDPR), given that the final purposes of data analytics are often indefinite at the stage of assembling and analysing the data. The uses and utility of the analysis surface only after the exercise, not before.

Again, additional questions are triggered considering that data controllers must identify a legal basis for their processing[10]. In that regard:

  • Consent (art. 7 of the GDPR) cannot work when you process the data of millions of data subjects, especially because consent can always be withdrawn (art. 7, para. 3, of the GDPR).
  • Legitimate interest (art. 6, para. 1, lett. f), of the GDPR) does not suffice for data belonging to special categories (e.g., health data) (art. 9 of the GDPR) and cannot be invoked by public authorities (and concessionaires) in the performance of their tasks (art. 6, para. 1, of the GDPR).

Hence, in the end, the possibility to collect and process Big Data for smart-city management seems to rest with the legal basis of public interest, which, however, is rather narrowly defined in EU legislation[11].

On top of that, several other limitations exist. To mention only some:

  • First, when you process data, cybersecurity obligations emerge (the Digital Operational Resilience Act, Regulation (EU) 2022/2554; Directive (EU) 2022/2555; Directive (EU) 2022/2557).
  • Secondly, the case law of the Italian Council of State has repeatedly held that the use of AI tools in decisions by public authorities is subject to at least three requirements: the algorithm must be knowable and, more than that, explainable[12]; the human officer’s judgement must remain central; and the principle of non-discrimination must be fully respected.
  • Thirdly, Member States may keep or introduce special conditions for the processing of biometric data (art. 9, para. 4, of the GDPR), which of course are a core part of many video surveillance systems.

More than that, the rules on fully automated decision-making are very strict (art. 22 of the GDPR). In practice, automated decisions are confined to three specific cases, i.e., where the fully automated decision-making is:

  • either necessary for entering into, or performing, a contract (which may apply in certain instances, but not to the majority of automated decisions by a municipality); or
  • based on the data subject’s explicit consent (always revocable) (which clearly does not work when you process the data of all the citizens of a municipality); or, third and final case,
  • authorised by law, with specific safeguards provided.

When special categories of data are concerned, the lawfulness of processing is even narrower and subsists only in cases of substantial public interest or explicit consent, topped by suitable safeguards.

In any case, in the information notice the data controller should provide meaningful information on the logic involved and the envisaged consequences of the related data processing (art. 13, para. 2, lett. f), of the GDPR): an exercise that is almost impossible when, in a data analytics context, the result and consequences of the analysis are not foreseeable until the analysis is done and the processing is completed.

Now, the combination of all the above leaves very limited room to use AI to govern smart cities. Indeed, the Italian data protection authority has recently opened an investigation into the Municipalities of Arezzo and Lecce[13], which had announced their intention to use technologies for facial recognition and the detection of infringements.

Furthermore, data protection authorities embrace a very strict interpretation of the “necessity” and “proportionality” tests[14], such that processing is never “necessary” if there is at least one other way to attain the purpose without processing personal data.

Even less breathing room is left for AI if one holds the view that data subjects have a right to obtain an explanation of automated decisions (recital 71 of the GDPR). Indeed, by definition, AI is employed to do things that humans cannot do by themselves, through a process that infers knowledge not from syllogisms but from hidden patterns invisible to humans.

  3. A different view

All in all, a path forward needs to be found that ensures protection from data abuses while allowing data use. Several points are worth investigating in order to find it.

First, the interests of all those who live in the city should be factored into the interpretation of data protection law. Efficiency is an integral part of rights: rights need to be actual and effective, not merely formal. As a result, whatever increases efficiency in the allocation of welfare and public services fosters civil rights. If AI and Big Data analysis increase efficiency in the working of cities[15], that is part of citizens’ right to a better and safer life.

That general interest is not irrelevant, and it should be weighed in a triangular analysis that considers the data controller and the data subject, but also the community of data users. The interest of citizens in the sharing, assembling and exploitation of data for the benefit of the whole city is no different from their interest in breathing air or using water.

Data could be regarded as a natural resource, and the right paradigm should not be ownership. Instead, we could consider that there are concurrent legal and equitable interests in data processing, such that the legal question is not to identify an “owner” (who has the right to exclude all others) but to determine, case by case, which title to the data should prevail over the others in a specific instance. In that conceptual framework, the notion of “legitimate interest” would extend to all data processing and would be vital to identifying the superior title which, in that very case and those circumstances, deserves protection[16].

That assessment will necessarily consider that the necessity and proportionality of data processing are dynamic features. In other words, whether data processing is necessary and proportionate depends not only on the state of the art of technological tools (art. 25 of the GDPR) but also on the concretely viable alternatives, which need to be appraised in light of their different efficiency, speed and costs and, again, with a forward-looking spirit (as is inevitable when you manage a city), not merely in view of current circumstances.

Legitimate interest assessments (LIAs) or data protection impact assessments (DPIAs) may serve that exercise and should be informed by the triangle of interests involved in the data processing, taking into account all pros and cons (including those of citizens).
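
By way of illustration only, the triangular balancing could be sketched as follows (a minimal sketch in Python; the factors, weights and scores are entirely hypothetical and carry no normative authority; the point is merely that the community’s interest enters the ledger as a third column, next to those of the data subject and the data controller):

```python
# Illustrative sketch of a "triangular" balancing step in a LIA/DPIA.
# Factors, weights and scores are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Interest:
    holder: str    # "data subject", "data controller" or "community"
    weight: float  # relative importance of this factor (0..1, summing to 1)
    score: float   # how strongly the processing serves (+) or harms (-) it

def balance(interests):
    """Weighted sum over all three sides of the triangle; a positive total
    suggests (it does not prove) that the processing may be justified."""
    return sum(i.weight * i.score for i in interests)

assessment = [
    Interest("data subject", 0.4, -0.5),     # intrusion into private life
    Interest("data controller", 0.2, +0.6),  # operational efficiency
    Interest("community", 0.4, +0.8),        # safer traffic, cleaner air
]
print(f"{balance(assessment):+.2f}")  # +0.24: tilts towards use, here
```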

  4. Conclusions

From theory to practice, the above views might imply the following concrete consequences:

In any balancing exercise or impact assessment, the general interest in data circulation and use should be considered as a third facet, in addition to the interest of the data subject and that of the data controller. The collection and use of large data sets for the purpose of training algorithms are justified by legitimate and public interest[17].

Necessity does not mean that the data controller must have an absolute need to process the data as the only way to achieve the intended results; it should mean that the data controller has no reasonable alternatives that are equivalent in terms of results, costs, rapidity and efficiency.

Transparency standards should also be applied with regard to what is reasonable. Big Data players could satisfy their GDPR obligations by publishing notices on their own websites or on other websites capable of reaching the public.

Data analytics is a purpose per se and should suffice as the purpose indication in privacy notices to data subjects. The ultimate use of a Big Data analysis can only emerge after the data analytics exercise is complete; thus, nothing more specific can be required at the stage of the first notice.

The explicability of AI output should take into account the technological black-box effect of machine learning and deep learning; hence, we cannot require that the output be explainable to a human officer, because the working of an AI system is not. We could conversely insist that the AI output be explainable to another AI system (a double-check exercise). In addition, teams of human providers, developers and controllers should continuously check, on a statistical basis, the accuracy of the data input and the correct working of the AI system[18].
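
One way that “AI explains AI” double check could be operationalised is through a global surrogate: a second, interpretable model trained to imitate the black box, whose fidelity can itself be measured statistically. The sketch below (in Python with scikit-learn; the data set and both models are placeholders, not a prescribed architecture) illustrates the idea.

```python
# Sketch of the "double-check" idea: an interpretable surrogate model is
# trained on the black box's own outputs; its fidelity score tells us, on a
# statistical basis, how faithfully the black box can be explained.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder data standing in for a municipality's decision records.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_audit, y_train, _ = train_test_split(X, y, random_state=0)

# The "black box" actually deployed (a stand-in here).
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The auditing system: a shallow tree trained to imitate the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_audit, black_box.predict(X_audit))

# Fidelity: how often the explainer agrees with the black box on audit data.
fidelity = surrogate.score(X_audit, black_box.predict(X_audit))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable rules approximating the box
```

A low fidelity score would itself be a signal worth recording in the impact assessment: it would mean the deployed system behaves in ways that even a dedicated explainer cannot reproduce.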

[1] E.g., between uptown, downtown and peripheral areas.

[2] On a specific application of AI (predictive justice) but with solid, documented and enlightening considerations on AI in general, see A. Santosuosso-G. Sartor, La giustizia predittiva, in Giurisprudenza Italiana, 2022, 1760 ss.

[3] Floods, earthquakes, pandemics, etc.

[4] F. Paolucci-O. Pollicino, Intelligenza urbana e tutela dei diritti fondamentali. Antinomia o complementarietà nella nuova stagione algoritmica?, in MediaLaws, 2023, 137 ss.

[5] N. Abriani-G. Schneider, Il diritto societario incontra il diritto dell’informazione. IT, Corporate governance e Corporate Social Responsibility, in Rivista delle Società, 2020, 1326 ss.; E. Battelli, Necessità di un umanesimo tecnologico: intelligenza artificiale e diritti della persona, in Diritto di famiglia e delle persone, 2022, 1096 ss.; G. Provini-A. Reghelin-A. Rizzo, Smart cities: aspetti privacy e di sicurezza da considerare, in Agenda Digitale, 2022.

[6] Garante Privacy, 6 October 2021: Si alle smart cities ma occorre proteggere i dati delle persone.

[7] In other words, a pros-and-cons analysis needs to compare an AI-driven smart city with the actual non-smart city run by human officers.

[8] The possibility of escaping the GDPR by using anonymised data is of little help, given that in many instances, through so-called singling out, anonymised data can be re-linked to an individual and become personal again (and, thus, covered by the GDPR). See Art. 29 Data Protection Working Party, Opinion 5/2014.

[9] See the bibliography and quotations in A. Fedi, Big Data: analisi e proposte, in Nuovo diritto delle società, 2020, 604 ss.

[10] F. Dughiero, Urban Big Data e tutela dei dati personali: adeguamento privacy e best practices, in MediaLaws.eu, 11 September 2020.

[11] A task carried out in the public interest or in the exercise of official authority vested in the controller (art. 6, para. 1, lett. e), of the GDPR). Processing which is necessary for reasons of substantial public interest on the basis of applicable law, respecting the essence of data protection rights and with specific safeguards (art. 9, para. 2, lett. g), of the GDPR).

[12] Cons. Stato, sec. VI, 13 December 2019, n. 8472, in Foro Italiano, 2020, 6, III, 340. See D. Marongiu, Algoritmo e procedimento amministrativo: una ricostruzione, in Giurisprudenza Italiana, 2022, 1515 ss.; F. Costantino, Algoritmi, intelligenza artificiale e giudice amministrativo, ibid., 1527 ss.; L. Parona, Government by algorithm: un contributo allo studio del ricorso all’intelligenza artificiale nell’esercizio di funzioni amministrative, in Giornale di Diritto Amministrativo, 2021, 10 ss.

[13] See Garante Privacy, Videosorveglianza: stop del Garante privacy a riconoscimento facciale e occhiali smart. L’Autorità apre istruttorie nei confronti di due Comuni. The Garante has requested clarifications on (i) systems/tools, (ii) purposes, (iii) legal bases, (iv) use and origin of data sets and (v) DPIA.

[14] See EDPB, Guidelines 3/2019 on processing of personal data through video devices (10 July 2019).

[15] Less polluted cities, more efficient waste control, enhanced sustainability standards, quicker and more efficient commuting between home and work, improved security and reduced risk of danger.

[16] The same as the legitimate interest in using natural resources, which belong neither to the State, nor to the concessionaire, the lessor or the private citizen, and must be reasonably allocated in view of the situation prevailing from time to time.

[17] The recent decision of the Garante Privacy on ChatGPT seems to suggest exactly that (Intelligenza artificiale: il Garante blocca ChatGPT. Raccolta illecita di… – Garante Privacy). See L. Scudiero, La sfida del Garante, in Il Foglio, 2023.

[18] M.B. Armiento, Prove di regolazione dell’intelligenza artificiale: il Regolamento della Banca d’Italia sulla gestione degli esposti, in Giornale di Diritto Amministrativo, 2023, 105 ss.

Andrea Fedi is a Partner at the law firm Legance.
