
Documents: Floridi, Luciano (4 results)


19-66018

Springer

"The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the current and emerging uses of new data technologies and clarifies the promises and dangers of group profiling in real-life situations. It then balances this with an analysis of how far the current legal paradigm grants group rights to privacy and data protection, and discusses possible routes to addressing these problems. Finally, an afterword gathers the conclusions reached by the different authors and discusses future perspectives on regulating new data technologies."

Philosophical Transactions of the Royal Society A - vol. 376 n° 2133

"The article discusses the governance of the digital as the new challenge posed by technological innovation. It then introduces a new distinction between soft ethics, which applies after legal compliance with legislation, such as the General Data Protection Regulation in the European Union, and hard ethics, which precedes and contributes to shape legislation. It concludes by developing an analysis of the role of digital ethics with respect to digital regulation and digital governance."

Science and Engineering Ethics - vol. 24

"In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: (a) the development of a ‘good AI society’; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but come short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. In order to contribute to fill this gap, in the conclusion we suggest a two-pronged approach."

Digital Society - n° 3

"The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by the Intergovernmental Panel on Climate Change (IPCC) reports and related literature. This approach enables a nuanced analysis of AI risk by exploring the interplay between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We further refine the proposed methodology by applying a proportionality test to balance the competing values involved in AI risk assessment. Finally, we present three uses of this approach under the AIA: to implement the Regulation, to assess the significance of risks, and to develop internal risk management systems for AI deployers."

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, and provide a link to the Creative Commons licence.