Related papers
Artificial intelligence and civil liability—do we need a new regime?
Andrew Tettenborn
International Journal of Law and Information Technology
Artificial intelligence (AI) is almost ubiquitous, featuring in innumerable facets of daily life. For all its advantages, however, it carries risks of harm. In this article, we discuss how the law of tort should deal with these risks. We take account of the need for any proposed scheme of liability to protect the existing values of tort law without acting as a barrier to innovation. To this end, we propose a strict liability regime in respect of personal injury and death, and a bespoke fault-based regime for dignitary or reputational injuries. For other losses, we take the view that there is no justification for introducing any new regime, on the basis that AI applications do not introduce substantial added degrees of risk that would justify departing from the existing scheme of liability arising under the current law of tort.
In support of “no-fault” civil liability rules for artificial intelligence
Emiliano Marchisio
SN Social Sciences
Civil liability is traditionally understood as indirect market regulation, since the risk of incurring liability for damages gives incentives to invest in safety. Such an approach, however, is inappropriate in the markets for artificial intelligence devices. In fact, under the current paradigm of civil liability, compensation is allowed only to the extent that "someone" is identified as a debtor. However, in many cases it would not be useful to impose the obligation to pay such compensation on producers and programmers: algorithms, in fact, can "behave" quite independently of the instructions initially provided by programmers, so that they can err despite there being no flaw in design or implementation. Therefore, applying "traditional" civil liability to AI may act as a disincentive to new technologies based on artificial intelligence. This is why I think artificial intelligence requires that the law evolve, on this matter, from an issue of civil liability into one of financial management of losses. No-fault redress schemes could be an interesting and worthy regulatory strategy to enable this evolution. Of course, such schemes should apply only in cases where there is no evidence that producers and programmers have acted negligently, imprudently or unskilfully, and where their activity adequately complies with scientifically validated standards.
Legal liability issues and regulation of Artificial Intelligence (AI). Dissertation, Post Graduate Diploma in Cyber Laws and Cyber Forensics (Course Code: PGDCLCF). Submitted by: Jomon P Jose
Jomon Jose
Dissertation, 2018
New liability issues emerge with the pervasive adoption of AI technologies. This paper looks at a wide range of existing and potential legal liability issues associated with AI. Possible approaches to regulating AI are also considered.
The Expert Group’s Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: a critical assessment
Francesca Episcopo
European Journal of Risk Regulation
This article offers a critical discussion of the “Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies” released by the Expert Group on Liability and New Technologies. In particular, the authors consider: the excessive diversity of applications encompassed by the notion of “artificial intelligence and other emerging technologies”; the distinction between high- and low-risk applications as a potential source of legal uncertainty; the primary reliance on evidentiary rules over substantive ones; the problematic role attributed to safety rules; the unclear relationship between the Product Liability Directive and other ad hoc liability regimes; the radical exclusion of electronic personhood as a way of addressing liability issues; and the limited contextualisation of compulsory insurance and compensation funds.
Civil and Criminal Liability in Cases of Artificial Intelligence Failure
S.S. Rai
Artificial Intelligence has started to become an important part of our day-to-day life, and in the near future Artificial Intelligence (AI)-based technology is going to be introduced in the country, especially in the form of self-driving cars by Tesla and its archrivals. These technologies are capable of performing various autonomous tasks, including but not limited to interactions with human beings. However, the use of AI-based technologies may give rise to disputes where one party may be the Artificial Intelligence itself, and to deal with such situations there is a need for a proper regulatory framework for the adjudication of such disputes. This paper attempts to analyze the methods by which other countries are dealing with the problem while striking a balance between protecting the rights of the victim and the interests of the manufacturers and programmers of AI. This paper further focuses on the factors that need to be ascertained when deciding liability in accidents caused by Artificial Intelligence, whether as an offence in criminal matters or a breach of duty in civil matters. Keywords: Artificial Intelligence (AI), Legal Personality, Civil Liability, Criminal Liability, Tort Negligence, Regulation of AI.
Inefficiency of legal laws in Applying to Damages Caused by Artificial Intelligence
Ehsan Lame
The emergence and increasing progress of artificial intelligence has confronted legal science with formidable challenges. Artificial intelligence systems, like other new technologies, pose serious challenges to the principle of accountability and to the legal rules on civil responsibility (compensation for damages caused by artificial intelligence systems). This is an important issue, as it underpins the confidence of potential victims of these systems and trust in the artificial intelligence industry. Faced with changes in smart technology, courts struggle to apply traditional laws that were not designed for it, and regulatory organizations and legislators must recognize that the current laws cannot adequately govern artificial intelligence or enforce legal responsibilities; they need to consider enacting new, dedicated laws. A central question for legislators in all legal systems is whether artificial intelligence should be considered a legal entity and whether it can be tried before the courts, a question that has not yet been answered. This article, while reviewing the nature and elements of artificial intelligence, knowledge of which is necessary for lawyers, examines the various challenges facing the science of law in the field of artificial intelligence and the ineffectiveness of the laws governing damages caused by artificial intelligence. The result is that the rules of law need to be revised to deal with the responsibilities arising from artificial intelligence.
An analysis of the international and European Union legal instruments for holding artificial intelligence accountable
Justice Kgoale
Juridical Tribune, 2023
Despite being applauded as a great technological breakthrough of the current century, Artificial Intelligence (AI) technology and its operations keep attracting condemnation because of the failure of most countries to regulate AI and hold it accountable. This assertion is made against the backdrop that AI mostly performs functions and activities just like human beings; as such, AI is prone to make mistakes which might negatively impact human beings and violate human rights. Mistakes call for accountability. This paper accentuates that even where there are no clear provisions in a country's statute books, there are existing international and European Union legal instruments for regulating and holding AI accountable should it err. Methodologically, using a literature-review research approach, this paper highlights and discusses selected but salient international and European legal instruments which have direct and indirect impacts on AI, especially pertaining to regulation, liability and accountability.
Challenges of Criminal Liability for Artificial Intelligence Systems
Prof. Ramy El-Kady
Exploration of AI in Contemporary Legal Systems, 2024
The idea of artificial intelligence first surfaced at the turn of the 20th century, with the goal of enabling machines to carry out tasks that resemble those performed by humans. Since then, a number of theories have been proposed regarding the extent to which artificial intelligence entities and systems may be held accountable for their mistakes, particularly in the area of criminal law. This chapter seeks to clarify this matter by discussing the legal obstacles that surround the question of criminal responsibility for artificial intelligence's actions. It also offers concepts, justifications, and factors that address this problem using a comparative analytical and descriptive methodology. The chapter concludes with a proposal for international cooperation to develop a legal and ethical framework for the worldwide use of artificial intelligence. Given the anticipated widespread use of this technology in the future, governments could use this framework as a reference.
The Criminal Liability of Artificial Intelligence Entities
Dr. Mishal Al-Raqqad (Attorney)
Pak. J. Life Soc. Sci. (2024), 22(2): 8785-8790
The rapid evolution of information technologies has led to the emergence of artificial intelligence (AI) entities capable of autonomous actions with minimal human intervention. While these AI entities offer remarkable advancements, they also pose significant risks by potentially harming individual and collective interests protected under criminal law. The behavior of AI, which operates with limited human oversight, raises complex questions about criminal liability and the need for legislative intervention. This article explores the profound transformations AI technologies have brought to various sectors, including economic, social, political, medical, and digital domains, and underscores the challenges they present to the legal framework. The primary aim is to model the development of criminal legislation that effectively addresses the unique challenges posed by AI, ensuring security and safety. The article concludes that existing legal frameworks are inadequate to address the complexities of AI-related crimes. It recommends the urgent development of new laws that establish clear criminal responsibility for AI entities, their manufacturers, and users. These laws should include specific penalties for misuse and encourage the responsible integration of AI across various sectors. A balanced approach is crucial to harness the benefits of AI while safeguarding public interests and maintaining justice in an increasingly AI-driven world.
A Review Of The Issues And Challenges In Relation To The Criminal Liability Of Artificial Intelligence Entities
SEAHI Global Publications
International Journal of Innovative Legal & Political Studies, 11(4):1-9, Oct.-Dec. 2023
This paper discussed the basic issues and challenges of the criminal liability of artificial intelligence entities. These entities, which take the form of computer programs or software, present enforcement problems akin to those of cybercrimes: most of the crimes committed by artificial intelligence software or entities are internet crimes/offences, and their enforcement poses problems and challenges. Also, determining the personhood of artificial intelligence entities, their possible right to dignity, and their power to own or acquire property are among the issues that complicate determining their possible liability for crimes and enforcing the relevant laws on them if they are found guilty. Recent technological advancement has undoubtedly reshaped the world, as some of the tasks previously performed solely by humans are now carried out with ease by non-human entities generally referred to as artificial intelligence entities, with some attendant negative impact. This development has led some scholars and legal minds to agitate for the criminal liability of such entities, so that they can be treated as humans when things go wrong with their use or when they act illegally. Unfortunately, the criminal liability of such artificial intelligence entities may be bedeviled by the aforesaid issues and challenges, which may make it impossible or difficult to arrest and prosecute them. The aim of this paper was to examine the basic issues and challenges militating against the criminal liability of AI entities. The methodology adopted is doctrinal, analyzing the relevant laws, judicial decisions and the opinions/suggestions of erudite scholars. It is found that authors hold discordant views on this subject and that these issues and challenges have hitherto affected the criminal liability of AI entities.
It is recommended that, in order to circumvent these issues and challenges in respect of AI liability for crimes/offences, the personhood of AI entities should be made definite, so that they can be held directly liable for their crimes/offences. They should also be classified for the purposes of criminal liability and, where they cannot be held liable, their developers, users, controllers or instructors should be held liable, just like corporations, by invoking the principle of ‘lifting the veil’.