ARTIFICIAL INTELLIGENCE LAW


INTRODUCTION

The rapid development of artificial intelligence requires new legal regulation. Artificial intelligence can be defined as a computer system that exhibits behaviours which would otherwise require human intelligence, although it should be noted that there is still no legally recognised definition. Artificial intelligence can predict future events and solve complex problems by processing large amounts of data and extracting meaningful patterns from that data.

Artificial intelligence performs its operations by collecting data. Since artificial intelligence has practical effects from the moment it collects data, legal regulation becomes necessary from that point onwards. Because the data processed is frequently personal data, the right to protection of personal data must also be addressed. Liability is a further problem: for example, if artificial intelligence software used to analyse patients' medical histories in a hospital makes a wrong diagnosis and the patient is harmed, the question of the legal and criminal liability of artificial intelligence arises. This article discusses these legal problems related to artificial intelligence.

LEGAL PERSONALITY OF ARTIFICIAL INTELLIGENCE


Analysing the legal status of artificial intelligence and determining its legal and criminal liability is essential for regulating artificial intelligence law. Under the Turkish Civil Code, as in other legal systems, persons are divided into two groups: natural persons and legal persons. Personality is a broad concept encompassing the capacity to have rights and obligations, the capacity to acquire rights and obligations through one's own acts, and personal values and circumstances. For natural persons, personality begins at birth, and natural persons are human beings. For legal persons, the principle of numerus clausus applies: legal persons may only be established in the forms stipulated by law. It is therefore debatable whether artificial intelligence can be included in either of these two classes.

The euRobotics working group within the European Union (EU) has proposed an ‘electronic personality’ model based on a registry system. Under this proposal, it argued that a system covering the various parties involved, such as the user, manufacturer and seller, would be particularly useful in terms of liability. It envisaged a system similar to a commercial register, in which artificial agents such as robots are entered in an official registry and acquire personality upon registration, and in which funds allocated to the robots are drawn upon to satisfy liability for compensation.¹ Similarly, the report prepared by the Legal Affairs Committee of the European Parliament and announced in 2017 assessed an electronic personality for advanced autonomous robots.² The report suggested that each artificial intelligence should be entered in an official register and that, in case of liability for damages, recourse should be had to material funds established specifically as artificial intelligence assets.³ Another important recommendation of the report is the adoption of civil liability for damage caused by artificial intelligence. This proposal, a consequence of recognising the artificial intelligence entity as a person in law, stipulates the strict liability of artificial intelligences for the damage they cause: for liability to arise, proof of the causal link between the damage and the act of the artificial intelligence is sufficient.⁴

LEGAL LIABILITY OF ARTIFICIAL INTELLIGENCE

In terms of legal liability, contractual liability and tort liability are discussed through various examples within the scope of this study. As regards contractual liability, liability may first be assessed according to the relationship between the manufacturer and the software developer.

Contracts concluded between the manufacturer and the user should be examined under Law No. 6502 on the Protection of Consumers.

When a third-party company wishes to sue the developer, it must prove the developer's fault (negligence or breach of contract), the damage suffered, and the causal link between the fault and the damage. However, the involvement of artificial intelligence complicates the concepts of fault and causation, because artificial intelligence develops its own behaviour over time.

The concept of tort is regulated in Article 49 et seq. of the Turkish Code of Obligations (TCO): “Whoever causes damage to another person by a faulty and unlawful act is obliged to compensate for this damage.” The wrongful act is the source of the obligation to compensate. For a wrongful act to exist, the elements of act, unlawfulness, damage, causal link and fault must exist together. Although fault liability is the general rule, there are also cases of strict liability established by statute and by the jurisprudence of the Supreme Court. The provisions of Article 49 et seq. of the TCO apply to matters not regulated by special provisions. In cases of strict liability, the elements of act, unlawfulness, damage and causal link suffice; fault is not required. A number of principles have been adopted in accepting strict liability: the principle of care and diligence (ordinary causation liability) and the principle of equity.

As we will explain below in relation to criminal liability, the existence of will is essential for fault to arise. Artificial intelligence not only lacks legal personality; it also lacks will. If artificial intelligence is recognised as an ‘electronic personality’ as mentioned above, only strict liability will be possible.

However, damage may occur as a result of the negligent behaviour of any of the stakeholders involved in the production, use and development of artificial intelligence.⁵ In this case, liability may arise from the faults of the producers, designers, users and persons claiming ownership of the artificial intelligence that causes the damage. For example, if a third party is harmed by faulty code written by the engineer who programs the artificial intelligence, the engineer may be liable to compensate the damage.


Among the cases of strict liability regulated under the TCO are the liability of the employer (TCO Art. 66), the liability of the owner of a building or other structure (TCO Art. 69), and the liability of persons lacking the power of discernment (TCO Art. 65). As regards liability for the use of artificial intelligence and robots, the applicability of these strict liability provisions is discussed in the doctrine.


CRIMINAL LIABILITY OF ARTIFICIAL INTELLIGENCE

Artificial intelligence can be used as a tool in the commission of a crime. However, it is already clear that in the near future the role of artificial intelligence cannot be limited to that of a tool. While artificial intelligence is merely a tool when a crime is committed through an illegal website, it would not be correct to define artificial intelligence as only a tool in an accident caused by a driverless vehicle.

In private law, there are proposals such as new forms of strict liability, the establishment of a special fund, insurance to cover the damage, or even the direct liability of the autonomous vehicle. In criminal law, however, the principles of legality and fault make it difficult to determine the "thing" or person to be punished for the act. Moreover, in view of the social control function of criminal law, it must be discussed whether punishing the robot itself would have any preventive effect. It seems unlikely that punishing a robot would deter it, or other robots, from criminal acts.


Another point that must be discussed regarding the criminal responsibility of artificial intelligence is the concept of will. An entity that lacks the capacity to act and the capacity for fault cannot be punished. The capacity to act presupposes a voluntary act, and the main prerequisite for both the capacity for fault and the capacity to act is will. At this point, it is accepted that artificial intelligence has no will, because current artificial intelligence technologies lack the ability to make conscious decisions. Artificial intelligence operates according to set rules and algorithms, pursuing programmed goals without any conscious purpose or intention.

As regards the criminal liability of natural and legal persons for the acts of artificial intelligence: first, where the robot is used intentionally in a crime, the person who uses it is punished just as if they had used a weapon.⁶ Where the robot is involved in a crime without intent, Article 177 of the Turkish Penal Code (TPC), on releasing animals in a manner that may create danger, may provide guidance as to the criminal responsibility of the owner. It should not be overlooked, however, that this offence is only a crime of endangerment, and it should be kept in mind that a robot with artificial intelligence occupies a superior position to an animal.⁷ Where the robot harms another person, punishment may be considered if the conditions of negligence are met for the offence corresponding to the resulting damage.

In the context of such crimes committed by negligent conduct, a reasonable person, such as a manufacturer or software developer, who has no knowledge of the crime may nevertheless be the perpetrator, because the crime in question is a possible, natural consequence of his or her own conduct.⁸ Here, in the context of products or services provided in the exercise of the profession, exceeding and deviating from the rules will constitute a breach of the duty of attention and care under the principle of foreseeability, and negligent liability will be accepted in criminal law.

ARTIFICIAL INTELLIGENCE AND PERSONAL DATA PROTECTION

Artificial intelligence systems work by analyzing and learning from large amounts of data. This data is often based on users' behaviors, preferences and demographics. AI algorithms use this data to make predictions, recommendations and decisions. However, much of the data used in this process is personal data, and the processing, storage and sharing of this data carries serious risks to user privacy and security.

Artificial intelligence systems collect and process large amounts of data. If this data is not adequately protected, data breaches and security vulnerabilities can occur. This can lead to unauthorized access to users' personal information. The collection of personal data ranging from phone numbers to bank details by artificial intelligence systems requires strong security systems.

The right to protection of personal data is one of the fundamental rights guaranteed by the Constitution. The constitutional amendment made by Law No. 5982, dated May 17, 2010, guarantees this right as follows: “Everyone has the right to demand the protection of personal data concerning him/her. This right includes the right to be informed about personal data concerning him/her, to access such data, to request their correction or deletion, and to learn whether they are used for their intended purposes. Personal data may only be processed in cases stipulated by law or with the explicit consent of the person. The principles and procedures regarding the protection of personal data shall be regulated by law.” Accordingly, the Law on the Protection of Personal Data was enacted.

Personal data can only be processed with the explicit consent of the person. However, whether the consent given to artificial intelligence systems is truly explicit is controversial.

Artificial intelligence systems often force the user to grant permission to collect data: the processing of personal data is requested as a precondition for using the system. In addition, the user is not clearly informed about what this personal data comprises.

We foresee that comprehensive regulations will be made on all these issues in the near future.