A legal perspective on the debates regarding ChatGPT

Numerous technology leaders and government officials worldwide have recently expressed concerns regarding the development of artificial intelligence platforms, particularly ChatGPT. These concerns should also be debated in Romania, and lawyers, given the nature of their profession, should be involved in the discussion. The new technology affects companies and people's lives, according to Ioana Chiper Zah, an Associate Attorney at Hațegan Attorneys. Even Sam Altman, the CEO of OpenAI, has admitted that artificial intelligence disturbs his sleep. "What keeps me up at night is the hypothetical idea that we have already done something very bad by releasing ChatGPT," Altman told Satyan Gajwani, Vice President of Times Internet, at an event organized by The Economic Times on June 8. When asked if AI should be regulated similarly to atomic energy, Altman said that there should be a better system for verifying the process.

Attorney Ioana Chiper Zah believes that the use of AI technologies in professional or commercial activities has significant legal implications, concerning data protection, legal liability, and adherence to ethical standards. These implications should be further discussed in Romania by IT and advanced-technology specialists, as well as by the wider community, the authorities, the business community, lawyers, and consultants. Currently, states and private companies adopt different approaches to these issues based on their guiding principles and values.

Technological advancements in the field of artificial intelligence (AI) have led to the development of advanced communication tools, such as ChatGPT (Generative Pre-trained Transformer). The emergence of ChatGPT has generated interest and controversy globally, resulting in different reactions from states and private companies regarding the use of this revolutionary technology.

ChatGPT is an artificial intelligence system based on neural networks, capable of generating coherent and relevant responses in conversations with users, according to the program's description. However, its use has legal implications and has prompted different approaches from countries and private companies worldwide, according to the representative of Hațegan Attorneys.

In her opinion, one of the crucial aspects of using ChatGPT in professional or commercial activities is data protection and information confidentiality. For ChatGPT to function effectively, it needs access to certain data, including previous conversations or user information. For its operation to be compliant, the program should therefore ensure data protection and adhere to each country's laws on the protection of personal data.

The ChatGPT language model stores the information provided by its users for continuous learning purposes, as designed by its creators. This means it could disclose sensitive information it encounters to other users or in inappropriate third-party contexts. The risk that personal or confidential data it handles will be disclosed contrary to the legal provisions applicable in a given jurisdiction exists in any domain where such a system might be used, whether governmental, commercial, or legal. From this perspective, the use of AI systems should comply with the applicable legal norms of each country (e.g., the GDPR in Europe, the CCPA in California).

Ioana Chiper Zah highlights that another important aspect is legal liability in cases where the use of ChatGPT causes damage or generates negative consequences. This raises questions about legal responsibility when the company where the user works is harmed through the use of AI programs. In their day-to-day work, numerous employees across various fields handle a multitude of information that constitutes valuable trade secrets for their employers, often without realizing it. Disclosure of such information to a system like ChatGPT, even with the most innocent intentions, can cause colossal damage to the company and create a rather uncomfortable situation for the individual involved. In such cases, the question arises as to who should bear these damages, which in most instances are enormous.

In practice, Samsung employees have used ChatGPT at work and leaked source code. Samsung's semiconductor division allowed engineers to use the service to help them troubleshoot source code issues. However, employees inadvertently entered sensitive data, such as source code for a new program, internal meeting notes, and hardware-related information. Alarmingly, Samsung discovered three separate cases in which employees disclosed confidential information through ChatGPT within a span of 20 days. As a result, Samsung's trade secrets became the intellectual property of OpenAI, as ChatGPT retains user-entered data to further refine its algorithms.

The Hațegan Attorneys lawyer highlights that the use of ChatGPT can also raise ethical and discrimination concerns. While language models are trained to be as impartial and neutral as possible, there is a risk that they may reflect or perpetuate existing biases and discrimination. A judge in Colombia was the first to state publicly that he had used ChatGPT in rendering a judicial decision. Although the law does not prohibit the use of artificial intelligence in rendering judicial decisions, it is important to consider that a language model like ChatGPT does not have the ability to "understand" information in the true sense of the word.

The AI system formulates answers to the questions asked by synthesizing sentences based on probabilities derived from millions of training text examples. Essentially, the system performs a statistical analysis of the words and characters in the question and formulates an answer based on the probability of those characters or words appearing together. ChatGPT does not "know" the answer to the question asked; it tries, with impressive accuracy, to predict the sequence of human-language text most likely to answer the question. This could be the reason why the responses generated by the system are often incomplete or false.
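The idea of "predicting the likely next piece of text" can be illustrated with a deliberately simplified sketch. The toy corpus, the word-level counting, and the `predict_next` function below are all illustrative assumptions; real systems like ChatGPT use large neural networks over sub-word tokens, not raw word counts, but the underlying principle of choosing a statistically probable continuation is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the model only ever sees statistics like these,
# never the meaning of the words.
corpus = "the court ruled in favor of the plaintiff and the court adjourned".split()

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# In this corpus, "the" is followed by "court" twice and "plaintiff" once,
# so the model predicts "court" with probability 2/3.
word, prob = predict_next("the")
```

Note that the model answers purely from co-occurrence frequencies; asked about a word it has never seen, it has nothing to say, and a misleading corpus would produce confidently wrong predictions, which mirrors, in miniature, why large language models can produce fluent but false output.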

However, the answers generated by such technology can be biased, discriminatory, or even incorrect, which is why using this technology in the process of judicial decision-making can cause significant harm to the legal subjects affected by those decisions, according to Ioana Chiper Zah.

At this moment, technology cannot keep up with the complexity of the law or of legal cases. While ChatGPT can provide automated answers to simple legal questions, it cannot grasp more complex concepts or interpret case law. This lack of understanding can lead to inaccurate or incomplete advice. In one such instance, a lawyer in New York used ChatGPT to conduct "legal research" and ended up citing a series of non-existent cases in court.

Internationally, the use of ChatGPT has encountered different approaches, both at the state level and among private companies. Some have established clear rules and standards to prevent discrimination, harm, or breaches of personal data, while others have adopted a more flexible attitude towards AI.

Among the states that have taken a cautious approach and imposed restrictions on the use of ChatGPT is Italy. Initially, Italy banned public access to ChatGPT on the grounds that personal data was not adequately protected. ChatGPT is now available in Italy again, after the privacy and data protection requirements set by the Italian authorities were implemented.

In the corporate sector, Apple has restricted its employees' use of external artificial intelligence tools such as ChatGPT while it focuses on developing its own artificial intelligence. Samsung has taken a similar approach.

Other countries have adopted a more balanced approach, recognizing the potential and benefits of using ChatGPT in various fields but insisting on proper monitoring and specific regulations. The regulation of AI has even been encouraged by the creator of ChatGPT. The CEO of OpenAI urged the US Congress to adopt licensing and safety standards for advanced artificial intelligence systems. Sam Altman praised some of the benefits of AI but also stated that government intervention is essential to mitigate the risks associated with increasingly powerful AI models. He proposed a three-point plan, including the establishment of safety standards to evaluate AI models and a government agency to license these models.

On the other hand, while India acknowledges the innovative potential of ChatGPT and supports its development and use in various domains, the Ministry of Electronics and Information Technology of India has explicitly stated that the Indian government is not considering regulating artificial intelligence in the country.

On June 14th, the European Parliament (EP) adopted a report setting out its negotiating position on the Artificial Intelligence (AI) Act, with 499 votes in favor, 28 against, and 93 abstentions. This report serves as the basis for negotiations between the EP and the Council of the EU, which represents the member states. Members of the European Parliament establish obligations for providers and deployers of AI systems based on the level of risk that the artificial intelligence may present. AI systems posing an unacceptable level of risk to human safety would thus be prohibited, such as those used for social scoring (classifying people based on their social behavior or personal characteristics). MEPs also included bans on intrusive and discriminatory uses of AI. Providers of general-purpose AI systems, a rapidly evolving area of AI, would need to assess and mitigate potential risks (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before placing them on the EU market. Generative AI systems built on such models, like ChatGPT, would have to comply with transparency requirements and introduce safeguards against generating illegal content.

Although there is currently a regulatory gap that could lead to imbalances in the use of AI, Ioana Chiper Zah believes that, with proper management, and especially unified regulation, ChatGPT and the AI technologies to come can bring significant benefits to professional and commercial activities, improving efficiency and user experience while respecting legal norms and values. She also sees a need for deeper debate in Romania, involving all interested parties and stakeholders in the field of AI and ChatGPT.