
The European Union has led the world in drawing up a legal framework to regulate one of the most exciting technological innovations of our time, one that also carries moral and ethical implications for humanity.

The AI conundrum: From ‘should we regulate?’ to ‘how should we regulate?’

By Dr Basak Ozan Özparlak

Although we generally compare the age of big data and AI with the steam engine revolution, the age of geographical discoveries could also provide a useful benchmark.

While those discoveries drove humanity to create new commercial, economic, and political structures, they did not create equal opportunities for all.

Even in the writing of history, there is a lack of empathy in the term "discovery" of America. Civilisations were living in America long before 1492.

This one-sided understanding of history lacks empathy and is an inadequate, even dangerous, approach to educating tomorrow's children.

Today, for AI systems to help humans learn the mysteries of the universe, improve agriculture so that everyone has access to food, and develop preventive health services, we need to achieve what we, as humanity, have never accomplished before: fair technological development.

For this, we need legal hardware with ethics embedded in it.

This is why it is essential that the development and use of AI systems, which are far more comprehensive and versatile than other technologies, are regulated by law.

With this aim, a provisional agreement on the world's most comprehensive regulation on the development and use of AI was reached in the European Union in December 2023.

But let's rewind the story of the AI regulatory odyssey.

By the end of the 2010s, lawmakers, legal professionals and academics, along with engineers and social scientists engaged in the AI field, had one question in mind: What should be done?

The answers to this question took shape as AI ethics frameworks, most of which were published by institutions worldwide in the late 2010s.

Some of these ethics frameworks, like the EU's Ethics Guidelines for Trustworthy AI (2019), formed the basis of regulations to come.

During the same decade, National AI Strategies, including SWOT analyses, were also released by many countries worldwide.

One of the most comprehensive of those strategies, Türkiye's first National AI Strategy Report, was published in 2021 by the Ministry of Industry and Technology and the Digital Transformation Office of the Presidency of Türkiye, whose work supports the country's digital roadmap.

The year 2021 was also crucial for the EU's AI strategy since the 27-member grouping decided to form a common regulatory framework.

In light of its 2019 Ethics Guidelines, the European Commission proposed a regulation on AI systems in 2021, and it has almost reached its final form.

It is expected to be published in the EU Official Journal before the end of the year 2024.

Rocky roads

However, the journey has been arduous, and sometimes, the roads were rocky.

In November 2023, right after an AI Safety Summit hosted by the UK and an executive order on AI by the US, the EU's big three – France, Germany and Italy – issued an unofficial text aimed at preserving the technology-neutral and risk-based approach of the EU's AI Act, arguing that the risks associated with machine learning systems stem from their implementation, not from the technology itself.

They argued that codes of conduct, rather than legal regulations, should apply to the foundation models behind generative AI, turning work on the EU's AI Act into a complicated process.

Fortunately, a compromise text was finally reached, and years of effort bore fruit: the EU AI Act. Most recently, an AI Board was established to ensure effective implementation of the Act.

As underlined in the first article of the compromise text of the AI Act, the regulation aims to ensure a high level of protection of health, safety, fundamental rights, democracy, the rule of law, and the environment. Like the GDPR, the AI Act is extra-territorial in scope.

The AI Act applies to providers of AI systems, whether or not they are established within EU territory, if their systems are to be placed on the EU market or put into service in the EU.

Therefore, if a Turkish or US company would like to sell its product or put it into service in the EU, it must comply with the AI Act's rules.

A technology-neutral approach was preferred in the text of the EU AI Act because the regulation is intended to be future-proof in the face of constantly developing technology.

In this way, AI systems used within relatively new technologies such as augmented reality (AR) and brain-machine interfaces (the technology on which Elon Musk's Neuralink depends), which will soon become part of our lives through new-generation wireless communications, can also be evaluated within the scope of the regulation.

Risk-oriented approach

Additionally, the EU AI Act takes a risk-oriented approach. Risk here is understood as risk to health, safety, and fundamental rights. This risk-based approach imposes a legal compliance burden directly proportional to the dangers that AI systems may pose.

AI systems that carry unacceptable risks are prohibited. For example, emotion recognition systems that are used in workplaces or educational institutions are deemed to pose unacceptable risks.

So, according to the EU's AI Act, an AI system used to evaluate whether workers are sad or happy, or to gauge students' attentiveness or moods, would be prohibited. Any provider of AI-based education technology should consider this when entering the EU market.

On the other hand, if an AI system poses a high risk as defined in the EU regulation, its providers are obliged to fulfil the audit and other conditions specified in the regulation.

According to the AI Act, tools for screening job candidates during recruitment are an example of a high-risk AI system. Ever since these tools began to be used for job ads and interviews, legal issues around bias and discrimination have kept surfacing.

Therefore, like any provider of a high-risk AI system, providers of human resources technologies must keep regulatory compliance and its burdens in mind.

However, even heavier obligations must be complied with by providers of general-purpose AI models, such as those behind OpenAI's ChatGPT. According to the compromise text, testing an AI system in the real world will be permitted only if the necessary safeguards are applied.

Meanwhile, to support innovation, AI systems and models developed and put into service solely for scientific research and development are excluded from the scope of the AI Act.

Until OpenAI's release of ChatGPT in 2022, some argued that ethical principles and/or codes of conduct were sufficient to prevent the risks of AI systems.

Mainly in the US, there was a common belief that any regulatory attempt would hinder innovation. This was why the US took its time with any regulatory effort.

Instead, beginning during the Obama presidency, AI-related roadmaps were designed. However, the minimum-regulation, maximum-innovation approach proved insufficient, since large language models pose many risks that cannot be addressed by ethical frameworks alone.

Thus, a US presidential executive order on AI was unveiled right before the UK's AI Safety Summit, held in November 2023 at Bletchley Park.

Looking at the US, we can say that regulation is no longer a distant possibility. The rapid development and possible effects of OpenAI's ChatGPT model, announced to the public in 2022, have begun to be reflected in US policy.

The privacy and security risks of generative AI models are much higher than those of previous technologies, so minimising these risks requires both technological measures and effective legal regulation.

Based on this, legal regulation of artificial intelligence has entered the US agenda. On October 30, 2023, the most comprehensive US framework of AI rules to date was revealed in this executive order.

The order demands greater transparency from AI companies about how the AI models they develop work. Accordingly, it will create several new standards, such as labelling content created by artificial intelligence.

The purpose of the order is to increase "AI safety and security," according to the White House. However, there are doubts about the effectiveness of the order because it is "too soft," and its binding nature is controversial.

Notably, the text obliges developers to share the results of safety tests on new AI models with the US government if those tests show the technology may pose a risk to national security.

The year 2024 promises to be a milestone in regulatory efforts on the development and deployment of AI, as these efforts turn into real-life laws accompanying those enacted under data strategies.

Since data and the control of data define the AI revolution, regulation on data governance, data protection, and legal frameworks for open data spaces will form the corpus of AI law, along with the rules specific to this technology.

For a bright future, it is important to find a formula for a legal framework for AI under which everyone benefits from these technologies.

In his book The Hobbit, JRR Tolkien observed that trouble-free, perfect things do not make stories; it is complex and troubled times that yield tales worth telling.

Today, as has often occurred, we are in a challenging and uncertain time. It is also easier and more meaningful to be determined rather than merely optimistic.

And Tolkien's words will be a guiding force as we all create new stories in our lives in the 21st century.

The author, Dr Basak Ozan Özparlak, is an assistant professor and IT law researcher at Özyegin University Faculty of Law, specialising in smart systems, AI, 5G/6G, data protection, and cyber security.

Disclaimer: The views expressed by the author do not necessarily reflect the opinions, viewpoints and editorial policies of TRT Afrika.

