The European Union has passed the first comprehensive law regulating the artificial intelligence market. The Artificial Intelligence Act is a pioneering attempt to enforce a comprehensive legal framework on artificial intelligence. The law builds on the strategic values proposed in the EU's white paper on AI, aiming to build European citizens' confidence in AI-based solutions amid the meteoric rise of generative AI and the dangers associated with its applications. The holistic values defining the use of AI rest on the following parameters: safe, transparent, ethical, unbiased, and under human control. The move towards regulating artificial intelligence began in 2018, when the Commission and member states launched a coordinated plan to draft a series of policies to boost European excellence in AI.
According to DW Chief Technology Correspondent Janosch Delcker, the law will have a limited immediate impact but a significant one over time, as it is implemented over the next two to three years. Social scoring systems will be banned in the European Union once the regulation enters into force.
The purpose of this regulation is to make AI systems in the EU safer, more transparent, and more socially and environmentally friendly, among other goals. The EU already funds AI projects in the realm of healthcare, sustainable transportation, climate change, logistics, media, and more. For example, FANDANGO (Fake News discovery and propagation from big data Analysis and artificial intelligence Operations) is an EU-funded project that aims to detect and analyze fake news across various social media platforms. This is imperative given how misinformation can sway political outcomes.
Such misinformation can mislead consumers into believing product claims that are not factual, with serious consequences for the consumer in some cases. The EU has been a world leader in initiatives within the sustainable and international development sector. It introduced the EU Sustainable Finance Framework for transitioning to a sustainable economy, and has championed sustainable finance and the incorporation of ESG (Environmental, Social, and Governance) metrics within the investment management sector.
Geoffrey Hinton, widely regarded as a godfather of AI, quit Google's AI unit in the wake of ChatGPT's global release, voicing concerns about the technology's risks. The hype around AI is tangible in Nvidia's stock. Nvidia recently released the Blackwell superchip, which packs 208 billion transistors, surpassing the Hopper H100's 80 billion. Nvidia develops parallel processing chips used for artificial intelligence-driven applications, and The Middle Road takes the rise in Nvidia's stock as a proxy for the growth of AI-driven applications. Moreover, Nvidia has partnered with Microsoft to build a massive cloud AI computer to power ChatGPT and other advanced AI models, a collaboration that aims to make AI more accessible and affordable for businesses and developers worldwide. This exponential rise of AI-driven applications makes the case for regulating the technology at the earliest all the more pressing. This is not stock advice on Nvidia; it is mentioned here to illustrate the rise of, and dangers involved with, AI-driven technology.
The regulation categorizes AI systems using a risk-based approach: the riskier the AI system, the more heavily the technology is regulated. The law outright bans certain practices. One notable provision bans facial recognition on live CCTV feeds in public places, except in exceptional circumstances, such as use by law enforcement in carrying out their work; the exception is justified where the technology serves the social good. Biometric identification and categorization of people, and social scoring that classifies people by demographic or genetic profiling, are classed as unacceptable risks and face a ban. One of the most harmful ways AI can damage society is through cognitive manipulation, and this is where timely regulation can protect vulnerable groups, especially children. Social profiling based on racial or demographic characteristics is demeaning, and regulating it is where the Act shows thought leadership. These practices are listed as unacceptable risks, the highest tier of AI risk. Lower tiers carry lighter obligations, for example, disclosing when content is generated by AI. The EU is setting up a governance board to closely monitor and implement the AI framework, including fining companies for non-compliance. The Middle Road applauds this forward-looking legal framework for addressing the rise of artificial intelligence for the betterment of humanity.
The establishment of clear rules, guidelines, and standards for the development and deployment of AI systems is positive: the EU seeks to harness the benefits of this transformative technology while mitigating the risks associated with its misuse. The EU has also set up testing hubs where startups and medium-sized enterprises can run AI models before releasing them to the world. There is a risk of governments misusing the law; however, given that Europe's philosophy is built on citizen-first values, this risk looks limited. Reaching consensus on how the regulation should be implemented will not be easy, but the public-private partnership model is excellent, considering the far-reaching effects of AI-driven systems. This is a significant step towards protecting humanity from one of the most critical, fast-evolving, and complex technologies of our time, whose ramifications could far outweigh its positive impact on society if it is not regulated judiciously and promptly.