22 December 2023

Artificial intelligence: one step ahead of the world with the AI Act

The EU wants to legislate on what should be permitted in the use of artificial intelligence and how individual applications should be monitored and regulated. With this AI Act, Europe would take on a pioneering role – and could become a role model for other countries when it comes to AI.

In terms of public perception, the USA is the absolute leader when it comes to AI. Since the Californian company OpenAI launched the first generative AI for the masses at the end of November 2022 with its chatbot ChatGPT, it and its rivals have dominated the headlines. A veritable race for future applications (and therefore potential sources of monetization) of artificial intelligence has begun, especially among large technology companies such as Microsoft, Apple and Alphabet (Google). And Europe? Once again, it feels like it is lagging behind. At least that’s the impression. But it is only partly true.

In terms of technology, Europe can certainly keep up, especially Germany. The German computer scientist Jürgen Schmidhuber is considered the father of modern artificial intelligence, and Aleph Alpha is one of the world’s leading AI companies. The Heidelberg-based start-up’s technology can be used to trace how AI models arrive at their decisions – an issue that remains unresolved with ChatGPT and other common AI systems and causes a lot of headaches. This makes Aleph Alpha an interesting alternative for the use of generative AI in critical areas such as healthcare, public administration or legally sensitive topics. Last August, the Artificial Intelligence Innovation Park (Ipai) in Heilbronn announced a comprehensive partnership with the start-up. Ipai is set to become the largest AI ecosystem in Europe: the initiators plan to build real-world laboratories (Reallabs), a data center and a start-up center with shared offices on a 30-hectare site by 2027. Ipai is largely financed by the Dieter Schwarz Foundation, which is fed by distributions from the Lidl Foundation and the Kaufland Foundation.

The German government is also active. A new AI action plan is to provide a total of 1.6 billion euros by 2024 and, among other things, establish six AI competence centers and create 150 additional AI professorships.

Europe, or rather the European Union, is completely unchallenged when it comes to regulation. With the Artificial Intelligence Act (AI Act), the EU aims to become the first economic area in the world to regulate the use of AI by law. The AI Act is intended to prevent incalculable risks and unwanted excesses of artificial intelligence while at the same time providing researchers and companies with a regulatory framework in which they can develop AI systems. On June 14, 2023, after years of preparation, the AI Act was passed by the European Parliament. One major difficulty was incorporating the rapid technological developments, including the debates surrounding ChatGPT and the like, into the proposed legislation. In the next step, the 108-page text must be agreed with the EU Commission and the individual member states before it officially comes into force; an agreement is expected by the end of the year. Companies will then have two years to adapt to the changed framework conditions. To bridge this gap, the EU wants to oblige large technology companies and AI developers to commit to voluntary self-regulation.

In its regulatory project, the EU is pursuing an approach that differentiates between risks. It is not just about commercial offerings, but also about the use of AI in the public sector. For example, “unacceptable” AI systems that use social scoring models to classify people according to their social behavior or ethnic characteristics are to be completely banned. This includes the collection of biometric data in online networks or AI systems for facial recognition in public spaces, as is the case in China. Biometric facial recognition should only be permitted retrospectively following a judicial decision and for the investigation of serious crimes.

The AI Act uses a three-tier classification system that categorizes AI systems according to their risk: high-risk, limited-risk, and low-risk or risk-free applications. The higher an AI system’s risk classification, the stricter the legal requirements it must meet.

High-risk AI programs can be found in various industries and areas. These include systems that control critical infrastructure such as the power supply and could therefore potentially endanger people’s lives and health. Technologies in medical devices or programs that assess the creditworthiness of citizens also fall into this category. According to the EU draft law, companies offering such AI systems must meet significant requirements in terms of risk management, accuracy, cybersecurity and the provision of certain information to users.

For most AI providers, this type of product regulation is new, so they face major investments. Before high-risk AI systems can be placed on the EU market, they will be subjected to close scrutiny to ensure that they meet all legal requirements. Many experts view this with concern: large tech companies from the US and Europe are likely to be able to fulfill these conditions thanks to their extensive teams of consultants and experts, but for medium-sized companies and start-ups in Europe, the heavy regulation could be problematic. Due to the bureaucratic hurdles, they risk being left behind even if they are technologically outstanding.

Chatbots such as ChatGPT are AI systems with limited risk. The focus here is mainly on transparency. Users must be informed that they are interacting with artificial intelligence, unless this is obvious. These disclosures should ensure that, for example, deepfake photos can be distinguished from real images. Providers must also ensure that no illegal content is generated and publish detailed summaries of the copyrighted data used for training.

AI systems that do not fall into any of these categories are not subject to specific compliance obligations.

The AI Act is a complex regulatory project that is not perfect. Nevertheless, it could serve as a model for a new era of AI regulation. Well-designed regulation is not a barrier to innovation, but provides a clear framework for companies to develop attractive products. The EU has the opportunity to adopt such a regulatory framework – giving European companies a competitive advantage in one of the most exciting markets of the future.