CEE Digital Coalition’s Call to Enable Innovation in the EU Through Sensible AI Regulation

Embracing AI technology is key for the European economy and opens up a unique chance for growth in the rapidly evolving digital landscape of Central and Eastern Europe. To make the most of this opportunity, however, we need to establish a sensible regulatory framework for AI in the European Union. As members of the CEE Digital Coalition, an informal gathering of digital and advanced technologies industry organizations from Central and Eastern Europe, we would like to present our recommendations for the remaining stages of the interinstitutional trilogue negotiations concerning the Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (AI Act).

We want to stress our alignment with the Commission’s perspective on advancing AI technology. We fully support the goal of boosting research and industrial capabilities while prioritizing safety and the fundamental rights of European citizens. Nevertheless, we would like to offer recommendations to prevent the regulatory framework from stifling innovation. We also advise against introducing abrupt changes to the AI Act at this stage of negotiations without impact assessment and consultation, as such changes could inadvertently harm AI providers, deployers, and users.

As members of the CEE Digital Coalition, we believe that the emphasis in regulating AI should be placed on safeguards that do not hinder its beneficial applications. As with other technologies, the primary risks tied to AI arise from its application, not from the technology per se. Consequently, we endorse AI regulation that is risk-based and neutral towards specific technologies. The aim should be to strike a balance in which industries and users can enjoy the socio-economic advantages of AI while our citizens remain protected.

To achieve this, we offer the following recommendations for the remaining parts of the trilogue negotiations:
  • We urge co-legislators to dismiss the proposed asymmetric obligations, particularly for foundation models, general-purpose AI (GPAI), and generative AI. These categories should not be subjected to intricate, multi-level frameworks lacking clear definitions.
  • The AI Act should not mandate pre- and post-market external testing, and its transparency requirements should be applied with caution.
  • Prohibitions should cover only specific use cases. Imposing broad bans may inadvertently hinder beneficial applications of AI technology.
  • The high-risk classification should apply only to clearly defined use cases presenting real risks. The risk associated with an AI system is contingent on its context and usage; it is not an inherent trait. The EU should therefore uphold its risk-based and technology-neutral approach rather than designate entire technologies as high-risk.
  • We strongly recommend that co-legislators reject the Parliament’s proposal to designate AI used in recommender systems by very large platforms as high-risk.