KISDI News

  • KISDI Publishes “Policy Directions for Fair Competition and User Protection in the AI and Digital Ecosystem”

    • Publication date: 2025-10-24
※ URL (Korean): https://www.kisdi.re.kr/bbs/view.do?bbsSn=114756&key=m2101113055776&pageIndex=1&sc=&sw=

KISDI Publishes “Policy Directions for Fair Competition and User Protection in the AI and Digital Ecosystem” (October 23, 2025)
- Presents policy directions to promote fair competition and protect user rights in the AI and digital ecosystem -
With the rapid spread of AI and digital technologies, public demand is increasing for fair competition and stronger user protection.

The report emphasizes the need for a “Korean-style competition policy” to address emerging technologies such as generative AI.

It proposes co-regulation between the public and private sectors and structural improvements to digital platforms to supplement the limits of self-regulation.

It offers a comparative analysis of the digital competition promotion laws and policy directions of major countries, including the EU, the U.K., and Japan.

The Korea Information Society Development Institute (KISDI, President Sangkyu Rhee) recently published its second special report under the “KISDI Premium Report: Special Edition for Advancing AI and Digital National Policy Agendas,” titled “Policy Directions for Fair Competition and User Protection in the AI and Digital Ecosystem.”

The report outlines specific policy measures to build a fair and innovation-friendly AI and digital ecosystem and to create a safe and trustworthy AI and digital environment, in line with the implementation of the government’s national policy agenda.

As AI and digital technologies evolve from convenient tools complementing humans into transformative forces reshaping the economy and society, calls have grown for policies that protect user rights and ensure a fair and safe AI and digital environment. The national policy agenda finalized at the Cabinet meeting on September 16, 2025, also includes related initiatives such as building a user-centered and safe digital environment by addressing digital risks, fostering a transparent and fair digital competitive environment, and promoting a fair platform ecosystem.

Regarding the promotion of fair competition in digital markets, the report notes that while some unfair practices have been addressed through self-regulation, other areas require careful review and may call for pro-competitive interventions targeting monopoly structures within ecosystem-based services.
It stresses the importance of developing policies tailored to Korea’s specific conditions, grounded in empirical evidence and the evolving nature of AI and digital markets. In particular, the emergence of disruptive innovations such as generative AI must be recognized as a key factor in shaping future competition and reflected proactively in policy design.

The report further points out that the AI ecosystem is a rapidly evolving market where technological progress can swiftly alter market structures. Therefore, close monitoring and understanding of current trends and issues are critical. It also notes that principles such as fair trade, interoperability, and user choice can help ensure effective competition that treats both businesses and users fairly and transparently.
This aligns with the objectives of existing digital competition promotion laws such as the EU’s Digital Markets Act (DMA), the UK’s Digital Markets, Competition and Consumers Act (DMCC Act), and Japan’s Smartphone Software Competition Promotion Act. The report further suggests that similar approaches could be extended to emerging domains such as agentic AI and physical AI.

Senior Research Fellow Hyunsoo Kim noted that “given the rapid evolution and wide-reaching influence of online content, along with concerns about government censorship, self-regulation by service providers to curb the spread of illegal or harmful information and disinformation is, to some extent, inevitable.” However, he emphasized that “to ensure the effectiveness of such self-regulation, it is necessary to establish a certain legal framework and introduce a public–private co-regulation model under which companies disclose their compliance status, the government verifies it, and recommends improvements as needed.” He further stressed that since platform design fundamentally determines how information is distributed and how often it is exposed, responses should move beyond simple deletion-based measures toward a redesign of platform architectures themselves.

In addition, Senior Research Fellow Kim proposed “guaranteeing users’ practical rights of access and choice in AI and digital services, while strengthening protections for children and adolescents against extremism and over-immersion.” He also stated, “It is necessary to conduct comprehensive reviews and develop countermeasures against personalization, algorithmic discrimination, and dark patterns that exploit user vulnerabilities,” adding that “in protecting users of AI services, as with existing algorithm-related regulations, the key principles should be transparency, assurance of user choice, and risk management, with preventive measures and self-regulation serving as effective means.”

KISDI plans to continue research on the current status and structural evolution of the AI and digital ecosystem, analyzing its economic and social value as well as potential side effects. Through these studies, the institute aims to assess the overall health of the ecosystem and propose the revision, repeal, or introduction of policies as needed to enhance its sustainability.