KISDI (Korea Information Society Development Institute)

KISDI Media Room

  • 2nd AI Ethics Policy Forum Launching Ceremony

    • Publication date: 2023-04-10
    • Place: Gold Hall, L Tower, Seoul (B1)
    • Event date: 2023-04-07
    • File: There are no registered files.

■ Event: 2nd AI Ethics Policy Forum Launching Ceremony

■ Date: April 07, 2023 (Friday) 14:00

■ Venue: Gold Hall, L Tower, Seoul (B1)

On April 7 (Friday), KISDI, the Ministry of Science and ICT, and the Telecommunications Technology Association jointly held the launching ceremony for the ‘2nd AI Ethics Policy Forum’ at the L Tower in Seocho-gu, Seoul.

Building on the expert opinions gathered at last year’s forum, this year’s forum publicly announced the ‘AI Ethics Standards Voluntary Inspection Table (proposed)’ (chatbot, writing, video) and the ‘Guidelines on Developing Reliable AI (proposed)’ (general, public society, medical care, autonomous driving). In response to growing concerns about the potentially negative effects of large-scale generative AI, such as the dissemination of false information, algorithmic bias, and invasions of privacy, the participants in this year’s forum discussed the direction of policies for establishing AI ethics and reliability.

Future meetings of the forum will discuss domestic and overseas trends in AI-related technologies and ethics issues. The forum will suggest, from a balanced perspective, future AI ethics policies that South Korea ought to adopt in order to develop and operate large-scale generative AI in an ethical and reliable manner. Finally, the forum is also planning to collect opinions about AI-related policy tasks, such as setting up a system for the auditing and certification of AI reliability.

With these goals in mind, Professor Kim Myung-joo from the Department of Information Security at Seoul Women’s University has assumed the position of chairperson, while thirty other forum members are contributing their expertise in AI, philosophy, education, law, and administration.

To enhance the operational efficiency of the forum and raise its overall level of expertise, the forum is composed of three expert subcommittees tasked with discussing the following three major agenda items: ① promoting the AI ethics system (the Ethics Subcommittee); ② building the technological foundation for securing AI reliability (the Technology Subcommittee); and ③ improving AI literacy and ethics education (the Education Subcommittee).

The Ethics Subcommittee (headed by Director Moon Jung-wook of KISDI’s Center for AI & Social Policy) will discuss measures for dealing with harms arising from AI, such as biases in large-scale generative AI and false information produced by AI, as well as ways of building transparency and accountability into AI. It will also collect expert views on the development of the ‘Framework for Assessing the Impact of AI Ethics’.

The Technology Subcommittee (headed by Director Lee Kang-hae of TTA’s AI Digital Convergence Group) will identify the risks posed by large-scale generative AI and discuss the related technological issues, such as the need to gather evaluation data and acquire evaluation technologies in order to secure reliability. The subcommittee will also collect opinions on the deployment of an ‘AI Reliability Verification/Certification System’. In 2022 in particular, the subcommittee provided consulting to AI companies, diagnosed the reliability of their AI solutions, and suggested possible improvements, such as adding protective functions to mitigate bias and developing functional and procedural countermeasures against deteriorating AI performance. Through the operation of this forum, companies’ progress in addressing these AI weak points will be continuously assessed.

The Education Subcommittee (headed by Professor Byun Sun-yong from the Department of Ethics Education of Seoul National University of Education) will discuss various education-related issues, such as leveraging ethics to prevent or minimize the malicious use of large-scale, generative AI, and will also collect opinions about the development of AI ethics education materials for the general public.