KISDI Media Room

  • Field Meetup for Reinforcing AI Ethics and AI Reliability

    • Pub date: 2023-05-24
    • Place: Genesislab, Seoul
    • Event date: 2023-05-11

■ Event: Field Meetup for Reinforcing AI Ethics and AI Reliability

■ Date: May 11, 2023 (Thursday), 14:30~16:30

■ Venue: Genesislab, Seoul

■ Participants: 20 persons from the Ministry of Science and ICT, industry, academia, and relevant government agencies

On May 11 (Thursday, 2:30 PM), KISDI, the Ministry of Science and ICT, and the Telecommunications Technology Association jointly hosted a field meetup at the Seoul main office of Genesislab Inc. to discuss establishing AI ethics and reliability at a time when the use of large-scale generative AI is growing by leaps and bounds, as exemplified by the massive popularity of ChatGPT. Representatives from digital companies and academia were invited to the meetup.

As a follow-up to the ‘New Digital Order Establishment Plan’ (May 2), which was on the agenda of a recent Cabinet meeting, the meetup was designed to encourage lively interaction with members of the press: journalists took part in the discussions, posing questions directly to the participants.

The rapid emergence of ChatGPT has made people realize that AI could achieve human-level intellectual capabilities, which has in turn raised serious concerns about its unintended dangers, such as the generation of false information and amplified bias.

In 2020, the Ministry of Science and ICT and KISDI established the AI Ethics Standards, which consist of three principles* and ten core requirements**, for the purpose of creating human-centered AI, using the AI recommendations published by the OECD and the EU as references. The Ministry of Science and ICT and KISDI then announced that all members of society involved in developing and utilizing AI should adhere to these standards.

* Three Principles: 1. The Principle of Human Dignity, 2. The Principle of the Common Good of Society, and 3. The Principle of Purposefulness of Technology

** Ten Core Requirements: ① guarantee human rights, ② protect privacy, ③ respect diversity, ④ prohibit infringements, ⑤ provide a public service, ⑥ promote solidarity, ⑦ ensure sound data management, ⑧ enforce accountability, ⑨ guarantee safety, and ⑩ ensure transparency.

Rather than merely presenting a proposed set of ethical standards, the meetup served as a venue for discussing concrete measures for putting ethical norms into practice: participants shared cases of how the government and the private sector apply these norms, as well as future plans for promoting AI ethics and AI reliability.

As the first speaker, Manager Choi Dong-won of the Ministry of Science and ICT explained the government’s efforts to secure AI ethics and reliability and its policies for establishing a self-regulatory ethics system led by the private sector, from the establishment of the AI Ethics Standards to the creation of tools for putting them into practice (the AI Ethics Standards Voluntary Inspection Table and guidelines on developing AI). He also discussed the operation of the AI ethics policy discussion group (forum) and announced the government’s future plans.

Director Moon Jung-wook of KISDI gave an overview of the AI Ethics Standards Voluntary Inspection Table, a tool developed jointly by KISDI and the Ministry of Science and ICT that AI operators can use to diagnose their own levels of compliance with the AI Ethics Standards. He then presented examples of companies’ use of the table in various AI areas (chatbots, writing, video production, etc.).

President Lee Kang-hae of the Telecommunications Technology Association (TTA) presented the Guidelines on Developing AI Ethics, which set out the technological requirements that developers can use as references, i.e., four core requirements that can be realized with current technologies, and also discussed how these guidelines have been applied in the public sector, medical care, and autonomous driving.

Next, Director Song Dae-seop of NAVER gave a presentation on NAVER’s AI Code of Ethics, which NAVER developed together with Seoul National University’s AI Policy Initiative (SAPI). He was followed by Vice-President Kim Yoo-cheol, head of LG’s AI Research Institute, who explained LG’s independent efforts to strengthen AI ethics and reliability, with a focus on ‘LG’s AI Ethical Principles’, and outlined LG’s future plans. As the final speaker, CEO Lee Young-bok of Genesis Lab spoke about how the development guidelines were applied to secure reliability in recruitment, an area where fairness is especially important.

In addition, during an in-depth discussion of possible approaches to establishing AI ethics and reliability, representatives of the participating companies explained their own efforts to implement AI ethics and reliability in their business operations and discussed the tasks that must be tackled to firmly embed AI ethics throughout society. The participants then evaluated the current levels of risk posed by AI and examined whether those risks remain within a controllable range.

Park Yoon-gyu, Second Vice Minister of the Ministry of Science and ICT, remarked, “The advent of ChatGPT has resulted in the rapid adoption of large-scale generative AI by many industries and companies, to the extent that it is becoming ubiquitous in our daily lives. Given this situation, AI ethics and reliability issues are entering a new phase.” He added, “Ensuring AI ethics and trustworthiness at every stage of technology development and utilization is critical to a company’s survival. We will therefore strive even harder to secure AI ethics and reliability, drawing on the issues discussed at this meetup.”