KISDI Media Room

  • The Third Open Forum of Artificial Intelligence (AI) Ethics Policy

    • Pub date: 2022-08-29
    • Place: Post Tower in Seoul
    • Event date: 2022-08-29

Hosting the Third Open Forum of AI Ethics Policy

- Sharing corporate cases of autonomous activities to ensure AI ethics

ㅇ Date: Aug. 26, 2022 (Fri.), 2:00-4:00 p.m.

ㅇ Venue: Post Tower in Seoul

KISDI, jointly with the Ministry of Science and ICT, held the Third Open Forum of AI Ethics Policy at Post Tower in Jung-gu, Seoul, on August 26.

The forum was held to discuss measures for autonomous corporate activities that spread and promote AI ethics, and offered a glimpse of how AI startups are putting AI ethics into practice. Scatter Lab (CEO Jong-yoon Kim), developer of the AI chatbot “Iruda”; Alchera (CEO Young-gyu Hwang), developer of an AI-based video recognition solution; and Wrtn Technologies, Inc. (CEO Se-young Lee), provider of AI-based writing solutions, each shared how they worked to address ethical concerns in the course of developing their AI services.

First, Jong-yoon Kim, CEO of Scatter Lab, opened the discussion by presenting the final draft of the “Checklist of Scatter Lab AI Chatbot Ethics”. Scatter Lab has been running an open beta test of “Iruda 2.0”, a relationship-oriented AI chatbot designed to hold everyday conversations like a friend. The checklist sets out items for realizing corporate ethics across the entire planning, development, and operation process of the “Iruda” chatbot, turning the company's Code of Ethics for AI Chatbots into concrete questions organized around the 10 key requirements of the “AI Ethics Standards” (Dec. 2020) proposed by the Ministry of Science and ICT. When Scatter Lab announced the “Code of Ethics for AI Chatbots” in March 2022, it also made public which values the “Iruda” chatbot embodies, such as AI development for humans, respect for diverse values of life, and privacy protection.

Min-kook Cho, Research Head at Alchera, presented the results of technical reliability verification for the company's AI-based wildfire detection solution. Alchera develops and provides anomaly detection and identity recognition solutions based on image recognition technology. For the verification, Alchera applied the “Standardized AI Development Guide” (hereinafter, the “Development Guide”) in the field after modifying it to fit the company's characteristics. The Development Guide provides a candidate pool of technical verification items that developers can use to check whether reliability is ensured throughout the AI product and service development process; companies may select and adapt the items at their discretion, turning them into internal guidelines suited to their own products and services. Alchera selected the reliability requirements relevant to anomaly detection from among those proposed in the Development Guide, checked whether each was satisfied, and derived inspection results and improvements.
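The workflow described above (drawing verification items from the Development Guide's candidate pool, narrowing them to the company's service domain, and recording whether each is ensured) can be pictured with the minimal Python sketch below. The item identifiers, requirement wordings, and domain labels are hypothetical, as the article does not reproduce the Guide's actual contents.

```python
from dataclasses import dataclass

# All identifiers and item wordings below are invented for illustration;
# the actual Development Guide contents are not disclosed in this article.

@dataclass
class VerificationItem:
    item_id: str            # identifier within the candidate pool
    requirement: str        # reliability requirement the item checks
    domains: frozenset      # service domains the item is considered relevant to

@dataclass
class InspectionResult:
    item: VerificationItem
    ensured: bool           # whether reliability was confirmed for this item
    improvement: str = ""   # follow-up action if not ensured

# Illustrative candidate pool (hypothetical examples)
CANDIDATE_POOL = [
    VerificationItem("R-01", "Training data reflects expected field conditions",
                     frozenset({"anomaly_detection", "chatbot"})),
    VerificationItem("R-02", "False-alarm rate measured on held-out field data",
                     frozenset({"anomaly_detection"})),
    VerificationItem("R-03", "Generated text filtered for personal information",
                     frozenset({"chatbot", "writing_assistant"})),
]

def select_for_domain(pool, domain):
    """Keep only the verification items relevant to the company's service domain."""
    return [item for item in pool if domain in item.domains]

# A company focused on anomaly (e.g. wildfire) detection keeps R-01 and R-02,
# then records an inspection result for each selected item.
selected = select_for_domain(CANDIDATE_POOL, "anomaly_detection")
results = [InspectionResult(item, ensured=False, improvement="to be reviewed")
           for item in selected]
for r in results:
    print(r.item.item_id, "-", r.item.requirement)
```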

Se-young Lee, CEO of Wrtn Technologies, presented the objectives and draft of the “Ethical Checklist for Wrtn AI Writing Tools”. Wrtn Technologies built an assistant writing solution called “WRTN” based on natural language processing and hyperscale AI. The first draft of the checklist was developed by modifying the “AI Ethical Self-Checklist” (hereinafter, the “Ethical Checklist”), announced at the forum inauguration ceremony in February 2022, to fit the organizational characteristics of Wrtn Technologies. Its purpose is to establish checklist items suited to the writing-assistance field, drawing on the candidate pool of items in the Ethical Checklist, for use as internal guidelines.

At the forum inauguration ceremony in February 2022, the Ministry of Science and ICT released the “AI Ethical Self-Checklist” and the “Standardized AI Development Guide”, which the private sector can reference for ethical practice and reliability verification when planning and developing AI products and services. Both documents provide sector-neutral candidate pools of items so that organizations can select, reorganize, and apply them according to the purpose of the AI and their own specific characteristics.
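As a rough illustration of this select-and-reorganize approach, the sketch below filters a hypothetical, sector-neutral candidate pool of checklist items into an organization-specific internal guideline grouped by lifecycle stage. All stage names, requirement labels, and questions are invented for illustration and are not drawn from the actual Ethical Checklist.

```python
# Hypothetical candidate pool: each item pairs a lifecycle stage and a key
# requirement with a concrete question (all contents invented for illustration).
CHECKLIST_POOL = [
    {"stage": "planning",    "requirement": "privacy",        "question": "Is personal data collection minimized?"},
    {"stage": "development", "requirement": "safety",         "question": "Are harmful outputs tested before release?"},
    {"stage": "operation",   "requirement": "accountability", "question": "Is there a channel for user reports?"},
    {"stage": "development", "requirement": "privacy",        "question": "Is training data anonymized?"},
]

def build_internal_guideline(pool, stages, requirements):
    """Keep only the items matching the organization's chosen stages and
    requirements, grouped by lifecycle stage for use as internal guidelines."""
    guideline = {}
    for item in pool:
        if item["stage"] in stages and item["requirement"] in requirements:
            guideline.setdefault(item["stage"], []).append(item["question"])
    return guideline

# e.g. a writing-assistant startup focusing on privacy and safety
internal = build_internal_guideline(
    CHECKLIST_POOL,
    stages={"planning", "development", "operation"},
    requirements={"privacy", "safety"},
)
for stage, questions in internal.items():
    print(stage, questions)
```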

This forum served as a venue for sharing corporate cases of autonomous activities to ensure AI ethics: Scatter Lab, Alchera, and Wrtn Technologies adapted the “AI Ethical Self-Checklist” into internal guidelines suited to the characteristics of their services, and used the “Standardized AI Development Guide” to verify the reliability of those services.

By summarizing the deliverables of this forum and the difficulties companies faced, the Ministry of Science and ICT plans to release candidate pools of checklist items by industry field, so that other companies in service sectors such as chatbots, crisis detection, and writing can make broader use of them.