Naver unveils AI safety framework to address potential risks

Seoul, June 17
Naver, South Korea's biggest internet portal operator, on Monday unveiled a proactive scheme designed to assess and manage risks associated with artificial intelligence (AI) as part of its efforts to safely develop and use the fast-evolving technology.

Naver said its AI Safety Framework (ASF) identifies two categories of potential AI-related risk: severe disempowerment of the human species and misuse of the technology.

Under the framework, Naver will regularly assess the risks posed by its AI systems, with evaluations every three months for its most advanced AI technologies, referred to as "frontier AIs."

In addition, the company will conduct extra assessments whenever an AI system's capabilities grow more than six-fold within a short period, Yonhap news agency reported.

The company will also apply its AI risk assessment matrix to check the potential for technology misuse, considering the system's purpose and risk level before distribution.

Naver said it will continue to refine the ASF to better reflect cultural diversity, helping governments and companies at home and abroad develop their own sovereign AIs.

"Naver will continue to develop sovereign AIs for the global market and advance its ASF to contribute to creating a sustainable AI ecosystem where many different AI models that reflect the culture and values of different regions can be safely used and coexist," CEO Choi Soo-yeon said.