Ethical risks assessment of Artificial Intelligence in practice
Ethical risks assessmeNt of Artificial intelligenCe in pracTice (ENACT) is a project funded by the Research Council of Norway. The project aims to develop a methodology for applying ethical principles and guidelines in Norwegian public- and private-sector organizations that deploy AI-based systems.
ENACT will establish the methodology as a fit-for-purpose approach, tailored to Norwegian organizations and businesses deploying their own AI-based systems, by combining employee-driven innovation with the application of Ethics by Design.
WHAT WE DO
AI Ethics Research
Education and Training
Coaching Seminars
Public and private sector collaboration
AI Policy
Podcasts, TED Talks, and Webinars
PROJECT OBJECTIVES
ENACT offers support and AI ethics training to Norwegian public and private actors deploying AI-based systems, in an effort to mitigate ethical risk by increasing AI literacy.
COLLECT
Collect requirements for adequate training of stakeholders and students in ethical AI literacy, and review existing solutions that support such training.
DESIGN
Design a training program/course for stakeholders and students (i.e., define course objectives, structure, activities, etc.) and co-develop the content with them. The training will introduce the ENACT methodology: a tool for modeling and discussing ethical dilemmas during the design of AI systems, and for analyzing and assessing the ethical risks of AI.
EVALUATE
Evaluate the ENACT methodology through empirical pilots, focusing on its validation, acceptance, and overall impact on building ethical AI solutions in organizations.
ESTABLISH
Establish the ENACT methodology as a fit-for-purpose tool and explore its potential for adoption at scale by the Norwegian market.
DISSEMINATE
Disseminate outcomes to the research community, industry, and policymakers through intensive practices and channels, aiming to increase the ethical practice of AI.