
Ethical AI in Practice: Lessons from the ENACT Project


On 1 September, the Ethical risks assessmeNt of Artificial intelligenCe in pracTice (ENACT) project hosted an event where it presented a method for ethical risk assessment, shared experiences from actual use, and invited participants to reflect on and discuss how, together, we can ensure the responsible use of artificial intelligence.



From the nuclear bomb to social media, from advances in health technology to facial recognition, almost all new technologies have raised ethical issues and concerns in one way or another. But perhaps none as much as artificial intelligence. That is why the following question lies at the core of ENACT: How can we assess and manage ethical risks when using artificial intelligence in practice?


For many organizations, “AI ethics” is still a vague, almost abstract concept: something mentioned in strategy documents but rarely translated into daily practice. This gap between principles and action is precisely what ENACT seeks to address. Supported by the Research Council of Norway, ENACT brings together researchers, practitioners, and companies to develop a methodology for turning high-level ethical principles into practical steps.


This event was a key step in ENACT's work to disseminate and share its results, and it provided concrete tools for handling ethical risk assessment of AI in practice. There are currently a number of overarching frameworks for the use of AI, such as the EU's Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, along with frameworks from other actors. These are important, but they sit at a general level and can be difficult to apply in a practical setting. ENACT aims to develop a methodology that helps the Norwegian public and private sectors do exactly that.


Project Manager Hans Torvatn presented the background for and the idea behind ENACT. He was followed by Natalia Murashova, who presented the ENACT Methodology for Ethical Risk Assessment of AI. Benjamin Semujanga presented preliminary results from completed assessments, and Magnus Stavik Rønning from DNB and Robindra Prabhu from NAV shared their experiences with the ENACT methodology. The event closed with a panel discussion with Jim Tørresen (UiO), Diana Saplacan Lindblom (UiO), Leonara Bergsjø (HiØ) and Patrick Mikalef (NTNU).


You can watch recordings of the event on YouTube.


From Frameworks to Practice - The ENACT Methodology 

The ENACT methodology is based on an existing model-based method for security risk assessment known as CORAS. To capture the core of CORAS while tailoring it to organizational context and needs, ENACT limited the number of steps to three: context establishment, risk assessment, and risk treatment.


At the heart of ENACT's approach are three structured two-hour workshops, one for each of these steps, designed to help organizations move step by step from reflection to action.


Workshop 1: Setting the Context

Participants begin by identifying core organizational values, relevant actors, and the benefits and risks of the AI system under consideration. This exercise forces participants to articulate what matters most to them, a step that is often overlooked in traditional risk assessments. It also broadens the conversation to include those who might be indirectly affected, ensuring that ethical reflection is not limited to compliance officers or engineers.
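
To make the output of this first workshop concrete, here is a minimal sketch in Python of how the context could be recorded. It is purely illustrative: every class name, field, and example value is an assumption made for this post, not an artifact of the ENACT methodology, which is workshop-based rather than code-based.

from dataclasses import dataclass, field

@dataclass
class Actor:
    # Anyone who affects, or is affected by, the AI system.
    name: str
    directly_affected: bool

@dataclass
class AssessmentContext:
    # Hypothetical record of Workshop 1 outputs.
    ai_system: str
    core_values: list[str] = field(default_factory=list)
    actors: list[Actor] = field(default_factory=list)
    expected_benefits: list[str] = field(default_factory=list)
    initial_concerns: list[str] = field(default_factory=list)

# Invented example: a CV screening assistant.
context = AssessmentContext(
    ai_system="CV screening assistant",
    core_values=["fairness", "transparency", "privacy"],
    actors=[Actor("HR staff", True), Actor("Job applicants", True)],
    expected_benefits=["faster and more consistent screening"],
    initial_concerns=["bias against atypical career paths"],
)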


Workshop 2: Exploring Scenarios

Once the values are clear, the group brainstorms possible scenarios: ways the AI system might undermine or strengthen those values. This is often the most creative and revealing part of the process. Participants share their hopes, fears, and “what if” questions, surfacing hidden assumptions and potential unintended consequences.
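
A scenario from this workshop can be captured as a simple record that links a “what if” to the value it touches. The sketch below is again hypothetical, with invented examples, and not an ENACT artifact:

from dataclasses import dataclass

@dataclass
class Scenario:
    # A "what if" linking the AI system to a value it may affect.
    description: str
    value_at_stake: str
    strengthens_value: bool  # False means the scenario undermines the value

# Invented examples for the hypothetical CV screening assistant above.
scenarios = [
    Scenario("Model downgrades CVs with career gaps", "fairness", False),
    Scenario("Explicit criteria make rejections easier to explain", "transparency", True),
]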


Workshop 3: Planning Action

The final step is to prioritize findings and decide what to do next. Which risks are acceptable? Which require mitigation? What changes to design, policy, or training are needed before deployment? By the end of the process, organizations have a clearer understanding of both the benefits and ethical challenges of their AI solution, and a plan for addressing them.
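
One plausible way to prioritize such findings, borrowing the likelihood-times-consequence logic of CORAS-style risk matrices rather than any documented ENACT step, is to rate each risk on ordinal scales and flag anything above an agreed acceptance threshold. The scales, threshold, and examples below are all assumptions for illustration:

from dataclasses import dataclass

# Illustrative ordinal scales; a real workshop would agree on its own.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
CONSEQUENCE = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class Risk:
    scenario: str
    likelihood: str
    consequence: str

    @property
    def level(self) -> int:
        # Simple product of the two ratings.
        return LIKELIHOOD[self.likelihood] * CONSEQUENCE[self.consequence]

def triage(risks, acceptance_threshold=3):
    # Rank risks, highest level first, and flag those above the threshold.
    for r in sorted(risks, key=lambda r: r.level, reverse=True):
        verdict = "needs treatment" if r.level > acceptance_threshold else "acceptable"
        print(f"{r.scenario}: level {r.level} -> {verdict}")

triage([
    Risk("Model downgrades CVs with career gaps", "likely", "severe"),
    Risk("Screening criteria drift over time", "rare", "moderate"),
])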


To learn more about the ENACT Methodology, please read their blog post A Practical Methodology for Ethical Risk Assessment of AI in Organizations.


About ENACT 

Ethical risks assessmeNt of Artificial intelligenCe in pracTice (ENACT) is a project funded by the Research Council of Norway. The project aims to develop governing ethical principles and guidelines for the Norwegian public and private sectors when deploying AI-based systems.


ENACT will establish the methodology as a fit-for-purpose approach, tailored to Norwegian organizations and businesses deploying their own AI-based systems, by combining employee-driven innovation with the application of Ethics by Design.

 
 