
ENACT: A Practical Methodology for Ethical Risk Assessment of AI in Organizations

  • admin
  • Sep 24
  • 3 min read

Last year, the ENACT research team, together with business partners from DNB, NAV, Posten Bring, Hypatia Learning and MedSensio, designed the first iteration of the Ethical Risk Assessment of AI in Practice (ENACT) methodology. Through iterative, inclusive and value-based design, we first identified businesses’ needs and wishes for ethical risk assessment of AI, and then together developed and tested the ENACT methodology in cross-sectoral settings.


To begin with, what are a methodology and a risk assessment? A methodology is a clear roadmap for conducting a coordinated activity to achieve set goals. Methodologies guide us through complex tasks and ensure that they are carried out in alignment with set objectives and organizational needs. In the context of risk assessment, a methodology defines the framework of the risk analysis, aligning it with existing standards, norms and best practices in the field.

Due to the increasing presence of AI-based systems in the workplace, the risks connected to their design and deployment are not only material but also ethical. How can we, as an organization, ensure inclusive and fair design and use? How can we align an off-the-shelf AI solution with our core values and community responsibilities? Many more ethical dilemmas confront those who buy, embed and deploy AI systems in their workflows.


What is the ENACT methodology and what does it consist of? The ENACT methodology was developed by a cross-sectoral consortium to offer Norwegian organizations a process-oriented tool to identify, discuss and find solutions to ethical dilemmas connected to the design, use and deployment of AI-based tools. The ENACT approach builds on an existing model-based method for security risk assessment known as CORAS. To capture the core of CORAS while tailoring it to organizational contexts and needs, we limited the number of steps to three: context establishment, risk assessment and risk treatment.



These steps were distributed across three workshops of two hours each and are as follows:

  1. Understanding the context for the use case analysis (through identification of values, actors, acceptance criteria, and conducting a high-level analysis),

  2. Assessing possible ethical risks for the organisation, its employees and end-users (through risk scenarios connected to the use-case),

  3. Evaluating the level of ethical risk and deciding on preventive measures and risk handling strategies accordingly.


What’s needed from the organization? To successfully implement the ENACT methodology in an organizational setting, we recommend that you:

  • Choose a use case relevant to the business (any AI-based system, your own or off-the-shelf, at any point in the system’s life cycle, from idea to well-established system),

  • Assign ENACT use-case holders and facilitators responsible for preparing the workshops, signing up participants, conducting the workshops and reporting,

  • Gather a variety of expertise and seniority for the assessment (up to 10 participants, preferably forming an interdisciplinary and diverse group of stakeholders),

  • Tailor some of the suggested ENACT tasks to your organizational setting and current needs,

  • Make sure to engage all participants in the workshops through facilitation techniques or the Story-Dialogue method,

  • Focus on the ENACT process and discovery!


Findings from cross-sectoral testing. ENACT was preliminarily tested in a cross-sectoral setting, in a digital format, with over 27 participants (non-unique) from logistics, welfare, finance, education and medical services.

The findings identified in the initial design phase were reported in an academic article published in 2025 as part of the proceedings of the Eighteenth International Conference on Advances in Computer-Human Interactions.


The key takeaways are:

  • Applying the method in a cross-sectoral setting is complicated by business confidentiality, the AI application context and differing sectoral traditions.

  • The structure of the workshops has to be flexible and resource-efficient to address organizational needs and fit into everyday workflows.

  • A limited number of participants and awareness of existing power relationships are crucial for structured dialogue and critical reflection during the workshops.

The paper, “Ethical Risk Assessment of AI in Practice Methodology: Process-oriented Lessons Learnt from the Initial Phase of Collaborative Development with Public and Private Organisations in Norway”, has received the Best Paper Award. If you wish to read more about our findings, please follow this link.




References

Murashova, N., Lindblom, D. S., Omerovic, A., Dahl, H. E., & Bergsjø, L. O. (2025). Ethical Risk Assessment of AI in Practice Methodology: Process-oriented Lessons Learnt from the Initial Phase of Collaborative Development with Public and Private Organisations in Norway. Proceedings of the Eighteenth International Conference on Advances in Computer-Human Interactions (ACHI). Nice, France: 33-40.


Saplacan, D., et al. (2018). Inclusion through design and use of digital learning environments: issues, methods and stories. Proceedings of the 10th Nordic Conference on Human-Computer Interaction.


Saplacan, D., et al. (2018). Reflections on using Story-Dialogue Method in a workshop with interaction design students, CEUR Workshop Proceedings Castiglione della Pescaia (GR), Italy.


Lund, M. S., et al. (2010). Model-driven risk analysis: the CORAS approach, Springer Science & Business Media.


