2025-04-25

Professor Ahmed Elragal on the future of AI in auditing

Ahmed Elragal is a prominent academic with 30 years of experience as a researcher, consultant, and teacher in AI. His research spans several areas but focuses primarily on data science, data-driven decision-making, and human-centered AI. Ahmed is a professor in the Information Systems division at Luleå tekniska universitet (LTU), where he leads a research team working with Senseworks on AI applications for financial auditing.

Ahmed, AI is all the hype in financial auditing at the moment. You have been involved in multiple projects over the years - where do you see the most interesting applications of AI being applied in financial auditing?

The possibilities are enormous. For example, AI can change how auditors collect and analyze data from financial systems, such as ERP systems. AI systems can swiftly, accurately, and comprehensively read and analyze journal and general ledger records, regardless of volume. By utilizing prompt engineering, you can formulate natural language questions such as how many transactions a specific customer has made, how much money has been spent on a specific account, and what the balance of a specific account is, among others. These inquiries can be addressed using a retrieval-augmented generation (RAG) pipeline built on an LLM. AI can identify trends, capture outliers, segment customers, and make predictions. AI systems, in the form of Robotic Process Automation (RPA) tools, could automate searches, data entry, or support tasks. Furthermore, we can leverage AI to analyze invoices, identify trends, and generate projections while providing insights for decision-making through dashboards. Generative Adversarial Network (GAN) algorithms could be employed to generate preliminary text, such as draft reports.
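The RAG-style querying described above can be sketched in a few lines of Python. This is a toy illustration only: the journal data, the keyword retriever, and the prompt format are all hypothetical, and a production system would use vector embeddings and an actual LLM call rather than a string match.

```python
# Toy sketch of retrieval-augmented querying over ledger records.
# All data and helper names are hypothetical; a real system would use
# vector embeddings and pass the assembled prompt to an LLM.

JOURNAL = [
    {"id": 1, "customer": "Acme AB", "account": "4010", "amount": 1200.0},
    {"id": 2, "customer": "Acme AB", "account": "5460", "amount": 300.0},
    {"id": 3, "customer": "Nord Ltd", "account": "4010", "amount": 850.0},
]

def retrieve(question, records):
    """Return records whose customer name or account number appears in the question."""
    q = question.lower()
    return [r for r in records
            if r["customer"].lower() in q or r["account"] in q]

def build_prompt(question, hits):
    """Assemble the retrieved context and the question into an LLM prompt."""
    context = "\n".join(
        f"entry {r['id']}: customer={r['customer']} account={r['account']} amount={r['amount']}"
        for r in hits)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

question = "How many transactions has Acme AB made?"
hits = retrieve(question, JOURNAL)        # the two Acme AB entries
prompt = build_prompt(question, hits)     # grounded prompt for the LLM
print(prompt)
```

The design point is that the model only sees records relevant to the question, which keeps answers grounded in the actual ledger rather than the model's training data.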

Do you envision AI replacing human auditors in some areas, or will it always remain a tool to augment human judgment?

Well, that is an essential question. Here, we need to explain the different options for how auditors are supposed to interact with AI. The human-in-the-loop (HITL) and human-on-the-loop (HOTL) approaches are designed to balance the system's level of autonomy against the need for human control. In HITL, human operators - auditors, in this case - often approve specific actions and interact directly with the AI while making decisions. In contrast, HOTL allows humans to monitor the AI without making decisions about every action, intervening only when necessary. It is worth mentioning that the EU advocates for human-in-the-loop. Accordingly, we envision AI as a tool to support, enhance, and possibly revolutionize the auditing profession. However, it is not intended to replace human auditors but to make them more efficient, accurate, and comprehensive.

What upcoming innovations or research areas in AI excite you the most for their potential application in financial auditing?

As we speak, agentic AI is one of the most exciting areas in AI. Agentic AI is a class of autonomous systems that aim to accomplish complex tasks over long periods; they learn context and make decisions. They are designed to operate with a certain level of autonomy, allowing them to deal with unexpected situations and optimize performance over time. While most AI systems today focus narrowly on individual model capabilities, often ignoring broader emergent behavior, agentic AI aims to solve long-horizon tasks through sophisticated reasoning. I expect agentic AI to be used in financial auditing in the near future.

Traceability is a key theme in financial auditing. How can you ensure transparency and explainability in AI-driven audit decisions?

The exponential increase and penetration of AI in our lives are undeniable. According to McKinsey's AI surveys, the percentage of companies that had adopted AI in at least one function rose from 20% in 2017 to 56% in 2021, and it jumped to 65% in the 2024 report. Although such penetration and adoption occur in various areas, AI has revealed challenges such as a lack of explainability, bias, and accountability. Adoption needs to address such challenges adequately through frameworks and tools that identify bias and make AI use sustainable and compliant. If we do not govern and control how we use AI, AI will control us, in financial auditing and elsewhere.

Financial auditing is also a highly sensitive domain where developers of AI solutions have to pay close attention to regulatory and data security concerns. What are some of the risks that need to be considered when applying AI in the financial auditing domain, and how can you address them?

Technology, including AI, is not neutral; it is shaped by those who develop, configure, and use it. Thus, establishing a regulatory framework is essential to reducing associated risks. The EU AI Act becomes effective in 2026 and will require AI systems to comply. The AI Act defines four levels of risk for AI systems: unacceptable (which will be prohibited), high (which requires conformity assessment), limited (which requires transparency), and minimal risk. Those using AI-assisted financial auditing in the EU will be subject to the same law and must adhere to its requirements.

What makes you excited about the work you are doing with Senseworks at the moment?

We have previously worked with Senseworks on an AI project where we identified fraudulent transactions in financial records using ML algorithms. Now, we are collaborating on utilizing LLMs for the auditing process, aiming to complete it in less time and in a more efficient and comprehensive way. Both projects demonstrate that Senseworks' vision aligns with ours as researchers: use AI to make an impact!

Thanks for your input, Ahmed - we look forward to continuing to work together going forward!
