AI-enabled change offers new pathways to value, better operational efficiency and service evolution. But frequent failures in customer-facing scenarios suggest that success, at least initially, will emerge from internal solutions that use retrieval-augmented generation (RAG). This article explores its use in managing key operational risks, with a focus on people risk.
Remind me: What is RAG?
The briefest translation of the academic literature
Think back to your first role in digital technologies. Academic knowledge of horizontal tech in hand, it took time to apply those concepts vertically, to concrete business problems. Translating theory to practice required knowledge of the organisation's tech landscape, process integration and workforce adoption. Better context (organisational, commercial) led to value.
A similar need is apparent with large language models (LLMs). Pre-trained on the noise of the open internet for generic task completion, they lack the ‘enterprise truth’ needed for reliability, accuracy, traceability and timeliness.
Enter Retrieval-Augmented Generation. RAG supplies the generative model with more timely, accurate and contextually relevant information. While these new sources don’t update the underlying model, they do provide users with better insights, grounding answers in specific, factual sources. (Lewis 2020; DAIR.AI 2025.)
Think of all the institutional knowledge in your organisation. That includes email stores, intranet assets, learning management systems, data lakes, customer portals and investor decks. Ingesting these troves of specific, expertly curated context via RAG can underpin generated responses. It offers an avenue to the accuracy, traceability and timeliness the enterprise requires.
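The RAG pattern described above can be sketched in a few lines. This is a deliberately minimal illustration, not a production approach: real systems use embedding models and a vector store, whereas here a crude bag-of-words similarity stands in for retrieval, and the document corpus is hypothetical.

```python
# Toy sketch of the RAG pattern: retrieve the most relevant internal
# document, then ground the prompt with it before calling the model.
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Crude bag-of-words 'embedding' for illustration only."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank enterprise documents against the query; return the top k IDs."""
    qv = vectorise(query)
    ranked = sorted(corpus,
                    key=lambda doc_id: cosine(qv, vectorise(corpus[doc_id])),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, corpus: dict[str, str]) -> str:
    """Inject retrieved context so the model answers from enterprise truth."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal knowledge sources
corpus = {
    "policy": "Working at height requires a permit and harness inspection.",
    "intranet": "The quarterly investor deck is published on the portal.",
}
print(grounded_prompt("What does working at height require?", corpus))
```

Note that the underlying model is never retrained: only the prompt changes, which is what makes the answer traceable back to a named source document.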
How does AI impact operational risk management?
Operational risk goes beyond technology risks
Operational risk regulations require organisations to manage a multitude of critical risks. This spans everything from tech, data and cyber risks to people, supplier, market, financial and climate risks. And while this regulation (CPS 230) directly impacts Australian financial services firms, the responsibilities also pass to critical suppliers. That means organisations that are financed by, or act as material suppliers to, Australia’s financial sector are also required to govern and mitigate these risks. (See PS6/21 and BoE 2025 for the relevant context for the UK.)
In other words, effective management of operational risks can lead to greater business resilience in the face of deepening, diversifying threats and disruptions.
Proper management of operational risks requires good data. That spans information about the operating environment, the capabilities and role designations for people, changes in the regulatory and market landscape, and issues along the supply chain.
AI therefore cuts both ways: it can enable operational risk management, and it is itself a type of risk covered by the regulation. On the one hand, AI involves technology, data and privacy risks. On the other, AI can identify and predict changes in risk exposure and how well existing controls address those exposures. The global outage of July 2024 brought technology risks into focus. Recent and forthcoming regulatory changes for privacy and responsible AI are likely to do the same for data, people and autonomy risks.
Use case: How can RAG help manage people risks?
Apply RAG to your most critical people risks
Recently I spoke with several executives about the potential for AI in managing people risks. They represented organisations in what may be termed the ‘heavy industries.’ Think construction, mining, energy and transport. Their context involves managing:
· Physical safety. These risks are typically visible onsite and involve the dangers of working alongside gargantuan plant and machinery. These literal moving parts threaten workers’ occupational health and safety, whether in confined working spaces, when working at height, or when conducting inspections and maintenance in isolated or remote areas.
· Psychosocial safety. As the name suggests, psychosocial hazards are a mix of psychological and interpersonal risks. They include poor organisational justice, ambiguous role boundaries, interpersonal conflict, bullying, harassment, and extended periods of overwork. Hazards from home and increasing isolation also play into psychosocial safety. (Comcare 2024 and Safe Work 2022.)
These physical and psychosocial hazards interact. Often psychosocial hazards build over time (typically 6–12 months) towards a critical safety incident or near-miss. Ultimately, these hazards cause a brief loss of mindfulness or personal resilience, which can result in a catastrophic injury, or worse. But there are two key difficulties in these scenarios. The first is pinpointing the exact root cause(s) of safety incidents, given how many events may occur and magnify each other over this timeframe. The second, and most intractable, is identifying, prioritising and addressing potential hazards continuously. This is difficult because of the resourcing and effort required to conduct in-person inspections manually across hundreds of sites or in geographically remote areas.
So, how can digital leaders support frontline managers with these critical risks? Implementing AI risk management systems with RAG can help. In general terms:
1. Use RAG with policy documents, safety controls and learnings from human resources. This harnesses reliable, recent knowledge to find critical risks.
2. Inject anonymised workforce wellbeing trends, site inspection reports and salient points from incident investigations. This grounds recommendations in workforce realities.
3. Regularly review, correct and enrich generated recommendations. Do so with safety experts, organisational psychologists and key people managers. Store those aggregated learnings as another source of trusted data for future prompts.
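The three steps above can be sketched as a simple feedback loop. Everything here is illustrative: the policy text, trend data and recommendation logic are hypothetical, and `generate_recommendation` is a stub where a real system would call an LLM grounded via RAG.

```python
# Minimal sketch of the three-step loop, with hypothetical data.

# Step 1: trusted policy and safety-control knowledge
policies = ["Fatigue policy: no more than 5 consecutive night shifts."]

# Step 2: anonymised workforce signals (metadata only, no names)
anonymised_trends = ["Site 12: overtime hours up 30% this quarter."]

# Step 3 output: expert-corrected learnings, fed back as trusted context
reviewed_learnings: list[str] = []

def generate_recommendation(context: list[str]) -> str:
    """Stub for a RAG-grounded LLM call; here it just echoes its context."""
    return "Review rostering at Site 12. Grounds: " + " | ".join(context)

def review_and_store(draft: str, expert_correction: str) -> None:
    """Experts correct the draft; the learning becomes future prompt context."""
    reviewed_learnings.append(expert_correction)

# Steps 1 and 2: assemble trusted, anonymised context for the prompt
context = policies + anonymised_trends + reviewed_learnings
draft = generate_recommendation(context)

# Step 3: a safety expert enriches the recommendation for next time
review_and_store(draft, "Confirmed: pair fatigue checks with night-shift rosters.")
```

The key design choice is the loop itself: expert corrections are stored as another retrievable source, so the system's grounding improves with each review cycle rather than drifting.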
This general approach can enhance the reliability of commodity AI risk management tools. Using RAG for specific organisational context, traceable evidence and timely trend insights offers practitioners the assurance they need. It enables a continuous approach to reducing risks, moving upstream to deal with this complex set of hazards at their source.
At the same time, take care to avoid privacy breaches. While these systems can surface critical risks to people, they may also identify an individual's personal or health information without their consent. Careful governance is required to focus on the what (the hazards and their causes) rather than the who. When moving upstream to deal with these complex hazards at the source, it should suffice to deal in metadata rather than individualised information. Work closely with quality, risk and human resources leaders to assure these protections.
Architecting a compliant AI risk management system
Why the tech foundations and information architecture matter
As the oft-quoted author and educator Seth Earley puts it: “There’s no AI without IA.” Making the information architecture fit for the purpose of RAG aligns with the data strategy more generally. In other words, it remains important to make the flow of information robust against attack, able to flex at scale, and traceable in its provenance through to insight and action.
Some factors to consider as you work to augment the existing IA for RAG include:
1. Data platforming. RAG relies on vector techniques, so your current data lake or lakehouse needs to be fit for that purpose. If your current platform doesn’t have this capability the time to start planning is now.
2. Model management. Risk management is just one of many things the modern enterprise requires of AI. Don't be tempted to take a ‘one LLM, multiple RAG’ approach. That will quickly run into scalability and traceability issues. Design a multi-model strategy from the outset to avoid unnecessary complexity.
3. Blend commodity and bespoke solutions. Effective commodity tools in narrow scenarios have emerged (Yuan 2025). Even so, retaining the ownership and bespoke development of RAG in-house is critical to maintain accuracy and IP. That’s particularly true when dealing with the risks of modern slavery and worker isolation.
4. Minimise data transfer. The less information transits between solutions, the better. Zero-copy architectures, where faithfully implemented, reduce the escalating costs and security risks associated with data movement.
5. Modern infrastructure. Because AI systems, critical data and legacy infrastructure are operational risks, effective multi-cloud foundations are critical enablers of the information architecture. Coordinate execution for infrastructure, IA and AI in lockstep.
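Point 2 above (model management) can be made concrete with a routing table: each workload gets a deliberately chosen model and retrieval source, rather than one shared LLM serving every RAG pipeline. This is a hypothetical sketch; the model names, task types and sources are placeholders, not real products.

```python
# Toy sketch of a multi-model strategy: route each task type to a
# dedicated model and retrieval source. All names are illustrative.

ROUTES = {
    "risk_summarisation": {"model": "small-fast-model", "source": "incident_reports"},
    "policy_qna":         {"model": "grounded-qa-model", "source": "policy_store"},
    "trend_analysis":     {"model": "analytics-model",   "source": "wellbeing_metrics"},
}

def route(task_type: str) -> dict[str, str]:
    """Pick the model and retrieval source for a task. Unknown tasks fail
    loudly so new workloads get a deliberate routing decision, not a
    silent fallback to a default LLM."""
    if task_type not in ROUTES:
        raise ValueError(f"No route configured for task: {task_type}")
    return ROUTES[task_type]

print(route("policy_qna"))
```

Keeping the mapping explicit is what preserves traceability: every generated answer can be attributed to a known model and a known grounding source.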
In other words, success with RAG goes hand in hand with maturing the organisation's information and infrastructure.
Conclusion
The problems with generic LLMs are well documented. Solving them involves grounding in your enterprise truth. That’s particularly true when applying AI to mitigate operational risks, such as the hazards faced by the workforce. Choose RAG when accuracy, safety, assurance or trust are critical for your success.
References
1. P. Lewis et al., ‘Retrieval-augmented generation for knowledge-intensive NLP tasks,’ Advances in Neural Information Processing Systems (2020), accessed 25 Mar 2025. (Lewis 2020)
2. ‘Retrieval Augmented Generation,’ Prompt Engineering Guide, DAIR.AI via GitHub, Jan 2025, accessed 25 Mar 2025. (DAIR.AI 2025)
3. ‘APRA finalises new prudential standard on operational risk,’ Australian Prudential Regulation Authority, Jul 2023, accessed 25 Mar 2025. (CPS 230)
4. ‘Operational resilience of the financial sector,’ Bank of England, Mar 2025, accessed 25 Mar 2025. (BoE 2025)
5. ‘PS6/21 | CP29/19 | DP1/18 Operational Resilience: Impact tolerances for important business services,’ Bank of England, Prudential Regulation Authority and Financial Conduct Authority, Mar 2021, accessed 25 Mar 2025. (PS6/21)
6. ‘Psychosocial hazards,’ Comcare, Australian Government, 2024, accessed 25 Mar 2025. (Comcare 2024)
7. ‘Psychosocial hazards,’ Safe Work Australia, Jul 2022, accessed 25 Mar 2025. (Safe Work 2022)
8. A. Yuan et al., ‘Improving workplace wellbeing in modern organisations: A review of large language model-based mental health chatbots,’ ACM Transactions on Management Information Systems, Vol. 16, No. 1, Feb 2025, pp. 6–9, accessed 25 Mar 2025. (Yuan 2025)