Emerging Technology Risk – Building Digital Trust
At our recent round table event focusing on Emerging Technology Risk, which took place in psd’s Manchester office, Sofia Ihsan (Trusted AI Lead at EY) provided insight into how the Financial Services sector is responding to the use of technology and AI, including perspectives from consumers, business leaders and regulators.
Over the summer months prior to the event, consultants in the psd Risk and Compliance Practice spoke with numerous leaders across the Financial Services sector about their current challenges. These included Chief Executives, Chief Risk Officers, Directors, and Heads of Technology Risk and Operational Risk, from organisations varying in size and location across the Midlands, North England and Scotland. This ensured that the round table discussion was topical and delved into some of the biggest challenges within the sector.
Some of the common themes were functionality, culture and risk appetite, and the use of emerging technologies such as AI to increase productivity.
The participants were taken on a journey by John Manning, former Head of Technology Risk at Prudential, who spoke about the basics of risk management, technology risk, and technology risk in a changing world.
Following this discussion, Sofia Ihsan moved on to explore emerging technologies such as artificial intelligence (AI) and the key elements of the control framework (the EY Trusted AI Framework). Attendees generously shared their views and experiences, with the key takeaways being:
- Technology Risk, including AI, is a specialist component of Operational Risk: if risks crystallise, they will impact business processes.
- ‘Risk Management’ is just ‘Good Management’, in that organisations need to define and maintain their processes to their desired quality – a balance between too good and not good enough (i.e. risk appetite).
- Organisational resourcing and funding for appropriate IT performance needs to cover steady-state processes and business and technology change, and include a means of allowing for extreme events (black swans).
- For AI to be trusted it needs to have the right balance of ethics, social responsibility, accountability and reliability.
To conclude the session, Sofia asked attendees to explore examples of AI and the ethics behind the decision to implement it in a business. This prompted an interesting discussion around AI versus privacy, and around AI bias producing automated decisions based on flawed data profiles. One example described an exercise gathering information on potholes from smartphones; in practice, key areas of the city had low smartphone penetration, so the AI-driven pothole programme was biased against fixing potholes in these potentially less affluent areas.
This was a lively session which touched upon some of the current key issues within Risk Management. We thank all our attendees for their engagement and input.