DiligenceVault and Shadmoor Advisors co-hosted a panel on the state of cyber threats and views on Generative AI for the financial services industry in London on May 18th, 2023. We were joined by cyber and privacy experts from a leading law firm, a leading compliance consultancy, and the UK’s national crime unit.
Thanks to a very engaging panel and audience discussion, we have distilled key insights below:
What are the top cyber and privacy priorities for firms in the investment management industry?
- How do you build the expertise and a cyber program that is as good as your peers’ and stays ahead of regulatory expectations?
- How do you build an optimal framework to oversee cyber risk across a broad set of service providers?
- What is the privacy regulation around AI?
- The regulatory landscape differs across the UK, EU, US, and China. What is a practical way to manage privacy frameworks across multiple jurisdictions?
What is the biggest cyber threat?
Ransomware has emerged as the biggest cyber threat the industry is dealing with. It has become a national security issue in a short period of time, as the online cybercrime ecosystem can now easily launder proceeds across jurisdictions via cryptocurrencies.
For cybercriminals, ransomware is yielding margins approaching 99%, as they now use a high-volume approach targeting small and medium-sized firms. It has a very low barrier to entry compared with previously favored DDoS attacks, which require technical specialization. Furthermore, encryption-less data theft, where attackers exfiltrate and extort without encrypting systems, is becoming more frequent.
The majority of these threats originate from a select set of overseas countries, and as a result, managing jurisdiction to mitigate these issues is a challenge for law enforcement.
What defense frameworks and best practices are the most effective against ransomware and other cyber risk threats?
- Cyber essentials: Doing the basics right is critical. Ensure all systems and endpoints have the right patches, and that employees are trained not to reuse passwords across applications, use MFA, and stay vigilant about phishing. Getting the basics right will protect the firm against roughly 90% of threats.
- Authentication & conditional access: Select firms have introduced biometrics into their authentication framework; they must consider individual consent and permission so as not to breach privacy regulations. Many managers have managed-service-provider relationships that rely on shared accounts, which makes MFA difficult to implement. Setting up conditional access, so that only pre-approved devices can reach corporate data, is helpful in these cases.
- Cloud adoption: As more firms move to the public cloud, it is a net positive that they are trusting leading providers such as Microsoft or Amazon to secure their environments. However, relying on the out-of-the-box security configuration leaves them vulnerable: the default setup prioritizes convenience, not security. Another consideration is that attackers design attacks for a uniform environment (e.g., Microsoft 365 business email compromise), since they can attack many firms at once and breach those whose controls are vulnerable.
- Payment fraud: The majority of successful attacks involve payment fraud, so it is important to introduce call-back validation of payment instructions over a known, trusted channel.
- Backups: Firms that fell victim to ransomware often did have offline backups of their data, but no backups of system configuration (e.g., registries), which made the data backups far less useful. Ensure the backup includes not just the data, but also the setup needed to restore it.
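The conditional access point above can be sketched in code. This is a minimal, hypothetical illustration of the control, not any vendor’s actual API: the device allowlist and function names are assumptions for the example.

```python
# Hypothetical sketch of conditional access: only pre-approved
# (enrolled) devices may reach corporate data. Device IDs and the
# policy store below are illustrative assumptions, not a real API.

APPROVED_DEVICES = {"laptop-0042", "laptop-0107"}  # enrolled device IDs

def is_access_allowed(device_id: str, mfa_passed: bool) -> bool:
    """Grant access only from an approved device.

    For shared service-provider accounts where per-user MFA is not
    feasible, the device allowlist acts as the compensating control;
    where MFA is feasible, it is still required on top.
    """
    if device_id not in APPROVED_DEVICES:
        return False  # unknown device: deny regardless of credentials
    return mfa_passed  # approved device: still require MFA

print(is_access_allowed("laptop-0042", mfa_passed=True))   # True
print(is_access_allowed("laptop-9999", mfa_passed=True))   # False
```

In practice this logic lives in the identity provider’s policy engine rather than application code; the sketch only shows why an allowlisted-device check can compensate when shared accounts rule out per-user MFA.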
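The backups point, that an archive must capture the setup as well as the data, can be sketched as follows. The directory names and helper are hypothetical; the idea is simply to bundle configuration alongside data in the same offline archive.

```python
# Hypothetical sketch: back up both the data AND the configuration
# ("setup") needed to restore it, so a restore is actually usable.
# Paths and the function name are illustrative assumptions.
import pathlib
import tarfile

def make_backup(data_dir: str, config_dir: str, dest: str) -> str:
    """Bundle data and configuration into one offline backup archive."""
    dest_path = pathlib.Path(dest)
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(dest_path, "w:gz") as tar:
        tar.add(data_dir, arcname="data")      # the business data
        tar.add(config_dir, arcname="config")  # the setup/registry state
    return str(dest_path)
```

A restore drill (the tabletop exercise discussed below) would then verify that both the `data` and `config` members unpack and that systems come back up from them.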
How should firms react to ransomware if affected? Should firms pay ransom?
Generally, regulators and legal advisors do not encourage paying the ransom. Some stakeholders view payment as the least-harm option, while others argue that it perpetuates the problem and that the cost of paying is ultimately higher than not paying. Far fewer firms are paying ransoms now than a few years ago. Ultimately, paying a ransom is a business decision reserved for when the attack poses an existential problem.
The best defense is to have the right risk management framework, processes, and controls. These controls make your firm a difficult target, and cybercrime perpetrators will move on if a breach is not easy. This type of cybercrime is a high-volume effort on their end, so they look for easier targets.
The next best thing after building a risk framework is to have a response strategy for the event of an attack. Tabletop exercises on backup restoration and on the speed of breach notification are key.
How are Generative AI applications transforming our day to day work?
We are in an age of anxiety, as professional services jobs are at potential risk. BT, for example, announced plans to cut around 55,000 jobs by 2030, roughly 10,000 of them replaced by AI, which has emerged as a multifunctional technology.
Generative AI is also impacting key business processes and introducing new risk factors. A few examples were discussed:
- As a risk factor, AI gives cybercriminals significant efficiency in generating realistic, personalized phishing emails, compared with the poorly spelled emails and texts of the past. This further lowers the barrier to entry for cybercrime.
- Candidate assessment in knowledge management and professional services roles will become more difficult as applicants use Generative AI tools to strengthen their applications. In the past, some recruitment processes included a written assessment as a reliable screen for candidates whose first language was not English; today, over 90% of candidates pass the written assignments with the assistance of ChatGPT.
What risk return assessments are firms watching out for when implementing Generative AI applications?
As firms evaluate Generative AI and AI applications more broadly, the questions top of mind include:
- Can we turn off AI if we need to?
- What will AI regulation look like?
- How do we manage risk around confidential data and PII exposure to generative AI models?
- Who owns the output generated by AI? Copyright ownership has traditionally been tied to content generated by humans.
- How do we keep up with the speed of acceleration? From GPT-3 to GPT-4, the parameter count reportedly expanded from 175 billion to over 1 trillion.
A general observation is that firms are locking down the use of open-ecosystem Generative AI tools, with some even considering blocking access until they develop internal guidelines to manage the risk of data breaches. Most firms are considering Generative AI Acceptable Use Policies that would provide guidance on the use of confidential and PII data in these models, the quality of the output, ownership of the output, permitted uses of AI-generated output, the resulting liabilities, and how to mitigate these risks.
Thank you for joining us for the evening, and we look forward to continuing the dialogue at the next event.