AI and Banks
ChatGPT, a generative Large Language Model (LLM) developed by the Artificial Intelligence (AI) research laboratory OpenAI, received widespread attention and acclaim upon its launch. Compared to traditional dialogue models, ChatGPT is trained on a massive amount of data, giving it a stronger understanding of natural language and allowing it to generate more fluent dialogue according to context. The integration of technology into finance is an inevitable aspect of the social division of labour, driven by advances in science and technology. ChatGPT will undoubtedly expand the limits of AI applications in finance, greatly promoting reform and helping financial institutions serve the economy. The banking industry should strive to make full use of the opportunities brought by programs such as ChatGPT in order to accelerate the implementation of beneficial applications such as personalised customer service, multilingual translation, intelligent financial management, sentiment analysis, improved customer experiences, automatic risk assessment, and fraud detection. At the same time, however, the banking industry must be aware of the risks that may accompany AI adoption and take steps to guard against these potential dangers.
New Challenges
The banking industry has faced new and growing challenges in recent years, one of the most significant of which is protecting customers from fraud. According to the Hong Kong Police Force, the number of fraud cases in Hong Kong has risen sharply, increasing by 45.1% in 2022 compared with 2021. Of these cases, investment scams and telephone scams accounted for losses of HKD 1.8 billion and HKD 1 billion respectively [1]. As such, detecting and preventing fraud has become one of the greatest challenges banks face today. Moreover, AI technology, represented by ChatGPT, also brings new challenges to anti-fraud efforts. In the past, banks' anti-fraud measures were mainly aimed at preventing and combating common fraud methods such as phishing emails, fake websites, and fake transactions. With the emergence of AI, however, fraudsters have begun to use these technologies in their own activities, presenting new challenges to banks' efforts to fight fraud.

- AI has allowed scammers to improve their efficiency. In the past, fraudsters mostly made manual calls to individual victims, so manpower limited the number of fraudulent calls they could make each day. For example, a scammer making one scam call every minute for 10 hours a day can make at most 600 calls. The introduction of AI opens up the possibility of far greater efficiency: AI phone-dialling technologies available on the market today can make up to 3,000 calls per day without human intervention. A scammer managing 10 of these devices simultaneously could make 30,000 calls a day, becoming 50 times more efficient (the short calculation after this list restates these figures).
- AI has allowed fraudsters to greatly increase their success rates. In the past, fraudsters often impersonated organisations such as public security bureaus, procuratorates, courts, and government officials, exploiting victims' panic to defraud them. Today, scammers can use AI models to generate fake voice and video messages that mimic real organisations and use them to commit crimes. It can be extremely difficult to discern these fabricated messages from the genuine article, making such fraud even harder to prevent. For example, the Baotou police shared a typical case of AI-assisted fraud on their public account [2]. The fraudster first stole the WeChat account of the victim's friend, then used AI to imitate the friend's face and voice, and successfully defrauded the victim of 4.3 million yuan in 10 minutes.
- When using AI tools such as ChatGPT, data and privacy are easily leaked. Because some users have weak awareness of financial information security, and because there are few rules governing the use of large language models in specific banking scenarios, fraudsters may use leaked data to carry out targeted fraud.
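
The call-volume comparison above is simple arithmetic; the snippet below merely restates the figures quoted in the text, and all values are illustrative examples rather than measured data.

```python
# Illustrative arithmetic only: restates the call-volume figures quoted above.
manual_calls_per_day = 1 * 60 * 10           # 1 call/minute for 10 hours = 600 calls
ai_calls_per_device = 3_000                  # quoted capacity of an AI dialler
devices_managed = 10                         # hypothetical number of devices

ai_calls_per_day = ai_calls_per_device * devices_managed    # 30,000 calls
efficiency_gain = ai_calls_per_day / manual_calls_per_day   # 50x

print(f"Manual: {manual_calls_per_day} calls/day; "
      f"AI-assisted: {ai_calls_per_day} calls/day "
      f"({efficiency_gain:.0f}x increase)")
```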
New Opportunities
While AI technologies such as ChatGPT bring new challenges for banks, they also bring new opportunities. ChatGPT can aid banks in their anti-fraud operations in the following ways:

- ChatGPT can help banks detect anomalies. By analysing large amounts of data on customers' historical transactions and behaviour, it can learn typical consumer behaviour and identify abnormal transactions, reducing the potential risk of fraud. For example, if a customer suddenly and unexpectedly transfers a large sum from their account to an unfamiliar account, the program can alert the bank, which can then issue warnings and freeze the transaction until the customer confirms it, reducing the chance that the customer is defrauded (a minimal sketch of such a check appears after this list).
- ChatGPT can help banks analyse behaviour. By examining data on a customer's behaviour and conversations, ChatGPT can understand the user's background and needs, and thus better assess whether a transaction carries risk and, if so, how much.
- ChatGPT can be used to automate the handling of customer queries and reports, thereby improving the efficiency of a bank’s anti-fraud operations.
- ChatGPT can overcome time and space limitations, providing customers with 24/7 service. For example, the AI-based automatic duty system of the Agricultural Bank of China Suzhou Branch overcomes the limitations of traditional manual monitoring, providing real-time monitoring and alerts and achieving around-the-clock customer service [3].
- ChatGPT is capable of assessing the authenticity of information. By analysing videos and messages, ChatGPT can help determine whether they are fraudulent, preventing customers from incurring losses (an illustrative sketch of LLM-based message screening follows the anomaly-check example below).
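
To make the anomaly-detection idea above more concrete, the sketch below shows a minimal, rule-based check of the kind a bank might run alongside an LLM-based system. The field names, thresholds, and alerting step are assumptions for illustration only, not the method of any particular bank.

```python
# Minimal, hypothetical anomaly check: flag a transfer that is unusually
# large for this customer or that goes to a payee they have never used.
from statistics import mean, stdev

def is_anomalous(amount: float, payee: str,
                 past_amounts: list[float], known_payees: set[str],
                 z_threshold: float = 3.0) -> bool:
    new_payee = payee not in known_payees
    if len(past_amounts) < 2:
        return new_payee                      # too little history to judge size
    mu, sigma = mean(past_amounts), stdev(past_amounts)
    unusually_large = sigma > 0 and (amount - mu) / sigma > z_threshold
    return unusually_large or (new_payee and amount > mu)

# Example: a customer who normally transfers a few hundred dollars.
history = [120.0, 80.0, 150.0, 95.0, 110.0]
if is_anomalous(25_000.0, "unfamiliar-account-001", history, {"rent", "utilities"}):
    print("Hold the transfer and ask the customer to confirm before release.")
```

A real system would combine many more signals (device, location, timing), but the principle of comparing a transaction against the customer's own history is the same.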
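Similarly, the behaviour-analysis and authenticity-assessment points could in principle be prototyped by passing the relevant text to a general-purpose LLM. The sketch below uses the OpenAI Python client purely as an example; the model name, prompt, and output handling are assumptions, and any real deployment would need strict data governance to avoid the privacy risks noted earlier.

```python
# Hypothetical sketch: asking a general-purpose LLM to screen a customer
# message for fraud indicators. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_message(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": ("You are a bank fraud analyst. Reply with LOW, MEDIUM, "
                         "or HIGH risk and one sentence of reasoning.")},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(screen_message(
    "Your account has been frozen. Transfer HKD 50,000 to account 123-456 "
    "within one hour to avoid prosecution."))
```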
In fact, AI technology has increasingly been applied to anti-fraud work in recent years. For instance, the 'AI police officer' deployed in China's Yuhang District, Zhejiang Province, uses technologies such as intelligent speech, semantic recognition, automated processing, and big data analysis to carry out anti-fraud activities. The introduction of these 'AI police officers' has led to a 71.4% decrease in fraud cases and a 95.1% reduction in losses in the region [4].
The Future
In summary, AI programs such as ChatGPT have brought banks both challenges and opportunities with regard to anti-fraud work. Banks must make good use of emerging AI technologies to improve their anti-fraud capabilities and better serve their customers.
References
[1] Hong Kong Police Force. Law and Order Situation in Hong Kong in 2022. https://www.police.gov.hk/ppp_en/01_about_us/cp_ye.html
[2] Yang, Z. (May 22, 2023). Police warn of AI fraud after man duped by 'friend'. China Daily. https://www.chinadaily.com.cn/a/202305/22/WS646b4fd3a310b6054fad4731.html
[3] Liu, Z. (2023). The Application of ChatGPT in Commercial Banks. China Finance, 61.
[4] 24/7 AI Officer Specializing in Anti-Fraud Officially Launched. Yuhang Morning Post. https://www.yuhang.gov.cn/art/2022/7/12/art_1532133_59019074.html