By Nathan Stevenson, CEO, ForwardLane
Since ChatGPT dethroned TikTok as the fastest app to amass one million users, generative pre-trained transformer (GPT) technology has been hailed as the hands-free solution to every writing need. From composing business emails to writing articles and even planning trips, the common perception is that GPT technology is here to replace humans. This notion has also raised concerns about the role that AI technology plays in our lives. Yet behind the public hyperbole there are myriad serious use cases for GPT technology, including in the financial services sector and, within it, the wealth management and financial advice space. Many financial services professionals are excited about the productivity and efficiency gains of such technology, but they are equally concerned about the risks it may pose from a legal, compliance, and data privacy perspective. Here we explore GPT technology in more detail, outline the associated limitations and risks, and explain how to mitigate them.
GPT technology in context
GPTs are artificial intelligence (AI) models used to generate natural language text through a conversational interface. By pre-training on large volumes of real data, from Wikipedia to vast swaths of the public internet, they can understand natural language, recognize words and grammar, infer intent, and reason well. When provided with a prompt, they can generate long-form text, summarize text, answer questions, translate languages, and more, thanks to “emergent capabilities” – new capabilities only seen in large models.
OpenAI has led the charge with the release of ChatGPT, powered by the GPT-3.5 model. Such is the rapid pace of development that GPT-4 was released four months later, and those one million users grew to 100 million. GPT-4 is a dramatic step up from ChatGPT, scoring around the 90th percentile on the Uniform Bar Exam versus the bottom 10th percentile for ChatGPT. On most subject measures, including macroeconomics, math, statistics, psychology, written and verbal communication, business law, and science, GPT-4 scores in the 75th-90th percentile range. It has improved factual performance, better reasoning, and steerability, which allows it to adopt the mindset and skills of a specific role and answer in the appropriate tone. It is available today through a waitlist on Microsoft Bing, through OpenAI’s ChatGPT Plus ($20/month subscription), and through Microsoft Azure. The AI arms race is on, with Google (Bard and DeepMind), Meta, Amazon, and Baidu all creating offerings, alongside well-funded start-ups such as Anthropic and Inflection AI, maker of the Pi assistant and co-founded by LinkedIn founder Reid Hoffman.
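To make steerability concrete, the sketch below shows how a system prompt can steer a model into a specific role and tone via the OpenAI chat completions API. It is a minimal illustration, assuming the pre-1.0 openai Python package; the model name, prompt wording, and temperature setting are illustrative choices, not recommendations.

```python
# Minimal sketch of "steerability": the system message puts the model into a
# specific role and tone. Assumes the pre-1.0 openai Python package.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, load from an environment variable

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message steers the model's persona and register.
        {"role": "system",
         "content": ("You are a patient financial educator. Answer in plain, "
                     "jargon-free language suitable for a first-time investor.")},
        {"role": "user", "content": "What is dollar-cost averaging?"},
    ],
    temperature=0.3,  # a lower temperature favors consistent, sober phrasing
)

print(response["choices"][0]["message"]["content"])
```

Changing only the system message, for instance to address a sophisticated HNW client, yields answers in a very different register from the same underlying model.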
Using GPTs in wealth management
In the wealth management space, the benefits for both advisors and clients are substantial. With detailed prompting, GPTs can create client engagement plans and provide financial education and answers on just about any topic, which is tremendously beneficial for retail, mass affluent, and HNW investors. And all of this comes with 24x7x365 availability – something an advisor simply can’t provide. Other interesting use cases range from helping advisors handle challenging client circumstances from a psychological support perspective, through to identifying a client’s needs and the questions to ask in order to construct a financial plan. In using this technology, the advisor dramatically cuts down on busywork and can devote more time to engaging clients, thereby boosting productivity, deepening the client relationship, and providing the client with a more personalized experience.
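As a hedged illustration of the kind of detailed prompting described above, the sketch below assembles a structured prompt that an advisor-assistant tool might send to a GPT model. The client fields, template wording, and guardrail instructions are hypothetical, not a production schema.

```python
# Hypothetical sketch: assembling a detailed prompt for a client engagement
# plan. The fields and template are illustrative, not a production schema.

def engagement_prompt(name: str, life_stage: str,
                      goals: list[str], concerns: list[str]) -> str:
    """Build a structured prompt an advisor's assistant could send to a GPT."""
    return (
        f"Act as a wealth management advisor's assistant.\n"
        f"Client: {name}, life stage: {life_stage}.\n"
        f"Goals: {', '.join(goals)}.\n"
        f"Concerns: {', '.join(concerns)}.\n"
        "Draft a one-page engagement plan with: (1) three discussion topics "
        "for the next meeting, (2) five open-ended questions to uncover "
        "needs, and (3) relevant educational resources. Do not give specific "
        "investment advice; flag anything that requires advisor judgment."
    )

print(engagement_prompt(
    "A. Client", "pre-retirement",
    goals=["retire at 62", "fund grandchildren's education"],
    concerns=["market volatility", "healthcare costs"],
))
```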
The limitations and risks of GPTs
While GPTs have many benefits, they also have their limitations. It is essential to recognize and understand these limitations so that they can be factored into any effective deployment of the technology. One of the major risks of GPTs is inherent bias arising from the wide range of data they are trained on. This ranges from reference content, such as Wikipedia and message boards, to news sources such as The New York Times and The Guardian, and forums such as Reddit, all of which, according to research by The Washington Post, can introduce bias. GPTs also have a limited understanding of financial terminology, which means they may not produce context-relevant language. And lastly, they do not have a human’s complex decision-making capabilities or intuition. Combined, these limitations raise legitimate concerns that GPTs could create incorrect outputs, leading to poor outcomes for customers and causing legal and compliance headaches.
The competent knowledge worker principle
Given the potential risks, it is all too easy to dismiss GPT technology, yet its undoubted language skills mean that it has a valuable role to play. To use GPTs safely and effectively, the overriding principle is that GPTs should be viewed as competent knowledge workers that help humans in their work, not as experts. GPTs do not replace humans, and human oversight when using GPT technology is essential. Once the principle of the GPT as a competent knowledge worker is established, there are a number of key steps that wealth managers can take to ensure effective and safe use.
Train, test, and check
Firstly, GPTs should be trained specifically for the institution. If solely trained on the internet, there will be inherent biases in the output. Correctly trained GPTs with clean inputs, however, are far less likely to generate false outputs. GPTs should therefore be customized to the institution by training them on enterprise data sets, not just external ones. It is also good practice to gather inputs from internal experts for the GPT to draw on; for example, internal research or economics teams can create insights to feed the GPT. To gain financial terminology capabilities, GPTs should also be trained on financial market data. By undertaking these steps, GPTs draw on verifiable facts and use the right language context. The launch of BloombergGPT, which has been trained on Bloomberg’s considerable data and information resources, is an excellent example of how internal data sets can be used to customize the technology to the sector.
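In practice, “training on enterprise data sets” is often implemented as retrieval-augmented generation (RAG) rather than retraining the model itself: the system first retrieves relevant internal documents, then instructs the GPT to answer only from them. The sketch below shows the pattern in miniature; the in-memory note store and keyword scoring are illustrative stand-ins for a real document index and embedding model.

```python
# Minimal retrieval-augmented sketch: grounding answers in internal research
# notes rather than the open internet. The in-memory "index" and keyword
# scoring stand in for a real vector database and embedding model.

INTERNAL_NOTES = {
    "2024-q1-outlook": "Our economics team expects rate cuts to begin mid-year.",
    "muni-bonds-primer": "Municipal bonds offer tax-advantaged income for HNW clients.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring; a real system would use embeddings."""
    q = set(query.lower().split())
    scored = sorted(
        INTERNAL_NOTES.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def grounded_prompt(query: str) -> str:
    """Wrap the query so the model answers only from retrieved internal notes."""
    context = "\n".join(f"- {t}" for t in retrieve(query))
    return (
        "Answer using ONLY the internal research below. If the answer is not "
        f"covered, say so.\n\nInternal research:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("What is the house view on interest rates?"))
```

The key design choice is that the model is constrained to verifiable internal facts and told to admit when the material does not cover a question, rather than improvising.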
Secondly, GPT outputs should be tested to establish trust in the technology’s capabilities. Just as any new service or tool would undergo rigorous testing, so too should any service using GPT technology. To start, wealth managers need to identify an application of the technology, for example, client support automation or helping advisors deliver personalized advice. They then need to select the appropriate key performance indicators (KPIs), such as an increase in the resolution of customer queries or a reduction in the time taken to prepare for client meetings. Finally, they need to assess the output against those KPIs and refine how the technology is used.
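A minimal sketch of this test-and-measure loop follows, assuming a curated set of reference queries and a simple pass criterion; in a real deployment the test set, scoring method, and KPIs would be defined with compliance and business stakeholders.

```python
# Hedged sketch of KPI-style testing: score GPT answers against a curated
# reference set before rollout. Test cases and the pass criterion are
# illustrative; real evaluation would also include compliance review.

TEST_CASES = [
    {"question": "How do I update my beneficiary?",
     "must_mention": ["beneficiary", "form"]},
    {"question": "What is my portfolio's YTD return?",
     "must_mention": ["advisor"]},  # account-specific: should defer to a human
]

def passes(answer: str, must_mention: list[str]) -> bool:
    """Crude acceptance check: the answer covers all required terms."""
    return all(term in answer.lower() for term in must_mention)

def resolution_rate(answer_fn) -> float:
    """Fraction of test queries resolved acceptably (the KPI being tracked)."""
    passed = sum(
        passes(answer_fn(case["question"]), case["must_mention"])
        for case in TEST_CASES
    )
    return passed / len(TEST_CASES)

# answer_fn would wrap the GPT call; a stub keeps this sketch self-contained.
stub = lambda q: "Please complete the beneficiary change form or ask your advisor."
print(f"Resolution rate: {resolution_rate(stub):.0%}")
```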
Lastly, coming back to the principle that GPTs are a tool to help humans, human checks also need to be built into any workflow where GPT technology is used. In the case of customer support, responses need to be checked and GPTs need to hand off to humans for more complex queries. If GPTs are being used to assist with personalized financial advice, all output should be reviewed by the advisor before being used with the client. This will help ensure that it meets all regulatory standards and that compliance is not circumvented.
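The sketch below illustrates one way such a human-in-the-loop gate could work: a routing step that holds any advice-like draft for advisor sign-off before it can reach a client. The trigger terms and status labels are assumptions for illustration only.

```python
# Illustrative human-in-the-loop gate: advice-like queries are held for
# advisor sign-off; everything else still passes an automated check.
# Trigger terms and status labels are assumptions, not a compliance rule set.

ESCALATION_TERMS = {"advice", "recommend", "tax", "trust", "estate"}

def route(query: str, draft_answer: str) -> dict:
    """Decide whether a GPT draft may be sent or must wait for a human."""
    needs_human = any(term in query.lower() for term in ESCALATION_TERMS)
    return {
        "query": query,
        "draft": draft_answer,
        "status": "pending_advisor_review" if needs_human else "auto_checked",
    }

print(route("Can you recommend a fund for my IRA?", "Here are some options..."))
print(route("What are your office hours?", "We are open 9-5 Eastern."))
```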
What if I don’t know how to train or deploy a GPT?
With this explosion in capabilities and use cases, many incumbents, AI fintech firms, and wealthtech start-ups are working feverishly to test and deploy GPTs in ways that are safe, compliant, and factual. Getting the right mix of enterprise data and GPT capabilities is a tough organizational problem to solve, so one way to bring these capabilities to your organization is to partner with firms that are doing the heavy lifting for you and crafting the tooling to meet financial regulatory requirements. That said, go out and experiment: use the technology yourself and get familiar with it. This moment is akin to the dawn of the internet, when those who were ahead of the curve and figured out how to harness the new technology made fortunes.
The real risk of GPTs? Not using them!
To conclude, the highly regulated nature of wealth management means there are natural concerns about the use of GPT technology. However, the proven language capabilities of GPTs mean that wealth management firms cannot afford to ignore them. Instead, by adopting the principle of the GPT as a competent knowledge worker (rather than an all-seeing, all-knowing expert) and by putting in place a framework to train models effectively, test applications, and incorporate human oversight, wealth managers can realize substantial benefits in productivity and customer experience. The ultimate risk in the whole GPT debate is whether wealth managers can afford not to use GPT technology if they hope to remain competitive.