How a built-in generative AI assistant can support the underwriting process at different stages, improving communication and decision-making efficiency.
Their desire
Our client specializes in trade-related insurance solutions that help businesses protect their financial stability. They primarily offer trade credit insurance, which safeguards companies against losses when customers fail to pay invoices, along with fraud insurance to cover damages from internal or external fraud, and surety bonds/guarantees that support contractual obligations.
Whenever a new request comes in, the underwriting process is started. In this process, our client evaluates the risk of the request (e.g. do we want to cover this obligor?) and determines various commercial aspects (e.g. contract value, or the impact of geopolitical developments). Given the international position of our client, many variables come into play in making the right decisions.
The data to make effective decisions exists, but it is scattered across databases and documents. In addition, every request requires a different set of data to reach a decision. Our client wanted an intuitive, easily accessible way to ask any type of context-related question. But data cannot leave the client's infrastructure: compliance is of the utmost importance.
Our solution
Finaps introduced a chat window on all pages of the main portal supporting the underwriting process. This chat window is a gateway to a Large Language Model (LLM) deployed within the client's own infrastructure. With each conversation, the context of the underwriting process is provided to the LLM. The LLM is used to determine what data the application needs to retrieve, using techniques like the Model Context Protocol (MCP) and Retrieval-Augmented Generation (RAG). In addition, the LLM is instructed to generate an SQL statement to retrieve data from the application's own database. The data from the different sources is combined, and the LLM produces a reliable, human-readable answer to the question asked.
“The LLM is used to determine what data the application needs to retrieve using techniques like Model Context Protocol (MCP) and Retrieval-Augmented Generation (RAG).”
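To make the flow above concrete, here is a minimal sketch in Java of the orchestration pattern: route the question, optionally generate SQL, then compose a grounded answer. All names (`LlmClient`, `UnderwritingAssistant`, the prompts) are illustrative assumptions, not the client's actual implementation, and the LLM itself is abstracted behind an interface.

```java
// Hypothetical interface to the LLM deployed inside the client's infrastructure.
interface LlmClient {
    String complete(String systemPrompt, String userMessage);
}

// Sketch of the three-step orchestration described above (assumed structure).
class UnderwritingAssistant {
    private final LlmClient llm;

    UnderwritingAssistant(LlmClient llm) {
        this.llm = llm;
    }

    // Step 1: let the LLM decide which data source the question needs
    // (e.g. a RAG lookup over documents, a SQL query, or no retrieval).
    String routeQuestion(String question, String pageContext) {
        return llm.complete(
            "Given the underwriting context, reply with ONE of: RAG, SQL, NONE.",
            "Context: " + pageContext + "\nQuestion: " + question);
    }

    // Step 2 (SQL branch): instruct the LLM to generate a read-only statement
    // against the application's own database schema.
    String generateSql(String question, String schema) {
        return llm.complete(
            "Generate a single read-only SELECT statement for this schema:\n" + schema,
            question);
    }

    // Step 3: combine the retrieved data and produce a human-readable answer,
    // constrained to the provided data to keep the answer reliable.
    String answer(String question, String retrievedData) {
        return llm.complete(
            "Answer the question using ONLY the provided data.\nData:\n" + retrievedData,
            question);
    }
}
```

In practice each step would add guardrails the sketch omits, such as validating that the generated SQL is indeed read-only before executing it.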
Industry
Technology
Mendix, Java, Generative AI