In today's fast-paced business environment, leveraging AI is essential for maintaining a competitive edge.
Frontlines of AI Innovation provides insights from a team of experts leading AI research and applications, with a focus on financial services and beyond. This series explores the real-world challenges and opportunities AI brings, demonstrating its role in transforming data analysis, elevating customer experiences, streamlining operations, and more.
Our goal is to share our bold perspectives and game-changing discoveries, offering a deeper understanding of how AI can drive growth and solve complex problems across industries.
For financial executives, accessing timely and accurate insights is critical for decision-making. Yet, many struggle with BI systems when queries are vague or complex. For example, a Risk Leader needing to assess portfolio risks may pose questions like, “Which assets are at the highest risk based on market trends?” Traditional BI tools often misinterpret such queries, yielding incomplete results. Complex, multi-step queries—like correlating performance metrics with market trends—are even more challenging. They require data analysts to manually refine and transform data, slowing down the process. This reliance on technical teams delays crucial insights, leaving financial leaders waiting for answers when swift decisions are needed.
In a landscape where data-driven insights are crucial, many Business Intelligence (BI) tools, including those powered by AI, struggle to provide actionable and trustworthy insights to financial executives. Despite advancements, significant challenges remain:
AI-enabled BI tools often misinterpret vague user queries, producing incomplete or misleading results that lack precision and erode user confidence in the insights provided.
Traditional and AI-based tools struggle to execute complex, layered queries, requiring manual intervention that slows the process and creates inefficiencies.
Despite AI's potential, many BI systems still require significant manual input from data analysts to refine queries and ensure accuracy, delaying access to insights and slowing decision-making.
AI-enabled BI tools can handle basic data transformations, but they often fall short when queries involve more advanced logic, such as merging datasets or performing intricate calculations, which slows analysis and delays insights.
AI models in BI tools often lack a deep understanding of the specific business context, producing outputs that may be technically accurate but miss strategic relevance.
What is the approach?
This approach utilizes a dual-LLM (Large Language Model) framework to seamlessly transform user queries into actionable data insights from structured data sources. The framework includes a Conversational LLM, which translates natural language queries into precise, step-by-step instructions, and a Code-Generation LLM, which converts these instructions into executable code for data extraction and transformation. This approach ensures that even vague or complex queries are interpreted accurately, automating the generation of structured data outputs, summaries, reports, and analytics. The result is faster, more reliable access to critical insights for financial decision-making, with less need for manual input.
The Conversational LLM processes user inputs, even when they are unclear, to understand the intent and break down complex queries into simple, actionable steps.
It translates the interpreted query into a series of logical instructions, outlining the exact actions needed for data extraction or transformation.
The Code-Generation LLM takes these instructions and produces accurate, executable code, automating the extraction and transformation of data.
The system validates the generated code against the user’s requirements, ensuring that the results are accurate and optimized for performance.
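The four stages above can be sketched as a simple pipeline. In the sketch below the model calls are stubs that return canned output so the control flow is visible end to end; the function names, table names, and validation rule are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of the dual-LLM pipeline, with both model calls stubbed.

def conversational_llm(query: str) -> list[str]:
    # Stub: a real implementation would prompt a conversational model
    # to resolve intent and break the query into ordered instructions.
    return [
        "load table 'assets'",
        "filter rows where risk_score > 0.8",
        "sort by risk_score descending",
    ]

def code_generation_llm(instructions: list[str]) -> str:
    # Stub: a real implementation would prompt a code model to turn
    # the instruction list into executable SQL or pandas.
    return ("SELECT * FROM assets WHERE risk_score > 0.8 "
            "ORDER BY risk_score DESC")

def validate(code: str) -> bool:
    # Toy validation: the filter and ordering the instructions asked
    # for should be reflected in the generated code.
    return "WHERE" in code and "ORDER BY" in code

def answer(query: str) -> str:
    instructions = conversational_llm(query)
    code = code_generation_llm(instructions)
    if not validate(code):
        raise ValueError("generated code failed validation")
    return code

print(answer("Which assets are at the highest risk?"))
```

In a production system the validation stage would be far richer (schema checks, dry runs, result sampling), but the shape of the loop stays the same: interpret, plan, generate, verify.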
Businesses can unlock faster, more accurate, and consistent data insights by using multiple LLMs and automating code generation, enabling quick and informed decisions.
Automating data extraction enables rapid conversion of queries into actionable insights, reducing decision-making time for financial executives.
The multiple-LLM approach ensures precise data extraction from complex queries, improving data reliability and confidence in results, while lowering operational costs.
Business users can directly access structured data and reports, freeing up data analysts for more strategic tasks.
To address inefficiencies in automated data extraction, this approach employs a dual-LLM (Large Language Model) framework that automates code generation by interpreting user queries. The framework comprises two specialized models that work together to enhance query handling, accuracy, and automation.
The Conversational LLM processes natural language inputs, even when queries are unclear or loosely defined. It serves as an intermediary between the user’s request and the data extraction process, ensuring accurate translation of complex instructions into actionable steps.
This module analyzes the user's query to discern the underlying intent, even when the language is imprecise or the user lacks technical expertise.
For multi-layered or complex queries, the Conversational LLM breaks them down into logical components, identifying necessary tables, attributes, and data transformations.
After interpreting the query, the LLM organizes it into a clear series of instructions, outlining each step needed for data extraction or transformation, readying it for the next stage of automation.
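As a concrete illustration, the Conversational LLM's output could be serialized as a small instruction plan like the one below. The schema, table names, and column names are invented for this sketch; the source does not prescribe a specific format.

```python
# Hypothetical instruction plan emitted by the Conversational LLM for
# "Which assets are at the highest risk based on market trends?"
instruction_plan = {
    "intent": "portfolio_risk_assessment",
    "steps": [
        {"action": "select", "table": "holdings",
         "columns": ["asset_id", "value"]},
        {"action": "join", "table": "market_risk", "on": "asset_id"},
        {"action": "filter", "condition": "risk_rating >= 4"},
        {"action": "sort", "by": "risk_rating", "order": "desc"},
    ],
}

# Each step is unambiguous and self-contained, so the downstream
# Code-Generation LLM can translate the plan step by step.
actions = [step["action"] for step in instruction_plan["steps"]]
print(actions)  # ['select', 'join', 'filter', 'sort']
```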
The Code-Generation LLM converts structured instructions from the Conversational LLM into executable code, automating the data extraction process and minimizing the need for manual coding by analysts.
Using the detailed instructions provided, this model generates the exact code needed to extract or transform data, even for queries requiring complex logic or multiple datasets.
For requests involving sequences of actions—like filtering, aggregating, or joining datasets—the Code-Generation LLM automates each step, ensuring a seamless execution of the entire process.
The LLM is designed to produce efficient and accurate code, continuously optimizing performance throughout the data extraction, ensuring that results align precisely with user needs.
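For instance, given instructions to join holdings with risk ratings and then average value per rating band, the Code-Generation LLM might emit pandas code along these lines. All table and column names are invented, and the data is synthetic.

```python
import pandas as pd

# Illustrative output of the Code-Generation LLM for a multi-step
# request: join two datasets, then aggregate.
holdings = pd.DataFrame({
    "asset_id": [1, 2, 3, 4],
    "value": [100.0, 250.0, 80.0, 400.0],
})
ratings = pd.DataFrame({
    "asset_id": [1, 2, 3, 4],
    "rating": ["A", "B", "A", "B"],
})

merged = holdings.merge(ratings, on="asset_id")           # join step
avg_by_rating = merged.groupby("rating")["value"].mean()  # aggregate step
print(avg_by_rating.to_dict())  # {'A': 90.0, 'B': 325.0}
```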
This architecture represents a dual-LLM-based data extraction process. It starts with Natural Language Input, processed by the Conversational LLM to translate queries into structured instructions. These instructions are then converted into code by the Code Generation LLM. The code extracts data from structured databases, and the Code Execution Engine verifies accuracy before delivering the final Structured Data Output to the user.
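A minimal sketch of the execution stage might run the generated SQL against a sandboxed database before returning structured rows to the caller. The schema, data, and query below are invented for illustration.

```python
import sqlite3

# Sketch of a Code Execution Engine: execute generated SQL in an
# in-memory sandbox and return structured rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT, risk_score REAL)")
conn.executemany(
    "INSERT INTO assets VALUES (?, ?)",
    [("Bond A", 0.90), ("Fund B", 0.40), ("Stock C", 0.85)],
)

generated_sql = ("SELECT name FROM assets "
                 "WHERE risk_score > 0.8 ORDER BY risk_score DESC")
rows = [name for (name,) in conn.execute(generated_sql)]
print(rows)  # ['Bond A', 'Stock C']
```

Running generated code in an isolated, read-only sandbox like this is also a natural place to enforce the accuracy and safety checks the architecture calls for.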
The instruction-based code generation approach, powered by a dual-LLM framework, redefines automated data extraction by addressing core technical challenges. It combines natural language understanding with precise code automation, making it uniquely suited for complex, real-world applications in financial services. Here's why this approach is technically superior:
Unlike traditional BI tools, which often misinterpret vague or complex user inputs, the Conversational LLM excels at understanding user intent and breaking down ambiguous queries into precise instructions. This ensures that even loosely defined requests result in accurate data extraction.
The integration of a Code Generation LLM allows for seamless automation of multi-step processes. This model can handle intricate transformations, such as aggregations, joins, and filters, eliminating the need for manual intervention and reducing turnaround times.
By combining the strengths of both LLMs, the framework ensures that the generated code is not only accurate but also optimized for efficiency. This reduces the risk of errors and improves the overall reliability of the extracted data, leading to better decision-making.
As data environments become more complex, with increasing volumes and varied data types, this dual-LLM framework remains robust. The ability to manage complex data transformations while maintaining scalability ensures that businesses can continue to derive value as their data challenges grow.
The dual-LLM approach, combining Conversational and Code-Generation models, is particularly suited to solving complex challenges in data extraction and analysis for financial institutions. Here are key use cases where this advanced AI approach can drive value:
Wealth managers need to provide personalized reports for high-net-worth clients. The system can interpret requests like "Generate a performance report including alternative investments and ESG ratings" and automatically compile data into a structured, client-ready report, saving time and improving client satisfaction.
Managing large portfolios requires analyzing asset performance, risk metrics, and market data. The Conversational LLM breaks down queries like "assess risk-adjusted returns for high-yield bonds,” while the Code-Generation LLM automates data extraction, allowing analysts to focus on strategy.
A marketing team wants to identify high-value customers for a credit card campaign. The Conversational LLM interprets segmentation criteria—like spending patterns and income history—while the Code-Generation LLM creates the code to extract and analyze the data, enabling precise targeting.
A fraud prevention team needs to monitor for unusual transaction patterns. The system interprets queries like "Flag transactions above $10,000 in a 24-hour window,” automating the extraction and analysis of transaction data, enhancing fraud detection and reducing false positives.
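To make the fraud use case concrete, the code generated for "Flag transactions above $10,000 in a 24-hour window" might resemble the pandas sketch below, here interpreted as flagging accounts whose transactions total more than $10,000 within any rolling 24-hour window. The data is synthetic and the column names are invented; in production the logic would run against live transaction stores.

```python
import pandas as pd

# Synthetic transaction log: account, amount, timestamp.
tx = pd.DataFrame({
    "account": ["X", "Y", "X"],
    "amount": [6000, 2000, 5000],
    "ts": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 10:00", "2024-01-01 20:00",
    ]),
})

# Rolling 24-hour total per account, then flag totals above $10,000.
tx = tx.sort_values("ts")
windowed = (
    tx.set_index("ts")
      .groupby("account")["amount"]
      .rolling("24h")
      .sum()
      .reset_index(name="window_total")
)
flagged = sorted(windowed.loc[windowed["window_total"] > 10_000,
                              "account"].unique())
print(flagged)  # ['X'] (6000 + 5000 within 24h exceeds the threshold)
```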
Financial institutions face strict reporting needs, like SEC and Basel III requirements. The dual-LLM handles requests such as "Generate a report explaining leverage ratio compliance for Q2,” automating data extraction for accurate, timely reports.
BI teams assess credit risk by analyzing structured datasets. The dual-LLM processes complex risk queries like "Evaluate default rates for loans above $500,000,” creating accurate and automated scoring models for better credit decisions.
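As a sketch, the generated code behind "Evaluate default rates for loans above $500,000" could be as simple as a filter and an average. The loan book below is synthetic and the column names are invented for illustration.

```python
import pandas as pd

# Synthetic loan book; 'defaulted' is 1 if the loan defaulted.
loans = pd.DataFrame({
    "amount":    [600_000, 750_000, 200_000, 900_000, 520_000],
    "defaulted": [0, 1, 0, 0, 0],
})

large = loans[loans["amount"] > 500_000]  # loans above $500,000
default_rate = large["defaulted"].mean()  # share that defaulted
print(default_rate)  # 0.25
```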
BI tools are used to gauge market sentiment from structured text, like earnings calls. The dual-LLM automates sentiment queries, such as “Summarize positive trends in recent earnings reports,” extracting and organizing insights in an easily interpretable format for decision-makers.
Investment analysts require up-to-date data from multiple sources to maintain dashboards. This approach automates the creation of scripts that aggregate data from multiple structured databases, keeping dashboards current without manual data entry and allowing analysts to focus on insights.
AI for BI has transformative applications across business domains such as risk, banking, wealth, brokerage, and payments, unlocking insights and actionable results from approved data sources for business stakeholders.
A global B2B fintech platform offering Brokerage-as-a-Service to banks, brokers, and funds was struggling to empower its Data and BI team to respond to a large volume of data-insight requests from business executives and stakeholders.
Our Business Intelligence solution, using the Instruction Based Code Generation, leveraged multiple LLMs (Generative AI) to simplify data analysis and insights generation. Business users could ask natural language questions to query data, analyze data, and visualize outcomes—enabling faster, more informed decision-making without the need for technical expertise.
Business query response time was reduced from 12 hours to a couple of minutes, resulting in higher productivity
compared to 50% accuracy of Baseline Out-of-the-Box models and other leading AI/BI tools in the market
that allows simple English language-based questions to gain quick and accurate insights
Innovation Labs Lead, SiriusAI
Vijay is an Innovation Labs lead at SiriusAI, specializing in developing component AI solutions for both structured and unstructured data. He has extensive expertise in AI-driven data extraction and AI-powered customer experience analytics. Vijay has successfully delivered over 15 AI-based products for financial services, focusing on enhancing prospect acquisition and customer experience through advanced data interlinking and AI-driven insights. In his previous roles, Vijay led global solution development, delivery, and architecture teams at leading consulting firms. Prior to SiriusAI, he was a tech consultant, developing AI-enabled data solutions for major banks in Thailand and the US and delivering AI-powered customer experience analyzers to over 10 clients in the US.
Senior AI Consultant, SiriusAI
Parikshit is a senior AI consultant with 8 years of experience. With an MBA from IIM Calcutta, he delivers key business solutions and excels at aligning AI capabilities with strategic business needs. At SiriusAI, he has led projects such as an AI-driven report generation solution for a leading US banking private investment group, enabling streamlined care for ultra-high-net-worth clients. He has also played a key role in helping brokerage firms leverage AI for business intelligence. In previous roles, Parikshit actively applied AI to strategic decision-making. He also specializes in end-to-end AI solutions, from identifying high-impact use cases to implementing tailored strategies, empowering businesses to transition smoothly from AI-active to AI-native, driving efficiency, enhancing customer experience, and unlocking new growth opportunities.