FRONTLINES OF
AI INNOVATION

Expert Perspectives on Advancing
AI Applications

Introduction

In today's fast-paced business environment, leveraging AI is essential for maintaining a competitive edge.

Frontlines of AI Innovation provides insights from a team of experts leading AI research and applications, with a focus on financial services and beyond. This series explores the real-world challenges and opportunities AI brings, demonstrating its role in transforming data analysis, elevating customer experiences, streamlining operations, and more.

Our goal is to share our bold perspectives and game-changing discoveries, offering a deeper understanding of how AI can drive growth and solve complex problems across industries.

Perspective on Instruction Based Code Generation using Multiple LLMs

Gaining Accurate Insights for Swift Decision-Making

A Productivity and Efficiency Challenge

Brief Problem Description

For financial executives, accessing timely and accurate insights is critical for decision-making. Yet, many struggle with BI systems when queries are vague or complex. For example, a Risk Leader needing to assess portfolio risks may pose questions like, “Which assets are at the highest risk based on market trends?” Traditional BI tools often misinterpret such queries, yielding incomplete results. Complex, multi-step queries—like correlating performance metrics with market trends—are even more challenging. They require data analysts to manually refine and transform data, slowing down the process. This reliance on technical teams delays crucial insights, leaving financial leaders waiting for answers when swift decisions are needed.

Challenges with Present Business Intelligence Systems

Why BI systems are struggling to meet the demands of financial executives

In a landscape where data-driven insights are crucial, many Business Intelligence (BI) tools, including those powered by AI, struggle to provide actionable and trustworthy insights to financial executives. Despite advancements, significant challenges remain:

01. Imprecise Queries Yield Inaccurate Results

AI-enabled BI tools often misinterpret vague user queries, producing incomplete or misleading results that lack precision and erode user confidence in the insights provided.

02. Difficulty with Complex, Multi-Step Queries

Traditional and AI-based tools struggle to execute complex, layered queries, requiring manual intervention that slows the process and creates inefficiencies.

03. Dependence on Data Analysts

Despite AI's potential, many BI systems require significant manual input from data analysts to refine queries and ensure accuracy. This delays access to insights and slows decision-making.

04. Inadequate Handling of Complex Data

AI-enabled BI tools can handle basic data transformations, but they often fall short when queries involve more advanced logic, such as merging datasets or performing intricate calculations, slowing analysis and delaying insights.

05. Limited Contextual Understanding

AI models in BI tools often lack a deep understanding of the specific business context, leading to outputs that might technically be accurate but miss the strategic relevance.


Instruction Based Code Generation using Multiple LLMs

Deriving the Business Impact

What is the approach?

This approach utilizes a dual-LLM (Large Language Model) framework to transform user queries into actionable insights from structured data sources. The framework pairs a Conversational LLM, which translates natural language queries into precise, step-by-step instructions, with a Code-Generation LLM, which converts those instructions into executable code for data extraction and transformation. Even vague or complex queries are interpreted accurately, automating the generation of structured data outputs, summaries, reports, and analytics. The result is faster, more reliable access to critical insights for financial decision-making, with far less manual input.

How Does it Work?

01. Interpreting User Queries

The Conversational LLM processes user inputs, even when they are unclear, to understand the intent and break down complex queries into simple, actionable steps.

02. Generating Step-by-Step Instructions

It translates the interpreted query into a series of logical instructions, outlining the exact actions needed for data extraction or transformation.

03. Automating Code Creation

The Code-Generation LLM takes these instructions and produces accurate, executable code, automating the extraction and transformation of data.

04. Ensuring Accuracy and Consistency

The system validates the generated code against the user’s requirements, ensuring that the results are accurate and optimized for performance.
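The four steps above can be sketched as a minimal pipeline. This is an illustrative outline only: the two LLM calls are stubbed with hard-coded outputs (the function names `conversational_llm` and `code_generation_llm`, and the sample portfolio schema, are assumptions, not part of the described product), and a real system would call actual model APIs and validate code before executing it.

```python
# Minimal sketch of the dual-LLM pipeline; both LLM calls are stubbed
# with deterministic outputs for illustration only.

def conversational_llm(query: str) -> list[str]:
    """Stub for steps 1-2: interpret the query and decompose it into instructions."""
    return [
        "Load the 'portfolio' table",
        "Filter rows where risk_score > 0.8",
        "Sort by risk_score descending",
    ]

def code_generation_llm(instructions: list[str]) -> str:
    """Stub for step 3: convert instructions into executable code."""
    # A real model would generate this from the instructions; here it is fixed.
    return (
        "result = sorted(\n"
        "    (r for r in portfolio if r['risk_score'] > 0.8),\n"
        "    key=lambda r: r['risk_score'], reverse=True,\n"
        ")"
    )

def run_query(query: str, portfolio: list[dict]) -> list[dict]:
    instructions = conversational_llm(query)
    code = code_generation_llm(instructions)
    namespace = {"portfolio": portfolio}
    exec(code, namespace)  # step 4 would also validate before executing
    return namespace["result"]

assets = [
    {"asset": "BondA", "risk_score": 0.9},
    {"asset": "BondB", "risk_score": 0.4},
    {"asset": "BondC", "risk_score": 0.95},
]
flagged = run_query("Which assets are at the highest risk?", assets)
# flagged holds the high-risk assets, highest risk score first
```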

Key Business Benefits

Businesses can unlock faster, more accurate, and consistent data insights by using multiple LLMs and automating code generation, enabling quick and informed decisions.

Faster Insights, Quicker Decisions

Automating data extraction enables rapid conversion of queries into actionable insights, reducing decision-making time for financial executives.

Increased Accuracy and Efficiency

The multiple-LLM approach ensures precise data extraction from complex queries, improving data reliability and confidence in results, while lowering operational costs.

Reduced Dependence on Technical Teams

Business users can directly access structured data and reports, freeing up data analysts for more strategic tasks.


An Advanced AI Approach
Technical Deep Dive: Dual-LLM Framework for Automated Data Extraction

To address inefficiencies in automated data extraction, this approach employs a dual-LLM (Large Language Model) framework that automates code generation by interpreting user queries. The framework comprises two specialized models that work together to enhance query handling, accuracy, and automation.

01. Conversational LLM: Natural Language Processing for Query Interpretation

The Conversational LLM processes natural language inputs, even when queries are unclear or loosely defined. It serves as an intermediary between the user’s request and the data extraction process, ensuring accurate translation of complex instructions into actionable steps.

Intent Analysis for Accurate Interpretation

This module analyzes the user's query to discern the underlying intent, even when the language is imprecise or the request lacks technical detail.

Query Decomposition for Complex Data Requests

For multi-layered or complex queries, the Conversational LLM breaks them down into logical components, identifying necessary tables, attributes, and data transformations.

Instruction Generation for Data Processing

After interpreting the query, the LLM organizes it into a clear series of instructions, outlining each step needed for data extraction or transformation, readying it for the next stage of automation.
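As an illustration, the instruction set for a query such as "assess risk-adjusted returns for high-yield bonds" might be emitted as structured JSON. The schema below (`tables`, `filters`, `steps` keys, and the table and column names) is an assumption for the sketch, not a fixed standard:

```python
import json

# Hypothetical structured instructions as the Conversational LLM might
# emit them; the schema and all names are illustrative only.
instructions_json = """
{
  "tables": ["bonds", "market_prices"],
  "filters": [{"column": "rating", "op": "in", "value": ["BB", "B", "CCC"]}],
  "steps": [
    "Join bonds to market_prices on bond_id",
    "Compute annualized return per bond",
    "Divide return by volatility to get the risk-adjusted figure"
  ]
}
"""

# Parsing the instructions gives the Code-Generation LLM a machine-readable plan.
instructions = json.loads(instructions_json)
```

Emitting instructions in a structured format, rather than free text, makes the hand-off between the two models easier to validate.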

02. Code-Generation LLM: Automated Code Synthesis for Data Extraction

The Code-Generation LLM converts structured instructions from the Conversational LLM into executable code, automating the data extraction process and minimizing the need for manual coding by analysts.

Precise Code Generation for Complex Queries

Using the detailed instructions provided, this model generates the exact code needed to extract or transform data, even for queries requiring complex logic or multiple datasets.

Workflow Automation for Multi-Step Processes

For requests involving sequences of actions—like filtering, aggregating, or joining datasets—the Code-Generation LLM automates each step, ensuring a seamless execution of the entire process.

Optimization for Code Accuracy and Efficiency

The LLM is designed to produce efficient and accurate code, continuously optimizing performance throughout the data extraction process and ensuring that results align precisely with user needs.
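One way such accuracy checks could be implemented, sketched under our own assumptions rather than drawn from the described product, is a static inspection of the generated code before execution: reject anything that imports modules or calls functions outside an allow-list. Production systems would typically add sandboxed execution on top of this.

```python
import ast

# Hypothetical allow-list of callable names the generated code may use.
ALLOWED_CALLS = {"sorted", "sum", "len", "min", "max", "filter"}

def validate_generated_code(code: str) -> bool:
    """Reject code that fails to parse, imports modules, or calls
    names outside the allow-list."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id not in ALLOWED_CALLS:
                return False
    return True

ok = validate_generated_code("result = sorted(rows, key=len)")
bad = validate_generated_code("import os\nos.system('rm -rf /')")
```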


Multiple LLM Architecture

This architecture represents a dual-LLM-based data extraction process. It starts with Natural Language Input, which the Conversational LLM translates into structured instructions. The Code Generation LLM then converts these instructions into code. The code retrieves data from structured databases, and the Code Execution Engine validates the results before delivering the final Structured Data Output to the user.

How Does this Approach Stand Ahead?

The instruction-based code generation approach, powered by a dual-LLM framework, redefines automated data extraction by addressing core technical challenges. It combines natural language understanding with precise code automation, making it uniquely suited for complex, real-world applications in financial services. Here's why this approach is technically superior:

Enhanced Interpretation of User Queries

Unlike traditional BI tools, which often misinterpret vague or complex user inputs, the Conversational LLM excels at understanding user intent and breaking down ambiguous queries into precise instructions. This ensures that even loosely defined requests result in accurate data extraction.

Automation of Complex Data Workflows

The integration of a Code Generation LLM allows for seamless automation of multi-step processes. This model can handle intricate transformations, such as aggregations, joins, and filters, eliminating the need for manual intervention and reducing turnaround times.

Accurate and Efficient Code Generation

By combining the strengths of both LLMs, the framework ensures that the generated code is not only accurate but also optimized for efficiency. This reduces the risk of errors and improves the overall reliability of the extracted data, leading to better decision-making.

Adaptive to Growing Data Complexity

As data environments become more complex, with increasing volumes and varied data types, this dual-LLM framework remains robust. The ability to manage complex data transformations while maintaining scalability ensures that businesses can continue to derive value as their data challenges grow.


Areas of Application

The dual-LLM approach, combining Conversational and Code-Generation models, is particularly suited to solving complex challenges in data extraction and analysis for financial institutions. Here are key use cases where this advanced AI approach can drive value:

01. Personalized Wealth Management Reports

Wealth managers need to provide personalized reports for high-net-worth clients. The system can interpret detailed client requests, like "Generate a performance report including alternative investments and ESG ratings," and automatically compile data into a structured, client-ready report, saving time and improving client satisfaction.

02. Portfolio Analysis and Risk Assessment

Managing large portfolios requires analyzing asset performance, risk metrics, and market data. The Conversational LLM breaks down queries like "assess risk-adjusted returns for high-yield bonds,” while the Code-Generation LLM automates data extraction, allowing analysts to focus on strategy.

03. Customer Segmentation for Targeted Campaigns

A marketing team wants to identify high-value customers for a credit card campaign. The Conversational LLM interprets segmentation criteria—like spending patterns and income history—while the Code-Generation LLM creates the code to extract and analyze the data, enabling precise targeting.

04. Fraud Detection and Transaction Analysis

A fraud prevention team needs to monitor for unusual transaction patterns. The system interprets queries like "Flag transactions above $10,000 in a 24-hour window,” automating the extraction and analysis of transaction data, enhancing fraud detection and reducing false positives.
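For a query like the one above, the Code-Generation LLM might emit pandas code along the following lines. The column names (`txn_id`, `amount`, `timestamp`) and the sample data are assumptions for illustration, not a real transaction schema:

```python
import pandas as pd

# Illustrative transaction data; in practice this comes from the transaction store.
txns = pd.DataFrame({
    "txn_id": [1, 2, 3],
    "amount": [12_000, 4_500, 15_000],
    "timestamp": pd.to_datetime(
        ["2024-06-01 09:00", "2024-06-01 10:30", "2024-05-29 08:00"]
    ),
})

now = pd.Timestamp("2024-06-01 12:00")
window_start = now - pd.Timedelta(hours=24)

# Flag transactions above $10,000 within the trailing 24-hour window.
flagged = txns[(txns["amount"] > 10_000) & (txns["timestamp"] >= window_start)]
```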

05. Complex Compliance and Regulatory Reporting

Financial institutions face strict reporting needs, like SEC and Basel III requirements. The dual-LLM handles requests such as "Generate a report explaining leverage ratio compliance for Q2,” automating data extraction for accurate, timely reports.

06. Credit Risk Scoring Automation

BI teams assess credit risk by analyzing structured datasets. The dual-LLM processes complex risk queries like "Evaluate default rates for loans above $500,000,” creating accurate and automated scoring models for better credit decisions.
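The generated code for such a query might reduce to a simple filtered aggregation. The loan-book schema below (`loan_amount`, `defaulted` columns) is assumed for the sketch:

```python
import pandas as pd

# Illustrative loan book; column names are assumptions, not a fixed schema.
loans = pd.DataFrame({
    "loan_amount": [600_000, 750_000, 520_000, 300_000],
    "defaulted":   [True,    False,   True,    True],
})

# Evaluate the default rate for loans above $500,000.
large = loans[loans["loan_amount"] > 500_000]
default_rate = large["defaulted"].mean()  # fraction of large loans that defaulted
```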

07. Sentiment Analysis from Financial Reports

BI tools are used to gauge market sentiment from structured text, like earnings calls. The dual-LLM automates sentiment queries, such as “Summarize positive trends in recent earnings reports,” extracting and organizing insights in an easily interpretable format for decision-makers.

08. Real-Time Data Aggregation for Investment Dashboards

Investment analysts require up-to-date data from multiple sources to maintain dashboards. This approach automates the creation of scripts that aggregate data from the underlying databases, keeping dashboards accurate and current without manual data entry and allowing analysts to focus on insights.

09. AI for Business Intelligence

AI for BI has transformative applications across domains such as Risk, Banking, Wealth, Brokerage, and Payments, helping business stakeholders unlock insights and actionable results from approved data sources.


AI for Business Intelligence

A Real Case of Application
Real Business Problem Description

A global B2B fintech platform offering Brokerage-as-a-Service to banks, brokers, and funds was struggling to empower its Data and BI team to respond to the large volume of data-insight requests from business executives and stakeholders.

Our Approach

Our Business Intelligence solution, using the Instruction Based Code Generation, leveraged multiple LLMs (Generative AI) to simplify data analysis and insights generation. Business users could ask natural language questions to query data, analyze data, and visualize outcomes—enabling faster, more informed decision-making without the need for technical expertise.

Results
Reduced Response Time

Business Query Response Time reduced from 12 hours to a couple of minutes, resulting in higher productivity

90%+ Accuracy of response from AI for Industry BI solution

compared to 50% accuracy of Baseline Out-of-the-Box models and other leading AI/BI tools in the market

Highly interactive Gen AI based UI

that allows users to ask simple English-language questions and gain quick, accurate insights

About the Authors

Vijay Saini

Innovation Labs Lead, SiriusAI

Vijay is an Innovation Labs lead at SiriusAI, specializing in developing component AI solutions for both structured and unstructured data. He has extensive expertise in AI-driven data extraction and AI-powered customer experience analytics. Vijay has successfully delivered over 15 AI-based products for financial services, focusing on enhancing prospect acquisition and customer experience through advanced data interlinking and AI-driven insights. In previous roles, Vijay led global solution development, delivery, and architecture teams at leading consulting firms. Prior to SiriusAI, he was a tech consultant, developing AI-enabled data solutions for major banks in Thailand and the US and delivering AI-powered customer experience analyzers to over 10 clients in the US.

Parikshit Bawa

Senior AI Consultant, SiriusAI

Parikshit is a senior AI consultant with 8 years of experience and an MBA from IIM Calcutta. He excels at aligning AI capabilities with strategic business needs. At SiriusAI, he has led projects such as an AI-driven report-generation solution for a leading US banking private investment group, enabling streamlined care for ultra-high-net-worth clients. He has also played a key role in helping brokerage firms leverage AI for business intelligence. In previous roles, Parikshit actively applied AI to strategic decision-making. He also specializes in implementing end-to-end AI solutions, from identifying high-impact use cases to executing tailored strategies, empowering businesses to transition smoothly from AI-active to AI-native, driving efficiency, enhancing customer experience, and unlocking new growth opportunities.