
Supply Chain ERP AI : Multi-Agent LLM app : Code-Snippets

Updated: Jan 28

Core Innovation

Recent advances in Large Language Models (LLMs) have enabled a transformative approach to supply chain operations and optimization. By combining LLM-based agents with traditional optimization solvers and operational tools, organizations can bridge the gap between human-language interactions and complex supply chain systems while maintaining mathematical rigor and operational efficiency. This note leverages learnings from systems like OptiGuide (MSFT) and early access to reasoning systems like OpenAI’s o1 to define the next generation of supply chain platforms.


[Note: This summary article uses code snippets to describe the functionality of the platform described herein. For a more complete version of this summary, please see https://www.dakshineshwari.net/post/multi-agent-llm-apps-for-supply-chain-operations-optimization]


Technical Architecture - Key Components

The multi-agent structure allows the Supply-chain LLM app to access and provide LLM services, while leveraging LLMs with different cost/performance characteristics to provide optimal results. Broadly, there are 5 different kinds of functionality, described in the picture below.

  • Collaborative agents

  • Multiple LLMs

  • Execution helpers that run different aspects of this basic supply-chain AI platform

  • Data Services geared towards providing better context and improved reasoning

  • ERP integration and ingestion services



The sections below describe the main architectural components (not all) through the use of small and sometimes incomplete code snippets. [Note that the optimization sections of this work leverage the learnings from OptiGuide Supply Chain Research, MSFT]


Execution Flow - Key Components



The execution flow above shows the main interactions within this supply-chain Gen-AI platform; note that some elements have been left out of the flow depiction for simplicity.


Code Snippets for Key Components

1. Service/Scenario Query

The LLM processing is activated by a service/scenario request from the user. It is, effectively, the entry point into the Gen-AI layer over the Supply Chain ERP.


Example Usage
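
A minimal sketch of what such a service/scenario query could look like at this entry point. The ServiceQuery class, its field names, and the sample values are illustrative assumptions, not the platform's actual schema.

# Illustrative sketch only: ServiceQuery and its fields are assumptions made
# for this article, not the platform's actual request schema.
from dataclasses import dataclass, field
from typing import Dict, Any


@dataclass
class ServiceQuery:
    """A user-facing service/scenario request that kicks off LLM processing."""
    query_id: str
    user_text: str                       # natural-language request from the user
    scenario_type: str                   # e.g. "inventory_optimization", "production_planning"
    context: Dict[str, Any] = field(default_factory=dict)  # ERP-derived context


# Example usage: a query for a production-planning scenario.
query = ServiceQuery(
    query_id="SQ-0001",
    user_text="Plan Q2 production for product X200 given current inventory and demand forecast.",
    scenario_type="production_planning",
    context={"plant": "PLANT_01", "horizon": "Q2_2024"},
)
print(query.scenario_type, "->", query.user_text)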



2. Multi-Agent Framework

Modern implementations utilize multiple specialized agents:

  • Planner Agent: Handles strategic decision-making and optimization/process planning

  • Meta-prompt Agent: Optimizes the prompts for better accuracy – leverages the evaluation framework

  • Execution Agent: Manages operational tool integration and execution flow

  • Safeguard Agent: Ensures solution feasibility and constraint satisfaction

  • Interpretation Agent: Transforms technical outputs into business insights


Code samples below show select elements of the multi-agent implementation.


Multi-Agent Manager
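
A hedged sketch of a manager that routes a request through the agents listed above. The class name, agent keys, and run() signature are assumptions made for illustration, not the platform's actual interfaces.

# Illustrative multi-agent manager: coordinates the planner, execution,
# safeguard and interpretation agents described above.
from typing import Any, Dict, Protocol


class Agent(Protocol):
    name: str
    def run(self, payload: Dict[str, Any]) -> Dict[str, Any]: ...


class MultiAgentManager:
    """Coordinates the specialized agents around a single service query."""

    def __init__(self, agents: Dict[str, Agent]):
        self.agents = agents

    def handle(self, query: Dict[str, Any]) -> Dict[str, Any]:
        # 1. Planner produces a high-level plan for the scenario.
        plan = self.agents["planner"].run(query)
        # 2. Execution agent turns the plan into tool calls / an execution recipe.
        result = self.agents["execution"].run({"plan": plan, "query": query})
        # 3. Safeguard agent checks feasibility and constraint satisfaction.
        checked = self.agents["safeguard"].run(result)
        # 4. Interpretation agent converts technical output into business insight.
        return self.agents["interpretation"].run(checked)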


Specialized Agent Implementation (Code Writer Execution Flow)
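
An illustrative code-writer agent in the spirit of OptiGuide's write-code, run, retry-on-error loop, shown here with the OpenAI Python client. The model choice, prompt wording, and three-retry policy (mirroring the retry count noted in the evaluation section) are assumptions, and the bare exec() would need sandboxing in any real deployment.

# Sketch of a code-writer agent: ask the LLM for code, execute it, and feed
# errors back for a retry. The model and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def code_writer_agent(task: str, max_retries: int = 3) -> dict:
    """Ask the LLM for Python code that answers `task`; retry on runtime/syntax errors."""
    error_feedback = ""
    for attempt in range(max_retries):
        prompt = f"Write Python code (no explanations) to: {task}\n{error_feedback}"
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        code = response.choices[0].message.content.strip().strip("`")  # naive fence stripping
        try:
            namespace: dict = {}
            exec(code, namespace)            # NOTE: sandbox this in production
            return {"status": "ok", "attempt": attempt + 1, "namespace": namespace}
        except Exception as exc:             # errors trigger a corrective retry
            error_feedback = f"The previous attempt failed with: {exc}. Please fix it."
    return {"status": "failed", "attempts": max_retries}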



3. Core Planning System

The planning system generates a plan and provides guidance to the execution-flow agent.
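
A minimal, rule-based stand-in for the LLM-driven planner, showing the kind of structured plan handed to the execution-flow agent. Step names and fields are illustrative assumptions.

# Simplified stand-in for the planner: turn a service query plus ERP context
# into an ordered list of plan steps for the execution-flow agent.
from typing import Any, Dict, List


def generate_plan(query: Dict[str, Any], context: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Return a structured plan for the requested scenario (illustrative only)."""
    steps: List[Dict[str, Any]] = []
    if query.get("scenario_type") == "production_planning":
        steps = [
            {"step": 1, "action": "load_demand_forecast", "inputs": ["horizon"]},
            {"step": 2, "action": "check_material_availability", "inputs": ["bom", "inventory"]},
            {"step": 3, "action": "compute_capacity_plan", "inputs": ["work_centers"]},
            {"step": 4, "action": "generate_production_orders", "inputs": ["capacity_plan"]},
        ]
    for step in steps:
        step["context_keys"] = sorted(context.keys())   # carry the ERP context forward
    return steps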

4. Tool Integration Framework


The LLM-based Supply Chain Application orchestrates functionality such as inventory optimization through integrated tools that handle demand forecasting, stock-level optimization, and reordering decisions. It provides the following sub-functionality:

•         Universal tool registry with validation framework

•         Standardized interfaces for ERP, WMS, and TMS integration

•         Support for both synchronous and asynchronous operations

Core Registry
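
A sketch of a universal tool registry with a simple validation hook. The ToolRegistry class, its schema format, and the toy forecasting tool are assumptions for illustration.

# Illustrative tool registry: tools are registered with a lightweight schema
# and inputs are validated before dispatch.
from typing import Callable, Dict, Any


class ToolRegistry:
    """Registers supply-chain tools and validates inputs before dispatch."""

    def __init__(self):
        self._tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, fn: Callable[..., Any], schema: Dict[str, type]) -> None:
        self._tools[name] = {"fn": fn, "schema": schema}

    def call(self, name: str, **kwargs: Any) -> Any:
        tool = self._tools[name]
        # Validation framework (simplified): check required arguments and types.
        for arg, arg_type in tool["schema"].items():
            if arg not in kwargs or not isinstance(kwargs[arg], arg_type):
                raise ValueError(f"Tool '{name}': argument '{arg}' must be {arg_type.__name__}")
        return tool["fn"](**kwargs)


# Example: register a toy demand-forecasting tool and call it.
registry = ToolRegistry()
registry.register(
    "forecast_demand",
    fn=lambda sku, periods: [100] * periods,    # placeholder forecast
    schema={"sku": str, "periods": int},
)
print(registry.call("forecast_demand", sku="X200", periods=3))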



Tool Interface
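
A sketch of a standardized tool interface with both synchronous and asynchronous entry points, as the list above calls for. The SupplyChainTool base class and the reorder-point example are illustrative assumptions.

# Illustrative standardized interface that ERP, WMS and TMS tools would implement.
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict


class SupplyChainTool(ABC):
    """Common interface every integrated tool (ERP, WMS, TMS, ...) implements."""

    name: str = "base_tool"

    @abstractmethod
    def run(self, params: Dict[str, Any]) -> Dict[str, Any]:
        """Synchronous execution."""

    async def run_async(self, params: Dict[str, Any]) -> Dict[str, Any]:
        # Default async support: run the sync implementation in a worker thread.
        return await asyncio.to_thread(self.run, params)


class ReorderPointTool(SupplyChainTool):
    name = "reorder_point"

    def run(self, params: Dict[str, Any]) -> Dict[str, Any]:
        # Classic reorder point = demand during lead time + safety stock.
        rop = params["daily_demand"] * params["lead_time_days"] + params["safety_stock"]
        return {"sku": params["sku"], "reorder_point": rop}


print(ReorderPointTool().run(
    {"sku": "X200", "daily_demand": 40, "lead_time_days": 5, "safety_stock": 120}
))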


Tool Sequence Management
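
A minimal sketch of sequencing registered tools while keeping per-step results in the shared execution context; the structure is an assumption.

# Illustrative tool-sequence manager: run (name, tool) pairs in order and keep
# each tool's output in the shared context for later steps and for audits.
from typing import Any, Callable, Dict, List, Tuple


def run_sequence(steps: List[Tuple[str, Callable[[Dict[str, Any]], Dict[str, Any]]]],
                 context: Dict[str, Any]) -> Dict[str, Any]:
    for name, tool in steps:
        result = tool(context)
        context[f"{name}_result"] = result   # later steps (and audit logs) can read this
    return context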



LLM Integration
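
A sketch of the LLM integration layer routing between models with different cost/performance characteristics, as described in the architecture section. The model names and the routing rule are assumptions.

# Illustrative LLM routing: cheap requests go to a smaller model, complex
# planning/optimization requests go to a stronger one.
from openai import OpenAI

client = OpenAI()


def ask_llm(prompt: str, complexity: str = "low") -> str:
    model = "gpt-4o" if complexity == "high" else "gpt-4o-mini"   # assumed model choices
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content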



Example: Multi-Echelon Inventory Optimization


The framework enables the LLM to orchestrate inventory optimization by coordinating demand forecasting, safety stock calculation, and reorder point determination while maintaining execution context and respecting operational constraints.
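
A worked sketch of that flow using standard textbook formulas for safety stock and reorder point. The echelon names, demand figures, and service-level factor are made up for illustration.

# Illustrative multi-echelon calculation: forecast demand, compute safety stock,
# then derive a reorder point per echelon.
import math

Z_95 = 1.65  # service-level factor for roughly a 95% cycle service level

echelons = {
    "central_dc":  {"daily_demand": 400, "demand_std": 60, "lead_time_days": 7},
    "regional_dc": {"daily_demand": 120, "demand_std": 25, "lead_time_days": 3},
}

for name, e in echelons.items():
    # Safety stock = z * sigma_demand * sqrt(lead time)
    safety_stock = Z_95 * e["demand_std"] * math.sqrt(e["lead_time_days"])
    # Reorder point = expected demand during lead time + safety stock
    reorder_point = e["daily_demand"] * e["lead_time_days"] + safety_stock
    print(f"{name}: safety_stock={safety_stock:.0f}, reorder_point={reorder_point:.0f}")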


5. Context State & Management

The context state is the data extracted from the base ERP systems; it constrains the execution flow and drives result generation. The context state code below only describes the structure of the state through a few examples, but this state is managed through the following capabilities:

•         Atomic state transitions

•         Full audit capabilities

•         Robust monitoring & recovery

 

Example state maintained for some supply chain processes
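
An illustrative example of the context state for two supply-chain processes. The field names and values are assumptions, and the real state additionally carries the audit and recovery metadata listed above.

# Illustrative context state: ERP snapshot plus hard constraints per process.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict


@dataclass
class ContextState:
    process: str                                  # e.g. "inventory_optimization"
    erp_snapshot: Dict[str, Any]                  # data pulled from the base ERP
    constraints: Dict[str, Any]                   # hard limits the execution flow must respect
    updated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


inventory_state = ContextState(
    process="inventory_optimization",
    erp_snapshot={"sku": "X200", "on_hand": 3200, "open_pos": 2, "lead_time_days": 5},
    constraints={"max_warehouse_units": 10000, "budget_usd": 250000},
)

production_state = ContextState(
    process="production_planning",
    erp_snapshot={"plant": "PLANT_01", "work_center_hours": 1600},
    constraints={"horizon": "Q2_2024", "max_overtime_hours": 200},
)
print(inventory_state.process, production_state.constraints["horizon"])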


6. Execution Recipe Generation

The execution system converts high-level plans into concrete operational steps as described below:
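
A minimal sketch of mapping abstract plan steps onto concrete tool invocations with resolved parameters. The recipe schema, tool names, and error policy are illustrative assumptions.

# Illustrative plan-to-recipe conversion: each abstract plan step becomes an
# ordered tool call with parameters resolved from the execution context.
from typing import Any, Dict, List


def plan_to_recipe(plan: List[Dict[str, Any]], context: Dict[str, Any]) -> List[Dict[str, Any]]:
    step_to_tool = {
        "load_demand_forecast": "forecast_demand",
        "check_material_availability": "erp_stock_check",
        "compute_capacity_plan": "capacity_planner",
        "generate_production_orders": "erp_create_orders",
    }
    recipe = []
    for step in plan:
        recipe.append({
            "order": step["step"],
            "tool": step_to_tool.get(step["action"], step["action"]),
            "params": {k: context.get(k) for k in step.get("inputs", [])},
            "on_error": "retry_then_escalate",    # simple, assumed error policy
        })
    return recipe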



7. Enterprise Integration

Support for key enterprise systems includes:

•         ERP Systems: SAP, Oracle, Microsoft Dynamics

•         Data Pipeline Management: ETL operations, data synchronization

•         Warehouse Management: Inventory control, picking optimization

•         Transportation Management: Route optimization, load planning



8. Mathematical Optimization Integration (from OptiGuide MSFT)

This capability is not necessarily used in the first version of the platform, but it is well described in the following repo: https://github.com/microsoft/OptiGuide. A minimal solver sketch follows the list below.

  • Mixed Integer Programming (MIP) solver integration

  • Support for multiple solver backends (Gurobi, CPLEX, etc.)

  • Real-time constraint handling and solution validation
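
A self-contained MIP sketch using PuLP and its bundled CBC solver (OptiGuide itself targets Gurobi; the solver backend is interchangeable). The supplier/plant data is made up for illustration.

# Toy min-cost shipping MIP: choose integer shipment quantities that meet plant
# demand without exceeding supplier capacity.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

suppliers = {"shanghai": 150, "hamburg": 100}           # capacity (units)
plants = {"austin": 120, "berlin": 90}                  # demand (units)
cost = {("shanghai", "austin"): 5, ("shanghai", "berlin"): 7,
        ("hamburg", "austin"): 8, ("hamburg", "berlin"): 3}

prob = LpProblem("min_cost_shipping", LpMinimize)
ship = {k: LpVariable(f"ship_{k[0]}_{k[1]}", lowBound=0, cat="Integer") for k in cost}

prob += lpSum(cost[k] * ship[k] for k in cost)                        # objective
for s, cap in suppliers.items():                                      # capacity constraints
    prob += lpSum(ship[(s, p)] for p in plants) <= cap
for p, dem in plants.items():                                         # demand constraints
    prob += lpSum(ship[(s, p)] for s in suppliers) >= dem

prob.solve()
print(LpStatus[prob.status], {k: ship[k].value() for k in ship})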


9. Meta-Prompting System

The meta-prompting system optimizes LLM interactions by refining prompts based on feedback from the evaluation framework.
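
A sketch of the meta-prompt agent's refinement loop, feeding evaluation results back into a prompt-rewriting call. The prompt wording, model, and scoring inputs are assumptions.

# Illustrative meta-prompting: ask a strong model to rewrite a prompt using
# accuracy scores and failed cases from the evaluation framework.
from openai import OpenAI

client = OpenAI()


def improve_prompt(base_prompt: str, eval_score: float, failures: list[str]) -> str:
    feedback = "\n".join(f"- {f}" for f in failures)
    meta_prompt = (
        "You improve prompts for a supply-chain assistant.\n"
        f"Current prompt:\n{base_prompt}\n"
        f"Evaluation accuracy: {eval_score:.0%}\n"
        f"Failed cases:\n{feedback}\n"
        "Rewrite the prompt to fix these failures. Return only the new prompt."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": meta_prompt}],
    )
    return response.choices[0].message.content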


10. Evaluation Framework for Supply Chain LLM apps

The framework employs a rigorous approach to evaluation. Please note that this code base will change significantly, as LangChain and others are changing their available APIs. A sketch of the evaluation loop appears after the metrics below.




Structured Assessment:

o   R experiments per scenario

o   T question sets per experiment

o   Q test questions per question set

o   Three retry attempts for runtime/syntax errors


Comprehensive Scenarios: Includes facility location, multi-commodity network flow, workforce assignment, and traveling salesman problems


Accuracy Metrics:

o   Formal accuracy computation across scenarios, experiments, and question sets

o   In-distribution: 93% (GPT-4 with nearest-neighbor example selection)

o   Out-of-distribution: 84% (GPT-4 with random example selection)

Example Selection: Performance comparison between random selection and nearest-neighbor approaches for contextual examples
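
A sketch of the evaluation loop implied by the structure above: R experiments per scenario, T question sets per experiment, and Q questions per set, with accuracy averaged across all of them. run_question is a placeholder for the real harness, and the R/T/Q defaults are placeholders as well.

# Illustrative evaluation loop over scenarios, experiments and question sets.
import random


def run_question(scenario: str, question_id: int) -> bool:
    """Placeholder for executing one test question and checking the answer."""
    return random.random() < 0.9           # pretend roughly 90% of questions pass


def evaluate(scenarios, R=3, T=4, Q=10) -> dict:
    results = {}
    for scenario in scenarios:
        passed = total = 0
        for _experiment in range(R):
            for _question_set in range(T):
                for q in range(Q):
                    passed += run_question(scenario, q)
                    total += 1
        results[scenario] = passed / total   # per-scenario accuracy
    return results


print(evaluate(["facility_location", "multi_commodity_flow", "workforce_assignment", "tsp"]))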


11. Evaluation Dataset

This section covers the dataset structure and evaluation examples used to validate LLM-based supply chain optimization and operations apps. The dataset includes real procurement scenarios with corresponding ground-truth examples.

Dataset Organization - Core Dataset Classes

The evaluation dataset is built around three core classes that provide structure for validation:
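
A hedged sketch of three dataset classes of the kind described; the class names and fields (GroundTruthExample, EvalQuestion, Scenario) are assumptions for illustration.

# Illustrative dataset classes providing structure for validation.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class GroundTruthExample:
    """A validated input/output pair, e.g. a real purchase requisition and its expected answer."""
    example_id: str
    inputs: Dict[str, Any]
    expected_output: Dict[str, Any]


@dataclass
class EvalQuestion:
    """One test question plus the ground truth used to score the LLM's answer."""
    question: str
    ground_truth: GroundTruthExample


@dataclass
class Scenario:
    """A supply-chain scenario (facility location, network flow, ...) with its question sets."""
    name: str
    question_sets: List[List[EvalQuestion]] = field(default_factory=list)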


Ground Truth Example: Purchase Requisition


Validation Support - Example Question & Answer Pairs

The framework includes question macros for generating test cases:
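
A minimal sketch of a question macro expanded over dataset values to generate concrete test cases. The template wording, SKUs, and sites are illustrative assumptions.

# Illustrative question macro: one template expanded over dataset values.
from itertools import product

MACRO = "What happens to total cost if demand for {sku} at {site} increases by {pct}%?"

skus = ["X200", "X310"]
sites = ["PLANT_01", "DC_EAST"]
percentages = [10, 25]

questions = [MACRO.format(sku=s, site=site, pct=p)
             for s, site, p in product(skus, sites, percentages)]
print(len(questions), "questions generated;", questions[0])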



12. Fivetran based Data Ingestion

For ingestion from external data sources, Fivetran is used.
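
A hedged sketch of triggering a Fivetran connector sync from the ingestion layer via Fivetran's REST API. The endpoint path and basic-auth scheme are assumptions based on Fivetran's public API and should be verified against the current documentation.

# Illustrative sync trigger for one Fivetran connector (e.g. the ERP extract).
import os
import requests

FIVETRAN_API = "https://api.fivetran.com/v1"
AUTH = (os.environ["FIVETRAN_API_KEY"], os.environ["FIVETRAN_API_SECRET"])


def trigger_sync(connector_id: str) -> dict:
    """Kick off an incremental sync for one connector; endpoint path is assumed."""
    resp = requests.post(f"{FIVETRAN_API}/connectors/{connector_id}/sync", auth=AUTH)
    resp.raise_for_status()
    return resp.json()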



13. Example Plan generated for Production Planning

The first step in the execution of this platform is to take a user’s service query request and generate a plan, which acts as a detailed guide for the execution recipe generation step. The sample snippets below show a generated plan for production planning.


Service Query and Context
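
An illustrative service query and context for this production-planning example. The field names and figures are assumptions chosen to be consistent with the generated plan shown next.

# Illustrative service query and ERP-derived context.
service_query = {
    "query_id": "SQ-PROD-0042",
    "user_text": "Create a Q2 2024 production plan for product X200.",
    "scenario_type": "production_planning",
}

context = {
    "product": "X200",
    "horizon": "Q2_2024",
    "demand_forecast_units": 45000,
    "current_inventory_units": 6000,
    "plant_capacity_units_per_week": 4000,
}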



Generated Plan
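
A sketch of the generated plan. The plan ID matches the one referenced in section 14, while the steps and details are illustrative assumptions.

# Illustrative generated plan for the service query above.
generated_plan = {
    "plan_id": "PROD_PLAN_X200_Q2_2024",
    "objective": "Meet Q2 2024 demand for X200 at minimum cost within plant capacity",
    "steps": [
        {"step": 1, "action": "load_demand_forecast", "source": "ERP"},
        {"step": 2, "action": "net_requirements", "detail": "forecast minus current inventory"},
        {"step": 3, "action": "compute_weekly_build_schedule", "constraint": "capacity per week"},
        {"step": 4, "action": "check_material_availability", "tool": "erp_stock_check"},
        {"step": 5, "action": "generate_production_orders", "tool": "erp_create_orders"},
    ],
}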



14. Example Execution Recipe for Production Planning

Generated from Production Plan (PROD_PLAN_X200_Q2_2024)

Recipe Implementation
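
A sketch of the execution recipe derived from PROD_PLAN_X200_Q2_2024, with each plan step mapped to an ordered tool invocation. The tool names and parameters are illustrative assumptions.

# Illustrative execution recipe: ordered, concrete tool calls.
execution_recipe = [
    {"order": 1, "tool": "forecast_demand", "params": {"sku": "X200", "horizon": "Q2_2024"}},
    {"order": 2, "tool": "net_requirements", "params": {"on_hand": 6000}},
    {"order": 3, "tool": "capacity_planner", "params": {"units_per_week": 4000}},
    {"order": 4, "tool": "erp_stock_check", "params": {"bom": "BOM_X200"}},
    {"order": 5, "tool": "erp_create_orders", "params": {"release": "weekly"}},
]

for step in execution_recipe:
    print(f"{step['order']}. {step['tool']}({step['params']})")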




Conclusion:

This technical framework represents a significant advancement in supply chain management, combining the power of LLMs and multi-agent collaboration with traditional optimization techniques while maintaining operational rigor and efficiency.


Arindam Banerji, PhD (banerji.arindam@dakshineshwari.net)


 
 
 


