THE DISPATCHER: BRIDGING THE PROBABILISTIC GAP IN AUTOMATED DECISION MODELING

Authors

DOI:

https://doi.org/10.20998/2413-3000.2025.11.9

Keywords:

Large Language Models; Automated Decision Modeling; Petri Nets; Validation and Verification; Business Process Management

Abstract

In the contemporary landscape of Software Engineering and Business Process Management (BPM), the integration of generative artificial intelligence has precipitated a paradigm shift from manual, deterministic specification to automated, probabilistic generation. While offering scalability, this transition introduces a fundamental volatility known as the "Probabilistic Gap": the chasm between the fluid, high-variance output of Large Language Models (LLMs) and the strict, zero-tolerance syntactic requirements of executable standards such as DMN (Decision Model and Notation) and the engines that run them. This paper addresses the "Struc-Bench Paradox," which highlights the limitations of transformer architectures in generating complex structured data without rigid orchestration. The study formally defines and implements the "Dispatcher," a pivotal control-plane component designed to function as an intelligent resource arbiter and quality gatekeeper within a neuro-symbolic architecture. The theoretical framework shifts the economic focus from Baumol's Cost Disease, which concerns the speed of production, to Boehm's Law of Software Economics, which emphasizes the exponential cost of defects that propagate to production. To operationalize this, the Dispatcher is modeled as a discrete deterministic process using Cost-Colored Petri Nets rather than Finite State Machines (FSMs). The Petri Net formalism allows precise modeling of concurrency, state accumulation, and the strict enforcement of "Retry Budgets," thereby mathematically guaranteeing system termination and preventing infinite loops of costly regeneration. The architectural implementation follows a "Test-First" generation philosophy: the system first synthesizes validation criteria (JSON test cases) using Schema Injection and RAG, and then grounds the generation of DMN logic (XML) in these pre-validated scenarios. Experimental analysis was conducted on a controlled set of 200 generation cycles to evaluate two error-recovery strategies: Strategy A (independent regeneration of DMN tables only) and Strategy B (joint, dynamic regeneration of both DMN tables and test cases). Quantitative results demonstrate that Strategy B is economically superior, achieving a 6.06% reduction in total cost and an 8.44% reduction in token consumption compared to the independent patching approach. The findings indicate that simultaneous regeneration enables the LLM to resolve semantic incoherence and hallucinations more effectively than iterative repairs do, prioritizing logical consistency over partial code retention. The study concludes that the Dispatcher effectively bridges the neuro-symbolic divide by transforming validation from a post-production manual review into a pre-production automated cycle. By enforcing a "Stop-Loss" mechanism driven by economic constraints, the framework minimizes the Total Cost of Ownership and serves as a critical "Trust Proxy," mitigating automation bias and ensuring that AI-generated artifacts meet the rigorous reliability standards required for enterprise deployment.
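To make the control flow concrete, the following is a minimal, illustrative Python sketch of the Dispatcher's stop-loss retry loop under the "Test-First" philosophy described above. All names (dispatch, gen_tests, gen_dmn, validate, retry_budget, joint_regeneration) and the default budget of three attempts are hypothetical stand-ins for exposition; the generator and validator callables abstract the LLM calls and the DMN/test-case checks and do not represent the paper's actual implementation.

    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple

    @dataclass
    class CycleResult:
        dmn_xml: Optional[str]   # accepted DMN artifact, or None if the budget was exhausted
        attempts: int            # generation cycles consumed
        tokens_spent: int        # accumulated token cost across all calls

    def dispatch(
        gen_tests: Callable[[], Tuple[str, int]],    # -> (JSON test cases, token cost)
        gen_dmn: Callable[[str], Tuple[str, int]],   # tests -> (DMN XML, token cost)
        validate: Callable[[str, str], bool],        # (DMN XML, tests) -> pass/fail
        retry_budget: int = 3,                       # hard stop-loss: maximum regeneration cycles
        joint_regeneration: bool = True,             # True = Strategy B, False = Strategy A
    ) -> CycleResult:
        """Test-first generation bounded by a retry budget.

        The budget plays the role of the Petri net's finite token supply: every
        failed validation consumes one unit, so the loop terminates after at most
        retry_budget attempts instead of regenerating indefinitely.
        """
        tokens = 0
        tests, cost = gen_tests()            # 1. synthesize validation criteria first
        tokens += cost

        for attempt in range(1, retry_budget + 1):
            dmn, cost = gen_dmn(tests)       # 2. ground DMN generation in the test cases
            tokens += cost
            if validate(dmn, tests):         # 3. quality gate before the artifact ships
                return CycleResult(dmn, attempt, tokens)
            if joint_regeneration:           # Strategy B: regenerate tests and DMN together
                tests, cost = gen_tests()
                tokens += cost
            # Strategy A: keep the existing tests and regenerate only the DMN next pass

        return CycleResult(None, retry_budget, tokens)  # stop-loss reached: escalate to a human

In this sketch the joint_regeneration flag is the only difference between the two strategies evaluated in the paper: Strategy A patches the DMN table against fixed test cases, whereas Strategy B discards and regenerates both artifacts, the variant favored by the reported 6.06% cost and 8.44% token savings.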

References

Baumol, W. J. (1967). Macroeconomics of Unbalanced Growth: The Anatomy of Urban Crisis. The American Economic Review, 57(3), 415–426.

Boehm, B. W. (1981). Software Engineering Economics. Prentice-Hall.

Boehm, B. W., & Basili, V. R. (2001). Software Defect Reduction Top 10 List. Computer, 34(1), 135–137. DOI: 10.1109/2.962984.

Hasic, F., & Vanthienen, J. (2019). Complexity metrics for DMN decision models. Computer Standards & Interfaces, 65, 15–37. DOI: 10.1016/j.csi.2019.01.001.

Tang, X., Zong, Y., Phang, J., Zhao, Y., Zhou, W., Cohan, A., & Gerstein, M. (2024). Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 12–34. DOI: 10.18653/v1/2024.naacl-short.2.

Goossens, A., Vandevelde, S., Vanthienen, J., & Vennekens, J. (2023). GPT-3 for Decision Logic Modeling. Proceedings of the 17th International Rule Challenge @ RuleML+RR 2023, CEUR Workshop Proceedings, Vol-3485.

Bhuyan, B. P., Ramdane-Cherif, A., Tomar, R., & Singh, T. P. (2024). Neuro-symbolic artificial intelligence: a survey. Neural Computing and Applications, 36, 12809–12844. DOI: 10.1007/s00521-024-09960-z.

Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., & Bi, Y. (2023). Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv preprint. DOI: 10.48550/arXiv.2312.10997.

Weske, M. (2019). Business Process Management: Concepts, Languages, Architectures (3rd ed.). Springer. DOI: 10.1007/978-3-662-59432-2.

Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in Artificial Intelligence: Meta-Analysis. Human Factors, 65(2), 337–365. DOI: 10.1177/00187208211013988.

Etikala, V., Van Veldhoven, Z., & Vanthienen, J. (2020). Text2Dec: Extracting Decision Dependencies from Natural Language Text for Automated DMN Decision Modelling. Business Process Management Workshops (BPM 2020). Lecture Notes in Business Information Processing, 397. DOI: 10.1007/978-3-030-66498-5_27.

Goossens, A., De Smedt, J., & Vanthienen, J. (2023). Extracting Decision Model and Notation models from text using deep learning techniques. Expert Systems with Applications, 211, 118667. DOI: 10.1016/j.eswa.2022.118667.

Bork, D., Ali, S. J., & Dinev, G. M. (2023). AI-Enhanced Hybrid Decision Management. Business & Information Systems Engineering, 65(2), 179–199. DOI: 10.1007/s12599-023-00790-2.

Abedi, S., & Jalali, A. (2025). DMN-Guided Prompting: A Low-Code Framework for Controlling LLM Behavior. arXiv preprint arXiv:2505.11701. DOI: 10.48550/arXiv.2505.11701.

Published

2026-01-22