A Method for Detecting Knowledge Conflicts in Chinese Intelligent Agent Interactions

Huanyu Cheng, Yincheng Gu, Mengting Xi, Qiuyuan Zhong, Liu Wei
Interdisciplinary Journal of Information, Knowledge, and Management  •  Volume 20  •  2025  •  pp. 012

This study addresses the knowledge conflicts that arise in multi-agent collaboration, particularly when agents based on large language models (LLMs) give inconsistent answers or recommendations because of differing knowledge sources or hallucination-induced errors.

The paper tackles a key limitation of current intelligent agents: they cannot dynamically detect or resolve knowledge conflicts. We introduce an automated conflict detection and resolution method that improves the accuracy of agent responses.

We propose a Knowledge Conflict Resolution (KCR) method that leverages prompt engineering and fine-tuned LLM agents for conflict detection and resolution. The method is evaluated in task-oriented dialogue scenarios and compared against baseline models on consistency, task success rate, and user satisfaction.
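To make the detection step concrete, the sketch below shows one way a prompt-engineered LLM could be asked to judge whether two agents' answers contradict each other. The prompt wording, the CONFLICT/CONSISTENT labels, and the `llm` callable are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of prompt-based knowledge-conflict detection between two agents.
# Assumed design: a judging LLM receives both answers and returns a one-word verdict.

from typing import Callable

CONFLICT_PROMPT = """You are a conflict detector for a multi-agent system.
Question: {question}
Agent A answer: {answer_a}
Agent B answer: {answer_b}
Do the two answers make contradictory factual claims?
Reply with exactly one word: CONFLICT or CONSISTENT."""


def detect_conflict(question: str, answer_a: str, answer_b: str,
                    llm: Callable[[str], str]) -> bool:
    """Return True if the judging LLM labels the two answers as contradictory."""
    prompt = CONFLICT_PROMPT.format(question=question,
                                    answer_a=answer_a,
                                    answer_b=answer_b)
    verdict = llm(prompt).strip().upper()
    return verdict.startswith("CONFLICT")


if __name__ == "__main__":
    # Stub LLM used only for demonstration; in practice this would call
    # a fine-tuned model as described in the paper.
    def stub_llm(prompt: str) -> str:
        return "CONFLICT"

    print(detect_conflict("When was the policy last updated?",
                          "It was updated in 2021.",
                          "The last update was in 2019.",
                          stub_llm))  # -> True
```

In a full system, a positive verdict would trigger the resolution step (e.g., re-querying sources or preferring the higher-confidence agent) before the answer reaches the user.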

This study proposes a novel approach to detecting knowledge conflicts in intelligent agents, offering three key advantages over existing solutions:

Higher Accuracy: Achieves a 97.3% conflict detection rate, compared with 85–91% for current methods.

User-Friendly Design: Simplifies complex coordination between agents without requiring technical expertise from users.

Practical Implementation: Works effectively across different LLM platforms without requiring system overhauls.

Experimental results show that our KCR method significantly outperforms existing approaches in resolving conflicts and maintaining coherent multi-agent conversations, notably improving user-perceived reliability.

Incorporating conflict detection mechanisms in intelligent agents can improve knowledge management and enhance user satisfaction, particularly in knowledge-intensive industries.

Ensuring the consistency and accuracy of knowledge drawn from different sources is crucial. This paper proposes an effective knowledge conflict detection method to improve that consistency.

Enhances the reliability and accuracy of intelligent agents in professional fields, facilitates organizational knowledge consistency, and promotes the practical adoption of large language models in complex scenarios.

Future research should broaden the scope of this method to include English, investigate its applicability in multimodal large models, and further develop strategies to ensure organizational knowledge consistency in intelligent agents.

Keywords: intelligent agent, large language model, knowledge conflict, ChatGPT