Authors: Mohamad Faraj Makkawi, Hassan Khan
Supervisors: Mihai Andries, Christophe Lohr
Note: For further details, refer to the full paper.
Contents:

- Project Overview
- What is a "Good" Debate?
- Argumentation Techniques
- Why Symbolic AI for Debates?
- Conversational AI Agent Types
- Evaluation Metrics
- Limitations of Existing Debating Agents
- Conclusion
- Future Directions
- References
## Project Overview

This project conducts bibliographical research on designing conversational agents capable of logical, persuasive debate. The primary goal is to support the mental health of socially isolated individuals by leveraging symbolic reasoning to enhance debating agents. The survey addresses three research questions:
- What defines a debate, and what constitutes a "good" debate?
- What debating techniques are effective for conversational agents?
- What types of conversational agents are currently capable of debating?
## What is a "Good" Debate?

A debate is a structured exchange of arguments governed by:
- Logical consistency
- Evidence-based reasoning
- Rebuttal mechanisms
A good debate can be judged on four criteria:

| Criterion | Description |
|---|---|
| Logical Flow | Coherence and progression of arguments |
| Evidence Quality | Reliability and relevance of supporting data |
| Clarity | Accessibility and understandability of arguments |
| Relevance | Pertinence of arguments to the debate topic |
## Argumentation Techniques

Argumentation techniques are structured methods for constructing, analyzing, and evaluating arguments in order to reach logical conclusions and persuade audiences.
| Technique | Pros | Cons |
|---|---|---|
| ASPIC+ | Logical rigor, explainability | Manual rule encoding required |
| Multi-Attribute | Adapts to dynamic priorities | Weightings may introduce bias |
| Argument Graphs | Clarity in argument dependencies | Struggles with uncertainty |
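The "Argument Graphs" row can be made concrete with a minimal sketch of abstract argumentation in the style of Dung: arguments are nodes, attacks are directed edges, and the grounded extension (the arguments that survive scrutiny) is computed as a least fixpoint. The example graph is illustrative.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: set of argument names
    attacks: set of (attacker, target) pairs
    An argument is accepted once every one of its attackers is itself
    attacked by an already-accepted argument (i.e. the argument is defended).
    """
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted:
            attackers = {x for (x, y) in attacks if y == a}
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# A rebuts B, B rebuts C: A is unattacked, B falls, C is reinstated by A.
result = grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})
```

This also illustrates the table's caveat: attacks here are all-or-nothing, so graded uncertainty about evidence cannot be expressed without extensions such as probabilistic argumentation.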
## Why Symbolic AI for Debates?

- Knowledge Representation: Encodes knowledge as explicit, human-readable symbols and logical rules rather than opaque numeric weights.
- Advantages:
- Logical Consistency: Ensures arguments are free of contradictions.
- Explainability: Transparent reasoning process.
- Verifiability: Easier to validate and audit.
- Clear Communication: Facilitates effective interaction with users.
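A minimal sketch of what symbolic representation buys in practice: with explicit facts and if-then rules, every derived conclusion can be traced back to the rule that produced it (explainability) and audited (verifiability). The rule and fact names below are illustrative, not taken from the surveyed systems.

```python
def forward_chain(facts, rules):
    """Derive all conclusions reachable from explicit facts via if-then rules.

    rules: list of (premises, conclusion) pairs, premises a tuple of facts.
    Returns the deductive closure plus, for each derived fact, the premises
    that fired -- a ready-made explanation of the agent's reasoning.
    """
    known = set(facts)
    why = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                why[conclusion] = premises
                changed = True
    return known, why

rules = [
    (("isolated",), "at_risk"),
    (("at_risk", "engages_in_debate"), "offer_support"),
]
known, why = forward_chain({"isolated", "engages_in_debate"}, rules)
```

Because `why` records the premises behind each conclusion, the agent can answer "why do you say that?" directly, which is exactly the explainability advantage listed above.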
## Conversational AI Agent Types

| Agent Type | Strengths | Weaknesses |
|---|---|---|
| Rule-Based | Predictable outcomes | Rigid structure |
| Retrieval-Based | Dynamic data handling | Dependency on retrieval quality |
| Hybrid | Context-aware reasoning | Complex setup |
| Hierarchical | Personalized arguments | Feedback latency |
| Explainable | Transparent explanations | Depth limitations |
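To make the "Hybrid" row concrete, here is a toy rule-first, retrieval-fallback reply policy. The patterns, corpus, and word-overlap scoring are illustrative assumptions, not a real system's design.

```python
def hybrid_reply(user_msg, rules, corpus):
    """Rule-based path first (predictable), naive retrieval as fallback
    (dynamic, but only as good as the corpus): the table's trade-offs in miniature."""
    msg = user_msg.lower()
    for pattern, reply in rules:      # rule-based: exact pattern match
        if pattern in msg:
            return reply
    words = set(msg.split())          # retrieval: pick the document with most shared words
    return max(corpus, key=lambda doc: len(words & set(doc.lower().split())))

rules = [("hello", "Hi! Pick a topic and I will take the other side.")]
corpus = [
    "Regular exercise improves mood and sleep.",
    "Social contact is known to reduce isolation.",
]
```

Even this toy shows the listed weakness: if no rule fires and the corpus lacks a relevant document, the fallback still returns the least-bad match rather than admitting ignorance.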
## Evaluation Metrics

| Agent Type | Accuracy | Response Time | Cost Efficiency | Scalability | Stability | User Satisfaction |
|---|---|---|---|---|---|---|
| Rule-Based | ↑ | ↑ | ↓ | ↓ | ↑ | ↓ |
| Retrieval-Based | — | — | — | ↑ | — | — |
| Hybrid (Rule-Retrieval Based) | — | ↓ | ↑ | — | ↑ | ↑ |
| Hierarchical | — | ↓ | ↑ | ↑ | ↑ | — |
| Explainable | — | — | — | — | — | ↑ |
Legend: ↑ = high/better; — = moderate; ↓ = low/worse.
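One hedged way to read the table quantitatively is to map the ordinal symbols to numbers and rank agents by a weighted sum, in the spirit of the multi-attribute technique above. The weights and the two-agent subset below are illustrative assumptions, and, as the techniques table warned, the weighting itself can introduce bias.

```python
SCALE = {"↑": 2, "—": 1, "↓": 0}  # ordinal symbols from the table above

# Ratings copied from the table for two agent types (subset of columns).
agents = {
    "Rule-Based":   {"accuracy": "↑", "cost": "↓", "scalability": "↓"},
    "Hierarchical": {"accuracy": "—", "cost": "↑", "scalability": "↑"},
}

# Illustrative weights: which criterion matters most is a design choice.
weights = {"accuracy": 0.5, "cost": 0.2, "scalability": 0.3}

def score(ratings):
    """Weighted sum over the numeric scale: a simple multi-attribute score."""
    return sum(weights[k] * SCALE[v] for k, v in ratings.items())

ranked = sorted(agents, key=lambda a: score(agents[a]), reverse=True)
```

With these weights the Hierarchical agent ranks first; shifting weight toward accuracy would reverse the order, which is precisely the sensitivity that makes weight choices contestable.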
References for Evaluation Metrics:
- Bronsdon, G. "Evaluating AI Agent Performance: Benchmarks for Real-World Tasks." Accessed: Apr. 3, 2025. URL: https://www.galileo.ai/blog/evaluating-ai-agent-performance-benchmarks-real-world-tasks.
- Smythos. "AI Agent Performance Measurement." Accessed: Apr. 3, 2025. URL: https://smythos.com/ai-agents/agent-architectures/ai-agent-performance-measurement/.
- SuperAnnotate. "AI Agent Evaluation." Accessed: Apr. 3, 2025. URL: https://www.superannotate.com/blog/ai-agent-evaluation.
- IBM. "AI Agent Evaluation." Accessed: Apr. 3, 2025. URL: https://www.ibm.com/think/topics/ai-agent-evaluation.
## Limitations of Existing Debating Agents

- Uncertainty and Big Data Management: Difficulty handling ambiguous or large-scale data.
- Knowledge Acquisition Bottleneck: Manual encoding is time-consuming.
- Real-Time Dynamic Assessments: Limited adaptability in live debates.
- Logical Inconsistencies: Risk of contradictory arguments.
## Conclusion

The recommended design is an explainable conversational agent: it best balances transparency, user satisfaction, and logical consistency, making it well suited to supporting mental health through debate.
## Future Directions

- Real-Time Dynamic Adaptation: Enhance agents to respond fluidly to evolving debate contexts.
## References

- Rakshit, G., et al. "Debbie, the debate bot of the future." Advanced Social Interaction with Agents: 8th International Workshop on Spoken Dialog Systems. Springer, 2019, pp. 45–52.
- Tan, C., et al. "Winning arguments: Interaction dynamics and persuasion strategies in good faith online discussions." Proceedings of the 25th International Conference on World Wide Web. 2016, pp. 613–624.
- Engelmann, D., et al. "Argumentation as a method for explainable AI: A systematic literature review." 17th IEEE Iberian Conference on Information Systems and Technologies (CISTI). IEEE, 2022, pp. 1–6.
- Kasif, S. "A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge." arXiv preprint arXiv:2410.06946 (2024). URL: https://arxiv.org/abs/2410.06946.