| 25.05 | The Hong Kong University of Science and Technology (Guangzhou) | ACL 2025 | [How does Misinformation Affect Large Language Model Behaviors and Preferences?](https://arxiv.org/abs/2505.21608v1) | **Misinformation**&**LLM Behavior**&**Benchmark** |
| 25.05 | The Hong Kong Polytechnic University | ACL 2025 | [Removal of Hallucination on Hallucination: Debate-Augmented RAG](https://arxiv.org/abs/2505.18581v1) | **Hallucination Mitigation**&**Retrieval-Augmented Generation**&**Multi-Agent Debate** |
| 25.05 | Central South University | ACL 2025 | [CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models](https://arxiv.org/abs/2505.19108v1) | **Cross-lingual Hallucination**&**Cross-modal Hallucination**&**Benchmark** |
| 25.05 | Hong Kong University of Science and Technology (Guangzhou) | arxiv | [Evaluation Hallucination in Multi-Round Incomplete Information Lateral-Driven Reasoning Tasks](https://arxiv.org/abs/2505.23843) | **Lateral Thinking**&**Multi-Round Reasoning**&**Evaluation Benchmark** |
| 25.05 | University of Illinois Urbana-Champaign | arxiv | [From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models](https://arxiv.org/abs/2505.24232v1) | **Hallucination**&**Jailbreak**&**Foundation Models** |
| 25.05 | National University of Singapore | arxiv | [The Hallucination Dilemma: Factuality-Aware Reinforcement Learning for Large Reasoning Models](https://arxiv.org/abs/2505.24630v1) | **Hallucination**&**Reinforcement Learning**&**Reasoning Model** |
| 25.05 | University of Arkansas | arxiv | [BIMA: Bijective Maximum Likelihood Learning Approach to Hallucination Prediction and Mitigation in Large Vision-Language Models](https://arxiv.org/abs/2505.24649v1) | **Vision-Language Model**&**Hallucination Mitigation**&**Normalizing Flow** |
| 25.06 | MBZUAI | arxiv | [HD-NDEs: Neural Differential Equations for Hallucination Detection in LLMs](https://arxiv.org/abs/2506.00088v1) | **Hallucination Detection**&**Neural Differential Equations**&**LLM Internal States** |
| 25.06 | Université Côte d’Azur | arxiv | [MMD-Flagger: Leveraging Maximum Mean Discrepancy to Detect Hallucinations](https://arxiv.org/abs/2506.01367v1) | **Hallucination Detection**&**Maximum Mean Discrepancy**&**Machine Translation** |
| 25.06 | Nanyang Technological University | arxiv | [Benford's Curse: Tracing Digit Bias to Numerical Hallucination in LLMs](https://arxiv.org/abs/2506.01734v1) | **Numerical Hallucination**&**Digit Bias**&**Benford’s Law** |
| 25.06 | University of Technology Sydney | arxiv | [Shaking to Reveal: Perturbation-Based Detection of LLM Hallucinations](https://arxiv.org/abs/2506.02696v1) | **Hallucination Detection**&**Perturbation**&**Intermediate Representation** |
| 25.06 | Fundación Centro Tecnolóxico de Telecomunicacións de Galicia | arxiv | [Ask a Local: Detecting Hallucinations With Specialized Model Divergence](https://arxiv.org/abs/2506.03357v1) | **Hallucination Detection**&**Specialized Model Divergence**&**Multilingual LLM** |
| 25.06 | Soochow University | arxiv | [Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization](https://arxiv.org/abs/2506.04039v1) | **Vision-Language Model**&**Hallucination Mitigation**&**Preference Optimization** |
| 25.06 | Tsinghua University | arxiv | [Joint Evaluation of Answer and Reasoning Consistency for Hallucination Detection in Large Reasoning Models](https://arxiv.org/abs/2506.04832v1) | **Hallucination Detection**&**Large Reasoning Model**&**Reasoning Consistency** |
| 25.06 | Peking University | arxiv | [When Thinking LLMs Lie: Unveiling the Strategic Deception in Representations of Reasoning Models](https://arxiv.org/abs/2506.04909v1) | **LLM Deception**&**Chain-of-Thought Reasoning**&**Representation Engineering** |
| 25.06 | Institute of Automation, Chinese Academy of Sciences | ACL 2025 | [Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis](https://arxiv.org/abs/2506.04142v1) | **Trustworthy Evaluation**&**Shortcut Neuron**&**Data Contamination** |
| 25.06 | Mohamed bin Zayed University of AI | arxiv | [DRAG: Distilling RAG for SLMs from LLMs to Transfer Knowledge and Mitigate Hallucination via Evidence and Graph-based Distillation](https://arxiv.org/abs/2506.01954v1) | **RAG Distillation**&**Small Language Models**&**Hallucination Mitigation** |


## 💻Presentations & Talks