This repository contains the implementation of DiCoRe [paper] (Divergent-Convergent Reasoning), a novel approach to zero-shot event detection that uses a divergent-then-convergent reasoning process to improve event extraction performance.
DiCoRe comprises three components:
- Dreamer: Conducts divergent, open-ended reasoning to surface all plausible event candidates, emphasizing recall.
- Grounder: Grounds and maps the Dreamer's open predictions onto the task constraints.
- Judge: Performs a lightweight verification step to ensure the final predictions are highly precise.
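To make the three-stage flow concrete, here is a minimal, self-contained Python sketch of the Dreamer → Grounder → Judge pipeline. All function names, the toy ontology, and the hard-coded candidate lists are hypothetical stand-ins; in the actual implementation (under /code/), each stage is an LLM call.

```python
# Hypothetical sketch of the DiCoRe pipeline; the real stages are LLM calls.
ONTOLOGY = {"Attack", "Transport", "Meet"}  # toy event-type ontology

def dreamer(sentence):
    """Divergent stage: propose all plausible (trigger, open-label) pairs,
    favoring recall over precision."""
    # Stand-in for open-ended LLM generation over the input sentence.
    return [("fired", "shooting"), ("fired", "dismissal"), ("met", "meeting")]

def grounder(open_preds):
    """Convergent stage: map free-form labels onto the task ontology
    (labels with no valid mapping become None)."""
    label_map = {"shooting": "Attack", "dismissal": None, "meeting": "Meet"}
    return [(trig, label_map.get(label)) for trig, label in open_preds]

def judge(grounded):
    """Verification stage: keep only predictions grounded to a valid type."""
    return [(t, l) for t, l in grounded if l in ONTOLOGY]

preds = judge(grounder(dreamer("Police fired at protesters they met downtown.")))
print(preds)  # [('fired', 'Attack'), ('met', 'Meet')]
```

The division of labor mirrors the design above: the Dreamer over-generates for recall, while the Grounder and Judge successively constrain the output for precision.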
This approach enables effective zero-shot event detection across multiple domains without requiring task-specific training data. Below we show an illustration.
- /code/: Contains the core implementation files
- /scripts/: Contains shell scripts for running experiments
- /data/: Contains datasets and processed data for evaluation:
- Acknowledge the original license/policy of the respective datasets
- Cite the original dataset papers appropriately
- Follow any usage restrictions specified by the original dataset creators
- Python 3.8+
- CUDA-compatible GPU (recommended)
- vLLM for efficient LLM inference
```bash
# Create conda environment from environment.yml
conda env create -f environment.yml

# Activate the environment
conda activate dicore
```

```bash
# Run DiCoRe on ACE05 dataset with Llama-3 model
./scripts/run_dicore.sh ace llama3_8b 2 1 0.4 "" 8

# Run baseline model on ACE05 dataset
./scripts/run_dicore_base.sh ace llama3_8b 2 1 0.4 "" 8

# Run DiCoRe with Chain-of-Thought reasoning
./scripts/run_dicore_cot.sh ace llama3_8b 2 1 0.4 "" 8
```

The scripts take the following positional arguments:

- dataset: Dataset name (ace, maven, casie, genia, fewevent, speed)
- llm: Language model (llama3_8b, gpt4, etc.)
- n_gpu: Number of GPUs to use (default: 2)
- n_runs: Number of runs (default: 1)
- temp: Temperature for generation (default: 0.6)
- suffix: Output suffix (default: "")
- batch_size: Batch size for inference (default: 8)
We utilize TextEE for the evaluation setup. We extract a single script and provide it here for faster evaluation.
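Evaluation for event detection is typically reported as trigger-level precision, recall, and F1, where a prediction counts as correct only if both the trigger and the event type match a gold annotation. As a reference for what that metric computes, here is a minimal sketch; the function name and tuple format are illustrative, not the evaluation script's actual interface:

```python
def trigger_f1(pred, gold):
    """Trigger-level scores: a prediction is a true positive only if the
    (doc, trigger, event-type) tuple exactly matches a gold annotation."""
    pred_set, gold_set = set(pred), set(gold)
    tp = len(pred_set & gold_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [("doc1", "fired", "Attack"), ("doc1", "met", "Meet")]
pred = [("doc1", "fired", "Attack"), ("doc1", "left", "Transport")]
print(trigger_f1(pred, gold))  # (0.5, 0.5, 0.5)
```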
```bash
python code/evaluate_llm_preds.py --pred_file <pred-file> --gold_file <gold-file>
```

If you use this code or find it helpful, please cite our paper:
```bibtex
@inproceedings{parekh2025dicore,
  title={DiCoRe: Enhancing Zero-shot Event Detection via Divergent-Convergent LLM Reasoning},
  author={Tanmay Parekh and Kartik Mehta and Ninareh Mehrabi and Kai-Wei Chang and Nanyun Peng},
  booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2025}
}
```

For questions or issues, please contact the lead author, Tanmay, at tparekh@g.ucla.edu.
We thank the creators of the datasets used in this work and the open-source community for the tools and frameworks that made this research possible.
