Official code repository for our CSCW 2025 paper:

**EchoMind: Supporting Real-time Complex Problem Discussions through Human-AI Collaborative Facilitation**

The system generates real-time issue maps for ongoing conversations to facilitate complex problem discussions. The code in this repository implements additional features beyond those described in the paper:
- User & discussion management: multiple remote users can join the same discussion session.
- Post-discussion review with transcript, issue map, and recorded audio (if configured).
- Internationalization (i18n) support.
The project is a full-stack web application with a Python backend and a React frontend. It is structured as follows:

- `backend`: A FastAPI server that handles API requests, manages real-time communication with SocketIO, and interacts with the database and AI models. It also serves the frontend static files if configured.
- `frontend`: A React single-page application (SPA) that provides the user interface.
To run the system, you will need:

- OpenAI API key(s), or other compatible LLM API keys.
- FunASR service URI for real-time speech recognition.
- FunASR is open source, and its runtime service is free to deploy on your own server. Follow the FunASR runtime guide, and check out the improved startup scripts.
- You can also use other ASR services by implementing a compatible asynchronous client in the backend.
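As a rough illustration of what such a client could look like, here is a minimal async sketch. All names here (`ASRClient`, `send_audio`, `transcripts`) are illustrative assumptions, not the repository's real interface — check the backend's FunASR client code for the actual contract:

```python
# Hypothetical sketch of an async ASR client interface; the names below are
# illustrative and do NOT come from this repository.
import asyncio
from abc import ABC, abstractmethod
from typing import AsyncIterator


class ASRClient(ABC):
    """Minimal shape an alternative async ASR client might follow."""

    @abstractmethod
    async def connect(self) -> None:
        """Open the connection to the ASR service."""

    @abstractmethod
    async def send_audio(self, chunk: bytes) -> None:
        """Stream one chunk of audio to the service."""

    @abstractmethod
    def transcripts(self) -> AsyncIterator[str]:
        """Yield recognized text segments as they arrive."""


class DummyASRClient(ASRClient):
    """Toy stand-in that 'recognizes' each chunk as its byte length."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue[str] = asyncio.Queue()

    async def connect(self) -> None:
        pass  # a real client would open a WebSocket/stream to the ASR service here

    async def send_audio(self, chunk: bytes) -> None:
        # A real client would forward the audio; this toy emits a fake transcript.
        await self._queue.put(f"<{len(chunk)} bytes>")

    async def transcripts(self) -> AsyncIterator[str]:
        while True:
            yield await self._queue.get()
```

A real implementation would wrap a streaming connection to the ASR service and push partial/final recognition results into the transcript iterator.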
Change to the backend directory (`cd backend`):

1. Install backend dependencies.

   (a) We use uv to manage the Python virtual environment and dependencies.

   Create a virtual environment `.venv` and install the dependencies:

   ```shell
   uv sync
   ```

   Then activate the virtual environment using `source .venv/bin/activate` (macOS/Linux) or `.venv\Scripts\activate` (Windows).

   (b) You can also use `pip` if you prefer (activate your virtual environment first):

   ```shell
   pip install -e .[dev]
   ```

2. Set up configuration.

   Copy the template configuration file to `config.yaml`:

   ```shell
   cp config.template.yaml config.yaml
   ```

   Then edit `config.yaml` to at least set the OpenAI API key and the FunASR service URI.
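For orientation, the values to fill in might look roughly like the sketch below. This is illustrative only — the authoritative key names and structure are in `config.template.yaml`, and only the OpenAI key, the FunASR URI, and `spa_path` are mentioned in this README; the exact nesting here is an assumption:

```yaml
# Illustrative sketch — consult config.template.yaml for the real schema.
openai:
  api_key: "sk-..."                  # your OpenAI (or compatible) API key
funasr:
  uri: "wss://your-asr-host:10095"   # FunASR runtime service URI
spa_path: "../frontend/dist"         # built frontend files served by the backend
```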
Change to the frontend directory (`cd frontend`):

1. Install frontend dependencies:

   ```shell
   # install from package-lock.json
   npm ci
   ```

2. Build the frontend static files for production:

   ```shell
   npm run build
   ```

By default, the backend server serves the frontend static files (`spa_path` configured in `config.yaml`).
Run the backend server in the background (`cd backend`):

```shell
# start server in the background
dmon start
# check server status
dmon status
# stop the background server
dmon stop
```

Or run it in the foreground:

```shell
dmon exec
```

Then open your web browser and navigate to http://localhost:8000 to access the application.

> **Note**
> dmon is a lightweight, cross-platform daemon manager that runs commands as background processes, without Docker. It is already included in the backend dependencies. Feel free to use it and give it a star ⭐️!
To run the frontend separately for development (`cd frontend`):

1. Copy the template environment file to `.env.development`:

   ```shell
   cp .env.development.template .env.development
   ```

   It sets the backend API URL (`VITE_API_BASE_URL=http://localhost:8000`) for development mode.

2. Start the frontend development server with hot module replacement (HMR):

   ```shell
   npm run dev
   ```

Then open your web browser and navigate to http://localhost:5173 to access the application.
The API client code in `frontend/src/client` is generated from the OpenAPI specification provided by the FastAPI backend. To regenerate the API client, run the following command in the backend directory:

```shell
bash ./scripts/gen_client.sh
```

SocketIO models in `frontend/src/lib/models.ts` are generated from the Pydantic models defined in the backend. To regenerate them:

```shell
bash ./scripts/gen_models.sh
```

Prompt templates are stored as `.hprompt` files in the `backend/app/core/prompts` directory. The human-friendly mark-up format is designed and consumed by HandyLLM, which is already included in the backend dependencies. You can test and run prompts directly without running the whole application:

```shell
handyllm hprompt <your_prompt>.hprompt
```

For editor support with syntax highlighting, use the VSCode extension or Sublime Text package.
Note
HandyLLM is for rapid prototyping of LLM applications. Feel free to give it a star ⭐️!
If you find our work useful, or if you use the code or prompts from this repository, please cite our paper:
Weihao Chen, Chun Yu, Yukun Wang, Meizhu Chen, Yipeng Xu, and Yuanchun Shi. 2025. EchoMind: Supporting Real-time Complex Problem Discussions through Human-AI Collaborative Facilitation. Proc. ACM Hum.-Comput. Interact. 9, 7, Article CSCW406 (November 2025), 38 pages. https://doi.org/10.1145/3757587
```bibtex
@article{chen_echomind_2025,
  author = {Chen, Weihao and Yu, Chun and Wang, Yukun and Chen, Meizhu and Xu, Yipeng and Shi, Yuanchun},
  title = {EchoMind: Supporting Real-time Complex Problem Discussions through Human-AI Collaborative Facilitation},
  year = {2025},
  issue_date = {November 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {9},
  number = {7},
  url = {https://doi.org/10.1145/3757587},
  doi = {10.1145/3757587},
  abstract = {Teams often engage in group discussions to leverage collective intelligence when solving complex problems. However, in real-time discussions, such as face-to-face meetings, participants frequently struggle with managing diverse perspectives and structuring content, which can lead to unproductive outcomes like forgetfulness and off-topic conversations. Through a formative study, we explore a human-AI collaborative facilitation approach, where AI assists in establishing a shared knowledge framework to provide a guiding foundation. We present EchoMind, a system that visualizes discussion knowledge through real-time issue mapping. EchoMind empowers participants to maintain focus on specific issues, review key ideas or thoughts, and collaboratively expand the discussion. The system leverages large language models (LLMs) to dynamically organize dialogues into nodes based on the current context recorded on the map. Our user study with four teams (N=16) reveals that EchoMind helps clarify discussion objectives, trace knowledge pathways, and enhance overall productivity. We also discuss the design implications for human-AI collaborative facilitation and the potential of shared knowledge visualization to transform group dynamics in future collaborations.},
  journal = {Proc. ACM Hum.-Comput. Interact.},
  month = oct,
  articleno = {CSCW406},
  numpages = {38},
  keywords = {complex problems, group discussions, human-AI collaboration, issue mapping, large language models}
}
```