
[Bug] tuwunel CPU usage keeps climbing, causing the service to slow down #566


Description

@lcq225


HiClaw Bug Report - tuwunel high CPU usage

Bug Description

Description:
The tuwunel process's CPU usage climbs steadily from a normal 1-2% to 400-800%, causing Element Web to respond slowly or become entirely inaccessible.

Expected Behavior:
The tuwunel process's CPU usage should stay within a reasonable range (typically <10%).

Steps to Reproduce

  1. Start the hiclaw-manager container (v1.0.9)
  2. Use the Matrix service normally (with 3-5 client connections)
  3. Wait about 10-15 minutes
  4. Observe the tuwunel process's CPU usage climb steadily:
    • 0-5 minutes: 1-2% (normal)
    • 5-10 minutes: 50-100% (anomaly begins)
    • 10-15 minutes: 200-400% (severe)
    • 15+ minutes: up to 700-800% (service nearly unusable)
  5. Restart the container; usage returns to normal, but the problem recurs.
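The ramp-up in step 4 can be captured with a periodic sample; a minimal sketch, assuming the container is named hiclaw-manager (adjust to your deployment):

```shell
# Log a timestamped container-level CPU reading once a minute.
# CPUPerc is relative to one core, so values above 100% mean
# several cores are saturated.
while sleep 60; do
  printf '%s %s\n' "$(date +%H:%M)" \
    "$(docker stats --no-stream --format '{{.CPUPerc}}' hiclaw-manager)"
done | tee tuwunel-cpu.log
```

The resulting log makes the 10-15 minute onset window easy to confirm or rule out.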

Environment

  • HiClaw Version: v1.0.9
  • Docker Image: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/hiclaw-manager:latest (ID: 6d0dfb57f50c)
  • tuwunel Version: 1.5.0-48 (6c91aa1ddc)
  • Host OS: CentOS 7
  • Container OS: Ubuntu 22.04.5 LTS
  • Host Resources: 16GB RAM, 8 CPU cores

Current Workaround

Temporary mitigations:

  1. Cap the container's CPU: --cpus=4
  2. Restart the container every 2 hours via a scheduled job
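The two mitigations above can be applied on the host, e.g. (a sketch, again assuming the container name hiclaw-manager):

```shell
# Cap the already-running container at 4 CPU cores.
docker update --cpus=4 hiclaw-manager

# Crontab entry (add on the host with `crontab -e`):
# restart the container every 2 hours.
0 */2 * * * /usr/bin/docker restart hiclaw-manager
```

Note the crontab line is a config fragment, not a shell command; it belongs in the host's crontab.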

Logs

tuwunel status when the container starts:

root         8  1.2  0.6 653304 103032 ?        Sl   14:22   0:00 tuwunel

After running for 3 minutes:

root         8 67.5  0.8 696824 130544 ?        Sl   14:44   2:23 tuwunel

After running for 11 minutes:

root         8 129  1.0 706040 166216 ?        Sl   14:32  14:34 tuwunel
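For comparing samples like the ones above, the %CPU column (third field of `ps aux` output) can be pulled out with a one-line filter; a small sketch:

```shell
# Print the %CPU field of each saved `ps aux` line mentioning
# tuwunel; a strictly increasing series confirms the ramp-up.
extract_cpu() {
  awk '/tuwunel/ { print $3 }'
}

printf '%s\n' \
  'root  8  1.2  0.6 653304 103032 ?  Sl  14:22  0:00 tuwunel' \
  'root  8 67.5  0.8 696824 130544 ?  Sl  14:44  2:23 tuwunel' \
  'root  8 129   1.0 706040 166216 ?  Sl  14:32 14:34 tuwunel' |
  extract_cpu
```

Run against the three samples above, this prints 1.2, 67.5, and 129, one per line.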

Additional Context

  • Number of Matrix client connections: 3-5
  • tuwunel data directory size: 121MB
  • The problem exists before and after upgrading to v1.0.9
  • Restarting the service temporarily restores normal behavior, but the problem reappears within 10-15 minutes

Possible Cause

tuwunel 1.5.0-48 may be affected by:

  1. A memory leak degrading performance
  2. An infinite loop triggered by certain requests
  3. RocksDB compaction overhead
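A thread-level view can help separate these hypotheses: busy RocksDB background threads point to compaction, while busy async worker threads point to a request-handling loop (tuwunel is a Rust server, so those are presumably its async runtime's workers). A sketch, with the PID 8 taken from the ps output above:

```shell
# One batch snapshot of per-thread CPU inside the container; the
# COMMAND column shows thread names, which distinguish RocksDB
# background threads from request worker threads.
docker exec hiclaw-manager top -b -H -n 1 -p 8 | head -n 40

# Alternative: per-thread listing via ps, busiest threads first.
docker exec hiclaw-manager ps -L -p 8 -o tid,pcpu,comm --sort=-pcpu
```

Attaching one such snapshot taken during the 700-800% phase would make this report much easier to act on.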

Related Issues


