
MPC node panics on startup in TEE #1224

@netrome

Description

Background

After I reconfigured one of our TDX servers in https://github.com/near/mpc-private/issues/27, I deployed the latest MPC node with dstack and noticed the following error:

2025-10-03T13:29:16.401476357Z Starting mpc node...
2025-10-03T13:29:16.409440888Z 
2025-10-03T13:29:16.409452142Z thread 'main' panicked at /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/net/unix/stream.rs:894:18:
2025-10-03T13:29:16.409457347Z there is no reactor running, must be called from the context of a Tokio 1.x runtime
2025-10-03T13:29:16.409461653Z stack backtrace:
2025-10-03T13:29:16.409476073Z 2025-10-03T13:29:16.407018Z DEBUG mpc_node::config: p2p and near account secret key not found. Generating...
2025-10-03T13:29:16.409480856Z 2025-10-03T13:29:16.408162Z DEBUG mpc_node::config: p2p and near account key generated in /data/secrets.json
2025-10-03T13:29:16.414723793Z    0:     0x55f7ebceb010 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h6d42cc84fc840290
2025-10-03T13:29:16.414743454Z    1:     0x55f7ebd15c43 - core::fmt::write::h5af61a909e3ec64d
2025-10-03T13:29:16.414748276Z    2:     0x55f7ebce64b3 - std::io::Write::write_fmt::h5a7b54aa6e4a315d
2025-10-03T13:29:16.414759191Z    3:     0x55f7ebceae62 - std::sys::backtrace::BacktraceLock::print::h555579e7396c26ac
2025-10-03T13:29:16.414763525Z    4:     0x55f7ebcebd6a - std::panicking::default_hook::{{closure}}::h9128866118196224
2025-10-03T13:29:16.414767596Z    5:     0x55f7ebcebbd5 - std::panicking::default_hook::h52e9e7314e0255f6
2025-10-03T13:29:16.414771426Z    6:     0x55f7ebcec702 - std::panicking::rust_panic_with_hook::h541791bcc774ef34
2025-10-03T13:29:16.414778355Z    7:     0x55f7ebcec4aa - std::panicking::begin_panic_handler::{{closure}}::h6479a2f0137c7d19
2025-10-03T13:29:16.414782284Z    8:     0x55f7ebceb529 - std::sys::backtrace::__rust_end_short_backtrace::ha04e7c0fc61ded91
2025-10-03T13:29:16.414786436Z    9:     0x55f7ebcec13d - rust_begin_unwind
2025-10-03T13:29:16.414792364Z   10:     0x55f7e933ca70 - core::panicking::panic_fmt::h5764ee7030b7a73d
2025-10-03T13:29:16.414796104Z   11:     0x55f7e932e3a1 - tokio::runtime::scheduler::Handle::current::panic_cold_display::hdd0abbb7943e8dd1
2025-10-03T13:29:16.416426156Z /app/start.sh: line 119:    13 Aborted                 (core dumped) /app/mpc-node start $tee_authority

I believe this is because we call `tee_authority.generate_attestation(report_data).await?;` outside of a Tokio runtime. That happens to work for local attestations, but producing a live attestation evidently requires a running Tokio runtime — the panic originates in `tokio::net::UnixStream`, which needs the reactor.

Naturally, we should fix this so that the call runs within a Tokio runtime. It would also be worth thinking about how to reduce the risk of similar problems recurring, although we've greatly reduced the surface for these bugs since removing the tee feature flag.

User Story

As a node operator, I want functioning software.

Acceptance Criteria

The MPC node doesn't panic on startup in a TEE.

Metadata

Labels

bug (Something isn't working)
