Labels: status: waiting-triage (this issue/PR has not yet been triaged by the team), type: bug (issues that need priority attention -- something isn't working)
Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
After some point, I am seeing this in my logs on a validator node and this corresponds with drastically higher CPU usage than expected:
```
9:54PM ERR Failed to load block meta blockstoreBase=25224989 blockstoreHeight=27031768 height=26440900 module=consensus peer="Peer{MConn{136.60.138.41:17156} 76ddf905e939dfed05b92242420e5ad2d7216b7f in}"
```
... this line repeats over 20 times
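For reference, a minimal sketch (assuming the node exposes the default CometBFT RPC at `localhost:26657`; the endpoint and field names below come from the standard `/status` response, everything else is illustrative) that checks whether a peer-requested height falls inside the node's advertised blockstore range:

```python
# Sketch: compare a requested block height against the blockstore range
# reported by the node. Assumes a CometBFT RPC endpoint at localhost:26657.
import json
from urllib.request import urlopen

def blockstore_range(rpc="http://localhost:26657"):
    """Return (earliest, latest) block heights from the /status endpoint."""
    with urlopen(f"{rpc}/status") as resp:
        info = json.load(resp)["result"]["sync_info"]
    return (int(info["earliest_block_height"]),
            int(info["latest_block_height"]))

def height_in_store(height, base, latest):
    """A request for a height outside [base, latest] cannot be served."""
    return base <= height <= latest

# Values taken from the log line above:
print(height_in_store(26440900, 25224989, 27031768))  # True
```

Notably, the requested height in the log (26440900) sits inside the advertised range [25224989, 27031768], which makes the repeated load failure look like more than simple pruning of old blocks.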
I see this set of log lines apparently on every block, always from this specific IP, though previously I saw the same with a different IP.
I don't see this on a public node, so I'm unsure what causes it.
Things I've tried so far:
- using another addrbook
- resyncing from snapshot
- removing all persistent_peers
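On top of the steps above, it can help to confirm whether the offending IP is still among the connected peers. A hedged sketch (again assuming the default CometBFT RPC at `localhost:26657`; `/net_info` and its `remote_ip` field are standard, the IP constant is just the one from the log):

```python
# Sketch: list remote IPs of currently connected peers via /net_info
# and check for the peer seen in the error log.
import json
from urllib.request import urlopen

def connected_ips(net_info):
    """Extract remote IPs from a CometBFT /net_info response dict."""
    return [p["remote_ip"] for p in net_info["result"]["peers"]]

def fetch_net_info(rpc="http://localhost:26657"):
    """Fetch /net_info from the node's RPC endpoint."""
    with urlopen(f"{rpc}/net_info") as resp:
        return json.load(resp)

# Example usage (requires a running node):
#   print("136.60.138.41" in connected_ips(fetch_net_info()))
```

If the peer is still connected, it is at least possible to rule disconnection in or out before resorting to address-book surgery.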
I don't recall such behaviour prior to the upgrade, so it's likely relevant.
Gaia Version
v25.1.0
How to reproduce?
Honestly, no idea; it just happens.