Batch SQL writes for packet stats #5874
Conversation
export ENVIRONMENT=${ENVIRONMENT:-"mainnet"}

-probe_git_ref="nym-vpn-core-v1.10.0"
+probe_git_ref="nym-vpn-core-v1.4.0"
downgrade?
.inspect(|_| {
    let elapsed = now_utc() - started_at;
    info!(
        "📊 ☑️ Packet stats successfully committed to DB (took {}s)",
nit: use humantime::format_duration, which is more readable, especially for smaller values. I think it will be better than, say, 0.1234567s :)
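For illustration, a minimal self-contained sketch of that suggestion (not the PR's code: the surrounding function and println! in place of the tracing macro are stand-ins):

```rust
// Measure elapsed time as a std::time::Duration and let humantime render it,
// so a fast commit prints something like "12ms 345us" instead of "0.1234567s".
use std::time::Instant;

fn main() {
    let started_at = Instant::now();

    // ... the batched SQL write would happen here ...

    println!(
        "📊 ☑️ Packet stats successfully committed to DB (took {})",
        humantime::format_duration(started_at.elapsed())
    );
}
```

Note that format_duration takes a std::time::Duration, so if now_utc() - started_at yields a different duration type it would need converting first.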
tokio::spawn(async move {
    scraper.start().await;
});
let scraper = node_scraper::PacketScraper::new(
The nittiest of nits, but can you rename this one? At first glance, before finishing my morning coffee, I got quite confused by having two things defined as scraper.
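Purely as an illustration of the rename, a compact sketch with a stub PacketScraper so it compiles on its own (the binding name packet_scraper and the stub type are assumptions, not the PR's actual code):

```rust
// Stand-in type so the snippet compiles on its own; the real PacketScraper
// lives in node_scraper and takes constructor arguments not shown here.
struct PacketScraper;

impl PacketScraper {
    fn new() -> Self {
        PacketScraper
    }

    async fn start(self) {
        // the packet-stats scraping loop would live here
    }
}

#[tokio::main]
async fn main() {
    // A distinct binding avoids having two different things named `scraper`
    // in the same function.
    let packet_scraper = PacketScraper::new();

    // Spawn in the background, mirroring the original fire-and-forget pattern.
    let _handle = tokio::spawn(async move {
        packet_scraper.start().await;
    });
}
```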
nym-node-status-api/nym-node-status-api/src/metrics_scraper/mod.rs (outdated; comment resolved)
let results_clone = Arc::clone(&results);

task_set.spawn(async move {
    match scrape_packet_stats(&node).await {
Should we add some sort of timeout here in case a task gets stuck trying to scrape a particular node?
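One way to do that would be to wrap each per-node scrape in tokio::time::timeout, roughly as sketched below; Node, NodeStats, scrape_packet_stats and the 30-second budget are stand-ins rather than the PR's actual types and values:

```rust
use std::time::Duration;
use tokio::task::JoinSet;
use tokio::time::timeout;

struct Node {
    id: u32,
}

struct NodeStats;

async fn scrape_packet_stats(_node: &Node) -> Result<NodeStats, String> {
    // the real implementation would query the node's metrics endpoint
    Ok(NodeStats)
}

#[tokio::main]
async fn main() {
    // Arbitrary illustration; the right budget depends on typical node latency.
    const PER_NODE_SCRAPE_TIMEOUT: Duration = Duration::from_secs(30);

    let nodes = vec![Node { id: 1 }, Node { id: 2 }];
    let mut task_set = JoinSet::new();

    for node in nodes {
        task_set.spawn(async move {
            // Bound how long one node may take so a single stuck scrape
            // cannot stall the whole batch.
            match timeout(PER_NODE_SCRAPE_TIMEOUT, scrape_packet_stats(&node)).await {
                Ok(Ok(_stats)) => println!("node {} scraped", node.id),
                Ok(Err(err)) => println!("node {} failed: {err}", node.id),
                Err(_elapsed) => println!("node {} timed out", node.id),
            }
        });
    }

    while task_set.join_next().await.is_some() {}
}
```

When the timeout fires, the inner future is dropped, so the stuck request is cancelled rather than left running in the background.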
    } else {
        success_count += 1;
    }
}
Can't we set the TASK_ID_COUNTER to 0 explicitly here instead (since our JoinSet is empty), and also then not make it static? (Because at that point it would be fully local to this scope.)
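As a sketch of that suggestion, the counter could live inside the scraping function rather than in a static, so it starts from zero on every pass and nothing needs resetting once the JoinSet is drained (names and structure here are illustrative, not the PR's):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::task::JoinSet;

async fn scrape_all(node_count: u64) -> u64 {
    // Local to this pass: dropped when the function returns, so there is no
    // global state to reset between runs.
    let task_id_counter = Arc::new(AtomicU64::new(0));
    let mut task_set = JoinSet::new();

    for _ in 0..node_count {
        let counter = Arc::clone(&task_id_counter);
        task_set.spawn(async move {
            // Tag each spawned task with a per-pass id for logging.
            let task_id = counter.fetch_add(1, Ordering::Relaxed);
            // ... scrape one node here, logging with task_id ...
            task_id
        });
    }

    let mut success_count = 0;
    while let Some(result) = task_set.join_next().await {
        if result.is_ok() {
            success_count += 1;
        }
    }
    success_count
}

#[tokio::main]
async fn main() {
    let successes = scrape_all(3).await;
    println!("{successes} tasks finished");
}
```

If the task id has to stay unique across passes rather than within one pass, the static would still be needed, so whether this works depends on how the counter is actually used.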
You'd probably also want to rebase against develop to deal with the 1.88 clippy lints (and potentially add extra fixes specific to your branch).
Force-pushed from 42c03e0 to c62ab3a.