
How much is too big? #1396

Open
EVAUTOAI opened this issue Dec 16, 2024 · 2 comments

Comments

@EVAUTOAI

According to https://docs.evidentlyai.com/support/f.a.q, it's recommended to use sampling for large datasets.
Can you please help me understand what counts as a "large" dataset that would require sampling?

Roughly how many (rows * columns) would start to cause issues?

@elenasamuylova
Collaborator

Hi @EVAUTOAI, there is no fixed answer here, since:

  1. Evidently can evaluate hundreds of different metrics, and each has its own computational footprint (e.g., a metric like "text content drift" trains a whole machine learning model on your data, while a simpler metric just computes the mean value of a column). You can also combine multiple metrics in the same report.
  2. The computation happens in memory, so the practical limit depends on your infrastructure.

So the simple answer is: if the computation takes longer than you want, or fails to complete at all, you may consider sampling. Sampling also often makes sense for metrics like data distribution drift.
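
For illustration, here is a minimal sketch of capping dataset size before running a report. It assumes the `Report` / `DataDriftPreset` API from an Evidently 0.4.x-style release and pandas input; the row cap, random seed, and file paths are placeholder values, not recommendations:

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

MAX_ROWS = 100_000  # placeholder cap; tune to your memory budget

def maybe_sample(df: pd.DataFrame, max_rows: int = MAX_ROWS) -> pd.DataFrame:
    """Uniformly sample the frame down to max_rows if it is larger."""
    if len(df) <= max_rows:
        return df
    return df.sample(n=max_rows, random_state=42)

# Sample both datasets so distribution metrics compare like with like.
reference = maybe_sample(pd.read_parquet("reference.parquet"))
current = maybe_sample(pd.read_parquet("current.parquet"))

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")
```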

@EVAUTOAI
Author

Thank you @elenasamuylova!

Are there any recommendations, examples, or rough mappings between memory size and dataset size?
