
Scope of ML-based benchmarks in MLPerf #741

Open
rakshithgb-fujitsu opened this issue May 28, 2024 · 0 comments

Over the years, MLCommons has concentrated primarily on deep learning. However, many real-world applications still rely on traditional machine learning algorithms such as k-means clustering, Support Vector Machines (SVM), and logistic regression.

The existing public benchmarks for machine learning workloads are outdated and poorly maintained. MLCommons has the opportunity to standardize these workflows and incorporate them into the MLPerf benchmarks.

As a starting point, Dataperf already includes several machine learning workflows, such as the 2023 speech selection task. These workflows could be standardized and integrated into MLPerf. I am interested in your thoughts on this proposal.
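To make the proposal concrete, below is a minimal sketch of what a latency benchmark for one traditional workload (k-means) could look like. This is purely illustrative, not MLPerf's actual methodology; the `kmeans` and `benchmark` helpers are hypothetical and use only the Python standard library.

```python
import random
import time

def kmeans(points, k, iters=10, seed=0):
    """Plain Lloyd's k-means on 2-D points, standard library only.

    Illustrative reference implementation, not an optimized kernel.
    """
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid
            # by squared Euclidean distance.
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        for j, members in enumerate(clusters):
            if members:  # recompute centroid as the cluster mean
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids

def benchmark(workload, runs=3):
    """Report best-of-N wall-clock latency for a workload callable."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Synthetic dataset stands in for a standardized benchmark input.
rng = random.Random(42)
points = [(rng.random(), rng.random()) for _ in range(2000)]
latency = benchmark(lambda: kmeans(points, k=4))
print(f"k-means best-of-3 latency: {latency:.4f}s")
```

A real MLPerf-style benchmark would additionally pin the dataset, fix the quality target (e.g. clustering inertia), and specify the timing rules, but the shape of the harness would be similar.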
