Merge pull request #2107 from recommenders-team/staging
Staging to main: Fix bug with Sklearn and info for contributors
miguelgfierro authored and actions-user committed Jun 5, 2024
1 parent f6dbbb6 commit f4894b5
Showing 3 changed files with 62 additions and 17 deletions.
71 changes: 58 additions & 13 deletions CONTRIBUTING.md
@@ -7,14 +7,18 @@ Licensed under the MIT License.

Contributions are welcome! Here are a few things to know:

- [Contribution Guidelines](#contribution-guidelines)
- [Steps to Contributing](#steps-to-contributing)
- [Coding Guidelines](#coding-guidelines)
- [Microsoft Contributor License Agreement](#microsoft-contributor-license-agreement)
- [Code of Conduct](#code-of-conduct)
- [Do not point fingers](#do-not-point-fingers)
- [Provide code feedback based on evidence](#provide-code-feedback-based-on-evidence)
- [Ask questions do not give answers](#ask-questions-do-not-give-answers)
- [Steps to Contributing](#steps-to-contributing)
- [Ideas for Contributions](#ideas-for-contributions)
- [A first contribution](#a-first-contribution)
- [Datasets](#datasets)
- [Models](#models)
- [Metrics](#metrics)
- [General tips](#general-tips)
- [Coding Guidelines](#coding-guidelines)
- [Code of Conduct](#code-of-conduct)
- [Do not point fingers](#do-not-point-fingers)
- [Provide code feedback based on evidence](#provide-code-feedback-based-on-evidence)
- [Ask questions do not give answers](#ask-questions-do-not-give-answers)

## Steps to Contributing

@@ -33,15 +37,56 @@ Here are the basic steps to get started with your first contribution. Please rea

See the wiki for more details about our [merging strategy](https://github.com/microsoft/recommenders/wiki/Strategy-to-merge-the-code-to-main-branch).

## Ideas for Contributions

### A first contribution

For people who are new to open source or to Recommenders, a good way to start is by contributing to the documentation. You can help with any of the README files or with the notebooks.

If you are a more advanced user, consider fixing one of the bugs listed in the issues.

### Datasets

To contribute new datasets, please consider the following:

* Minimize dependencies; it is better to use the `requests` library than a custom library (see the sketch below).
* Make sure that the dataset is publicly available and that the license allows for redistribution.
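
As a rough illustration, here is a minimal sketch of a download helper that depends only on `requests`; the URL and destination path are placeholders, not an existing dataset in the repo.

```python
# Minimal sketch of a dataset downloader that depends only on `requests`.
# The URL and destination path below are placeholders for illustration.
import os

import requests


def download_dataset(url, dest_path):
    """Stream a remote file to dest_path without loading it fully into memory."""
    dirname = os.path.dirname(dest_path)
    if dirname:
        os.makedirs(dirname, exist_ok=True)
    with requests.get(url, stream=True, timeout=60) as response:
        response.raise_for_status()  # fail early on HTTP errors
        with open(dest_path, "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return dest_path


# Hypothetical usage:
# download_dataset("https://example.com/ratings.csv", "data/ratings.csv")
```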

### Models

To contribute new models, please consider the following:

* Please don't add models that are already implemented in the repo. An exception to this rule is if you are adding a more efficient implementation or migrating a model from TensorFlow to PyTorch.
* Prioritize the minimal code necessary instead of adding a full library. If you add code from another repository, please make sure to follow the license and give proper credit.
* All models should be accompanied by a notebook that shows how to use the model and how to train it. The notebook should be in the [examples](examples) folder.
* The model should be tested with unit tests, and the notebooks should be tested with functional tests (a sketch of such a unit test follows this list).
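
For example, a unit test for a new model might look roughly like the sketch below; `PopularityRecommender` is a toy stub defined inline for illustration, not a class in the repo.

```python
# Minimal sketch of the kind of unit test expected for a new model.
# `PopularityRecommender` is a toy stub defined inline for illustration;
# it is not an existing class in the repo.
import pandas as pd


class PopularityRecommender:
    """Toy model: recommends the globally most-rated items to every user."""

    def fit(self, interactions):
        self.top_items_ = (
            interactions.groupby("itemID").size().sort_values(ascending=False).index
        )

    def recommend_k_items(self, users, top_k=10):
        rows = [(user, item) for user in users for item in self.top_items_[:top_k]]
        return pd.DataFrame(rows, columns=["userID", "itemID"])


def test_recommend_k_items_returns_k_per_user():
    train = pd.DataFrame(
        {"userID": [1, 1, 2, 2], "itemID": [10, 20, 10, 30], "rating": [5, 3, 4, 2]}
    )
    model = PopularityRecommender()
    model.fit(train)
    top_k = model.recommend_k_items(users=[1, 2], top_k=2)
    # Each user should get exactly top_k recommendations.
    assert top_k.groupby("userID").size().eq(2).all()
```

The accompanying notebook in the [examples](examples) folder would then be covered by a functional test that runs it end to end.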

### Metrics

To contribute new metrics, please consider the following:

* A good way to contribute to metrics is by optimizing the code of the existing ones.
* If you are adding a new metric, please consider adding not only a CPU version, but also a PySpark version.
* When adding tests, make sure you check the limit cases. For example, if you add an error metric, check that the error between two identical datasets is zero (see the sketch after this list).
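
As an illustration, the sketch below shows a simple CPU metric together with that kind of limit-case test; `rmse` here is a standalone example, not the evaluation utility already shipped in the repo.

```python
# Minimal sketch of a new CPU metric and a limit-case test for it.
# `rmse` is a standalone example, not the evaluation utility in the repo.
import numpy as np


def rmse(y_true, y_pred):
    """Root mean squared error between two arrays of ratings."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def test_rmse_is_zero_for_identical_data():
    ratings = np.array([4.0, 3.5, 5.0, 2.0])
    # Limit case: the error between two identical datasets must be zero.
    assert rmse(ratings, ratings) == 0.0
```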

### General tips

* Prioritize PyTorch over TensorFlow.
* Minimize dependencies. Around 80% of the issues in the repo are related to dependencies.
* Avoid adding code with GPL and other copyleft licenses. Prioritize MIT, Apache, and other permissive licenses.
* Add the copyright statement at the beginning of the file: `Copyright (c) Recommenders contributors. Licensed under the MIT License.` (see the example below).
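
For a Python module, one way to include it is as the first lines of the file; the docstring below is just a placeholder.

```python
# Copyright (c) Recommenders contributors.
# Licensed under the MIT License.

"""Placeholder module docstring."""
```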

## Coding Guidelines

We strive to maintain high quality code to make the utilities in the repository easy to understand, use, and extend. We also work hard to maintain a friendly and constructive environment. We've found that having clear expectations on the development process and consistent style helps to ensure everyone can contribute and collaborate effectively.

Please review the [coding guidelines](https://github.com/recommenders-team/recommenders/wiki/Coding-Guidelines) wiki page to see more details about the expectations for development approach and style.
Please review the [Coding Guidelines](https://github.com/recommenders-team/recommenders/wiki/Coding-Guidelines) wiki page to see more details about the expectations for development approach and style.

## Code of Conduct

Apart from the official [Code of Conduct](CODE_OF_CONDUCT.md), the Recommenders team adopts the following behaviors to ensure a great working environment:

#### Do not point fingers
### Do not point fingers
Let’s be constructive.

<details>
@@ -51,18 +96,18 @@ Let’s be constructive.

</details>

#### Provide code feedback based on evidence
### Provide code feedback based on evidence

When reviewing code, try to support your ideas with evidence (papers, library documentation, Stack Overflow, etc.) rather than with your personal preferences.

<details>
<summary><em>Click here to see some examples</em></summary>

"When reviewing this code, I saw that the Python implementation the metrics are based on classes, however, [scikit-learn](https://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics) and [tensorflow](https://www.tensorflow.org/api_docs/python/tf/metrics) use functions. We should follow the standard in the industry."
"When reviewing this code, I saw that the Python implementation of the metrics are based on classes, however, [scikit-learn](https://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics) use functions. We should follow the standard in the industry."

</details>

#### Ask questions do not give answers
### Ask questions do not give answers
Try to be empathic.

<details>
6 changes: 3 additions & 3 deletions examples/00_quick_start/lightgbm_tinycriteo.ipynb
@@ -717,7 +717,7 @@
"source": [
"test_preds = lgb_model.predict(test_x)\n",
"auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"res_basic = {\"auc\": auc, \"logloss\": logloss}\n",
"print(res_basic)\n"
]
@@ -904,7 +904,7 @@
],
"source": [
"auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"res_optim = {\"auc\": auc, \"logloss\": logloss}\n",
"\n",
"print(res_optim)"
@@ -959,7 +959,7 @@
],
"source": [
"auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)\n",
"logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))\n",
"\n",
"print({\"auc\": auc, \"logloss\": logloss})"
]
@@ -610,7 +610,7 @@
"source": [
"## 4 Discussion\n",
"\n",
"BiVAE is a new variational autoencoder tailored for dyadic data, where observations consist of measurements associated with two sets of objects, e.g., users, items and corresponding ratings. The model is symmetric, which makes it easier to extend axiliary data from both sides of users and items. In addition to preference data, the model can be applied to other types of dyadic data such as documentword matrices, and other tasks such as co-clustering. \n",
"BiVAE is a new variational autoencoder tailored for dyadic data, where observations consist of measurements associated with two sets of objects, e.g., users, items and corresponding ratings. The model is symmetric, which makes it easier to extend auxiliary data from both sides of users and items. In addition to preference data, the model can be applied to other types of dyadic data such as document-word matrices, and other tasks such as co-clustering. \n",
"\n",
"In the paper, there is also a discussion on Constrained Adaptive Priors (CAP), a proposed method to build informative priors to mitigate the well-known posterior collapse problem. We have left out that part purposely, not to distract the audiences. Nevertheless, it is very interesting and worth taking a look. \n",
"\n",
