Merge pull request #927 from microsoft/staging
Staging to master
miguelgfierro authored Sep 17, 2019
2 parents cadf54f + 56f80b6 commit 444e6c4
Showing 11 changed files with 196 additions and 142 deletions.
53 changes: 20 additions & 33 deletions README.md
@@ -1,19 +1,21 @@
# Recommenders

[![Documentation Status](https://readthedocs.org/projects/microsoft-recommenders/badge/?version=latest)](https://microsoft-recommenders.readthedocs.io/en/latest/?badge=latest)

This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples detail our learnings on five key tasks:
- [Prepare Data](notebooks/01_prepare_data/README.md): Preparing and loading data for each recommender algorithm
- [Model](notebooks/02_model/README.md): Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
- [Evaluate](notebooks/03_evaluate/README.md): Evaluating algorithms with offline metrics
- [Prepare Data](notebooks/01_prepare_data): Preparing and loading data for each recommender algorithm
- [Model](notebooks/02_model): Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
- [Evaluate](notebooks/03_evaluate): Evaluating algorithms with offline metrics
- [Model Select and Optimize](notebooks/04_model_select_and_optimize): Tuning and optimizing hyperparameters for recommender models
- [Operationalize](notebooks/05_operationalize/README.md): Operationalizing models in a production environment on Azure
- [Operationalize](notebooks/05_operationalize): Operationalizing models in a production environment on Azure

Several utilities are provided in [reco_utils](reco_utils) to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications. See the [reco_utils documentation](https://readthedocs.org/projects/microsoft-recommenders/).
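As an illustration of how these utilities fit together, here is a minimal sketch (assuming the `reco` conda environment described below is active; the dataset size and split ratio are arbitrary examples):

```
# Minimal sketch: load a dataset and split it with reco_utils.
# The dataset size and split ratio are arbitrary examples.
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split

# Download the MovieLens 100k ratings as a pandas DataFrame.
df = movielens.load_pandas_df(size="100k")

# Randomly split into train and test sets.
train, test = python_random_split(df, ratio=0.75)
print(train.shape, test.shape)
```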


For a more detailed overview of the repository, please see the documents at the [wiki page](https://github.com/microsoft/recommenders/wiki/Documents-and-Presentations).

## Getting Started
Please see the [setup guide](SETUP.md) for more details on setting up your machine locally, on Spark, or on [Azure Databricks](SETUP.md#setup-guide-for-azure-databricks).
Please see the [setup guide](SETUP.md) for more details on setting up your machine locally, on a [data science virtual machine (DSVM)](https://azure.microsoft.com/en-gb/services/virtual-machines/data-science-virtual-machines/), or on [Azure Databricks](SETUP.md#setup-guide-for-azure-databricks).

To set up on your local machine:
1. Install Anaconda with Python >= 3.6. [Miniconda](https://conda.io/miniconda.html) is a quick way to get started.
@@ -35,27 +37,11 @@ To setup on your local machine:
```
5. Start the Jupyter notebook server
```
cd notebooks
jupyter notebook
```
6. Run the [SAR Python CPU MovieLens](notebooks/00_quick_start/sar_movielens.ipynb) notebook under the 00_quick_start folder. Make sure to change the kernel to "Python (reco)".

**NOTE** - The [Alternating Least Squares (ALS)](notebooks/00_quick_start/als_movielens.ipynb) notebooks require a PySpark environment to run. Please follow the steps in the [setup guide](SETUP.md#dependencies-setup) to run these notebooks in a PySpark environment.

## Install this repository via PIP
A [setup.py](reco_utils/setup.py) file is provided in order to simplify the installation of this utilities in this repo from the main directory.
This still requires the conda environment to be installed as described above. Once the necessary dependencies are installed you can use the following command to install reco_utils as it's own python package.

pip install -e reco_utils

It is also possible to install directly from Github. Or from a specific branch as well.

pip install -e git+https://github.com/microsoft/recommenders/#egg=pkg\&subdirectory=reco_utils
pip install -e git+https://github.com/microsoft/recommenders/@staging#egg=pkg\&subdirectory=reco_utils


**NOTE** - The pip installation does not install any of the necessary package dependencies, it is expected that conda will be used as shown above to setup the environment for the utilities being used.
6. Run the [SAR Python CPU MovieLens](notebooks/00_quick_start/sar_movielens.ipynb) notebook under the `00_quick_start` folder. Make sure to change the kernel to "Python (reco)".

**NOTE** - The [Alternating Least Squares (ALS)](notebooks/00_quick_start/als_movielens.ipynb) notebooks require a PySpark environment to run. Please follow the steps in the [setup guide](SETUP.md#dependencies-setup) to run these notebooks in a PySpark environment. For the deep learning algorithms, it is recommended to use a GPU machine.
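Before running the ALS notebooks, a quick way to confirm that a PySpark environment is available is a sketch like the following (it assumes `start_or_get_spark` from `reco_utils` is importable; the memory setting is an arbitrary example):

```
# Sketch: verify that a local Spark session can be created.
# start_or_get_spark is assumed importable from reco_utils;
# the memory value is an arbitrary example.
from reco_utils.common.spark_utils import start_or_get_spark

spark = start_or_get_spark(app_name="ALS sanity check", memory="8g")
print("Spark version:", spark.version)
```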

## Algorithms

@@ -90,31 +76,32 @@ We provide a [benchmark notebook](benchmarks/movielens.ipynb) to illustrate how
| [NCF](notebooks/02_model/ncf_deep_dive.ipynb) | 0.107720 | 0.396118 | 0.347296 | 0.180775 | N/A | N/A | N/A | N/A |
| [FastAI](notebooks/00_quick_start/fastai_movielens.ipynb) | 0.025503 | 0.147866 | 0.130329 | 0.053824 | 0.943084 | 0.744337 | 0.285308 | 0.287671 |


## Contributing
This project welcomes contributions and suggestions. Before contributing, please see our [contribution guidelines](CONTRIBUTING.md).

## Build Status

These tests are the nightly builds, which run the smoke and integration tests. `master` is our main branch and `staging` is our development branch. We use `pytest` for testing the Python utilities in [reco_utils](reco_utils) and `papermill` for the [notebooks](notebooks). For more information about the testing pipelines, please see the [test documentation](tests/README.md).
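For instance, a notebook can be executed headlessly with `papermill`, much as the pipelines do (a sketch; the output path, kernel name, and parameter value are illustrative):

```
# Sketch: execute a notebook with papermill, as the nightly builds do.
# The output path, kernel name, and parameter value are illustrative.
import papermill as pm

pm.execute_notebook(
    "notebooks/00_quick_start/sar_movielens.ipynb",
    "output.ipynb",
    kernel_name="python3",
    parameters=dict(MOVIELENS_DATA_SIZE="100k"),
)
```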

### DSVM Build Status

The following tests run on a Windows and Linux DSVM daily. These machines run 24/7.

| Build Type | Branch | Status | | Branch | Status |
| --- | --- | --- | --- | --- | --- |
| **Linux CPU** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly?branchName=master)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=4792) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_staging?branchName=staging)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=4594) |
| **Linux GPU** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_gpu?branchName=master)](https://msdata.visualstudio.com/DefaultCollection/AlgorithmsAndDataScience/_build/latest?definitionId=4997) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_gpu_staging?branchName=staging)](https://msdata.visualstudio.com/DefaultCollection/AlgorithmsAndDataScience/_build/latest?definitionId=4998) |
| **Linux Spark** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_spark?branchName=master)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=4804) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/Recommenders/nightly_spark_staging)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=5186) |
| **Windows CPU** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_win?branchName=master)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6743) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_staging_win?branchName=staging)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6752) |
| **Windows GPU** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_gpu_win?branchName=master)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6756) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_gpu_staging_win?branchName=staging)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6761) |
| **Windows Spark** | master | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_spark_win?branchName=master)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6757) | | staging | [![Status](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_apis/build/status/nightly_spark_staging_win?branchName=staging)](https://msdata.visualstudio.com/AlgorithmsAndDataScience/_build/latest?definitionId=6754) |

### AzureML Build Status

These DevOps pipelines run the existing tests on AzureML.
The following tests run on an AzureML [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-compute-target). AzureML makes it possible to programmatically start a virtual machine, execute the tests, gather the results in [Azure DevOps](https://azure.microsoft.com/en-gb/services/devops/), and shut down the machine.

| Build Type | Branch | Status | | Branch | Status |
| --- | --- | --- | --- | --- | --- |
| **nightly_cpu_tests** | master | [![Build Status](https://dev.azure.com/best-practices/recommenders/_apis/build/status/nightly_cpu_tests?branchName=master)](https://dev.azure.com/best-practices/recommenders/_build/latest?definitionId=25&branchName=master) | | staging | [![Build Status](https://dev.azure.com/best-practices/recommenders/_apis/build/status/nightly_cpu_tests?branchName=staging)](https://dev.azure.com/best-practices/recommenders/_build/latest?definitionId=25&branchName=staging) |
| **nightly_gpu_tests** | master | [![Build Status](https://dev.azure.com/best-practices/recommenders/_apis/build/status/bp-nightly_gpu_tests?branchName=master)](https://dev.azure.com/best-practices/recommenders/_build/latest?definitionId=5&branchName=master) | | staging | [![Build Status](https://dev.azure.com/best-practices/recommenders/_apis/build/status/bp-nightly_gpu_tests?branchName=staging)](https://dev.azure.com/best-practices/recommenders/_build/latest?definitionId=5&branchName=staging) |


**NOTE** - these tests are the nightly builds, which compute the smoke and integration tests. Master is our main branch and staging is our development branch. We use `pytest` for testing python utilities in [reco_utils](reco_utils) and `papermill` for the [notebooks](notebooks). For more information about the testing pipelines, please see the [test documentation](tests/README.md).

20 changes: 18 additions & 2 deletions SETUP.md
@@ -19,7 +19,8 @@ This document describes how to setup all the dependencies to run the notebooks i
* [Requirements of Azure Databricks](#requirements-of-azure-databricks)
* [Repository installation](#repository-installation)
* [Troubleshooting Installation on Azure Databricks](#Troubleshooting-Installation-on-Azure-Databricks)
* [Prepare Azure Databricks for Operationalization](#prepare-azure-databricks-for-operationalization)
* [Install the utilities via PIP](#install-the-utilities-via-pip)
* [Setup guide for Docker](#setup-guide-for-docker)

## Compute environments
@@ -270,7 +271,7 @@ import reco_utils

* For the [reco_utils](reco_utils) import to work on Databricks, it is important to zip the content correctly. The zip has to be performed inside the Recommenders folder; if you zip directly above the Recommenders folder, it won't work (see the sketch below).
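A sketch of building such an archive from inside the Recommenders folder (the archive name is an arbitrary example):

```
# Sketch: run from inside the Recommenders folder so that reco_utils/
# sits at the root of the archive. The archive name is an arbitrary example.
import shutil

shutil.make_archive("reco_utils", "zip", root_dir=".", base_dir="reco_utils")
```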

### Prepare Azure Databricks for Operationalization

This repository includes an end-to-end example notebook that uses Azure Databricks to estimate a recommendation model using matrix factorization with Alternating Least Squares, writes pre-computed recommendations to Azure Cosmos DB, and then creates a real-time scoring service that retrieves the recommendations from Cosmos DB. In order to execute that [notebook](notebooks/05_operationalize/als_movie_o16n.ipynb), you must install the Recommenders repository as a library (as described above), **AND** you must also install some additional dependencies. With the *Quick install* method, you just need to pass an additional option to the [installation script](scripts/databricks_install.py).
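For context, the Cosmos DB write step in that notebook looks roughly like the following sketch (assuming the spark-cosmosdb connector is attached to the cluster; the endpoint, key, and database/collection names are placeholders, and `recs` stands for a Spark DataFrame of precomputed recommendations):

    # Sketch: write precomputed recommendations to Azure Cosmos DB via the
    # spark-cosmosdb connector. Endpoint, key, database, and collection are
    # placeholders; recs is a Spark DataFrame of recommendations.
    write_config = {
        "Endpoint": "https://<account>.documents.azure.com:443/",
        "Masterkey": "<primary-key>",
        "Database": "recommendations",
        "Collection": "user_recs",
        "Upsert": "true",
    }

    (recs.write
         .format("com.microsoft.azure.cosmosdb.spark")
         .mode("overwrite")
         .options(**write_config)
         .save())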

@@ -313,6 +314,21 @@ Additionally, you must install the [spark-cosmosdb connector](https://docs.datab

</details>

## Install the utilities via PIP

A [setup.py](reco_utils/setup.py) file is provided to simplify installing the utilities in this repo from the main directory.

This still requires the conda environment to be installed as described above. Once the necessary dependencies are installed, you can use the following command to install `reco_utils` as a Python package.

pip install -e reco_utils

It is also possible to install directly from GitHub, or from a specific branch.

pip install -e git+https://github.com/microsoft/recommenders/#egg=pkg\&subdirectory=reco_utils
pip install -e git+https://github.com/microsoft/recommenders/@staging#egg=pkg\&subdirectory=reco_utils

**NOTE** - The pip installation does not install any of the necessary package dependencies; it is expected that conda will be used, as shown above, to set up the environment for the utilities.
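After installation, `reco_utils` should be importable (the version printed depends on the checked-out commit):

    # Sketch: confirm the editable install worked; the version printed
    # depends on the checked-out commit.
    import reco_utils
    print(reco_utils.__title__, reco_utils.__version__)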

## Setup guide for Docker

A [Dockerfile](docker/Dockerfile) is provided to build images of the repository to simplify setup for different environments. You will need [Docker Engine](https://docs.docker.com/install/) installed on your system.
2 changes: 1 addition & 1 deletion notebooks/01_prepare_data/README.md
@@ -8,7 +8,7 @@ data preparation tasks witnessed in recommendation system development.
| --- | --- |
| [data_split](data_split.ipynb) | Details on splitting data (randomly, chronologically, etc). |
| [data_transform](data_transform.ipynb) | Guidance on how to transform (implicit / explicit) data for building a collaborative filtering recommender. |
| [wikidata knowledge graph](wikidata_KG.ipynb) | Details on how to create a knowledge graph using Wikidata |
| [wikidata knowledge graph](wikidata_knowledge_graph.ipynb) | Details on how to create a knowledge graph using Wikidata |

### Data split

notebooks/01_prepare_data/wikidata_knowledge_graph.ipynb
@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"## Wikidata Knowledge Graph Extraction\n",
"Many recommendation algorithms (DKN, RippleNet, KGCN) use Knowledge Graphs as an external source of information. We found that one of the bottlenecks to benchmark current algorithms like DKN, RippleNet or KGCN is that they used Microsoft Satori. As Satori is not open source, it's not possible to replicate the results found in the papers. The solution is using other open source KGs.\n",
"Many recommendation algorithms (DKN, RippleNet, KGCN) use Knowledge Graphs (KGs) as an external source of information. We found that one of the bottlenecks to benchmark current algorithms like DKN, RippleNet or KGCN is that they used Microsoft Satori. As Satori is not open source, it's not possible to replicate the results found in the papers. The solution is using other open source KGs.\n",
"\n",
"The goal of this notebook is to provide examples of how to interact with Wikipedia queries and Wikidata to extract a Knowledge Graph that can be used with the mentioned algorithms.\n",
"\n",
@@ -24,7 +24,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"System version: 3.6.8 |Anaconda, Inc.| (default, Feb 21 2019, 18:30:04) [MSC v.1916 64 bit (AMD64)]\n"
"System version: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) \n",
"[GCC 7.3.0]\n"
]
}
],
@@ -34,19 +35,17 @@
"sys.path.append(\"../../\")\n",
"print(\"System version: {}\".format(sys.version))\n",
"\n",
"import papermill as pm\n",
"import pandas as pd\n",
"import networkx as nx\n",
"import matplotlib.pyplot as plt\n",
"from reco_utils.dataset import movielens\n",
"\n",
"from reco_utils.dataset.wikidata import (search_wikidata, \n",
" find_wikidata_id, \n",
" query_entity_links, \n",
" read_linked_entities,\n",
" query_entity_description)\n",
"\n",
"import networkx as nx\n",
"import matplotlib.pyplot as plt\n",
"from tqdm import tqdm\n",
"\n",
"from reco_utils.dataset import movielens\n",
"from reco_utils.common.notebook_utils import is_jupyter"
" query_entity_description)\n"
]
},
{
@@ -548,11 +547,8 @@
}
],
"source": [
"# Record results with papermill for tests - ignore this cell\n",
"if is_jupyter():\n",
" # Record results with papermill for unit-tests\n",
" import papermill as pm\n",
" pm.record(\"length_result\", number_movies)"
"# Record results with papermill for unit-tests\n",
"pm.record(\"length_result\", number_movies)"
]
},
{
2 changes: 1 addition & 1 deletion reco_utils/__init__.py
@@ -2,7 +2,7 @@
# Licensed under the MIT License.

__title__ = "Microsoft Recommenders"
__version__ = "2019.06"
__version__ = "2019.09"
__author__ = "RecoDev Team at Microsoft"
__license__ = "MIT"
__copyright__ = "Copyright 2018-present Microsoft Corporation"
18 changes: 8 additions & 10 deletions reco_utils/dataset/wikidata.py
@@ -3,7 +3,9 @@

import pandas as pd
import requests
import logging

logger = logging.getLogger(__name__)

API_URL_WIKIPEDIA = "https://en.wikipedia.org/w/api.php"
API_URL_WIKIDATA = "https://query.wikidata.org/sparql"
@@ -57,8 +59,8 @@ def find_wikidata_id(name, limit=1, session=None):
response = session.get(API_URL_WIKIPEDIA, params=params)
page_id = response.json()["query"]["search"][0]["pageid"]
except Exception as e:
# TODO: log exception
# print(e)
# TODO: distinguish between connection error and entity not found
logger.error("ENTITY NOT FOUND")
return "entityNotFound"

params = dict(
@@ -75,8 +77,8 @@
"wikibase_item"
]
except Exception as e:
# TODO: log exception
# print(e)
# TODO: distinguish between connection error and entity not found
logger.error("ENTITY NOT FOUND")
return "entityNotFound"

return entity_id
@@ -133,9 +135,7 @@ def query_entity_links(entity_id, session=None):
API_URL_WIKIDATA, params=dict(query=query, format="json")
).json()
except Exception as e:
# TODO log exception
# print(e)
# print("Entity ID not Found in Wikidata")
logger.error("ENTITY NOT FOUND")
return {}

return data
@@ -195,9 +195,7 @@ def query_entity_description(entity_id, session=None):
r = session.get(API_URL_WIKIDATA, params=dict(query=query, format="json"))
description = r.json()["results"]["bindings"][0]["o"]["value"]
except Exception as e:
# TODO: log exception
# print(e)
# print("Description not found")
logger.error("DESCRIPTION NOT FOUND")
return "descriptionNotFound"

return description
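Taken together, these helpers can be chained as in the following sketch (the movie title is an arbitrary example; `read_linked_entities` is assumed to yield `(entity_id, name)` pairs, and failures return the sentinel values handled above):

```
# Sketch: chain the wikidata helpers to pull linked entities for one title.
# The title is an arbitrary example; read_linked_entities is assumed to
# yield (entity_id, name) pairs, and failures return sentinel values.
import logging

from reco_utils.dataset.wikidata import (
    find_wikidata_id,
    query_entity_links,
    read_linked_entities,
    query_entity_description,
)

logging.basicConfig(level=logging.ERROR)  # surface the logger.error calls above

entity_id = find_wikidata_id("The Matrix")
if entity_id != "entityNotFound":
    links = query_entity_links(entity_id)
    for related_id, related_name in read_linked_entities(links):
        print(related_name, "-", query_entity_description(related_id))
```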
(The remaining changed files are not shown.)
