import seaborn as sns
import matplotlib.ticker as ticker
from neighbors import (
    NNMF_sgd,
    estimate_performance,
    load_toymat,
)


def plot_mat(mat):
    "Quick helper function to nicely plot a user x item matrix"
    ax = sns.heatmap(mat, cmap="Blues", vmin=1, vmax=100)
    ax.xaxis.set_major_locator(ticker.MultipleLocator(5))
    ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
    ax.yaxis.set_major_locator(ticker.MultipleLocator(5))
    ax.yaxis.set_major_formatter(ticker.ScalarFormatter())
All toolbox algorithms operate on 2d pandas dataframes with rows as unique users and columns as unique items. Models distinguish between two kinds of datasets:
- Dense data, in which all users rated all items. Such datasets are useful for estimating the performance of an algorithm by testing how well some % of ratings can be masked out and then recovered via prediction. This is useful for benchmarking model performance and simulating situations with datasets of varying sparsity. Conceptually this is equivalent to supervised learning, where we make predictions with knowledge of the "correct answers" that can be used to compute model performance.
- Sparse data, in which some user-item ratings were never observed. This is the primary intended use case of the toolbox. A model can be trained on the observed ratings using various collaborative filtering algorithms to generate predictions about these missing ("unobserved") ratings.
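To make the distinction concrete, here's a minimal pandas sketch (the toolbox itself isn't needed for this): a dense frame has a rating in every cell, while a sparse frame marks unobserved ratings as NaN.

```python
import numpy as np
import pandas as pd

# Dense: every user rated every item
dense = pd.DataFrame(
    [[80, 20, 55], [90, 35, 60]],
    index=["user_1", "user_2"],
    columns=["item_1", "item_2", "item_3"],
)

# Sparse: some user-item ratings were never observed (NaN)
sparse = dense.copy()
sparse.iloc[0, 1] = np.nan

print(dense.isna().sum().sum())   # no missing ratings
print(sparse.isna().sum().sum())  # one missing rating
```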
In this tutorial we'll demonstrate basic toolbox features on dense data. The
load_toymat function can be used to generate some sample data for our purposes. Let's generate a dataset in which each of 50 users rated 50 items on a scale from 1-100.
Note: the numbers chosen are just for illustrative purposes and the number of users and items doesn't have to be equal
toy_data = load_toymat(users=50, items=50, random_state=0)
plot_mat(toy_data)
Fitting a model¶
Fitting a model works similarly to libraries like
sklearn. You just need to initialize a model object and call its
.fit method. When working with dense data, i.e. every user rated every item, we need to initialize the model with a mask or a value between 0-1 that indicates what proportion of the observed data should be treated as "missing." This allows us to simulate a situation in which we hadn't observed these ratings at all.
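Conceptually, the masking step just replaces a random fraction of cells with NaN. A rough sketch of the idea using plain numpy/pandas (not the toolbox's actual implementation):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in for a dense 50 x 50 user x item matrix of ratings from 1-100
data = pd.DataFrame(rng.integers(1, 101, size=(50, 50)).astype(float))

# Choose ~25% of all cells uniformly at random and set them to NaN
mask = rng.random(data.shape) < 0.25
masked_data = data.mask(mask)

frac_missing = masked_data.isna().mean().mean()
print(round(frac_missing, 2))  # close to 0.25
```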
Using n_mask_items we can mask out 25% of the ratings and retain 75%. Notice how some user-item combinations are now set to NaN:
model = NNMF_sgd(toy_data, n_mask_items=.25, random_state=0)
# Take a look at the first 10 user x item masked ratings
model.masked_data.iloc[:10, :10]
Now we can try to predict these missing ratings by fitting the model and plotting its predictions.
The left matrix is the input data after masking. The middle is the model's predictions for the missing ratings plus the ratings we did observe. The right is a scatter plot of model predictions for missing ratings vs the true values of these ratings.
For convenience the plot title contains the RMSE and correlation of the missing ratings (averaged across users to account for user-level clustering). RMSE is interpretable as the average misprediction on the same scale as the original ratings, in this case 1-100.
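As a sketch of what "averaged across users" means here (based on the description above, not necessarily the toolbox's exact code): compute RMSE within each user's masked ratings first, then average those per-user scores.

```python
import numpy as np
import pandas as pd

# Hypothetical true vs predicted ratings for the masked cells of 3 users
true = pd.DataFrame([[50.0, 60.0], [20.0, 30.0], [80.0, 90.0]])
pred = pd.DataFrame([[55.0, 58.0], [25.0, 33.0], [70.0, 95.0]])

# RMSE per user (row), then averaged across users
per_user_rmse = np.sqrt(((true - pred) ** 2).mean(axis=1))
user_rmse = per_user_rmse.mean()
print(per_user_rmse.round(2).tolist(), round(user_rmse, 2))
```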
To retrieve the matrix containing the model predictions we can use the
.transform method. By default this will return a matrix containing ratings for values that were observed and predictions for values that were missing (i.e. masked out). To return predictions for the observed values as well, i.e. not passing these values forward, set
Now the masked out ratings have been replaced with model predictions:
predictions = model.transform()
# Take a look at the first 10 users x items after masking
predictions.iloc[:10, :10]
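Conceptually, this default output is the observed ratings with predictions filled in only where values were masked. A sketch of that fill step with plain pandas (not the library's internals):

```python
import numpy as np
import pandas as pd

observed = pd.DataFrame([[80.0, np.nan], [np.nan, 40.0]])  # masked cells are NaN
predicted = pd.DataFrame([[78.0, 25.0], [65.0, 42.0]])     # model output for every cell

# Keep observed ratings; use predictions only for the missing cells
combined = observed.fillna(predicted)
print(combined)
```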
For NNMF models it's easy to inspect and debug model training using the .plot_learning function. It's also possible to get more detail while fitting by passing
The plot title below also displays the final RMSE on the observed ratings during training and indicates whether the model fit converged within the number of iterations.
Scoring a model's predictions¶
Working with dense data affords us a ground truth that can be used to assess the model's performance. We support a number of different metrics to do this (RMSE and correlation in the plots above are just two). To return a model's performance you can use the
.score method. However, the
.summary method may be more convenient as it returns all supported metrics along with separate scoring for both the observed ratings (model training performance) and missing ratings (model testing performance).
Additionally, metrics are scored in two different ways.
user metrics below score performance separately for each user first and then average these scores. This approach is more common in the social sciences, where observations are treated as "clustered" by user.
all simply scores all ratings, ignoring the fact that multiple ratings come from each user. This method is more common in machine learning, where overall model performance is of primary interest.
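The difference between the two approaches can be sketched directly (a toy example, not the toolbox's code): because RMSE is nonlinear, averaging per-user scores and pooling all ratings give different numbers when some users are harder to predict than others.

```python
import numpy as np
import pandas as pd

# Prediction errors for 2 users x 3 items; user 2 is much harder to predict
errors = pd.DataFrame([[1.0, 1.0, 1.0], [7.0, 7.0, 7.0]])

# "user": score each user separately, then average the per-user scores
user_rmse = np.sqrt((errors ** 2).mean(axis=1)).mean()

# "all": pool every rating together, ignoring which user it came from
all_rmse = np.sqrt((errors ** 2).values.mean())

print(user_rmse, all_rmse)  # the two summaries disagree
```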
User performance results (not returned) are accessible using .user_results. Overall performance results (returned) are accessible using .overall_results.
Benchmarking a model's performance¶
The performance above is specific to the exact ratings we masked out. But how does the model perform in general when 25% of the data is missing?
While we could repeat the procedure above for different random masks of the same size, doing so by hand is a bit tedious. Fortunately, the
estimate_performance function is designed exactly for this purpose. Just pass it a model class (not a model object), some data, and the amount of masking, and it will repeatedly refit the model with new random masks and return the average performance across all iterations. This is functionally equivalent to randomized cross-validation, where the size of the training and testing splits is controlled via the
n_mask_items argument. In the example below, masking 25% of the data is equivalent to 4-fold cross-validation where training is done using 3 folds and testing is performed on the left out fold.
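The cross-validation analogy is just arithmetic: with 50 users x 50 items, masking 25% of the ratings holds out the same number of cells as one fold of 4-fold CV.

```python
n_users, n_items = 50, 50
n_ratings = n_users * n_items   # 2500 total ratings
n_test = int(n_ratings * 0.25)  # 625 ratings held out by masking
n_train = n_ratings - n_test    # 1875 ratings used for fitting

# One fold of 4-fold CV holds out the same amount
fold_size = n_ratings // 4
print(n_test == fold_size, n_train)
```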
overall_results, user_results = estimate_performance(
    NNMF_sgd, toy_data, n_iter=10, n_mask_items=.25
)
overall_results
Data sparsity is 0.0%. Using random masking...
We can also see if predictive performance varied by user to identify some users that were particularly difficult to generate predictions for.
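For example, with a stand-in per-user scores frame (the "rmse" column name here is an assumption for illustration, not necessarily what the toolbox returns), sorting surfaces the hardest-to-predict users:

```python
import pandas as pd

# Hypothetical per-user scores; the real frame's columns may differ
user_scores = pd.DataFrame(
    {"rmse": [4.2, 12.7, 6.1, 3.9]},
    index=["user_1", "user_2", "user_3", "user_4"],
)

# Users with the highest RMSE were hardest to predict
hardest = user_scores.sort_values("rmse", ascending=False)
print(hardest.index[0])
```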
By default, estimate_performance only returns performance on missing data. To see performance on all subsets use return_full_performance=True. You can also use return_agg=False if you want to see performance for each iteration separately rather than the mean and std across all iterations.