
RecSys

Sam Bessalah
November 28, 2013


Transcript

  1. Why a recommender system? • Help users choose among a huge amount of data • Reduce cognitive load on users • Drive business revenue - Netflix: 2/3 of the movies watched are recommended - Amazon: 35% of sales are generated via recommendations - Google News: 38% more clicks (CTR) via the recommender
  2. Search Engine vs Recommender System “ The Web is leaving

    the era of search and entering one of discovery. What's the difference? Search is what you do when you're looking for something. Discovery is when something wonderful that you didn't know existed, or didn't know how to ask for, finds you. ” CNN Money, “The race to create a 'smart' Google” 2007 http://money.cnn.com/magazines/fortune/fortune_archive/2006/11/27/8394347
  3. High level view of a Rec. Sys: a pipeline from Users and Items through Candidate Generation (identify items of interest to the user), Filtering (find already seen elements, near duplicates, clean …), Ranking (order recommendations: temporal, diversity, personalisation, infer business logic), and Feedback/Test (tracking, CTR, purchases, A/B tests, online tests, Explore/Exploit)
  4. Approaches • Non Personalized Recommendations • Content Based Recommendations • Neighborhood methods, better known as Collaborative Filtering (we'll focus on this) • Hybrid approaches
  5. • CF algorithms infer recommendations from historical user-item interactions, by assuming that « similar users tend to like similar items ». • Two approaches: - Memory based CF * User based CF * Item based CF - Model based CF (latent factor models) * Dimensionality reduction (SVD or PCA) * Matrix Factorization
  6-11. User based CF, built up step by step over six slides: 1. Identify items rated by the target user 2. Find other users who rated the same items 3. Compute similarities between the target user and those users, and select the top K most similar neighbors 4. Predict the target user's ratings for the items they have not yet rated
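The neighborhood steps above can be sketched in plain Java, without Mahout. This is a toy illustration only: the data, the class name UserBasedCF, and the choice of cosine similarity with a similarity-weighted average prediction are assumptions for the sketch, not necessarily the deck's exact method.

```java
import java.util.*;

// Toy user-based CF: ratings map userId -> (itemId -> rating).
public class UserBasedCF {

    // Cosine similarity: dot product over co-rated items,
    // norms over each user's full rating vector.
    static double similarity(Map<Integer, Double> a, Map<Integer, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<Integer, Double> e : a.entrySet()) {
            Double rb = b.get(e.getKey());
            if (rb != null) dot += e.getValue() * rb;
            na += e.getValue() * e.getValue();
        }
        for (double v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Predict the target user's rating of an item as the
    // similarity-weighted average over the top-K neighbors who rated it.
    static double predict(Map<Integer, Map<Integer, Double>> ratings,
                          int target, int item, int k) {
        Map<Integer, Double> tu = ratings.get(target);
        List<double[]> neighbors = new ArrayList<>(); // {similarity, rating}
        for (Map.Entry<Integer, Map<Integer, Double>> e : ratings.entrySet()) {
            if (e.getKey() == target) continue;
            Double r = e.getValue().get(item);
            if (r == null) continue; // neighbor must have rated the item
            neighbors.add(new double[]{similarity(tu, e.getValue()), r});
        }
        neighbors.sort((x, y) -> Double.compare(y[0], x[0])); // most similar first
        double num = 0, den = 0;
        for (double[] n : neighbors.subList(0, Math.min(k, neighbors.size()))) {
            num += n[0] * n[1];
            den += Math.abs(n[0]);
        }
        return den == 0 ? Double.NaN : num / den;
    }

    public static void main(String[] args) {
        Map<Integer, Map<Integer, Double>> ratings = new HashMap<>();
        ratings.put(1, new HashMap<>(Map.of(10, 5.0, 11, 3.0)));          // target user
        ratings.put(2, new HashMap<>(Map.of(10, 5.0, 11, 3.0, 12, 4.0))); // similar taste
        ratings.put(3, new HashMap<>(Map.of(10, 1.0, 12, 2.0)));
        // Prediction pulled toward the most similar neighbor's rating of 4.
        System.out.println(predict(ratings, 1, 12, 2));
    }
}
```

The prediction lands between the two neighbors' ratings (2 and 4), closer to the rating of the more similar neighbor.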
  12. Item based CF example. Goal: predict a user's rating for an item based on their ratings for other items 1. Identify the set of users who rated the target item 2. Find neighboring items 3. Compute similarities 4. Select the top K similar items (rank) 5. Predict the rating for the target item
  13. Similarity Computations • Pearson Similarity: takes user rating bias into account by mean-centering each user's ratings • Cosine Similarity: items are represented as vectors over the user space; similarity is the cosine of the angle between two vectors: -1 <= Sim(i,j) <= 1 • Other similarity measures: Jaccard index, magnitude-aware measures …
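A minimal sketch of the Pearson similarity just described, assuming the input arrays already hold each user's ratings on the co-rated items only; mean-centering is what removes the per-user bias:

```java
// Pearson correlation between two rating vectors over co-rated items.
public class Pearson {
    static double pearson(double[] x, double[] y) {
        double mx = 0, my = 0;
        for (int i = 0; i < x.length; i++) { mx += x[i]; my += y[i]; }
        mx /= x.length; my /= y.length;                 // per-user means
        double num = 0, dx = 0, dy = 0;
        for (int i = 0; i < x.length; i++) {            // mean-centered terms
            num += (x[i] - mx) * (y[i] - my);
            dx  += (x[i] - mx) * (x[i] - mx);
            dy  += (y[i] - my) * (y[i] - my);
        }
        return (dx == 0 || dy == 0) ? 0 : num / Math.sqrt(dx * dy);
    }

    public static void main(String[] args) {
        // A harsh rater (1..3) and a generous rater (3..5) with identical taste:
        // mean-centering makes them perfectly correlated.
        System.out.println(pearson(new double[]{1, 2, 3}, new double[]{3, 4, 5}));
    }
}
```

The example prints a correlation of 1.0 even though the raw ratings differ by a constant offset, which is exactly the bias-removal property the slide mentions.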
  14. Ranking • Balance between popularity and predicted rating • Predicted ranking: « Learning to Rank » • Use a ranking function f_rank(u, v) = w1 * p(v) + w2 * r(u, v) + b
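The linear ranking function above is easy to sketch; the weights w1, w2 and bias b here are made-up values for illustration, whereas in a learning-to-rank setting they would be learned from feedback:

```java
// Toy version of the slide's ranking function:
// f_rank(u, v) = w1 * p(v) + w2 * r(u, v) + b,
// where p(v) is item popularity and r(u, v) the predicted rating.
public class RankingFn {
    static double frank(double popularity, double predictedRating,
                        double w1, double w2, double b) {
        return w1 * popularity + w2 * predictedRating + b;
    }

    public static void main(String[] args) {
        // Arbitrary weights: lean mostly on the predicted rating,
        // with a small boost for popular items.
        System.out.println(frank(0.9, 3.5, 0.3, 0.7, 0.0)); // roughly 2.72
    }
}
```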
  15. Challenges • Data sparsity: users rarely click, rate or buy • Cold start problem • Harry Potter problem: correlations can be odious • Long tail recommendations: lesser known items
  16. Model based recommenders • Learn models from latent factors (underlying properties of the data) rather than from heuristics • Try to identify inter-relationships between variables • Clustering • Dimensionality reduction (SVD) • Matrix Factorization
  17. Dimensionality Reduction • Generalizes movies into latent semantic characteristics • Reduces dimensions and improves scalability • Reduces data sparsity and improves prediction accuracy. E.g. a user who likes « Star Trek » also likes « Star Gate » … Latent factors: sci-fi, novel based …
  18. Matrix Factorization. For a given user u, the factor vector p_u measures the extent of interest the user has in items that score high on the corresponding factors. R captures the user-item interactions.
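One common way to fit such factors is stochastic gradient descent, learning p_u and q_i so that the dot product p_u . q_i approximates each observed rating r_ui. The sketch below is illustrative, not the deck's implementation; the hyper-parameters (rank, learning rate, regularization, epochs) and the tiny rating matrix are all assumptions.

```java
import java.util.Random;

// Minimal SGD matrix factorization on a toy rating matrix.
public class MF {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int k = 0; k < a.length; k++) s += a[k] * b[k];
        return s;
    }

    // idx rows are {user, item}; vals are the observed ratings.
    // Returns {P, Q}: user and item factor matrices.
    static double[][][] factorize(int[][] idx, double[] vals, int nUsers, int nItems,
                                  int rank, double rate, double reg, int epochs) {
        Random rnd = new Random(42);                       // fixed seed, reproducible
        double[][] P = new double[nUsers][rank], Q = new double[nItems][rank];
        for (double[] row : P) for (int k = 0; k < rank; k++) row[k] = 0.1 * rnd.nextGaussian();
        for (double[] row : Q) for (int k = 0; k < rank; k++) row[k] = 0.1 * rnd.nextGaussian();
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < idx.length; i++) {
                int u = idx[i][0], v = idx[i][1];
                double err = vals[i] - dot(P[u], Q[v]);    // prediction error
                for (int k = 0; k < rank; k++) {           // regularized SGD step
                    double pu = P[u][k];
                    P[u][k] += rate * (err * Q[v][k] - reg * pu);
                    Q[v][k] += rate * (err * pu - reg * Q[v][k]);
                }
            }
        }
        return new double[][][]{P, Q};
    }

    public static void main(String[] args) {
        int[][] idx = {{0, 0}, {0, 1}, {1, 0}, {1, 2}};    // (user, item) pairs
        double[] vals = {5, 3, 4, 1};                      // observed ratings
        double[][][] pq = factorize(idx, vals, 2, 3, 2, 0.05, 0.01, 500);
        System.out.println(dot(pq[0][0], pq[1][0]));       // close to the observed 5
    }
}
```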
  19. • Two types of recommenders in Mahout: - Single machine recommenders: based on the Taste framework, focused mostly on neighborhood methods: a Recommender encapsulates the algorithm, and a DataModel handles interaction with the data. E.g. SVDPlusPlusFactorizer, ALSWRFactorizer, … - Parallel recommenders: RowSimilarityJob, ItemSimilarityJob, RecommenderJob, strongly tied to Hadoop
  20. Example: DataModel dataModel = new FileDataModel(new File("file.csv")); UserSimilarity userSimilarity = new PearsonCorrelationSimilarity(dataModel); UserNeighborhood neighborhood = new NearestNUserNeighborhood(25, userSimilarity, dataModel); Recommender recommender = new GenericUserBasedRecommender(dataModel, neighborhood, userSimilarity);
  21. Run it • User Id: 1001 • Recommended Item Id 9010. Strength of the preference: 8.699270 • Recommended Item Id 9012. Strength of the preference: 8.659677 • Recommended Item Id 9011. Strength of the preference: 8.377571 • Recommended Item Id 9004. Strength of the preference: 1.000000 • User Id: 1002 • Recommended Item Id 9012. Strength of the preference: 8.721395 • Recommended Item Id 9010. Strength of the preference: 8.523443 • Recommended Item Id 9011. Strength of the preference: 8.211071 • User Id: 1003 • Recommended Item Id 9012. Strength of the preference: 8.692321 • Recommended Item Id 9010. Strength of the preference: 8.613442 • Recommended Item Id 9011. Strength of the preference: 8.303847 • User Id: 1004 • No recommendations for this user. • User Id: 1005 • No recommendations for this user. • User Id: 1006 • No recommendations for this user.
  22. On Hadoop: hadoop jar mahout-core-0.8-job.jar org.apache.mahout.cf.taste.hadoop.item.RecommenderJob --booleanData --similarityClassname SIMILARITY_LOGLIKELIHOOD --output output --input input/data.dat
  23. Evaluate a Recommender • How to know if a recommender is good? - Compare implementations, play with similarity measures - Test your recommenders: A/B testing, multi-armed bandits • Business metrics - Does your recommender lead to increased value (CTR, sales, ..)? • Leave one out - Remove one preference, rebuild the model, see if the removed item gets recommended - Cross validation, … • Precision / Recall - Precision: ratio of recommended items that are relevant - Recall: ratio of relevant items actually recommended
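The precision/recall definitions above translate directly into code; the item ids below are toy data, not from the deck:

```java
import java.util.Set;

// Precision and recall for a single user's recommendation list.
public class PrecRecall {
    // Fraction of recommended items that are relevant.
    static double precision(Set<Integer> recommended, Set<Integer> relevant) {
        long hits = recommended.stream().filter(relevant::contains).count();
        return recommended.isEmpty() ? 0 : (double) hits / recommended.size();
    }

    // Fraction of relevant items that were actually recommended.
    static double recall(Set<Integer> recommended, Set<Integer> relevant) {
        long hits = recommended.stream().filter(relevant::contains).count();
        return relevant.isEmpty() ? 0 : (double) hits / relevant.size();
    }

    public static void main(String[] args) {
        Set<Integer> rec = Set.of(1, 2, 3, 4);  // what we recommended
        Set<Integer> rel = Set.of(2, 4, 7);     // what the user actually liked
        System.out.println(precision(rec, rel)); // 0.5: 2 of 4 recommended are relevant
        System.out.println(recall(rec, rel));    // roughly 0.667: 2 of 3 relevant found
    }
}
```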
  24. Diversity / Serendipity • Increase diversity / novelty - As items come in, remove the ones too similar to prior recommendations - Play with the ranking to randomize the top K • Increase serendipity - Downgrade overly popular items …