To my lazy readers, the bottom line is that they use a simple linear factorization with a CDF link function, and take a Bayesian approach to inference. Implicit ratings are handled by adding random negative samples, as proposed in: Rong Pan, Yunhong Zhou, Bin Cao, Nathan N. Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-Class Collaborative Filtering. In Proceedings of the Eighth IEEE International Conference on Data Mining (ICDM '08), IEEE Computer Society, Washington, DC, USA, pp. 502-511, 2008.
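The random-negative-sampling idea from Pan et al. can be sketched in a few lines: observed (user, item) pairs are the positives, and unobserved pairs are drawn uniformly at random and treated as weak negatives. This is a minimal illustration only; the function name, signature, and sampling ratio are my own assumptions, not from the paper or the post.

```python
import random

def sample_negatives(positives, n_users, n_items, ratio=1, seed=0):
    """Draw random unobserved (user, item) pairs as weak negatives,
    in the spirit of Pan et al. (2008) one-class collaborative filtering.
    Illustrative sketch: names and parameters are assumptions."""
    rng = random.Random(seed)
    observed = set(positives)
    negatives = []
    target = len(positives) * ratio  # e.g. one negative per positive
    while len(negatives) < target:
        u = rng.randrange(n_users)
        i = rng.randrange(n_items)
        if (u, i) not in observed:
            negatives.append((u, i))
            observed.add((u, i))  # avoid sampling the same pair twice
    return negatives

# Toy example: 3 users, 3 items, 3 observed positive interactions.
positives = [(0, 1), (1, 2), (2, 0)]
negatives = sample_negatives(positives, n_users=3, n_items=3, ratio=1)
```

The sampled negatives are then fed to the factorization alongside the positives, typically with a lower confidence weight.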
Aapo Kyrola reports a great buzz around his GraphChi project: it was starred by 83 users and +1'd by 31 users, with 500 downloads of the C++ code base and 250 downloads of the Java code base. Very impressive for a project launched three weeks ago. It was also covered by MIT Technology Review. Here is an executive summary I wrote a couple of weeks ago.
I learned from Nimrod Priel's Data Science that Amazon EC2 has a new SSD disk instance.
I often get updates from people about recent lectures. Luckily I filter most of them out, since many are boring and do not say much. One notable lecture I did like is Cloudera's Josh Wills' talk from last year's Big Learning workshop. In retrospect, it is well worth watching: it discusses common setbacks you hit when trying to deploy machine learning in practice.