We scored the papers of Christos Faloutsos, Michael I. Jordan, and Tom Mitchell, computing a perplexity score for each document by each of these authors. Perplexity is widely used in language modeling to assess the predictive power of a model: it measures how surprising the words are from the model's perspective, and is loosely equivalent to the effective branching factor. Our goal here is not to evaluate the out-of-sample predictive power of the model, but to explore the range of perplexity scores that the model assigns to papers from specific authors. Lower scores mean that the words are less surprising to the model.
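As a minimal sketch of the quantity involved (not the authors' implementation), per-document perplexity is the exponential of the negative mean per-word log-likelihood under the model. The function name and the toy uniform-model example below are illustrative assumptions.

```python
import math

def perplexity(log_probs):
    # Per-word natural-log probabilities in; perplexity is
    # exp of the negative mean log-likelihood.
    return math.exp(-sum(log_probs) / len(log_probs))

# Sanity check: a uniform model over a 4-word vocabulary assigns
# each word probability 1/4, so its perplexity equals the
# effective branching factor, 4.
uniform_doc = [math.log(0.25)] * 10
print(perplexity(uniform_doc))  # → 4.0 (up to floating-point error)
```

Under this view, a lower perplexity for a given author's document means the model found that document's word choices less surprising.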