Online reviewers' criteria for judging books

Research output: Contribution to conference › Abstract › Scientific


The ODBR database of Online Dutch Book Response contains (at the time of writing) 390,000 (short) online book reviews and 510,000 other book response items, harvested from Dutch-language mass review sites and from the largest online bookseller in the Netherlands. While a resource such as this, based on self-selection of participants, can never pretend to be representative of 'the' reader, the database does bring together a very large collection of non-professional writing about literature and books more generally. My paper for the panel will look at the standards of quality in literature and reading that people explicitly or implicitly express in the reviews and other discussions that the database holds.

Because of the size of the collection, studying the reviews requires a tool for 'distant reading': a tool that can summarize the evaluative aspects discussed in the reviews. To my knowledge, no such tool currently exists, certainly not for Dutch. What does exist are tools that determine positive or negative sentiment in a review. More advanced tools can associate sentiment with aspects of simple, standardized products such as cameras, but not of books.

For describing the evaluative aspects in reviews, I use a version of the coding system developed by Linders (2012). Linders distinguishes aspects of books (e.g. style, plot, characters) and characteristics that are applied to these aspects (e.g. emotion, vividness, familiarity). A vivid style can be coded as "A 4": "A" for style and "4" for vividness. Some characteristics represent a scale: the familiarity axis is used both for familiarity and for unfamiliarity. I have added some characteristics to Linders' system, and I also explicitly code whether an evaluation is positive or negative and which end of a scale applies. To create a tool that can analyse the corpus, I create rules that associate textual patterns with evaluative codes.
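To make the code structure concrete, here is a minimal sketch of how such a composed code might be represented. The aspect letters, characteristic numbers, and labels below are only the examples mentioned in the abstract; the full inventory is defined in Linders (2012) and my extensions of it.

```python
# Illustrative subset of the coding system (assumed mapping, for exposition only).
ASPECTS = {"A": "style", "K": "book in general"}
CHARACTERISTICS = {4: "vividness", 18: "entertaining"}

def make_code(aspect, characteristic, scale_end=None, polarity=None):
    """Compose a code string such as 'A 4' or 'K 18 1 p'.

    scale_end marks which end of a scaled characteristic applies;
    polarity ('p'/'n') marks a positive or negative experience.
    """
    parts = [aspect, str(characteristic)]
    if scale_end is not None:
        parts.append(str(scale_end))
    if polarity is not None:
        parts.append(polarity)
    return " ".join(parts)

print(make_code("A", 4))           # vivid style -> "A 4"
print(make_code("K", 18, 1, "p"))  # entertaining book, positive -> "K 18 1 p"
```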
For instance, the word group 'in one sitting' will usually indicate that the book was a good read, so a rule is created that associates 'in one sitting' with K 18 1 p (= book in general, entertaining, positive end of scale, positive experience). Sometimes the patterns are much more complex: the word 'accessible' is not necessarily evaluative, but used within a certain distance from 'book' it probably is; however, if the word 'not' occurs in between, the evaluation is probably no longer positive. Patterns that express conditions like these can be formulated in the Corpus Query Language (CQL), e.g. "book" [word != "not"]{0,5} "accessible". This requires a phrase beginning with 'book', followed by zero to five words other than 'not', followed by the word 'accessible'. Patterns can also use lemmas, regular expressions (extended wildcards) and part-of-speech tags.

In order to test the feasibility of this approach, I am currently working on simultaneously annotating the reviews of works by the widely read Dutch novelist Tommy Wieringa and creating the corresponding rules. The database contains 393 reviews of Wieringa's works. So far, 303 annotations have been added to 78 reviews, and 253 rules have been created. The average Spearman correlation between the manually applied annotations and the annotations as computed (by applying the rules to the text of the reviews) is 0.75. This investigation is ongoing, but the correlation suggests that, based on the textual patterns, we can get a fairly good idea of the criteria a reviewer uses in judging a book. This also implies that it should be possible, at a later stage, to do research into e.g. how different criteria are applied in judging different genres, or how different readers or reader groups apply different criteria, without the need to manually annotate thousands of reviews.

Linders, Y. (2012). Argumentation in Dutch literary criticism 1945–2005. In C. Perry & M. Szurawitzki (Eds.), Sprache und Kultur im Spiegel der Rezension (pp. 261–268). Frankfurt am Main: Peter Lang.
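The negation-sensitive pattern above can be illustrated with a rough regular-expression translation. This is my own sketch of the CQL query "book" [word != "not"]{0,5} "accessible", not the actual tool, and it works on plain English word forms only (no lemmas or part-of-speech tags, which real CQL engines support).

```python
import re

# The word 'book', then zero to five whitespace-separated words none of
# which is 'not', then the word 'accessible'.
PATTERN = re.compile(
    r"\bbook\b"              # anchor word
    r"(?:\s+(?!not\b)\w+){0,5}"  # up to five intervening words, excluding 'not'
    r"\s+accessible\b",      # target word
    re.IGNORECASE,
)

def matches(review):
    """Return True if the review contains the pattern."""
    return PATTERN.search(review) is not None

print(matches("A book that is very accessible"))     # True: positive evaluation
print(matches("The book is not accessible at all"))  # False: 'not' blocks it
print(matches("This accessible book"))               # False: order matters
```

A matching rule would then emit a code such as K 18 1 p for the review; the regex is only a stand-in for one CQL condition, since the real rule set also exploits lemmas and part-of-speech tags.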


Conference: IGEL 2018


