4 The Center for Hospitality Research • Cornell University
Moreover, consumers have become increasingly adept at evaluating the veracity of online reviews by triangulating multiple sources with their own contextual knowledge.⁵ Because hotels can clearly apply online reviews to performance improvement and revenue enhancement, we investigate ways that management can analyze this rich and dynamic online review data for insights into the aspects of a stay that contribute to high guest satisfaction and the gaps that remain to be closed.
Although online hotel ratings have been found to be largely credible,⁶ it is worth noting sources of potential bias in online data, particularly fraudulent reviews written by people who have not actually experienced the service.⁷ Another source of bias is self-selection. Even if a review is genuine, the comments represent the views of customers who have chosen an online platform to share their opinions publicly, and that group may differ in some way from those who do not post reviews. We also note that guests interpret rating scales in diverse ways,⁸ which leads to heterogeneous information.
⁴ Migs Bassig, “2016 Trends in Hospitality and Travel,” January 18, 2016, http://www.reviewtrackers.com/2016-trends-hospitality-travel/.
⁵ Russell S. Winer and Peter S. Fader, “Objective vs. Online Ratings: Are Low Correlations Unexpected and Does It Matter? A Commentary on de Langhe, Fernbach, and Lichtenstein,” Journal of Consumer Research 42, no. 6 (2016): 846–49.
⁶ Peter O’Connor, “User-Generated Content and Travel: A Case Study on TripAdvisor.com,” Information and Communication Technologies in Tourism 2008 (2008): 47–58; and Julian K. Ayeh, Norman Au, and Rob Law, “‘Do We Believe in TripAdvisor?’ Examining Credibility Perceptions and Online Travelers’ Attitude toward Using User-Generated Content,” Journal of Travel Research, 2013, 47287512475217.
⁷ Eric T. Anderson and Duncan I. Simester, “Reviews without a Purchase: Low Ratings, Loyal Customers, and Deception,” Journal of Marketing Research 51, no. 3 (2014): 249–69.
⁸ Russell S. Winer and Peter S. Fader, “Objective vs. Online Ratings: Are Low Correlations Unexpected and Does It Matter? A Commentary on de Langhe, Fernbach, and Lichtenstein,” Journal of Consumer Research 42, no. 6 (2016): 846–49.
For this analysis, Preferred Hotels & Resorts assisted us in collecting 95,500 online ratings and reviews of 99 of its independent hotels, posted over a twelve-month period on three top OTAs: TripAdvisor, Expedia, and Booking.com. Although the hotels are independent, they agree to follow the same quality standards as part of their membership association. By focusing on independent operating units in a well-defined segment with similar quality standards, we control to some extent for variations in guest preferences and demand, although the hotels and resorts range in size from under 100 rooms to well over 250 rooms (see Exhibit 1). The properties’ similarities allow us to focus on the effects of specific operational drivers on guests’ perceptions of their experience. In this study, we are primarily interested in finding the answers to three questions:
• What are the drivers that matter most in guests’ evaluation of their experience?
• How do these drivers relate to consumer review scores at the property level?
• What consumer issues are identifiable in the review text?
Although online reviews are widely viewed as reliable, we first examine studies on the reliability of online reviews and ratings in assessing performance. Then, our quantitative analysis uses regression to assess the effects of key operational drivers on consumer review ratings, while our qualitative study uses text analytics to uncover common consumer concerns and to infer which aspects of the guests’ stay have the greatest effect on ratings.
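The quantitative step described above can be sketched as an ordinary-least-squares regression of property-level ratings on operational drivers. The driver names and data below are synthetic assumptions for illustration only, not the study’s actual variables or results:

```python
import numpy as np

# Hypothetical sketch: regress review ratings on operational drivers.
# "cleanliness" and "service" are illustrative stand-ins for the
# study's key operational drivers.
rng = np.random.default_rng(0)
n = 99  # one observation per property, matching the sample size

cleanliness = rng.uniform(3, 5, n)
service = rng.uniform(3, 5, n)
# Synthetic "true" relationship, used only to generate demo data
rating = 1.0 + 0.4 * cleanliness + 0.5 * service + rng.normal(0, 0.05, n)

# OLS via least squares: intercept plus two driver coefficients
X = np.column_stack([np.ones(n), cleanliness, service])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print([round(c, 2) for c in coef])
```

With enough observations, the estimated coefficients recover the drivers’ relative influence on the rating, which is the kind of inference the regression analysis draws at the property level.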
Online Reviews as a Valuable Source of Feedback
Online reviews continue to rise in importance, having become second only to pricing as an element in consumers’ purchase decision process.⁴
Exhibit 1
Hotel properties: geographic distribution and size

Continent        Small (<100 rooms)   Medium (101–250 rooms)   Large (>250 rooms)   Total
Europe                   13                    18                      12              43
North America             3                    13                      22              38
Asia                      —                     4                      12              16
Africa                    —                     —                       1               1
South America             —                     1                       —               1
Grand Total              16                    36                      47              99
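The exhibit’s row and column totals can be cross-checked programmatically. This minimal sketch encodes the counts shown above, with the dash cells treated as zero:

```python
# Exhibit 1 counts: hotels by continent and size category
exhibit_1 = {
    "Europe":        {"small": 13, "medium": 18, "large": 12},
    "North America": {"small": 3,  "medium": 13, "large": 22},
    "Asia":          {"small": 0,  "medium": 4,  "large": 12},
    "Africa":        {"small": 0,  "medium": 0,  "large": 1},
    "South America": {"small": 0,  "medium": 1,  "large": 0},
}

# Row totals (per continent) and the grand total across all properties
row_totals = {c: sum(v.values()) for c, v in exhibit_1.items()}
grand_total = sum(row_totals.values())
print(row_totals["Europe"], grand_total)  # 43 99
```

The column sums (16 small, 36 medium, 47 large) and the grand total of 99 properties agree with the exhibit.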