Improving Research Practices and Reducing Inefficiencies
If you follow the world of media and advertising research, you may have heard something about a debate set off by last week’s release of an IAB-commissioned report entitled, “An Evaluation of Methods Used to Assess the Effectiveness of Advertising on the Internet.” If you’re not a media or advertising researcher, you probably skipped over the Twitter stream, the trade press and the numerous blog posts about the paper. As a digital salesperson, business development executive, agency chief or CMO, you likely figured it had nothing to do with you. That assumption would be incorrect. This methodological review — a completely independent one, at that — conducted by the esteemed social scientist Dr. Paul Lavrakas, has everything to do with you and your company. Here’s why…
With permission from Dan Murphy, SVP Interactive Research & Ad Traffic, Univision Interactive Media, we cite the following: “The IAB study was initiated a while back because publishers were being inundated with large numbers of studies — in some cases so many that users were getting multiple survey requests on the same user session. (Pop-up blockers, anyone?) If the publisher didn’t accept the study, they were told: no campaign. Putting aside the issue of recruiting sample with a pop-up window (i.e., a variation of a pop-up ad), many had questions about sample frame issues from power users, shared computers, users of various bandwidths and so forth. Concurrently, many of us (both buy- and sell-side) were seeing peculiarities in the output.”
Simply put, the independent review was undertaken to solve a business problem that had become more pressing over time. Should the industry be forced to continue using research that may or may not be doing the job, simply in order to be included on a buy? Does the research need to be improved or changed in order to preserve the quality of the consumer experience? After all, if, as an industry, we do not make the consumer the center of our universe, what kind of businesses will we have? And, lastly, what guidance is being given to the agency staffers who are forcing these studies upon the sellers?
No research for any medium is perfect. We at the IAB did not expect perfection when we commissioned Dr. Lavrakas. We got what we expected: an objective evaluation that clearly expresses confidence in what is good, critiques what is bad, and goes on to suggest steps for improving site-intercept ad effectiveness research. In addition, the IAB and our member companies are committed to using the paper as the foundation for Best Practices work that forces both buyers and sellers to be wiser about how the studies are used.
It is time that, as an industry, we invested in making ad effectiveness research better; in moving away from ad hoc solutions that sell well to buyers who may not understand the limitations of the studies; in truly building a body of work that proves how well online media build brands. We cannot get there if we continue to ignore the need to think big, get out of our silos and recognize that we must raise the standards, the practices and the discourse itself to a new level.
Let’s embark upon applying the lessons of Dr. Lavrakas’s work to solutions that make online media attractive destinations for consumers and for brand advertising dollars. And, in the spirit of collaboration shown by the vendors who let us look under the hood for this report, let’s move forward together so that we really can get to the next level.