"What method or methods do you use or recommend for use for measuring the performance of viewer response to digital signage content? And why do you use or recommend those methods?"
There are a number of different methods that can be used to measure the performance of viewer response to digital signage. We have implemented some techniques, but because this is a newer concept for digital signage (versus online or other communications channels), we are for the most part still learning our way and have not implemented a complete solution with more robust features and reporting. To date we have focused on integrated tracking in some of our campaigns and communications pieces, relying on four main techniques for engaging viewers in an attempt to move them to a next action that we can then track.
QR codes are one of the techniques we have used, with varying degrees of success. We use bit.ly to create unique codes for campaign-related content that redirect to our website, specifically to the pages that let the user get more information on the current offer or promotion. We use QR codes similarly in other ways — at community events and on some of our branded merchandise — but we have found that low adoption is an inhibitor, whether in digital signage or elsewhere. The response compared to the estimated viewership and frequency of the content is quite low.
In a similar fashion we have used unique URLs to direct viewers to online content, where we can then measure the results. We set up a rewritten URL to our Web content in much the same way we use the QR codes. Because these are on our primary domain, they can be tracked along with all our other Web traffic using our Web analytics program. Since the rewritten URL does not get used anywhere else, we can be certain that any traffic arriving from it is attributable to the digital signage content.
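A minimal sketch of the unique-URL technique described above: a vanity path used only on the signage resolves to the real campaign page, and every hit is recorded so it can be attributed to the screens. The paths, target URL and log format are illustrative, not the actual setup; in production this logic would sit behind a web-server redirect and feed the analytics program.

```python
import datetime

# Vanity paths shown on screen -> real campaign pages (hypothetical names)
CAMPAIGN_PATHS = {
    "/springoffer": "https://example.org/promotions/spring-offer",
}

def resolve_and_log(path, hit_log):
    """Return the redirect target for a vanity path, recording the hit.

    Returns None for unknown paths. hit_log stands in for whatever sink
    the web analytics program reads (access log, database table, etc.).
    """
    target = CAMPAIGN_PATHS.get(path)
    if target is not None:
        hit_log.append((datetime.datetime.now().isoformat(), path))
    return target

hits = []
resolve_and_log("/springoffer", hits)
print(len(hits))  # every signage-driven visit leaves exactly one record
```

Because the vanity path appears nowhere but the signage, each logged row can be credited to the screens without any guesswork.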
The third technique we use involves setting up unique toll-free numbers that redirect back to our Member Contact Centre. We use a service to set up and manage the toll-free numbers, which also provides a simple analytics reporting package: each time someone calls a number, it is registered as a hit on a webpage, so we have a report showing the volume of calls coming in from each unique toll-free number. Our MCC staff keep a record of the calls that come in and the result of each call, depending on the campaign currently active.
The last technique is the simplest, as it just involves providing a script and questionnaire to our front-line staff. They can use the questionnaire to ask members about their response to the digital signage, or fill it out themselves if someone happens to mention something related to the content in the current schedule. It allows us to spot-check the response to digital signage in branches, and whether the screens are being viewed at all. This is an ongoing process that takes time and effort to produce results, but it captures more about how people react to the content than the other techniques, which simply focus on whether the appropriate next step was taken.
We have reviewed some anonymous video analytics software, programs that use eye tracking and other techniques to track viewership, but have not implemented anything on our network to date. The reporting capability and results are impressive and certainly have great benefit for measuring the performance of viewer response. This is something we certainly will look at implementing in the future, but at the moment the cost for this is a barrier as we are still establishing and growing our digital signage network.
Content displayed in one set of sites, contrasted with control sites where the content is not displayed, provides the ability to measure the impact the content has on customer behavior. Affidavit logs are pulled by our CastNET Scala system to provide Marketing with actual content play data, which can be compared to product sales data for the specific sites. Reason? Direct relationship of digital signage to customer behavior!
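The arithmetic behind this test-versus-control comparison can be sketched as a simple lift calculation; the sales figures and site counts below are invented for illustration, not actual CastNET Scala data.

```python
# Estimating content impact as the lift of mean sales at "test" sites
# (content displayed) over matched "control" sites (content withheld).
# All figures are invented for illustration.

def sales_lift(test_sales, control_sales):
    """Percent lift of mean test-site sales over mean control-site sales."""
    mean_test = sum(test_sales) / len(test_sales)
    mean_control = sum(control_sales) / len(control_sales)
    return 100.0 * (mean_test - mean_control) / mean_control

test = [1240.0, 1310.0, 1275.0]     # weekly product sales, signage sites
control = [1150.0, 1190.0, 1160.0]  # matched sites without the content
print(f"Estimated lift: {sales_lift(test, control):.1f}%")  # Estimated lift: 9.3%
```

The control sites matter because they absorb seasonality and promotions that would otherwise be misattributed to the signage.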
Measuring viewer response has never been a top priority for us, as we generally push information, not ads. That said, if we were to take some time and add a QR code to some of the information we are pushing, we might be able to get some sense of response, but I'm not a firm believer in the use of QR codes in general. I think QR codes are not used to the degree that some people would have you believe; think about it, how many people do you know who use QR codes? I don't know any.
What we could use to quantify response is actual turnout based on registration for an event. The school has been kicking around the idea of using the digital signage to let students register for events; the screen would direct viewers to a specific email address or URL. By using a redirection method, we could collect metrics on the number of viewers who emailed or visited the site, giving us some sense of response.
There are several ways to measure the performance of viewer response to digital signage content. In our "venue" environment we take a simple and direct approach, relying largely on social media responses to "text-to" addresses, and on discount hits for product codes used on the signage. The hardest response to handle, the most dreaded, the one that prompts the 911 panic calls from the management team … is when the directional and room-display designators are down, because then you see people aimlessly walking around and into the wrong meetings, disrupting everything; an ugly mess and a loss of sales revenue. What is life if there isn't a little pressure once in a while?! But that is a whole different topic of discussion …
I have a sense that everyone's idea of performance or return on investment is going to vary greatly. Our focus is more on instant gratification: Tell customers/visitors how to get there now, scan the code and use it at a neighboring booth now, find your meeting room now, etc. Unfortunately, ROI is getting more costly on the technology side and less definable, because we are a service-oriented support team with marketing as the driving force behind the content. For example, what is the ROI on the phone line? Is it used once a day, week, month or never? If there are dollars associated with equipment and metrics are available, someone else usually wants the credit. The same holds true for the infrastructure. Could we just get away with old "slider-boards"? Probably, but we wouldn't have a job and we couldn't keep up with the standard "up-isms" marketing bombards us with. Just watch any sci-fi movie and its vision of the future, with electronic advertising on every building, floating in and out of every surface. At what point do you say saturation is too much and people just tune out? Do we live in a name-recognition society, or did you buy that burger because you saw the ad a hundred times?
My world is a simple one at this point. On performance measures at the office, everything, and I mean everything, has to have justification and ROI numbers for why we need to do "it" before it is ever funded. The digital signage technology piece is still too new and is considered a novelty, similar to Wi-Fi several years ago. It will catch on as soon as I can show them one report on the performance; then another metric will be added to someone's plate to monitor the performance and return.
Until that happens, I will live with discounts taken, manual-entry questionnaire machines and impulse purchases, at least until I can deliver a strong, reliable source of information that makes sense, with metrics to compare against and validate.
We have tried a number of methods for measuring the performance of viewer response to digital signage content. Our preferred method has been a physical control group, where mirror images of a location (such as a quick-service dining facility) are fitted differently, one getting digital signage and the other traditional static signage. Through this method we are able to accurately measure response behavior through metrics such as average check amount and item count on specific targeted products. While this has been our most accurate method for measuring viewer behavior, it is obviously not always possible given the facilities, and it does not measure the viewer's perception or opinion of the content. In order to understand how the viewer felt about the content and how it may have influenced their decision or provided additional knowledge, instant surveys have been most effective. Of course, response to a call-to-action is another good method of measuring performance. That response could be to a coupon or offer, a QR code or a link to a specific URL.
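One way to check that a difference in average check between the mirrored locations is more than noise is a two-sample (Welch's) t statistic. The check amounts below are invented, and this hand-rolled version is only a sketch; a statistics package would normally be used.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

digital = [9.80, 10.40, 10.10, 9.90]  # average check, digital-signage site
static = [9.10, 9.50, 9.00, 9.40]     # average check, static-signage site
print(f"t = {welch_t(digital, static):.1f}")  # t = 4.5
```

A large |t| (compared against the t distribution for the sample sizes involved) suggests the lift is real rather than day-to-day variation.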
I was recently in a meeting where our team was evaluating interactive touchscreens for our patient population. The question on the table: in order to help our leadership understand the value of implementing this new tactic, how can we most effectively measure whether the interactive adds to or detracts from our patient experience? The answer … the magic is in the mix.
No one form of measurement can get at a tactic's true validity. Only a mixture of measurements can genuinely evaluate whether a tactic meets objectives. Our team decided to put into place a few types of measurement that, when analyzed together, would offer our leadership a more informed decision on whether to fully invest in the tactic.
- Analytics from the touch opportunity … what options did our patients use most, how often did they use those options, etc.
- Staff who are near the opportunity … what did they get asked most, how often did patients seek out staff over using the touch opportunity, etc.
- Onsite observations — did the opportunity frustrate or help our patients, was there a waiting line to use the opportunity, etc.
- Simply ask — talk with the users of the opportunity to learn more about what worked, what didn't work and how it could complement their experience
By having a mixture of measurements, a more realistic view of what's working and what isn't can be found. That allows innovation to occur and possibly a more meaningful experience for the user. It also allows for informed decision-making, which affects whether and how the strategy gets applied. The magical mix not only separates a good idea from a bad one, but saves a company a great deal of investment in time, money and resources.
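The first item in the mix above, analytics from the touch opportunity, can be as simple as tallying which options patients touch. The event log format and option names here are hypothetical.

```python
from collections import Counter

# Hypothetical touch-event log: (timestamp, option touched)
events = [
    ("09:02", "wayfinding"),
    ("09:05", "wayfinding"),
    ("09:11", "physician directory"),
    ("09:40", "wayfinding"),
    ("10:02", "visiting hours"),
]

# Tally how often each option was used, most popular first
usage = Counter(option for _, option in events)
for option, count in usage.most_common():
    print(option, count)  # "wayfinding 3" comes first
```

Cross-referencing a tally like this with the staff questions and onsite observations is what turns raw touch counts into an informed decision.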
The only measurement we use to determine the performance of our digital signage is the old-fashioned paper survey.
As our employee-messaging screens are informational, during internal events we run a trivia raffle contest.
The first questions center around the festivities and the last question is usually something like:
How do I stay informed (please check all that apply):
___I'm the last to know (this one usually gets interesting comments).
The easiest method would be to tie your content scheduling software to your POS software, which gives you the ability to track sales on a real-time basis against what customers are viewing across your digital network. DSIQ did this very well for Wal-Mart, but if you have issues tracking, or don't deal with a typical point-of-sale system, QR codes may be an easy fix as long as you cycle them in a way that gives your customers time to capture them on their mobile devices. At MGM we also track all the data from our interactive screens, i.e., restaurant menus, wayfinding, spa-menu touchscreens, etc.
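The scheduling-to-POS tie-in can be sketched as a time-window join: count sales of a promoted item that land shortly after the content airs. The data model, window length and item codes below are assumptions for illustration, not DSIQ's actual mechanism.

```python
from datetime import datetime, timedelta

def sales_during_plays(play_log, transactions, window_minutes=15):
    """Count transactions of each promoted item that occur within
    window_minutes after the promoting content aired."""
    window = timedelta(minutes=window_minutes)
    counts = {}
    for item, aired_at in play_log:
        counts.setdefault(item, 0)
        for sold_item, sold_at in transactions:
            if sold_item == item and aired_at <= sold_at <= aired_at + window:
                counts[item] += 1
    return counts

noon = datetime(2013, 5, 1, 12, 0)
plays = [("BURGER-COMBO", noon)]                      # from the scheduler
sales = [
    ("BURGER-COMBO", noon + timedelta(minutes=5)),    # inside the window
    ("BURGER-COMBO", noon + timedelta(minutes=40)),   # too late to attribute
    ("SALAD", noon + timedelta(minutes=3)),           # different item
]
print(sales_during_plays(plays, sales))  # {'BURGER-COMBO': 1}
```

The window length is the key judgment call: too short misses slow buyers, too long attributes unrelated sales to the content.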
Chet Patel (with thanks to XPODIGITAL)
Measuring the effect of digital signage marketing is not an exact science, but traffic to outlets on property is one of the easiest ways to measure impact. We also measure how many times a QR code is scanned if that's part of the ad program, which can help show how much a message is getting through.
With any digital signage deployment, one of the most desirable outcomes is increased sales, higher revenues or improved profit margin. Behind these objectives there is a rather straightforward performance measurement; simply prove any of the following is occurring:
- Higher average sales transaction
- Higher rooftop sales
- More volume of higher margin items
Now, setting achievement targets for these measures and monitoring them over time is where some fun can occur. This is the time to experiment with different executions of content promoting the same product and see which one produces the most desired result, or which has the least impact. You can try airing different content at alternative times of day, or vary the location of key messages within the store, or get as granular as the area of the screen. Another interesting test is to see whether a call to action aimed at the customer's next visit has more impact than an immediate call to action.
With the above suggestions, POS data collected over a period of time to quantify sales is then analyzed to draw conclusions. However, in many cases there are softer objectives to achieve, like being competitively relevant, customer perception of being current, or cleaning up cluttered signage. Many of these softer objectives can be measured with exit interviews or by interviewing the staff. In many instances, these softer measures outweigh the value of the more easily measured sales results. In the end, it isn't much of a stretch to believe that customers pleased with the digital signage execution will continue their patronage, leading to measurable sales transactions.
Currently we are using exit surveys to determine viewership, relevance, effectiveness and the balance of marketing/entertainment. We also survey our employees so that we are aware if customers are asking questions about our digital media, as well as to help determine whether all our objectives are being met, including decreasing perceived wait time. Because our current digital signage is not interactive, this is the best way we have found to determine ROO on our in-store digital signage.
Every advertiser wants post-campaign reporting that is transparent and robust. The advertiser will want to know what the campaign delivered and whether it was successful. For every campaign that runs on Walmart's Smart Network we share a report that shows campaign delivery in GRPs, impressions and weeks/dates on air. In addition, because we have a closed-loop system with access to our POS, we also share data with the advertiser on sales lift, showing pre-, in- and post-campaign sales volume. The more transparency there is in measurement and reporting, the more successful you can be with campaigns and the easier it is to make changes for future campaigns.
We have one agency partner, DS-IQ, that does all the analytics and post-campaign reporting in conjunction with Studio Squared, our creative and sales agency. They, along with Wal-Mart personnel, review campaign results. All the campaigns and their historical results are housed on a portal that advertisers have access to for their specific campaign results, and these are reviewed with the advertisers in regularly scheduled meetings.
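The pre-, in- and post-campaign comparison described above reduces to bucketing daily sales by date. The dates and unit counts below are invented, and this is only a sketch of the idea, not the actual reporting pipeline.

```python
def campaign_report(daily_sales, start, end):
    """Mean daily unit volume before, during and after a campaign.

    daily_sales is a list of (ISO date string, units); ISO date strings
    compare correctly as plain strings, so no date parsing is needed.
    """
    buckets = {"pre": [], "in": [], "post": []}
    for date, units in daily_sales:
        if date < start:
            buckets["pre"].append(units)
        elif date <= end:
            buckets["in"].append(units)
        else:
            buckets["post"].append(units)
    return {phase: sum(v) / len(v) for phase, v in buckets.items() if v}

sales = [
    ("2013-05-01", 100), ("2013-05-02", 110),  # pre-campaign
    ("2013-05-03", 140), ("2013-05-04", 150),  # on air
    ("2013-05-05", 120),                       # post-campaign
]
print(campaign_report(sales, "2013-05-03", "2013-05-04"))
# {'pre': 105.0, 'in': 145.0, 'post': 120.0}
```

Comparing the "in" average against both "pre" and "post" shows not only the lift during the flight, but how quickly it decays once the campaign ends.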
We've actually gone the low-tech route. We have our signs in areas that are fairly visible to our own staff, and since ours are intended to be directional helpers/navigation tools that lower the number of people asking for directions at those staff locations, our own Guest Services staff give me feedback on the performance of the units themselves.
I've also added a counter to our signs, so that I know exactly how many times the interactive signs have been touched each day, week or month, to confirm those numbers. Considering our initial goal was to reduce the number of times our staff were asked for directions, I think our two methods together are working well for us to understand our signage effectiveness. Obviously, as we're not actually marketing a product, our methodology is quite different from analytics on product purchases/uptake.
Measuring the performance of viewer response to digital signage is extremely new in our specific transportation facility. Prior to the recent installation (within the past 30 days), all of our digital signage was directional, informational or advertising-focused, with no built-in ability to track performance or individual viewer response. For this legacy-style signage, we relied primarily on surveys to identify our typical passenger demographic, and viewer responses were either anecdotal in nature or contained in direct feedback from users of the facility.
Our current interactive signage, however, has both metrics and analytic monitoring built into the software operating system. This software will allow us, as the facility owners, to track usage of each individual sign, as well as the features built into each sign. Since we are just wrapping up the installation phase, we have not yet had the chance to collect and review the data. Our intention is to use this data and reporting capability to learn how users are actually engaging with the interactive signage. We are open to providing follow-up after several months of data have been collected.
Measurement is a hot topic because it is such a tricky answer. The truth lies in what the campaign was intended to do holistically and whether digital out-of-home delivered its part. The clearest answer is direct-response campaigns, where measurement runs from the more traditional 800 number, SMS or Web hits to newer forms of response such as NFC. Campaigns intended for awareness are a bit tougher to define and become more qualitative in nature. Media planners are pulling from other media delivery expectations such as static OOH, TV, digital and even print. The bottom line is: how valuable is the impact of the interaction with a consumer? A shopper? A buyer? Does that equate to time spent? Context? Content? Engagement? The most important question is always how the consumer received it; the consumer is your judge and jury.
We have used a number of methods and it mostly comes down to what specifically is the client trying to measure and whether it is being measured as part of a multiple media campaign or by itself. For instance, we have had clients with larger budgets conduct pre- and post-campaign intercept studies that measure a variety of different factors, including the perception of a particular media type. The problem with these types of studies when evaluating a media buy is that the media is only a contributing factor. The media may have been a perfect match to the target audience, but the creative message may have been poor; therefore, potentially providing negative feedback on the campaign's effectiveness that may get attributed to the media plan.
In cases where we are looking to directly tie the effectiveness of a media type, specifically in digital place-based networks, we may use a series of calls to action that are unique and therefore able to be tracked back to each network. These may include individual URLs, SMS short codes or venue-specific offers. At the end of the day, the most important thing to remember is to set very clear and realistic goals and measures for success before the plan is even created. This will ensure that everyone is working with the same objectives in mind, and from there the appropriate benchmarks can be implemented. The best research capabilities and resources in the world cannot effectively measure the performance of a campaign if some people are looking for incremental sales increases while the other group is evaluating brand awareness.
Lucas Peltonen, (for Dave Matera) OOH Pitch
In order to answer this question, one must define the vernacular being used. We consider "content" to be the programming that plays on the signage. The programming could be health-related specials, news, weather, sports or entertainment, etc. As an advertiser, however, we do not directly measure the performance of viewer response to the content, as that is not our main concern. Our focus is on how the signage will affect the brand advertising on the signage. We measure the viewer response to the ad spots that we place, not the programming content.
In fairness, programming content is a consideration before placing any media buy. The content must be strong, professional and contextually relevant. We as advertisers also look at the network's own studies on the subject to determine the effectiveness of the content. Once we have determined a network worthy and appropriate for our brand, we will actually consider an advertising buy.
Measuring the performance of viewer response to digital signage advertising can be a fascinating exercise. Pharma advertising in doctors' offices is a strong example. Pharma clients often focus on measuring the ROI of investing in doctors' office digital signage by determining the uplift in script-writing practice among doctors where the ads are playing. This type of measurement represents hard numbers and direct profits from advertising. Because the exact results can be measured — aligning perfectly with what the brand is seeking — this is perhaps the strongest type of digital signage viewer response measurement.
Another type of measurement is accomplished through the touchscreen and interactive networks that exist. These networks can measure interactions with the signage and capture data much in the same way that internet advertising can — and networks such as the jukeboxes can often have higher click-through rates than online ads. We have run campaigns that gleaned thousands of entries and hundreds of real-life leads through data capture for our clients to follow up with to continue the dialogue. While not as direct and tangible from a profit-measurement standpoint as the first example of ad measurement, this type of measurement certainly is robust and helps to continue conversation between advertiser and consumer.
The most common type of measurement comes with the majority of digital networks. This measurement comes from surveys taken by third-party research companies and tends to measure aided recall rates, unaided recall rates, intent to purchase, attitudes about the media, notice, etc. This type of survey measurement can also determine viewership numbers and basic demographics of those watching the media. While it doesn't measure ROI or interactions as accurately as the above methods, it still provides valuable information that otherwise would be lacking. Especially in instances when a brand wants to test the effectiveness of a network with the type of messaging that isn't directly ROI-driven or necessitates interactivity, such measurement can illuminate a lot of information on the viewer's response to the digital signage.
As is evident above, measuring the performance of viewer response to digital signage content is not as important to advertisers as the impact of the advertising on the consumer. The way it should be measured depends on 1) the brand's goals and 2) the type of network being used. One brand may be completely ROI-focused and less interested in viewership demos, while another may aim to capture viewership data and therefore needs to measure CTRs to determine the effectiveness of the campaign accordingly. While the medium's content should be measured to determine its impact on the viewer, the advertiser will weigh this information among a broader set of considerations when evaluating a media buy.
Recommending a universal metric for digital signage content suffers from the same challenge that the online video space is currently wrestling with. Namely, there is such a wide variety of types of content, levels of interactivity and user engagement that it can be difficult to unify under a single metric. Reach metrics like ratings points don't account for engagement and engagement metrics are difficult to translate to non-interactive media.
That said, for digital signage content that is not inherently interactive, we will typically look to reach metrics combined with intercept surveys to measure impact. In some cases, we can also link exposure to behavioral measures like foot traffic or even sales. We usually also try to incorporate some kind of interactive call to action — often via some sort of mobile mechanic like SMS. This gives us another way to look at the performance data, and it turns a medium that is not inherently interactive into something that is interactive and can be measured as such.
For custom installations or experiences, we use a wide variety of metrics based around the client's business objectives. Engagement metrics such as taps, time spent, navigation path, actions completed, etc. — all tend to become part of the story. Cameras can also enter the fray, enabling eye-tracking analysis and demographic profiling in some cases. And again, as with non-interactive signage, we can often link to post-engagement behavior metrics like mobile interaction, foot traffic or sales.
As with many things digital, there is no shortage of available data. The key is understanding how to filter and focus on what truly matters — and that is all about linking to the business objectives of the installation.
First, I feel it's imperative to measure a baseline of potential audience viewers and to that end, strongly recommend adhering to the Digital Place-based Advertising Association's Audience Metric Guidelines and Nielsen's 4th Screen Report, when applicable.
These guidelines offer a comparable means by which digital place-based networks can be measured and as such provide a common currency for the medium to be planned and bought. Once potential audience is defined then viewer responses to digital signage content can and should be measured. Clearly sales lift is one of the most desirable metrics used to measure response, but not all digital signs are at the 'point of sale'; nor is register/sales data always accessible. Further, it can be a challenge to isolate digital signage from an overall media campaign to identify its impact.
Mobile and social are quickly becoming key measures of consumer engagement and effectiveness for digital signage. Consumers are texting and tweeting directly to the screens and even uploading photos through social media. Measuring these mobile opt-ins can help marketers discern what content is resonating with the consumer and offers a means by which programming, content and advertising, can be optimized.
Mobile and social engagements do not 'stop at the sign.' Digital signage is inherently a location-based medium and as such, social chatter around coded inventory can be mined and serve as an added metric. What's more is that social media can also be used as a barometer to measure 'sentiment' toward content and as such allow marketers to capture how consumers are perceiving the brand and thanks to digital technology respond in near real-time.
While I don't feel that mobile and social engagements and chatter should be the only measures used to ascertain digital signage response, I do feel they have been underutilized. Leveraging the immediacy of mobile and social can offer the brand or a network operator the opportunity to respond 'early' in a campaign, as opposed to waiting for a post-buy analysis at its conclusion. Conventional means of measuring consumer response are also still relevant, such as using intercept studies et al. to measure purchase intent and more, but these are often costly and time-consuming, limiting a brand's ability to 'act' when relevant.
We should also be mindful that not 'everyone' engages via mobile or social as part of their purchase funnel, and response to digital signage need not be 'immediate' to prove its importance as part of an effective medium used to reach consumers. It is, however, proving that it influences consumer purchasing decisions and is living up to the catch phrase used to describe the medium: "at the right place, at the right time."
This is a very interesting and subjective question, because historically, digital signs are a one-way medium. To collect viewer response, you need to add a feedback channel. These channels can be in many forms, e.g., interactive display, Near Field Communications data collection, or offline feedback via some form of survey instrument. Due to the nature of a digital sign being a one-to-many form of communications, measuring viewer response can be difficult!
Signs that are deployed within the reach of the viewer can offer some form of immediate interactivity that can easily measure viewer response. The easiest is to have some form of input device (e.g., keyboard, touch panel or NFC). With the 'right' software, you can collect subjective information, and/or indirect information such as viewing time! The majority of digital signs DO NOT lend themselves to this form of viewer measurement.
What I recommend is an "incentive"-based response collection, which DOES NOT require proximity or physical interactivity. It requires that within the message of the sign an offer is included that can be redeemed on a website, or at a business, for a free item, a discount or other form of incentive payment. AND, in exchange for responding to the sign's incentive, you will have a measurement of viewers who are interested in the content displayed (best for restaurants, retail, et al.).
In most digital sign deployments (due to the fact of the one-to-many distribution) it is difficult, if not impossible, to count the pairs of eyes, the duration, or the comprehension of the message of signs that have been viewed!
To be able to measure viewer response, building in a "feedback channel" must be a part of each digital signage installation. The future offers us many novel approaches. Within NFC, I like the ability to communicate to a viewer's smartphone via Wi-Fi, Bluetooth, or the next great, pervasive RF technology!
Measuring viewer response is a truly difficult task, because before you can measure it, you must establish metrics. The concept of simply verifying viewers as "impressions," even when capturing statistical data about the viewer using recognition software, is only part of the solution. Today, software like Intel's AIM system offers comprehensive analytics, but we have to go beyond impressions to investigate how much the message "sticks" with the viewer and whether it left an effective call to action.
I know this is not strictly a direct answer, but I think it is a bigger challenge facing us. Enjoy!
The focus on communications and business goals, which drives the use of dynamic place-based media, also drives the impact analytics on which investment and optimization decisions are made. Viewer notice and watch times offer excellent baseline information, and anonymous viewer analytics offer a cost-effective basis to capture these insights.
The more tangible the measure of business value and effectiveness can be, the better. Measures such as sales lift, basket/order size and product/service enquiries are critical to investment validation of dynamic signage. Counts of traffic to unique websites, and mobile engagement including opt-ins, downloads and mobile commerce, illustrate the degree to which the objectives of leveraging multiple communications devices are achieved.
Given the importance of messaging for branding, and for indicating increased propensity to try or buy in the future, unaided and aided recall of messages has proven an important and valuable metric. Exit interviews offer these message awareness insights and can also be used to capture demographic and life pattern information as well as other viewer perceptions, including "reduced perceived waiting time" and impact on the environment.
A comprehensive whitepaper titled "Digital Place-based Media ROI Analytics - Defining Value. ROI or Die!" is available for free download at http://www.lylebunn.com/Pages/aboutus.aspx.
THE EXIT INTERVIEW — For the type of digital deployments I've been working on over the past few years, exit interviews regarding recall, awareness, plan to purchase and influence on decision making have been the most effective in collecting targeted data for analysis.
Research options are wide-ranging for digital signage networks. I favor primary research approaches because you have more control over what is being measured and can craft the research methodologies to obtain the information that will be most useful to you. Some examples include:
- Exit interviews — asking a series of questions of customers as they leave the venue where the digital signage is located to ascertain, for example, whether they noticed the screens and what aspects of the content they can recall.
- Focus groups — bringing together a group of customers in a more formal setting to ask questions, for example, about what they saw, what they recall about the messages, what they liked and disliked.
- Anonymous video analytics — cameras built into digital signs gather anonymous data about the gender, age and race of people watching the sign and also can measure how long an individual watched the screen.
- Video observation — hidden cameras track customer behavior and provide insights into how they interact with digital signs and what actions they may take after seeing the signs.
Secondary research, in which you analyze data collected for other reasons, can also be valuable. For example, one could analyze POS data to determine if a content playlist that is set up to appeal to certain audience segments or to sell certain products during specific times of the day has the desired effect on sales. You can take this a little further by establishing paired sets of control and test sites with similar characteristics and deciding specific metrics to be tracked, such as sales lift on products promoted on digital signs.
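As a rough illustration of the paired test/control approach described above, the sales-lift arithmetic might look like the following sketch. Every figure and the normalization scheme here are invented assumptions for the example, not data from any actual deployment:

```python
# Hypothetical sketch: computing sales lift for a promoted product across
# paired test (signage) and control (no signage) sites, each normalized
# to its own pre-campaign baseline to strip out site-level differences.

def pct_sales_lift(test_sales, control_sales, baseline_test, baseline_control):
    """Lift of test sites over control, each indexed to its own baseline period."""
    test_growth = test_sales / baseline_test
    control_growth = control_sales / baseline_control
    return (test_growth / control_growth - 1.0) * 100.0

# Weekly POS unit totals for a promoted SKU (illustrative numbers):
# baseline week (before campaign) vs. campaign week.
lift = pct_sales_lift(test_sales=1320, control_sales=1010,
                      baseline_test=1000, baseline_control=1000)
print(f"Sales lift attributable to signage: {lift:.1f}%")
```

Indexing both cells to their own baselines is one simple way to control for pre-existing differences between the paired sites; seasonality and promotions still need to be matched across the pairs.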
We like what face pattern detection technology offers. While much of the marketing spin has been around its ability to dynamically serve ads based on demographic profiles of the viewers, what's more interesting, and not hooked just to advertising, is how this technology delivers ongoing analytics on how long viewers look at individual pieces of content and whether that changes by things like location, time of day, or gender and age range. You can start to tailor creative and message timing to those dynamics. If people only look at a screen for 4.3 seconds on average, maybe 5-second spots make more sense than 15-second spots.
Performance measurements and understanding consumer response to your efforts are essential to meeting your goals. But the type and depth of measurement you employ are dependent upon those same goals. Depending upon the depth of your strategy, you may need more or less understanding of the consumers' takeaway.
For instance, if employing digital signage is simply a cost-saving measure, or an attempt to modernize and streamline your merchandising efforts, you are likely using the same merchandising strategy and tactics as employed in your pre-digital methods. [We won't discuss here how short-sighted those goals can be.] If that is the case, little research is necessary.
If you intend to use digital signage to upgrade and enhance consumer response or feedback, understanding how consumers view your content is absolutely imperative. The deeper you dig, the richer your measurements will be. The deeper you dig, the more time and money you will expend. The ideal scenario — and this requires much advance planning time and expense, both of which are generally in too short a supply — is to conduct both quantitative and qualitative research on the impact of the content prior to its being deployed. This means producing the proposed content and exposing it to a representative sampling of the consumer base (focus groups, etc.) to determine their reactions (e.g., does the content affect current or future purchase intent positively?). Implicit in this type of research is the fact that a negative response will require modifying or possibly trashing the content prior to its actual deployment. This method is used in packaged goods to preview TV commercials.
If this scenario is "too rich", then a quantitative study after the fact will permit you to modify future efforts to reach your goals. Again, the depth of the research must match your stated goals. Consumer perceptions and attitudes are constantly changing, so develop the research to attain the answers you seek. Once upon a time, simply asking the aided question, 'What did you like about our digital signage?' was sufficient. Today we need to ask a more un-aided question like, 'Was there anything unique or different about the environment of the (store/location)?' to elicit a usable response.
I recommend steering clear of relying on comparative sales data as your only tool to determine digital signage effectiveness. There are far too many variables that must be considered or eliminated for sales comps to provide more than a modestly indicative arrow toward the success range of any merchandising content, digital or otherwise. Sales data must be overlaid on consumer perception data so that the entire picture is revealed.
As with most questions surrounding digital signage, the answer is: it depends. It all starts with your communication goals and the associated content strategy. Once these two cornerstones are agreed upon by all of the relevant stakeholders and become firmly established, their associated quantitative success metrics must also be identified and defined. Note that I said quantitative! Since 1997 I've been involved in many retail/brand digital signage initiatives, and in every case, 'enhancing the customer experience' has been one of the top three project drivers. But guess what! In every single case, this business driver fell on the editing room floor when the project sought to secure funding for a broader rollout. Why? Because the investors and CFOs will only pay attention to the numbers! Will we sell more stuff? Do we make more than our internal rate of return? (And it had better be quite a bit higher because of the perceived risk.) How many advertisers, and at what rate? These are all questions that investors and executives will ask.
Quantitative evidence is the best way to enhance your business plan. So what quantitative measurement methods are available? Depending upon the aforementioned communication goals and the associated content strategy, they could include:
- Number of direct interactions or touches if it is a touchscreen interface.
- Number of QR codes scanned.
- Number of times a website is sought based upon a call-to-action.
- Number of viewers measured by video analytics.
- Number of promoted products sold (that can be isolated to digital signage!)
- Number of potential viewers based upon Arbitron, Nielsen or Scarborough reports (although these provide a weaker case because they are typically not based upon quantitative evidence).
- Number of ???
- Do the math because it's always going to come down to a trusted number derived from quantitative evidence.
Philip M. Cohen
Operating an advertiser-supported network, we need to consistently demonstrate the efficacy of Care Media TV networks to brand management and agency media buyers. The fundamental measurement data we use is unaided vs. aided brand/product recall. When you have to prove your worth to win a share of media marketing dollars, you have to show that your content platform is engaging customers and that brand awareness is being achieved. Network operators must not only compete against other digital signage networks but, more so, must prove the worth of digital out-of-home against other forms of media. I would also highly recommend using a well-known research company, such as Arbitron or Nielsen, to enhance validation of the results. Brands and agencies are much more willing to invest in a network that has proven data and metrics reported by a reputable third-party source.
Methods of measuring viewer response to content include texting campaigns, "leave-behind" surveys (survey instruments left near a screen with a sign asking viewers to respond), intercept studies and eye tracking technology. Generally speaking, the two former methods are inexpensive, but they're also unreliable and provide questionable results. These tactics rely on respondent self-selection, meaning the sample is based on people who proactively decide to participate. You're likely to elicit responses from only those viewers with very strong opinions. Intercept studies rely on random sampling, and we have used them several times to measure reaction to specific ads as well as our content. These surveys are time-consuming and costly, but the results are more reliable.
The company you hire to conduct the study will select a representative sample of your locations and then deploy field researchers to each location to survey viewers as they leave the viewing area. Intercepts have become an accepted research method among many digital out-of-home advertisers. Eye-tracking is a relatively new way of measuring viewer engagement. Of course, in order to use eye-tracking, you have to have cameras and the associated software deployed at your sites. Eye-tracking will tell you how long viewers watch specific pieces of content and other data including audience demographics. Eye-tracking is as close to Web analytics as DOOH can get right now.
There are two main roadblocks that prevent digital place-based media from truly taking off with advertising agencies — fragmentation and proving that the media really does work. To prove performance, there are two types of metrics, subjective (intent) and objective (unit sales lift), that justify an ad spend in the mind of a marketer. While we would all love to be able to show unit sales lift for every campaign, like most other traditional advertising channels (TV, radio, etc.), the digital out-of-home channel relies on metrics that are slightly further "up funnel." Some of those metrics include ad recall and intent-to-purchase, both aided and unaided. To capture these metrics, a network operator is challenged with conducting studies in a timely, cost-efficient and reliable manner. Most of these requirements conflict with one another, so at IZON Media, we mix it up.
We measure performance in two ways: audience/viewership studies for each IZON Media network, and advertising effectiveness studies for specific campaigns. We conduct regular, comprehensive audience/viewership studies that capture both quantitative and qualitative metrics, but at a more aggregate level. On top of that, however, we intersperse numerous custom ad effectiveness studies that capture brand/campaign-specific data that augments the aggregate data points to reinforce the performance of our networks.
Specifically for our viewership studies, we use leading companies who specialize in audience measurement, e.g. Arbitron and Nielsen, who use a store intercept methodology, common for this kind of research.
For the custom ad effectiveness studies, online is the common method we embrace. Online methodology is used not only for digital place-based media but also for measuring other media channels. With this common methodology, the opportunity to compare IZON Media campaign performance with other media channel performance is increased. Additionally, online is less expensive, more efficient (pre/post design is not needed) and not encumbered by blackout days (e.g. during the holidays where retailers ban store intercepts). Commonly used experimental design methodology is used to create matched test/exposed and control/not exposed cells. Campaign impacts (difference between test and control) are measured on key, common brand metrics throughout the purchase funnel (unaided and aided awareness, brand opinion and perception, likelihood to recommend, and purchase intent).
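The matched test/control cell comparison can be sketched in a few lines. This is an illustrative example of the general technique (a two-proportion comparison on a yes/no brand metric), not IZON Media's actual methodology, and all counts are invented:

```python
# Compare a yes/no brand metric (e.g., aided awareness) between an
# exposed cell and a control cell, and compute a z-score so the lift
# can be sanity-checked for statistical significance.
import math

def campaign_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Return (lift in percentage points, two-proportion z-score)."""
    p_e = exposed_yes / exposed_n
    p_c = control_yes / control_n
    p_pool = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
    return (p_e - p_c) * 100.0, (p_e - p_c) / se

lift_pts, z = campaign_lift(exposed_yes=230, exposed_n=500,
                            control_yes=180, control_n=500)
print(f"Aided awareness lift: {lift_pts:.1f} pts (z = {z:.2f})")
# A z-score above roughly 1.96 suggests significance at the 95% level.
```

The same difference-of-cells calculation applies to each funnel metric (awareness, brand opinion, likelihood to recommend, purchase intent).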
At the end of the day, the more we all can build a solid norms database to consistently provide generally accepted performance metrics that brands and agencies will accept, the better. Metrics like brand awareness, brand perception, and emotional impact mean almost as much as sales lift, and those learnings can be used to tailor future campaigns.
Outside of awareness campaigns, we really push our clients to utilize the dynamic capabilities of the medium. For example, we did a campaign with Target Pharmacy that displayed the pollen count in real time. The idea was to drive moms to Target to purchase allergy medicine when the pollen count was high. Another proven method is the use of social media, whether it's using a hashtag (#) or streaming live tweets. FOX Broadcasting did a campaign using X Factor that rotated different creative featuring the judges with hashtags such as #Brittany; from this, FOX was able to track the number of tweets that came in that week for each judge on the digital signs. We continue to push to find new ways to track digital campaigns. We currently have a study with Duke and Google that will help prove the success of particular digital out-of-home campaigns.
We have tried various methods to measure performance of viewer response to our digital signage content and the method that has yielded the best results is mobile opt-ins. We are now at a level where our content engages viewers and motivates them to opt in to our mobile network to receive educational information on the topics of their choice. We tested slides, text-to-action videos, and sponsored content integrated with mobile opt-in information to build up our subscriber base.
In comparison to offering rewards to people who watch your content and respond by following you via social media, mobile opt-ins allow you to track which content models are performing better than others, and the use of different numbers and/or keywords helps you track where the viewers are coming from.
On the backend, you are building up a database of subscribers who have actually watched your digital signage content and you now have the opportunity to engage them with digital signage content on their phones even if they never come across your screens again.
Two tips if you decide to incorporate a mobile marketing platform. First, there are various solutions out there to help you integrate mobile into your digital signage network. Do your research and choose the platform that best fits your needs. Second, most mobile platforms use short codes because they are easier to remember. We use full numbers with a local area code because we felt short codes were too commercial and we were concerned about building brand trust. Consider this if your audience spends enough time to capture the full number.
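The keyword/number attribution described above amounts to a simple tally. A minimal sketch, with made-up inbound numbers and keywords standing in for real campaign codes:

```python
# Hypothetical opt-in log: each record carries the inbound number (which
# screen/location) and the keyword (which piece of content) that drove it.
from collections import Counter

optins = [
    {"number": "555-0101", "keyword": "LOBBY"},
    {"number": "555-0101", "keyword": "LOBBY"},
    {"number": "555-0102", "keyword": "GYM"},
    {"number": "555-0101", "keyword": "MENU"},
]

by_keyword = Counter(rec["keyword"] for rec in optins)
by_number = Counter(rec["number"] for rec in optins)
print(by_keyword.most_common())  # which content model performed best
print(by_number.most_common())   # which location/screen drove the opt-in
```

Because each keyword maps to one content model and each number to one location, the two tallies separate "what worked" from "where it worked" without any extra instrumentation.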
We use outside research companies (Edison, Nielsen, etc.) to measure our audience engagement with the screens, mostly to supply potential advertisers with a deeper understanding of how the viewer engages with our product. We also utilize third-party research providers to conduct pre/post campaign effectiveness studies to understand what happened as a result of the campaign running on our network.
We use three different methods:
Text to …
PROS: It is easy to measure how many people actually responded to the ad and you can later interact with responders.
CONS: It's relative and does not necessarily show the true impact of the campaign.
You have to compare response generated by your network against other media where the campaign was published to show true response rates.
PROS: You get paid based on what is sold through leads coming from your network, you use unsold inventory, and there is no risk for the advertiser.
CONS: Infomercial know-how was developed for TV; in digital out-of-home it is still developing, and it is challenging to generate leads.
Experiment as much as you can.
PROS: CPG advertisers love this proof, and it is a no-brainer; we have yet to see a CPG product that advertised with us that has not gotten at least a 25 percent sales lift during the campaign.
CONS: You have to be able to get from the stores the number of boxes sold during the campaign period.
You will need to have sales data from the stores to produce a report that will show the results.
Store exit surveys:
PROS: It is easy to conduct, it is based on traditional research methodology, and most agencies understand the reports; in some cases they will even provide the questions to ask.
CONS: Doing it yourself is fine, but some agencies will require a third-party auditor such as Nielsen or Arbitron, and they are expensive to hire.
Zoom Media is not measured through a syndicated database, but is available through Nielsen IMS media planning tools. ZoomFitness is available within Clear Decisions, a system that is very easy to use and available on the desktops of more than 4,000 planners. The metrics available are common audience metrics requested in media plans. ZoomFitness within IMS Clear Decisions is available to those that subscribe to this Nielsen media planning tool.
We have used a number of ways to measure viewership and response over the years. One method was hiring Nielsen to audit and do exit polls at the locations. This was mainly used to gather the necessary data for advertisers, particularly before digital signage was really an industry (Blockbuster TV in the '90s). Since we mainly focus on in-store networks, we like to get sales data, or sales-lift reports based on what we are promoting. We have done this with almost every network we have worked on, including Blockbuster, where the average sales increase was 15.6 percent. We have measured other networks as well, and have had sales increases from 4 percent to more than 500 percent. I have no doubt that shoppers in our customers' stores see our content, and based on the sales reports, it's pretty clear our content has "moved the needle," as we like to say. Our latest test was with Mobil Oil, and we had a 54 percent sales increase, with the only difference in the company's marketing being inclusion of a promotion on their in-store network.
We measure the performance of digital signage by leveraging the best of "bricks and clicks." Several of the larger deployments we work on have touchscreens as part of the overall solution. As soon as interactivity is added to the mix, you are able to measure the "clicks" as you would with more conventional website activity. We look at how much time a viewer spends interacting with the screen and how deep they go. This information is very valuable in determining what's working and what needs adjusting.
To supplement this information we have used intercept interviews within the "bricks." We work with third-party research firms that conduct the in-store interviews based on information our clients and our own team are seeking. During the interview, if it is determined that the interviewee saw and digested the digital signage message, we like to follow up with "did you notice the technology?" An ideal result reveals that the audience is seeing the message but not distracted by the technology delivering it — this is a good thing.
From an industry best practices perspective, I really appreciate the analytics that are applied to the Walmart Smart Network. Seattle-based DS-IQ developed the complex software that measures point of sales data against the network playlist. Through careful analysis, they are able to determine the effect an advertising spot has on actual store-by-store sales. The solution is even sophisticated enough to allow for A/B testing of different content approaches for the same product. The solution DS-IQ created works in large part because of the very large sample size they have to work with through the Walmart network. We would like to see other networks adopt similar measurement capabilities.
Metrics remain one of the Achilles' heels of digital signage. Accurately gauging success starts with having measurement tools that match your objectives. Not every tool is high-tech: A simple yet effective exit survey can measure whether your signage improves the customer experience. Your own POS data can reveal whether the digital signage might be having an impact on sales. However, if ad revenue is one of your goals, then you will need to measure impressions, possibly using a system that uses facial recognition technology to track customer eye movement. Ultimately, shopper path measurement might be the best all-purpose tool, since it uses the shopper's mobile phone to track their path through the store, record points of engagement and provide that data to retailers.
The method by which content effectiveness should be measured will vary depending on the environment you are designing for. Retail has different objectives than financial institutions or even museums. For example, projects we undertake in public spaces have a wayfinding emphasis and require us to be a bit creative in how we measure. In this scenario we conduct a benchmark analysis against the visitor issues identified prior to the start of the project. We look for improvement over these benchmarks and study the post-effects on visitor satisfaction.
In retail environments we have worked with groups like Arbitron and PrimalGrowth to initially study the consumer and their impression of the brand and their shopping experience. This preliminary analysis helps create a benchmark for the project. We create content to solve the problems identified and through post research can determine changes in consumer behavior or sales.
It's important in any project to conduct an initial study and clarify which issues will be emphasized. It becomes easier to determine success when you have something to compare against. From performance data to historical sales figures, all of that should be documented before any changes are made within the environment.
Measuring response at the time of experience is ideal for obvious reasons (you've all heard of the television surveys that indicate everyone watches mostly PBS and Discovery Channel, right?). Methods for immediate or near-immediate measurement include on-site observation, intercept studies, eye-tracking and facial recognition. Online surveys, with a pre-qualified viewer sample group, are the next best thing to immediate measurement. Those techniques can be used to measure viewership, engagement, brand recall and attitudinal response. Results can inform content strategy, creative optimization, and the setting of CPM for ad-based networks. Marketers often want to measure sales lift associated with digital signage. Sales lift is an appropriate performance measure when the content plays in a retail location next to the product being advertised, at shelf level. For other digital signage locations, it's best to measure performance with a combination of methods based on the overall network goals.
Awareness, engagement, recall and audience behavior are the priority data points that our ad-supported network clients measure in order to be accountable to media buyers, as well as to differentiate the network from other media. On-location intercept surveys are primarily used for obtaining both quantitative and richer qualitative metrics that demonstrate statistically how an ad and/or the content performs. Deliverables can include media effectiveness, audience insight, screen notice, dwell time, message recall, purchase intent and sales lift. We always recommend using a reputable third-party research company to help define the objectives and develop the overall survey plan. In addition to being able to collect and compile the data, it lends credibility to published results.
While there is pressure to prove the ROI on a digital menuboard, it is difficult to isolate the sales impact of digital signage in a QSR environment because there can be many unrelated factors that contribute to an increase or decrease in sales.
Measuring the effectiveness of digital content by isolating transactions has provided the most reliable data. We are testing software that can identify what content was playing just prior to a customer purchase. If effective content results in a customer purchasing a "sale" or low profit margin item, sales may decrease. It is important to populate the playlist with up-sell or high profit margin products in order to increase the overall transaction. We are also testing different designs for the same promotion in order to determine how much the design affects purchase behavior.
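The core of the content-to-transaction attribution described above is matching each purchase timestamp to the playlist entry on screen just before it. Here is a minimal sketch of that matching step (not the vendor software being tested; the log entries and content IDs are invented):

```python
# Attribute each transaction to the content item playing at (or just
# before) the purchase time, given a playlist log sorted by start time.
import bisect
from datetime import datetime

playlog = [  # (start time, content id), sorted ascending; illustrative
    (datetime(2024, 5, 1, 12, 0, 0), "combo_upsell"),
    (datetime(2024, 5, 1, 12, 0, 15), "dessert_promo"),
    (datetime(2024, 5, 1, 12, 0, 30), "combo_upsell"),
]
starts = [t for t, _ in playlog]

def content_before(purchase_time):
    """Content item that was on screen when (or just before) the purchase."""
    i = bisect.bisect_right(starts, purchase_time) - 1
    return playlog[i][1] if i >= 0 else None

print(content_before(datetime(2024, 5, 1, 12, 0, 20)))  # dessert_promo
```

Aggregating transaction values per attributed content item then lets you compare average ticket size for, say, the up-sell spot versus the discount spot, which is the comparison the QSR test above is after.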