This collection of motion infographics from Bloomberg is pretty amazing. Each takes a single, complex issue and explains it using a brief, animated infographic. Beyond simply being a visual expression of data, each video tells a story, leaving the viewer with a full understanding of the issue at hand. Granted, not everyone has the expertise (or budget) to employ motion infographics, but there are little lessons to be learned in each. Enjoy.
For roughly the past five years, the U.S. Census Bureau Center for Economic Studies has supported an online system for pulling area-based employment and residence data using a visual, map-based selection tool called OnTheMap. The software is fairly intuitive and fun to use, but it can also be quite useful in exploring a specific market or region to understand where workers live and work, and how that has changed over time.
OnTheMap is useful for more than work location, however. It’s a multi-layered mapping tool, with companion data on demographics, earnings, and industry characteristics. We’ve also used it to identify exact metropolitan statistical areas and radius ranges, to find transportation routes, greenspace, and tribal and military lands, and simply to better understand a physical marketplace.
For years, organizations like the Census Bureau relied heavily on point-in-time estimates, tables of statistics, and static printed maps for this kind of data exploration. As new systems come online, are developed further, and improve over successive versions, our ability to access and explore information from our desktops keeps expanding.
Okay, this one’s a little obtuse… 🙂 Check the article too, whew!
Steve and I have been exploring the online reference site, The Book of Odds. Some of the site’s key functionalities are still in Beta, but for over three years they’ve been compiling odds to create a large database of “the odds of everyday life.” You can sign up for free and provide a little profiling information to begin exploring statements of probability related to your profile, or to anything you want to look up.
The idea is to explore the odds of something happening, and then to calibrate the probability in a comparison. If the topic you explore is included in the database (the four main current topic portals are Health & Illness, Accidents & Death, Relationships & Society, and Daily Life & Activities), you’ll get confirmed probability data on that topic, but you’ll also get leads on unexpected connections, as you compare unrelated events by their likelihood of occurring.
The site also has social and learning functions, plus content beyond the odds database (newsletters, blogs, related links, etc.). We’re just getting started exploring this resource, and brainstorming about how we can apply it to our day-to-day reference needs. It’s actually pretty challenging to think about life in terms of probability statements, even just thinking up queries to get started. But once you dig into the site, there’s quite a bit to learn – not only the small bites of data, but how to calibrate probability, and new approaches to classifying and comparing phenomena.
At what price would you consider this product to be cheap?
At what price would you consider this product to be expensive, but still worth considering?
At what price would you consider this product to be priced so cheaply that you would worry about its quality?
At what price would you consider this product to be too expensive to even consider buying it?
These four very direct and intuitive questions form the basis of the Van Westendorp pricing exercise – a quantitative research technique that can actually yield robust and compelling data reflecting consumer demand. We’ve been thinking about the wide variety of quantitative analytical techniques we use in our work, and thought we’d provide a quick overview of this one.
The Van Westendorp pricing exercise is a price sensitivity measurement devised by the Dutch psychologist Peter van Westendorp. The technique uses four questions about a product or service (drafted more or less like those above) and requires the respondent to gauge prices that are too cheap and too expensive in the context of the product or service’s offerings and perceived benefits.
Cumulative frequency distributions of the responses to these questions are derived and plotted, yielding the range of pricing options for the product. As the final step in this process, purchase intent is measured at the highest and lowest prices in that range. The optimal price (i.e., the price that maximizes market share while generating the highest possible revenue) can then be computed, along with the precise range of acceptable pricing.
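To make the mechanics concrete, here’s a minimal sketch in Python (using numpy and made-up answers from six hypothetical respondents, so the numbers are purely illustrative) of how the four responses can be turned into the cumulative curves that get plotted:

```python
import numpy as np

# Hypothetical responses, one answer per respondent, in dollars.
# (Made-up data for illustration; a real study would have hundreds.)
too_cheap = np.array([5, 8, 10, 12, 15, 20])    # "so cheap I'd worry"
cheap     = np.array([10, 12, 15, 18, 20, 25])  # "cheap"
expensive = np.array([20, 22, 25, 28, 30, 35])  # "expensive"
too_exp   = np.array([30, 32, 35, 40, 45, 50])  # "too expensive to buy"

prices = np.arange(0, 51)  # whole-dollar evaluation grid

def share_at_or_above(answers, grid):
    """Share of respondents whose stated price is >= each grid price."""
    return (answers[:, None] >= grid[None, :]).mean(axis=0)

def share_at_or_below(answers, grid):
    """Share of respondents whose stated price is <= each grid price."""
    return (answers[:, None] <= grid[None, :]).mean(axis=0)

# By convention, "too cheap" and "cheap" fall as price rises, while
# "expensive" and "too expensive" climb as price rises.
curve_too_cheap = share_at_or_above(too_cheap, prices)
curve_cheap     = share_at_or_above(cheap, prices)
curve_expensive = share_at_or_below(expensive, prices)
curve_too_exp   = share_at_or_below(too_exp, prices)
```

Plotting all four curves against price reproduces the classic Van Westendorp chart.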
The data points on the example chart are plotted a little loosely, but the point at which the Too Cheap and Too Expensive curves intersect is considered the Optimal Price Point (OPP). The intersection of Expensive and Too Cheap yields the Point of Marginal Cheapness (PMC): at this price point, the number of people considering the product too cheap equals the number considering it expensive.
The intersection of Cheap and Too Expensive yields the Point of Marginal Expensiveness (PME): at this price point, the number of people who regard the product as too expensive equals the number who regard it as cheap. The range from PMC to PME is the Range of Acceptable Prices (RAP), or the Optimal Price Band.
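Continuing the sketch above, the named price points fall out of where those curves cross; the `crossing` helper below is our own rough approximation for a discrete grid, not part of the formal method:

```python
# Each named point lies where two curves cross. On a discrete grid the
# curves may touch over a flat span (especially with sparse toy data),
# so we take the midpoint of the span where the gap is smallest.
def crossing(grid, curve_a, curve_b):
    gap = np.abs(curve_a - curve_b)
    flat = np.flatnonzero(gap == gap.min())
    return grid[flat].mean()

opp = crossing(prices, curve_too_cheap, curve_too_exp)    # Optimal Price Point
pmc = crossing(prices, curve_too_cheap, curve_expensive)  # Point of Marginal Cheapness
pme = crossing(prices, curve_cheap, curve_too_exp)        # Point of Marginal Expensiveness

print(f"OPP ~ ${opp:.2f}")
print(f"Range of Acceptable Prices (RAP): ${pmc:.2f} to ${pme:.2f}")
```

With the toy numbers above, this puts the OPP at $25 and the acceptable range at $20 to $27.50 – again, illustrative only.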
We also conduct pricing studies using conjoint and discrete choice designs, but the Van Westendorp method is the most efficient way to evaluate price sensitivity itself: the resulting data are easy to interpret, identify an entire range of acceptable price points, and provide a solid basis for assessing future pricing strategies, helping ensure that the optimal price-value balance is established. Contact us if you’d like to learn more about this research technique.