The Vendor Comparison report was published nearly two weeks ago, so it’s worth reporting a bit on its reception.
In terms of popularity, the report was an overwhelming success. There were more than 200 downloads the first day and the total is now close to 500. It’s already the second-most popular item ever and will likely overtake the leader (the Industry Update published more than a year ago) within the next week.
Comments on the substance of the report have generally been enthusiastic. There’s been some sniping by vendors about ratings given to their competitors – possibly justified in some cases, although in the specific instances we’ve reviewed so far, the existing ratings seemed correct. Some vendors have also argued for changes in their own ratings. This has yielded a few changes but, again, mostly confirmed the original choices. Now that the list of questions is set, it will be easier to ensure we get answers to all of them when doing our own vendor research, which should improve the accuracy of the published report over time.
There have also been some complaints that the Yes/No approach lacks depth and nuance. I agree. But we accepted those limits because the Yes/No approach seemed better than the alternatives. This was discussed in several blog posts before we published the report, so I won’t rehash the subject. I’ll simply point out that I’ve tried the other methods in the past, including detailed numeric ratings (the VEST report on B2B marketing automation systems rated 200 features as complete, partial, or missing) and extensive narrative descriptions (the original Guide to Customer Data Platforms in 2013 gave detailed answers to eleven questions per vendor). Both methods are highly labor-intensive, and neither seemed to give users what they needed. After publishing reports like this for twenty years, my fundamental conclusion is that the best any vendor comparison can do is help buyers build a list of vendors to explore. Buyer needs are too varied for a general report to answer their specific questions in advance.
The core philosophy behind the report was that buyers should look for the features they need. Still, human nature being what it is, it’s inevitable that people will count the Yes answers for each vendor and treat the result as a ranking.
The notion of such a ranking is fundamentally flawed: users have different needs, so it’s impossible to create a list of “best” or “leading” vendors that ranks them for everyone. Feature-based rankings have the additional problem that they reward systems with the most features. I’ve countered this in the past by noting that unnecessary features add cost and complexity and therefore reduce value. In fact, my VEST report deducted points for unnecessary features when ranking systems numerically. We did this by building separate rankings for different user types, with different feature weights for each ranking.
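To make that mechanism concrete, here is a rough Python sketch of per-user-type weighted rankings. The vendors, features, ratings, and weights below are entirely invented for illustration; the actual VEST weights and scales were different.

```python
# Hypothetical VEST-style ranking sketch: the same vendor ratings produce
# different rankings under different user-type weight sets.
features = ["email", "reporting", "api", "flow_builder"]

# Invented ratings: 1.0 = complete, 0.5 = partial, 0.0 = missing.
vendors = {
    "VendorA": {"email": 1.0, "reporting": 0.5, "api": 1.0, "flow_builder": 0.0},
    "VendorB": {"email": 1.0, "reporting": 1.0, "api": 0.0, "flow_builder": 1.0},
}

# Each user type weights features differently; a negative weight deducts
# points for a feature that adds cost and complexity without adding value
# for that buyer type.
weights_by_user_type = {
    "small_team": {"email": 3, "reporting": 1, "api": 0, "flow_builder": -1},
    "enterprise": {"email": 1, "reporting": 3, "api": 2, "flow_builder": 2},
}

def rank(user_type):
    """Return vendors sorted by weighted score, highest first."""
    w = weights_by_user_type[user_type]
    scores = {v: sum(w[f] * r[f] for f in features) for v, r in vendors.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With these made-up numbers, `rank("small_team")` puts VendorA first while `rank("enterprise")` puts VendorB first, which is the point: the “best” vendor depends on whose weights you use.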
I’ve modified the Vendor Comparison introduction to suggest something similar: if you simply must rank vendors numerically, then assign one point for each Yes on a feature you need and subtract one point for each Yes on a feature you don’t need. As a bonus, this forces you to think about your own requirements. But, I repeat once more, the purpose of the report is to screen vendors, not to rank them.
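The suggested scoring is simple enough to sketch in a few lines. The vendor names, items, and Yes/No answers here are made up for illustration; only the +1/−1 rule comes from the report’s introduction.

```python
# Hypothetical Yes/No answers for two vendors (invented data).
ratings = {
    "VendorA": {"segmentation": True, "content_selection": True, "multi_step": True},
    "VendorB": {"segmentation": True, "content_selection": True, "multi_step": False},
}

# Features this particular buyer actually needs.
needed = {"segmentation", "content_selection"}

def score(vendor):
    """+1 for each Yes on a needed feature, -1 for each Yes on an unneeded one."""
    return sum(1 if feature in needed else -1
               for feature, has_it in ratings[vendor].items() if has_it)
```

Note the effect: VendorA answers Yes to more items, but its unneeded `multi_step` Yes costs it a point, so VendorB scores higher (2 vs. 1) for this buyer. That penalty is what keeps a raw Yes count from rewarding feature bloat.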
Engagement Use Cases
Many of the vendor questions about ratings related to the Engagement use cases: content selection, multi-step interactions, and real-time interactions. These are not core features of a CDP but several vendors felt they should be rated as providing them, especially after seeing that some other vendors were. This is an admittedly confusing topic, so I’ll do what I can here to clarify it.
Customer engagement is ultimately about selecting messages for individual customers. Six items in the report relate to it, and they form a hierarchy of capabilities:
– API/query access: this means data in the CDP can be read by an external system, which might use it as input to an algorithm to select a message. Although the CDP data might actually include something to indicate the appropriate message, this item doesn’t require it.
– Real-time access: this means CDP data for a single customer can be read by an external system in real time. This is almost always done through an API call, but not all APIs covered by the previous item will include this capability. It requires looking up an individual based on identity information provided by the external system and returning the result quickly enough to support a real-time interaction. As with the previous item, this is only about data access, not choosing messages.
– Segmentation: this means the CDP can extract a set of records for customers who meet a set of user-specified characteristics. It’s quite possible the characteristics will describe people who belong in a certain marketing campaign or should receive a particular message. But this item doesn’t require that the output specify a message, so any connection between the selection logic and messages is external. Note that every CDP in the report meets the segmentation requirement.
– Content selection: this is where we indicate that the CDP can decide who should get which piece of marketing or editorial content. Content selection requires awareness of which content items are available, what the qualification criteria are for each item (such as geographic or language constraints, product ownership, or status level), and the customer’s previous content history (used to exclude items already offered or consumed, to limit message frequency, to distribute selections among content categories, etc.). In other words, true content selection includes features well beyond the generic rules needed to select a list. Things do get murky here because it’s at least theoretically possible to create those functions with a generic rule builder. Informal criteria we apply in assessing this item include whether marketers are actually using the CDP for content selection, whether the system includes a content library, and whether there’s a content selection interface in place.
– Real-time interactions: this means the system can do content selection, as defined above, during a real-time interaction. It specifically requires accepting data during the interaction and using it to guide the content selection. This distinguishes it from real-time access, which means the CDP can return current data but doesn’t require incorporating new information in real time.
– Multi-step campaigns: this is defined very carefully as having “a user interface to set up a single campaign including a series of marketing messages for individual customers over time.” The real use case is delivering a sequence of messages over time, but any system with basic segmentation features can do that if the user is clever enough. We added “user interface” and “single campaign” to limit this to systems that are truly designed to run this particular type of campaign. There’s a good argument that such multi-step campaigns, usually entered through a branching flow chart interface, are a bad idea because they’re too hard to manage. I don’t necessarily disagree. But many users want to do things this way, so this item is intended to help them find systems that meet their needs.
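Since the six items build on one another, one way to use them is as a screening checklist: decide which items your use cases require, then keep only the vendors answering Yes to all of them. The sketch below models that in Python; the vendor names and answers are invented, and only the six item names come from the report.

```python
# The six engagement items, in hierarchy order (names abbreviated here).
ITEMS = ["api_query_access", "real_time_access", "segmentation",
         "content_selection", "real_time_interactions", "multi_step_campaigns"]

# Hypothetical Yes/No answers. Note both vendors answer Yes to segmentation,
# which every CDP in the report provides.
vendors = {
    "VendorA": {"api_query_access": True, "real_time_access": True,
                "segmentation": True, "content_selection": True,
                "real_time_interactions": False, "multi_step_campaigns": False},
    "VendorB": {"api_query_access": True, "real_time_access": False,
                "segmentation": True, "content_selection": False,
                "real_time_interactions": False, "multi_step_campaigns": True},
}

def shortlist(required):
    """Return vendors answering Yes to every required item."""
    return [v for v, answers in vendors.items()
            if all(answers[item] for item in required)]
```

For example, a buyer who needs content selection would call `shortlist(["segmentation", "content_selection"])` and get only VendorA, while one who wants flow-chart campaigns would screen on `multi_step_campaigns` and get only VendorB. The output is a list to explore, not a ranking.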
One way we justified limiting the report to Yes/No answers was by promising to link to more detailed explanations from the vendors. I have just a couple of these available and will put those links in the report sometime soon. I’ll also remind the other vendors to send them. We’ll be adding more vendors as new Sponsors join the Institute, and we’ll continue to review the current ratings to ensure they’re accurate. I have no current plans to add more items to the report, but it’s always a possibility.
Quick reminder: we post updated versions of the report as we make changes. You can always download the latest version from the same URL: https://cdpinstitute.org/DL966-CDPI-CDP-Vendor-Comparison