Monday, January 09, 2017

Artificial Intelligence, Virtual Reality, and Government Control: Perfect World or Perfect Storm?

If it weren’t the print edition, I would have sworn today’s New York Times business section had been personalized for me: there were articles on self-driving cars, virtual reality, and how “Data Could Be the Next Tech Hot Button”. That precisely matches my current set of obsessions. It’s especially apt because the article on data makes a point that’s been much on my mind: government regulation may be the only factor that prevents AI-powered virtual reality from taking over the world, and governments may feel impelled to create such regulation in defense of their own authority. The Times didn’t make that connection among its three articles.  But the fact that all three were top of mind for its editors and, presumably, readers was enough to illustrate their importance.

I’m doubly glad that these articles appeared together because they reinforced my intent to revisit these issues in a more concise fashion than my rambling post on RoseColoredGlasses.Me. I suspect the thread of that post got lost in self-indulgent exposition. Succinctly, the key points were:

- Virtual reality and augmented reality will increasingly create divergent “personal realities” that distance people from each other and the real world.

- The artificial intelligence needed to manage personal reality will be beyond human control.

- Governments may recognize the dangers and step in to prevent them. 

Maybe these points sound simplistic when stated so plainly. I’m taking that risk because I want to be clear.  But a little depth may add credibility, so let me expand on each point just a bit.

- Personal reality. I covered this pretty well in the original post and current concerns about “fake news” and “fact bubbles” make it pretty familiar anyway.  One point that I think does need more discussion is how companies like Facebook, Google, Apple, and Amazon have a natural tendency to take over more and more of each consumer’s experience.  It's a sort of “individual network effect” where the more data one entity has about an individual, the better job they can do giving that person the consistent experience they want.  This in turn makes it easier to convince individuals to give those companies control over still more experiences and data. I’ll stress again that no coercion is involved; the companies will just be giving people what they want. It’s pitifully easy to imagine a world where people live Apple or Facebook branded lives that are totally controlled by those organizations. The cheesy science fiction stories pretty much write themselves (or the computers can write them for us).  Unrelated observation: it's weird that the discussions Descartes and others had about the nature of reality – which sound so silly to modern ears – are suddenly very practical concerns.

- Artificial intelligence. Many people are skeptical that AI can really take control of our lives. For example, they’ll argue that machines will always need people to design, build, and repair them. But self-programming computers are here or very close (it depends on definitions), and essential machines will be designed to be self-repairing and self-improving.  Note that machines taking control doesn't require malevolent artificial intelligence, or artificial consciousness of any sort. Machines will take control simply because people let them make choices they can’t predict or understand. The problem is that unintended consequences are inevitable and for the first – and quite possibly the last – time in history, there will be no natural constraints to limit the impact of those consequences. Random example: maybe the machines will gently deter humans from breeding, something that could maximize the happiness of everyone alive while still eliminating the human race. Oops. 

- Government intervention. Will governments decide that some shared reality is needed for their countries to function properly?  How closely will they require personal reality to match actual reality (if they even admit such a thing exists)?  Will they allow private business to manage the personal reality of their citizens? Will they limit how much personal reality can be delivered by artificial intelligence? These issues all relate to questions of control. Although there’s an interesting theory* that the Internet has made it impossible for any authority to maintain itself, I think that governments will ultimately impose on individuals, companies, and the Internet whatever constraints they need to survive. This probably means governments will enforce some shared reality, although it surely won't match actual facts in every detail.  It’s less certain that governments will control artificial intelligence, simply because the benefits of letting AI run things are probably irresistible despite the known dangers.

So, is the choice between having your reality managed by an authoritarian government or by an AI? Let's hope not.  I prefer a world where people control their own lives and base them on actual reality.  That’s still possible but it will take coordinated hard work to make it happen.


___________________________________________________________________________________
*For example, Martin Gurri’s The Revolt of the Public

Thursday, January 05, 2017

Optimove Optibot Automates Campaign Optimization

I finally caught up with Optimove for a briefing on the Optibot technology they introduced last September. For a bit of background, Optimove is a Journey Orchestration Engine that focuses on customer retention. It assigns customers to states (which it calls microsegments) and sends different marketing campaigns to people in each state. See my original Optimove review from three years ago (!) for a more detailed explanation.

What’s new about Optibot is that defining microsegments and picking the best campaign actions per segment have now been automated. Optimove previously analyzed performance of microsegments to find clusters within each microsegment with above or below average results. When it found one, it gave users a recommendation to treat these clusters as separate microsegments and potentially stop promoting to the poorly performing group. Optibot takes the human out of this loop, automatically splitting the microsegments into smaller microsegments when it can.
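To make that splitting idea concrete, here’s a rough sketch of the logic as I understand it. This is purely my own illustration, not Optimove’s actual code, and the field names and thresholds are invented.

```python
from collections import defaultdict

def split_microsegment(members, min_size=100, threshold=0.25):
    """Flag clusters inside a microsegment whose response rates deviate
    sharply from the segment average (illustrative sketch only).

    `members` is a list of dicts, each with an invented `cluster` label
    and a 0/1 `responded` flag."""
    overall = sum(m["responded"] for m in members) / len(members)

    clusters = defaultdict(list)
    for m in members:
        clusters[m["cluster"]].append(m)

    splits = []
    for label, group in clusters.items():
        if len(group) < min_size:
            continue  # too few members to trust the difference
        rate = sum(m["responded"] for m in group) / len(group)
        if abs(rate - overall) / max(overall, 1e-9) > threshold:
            splits.append((label, rate))  # candidate for its own microsegment
    return overall, splits
```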

Optibot also automatically tests different actions against people within each microsegment. If it finds that different actions work better for different clusters, it will assign the best action to each group. (In practice, it slowly shifts the mix in favor of the better actions, to be more certain it is making a sound choice while minimizing the opportunity cost of poorly-performing actions.) The system gives reports that compare actual performance with what performance would have been without the additional segmentation and optimization. That’s a helpful reassurance to the user that Optibot is making good choices, and of course a nice little demonstration of Optimove’s value.
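That gradual shift toward better-performing actions sounds a lot like a multi-armed bandit. Optimove hasn’t published its algorithm, so treat the following as a generic bandit-style sketch, assuming conversion is the goal being optimized and all names are mine:

```python
import random

def choose_action(stats, floor=0.05):
    """Pick an action for the next customer, weighting toward actions with
    better observed conversion rates but never starving any action below
    `floor` of the traffic. `stats` maps action -> (sends, conversions).
    A generic bandit-style sketch, not Optimove's actual algorithm."""
    rates = {a: (conv + 1) / (sends + 2) for a, (sends, conv) in stats.items()}  # smoothed
    total = sum(rates.values())
    weights = {a: max(r / total, floor) for a, r in rates.items()}
    pick, cum = random.random() * sum(weights.values()), 0.0
    for action, w in weights.items():
        cum += w
        if pick <= cum:
            return action
    return action  # fallback for floating-point edge cases

# example: action B is pulling ahead, so it gets most (but not all) of the traffic
choose_action({"A": (1000, 20), "B": (1000, 45)})
```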

Finally, Optibot provides users with recommendations for things they can do, such as create new actions for microsegments that are not responding well to existing actions. I’ll assume that Optibot does this because it really can’t create new actions by itself, and not just so marketers have something to do other than watch cat videos all day.

I’m probably making Optibot sound simpler than it really is. There’s a lot of clever (and fully automated) analysis needed to find the right clusters, given that there are so many different ways the clusters could be defined. Optibot also needs a goal to pursue so it knows which actions and clusters are giving more desirable results. Defining those goals is also still a job for human marketers.  Fortunately, it only has to be done when a program is being set up, so it won’t cut too deeply into precious cat video viewing time.

Sarcasm aside, the real value of Optibot isn’t that it automates what marketers could otherwise do manually. It’s that it manages many more segments than humanly possible, allowing companies to fine-tune treatments for each group and to uncover pockets of opportunity that would otherwise be overlooked. Marketers will indeed need to create more content, and will no doubt find other productive uses for their time. And, frankly, if Optibot meant fewer 60 hour work weeks, that would be okay too.

Tuesday, January 03, 2017

Boxever Puts Airline Data in Context for Better Passenger Experience

Everyone loves a good origin story* and Boxever has a classic: the company started as a system to recommend add-on purchases on airline booking sites but found that prospects lacked access to customer data, so it pivoted to build customer databases. Similar stories are common in the Customer Data Platform universe but it’s the details that make each one interesting. So let’s give Boxever a closer look.

Boxever’s foundation is the customer database. The system can ingest data from any source and has prebuilt connectors for standard operational systems used by its clients (mostly airlines and travel agencies). Data can be loaded in real time during Web or call center interactions, by querying external sources through API connections, or through batch uploads. All inputs are treated as events, allowing the system to capture them without precise advance data modeling. But the system does organize inputs into a base structure of guests (i.e., customers), sessions, and orders. Clients can extend this model with additional objects such as order items. The system can usually classify new inputs automatically and flags the remainder for human review.
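For readers who like to see structure rather than read about it, here’s a loose sketch of how that base model might nest. The object and field names are mine, not Boxever’s:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Event:
    type: str                                   # e.g. "page_view", "booking", "call"
    timestamp: str
    attributes: dict[str, Any] = field(default_factory=dict)  # whatever the source sends

@dataclass
class Session:
    channel: str                                # web visit, call, airport, flight...
    events: list[Event] = field(default_factory=list)

@dataclass
class Order:
    order_id: str
    items: list[dict] = field(default_factory=list)   # an optional extension object

@dataclass
class Guest:                                    # i.e., the customer
    guest_id: str
    attributes: dict[str, Any] = field(default_factory=dict)
    sessions: list[Session] = field(default_factory=list)
    orders: list[Order] = field(default_factory=list)
```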

The system is exceptionally good at capturing the frequently-changing elements peculiar to the travel industry, including location (current and destination), weather (at the current and destination location), prices, available products (e.g. vacant seats and upgrades), loyalty status, and even current flight information. Most of this data is read from external systems at the start of an interaction, used during the interaction to provide context, and stored with the interaction records for future analysis. Again reflecting the specialized needs of travel marketers, Boxever sessions can include things like airport visits, flights, or stays in a location, in addition to the conventional Web site visits or telephone calls.

Boxever also provides extensive customer identification capabilities, both to support real-time interactions and to merge profiles behind the scenes. It can match on specific identifiers, such as a loyalty account number, on combinations of attributes such as last name and birthdate, and on similarities such as different forms of an address. It can assemble profiles on travel companions, who are often not as well known to an airline as the person who booked the ticket. It also calculates personal propensities to buy airline services and offerings from specific partners such as hotels and retailers. These propensities are used to make recommendations.
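The matching presumably amounts to a tiered cascade: exact identifiers first, then attribute combinations, then fuzzy similarity. A simplified sketch of that idea, with my own field names and thresholds rather than Boxever’s:

```python
import difflib

def normalize(address: str) -> str:
    return " ".join(address.lower().replace(".", "").split())

def profiles_match(a: dict, b: dict) -> bool:
    """Tiered matching sketch: exact ID, attribute combination, then similarity."""
    # 1. deterministic: same loyalty account number
    if a.get("loyalty_id") and a.get("loyalty_id") == b.get("loyalty_id"):
        return True
    # 2. attribute combination: last name plus birthdate
    if a.get("last_name") and a.get("birthdate") and \
            (a["last_name"], a["birthdate"]) == (b.get("last_name"), b.get("birthdate")):
        return True
    # 3. similarity: near-identical addresses for the same last name
    if a.get("last_name") == b.get("last_name"):
        score = difflib.SequenceMatcher(
            None, normalize(a.get("address", "")), normalize(b.get("address", ""))).ratio()
        return score > 0.9
    return False
```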

All this data is assembled by the system into a personal profile that includes attributes and an event timeline. The event timeline captures both customer actions and system actions, such as running processes or changing data. The timeline can be displayed to customer service agents or used as inputs for automated decisions. Users can also define segments and contexts using any data in the personal profile.

The decisioning features of Boxever are organized around offers. Users first set up templates for each offer type, in which they define the parameters required to construct an offer. Parameters vary depending on the channel that will deliver the offer and can include text, images, Web links, products and prices (which can be validated against external systems), and other components of the message to be delivered.  Other offer parameters include the context in which it's available and actions to take in other systems if the offer is accepted. Actual offers are created by filling in the appropriate parameters.

Offers are embedded in decision engines. These contain rules that specify when particular offers are available. The decision engines are connected to delivery systems through flows, which can react to requests during a real-time interaction, listen for an external event such as an abandoned shopping cart, or run a scheduled batch process such as generating a mailing list. A decision engine can be limited to a specific context and contain multiple rules, each with its own selection conditions and linked to an offer.

Boxever Segment Builder

During execution, the decision engine finds all rules that the current customer matches. It can return all the related offers or a limited number. Offers can be prioritized in a user-specified sequence that applies to all customers, or they can be prioritized for the individual  based on propensities or other scores. The system provides automated segmentation, predictive modeling, and testing to help find the best offers in each situation.
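Stripped to its essentials, that execution step is a loop over rules and a sort. Here’s a bare-bones sketch of the logic as described above; the structures and names are mine, not Boxever’s:

```python
def run_decision_engine(rules, customer, limit=None, personalize=True):
    """Collect offers whose rule conditions the customer matches, then rank them
    by the customer's propensity scores or keep the order the rules were defined in."""
    eligible = [r["offer"] for r in rules if r["condition"](customer)]
    if personalize:
        eligible.sort(key=lambda o: customer["propensities"].get(o["id"], 0.0), reverse=True)
    return eligible[:limit] if limit else eligible

# example: each rule pairs a selection condition with a linked offer
rules = [
    {"condition": lambda c: c["loyalty_tier"] == "gold",
     "offer": {"id": "lounge_upgrade"}},
    {"condition": lambda c: c["destination_weather"] == "sunny",
     "offer": {"id": "partner_beach_hotel"}},
]
customer = {"loyalty_tier": "gold", "destination_weather": "sunny",
            "propensities": {"lounge_upgrade": 0.6, "partner_beach_hotel": 0.2}}
run_decision_engine(rules, customer, limit=1)   # -> [{"id": "lounge_upgrade"}]
```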

Boxever Decision Engine Builder

When you strip away the superstructure of offers, flows, and decision engines, the keys to all this are context and rules.  Context can limit eligibility for an entire decision engine or a specific offer within an engine. Contexts are defined with a point-and-click query builder that allows complex, nested statements. Rules are defined with a scripting language, which some marketers will find intimidating.  The scripting interface does provide some assistance such as dropdown lists of available operators, segments, and code snippets. Boxever says some of its clients write their own rules and others rely on Boxever staff to write them.

Context is an especially powerful feature.  It plays the role of what I usually call “state”: that is, a general description of the customer’s current condition that narrows the range of potential treatments. For example, a customer who is currently traveling should be treated differently from a customer who is planning a trip. But context in Boxever isn’t tracked as state-to-state movement within a comprehensive journey framework, as required by my definition of a Journey Orchestration Engine or JOE. Nor, despite the name, do Boxever flows specify multi-step message sequences. It's possible to create such sequences within a single Boxever decision engine but it would take some clever rule design.

Like most Customer Data Platforms and orchestration engines, Boxever leaves the actual delivery of its offers up to external systems. Flows can connect to email, text messaging, mobile apps, display ads and other systems. Boxever can also integrate with order processing, loyalty, and other operational systems. The vendor says a Web site integration can be completed in several weeks, while more complicated integration may take four to six months. A typical Boxever client has about a half dozen source systems and ten delivery systems.

Boxever was founded in 2011 and opened a U.S. office in 2014. Pricing is based on number of active customers or transactions. The company rarely works with clients having less than $100 million in annual revenue. Most clients will pay well north of $100,000 per year.  The company currently has 20 clients.


________________________________________________________________________________________________
*I myself was found in an unobtainium-lined box clutching a mysterious amulet.

Monday, December 19, 2016

The World May Be Ending But, If Not: 3 Tips To Be a Better Marketer in 2017

About eighteen months ago I started presenting a scenario of a woman named Jane riding in a self-driving car, unaware that her smart devices were debating whether to stop for gas and let her buy a donut. The point of the scenario was that future marketing would be focused on convincing consumers to trust the marketer’s system to make day-to-day purchasing decisions. This is a huge change from marketing today, which aims mainly to sell individual products. In the future, those product decisions will be handled by algorithms that consumers cannot understand in detail. So consumers’ only real choices will be which systems to trust. We can expect the world to divide itself into tribes of consumers who rely on companies like Amazon, Apple, Google, or Facebook and who ultimately end up making similar purchases to everyone else in their tribe.

The presentation has been quite popular – especially the part about the donut. So far the world is tracking my predictions quite closely. To take one example, the script says that wireless connections to automobiles were banned after "the Minneapolis Incident of 2018". Details aren’t specified but presumably the Incident was a cyberattack that took over cars remotely. Subsequent reports of remote Jeep hacking fit the scenario almost exactly and the recent take-down of the Dyn DNS service by a botnet of nanny cams and smart printers was an even more prominent illustration of the danger. The resulting, and long overdue, concern about security on Internet of Things devices is just what I predicted from the Minneapolis Incident.

Fond as I am of that scenario, enough has happened to justify a new one. Two particular milestones were last summer’s mass adoption of augmented reality in the form of Pokémon Go and this autumn’s sudden awareness of reality bubbles created by social media and fake news.

The new scenario describes another woman, Sue, walking down Michigan Avenue in Chicago. She’s wearing augmented reality equipment – let’s say from RoseColoredGlasses.Me, a real Web site* – that shows her preferred reality: one with trash removed from the street and weather changed from cloudy to sunshine. She’s also receiving her preferred stream of news (the stock market is up and the Cubs won a third straight World Series). Now she gets a message that her husband just sent flowers to her office. She checks her hair in the virtual mirror – she looks marvelous, as always – and walks into a store to find her favorite brand of shoes are on sale. Et cetera.

There’s a lot going on here. We have visual alterations (invisible trash and shining sun), facts that may or may not be true (stock market and baseball scores), events with uncertain causes (did her husband send those flowers or did his computer agent?), possible self-delusion (her hair might not look so great), and commercial machinations (is that really a sale price for those shoes?). It's complicated but the net result is that Sue lives in a much nicer world than the real one. Many people would gladly pay for a similar experience. It’s the voluntary nature of this purchase that makes RoseColoredGlasses.Me nearly inevitable: there will definitely be a market. Let’s call it “personal reality”.

We have to work out some safeguards so Sue doesn’t trip over a pile of invisible trash or get run over by a truck she has chosen not to see. Those are easy to imagine. Maybe she gets BubbleBurst™ reality alerts that issue warnings when necessary.  Or, less jarringly, the system might substitute things like flower beds for trash piles. Maybe the street traffic is replaced by herds of brightly colored unicorns.

If we really want things to get interesting, we can have Sue meet a friend. Is her friend experiencing the same weather, same baseball season, same unicorns? If she isn’t, how can they effectively communicate? Maybe they can switch views, perhaps as easily as trading glasses: literally seeing the world through someone else’s eyes. That could be quite a shock. Maybe Sue’s friend is the fearful type and has set her glasses to show every possible threat; not only are the trash piles highlighted but strangers look frightening and every product has a consumer warning label attached. A less disruptive approach could be some external signifier to show her friend’s current state: perhaps her glasses are tinted gray, not rose colored, or Sue sees a worried-face emoticon on her forehead.

The communication problems are challenging but solvable. Still, we can expect people with similar views to gravitate towards each other. They would simply find it easier and more pleasant to interact with people sharing their views. Of course, this type of sorting happens already. That’s what makes the RoseColoredGlasses.Me scenario so intriguing: it describes highly-feasible technical developments that are entirely compatible with larger social trends and, perhaps, human nature itself. Many forces push in this direction and there’s really nothing to stop it. I have seen the futures and they work.

Maybe you’re not quite ready to give up on the notion of objective reality. If I can screen out global warming, homeless people, immigrants, Republicans, Democrats, or anything else I dislike, then what’s to motivate me to fix the actual underlying problems? Conversely, if people’s true preferences are known, do they justify real-world action: say, removing actual homeless people from the streets if no one wants to see them? That sounds ugly but maybe a market mechanism could turn it to advantage: if enough people pay RoseColoredGlasses.Me to remove the homeless people from their virtual world, then some of that money could fund programs to help the actual homeless people. Maybe that’s still immoral when people are involved but what if we’re talking about better street signs? Replacing virtual street signs for RoseColoredGlasses.Me subscribers with actual street signs visible to everyone sounds like a winner. It would even mean less work for the computers and thus save money for RoseColoredGlasses.Me.

Another wrinkle: if the owners of RoseColoredGlasses.Me are really smart (and they will be), won't they manipulate customers’ virtual reality in ways that lead the city to put up better street signs with its own money?  Maybe there will be a virtual mass movement on the topic, complete with artificial-but-realistic social media posts, videos of street demonstrations, and heart-rending reports of tragic accidents that could have been avoided with better signage. Customers would have no way to know which parts were real. Then again, they can’t tell today, either.

The border between virtual and actual reality is where the really knotty problems appear. One is the fate of people who can’t afford to pay for a private reality: as we already noted, they get stuck in a world where problems don’t get solved because richer people literally don’t see them. Again, this isn’t so different from today’s world, so it may not raise any new questions (although it does make the old questions more urgent). Today’s world also hints at the likely resolution: people living in different realities will be physically segregated. Wealthier people will pay to have nicer environments and will exclude others who can’t afford the same level of service. They will avoid public spaces where different groups mix and will pay for physical and virtual buffers to manage any mixing that does occur.

Another problem is the cost of altering reality for paying customers. It’s probably cheap to insert better street signs.  But masking the impact of global warming could get expensive. On a technical level, bigger changes require more processing power for the computer and better cocoons for the customers.  To fix global warming they’d need something that changes the apparent temperature, precipitation, and eventually the shoreline and sea level. It’s possible to imagine RoseColoredGlasses.Me customers wearing portable shells that create artificial environments as they move about. But it's more efficient for the computer if people stay inside and simulate the entire experience. Like most of the other things I’ve suggested here, this sounds stupid and crazy but, as anyone who has used a video conference room already knows, it’s also not so far from today’s reality. If you think I’m blurring the border between augmented and virtual reality, it’s not because I’m unaware of the distinction. It’s because the distinction is increasingly blurry.

I do think, though, that the increasing cost of having the computer generate greater deviations from physical reality will have an important impact on how things turn out. So let's pivot from discussing ever-greater personalization (the ultimate endpoint of which is personal reality) to discussing the role of computers in it all.

To start once more with the obvious, personal reality takes a lot of computer power. Beyond whatever hyperrealistic rendering is needed, the system needs vast artificial intelligence to present the reality each customer has specified. After all, the customer will only define a relatively small number of preferences, such as “there is no such thing as global warming”. It’s then up to the computer to create a plausible environment that matches that preference (to the degree possible, of course; some preferences may simply be illogical or self-contradictory). The computer also probably has to modify news feeds, historical data, research results, and other aspects of experience to match the customer’s choice.

The computer must deliver these changes as efficiently as possible – after all, RoseColoredGlasses.Me wants to make a profit. This means the computer may make choices that minimize its cost even when those choices are not in the interest of the customer. For example, if going outdoors requires hugely expensive processing to hide the actual weather, the computer might start generating realities that lead the customer to stay inside. This could be as innocent as suggesting they order in rather than visit a restaurant (especially if delivery services allow the customer to eat the same food either way). Or it could deter travel with fake news reports about bad weather, transit breakdowns, or riots. As various kinds of telepresence technology improve, keeping customers indoors will become more possible and, from the customer’s standpoint, actually a better option.

This all happens without any malevolence by the computer or its operator. It certainly doesn't matter whether the computer is self-aware.  The computer is simply optimizing results for all concerned. In practice, each personal reality involves vastly more choices than anyone can monitor, so the computer will be left to its own devices. No one will understand what the computer is doing or why. Theoretically customers could reject the service if they find the computer is making sub-optimal choices.  But if the computer is controlling their entire reality, customers will have no way to know that something better is possible. Friends or news reports that tried to warn them would literally never be heard – their words would be altered to something positive. If they persisted, they would probably be blocked out entirely.

I know this all sounds horribly dystopian. It is. My problem is there’s no clear boundary between the attractive but safe applications – many of which exist today – and the more dangerous ones that could easily follow. Many people would argue that systems like Facebook have already created a primitive personal reality that is harmful to the individuals involved (and to the larger social good, if they believe that such a thing exists). So we’ve already started down the slippery slope and there’s no obvious fence to stop our fall.

Or maybe there is. It’s possible that multiple realities will prove untenable. Maybe the computers themselves will decide it’s more efficient to maintain a single reality and force everyone to accept it (but I suspect customers would rebel). Maybe social cohesion will be so damaged that a society with multiple realities cannot function (although so far that hasn’t happened). Maybe governments will decide to require a degree of shared reality and limit the amount of permitted diversity (this already happens in authoritarian regimes but not yet in Western democracies). Or maybe societies with a unified reality will be more effective and ultimately outcompete more fractured societies (possible and perhaps likely, but not right away). In short, the future is far from clear.

And what does all this mean for marketing? Maybe that’s a silly question when reality itself is at stake. But assuming that society doesn’t fall apart entirely, you’ll still need to make a living. Some less extreme version of what I’ve described will almost surely come to pass. Let's say it boils down to increasingly diverse personal realities as computers control larger portions of everyone’s experience. What would that imply?

One implication is that the number of entities with direct access to any particular individual will decrease. Instead of dealing with Apple, Facebook, Google, and Amazon for different purposes, individuals will get a more coherent experience by selecting one gatekeeper for just about everything. This will give gatekeepers more complete information for each customer, which will let the gatekeepers drive better-tailored experiences. Marketing at gatekeepers will therefore focus on gathering as much information as possible, using it to understand customer preferences, and delivering experiences that match those preferences. Competition will be based on insights, scope of services, and efficient execution. The winners will be companies who can guide consumers to enjoy experiences that are cost-effective to deliver.

Gatekeeper marketers will still have to build trusted brands, but this will become less important. Different gatekeeping companies will probably align with different social groups or attitudes, so most people will have a natural fit with one gatekeeper or another. This social positioning will be even more important as gatekeepers provide an ever-broader range of services, making it harder to find specific points of differentiation. Diminished competition, the ability to block messages from other gatekeepers, and the high cost of switching will mean customers tend to stick with their initial choice. People who do make a switch can expect great inconvenience as the new gatekeeper assembles information to provide tailored services. Switchers might even lose touch with old friends as they vanish from communication channels controlled by their former gatekeeper. In the RoseColoredGlasses.Me scenario, they could become literally invisible as they’re blocked from sight in friends' augmented realities.

Marketers who work outside the gatekeepers will face different challenges. Brand reputation and trust will again be less important since gatekeepers make most choices for consumers. In an ideal world the gatekeepers would constantly scan the market to find the best products for each customer. This would open every market to new suppliers, putting a premium on superior value and meeting customer needs. But in the real world, gatekeepers could easily get lazy.  They'd offer less selection and favor suppliers who give the best deal to the gatekeeper itself.  The risk to the gatekeeper is low, since customers will rarely be aware of alternatives the gatekeeper doesn’t present. New brands will pay a premium to hire the rare guerilla marketers who can circumvent the gatekeepers to reach new customers directly.

Jane in her self-driving car and Sue walking down Michigan Avenue are both headed in the same direction: they are delegating decisions to machines. But Jane is at an earlier stage in the journey, where she’s still working with different machines simultaneously – and therefore has to decide repeatedly which machines to trust. Paradoxically, Sue makes fewer choices even though she has more control over her ultimate experience. Marketers play important roles in both worlds but their tasks are slightly different. The best you can do is keep an eye out for signs that show where your business is now and where it’s headed.  Then adjust your actions so you arrive safely at your final destination.

_____________________________________________________________________

*The site's a joke. But I do own the domain if you'd like to buy it.

Wednesday, December 14, 2016

BlueVenn Bundles Omnichannel Journey Management, Personalization, and Single Customer View

BlueVenn has been active in the U.S. market only since March 2016, although many U.S. marketers will recall its previous incarnation as SmartFocus.* The company offers what it calls an omnichannel marketing platform that builds a unified customer database, manages marketing campaigns, and generates personalized Web and email messages.

The Venn in BlueVenn

The unified database process, a.k.a. single customer view, has rich functionality to load data from multiple sources and do standardization, validation, enhancement, hygiene, matching, deduplication, governance and auditing. These were standard functions for traditional marketing databases, which needed them to match direct mail names and addresses, but are not always found in modern customer data platforms. BlueVenn also supports current identity linking techniques such as storing associations among cookies, email addresses, form submits, and devices. This sort of identity resolution is a batch process that runs overnight.  The system can also look up information about a specific customer in real time if an ID is provided. This lets BlueVenn support real time interactions in Web and call center channels.
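Behind the scenes, that overnight identity resolution is essentially graph clustering: identifiers observed together collapse into one customer. A classic union-find sketch captures the idea; this is a textbook approach, not BlueVenn’s actual code:

```python
def resolve_identities(linked_pairs):
    """Group identifiers (cookies, emails, device IDs...) into customers,
    given pairs that were observed together. Union-find with path compression."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a, b in linked_pairs:
        parent[find(a)] = find(b)

    customers = {}
    for identifier in parent:
        customers.setdefault(find(identifier), set()).add(identifier)
    return list(customers.values())

# example: a cookie seen with an email, and that email on a known device,
# all roll up into a single customer
resolve_identities([("cookie:abc", "email:jane@example.com"),
                    ("email:jane@example.com", "device:iphone-123")])
```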

Users can enhance imported data by defining derived elements with functions similar to Excel formulas. These let non-technical users put data into formats they need without the help of technical staff. Derived fields can be used in queries and reports, embedded in other derived fields, and shared among users. To avoid nasty accidents, BlueVenn blocks changes in a field definition if the field is used elsewhere. Data can be read by Tableau and other third-party tools for analysis and reporting.
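A derived-field registry with that kind of safety check is easy to picture. The sketch below is mine, assuming fields are defined in dependency order; BlueVenn’s actual formula engine is surely richer:

```python
class DerivedFields:
    """Formulas over existing fields, with a guard that blocks changes to a
    field that other derived fields depend on (illustrative sketch)."""
    def __init__(self):
        self.formulas = {}   # name -> (function, dependencies)

    def define(self, name, func, depends_on):
        used_by = [n for n, (_, deps) in self.formulas.items() if name in deps]
        if used_by:
            raise ValueError(f"'{name}' is used by {used_by}; change blocked")
        self.formulas[name] = (func, set(depends_on))

    def compute(self, record):
        out = dict(record)
        for name, (func, _) in self.formulas.items():  # assumes dependency order
            out[name] = func(out)
        return out

fields = DerivedFields()
fields.define("value_band", lambda r: "high" if r["order_value"] > 500 else "low",
              depends_on=["order_value"])
fields.compute({"order_value": 750})   # -> {"order_value": 750, "value_band": "high"}
```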

BlueVenn offers several options for defining customer segments, including cross tabs, geographic map overlays, and flow charts that merge and split different groups.  But BlueVenn's signature selection tool has always been the Venn diagram (intersecting circles).  This is made possible by a columnar database engine that is extremely fast at finding records with shared data elements. Clients could also use other databases including SQL Server, Amazon Redshift (also columnar), or MongoDB, although BlueVenn says nearly all its clients use the BlueVenn engine for its combination of high speed and low cost.

Customer journeys - formerly known as campaigns - are set up by connecting icons on a flow chart. The flow can be split based on yes/no criteria, field values, query results, or random groups. Records in each branch can be sent a communication, assigned to seed lists or control groups, deduplicated, tagged, held for a wait period or until they respond, merged with other branches, or exit the flow. The “merge” feature is especially important because it allows journeys to cycle indefinitely rather than ending after a sequence of steps. Merge also simplifies journey design since paths can be reunified after a split. Even today, most campaign flow charts don’t do merges.

BlueVenn Journey Flow
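Here’s a toy rendering of such a flow as data, with a split that re-merges and then cycles back to the start. The node structure is my own shorthand, not BlueVenn’s format:

```python
# each key is a step; "merge" lets the branches reunify and cycle indefinitely
journey = {
    "select":      {"type": "query", "segment": "lapsed_buyers", "next": "split"},
    "split":       {"type": "split", "condition": lambda c: c["opened_email"],
                    "yes": "send_sms", "no": "wait_3_days"},
    "send_sms":    {"type": "send",  "channel": "sms", "next": "merge"},
    "wait_3_days": {"type": "wait",  "days": 3,        "next": "merge"},
    "merge":       {"type": "merge", "next": "select"},   # cycle instead of exiting
}
```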

Tagging is also important because it lets marketers flag customers based on a combination of behaviors and data attributes. Tags can be used to control subsequent flow steps. Because tags are attached to the customer record, they can be used to coordinate journeys: one application cited by BlueVenn is to tag customers for future messages in multiple journeys and then periodically compare the tags to decide which message should actually be delivered.
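That coordination pattern is simple to sketch: journeys queue tags instead of sending, and a periodic arbitration step picks the winner. The mechanics and priorities below are invented for illustration, not taken from BlueVenn:

```python
from collections import defaultdict

pending = defaultdict(list)   # customer_id -> tags queued by different journeys

def tag(customer_id, journey, message, priority):
    """A journey flags the customer for a future message rather than sending it now."""
    pending[customer_id].append({"journey": journey, "message": message,
                                 "priority": priority})

def arbitrate(customer_id):
    """Periodic step: compare tags from all journeys and deliver only the winner."""
    tags = pending.pop(customer_id, [])
    return max(tags, key=lambda t: t["priority"]) if tags else None

tag("cust-42", "winback", "20% off your next order", priority=2)
tag("cust-42", "newsletter", "monthly digest", priority=1)
arbitrate("cust-42")   # -> the win-back offer wins this cycle
```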

Communications are handled by something called BlueRelevance. This puts a line of code on client Web sites to gather click stream data, manage first party cookies, and deliver personalized messages. The messages can include different forms of dynamic content including recommendations, coupons, and banners. In addition to Web pages, BlueVenn can send batch and triggered emails, text messages, file transfers, and direct messages in Twitter and Facebook. Next year it will add display ad audiences and Facebook Custom Audiences. The vendor is also integrating with the R statistical system for predictive models and scoring. BlueVenn has 23 API integrations with delivery systems such as specific email providers and builds new integrations as clients need them.

All BlueVenn features are delivered as part of a single package. Pricing is based on the number of sources and contacts, starting at $3,000 per month for two sources and 100,000 contacts. There is a separate fee for setting up the unified database, which can range from $50,000 to $300,000 or more depending on complexity. Clients can purchase the configured database management system if they want to run it for themselves. The company also offers a Software-as-a-Service version or hybrid system that is managed by BlueVenn on the client's own computers.  BlueVenn has about 400 total clients of which about two dozen run the latest version of its system. It sells primarily to mid-size companies, which it defines as $25 million to $1 billion in revenue.

_____________________________________________________________________________

*The original SmartFocus was purchased in 2011 by Emailvision, which changed its own name to SmartFocus in 2013 and then sold the business (technology, clients, etc.) but kept the name for itself. If you’re really into trivia, SmartFocus began life in 1995 as Brann Viper, and BlueVenn is part of Blue Group Inc. which also owns a database marketing services agency called Blue Sheep. The good news is: this won't be on the final.

Thursday, December 08, 2016

Can Customer Data Platforms Make Decisions? Discuss.

I’ve had at least four conversations in the past twenty-four hours with vendors who build a unified customer database and use it to guide customer treatments. The immediate topic has been whether they should be considered Customer Data Platforms but the underlying question is whether Customer Data Platforms should include customer management features.

That may seem pretty abstract but bear with me because this isn’t really about definitions. It’s about what systems do and how they’re built.  To clear the ground a bit, the definition of CDP, per the CDP Institute, is “a marketer-managed system that creates a persistent, unified customer database that is accessible to other systems". Other people have other definitions but they are pretty similar. You’ll note there’s nothing in that definition about doing anything with data beyond making it available.  So, no, a CDP doesn’t need to have customer management features.

But there’s nothing in the definition to prohibit those features, either. So a CDP could certainly be part of a larger system, in the same way that a motor is part of a farm tractor. But most farmers would call what they’re buying a tractor, not a motor. For the same reasons, I generally don’t refer to systems as CDPs if their primary purpose is to deliver an application, even though they may build a unified customer database to support that application.

The boundary gets a little fuzzier when the system makes that unified database available to external systems – which, you’ll recall, is part of the CDP definition. Those systems could be used as CDPs, in exactly the same way that farm tractors have “power take off” devices that use their motor to run other machinery.  But unless you’re buying that tractor primarily as a power source, you’re still going to think of it as a tractor. The motor and power take off will simply be among the features you consider when making a choice.*

So much for definitions. The vastly more important question is SHOULD people buy "pure" CDPs or systems that contain a CDP plus applications. At the risk of overworking our poor little tractor, the answer is the same as the farmer’s: it depends on how you’ll use it. If a particular system offers the only application you need, you can buy it without worrying about access by other applications. At the other extreme, if you have many external applications to connect, then it almost doesn’t matter whether the CDP has applications of its own. In between – which is where most people live – the integrated application is likely to add value but you also want to connect with other systems. So, as a practical matter, we find that many buyers pick CDPs based on both integrated applications and external access.  From the CDP vendor’s viewpoint, this connectivity is helpful because it makes their system more important to their clients.

The tractor analogy also helps show why data-only CDPs have been sold almost exclusively to large enterprises. Those companies have many existing systems that can all benefit from a better database.  In tractor terms, they need the best motor possible for power applications and have other machines for tasks like pulling a plow. A smaller farm needs one tractor that can do many different tasks.

I may have driven the tractor metaphor into a ditch.  Regardless, the important point is that a system optimized for a single task – whether it’s sharing customer data or powering farm equipment – is designed differently from a system that’s designed to do several things. I’m not at all opposed to systems that combine customer data assembly with applications.  In fact, I think Journey Orchestration Engines (JOEs), which often combine customer data with journey orchestration, make a huge amount of sense. But most JOE databases are not designed with external access in mind.  A JOE database designed for open access would be even better -- although maybe we shouldn't call it a CDP.

To put this in my more usual terms of Data, Decision, and Delivery layers: a CDP creates a unified Data layer, while most JOEs create a unified Data and Decision layer. There’s a clear benefit to unifying decisions when our goal is a consistent customer treatment across all delivery systems. What’s less clear is the benefit of having the same system combine the data and decision functions. The combination avoids integration issues.  But it also means the buyer must use both components, even though she might prefer a different tool for one or the other.

Remember that there’s nothing inherent in JOEs that requires them to provide both layers. A JOE could have only the decision function and connect to a separate CDP. The fact that most JOEs create a database is just a matter of necessity: most companies don’t have a database in place, so the JOE must build one in order to do the fun stuff (orchestration).  Many other tools, such as B2B predictive analytics and customer success systems, create their own database for exactly the same reason. In fact, I originally classified those systems as CDPs although I’ve now narrowed my definition since the database is not their focus.

So I hope this clarifies things: CDPs can have decision functions but if decisions are the main purpose of the system, it’s confusing to call it a CDP.  And CDPs are certainly not required to have decision functions, although many do include them to give buyers a quick return on their investment. If that seems like waffling, then so be it: what matters is helping marketers to understand what they’re getting so they get what they really need.


_________________________________________________________________
*I’ll guess few of my readers are very familiar with farm tractors. Maybe the more modern analogy is powering apps with your smartphone. For the record, I did work on a farm when I was a lad, and drove a tractor.

Wednesday, November 30, 2016

3 Insights to Help Build Your Unified Customer Database

The Customer Data Platform Institute (which is run by Raab Associates) on Monday published results of a survey we conducted in cooperation with MarTech Advisor. The goal was to assess the current state of customer data unification and, more important, to start exploring management practices that help companies create the rare-but-coveted single customer view.

You can download the full survey report here (registration required) and I’ve already written some analysis on the Institute blog. But it’s a rich set of data so this post will highlight some other helpful insights.

1. All central customer databases are not equal.

We asked several different questions whose answers depended in part on whether the respondent had a unified customer database. The percentage who said they did ranged from 14% to 72%:


I should stress that these answers all came from the same people and we only analyzed responses with answers to all questions.  And, although we didn’t test their mental states, I doubt a significant fraction had multiple personality disorders. One lesson is that the exact question really matters, which makes comparing answers across different surveys quite unreliable. But the more interesting insight is there are real differences in the degree of integration involved with sharing customer data.

You’ll notice the question with the fewest positive answers – “many systems connected through a shared customer database” – describes a high level of integration.  It’s not just that data is loaded into a central database, but that systems are actually connected to a shared central database. Since context clearly matters, here is the actual question and other available answers:

The other questions set a lower bar, referring to a “unified customer database” (33%), “central database” (42%) and “central customer database” (57%). Those answers could include systems where data is copied into a central database but then used only for analysis. That is, they don’t imply connections or sharing with operational customer-facing systems. They also could describe situations where one primary system has all the data and thus functions as a central or unified database.

The 72% question covered an even broader set of possibilities because it only described how customer data is combined, not where those combinations take place. That is, the combinations could be happening in operational systems that share data directly: no central database is required or even implied.  Here are the exact options:


The same range of possibilities is reflected in answers about how people would use a single customer view. The most common answers are personalization and customer insights.  Those require little or no integration between operational systems and the central database, since personalization can easily be supported by periodically synchronizing a few data elements. It’s telling that consistent treatments ranks almost dead last – even though consistent experiences are often cited as the reason a central database is urgently required.


This array of options to describe the central customer database suggests a maturity model or deployment sequence.  It would start with limited unification by sharing data directly between systems (the most common approach, based on the stack question shown above), progress to a central database that assembles the data but doesn’t share it with the operational systems, and ultimately achieve the perfect bliss of unity, which, in martech terms, means all operational systems are using the shared database to execute customer interactions.  Purists might be troubled by these shades of gray, but they offer a practical path to salvation. In any case, it’s certainly important to keep these degrees in mind and clarify what anyone means when they talk about shared customer data or that single customer view.

2. You must have faith.

Hmm, a religious theme seems to be emerging.  I hadn’t intended that but maybe it’s appropriate. In any event, I’ve long argued that the real reason technologies like marketing automation and predictive modeling don’t get adopted more quickly is not practical obstacles or lack of proven value, but lack of belief among managers that they are worthwhile. This doesn’t show up in surveys, which usually show things like budget, organization, and technology as the main obstacles. My logic has been that those are basically excuses: people would find the resources and overcome the organizational barriers if they felt the project were important enough.  So citing budgets and organizational constraints really means they see better uses for their limited resources.

The survey data supports my view nicely. Looking at everyone’s answers to a question about obstacles, the answers are rather muddled: budget is indeed the most commonly cited obstacle (41%), followed closely by the technical barrier of extracting data from source systems (39%). Then there’s a virtual tie among organizational roadblocks (31%), other priorities in IT (29%), other priorities in marketing (29%) and systems that can’t use the data (29%). Not much of a pattern there.

But when you divide the respondents based on whether they think single customer view is important for over-all marketing success, a stark division emerges.  Budget and organization are the top two obstacles for people who don’t think the unified view is needed, while having systems that can extract and use the data are the top two obstacles for people who do think it’s necessary for success. In other words, the people committed to unified data are focused on practical obstacles, while those who aren’t are using the same objections they apply to everything else.


Not surprisingly, people who classify SCV as extremely important are more likely to actually have a database in place than people who consider it just very important, who in turn have more databases than people who consider it even less important or not important at all.  (In case you're wondering, each group accounts for roughly one-third of the total.)

The same split applies to what people would consider helpful in building a single customer view: people who consider the single view important are most interested in best practices, case studies, and planning assumptions – i.e., building a business case.  Those who think it’s unimportant ask for product information, vendor lists, and pricing. I find this particular split a bit puzzling, since you’d think people who don’t much care about a unified database would be least interested in the details of building one. A cynic might say they’re looking for excuses (cost is too high) but maybe they’re actually trying to find an easy solution so they can avoid a major investment.

Jumping ahead just a bit, the idea that SCV doubters are less engaged than believers also shows up in the management tools they use.  People who rated SCV as extremely important were much more likely to use all the tools we asked about. Interestingly, the biggest gap is in use of value metrics. This could be read to mean that people become believers after they measure the value of a central database, or that people set up measurements after they decide they need to prove their beliefs. My theology is pretty rusty but surely there’s a standard debate about whether faith or action comes first.

Regardless of the exact reasons for the different attitudes, the fundamental insight here is that people who consider a single view important act quite differently from people who don’t. This means that if you’re trying to sell a customer database, either in your own company or as a vendor, you need to understand who falls into which category and address them in appropriate terms. And I guess a little prayer never hurt.

3. Tools matter.

We’ve already seen that believers have more databases and have more tools, so you won’t be surprised that using more tools correlates directly with having or planning a database.


Let's introduce the tools formally.  Here are the exact definitions we used and the percentage of people who said each was present in their organization:


Of course, the really interesting question isn’t which tools are most popular but which actually contribute (or at least correlate) with deploying a database. We looked at tool use for three groups: people with a database, people planning a database, and people with no such plans. 

Over all, results for the different tools were pretty similar: people who used each tool were much more likely to have a database and somewhat more likely to plan to build one. The pattern is a bit jumbled for Centers of Excellence and technology standards, but the numbers are small so the differences may not be significant. But it's still worth noting that Centers of Excellence are really tools to diffuse expertise in using marketing technology and don’t have too much to do with actually creating a customer database.

If you’re looking for a dog that didn’t bark, you might have expected companies using agile to be exceptionally likely to either have a database or be planning one. All quiet on that front: the numbers for agile look like numbers for long term planning and value metrics, adjusting for relative popularity. So agile is helpful but not a magic bullet.

What have we learned here? 

Clearly, we've learned that management tools are important and that long term planning in particular is both the most common tool and the best predictor of success.

We also found that tools aren’t enough: managers need to be convinced that a unified customer view is important before they’ll invest in a database or tools to build it.

And, going back to the beginning, we saw that there are many forms of unified data, varying in how data is shared, where it’s stored, how it’s unified, and how it’s used. While it’s easy enough to assume that tight, real-time integration is needed to provide unified omni-channel customer experiences, many marketers would be satisfied with much less. I’d personally hope to see more but, as every good missionary knows, people move towards enlightenment in many small steps.