In 2002, I wrote an academic paper called “GAAPP: A Generic Adaptive Architecture for Profiling and Personalization.” In it, I described a personalization platform that used multiple technologies to solve any number of problems in the personalization space. Today, I’ll talk a little about that architecture and emerging companies that are taking similar approaches.
GAAPP had several specific goals:
- Allow a Web site to use multiple personalization systems at the same time
- Abstract the personalization systems from the front-end Web site to allow for technology-blind implementations of the infrastructure
- Provide an adaptive model that fine-tunes itself based on several heuristics, including (in no specific order):
  - Recommendation system performance
  - Recommendation system granularity
  - Company (marketing) control
  - System load constraints
  - User control
- Provide a system that allows each component’s effectiveness to be tracked, allowing algorithms to compete against each other
- Provide a unified reporting architecture for the company to use to understand the effectiveness of the system and its components
- Separate the process of gathering user knowledge from the process of generating recommendations
- Allow real-time and offline systems to coexist to maintain system response time when a user is interacting with the system, and allow scalability for processes that don’t need to operate in real time
- Allow user-specified data (explicit data) and system-inferred data (implicit data) to coexist and be completely controlled and edited by the end user
The system’s core architecture is too complicated to describe here, but the basic premise is easy to understand: Instead of using one core technology (as most companies’ platforms do), the platform consists of a bus architecture on which many different algorithms and technologies sit. Each algorithm registers itself with the platform and tells the platform what kind of input it can receive and what output it gives.
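The registration idea can be sketched in a few lines of code. This is a minimal illustration of a bus that routes data to whichever algorithms declare a matching input and output type; all class, method, and algorithm names here are hypothetical, not taken from GAAPP or any real product.

```python
class PersonalizationBus:
    """Toy bus: algorithms register the input they consume and the
    output they produce, and the bus routes data to whatever matches."""

    def __init__(self):
        self._algorithms = []

    def register(self, name, input_type, output_type, run):
        # Each algorithm tells the platform what it takes and what it gives.
        self._algorithms.append(
            {"name": name, "input": input_type, "output": output_type, "run": run}
        )

    def recommend(self, input_type, output_type, data):
        # Send the data to every registered algorithm with matching types.
        results = {}
        for algo in self._algorithms:
            if algo["input"] == input_type and algo["output"] == output_type:
                results[algo["name"]] = algo["run"](data)
        return results


bus = PersonalizationBus()
# Placeholder "collaborative filtering" logic, just to show the plumbing.
bus.register("cf_ratings", "user_ratings", "product_correlations",
             lambda ratings: sorted(ratings)[:3])
print(bus.recommend("user_ratings", "product_correlations", ["b", "a", "c"]))
# prints {'cf_ratings': ['a', 'b', 'c']}
```

The point is only the shape of the contract: the front end asks for an output type, and the bus decides which algorithms can serve it.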
For example, a collaborative filtering algorithm might take user ratings as an input and give product correlations as an output. A statistical algorithm based on purchases would take sales data as input and give product correlations as output. An algorithm responsible for recommending articles to someone might take metadata as an input and give either related metadata or references to related articles as output.
The point of this system is to allow a company to make the best use of available data. By feeding the data into the algorithms that are best suited for that data, the platform can give more accurate results. Plus, as different types of data become available, more sophisticated algorithms can be used.
For example, product recommendations for someone who has just started using the site can’t be based on that person’s purchase history, so the platform would choose an algorithm that’s based broadly on click-through behavior. But as the user buys products, a more finely tuned algorithm that takes purchase behavior into account could be used.
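That fallback logic is easy to picture in code. The function and algorithm names below are invented for illustration, and the purchase threshold is an assumption, not something specified in the original architecture.

```python
# Assumed cutoff for "enough" purchase history; purely illustrative.
MIN_PURCHASES = 3

def choose_algorithm(user_profile):
    """Pick a broad click-through algorithm for new users, and a
    purchase-based one once the user has a real purchase history."""
    if len(user_profile.get("purchases", [])) >= MIN_PURCHASES:
        return "purchase_based_recommender"
    return "clickthrough_recommender"

print(choose_algorithm({"purchases": []}))               # clickthrough_recommender
print(choose_algorithm({"purchases": ["a", "b", "c"]}))  # purchase_based_recommender
```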
Additionally, if multiple algorithms solve a similar problem, the system can pit them against each other and monitor the success rate of each algorithm (determined by sales or some other metric).
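One way to sketch that competition is a simple success-rate tracker that sends most traffic to the current leader while reserving a slice for exploration. The metric (conversions per recommendation shown) and the selection rule are my assumptions for this example, not details from the paper.

```python
import random

class AlgorithmTournament:
    """Track each algorithm's success rate and favor the leader,
    keeping a small exploration budget so laggards still get tested."""

    def __init__(self, names, explore=0.1):
        self.stats = {n: {"shown": 0, "converted": 0} for n in names}
        self.explore = explore  # fraction of traffic used for exploration

    def rate(self, name):
        s = self.stats[name]
        return s["converted"] / s["shown"] if s["shown"] else 0.0

    def pick(self):
        if random.random() < self.explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.rate)

    def record(self, name, converted):
        # converted could mean a sale, a click, or any business metric.
        self.stats[name]["shown"] += 1
        if converted:
            self.stats[name]["converted"] += 1
```

In practice the "converted" signal would come from whatever metric the business cares about, which is exactly why the reporting architecture needs to sit alongside the recommendation bus.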
Fast-forward to 2008. Personalization companies are finally realizing they can’t bet their entire company on one magic algorithm, which is exactly what the collaborative filtering companies of the ’90s did.
Companies like MyBuys and richrelevance are taking the multi-algorithm approach. I don’t know what their architectures look like; the above describes my GAAPP architecture, not the architecture of any company in the marketplace. But these companies offer several features similar to the platform I describe: they use multiple algorithms to solve different problems, and they can run tests pitting algorithms against each other or weight the results of different algorithms based on how each user responds to the recommendations.
Personalization has come a long way. In the early days, everyone thought collaborative filtering was the same thing as personalization. Back in 1997, when I was at Open Sesame (a competing personalization company that didn’t use collaborative filtering), we tried to tell the world that personalization is not one technology but a number of enabling technologies, each solving a specific problem.
Personalization is a mindset, an ethos, and a way of crafting a user experience. Any number of technologies must work in concert to achieve that user experience. I developed GAAPP with that philosophy, and I’m excited that companies have come to the same conclusions independently and are now taking a multi-algorithm approach.
Personalization will never solve the world’s problems, but a smart collection of algorithms working together to deliver product recommendations and other personalized experiences is a much better approach than the one first-generation personalization companies took.
Questions, thoughts, comments? Let me know.
Until next time…