SEO Blog - Internet marketing news and views  

Talking real time personalized search

Written by David Harry   
Monday, 22 February 2010 13:31

An interview with SurfCanyon

That title is a doozy, but it does exist! I recently had the extreme pleasure of meeting the gang at SurfCanyon, a VERY personal search engine (and Firefox add-on). I was first turned onto this groovy tool more than a year ago, and it was a thrill to hear from them. When it comes to personalization (and even social search) these folks really do take it to a new level. If you haven't already, be sure to give it a try (search engine here and Firefox add-on here).

The following is an interview that sprang from those chats (more on SurfCanyon at the end).

Enjoy!

Dave; Just for the sake of getting things rolling and for the readership, what are the goals of implicit user feedback? I’ve generally noted it as seeking to understand user satisfaction (or even intent) in accomplishing a given task in a search session. Can you tell us your own?


That’s exactly correct. Furthermore, the “implicit” part seeks to achieve this without directly asking the user questions regarding his or her intent, but rather by solely observing user behaviour. For example, if every time you order a slice of pizza it is vegetarian, I might implicitly assume that your favourite type of pizza is vegetarian. Should you ever ask me to order a slice of pizza for you, without specifying what type, I might therefore deliver vegetarian.



Dave; I like that one... lol... Interestingly enough, when we order a pizza from our local shop, they tend to just say, “The usual?” when we call… nice explanation. Ok, in your paper you mentioned early on that:

Joachims et al used eye tracking studies combined with manual relevance judgements to investigate the accuracy of clickthrough data for implicit relevance feedback [4]. They conclude that clickthrough data can be used to accurately determine relative document relevancies.

While he surely knows what he’s talking about, I’ve often found a lot of the data sets used to be in isolation and of relatively small data size. But more importantly, what are your thoughts on click bias? It can really cause problems with click data.


Yes, Professor Joachims is a much respected and widely cited expert in the field of information retrieval. Here is what he says in his paper about the differences between implicit and explicit feedback:

“In contrast to explicit feedback, such implicit feedback has the advantage that it can be collected at much lower cost, in much larger quantities, and without burden on the user of the retrieval system. However, implicit feedback is more difficult to interpret and potentially noisy.”

Part of the noise comes from “position bias” or “trust bias,” which means that, irrespective of the quality of the results, people are generally more inclined to click results that are more highly ranked. The logarithmic graph in Figure 4 of our research paper has a pretty clear depiction of this. This can be a very large problem when it comes to determining the relative relevancy of search results; however, there are techniques to compensate for this, such as swapping the order of results and comparing relative click-through rates (CTRs).
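For the code-minded folks, here is a rough sketch of the pair-swapping idea Mark describes (similar in spirit to the FairPairs technique from the IR literature). All of the function names, counters and the 50/50 swap probability are my own illustration, not Surf Canyon's actual implementation:

```python
import random
from collections import defaultdict

# Illustrative sketch of pair-swapping to compensate for position bias:
# for a fraction of impressions, two adjacent results are shown in
# swapped order, so each document of the pair is observed at both
# positions. Comparing click rates across presentations then gives a
# relative relevance signal largely independent of rank.

clicks = defaultdict(int)       # (doc_id, position) -> click count
impressions = defaultdict(int)  # (doc_id, position) -> impression count

def present_pair(doc_a, doc_b, swap_probability=0.5):
    """Return the pair in original or swapped order at random."""
    if random.random() < swap_probability:
        return doc_b, doc_a
    return doc_a, doc_b

def record(doc_top, doc_bottom, clicked_doc=None):
    """Log impressions for both positions and the click, if any."""
    impressions[(doc_top, "top")] += 1
    impressions[(doc_bottom, "bottom")] += 1
    if clicked_doc == doc_top:
        clicks[(doc_top, "top")] += 1
    elif clicked_doc == doc_bottom:
        clicks[(doc_bottom, "bottom")] += 1

def preferred(doc_a, doc_b):
    """Prefer the document with the higher CTR from the lower (bottom)
    position, where position bias works against it."""
    def bottom_ctr(doc):
        imp = impressions[(doc, "bottom")]
        return clicks[(doc, "bottom")] / imp if imp else 0.0
    return doc_a if bottom_ctr(doc_a) >= bottom_ctr(doc_b) else doc_b
```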

There are, however, other types of bias. Videos and pictures seem to draw disproportionate attention, so results of these types will receive an elevated number of clicks. This type of “distraction” is an issue with Universal Search. Using a pure CTR metric, video results would predominantly appear at the top of the page, so other factors need to be taken into account.

Figure 5 in our research paper depicts the CTRs of results that we promote to page 1 as a function of their initial rank. What’s interesting to note is how flat the curve is compared to Figure 4. This is essentially demonstrating that results as deep as page 20 can, depending on the particular user’s information need, be as relevant as those on page 1. Because of position bias, and the considerable amount of effort that would be required to dig that deep, results on page 20 are, not surprisingly, virtually never selected.

This is exactly the issue that our technology seeks to address.




Dave; I certainly agree with many of the issues that come into play there. What are your thoughts on click bias and gleaning valuable data from click data?

And of course, what are your thoughts on academic testing generally being done on small, isolated data sets? For example, once more from your paper, “Fox et al. used a browser add-in to track user behaviour for a volunteer sample of office workers [5].”


There is no doubt that tremendous value can be extracted from click data. Google et al. do this every day to improve their results pages. Since implicit feedback is so noisy, however, a considerable amount of data is required in order to run the experiments necessary to remove the click bias. Fortunately for Surf Canyon, the success of our browser application and search engine has enabled us to attain 1 million queries a day, which is enough to execute the necessary experiments to determine, in a relatively short amount of time, with statistical significance, the validity of our claims regarding improving relevancy.
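A quick aside for the stats geeks: with click volumes that large, a plain two-proportion z-test is one textbook way to check whether a CTR difference is real or just noise. The numbers below are made up, and this is a generic sketch rather than a description of Surf Canyon's actual methodology:

```python
from math import sqrt

def two_proportion_z(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test: is the CTR of variant A (e.g. re-ranked
    results) significantly different from variant B (original order)?"""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled proportion under the null hypothesis of equal CTRs.
    p = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p * (1 - p) * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# Example with made-up numbers: at around a million queries a day, even
# small CTR differences clear the usual |z| > 1.96 bar for 95% confidence.
z = two_proportion_z(clicks_a=5_300, impressions_a=100_000,
                     clicks_b=5_000, impressions_b=100_000)
print(f"z = {z:.2f}")
```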

Regrettably, there is a remarkable amount of fascinating academic research that is suffering from a lack of real-world user data. People spoke about this quite a bit at the last SIGIR conference in Boston. As such, we’ve actually been reaching out to various academics to talk about how our system could potentially be a platform for running some of their experiments. We’re hoping that there’s an opportunity here for mutually beneficial collaborations.



Dave; Well that’s good to hear… I really would be a kid in a candy store at SIGIR.

Another area I’d be interested in your thoughts on is elements such as bounce rates. Compared to click data and even time on page, I’ve never felt there was much value to bounce rates as an implicit signal, for a few reasons… Do you look at them in your implicit data signals?


As we discussed via email, we refer to the “bounce rate” as the “dwell time,” which is the amount of time a user spends looking at a selected result.

Our tests have determined that dwell times are a significant feature for determining relevancy and so we use this feature in our calculations.



Dave; Oh great, my readers will be thrilled to hear that (I’ve not been a fan of it as a signal)

Speaking of ‘time on page’… if we move away from the academic testing, can this really be a good signal in the wild? What if the user wanders away from the computer for 10 minutes and then returns to the search results (time away from the SERP)? That once more starts to cause noise, doesn’t it?


People wandering away from their computers will inevitably introduce noise, and the signal can’t even be used for people who open documents in a new tab or browser window. Nevertheless, across large quantities of data it’s a good indication of relevancy. Direct Hit, which was a Boston-based search engine founded in 1998 and acquired by Ask Jeeves in 2000, pioneered looking at dwell times as an indication of relevancy.

Unless I’m mistaken, they looked at dwell times of 2 seconds or more as a Boolean indication of interest. We have also done internal testing that has indicated something similar, although we’ve expanded on the Direct Hit model. It should be kept in mind, however, that dwell time is just one of many features indicating relevancy and will always have to be appropriately weighted.
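To make that concrete, here is a tiny sketch of what a Direct Hit-style Boolean dwell-time signal could look like. The 2-second cutoff is the figure Mark mentions above; the data structure and function names are purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

DWELL_THRESHOLD_SECONDS = 2.0  # Direct Hit-style cutoff mentioned above

@dataclass
class ResultView:
    doc_id: str
    click_time: float              # when the user left the SERP for this result
    return_time: Optional[float]   # when the user came back to the SERP, if ever

def dwell_seconds(view):
    """Dwell time is only measurable when the user returns to the SERP in
    the same tab; new-tab opens and walk-aways simply add noise."""
    if view.return_time is None:
        return None
    return view.return_time - view.click_time

def interested(view):
    """Boolean interest signal: dwell at or above the threshold counts as
    a positive indication; an immediate bounce counts as negative."""
    dwell = dwell_seconds(view)
    if dwell is None:
        return None  # unknown; exclude this view from the feature
    return dwell >= DWELL_THRESHOLD_SECONDS
```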



What is SurfCanyon?

Dave; Ok, enough geek talk… let’s get into your own offering, SurfCanyon. For the folks riding along, let me quote the description from the research paper:

The goal of the SurfCanyon technology is to use implicit user behaviour to predict which unseen documents in a collection are most relevant to the user and to recommend these documents to the user.

How did you guys get the idea for, or motivation to come up with, SC? It is a pretty groovy tool; I’d be curious about its history.


The idea for Surf Canyon was born out of frustration, which is not uncommon for entrepreneurs. In April 2006 I picked up a random magazine in an airport and read the first part of a very interesting article on the plane. I wanted to read the rest at home, but unfortunately left the magazine on the plane. Strangely, I couldn’t remember the name of the magazine, the title of the article or the author’s name. However, with some key concepts I figured it’d be easy enough to find it online. After an hour of searching I gave up.

A few weeks later I was thinking about my experience with reformulating my queries and digging through the results when I had an epiphany. Everyone’s familiar with the exercise of adding words, removing words, adding quotes, removing quotes and so on. Everyone is probably also familiar with the drudgery of digging through page after page of results. Since I was clicking all sorts of results during the process of my reformulations, trying to find the “magical” query that would bring forth my document, why shouldn’t the search engine take my clicks into account and immediately exploit them by re-ranking the result set? If I get 10 results on a page, but only click one or two, why shouldn’t the search engine figure out, based on that, which results on pages 2 to 20 should be relevant to my information need?

It turns out that this is not easy to achieve, but after four years of effort we’re proud to say that we’ve produced something which works exceptionally well.



Dave; Without giving away anything proprietary, can you talk some about the types of implicit data you look at with SC? And which ones you feel are most valuable?


In addition to the dwell time, which I’ve already mentioned, we logically look at the user’s click behaviour. The selection of a result is a very strong indication of real-time intent, so our model weights that heavily.

Additionally, we also look at what the user does not click. If the first selection is result #5, then we infer that result #5 is of interest to the user. Simultaneously, we infer that results #1-4 are of less interest. We have developed algorithms to appropriately weight these signals in order to optimize the accuracy of the re-ranking.
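For the geeks in the crowd, here is a toy illustration of that "clicked vs. skipped" idea: treat the clicked result as positive evidence, the skipped results above it as weaker negative evidence, and re-score the unseen results by snippet similarity. The weights and the bag-of-words similarity below are my own stand-ins, not Surf Canyon's actual algorithm:

```python
# Toy re-ranking sketch (not Surf Canyon's actual algorithm): a click on
# result #5 is treated as positive evidence, the skipped results #1-4 as
# weaker negative evidence, and deeper unseen results are re-scored by
# their word overlap with those examples. The weights are arbitrary.

CLICK_WEIGHT = 1.0   # strength of the positive (clicked) signal
SKIP_WEIGHT = 0.3    # weaker strength of the negative (skipped) signal

def tokens(snippet):
    return set(snippet.lower().split())

def overlap(a, b):
    """Jaccard similarity between two snippets' token sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def rerank(unseen, clicked_snippet, skipped_snippets):
    """Re-order unseen results (list of (doc_id, snippet) tuples) by how
    much they resemble the clicked snippet and differ from skipped ones."""
    def score(item):
        _, snippet = item
        pos = CLICK_WEIGHT * overlap(snippet, clicked_snippet)
        neg = SKIP_WEIGHT * sum(overlap(snippet, s) for s in skipped_snippets)
        return pos - neg
    return sorted(unseen, key=score, reverse=True)
```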

The feature set is naturally much wider than described here, but the end result is that we feel we’ve developed a system that demonstrably improves the relevancy of search results. Going back to our research paper again, Figure 6 is one statistically significant measure of how we enhance the relevancy of Google’s search results. The blind, control-group study is an apples-to-apples comparison of how search results that are re-ranked using our algorithm are more likely to be clicked than results presented in the order of their original rank. We use the likelihood of a result being selected as a proxy for relevancy, as discussed in the first question above.

In this study the CTRs increased by 25-40%, depending on the amount of implicit intent data present in the current query. We consider that to be a significant enhancement.



Dave; I hate to ask, but it has to be done…. The main concept is not being a search engine, but adding a layer to existing ones. There must be implications should the engines start doing it themselves, and possibly doing it better. Do you worry about, or plan for, that day? Or is an acquisition/working relationship the preferred evolution?


The definition of a search engine can vary widely depending on with whom you speak, but for our purposes, and I guess yours as well, a search engine has to crawl, index and retrieve. Our application doesn’t do any of those things; therefore it’s not a search engine, but rather a layer on top of search. As such, it seemed natural to create a browser add-on that would make this the case both figuratively and literally.

This approach has enabled us to focus on the core of our technology (re-ranking search results in real-time) while offering users the ability to experience our benefits on one of the major search engines. The end result is that we have several hundred thousand users who, as mentioned previously, send a million queries through our system daily.

That being said, www.SurfCanyon.com is presented as a search engine. The algorithmic and sponsored results are all provided by third-parties, such as Microsoft or Yahoo!, and no downloadable application is required to experience the benefits of real-time search personalization.



Let's get social

Dave; Nicely done… fancy footwork there my friend...(dancing with the SERPs?). What else? Hmmmm… ah, another fav. The social stream.

One area a lot of my search geek friends and I debate is the world of so-called ‘real time’ and ‘social search’. As you know, I’ve already talked to your friends at OneRiot, and have reservations and hopes for this realm. For starters, are you a fan of social search implementations we’re seeing, and why?


It’s my understanding that “real-time” and “social” search are two different things. In fact, I noticed on your blog (“Should SEOs care about Real Time Social Search?”) that you clarify the distinction. For my part, I agree with Tobias Pegg’s assessment: there is intrinsic value, in many cases, in delivering results with a heavy temporal component.

In other words, delivering results based on information or events that are relatively recent. People are often searching for breaking news, so being able to deliver that with standard keyword search is important.



Dave; What is the goal of social search? In your mind. One Bing engineer said its advantages lay in delivering, “content that is fresh, local, and under-served by general web search.” As well as adding “trust and personal interaction”. How do you see the role of RTS/SS?


For a long time search has had a “social” element to determining relevancy. The concept of looking at the relative CTRs of documents in a result set is not new, and this data is naturally based on the interactions of many people with the results. An interesting question is whether or not an individual’s particular social graph can be exploited to further refine relevancy.

Rather than just looking at the CTRs from the entire universe of people, these could be weighted based on the CTRs, and perhaps other preferences, of an individual’s friends and colleagues, as determined through social networks. Aardvark recently released a research paper suggesting some of the advantages of this approach.
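As a purely hypothetical sketch of that idea, a weighted CTR could simply count clicks from the searcher's friends more heavily than clicks from strangers. The weights and data structures here are arbitrary illustrations of my own, not a description of Aardvark's or anyone else's system:

```python
# Hypothetical sketch of weighting clicks by social proximity: clicks
# from the searcher's friends count more than clicks from strangers.
# The weights are an arbitrary choice for illustration.

FRIEND_WEIGHT = 5.0    # a friend's click counts five times as much
STRANGER_WEIGHT = 1.0

def social_ctr(doc_clickers, doc_impressions, friends):
    """Weighted CTR: doc_clickers is a list of user ids who clicked the
    document, doc_impressions the number of times it was shown, and
    friends the set of user ids in the searcher's social graph."""
    if doc_impressions == 0:
        return 0.0
    weighted_clicks = sum(
        FRIEND_WEIGHT if user in friends else STRANGER_WEIGHT
        for user in doc_clickers
    )
    return weighted_clicks / doc_impressions
```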



Dave; Personally I don’t find the social stream always works in search. I tend to see it as more beneficial for breaking news, entertainment and possibly reviews and the like. There are also some who see it as more effective for long-tail informational query spaces. Where do you see it being effective?



I would agree with you – it would seem to be more effective for real-time information needs. That being said, we don’t do social search and haven’t had the opportunity to look at any data.



Dave; And to wrap things up (for now?)….

The future; of course we just have to end this back on SurfCanyon. I’ve been using it for nearly a year now and have seen some of the additions that came along. What does the future hold? What are you guys working on now or have planned for the future?


First, thank you for being a loyal user of our application! After so much hard work, it’s always gratifying to hear from people who appreciate what we’re doing, as I would imagine that you do since you’ve been using it for a year.

Recently we’ve been putting additional effort into our own search engine, available at www.SurfCanyon.com. While the browser extension offers portability, there are some interesting things that we can do with our own search page which aren’t otherwise possible. For example, we’re able to “pre-personalize” the top 10 results, as opposed to having to rely solely on what’s presented by Google et al.

The pre-personalization is particularly useful when carrying over the real-time user model during a reformulation. In the short- to medium-term we’re going to continue to optimize our own search engine.



And there we have it gang… I sure hope you all enjoyed this uber geeky chatter and I’d like to take a moment to say a HUGE thanks to Mark. It's always great fun to talk IR geekiness. If you haven't checked out SC, please do. This is no BS peeps, I really have been using their toys for more than a year. Great stuff... even if yer not a mad search geek like me...

Mark Cramer

Mark Cramer is the CEO of SurfCanyon and has more than 16 years of technology industry experience, from engineer to executive. Please do hook up with him and the gang on Twitter or via LinkedIn. If you want more, be sure to check out the SC Blog.


About SurfCanyon - As the quantity of information on the internet expands exponentially, it is becoming increasingly difficult, if not impossible, for search engines to provide all relevant results on page one, while also eliminating those that are irrelevant. Furthermore, providing millions of matched documents is of little use when few people venture past the first page.

Surf Canyon develops "real-time search personalization," a technology that disambiguates the user's intent post-query, and, in real time, brings forward to page one the relevant results that might otherwise remain buried. By transforming static lists of links into dynamic search pages that automatically re-rank results on the fly, users are able to more quickly and easily find pertinent information buried among the irrelevant results, significantly accelerating the search process. This patent-pending technology, available at our search site or as a browser add-on for portability to the major search engines, is also known as Discovery for Search™.

 

Comments  

 
Mark Cramer - 2010-02-22 18:20
It was an honor and a pleasure to do a little geek talk with you. I really enjoy taking off my Marketing hat and digging into the nerd stuff. Thank you for writing about Surf Canyon and it'll always be my pleasure to respond to any questions or comments from anyone who's interested in what we're doing. All the best.
 
 
Ashesh Bharadwaj - 2010-02-22 18:54
Thanks Dave and Mark for this ultra high-level informational discussion.

Seems SurfCanyon would be very useful for me. These days I usually end up on 5-10 pages for my queries. It seems Google fills the first few pages with authoritative sites' crap.
 
 
Dave - 2010-02-23 15:40
@Mark - thanks again for dropping in. I am gonna play with the search interface more now, as I've been using the FF addon up until now. I anticipate many more good geeky chats sir... Some very interesting approaches going on there, pleasure to know ya!

@Ashesh - dude, it's a handy tool that evolves with you. After a few weeks you should def see some improvements in relevance (and can turn it on/off for reg SERP analysis). What's also interesting is that it may represent a possible future for search... so on a professional level it is an interesting study... think OneRiot meets SurfCanyon :0)
 
 
Mark Cramer - 2010-02-23 17:06
Dave - I didn't mention this earlier, but the "OneRiot meets Surf Canyon" connection is perhaps a little stronger than you thought. If you install our application and then go to http://my.SurfCanyon.com you can check the "OneRiot" box and then save Preferences. Now, when available, OneRiot real-time results will be added to the Google SERP. You might want to check that out.
 
