SEO Blog - Internet marketing news and views  

Is Google REALLY using bounce rates as a ranking signal?

Written by David Harry   
Tuesday, 09 December 2008 08:51

Myth Busting 101 - taking media musings as gospel

Ooo look… Search Engine Land is getting into the behavioural metrics game – and creating more headaches.

Now, I love the fine folks at SEL because it’s a steady stream of search goodiness and they’ve been known to promote the Trail now and again. And while I don’t know Eric (Enge), I have a healthy respect for much of his work. But there is a time and a place to whine, even at this venerable publication and at Eric (especially since we recently gave Search Engine Watch and Bruce Clay a hard time on this).

You see, we just turned the corner on the whole ‘rankings are dead’ path and things have tempered, which is nice to see. This time out, more behavioural territory has been broached, with a theory of bounce rate as a (serious) ranking factor. This line of thinking, while logically sensible, is problematic from the search engines’ point of view.

Bounce rates as a ranking factor

Do Search Engines Use Bounce Rate As A Ranking Factor? (SEL Article)

The ‘study’ as cited, from SEO Black Hat, has been done to death, and there was in no way any form of definitive research... if anything, it was contradictory. I have been writing about search patents relating to behavioural metrics for some 18 months, and while there is plenty of evidence of interest from Google, the fact remains that, as one Monsieur Cutts has called them, they are dirty/noisy signals (even an interview Eric did with Matt contains such assertions).

 

Myth Busting 101: Bold statements

Written by David Harry   
Tuesday, 07 October 2008 11:19

Search Engine Watch reports – Google’s algorithm is shifting

OM-friggen-G, I am just stunned. Top resources in the SEO world simply SHOULD NOT be making bare-faced statements. They should not be publishing material that takes suspicions and suppositions and states them as fact. Furthermore, they should be careful what advice they give as a respected source.

For starters, there are MANY algorithms… stating ‘the algorithm’ is a DUH… and they are always being tweaked, but whatever, bigger fish to fry here…

Now, I am the last guy who would go off about peeps recognizing and discussing the evolution of search. Certainly not when behavioural factors are involved, as they are a common theme around here. But to write things such as those published today over at Search Engine Watch – it is simply bad form. At the very least, put up a disclaimer or temper the language to something less assertive and assumptive. Or….

Show me your data (and I'll show U mine)

 

Google granted a very Cuil search patent

Written by David Harry   
Wednesday, 01 October 2008 00:49

Yet another patent on phrase based indexing and retrieval (YaPaIR)

Unfortunately the Cuil buzz has cooled, but at least co-founder Anna Patterson is back on the radar. If only in name, and in memories of what used to be. And if that radar is for patent droolers and semantic simians... then it is a name of note.

You see, things were different back in Oh4 when she was working with Google and toiled on a machine learning model for understanding concepts and semantic relationships through phrasing. Ah yes, I remember it like it was yesterday….

That’s Cuil

Oh, we were just talking about phrase based indexing and retrieval last week? You say Bill wrote about it as well? And we even managed to slip it into yesterday’s rant? Wow… it just doesn’t want to go away, huh?

Google's Cuil ideas

 

Phrase Based Indexing and Retrieval one more time

Written by David Harry   
Tuesday, 16 September 2008 10:21

How Cuil is that?

It seems phrase based IR (PaIR) is back – momentarily at least. Super search geek Bill Slawski had a post today about a late entry to the set of patents from Google on the topic (see; phrase based indexing and clusters), which is as good a reason as any to revisit it.

Simply put, it is one of the methods for understanding the relevance of concepts and topical anomalies through phrases and semantics. It is a probabilistic learning model which seeks to add more relevance to the ranking process. A large number of people search in phrases, not singular keywords, and this method certainly has some merit.

For more see these;

Phrase based optimization resources (here on the trail)
Phrase based indexing and retrieval (Reliable SEO)
Phrase based IR – a second look (Van SEO Design)
Phrase Based information retrieval and spam detection (SEO by the Sea)

This one, from what I gather, deals with clustering concepts/related phrases and using occurrences to create a ranking signal. For any given topic (query space) there are related phrases that will also occur in a given document. By looking at ratios of related phrases, clusters can be created to evaluate other pages and so forth… it seems to have some ranking mechanisms not covered previously.
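To make the ratio idea concrete, here's a toy Python sketch of the co-occurrence notion – my own illustration, not the patent's actual method, and all phrase data and thresholds are invented. It finds phrases that reliably co-occur with a target phrase across a document set, then scores a new document by how much of that related-phrase cluster it contains.

```python
from collections import Counter

# Toy sketch of co-occurrence clustering (illustrative only, not Google's
# actual implementation). Documents are modelled as sets of phrases.

def related_phrases(docs, target, min_ratio=0.5):
    """Return phrases that co-occur with `target` in at least
    `min_ratio` of the documents that contain `target`."""
    containing = [d for d in docs if target in d]
    counts = Counter(p for d in containing for p in d if p != target)
    n = len(containing)
    return {p for p, c in counts.items() if n and c / n >= min_ratio}

def score(doc, related):
    """Toy ranking signal: fraction of the related-phrase
    cluster that actually occurs in the document."""
    if not related:
        return 0.0
    return len(related & set(doc)) / len(related)

docs = [
    {"phrase based indexing", "information retrieval", "google patent"},
    {"phrase based indexing", "information retrieval", "ranking"},
    {"phrase based indexing", "google patent"},
    {"link building", "anchor text"},
]
rel = related_phrases(docs, "phrase based indexing")
print(sorted(rel))   # ['google patent', 'information retrieval']
print(score({"information retrieval", "ranking"}, rel))  # 0.5
```

A page covering half of the expected cluster scores 0.5; a page stuffed with the target phrase but none of its related phrases scores 0 – which is roughly the spam-detection angle of the earlier patents in this family.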

 

Do link spammers leave footprints?

Written by David Harry   
Tuesday, 24 June 2008 08:44

Microsoft on link spam; using temporal tracking

A recent Microsoft search patent came out for a system which detects spam websites by looking at the changes in link information on a given page/set of pages over time. We recently covered some potential ways of going about this with some analysis of Google in ‘Historical ranking factors for link builders’ – be sure to give that a read as well if you’re in the mood to saunter down some related journeys. This time we have;

Detecting web spam from changes to links of websites
Filed: December 14, 2006 | Published: June 19, 2008

As we know from the last excursion, link activity over time can unlock potential spam signals for search engines to use. This can be done by looking at a variety of features of the associated link information. As with any such analysis, the system uses a probabilistic model to judge what is and is not considered a spammy link profile.
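To give a flavour of what "features of the associated link information over time" might look like, here's a minimal Python sketch – my own guess at the general shape, not the patent's actual feature set. It compares successive link snapshots of a page and emits per-interval change ratios; a sudden wholesale replacement of links looks very different from organic growth.

```python
# Illustrative sketch (not from the patent text): extract temporal
# change features from successive crawls of a page's link set.

def link_change_features(snapshots):
    """snapshots: list of sets of link URLs, ordered by crawl date.
    Returns per-interval (added_fraction, removed_fraction, churn)."""
    feats = []
    for old, new in zip(snapshots, snapshots[1:]):
        added = len(new - old)
        removed = len(old - new)
        feats.append((
            added / max(len(new), 1),              # share of links that are new
            removed / len(old) if old else 0.0,    # share of old links dropped
            (added + removed) / (len(old | new) or 1),  # total churn
        ))
    return feats

# Organic growth vs. a page that swaps in hundreds of links overnight
steady = [{"a", "b"}, {"a", "b", "c"}]
spiky = [{"a"}, {f"spam{i}" for i in range(200)}]
print(link_change_features(steady))  # low ratios
print(link_change_features(spiky))   # [(1.0, 1.0, 1.0)] – maximal churn
```

A downstream probabilistic model would then consume feature vectors like these, rather than raw link counts, to flag a spammy profile.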

This is not limited to inbound links, but extends to outbound links as well (because a link profile is more than just inbounds, right?). The main problems that web spam creates for a search engine are not only the obvious lack of meaningful search results, but also the bandwidth/spidering resources wasted crawling/indexing spammy sites. So catching it means some yummy SERPs, and it's good for the bottom line as well!!

 

Link spam temporal footprints

“"Spamming" in general refers to a deliberate action taken to unjustifiably increase the popularity or importance of a web page or web site. In the case of link spamming, a spammer can manipulate links to unjustifiably increase the importance of a web page. For example, a spammer may increase a web page's hub score by adding out links to the spammer's web page.”

Some examples of tactics link spammers may use include:

  1. Create a copy of an existing link directory to quickly create a very large out-link structure.
  2. Provide a web page of useful information with hidden links to spam web pages.
  3. Post links on sites that allow visitors to do so, such as blogs and web directories, to directly or indirectly increase the importance of the spam web pages.
  4. Set up a link exchange mechanism in which a group of spammers' web sites point to each other to increase the importance of the web pages of the spammers' web sites.

By looking at the link profiles of spam sites, the search engine can create a template of their linking activity to enable further algorithmic seek-and-destroy adaptations.

As with many probabilistic systems, a set of training documents/websites can be used to train valuations of a spammy link profile. These can come from submitted sites that received a manual review and were deemed to be spam websites. These become the base set used for teaching the algorithm(s) what to look for when crawling.
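The training step can be sketched with a tiny naive-Bayes-style model in Python – again my own simplified illustration, with invented feature names, not anything lifted from the patent. Manually reviewed sites become labelled examples, and the model learns which link-profile features lean spammy.

```python
import math

# Toy sketch of learning from manually labelled spam/ham sites.
# Each site is a set of binary link-profile features (names invented).

def train(examples):
    """examples: list of (features: set[str], is_spam: bool).
    Returns per-feature smoothed log-likelihood ratios plus a log prior.
    Assumes both classes are present in the training set."""
    spam = [f for f, y in examples if y]
    ham = [f for f, y in examples if not y]
    model = {}
    for feat in set().union(*(f for f, _ in examples)):
        p_s = (sum(feat in f for f in spam) + 1) / (len(spam) + 2)  # Laplace
        p_h = (sum(feat in f for f in ham) + 1) / (len(ham) + 2)
        model[feat] = math.log(p_s / p_h)
    return model, math.log(len(spam) / len(ham))

def spam_score(model, prior, features):
    """Higher = more spam-like (sums present-feature terms only)."""
    return prior + sum(w for f, w in model.items() if f in features)

examples = [
    ({"copied_directory", "link_exchange"}, True),
    ({"link_exchange", "hidden_links"}, True),
    ({"editorial_links"}, False),
    ({"editorial_links", "hidden_links"}, False),
]
model, prior = train(examples)
print(spam_score(model, prior, {"copied_directory", "link_exchange"}))  # > 0
print(spam_score(model, prior, {"editorial_links"}))                    # < 0
```

A crawler could then score new sites' link profiles against this base set, which is the "teach the algorithm what to look for" idea in miniature.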

 

 