For some odd reason, the following long comment wasn’t posted successfully, so I’m reposting it here as a post of its own. I think it’s well worth reading; I’m not sure I agree with everything in it (I’m not exactly an expert!), but here it is anyway. It was written by Elizabeth Adams in response to the post indicated below. I’d certainly welcome your thoughts! Do check out Elizabeth’s website, too; it’s worth reading!
Elizabeth Adams | eadams @ silverlink.net | elizabethadamsdirect.com
Re … Google’s Supplementals Hell, Part 2 … 06/03/07
Hello, Kenneth …
You wrote, “It’s not that the posts are published twice, it’s just that the cross-linking in blogs allows Google spiders to find the articles more than once in different places. It appears to Google that the article is published more than once, but in fact there is only one article, published once. There are multiple references to it at the same time, so Google’s ‘smart’ spiders think it is multiple posts!”
I don’t understand. How can a link be misinterpreted as content? Or two links as duplicated content? That doesn’t make sense.
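Just so we’re both picturing the same thing, here’s a tiny, made-up sketch of the scenario you describe (the URLs and body text below are invented purely for illustration): one article, reachable under several archive URLs, so a crawler that keys pages by URL sees the same content in more than one place.

```python
# Made-up illustration: one blog post, reachable under several URLs.
# A crawler that keys pages by URL records three "pages" with the same
# body, even though the author only published the article once.
post_body = "It's not that the posts are published twice..."

urls = [
    "http://example.com/2007/06/supplementals-hell-part-2/",  # permalink
    "http://example.com/category/seo/page/3/",                # category archive
    "http://example.com/2007/06/",                            # monthly archive
]

crawled = {url: post_body for url in urls}  # keyed by URL

print(f"{len(crawled)} URLs crawled, "
      f"{len(set(crawled.values()))} distinct article(s)")
```

That shows how one post can be found in more than one place; it still doesn’t explain to me how a bare link gets treated as duplicated content.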
Could there possibly be some other reason besides “multiple references” why articles are being swept into “Supplementals Hell”?
About a year ago, Google’s Matt Cutts said that “PageRank is the primary factor determining whether a url is in the main web index vs. the supplemental results” and that “typically the depth of the directory doesn’t make any difference for us; PageRank is a much larger factor. So without knowing your site, I’d look at trying to make sure that your site is using your PageRank well. A tree structure with a certain fanout at each level is usually a good way of doing it.”
So … well, is your site using your PageRank well? Is your tree structure fanning out a certain amount at each level?
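Here’s a rough, back-of-the-envelope sketch of what I think that “tree structure with a certain fanout” advice amounts to (the page count and fanout values below are made up purely for illustration): the wider each level fans out, the fewer clicks any page sits from the home page, so PageRank has fewer hops to travel before it reaches the deep pages.

```python
import math

# Hypothetical example: how deep does a 1,000-page site go, depending
# on how many links "fan out" from each level of the tree?
total_pages = 1000  # made-up site size, for illustration only

for fanout in (5, 10, 25):
    # A tree with this fanout reaches roughly fanout**depth pages, so the
    # deepest page sits about log_fanout(total_pages) clicks from home.
    depth = math.ceil(math.log(total_pages, fanout))
    print(f"fanout {fanout:>2}: deepest page is ~{depth} clicks from the home page")
```

If Cutts is right, keeping that depth small (and with it the number of hops PageRank has to flow through) is part of what keeps deep pages out of the supplemental results.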
(giggle!) I actually sound like I know what I’m talking about!
Marcia on WebMasterWorld said, “Not one single speck of duplication, what’s Supplemental and what isn’t on the test site(s) is 100% dependent on the amount of link love the pages are getting.
“People who are looking for dup issues where none exist are, unfortunately, chasing their tails.”
Halfdeck of Seo4Fun said, “Answer me this. How can a computer program read, understand, and judge the quality of an article in comparison to other articles written on the same topic? It can’t – until Google discovers Artificial Intelligence. Sure – there are ways to look for on-page spammy fingerprints (e.g. illogical sentence structures, excessively high keyword density, overuse of bold and italics). But given two well-written articles, how does a machine decide – based solely on on-page text – which article is more valuable?
“It can’t.
“Relevance for a keyword can, of course, be guessed at by looking at things like the TITLE tag, keyword frequency, keyword location on the page, and keywords in H1. Relevancy, however, has nada to do with page value or page quality.
“How can a program judge the value of a page using on-page text alone when, from its POV, everything looks like a random string of symbols? To gauge a page’s value, there is simply no other option than to analyze off-page factors.”
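To make “analyzing off-page factors” concrete, here’s a toy PageRank calculation over a made-up five-page link graph (the page names and links are invented for illustration). The only input is who links to whom; the program never reads a word of anybody’s text.

```python
# Toy PageRank over an invented five-page link graph.
links = {
    "home":   ["about", "post-a", "post-b"],
    "about":  ["home"],
    "post-a": ["home", "post-b"],
    "post-b": ["home"],
    "orphan": ["home"],  # links out, but nothing links *to* it
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

# Standard power iteration: each page splits its rank among the pages
# it links to, plus a small "teleport" share handed to every page.
for _ in range(50):
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:<7} {score:.3f}")
```

The “orphan” page ends up with the lowest score even though the program never looked at a single word of its content, which I take to be exactly Halfdeck’s point: it’s the links, not the prose, that a machine can actually measure.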
Your PageRank is 3/10, which seems pretty good to me. How’s your “link love”?
Regards, Elizabeth …
P.S. When are you going to enable smilies?
There’s also more reading in “What are Google Supplemental Results?” at SEO Book.com.