Could someone please explain to me how this really works?
Instinctively - I always do this for my pages, but in the past I would use very targeted keywords. For example, say I had a page on antique clocks, I would use something like:
antique clocks, mantle clocks, grandfather clocks, horology, history of clocks, waltham, dent, etc etc
However, I often see people using keywords that seem incredibly broad. For example, I was just looking at a cricket site's source code and it used the following:
cricket, sport, scores, live, audio, video, Test matches, Tests, one-day internationals, ODIs, England, Australia, South Africa, West Indies, New Zealand, India, Pakistan, Sri Lanka, Zimbabwe, Bangladesh, Kenya
Now my thinking is that by using countries without attaching any relevance to the site topic you would not be achieving anything.
So I'm guessing my understanding of how meta keywords actually work is incorrect. Does it work in the sense that the bots look at the words collectively and use combinations of the different words?
I think a 'smart' SE, if it takes any notice at all of such user-provided and thus 'game-able' keywords, will look at them in conjunction with one another and the page content and off-page factors and do a 'cluster analysis'.
Thus 'Kenya' in this context would probably strengthen connections for Kenyan cricket rather than cut-flower growing or political unrest.
Most SEs ignore the meta keywords tag at the moment, precisely because it has been so abused over the years.
I have heard that Yahoo! 'take a look', and spot - for example - alternate spellings and misspellings. But I don't think even they take much account of them.
But that could change; Google never used to worry much about meta descriptions - but now they are an important part of the ranking process.
"Keyword stuffing" in the keyword tag seems to be pretty much ignored, though it may count against you when a site is reported for other misdemeanours. And if SEs did care, a mismatch between tag and page would be simple to spot.
So I'd conclude that meta keywords count for little - but if you do use them, use them appropriately.
That means a unique and appropriate tag *for each page*, highlighting the relevant words and phrases *on that page*.
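To put that advice concretely, here's a rough sketch of what a unique, per-page tag might look like for the antique clocks example earlier in the thread (the filename, title and phrases are just made up for illustration):

```html
<!-- Hypothetical head section for a page specifically about mantel clocks -->
<head>
  <title>Mantel Clocks - Makers, History and Values | Antique Clocks</title>
  <meta name="description"
        content="A guide to antique mantel clocks: makers such as Waltham and Dent, dating tips, and typical values.">
  <!-- Keywords limited to phrases that actually appear on this page,
       not a catch-all list reused across the whole site -->
  <meta name="keywords"
        content="mantel clocks, antique mantel clocks, Waltham, Dent, clock makers">
</head>
```

The grandfather clocks page would then get its own title, description and keywords rather than sharing this set.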
I use keywords too, but take a very haphazard approach to meta tags in general. I even forget to use them on some pages. However, that doesn't stop me getting good placement and traffic from search engines - Google especially.
As search engines have evolved, it seems to me that they are very good at spidering the content of the page itself and turning up appropriate results on most searches, whether keywords are used or not.
I agree that it's good practice, and something I also try to make the most of and will continue to do. But I'm really not convinced it makes a difference in practice - or if it does, it might at most make the difference of one position between me and a similar site.
I did hear somewhere (and I don't know if it's true or not) that over-use of keywords in the text doesn't help. There is an acceptable amount - i.e. they look for keyword stuffing in the article text.
I can absolutely state that it is relevant with regards to AdWords - and your Quality Score (QS). AdWords optimized a couple of campaigns for me a while back - and I had inadvertently left some meta tags on a page. I.e. the page was about antique clocks - the meta tags were about boats. (This happened because I copied a page to get the page layout - but forgot to change the page properties.)
When they optimized it they obviously didn't visit the page at all - but came back with a load of keywords relating to boats, all of which had a "Great" QS.
"I even forget to use them on some pages. However, that doesn't stop me getting good placement and traffic from search engines - Google especially."
With the keyword tag, that's probably OK (see above).
But Google is increasingly fussy about meta descriptions; I used identical ones on a site, and most of the pages suddenly disappeared about a year ago. There was a flurry of posts in SEO forums about the issue, and I started adding unique descriptions ... and as I did so, the number of pages indexed went up again.
And I've heard many, many similar experiences. TITLE tags, of course, can make or break a site.
So I'm guessing my understanding of how meta keywords actually work is incorrect.
Quite simply, you need on-page repetition of - and reference to - whatever your title bar, keywords and description contain.
Therefore, if the page is about pink widgets, the URL should be /pinkwidgets.html, with "pink widgets" the first phrase in the title bar; the meta description should be "pink widgets technical information etc" and the keywords "pink widgets, alternative names, etc".
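As a sketch, that alignment between URL, tags and content might look like this (everything here is invented for illustration):

```html
<!-- /pinkwidgets.html : hypothetical page where URL, tags and body agree -->
<head>
  <title>Pink Widgets - Technical Information</title>
  <meta name="description"
        content="Pink widgets technical information: sizes, materials and fitting guides.">
  <meta name="keywords"
        content="pink widgets, rose widgets, coloured widgets">
</head>
<body>
  <!-- On-page repetition of the same phrase, as described above -->
  <h1>Pink Widgets</h1>
  <p>Our pink widgets are available in three sizes...</p>
</body>
```

The point is that a bot comparing the tags against the visible text finds the same phrase everywhere, rather than a mismatch like the clocks/boats example earlier.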
Just looked at Google Webmaster Tools, and I'm getting a lot of duplicate meta tag warnings. As they are warning me about them, I guess it's an issue.
The section they mostly appear in is dynamically generated, and not very often read. It's there for reference, but apart from me I don't think many people refer to it! I know I can get scripts that will generate different meta tags, but the solution I am thinking of is to remove them altogether. Or would it be better to use robots.txt to stop bots crawling that section?
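If you do go the robots.txt route, the rule itself is simple - something like the following, assuming the dynamically generated section lives under a directory I'll call /reference/ (the path is just a placeholder for whatever yours actually is):

```
User-agent: *
Disallow: /reference/
```

Note this only asks bots not to crawl the section; well-behaved crawlers respect it, but it isn't a guarantee the pages never appear in an index.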
OK. I'm planning to have the new site up by the end of next week. The first site is going to be on French rugby. As the home page will be in a sort of newspaper format - i.e. it will have a lead story excerpt and two shorter excerpts - is it sensible or daft to change the meta description tag each time something new is added, or should I use a fairly standard home page description?