Been trying to resolve some of the 403 errors noted by G in the web crawl stats. I know that a 403 is just a blanket "NO" from the server, often with no reason given.
I understand that sometimes this happens when the crawler requests a directory that doesn't contain an index.htm(l) page, so I've put in redirects to make sure that anyone trying to access any of the directories goes either to the index.htm or to whatever file I want them to see. The problem that remains is that if someone (or a robot) requests www.mysite/moreinfo/ they still get a 403 error. As far as I can see this is simply because the server won't allow directory browsing, so it isn't necessarily an error as such.
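For anyone in the same boat, this is roughly the server-side picture as I understand it, assuming an Apache server where you can drop a .htaccess file into the site root (if your host runs something else, the directive names will differ):

    # .htaccess in the site root (Apache assumed)
    # Directory listings are switched off, so a folder URL with no index file comes back as 403
    Options -Indexes
    # Tell the server which file to serve when just the folder is requested
    DirectoryIndex index.htm index.html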
Is this really regarded as an error, and can it be resolved?
Think I've got it sussed now: for the redirect to work, there actually has to BE an index.htm in the folder! With the redirects in place AND an actual page sitting there to redirect from (how odd is that??), visitors and crawlers SHOULD now be sent to the appropriate page if they just request the folder.
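Putting the two halves together, the setup I ended up with looks roughly like this. Again this assumes Apache and a .htaccess file, and "moreinfo" and "overview.htm" are only stand-ins for my own folder and page names:

    # .htaccess (Apache assumed; folder and file names are just examples)
    # The folder has to contain a real index.htm (even a bare placeholder page),
    # otherwise the request for the folder itself still fails.
    # This rule then sends anyone who lands on that index page to the page I actually want seen.
    Redirect 301 /moreinfo/index.htm /moreinfo/overview.htm

With both of those in place, a request for www.mysite/moreinfo/ should end up at the overview page instead of a 403.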