Avoid spider traps with several SEO techniques
Some friends build sites that look good, feel cool and so on, but that make crawling difficult for the spider, which to some extent forms a spider trap. So in which specific situations do these traps appear?
One, JavaScript
Some friends, in order to catch the user's eye, build the navigation, the home page and other key areas almost entirely out of JS scripts. Links and content that only appear after the scripts run put the spider at a disadvantage when crawling.
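As a rough way to see what a spider gets before any script runs, here is a minimal sketch that fetches the raw HTML and counts the plain `<a href>` links in it; the URL and the threshold are placeholders, not a real measurement.

```python
# A minimal sketch: count the plain <a href> links a spider sees in the raw
# HTML, before any JavaScript runs. The URL and threshold are placeholders.
from html.parser import HTMLParser
import requests

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = requests.get("https://example.com/").text
counter = LinkCounter()
counter.feed(html)
print(f"{len(counter.links)} plain HTML links found")
if len(counter.links) < 5:   # arbitrary threshold, tune for your site
    print("Navigation probably depends on JavaScript; the spider may miss it.")
```

If the count is very low while the rendered page is full of menus, the navigation should also be exposed as ordinary HTML links.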
Two, Frame structures
The HTML of a frame structure can usually be crawled by the spider, but the content it finds there is generally incomplete, so the search engine cannot tell whether the real content sits in the main frame or in the files the frames call.
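A quick way to spot this from the outside is to fetch the page and list the frame sources that the spider would have to follow separately. This is only a sketch; the URL is a placeholder.

```python
# A minimal sketch: list frame/iframe sources in the raw HTML. The spider sees
# the outer frame page, not necessarily the documents the frames pull in.
import re
import requests

html = requests.get("https://example.com/").text
frames = re.findall(r'<(?:frame|iframe)\b[^>]*\bsrc=["\']([^"\']+)', html, re.I)
if frames:
    print("Frame sources the spider must fetch separately:")
    for src in frames:
        print(" ", src)
else:
    print("No frame markup found.")
```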
Three, Too much Flash
Flash is still difficult for the spider to identify; to the search engine it is essentially something it cannot read. However good the visual effect, the search engine cannot work out what the page is relevant to.
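One rough check, sketched under the assumption that the Flash movies are embedded via `<object>`/`<embed>` tags pointing at .swf files, is to compare the number of embeds with how much plain text is left for the spider to read; the URL and threshold are placeholders.

```python
# A minimal sketch: how much indexable text does the page carry besides its
# Flash embeds? URL and word-count threshold are placeholders.
import re
import requests

html = requests.get("https://example.com/").text
swf_embeds = re.findall(r'<(?:object|embed)\b[^>]*\.swf', html, re.I)
# Strip tags to approximate the text a spider can actually read.
visible_text = re.sub(r"<[^>]+>", " ", html)
words = len(visible_text.split())
print(f"{len(swf_embeds)} Flash embed(s), roughly {words} words of plain text")
if swf_embeds and words < 100:   # arbitrary threshold
    print("Most of the page may be locked inside Flash.")
```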
Four, Session IDs
Session IDs are used to track visitor access. The problem is that every visit to a page generates a different Session ID, so the same page appears under many different URLs. This creates a large amount of duplicate content, which is also unfavorable for search engine optimization.
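A common fix is to keep Session IDs out of URLs, or at least to give spiders one canonical URL without them. The sketch below shows the idea: strip session-style query parameters so every visit maps to the same URL. The parameter names are common examples, not taken from any particular system.

```python
# A minimal sketch: strip session-ID query parameters so the same page always
# has one canonical URL. Parameter names below are illustrative examples.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

SESSION_PARAMS = {"sessionid", "sid", "phpsessid", "jsessionid"}

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonicalize("https://example.com/item?id=42&sessionid=a1b2c3"))
print(canonicalize("https://example.com/item?id=42&sessionid=z9y8x7"))
# Both print https://example.com/item?id=42 -- one page, one URL.
```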
Five, Abnormal redirects
Normal sites rarely do this; it is mostly gray-hat or black-hat friends who favor it. They use disguised redirects to send the page somewhere else, which in effect deceives both the search engine and the user.
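If you suspect a disguised redirect on a site, one informal test is to compare where the same URL ends up for a browser user-agent and for a crawler user-agent; the URL below is a placeholder.

```python
# A minimal sketch: does the same URL land in different places for a browser
# and for a crawler? A mismatch hints at a cloaked (sneaky) redirect.
import requests

URL = "https://example.com/page"   # placeholder
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
SPIDER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def final_url(user_agent: str) -> str:
    resp = requests.get(URL, headers={"User-Agent": user_agent}, allow_redirects=True)
    return resp.url

browser_dest = final_url(BROWSER_UA)
spider_dest = final_url(SPIDER_UA)
print("browser lands on:", browser_dest)
print("spider  lands on:", spider_dest)
if browser_dest != spider_dest:
    print("Users and spiders are being sent to different pages.")
```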
Six, Dynamic URLs
Search engines can now recognize dynamic URLs, but a large number of them is still not conducive to spider crawling and, over time, not conducive to optimization.
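As an illustration, the sketch below flags URLs whose query strings carry more than a couple of parameters as candidates for rewriting into static-looking paths; the sample URLs and the threshold are made up.

```python
# A minimal sketch: flag parameter-heavy URLs as rewrite candidates.
# Sample URLs and the max_params threshold are illustrative only.
from urllib.parse import urlsplit, parse_qsl

def too_dynamic(url: str, max_params: int = 2) -> bool:
    return len(parse_qsl(urlsplit(url).query)) > max_params

urls = [
    "https://example.com/product/blue-widget",                      # static-style
    "https://example.com/show.php?cat=7&id=123&sort=price&page=4",  # heavily dynamic
]
for u in urls:
    print(("rewrite candidate:" if too_dynamic(u) else "looks fine:       "), u)
```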
Seven, Login restrictions
Content placed behind a login is a dead end for the search engine: the spider will not register and will not log in, so it never finds that content.
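A simple sanity check is to request the page exactly as an anonymous spider would, with no cookies and no credentials, and see whether real content comes back; the URL below is a placeholder.

```python
# A minimal sketch: fetch a page anonymously, the way a spider does, and check
# whether it is gated behind a login. The URL is a placeholder.
import requests

resp = requests.get("https://example.com/members/article", allow_redirects=True)
redirected_to_login = "login" in resp.url.lower()
if resp.status_code in (401, 403) or redirected_to_login:
    print("Content is behind a login; the spider will never see it.")
else:
    print(f"Anonymous fetch returned {resp.status_code}, "
          f"{len(resp.text)} characters of HTML.")
```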
Eight, Forced use of cookies
Few sites still do this, but some force the use of cookies in order to remember the visitor, keep login information or track the visit path. The result is that users who have cookies disabled are told the page cannot be displayed properly, and the spider, which does not accept cookies in the normal way, cannot access or identify the page either.
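To see how a cookie-less visitor fares, the sketch below refuses every cookie and checks whether the page still renders; the URL and the error text it looks for are assumptions.

```python
# A minimal sketch: reject all cookies, roughly what a spider does, and see
# whether the page still comes back. URL and error text are placeholders.
import http.cookiejar
import requests

session = requests.Session()
# An empty allowed_domains list means no domain may set cookies at all.
session.cookies.set_policy(http.cookiejar.DefaultCookiePolicy(allowed_domains=[]))

resp = session.get("https://example.com/", allow_redirects=True)
if resp.status_code != 200 or "enable cookies" in resp.text.lower():
    print("The page seems to require cookies; cookie-less visitors and spiders are blocked.")
else:
    print(f"Served {len(resp.text)} characters of HTML without accepting any cookies.")
```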
Nine, Too many pop-up chat windows
Some sites pop up a lot of forced chat windows from time to time to communicate with the user, but these chat windows are also unreadable to the search engine.