Myths and Misconceptions about Search Engines
In this article, you'll learn about common myths and misconceptions surrounding search engines.
Search Engine Submission
In the early days of SEO (the late 1990s), search engines had submission forms that were part of the optimization process. Webmasters and site owners would tag their sites and pages with keyword information and submit them to the engines. Soon after submission, a bot would crawl those resources and include them in its index. Since 2001, search engine submission has not only been unnecessary, but those early efforts turned out to be virtually useless.
Once upon a time, meta tags (in particular, the meta keywords tag) were a central part of the SEO process. You would list the keywords you wanted your site to rank for, and when users typed in those terms, your page could appear in the search results. This process was rapidly spammed to death and was ultimately dropped by all the major engines as an important ranking signal.
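To make this concrete, here is a small sketch of what the legacy meta keywords tag looked like and how a crawler-era tool might have extracted it. The page snippet and parser are illustrative assumptions, not any engine's actual code; modern engines simply ignore this tag.

```python
from html.parser import HTMLParser

# Hypothetical page fragment using the (now-ignored) meta keywords tag.
PAGE = '<html><head><meta name="keywords" content="seo, search, ranking"></head></html>'

class MetaKeywordsParser(HTMLParser):
    """Collects the content of any <meta name="keywords"> tag."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "keywords":
            # Split the comma-separated keyword list into clean terms.
            self.keywords.extend(
                k.strip() for k in attrs.get("content", "").split(",") if k.strip()
            )

parser = MetaKeywordsParser()
parser.feed(PAGE)
print(parser.keywords)  # -> ['seo', 'search', 'ranking']
```

Because the tag was trivially easy to stuff with irrelevant terms, it told engines more about spammers than about page content, which is exactly why it was dropped as a ranking signal.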
Many SEO tools still rely on the idea that keyword density is a significant metric. It's not. Ignore quantity and use keywords wisely, with usability in mind. A page containing an extra 10 occurrences of your keyword is far less valuable than earning one good editorial link from a source that doesn't think you're a search spammer.
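For reference, keyword density is just occurrences of a term divided by total word count. The sketch below, with a made-up sample sentence, shows how crude the metric is: a high value mostly signals repetitive, spammy copy rather than relevance.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of `keyword` divided by total word count (a crude metric)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# Deliberately stuffed sample text (hypothetical):
sample = "Cheap shoes! Buy cheap shoes online. Our cheap shoes are the cheapest shoes."
density = keyword_density(sample, "shoes")
print(round(density, 3))  # 4 occurrences out of 13 words
```

Note that the number says nothing about whether the page is useful; that is the core problem with treating density as an optimization target.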
Search Engine Spam
The practice of spamming the search engines (creating pages and schemes intended to artificially inflate rankings or abuse the ranking algorithms) has been on the rise since the mid-1990s. The incentive is obvious: a page at the top of the search results is all but guaranteed more traffic and sales.
Thankfully, search engines have become aware of these practices and have made it very difficult for anyone to spam their way into the search results. We strongly recommend avoiding any type of spam-related activity so that you don't hurt your search rankings.
Search engines perform spam analysis on both individual pages and entire websites (domains).
As mentioned earlier, keyword stuffing involves repeating keyword terms or phrases across a page in order to make it seem more relevant to the search engines. This tactic is almost always ineffective and quickly identifiable: scanning a page for stuffed keywords is not especially challenging, and the engines' algorithms are well up to the task.
Manipulative link acquisition is one of the most popular forms of web spam. It aims to abuse the search engines' use of link popularity in their ranking algorithms to artificially boost visibility, and it is one of the toughest forms of spamming for the engines to overcome because it can take so many forms. A few of the many ways manipulative links can appear include:
- Reciprocal link exchange programs
- Link schemes
- Paid links
- Low-quality directory links
(To get a better understanding of the bullet points above, make sure to watch the YouTube video!)
There are many more manipulative link building tactics that search engines have learned to recognize.
A basic principle of search engine guidelines is to display the same content to the engine’s crawlers that you’d show to a human visitor. This means, among other things, not hiding text in the HTML code of your website that a normal visitor can’t see.
When this guideline is violated, the engines call it “cloaking” and take action to prevent these pages from ranking in their results.
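To illustrate what cloaking actually means (and why it is easy to describe as a rule violation), here is a toy sketch of a server handler that branches on the visitor's user agent. The crawler tokens and page strings are invented for the example; this is shown as the pattern the guidelines prohibit, not something to implement.

```python
def respond(user_agent: str) -> str:
    """Toy illustration of cloaking: serving crawlers different content
    than human visitors. This is exactly what search engine guidelines
    prohibit; do not do this on a real site."""
    crawler_tokens = ("Googlebot", "bingbot")  # hypothetical crawler check
    if any(token in user_agent for token in crawler_tokens):
        return "<p>keyword-rich page shown only to crawlers</p>"
    return "<p>the page human visitors actually see</p>"

# A cloaking page returns different HTML for a crawler vs. a browser:
print(respond("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(respond("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))
```

The guideline in the paragraph above reduces to a simple test: the two calls should return the same content. When they don't, the page is cloaking.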
The engines all have methods for determining whether a page delivers unique content and value to searchers. The most frequently filtered pages are thin-content, duplicate-content, and dynamically generated pages that provide very little unique text or value to the user. The engines are reluctant to include these pages and use a variety of content and link analysis algorithms to weed out low-value pages.
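The engines' actual algorithms are proprietary, but one classic building block for spotting duplicate content is word shingling with Jaccard similarity. The sketch below, using made-up page snippets, shows the idea: near-duplicate pages share many overlapping word sequences, while unrelated pages share almost none.

```python
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles (overlapping word sequences)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical page snippets: two near-duplicates and one unrelated page.
page_a = "our widget is the best widget on the market today"
page_b = "our widget is the best widget available on the market"
page_c = "a completely different article about something else entirely"

sim_dup = jaccard(shingles(page_a), shingles(page_b))
sim_diff = jaccard(shingles(page_a), shingles(page_c))
print(round(sim_dup, 2), round(sim_diff, 2))  # near-duplicates score much higher
```

Real systems work at a much larger scale (hashing shingles, comparing sketches across billions of pages), but the underlying signal is the same: pages that add no unique text are easy to identify.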