Every SEO strategy has its pros and cons
Search engines have long kept their ranking formulas secret in order to give users the most relevant results and to preclude malicious manipulation. Despite all the secrecy, though, it remains extremely difficult for search engine companies to prevent either direct or indirect manipulation.
We can broadly classify search engine optimization (SEO) strategies into two camps: passive and active. Passive optimization is performed automatically by search engine robots, which scour the web, finding and categorizing relevant sites. Most websites listed on the search engine result page (SERP) fall into this category. If it were up to search engine companies like Google and Yahoo, this would be the only type of optimization that existed.
Active optimization takes place when website owners purposefully engage in activities designed to improve their sites’ search engine visibility. This is the kind of search engine optimization we normally talk about when we think of SEO.
Let’s go one step further and classify these deliberate forms of optimization based upon the tactics employed. In general these are: basic principles, exploiting flaws, algorithm chasing, and competitive intelligence.
Basic principles are just that. We know that search engines, just like human beings, intuitively recognize that a website has a lot of information about, say, “classic cars” when that phrase appears many times in the text, when the site links to other sites about classic cars, and when similar sites link to it. Using such basic principles, we can optimize our site by creating content-rich web pages that include the targeted keyword phrases. Moreover, we can attract high-quality, relevant links, and we can make our website navigation easy for both the search engine crawler and our visitors. This strategy is primarily used by so-called “white hat” SEOs. The problem is that search engines do not always rank every such relevant site highly. It seems unfair, but it’s true.
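To make the keyword-frequency idea concrete, here is a minimal sketch in Python. The `phrase_frequency` helper and the sample text are illustrative only; no search engine publishes its actual formula, and real rankings weigh far more than raw phrase counts.

```python
import re

def phrase_frequency(text: str, phrase: str) -> int:
    """Count case-insensitive, whole-word occurrences of a keyword phrase."""
    # \b anchors keep "classic cars" from matching inside longer words,
    # and re.escape protects any punctuation in the phrase.
    pattern = r"\b" + re.escape(phrase.lower()) + r"\b"
    return len(re.findall(pattern, text.lower()))

page_text = (
    "Classic cars are a passion for many collectors. "
    "Restoring classic cars takes patience, and classic car shows "
    "draw enthusiasts from around the world."
)
print(phrase_frequency(page_text, "classic cars"))  # 2
```

Note that “classic car shows” is not counted: the word-boundary match requires the full phrase, which is roughly how a naive on-page relevance signal would behave.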
Exploiting flaws is a technique that involves altering a web page in such a way that it confuses search engine ranking algorithms and drives them to give a high ranking to an undeserving page. This strategy is primarily used by “black hat” SEOs. Of course, such tricks are often short-lived: search engine companies are on a constant lookout to plug these holes.
Algorithm chasing is an optimization technique based upon serious study of search engine patents, public research papers, and observable data from search engine result pages. The goal is to identify patterns that might give a near approximation of the ranking formulas employed by major search engines, and thus provide a clear path toward search engine optimization. This strategy is primarily used by what we call “gray hat” SEOs. The drawback to this approach is its inherent difficulty, compounded by the fact that search engine companies regularly change their algorithms.
To illustrate how difficult this actually is, take a moment to consider all the ranking factors listed by Seomoz.org (http://www.seomoz.org/article/search-ranking-factors). This is not a scientific deconstruction of search engine algorithms, but rather a poll of expert opinions: it’s what SEO experts believe is going on in those elusive data centers. Most estimations are no doubt based upon empirical evidence, but some are nothing more than educated guesses. Between 2005 and 2007, these perceptions changed many times. Of course, so did the algorithms employed by Google, Yahoo, Microsoft and others.
Competitive intelligence is based on the idea that optimization can be achieved by carefully examining high-ranking websites, their content, and their links. With just a basic understanding of how search engines operate, we can deduce how those sites achieved their high rankings and then apply that knowledge to the sites we want to optimize. This strategy is also used by gray hat SEOs. The technique can be very effective, but it is worth noting that high-ranking websites are listed prominently for varying reasons. Some rank high for just a few days or weeks, others for several years. Choosing the wrong sites for competitive intelligence can yield unreliable conclusions.
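As a toy illustration of this kind of audit, the sketch below uses Python’s standard-library `html.parser` to pull outbound links and keyword counts from a competitor page. The page content and the `LinkAndTextExtractor` class are hypothetical; a real audit would fetch live pages and examine many more signals than these two.

```python
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Collect href attributes and visible text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        # Record the destination of every anchor tag we encounter.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # Accumulate the text between tags for later keyword analysis.
        self.text_parts.append(data)

# A made-up competitor page; a real audit would fetch live HTML instead.
html = """
<html><body>
<h1>Classic Cars</h1>
<p>Everything about classic cars, from restoration to shows.</p>
<a href="http://example.com/mustang">Mustang guide</a>
<a href="http://example.com/corvette">Corvette history</a>
</body></html>
"""

parser = LinkAndTextExtractor()
parser.feed(html)
text = " ".join(parser.text_parts).lower()
print(len(parser.links))           # 2 outbound links
print(text.count("classic cars"))  # target phrase appears twice
```

Running this over several pages that already rank for a target phrase, and comparing link counts and keyword usage, is the simplest form of the competitive analysis described above.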
The need for a more dependable SEO strategy
We know that a search engine’s job, first and foremost, is to rank the most relevant websites highest for each and every search we make. These sites are considered web authorities. For example, you couldn’t imagine a web search for “Apple Computer” not listing Apple Inc.’s homepage. Search engines need to find these web authorities and list them first.
The strategy that I’d like to propose combines competitive intelligence with web authorities. That is, we must devise techniques designed to identify such web authorities for the target terms we want to optimize, and apply competitive intelligence to understand why those sites get to be top-ranked. Only by optimizing our sites based on this knowledge can we truly succeed.
Let’s call this strategy SEO Intelligence. At first glance, our technique might seem gray or black hat. In reality, however, it is neither. We must always keep in mind that, in order to optimize our site and become a web authority, we still need to provide truly useful content. Only then will visitors and the topical community link to our site and grant the authority we need.
A word of caution
One of the most frustrating experiences in learning search engine optimization is the large number of often conflicting ideas and suggestions from SEO experts in blogs and forums. Evidence of this is the “Most Controversial Factors” section of the 2007 ranking factors report. The problem is that some ideas and suggestions are “gut feelings”: conclusions based on incomplete experiments, a misunderstanding of basic principles, or ever-changing search engine ranking algorithms. As a rule of thumb, it is good practice to follow advice that is backed by observable evidence, or that you can test on your own to confirm or debunk.
In future posts I will discuss the tactics and techniques necessary to identify web authorities.