Wednesday, October 2, 2013

COMMON SEARCH ENGINE PRINCIPLES


To understand SEO, we should first understand the architecture of search engines.
Note: Although there are many search engines, each one has its own principal components.
Here are the main components of the Google search engine:
  • Spider
  • Crawler
  • Indexer
  • Data Base
  • Results Engine
  • Web Server
Spider (also known as a Search Engine Robot)
  • This is a program that downloads web pages, just like a web browser does.
  • The difference is that a browser displays the information presented on each page (text, graphics, etc.), while a spider does not.
  • A spider works directly with the underlying HTML code of the page.
Crawler
  • This is a program that finds all the links on each page. Its task is to determine where the spider should go, either by evaluating the links or by following a pre-defined list of addresses (e.g. the home page or contact page).
  • The crawler follows these links and tries to find documents not already known to the search engine.
In SHORT,
  • The crawler finds the links.
  • The crawler guides the spider to follow those links.
  • Mainly, the crawler finds documents that are new (unknown to the search engine).
Note: A crawler reads a page from left to right, top to bottom.
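The crawler's link-finding step can be sketched in a few lines of Python. This is a minimal illustration, not a real search engine component; the `LinkExtractor` class name and the sample page below are made up for the example.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag -- roughly what a crawler
    does when it scans a downloaded page for links to hand to the spider."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny hard-coded page standing in for one the spider downloaded.
page = """<html><body>
<a href="/contact.html">Contact</a>
<p>No link here.</p>
<a href="https://example.com/about">About</a>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the URLs the crawler would queue up next
```

A real crawler would also normalize these URLs, skip ones it has already seen, and respect robots.txt before queuing them.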
Indexer
  • This program analyzes each page, examining elements such as text, headers, structural and stylistic features, and special HTML tags.
Database
  • This is the storage area for the data that the search engine downloads and analyzes. It is sometimes called the “Index of the Search Engine”.
Results Engine
  • The results engine ranks pages: it determines which pages best match a user’s query and in what order they should be listed. This is done according to the search engine’s ranking algorithm.
  • It uses PageRank, a valuable and interesting property that any SEO specialist is keenly interested in when trying to improve a site’s search results.
Web Server
  • The search engine’s web server usually serves an HTML page with an input field where users can type the search query they are interested in.
  • The web server is also responsible for displaying the search results to users in the form of an HTML page.
Note: A higher PageRank indicates a more trusted and user-friendly website.
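To make the PageRank idea concrete, here is a toy power-iteration sketch in Python. The four-page link graph, the damping factor of 0.85, and the iteration count are all illustrative choices, not Google's actual parameters.

```python
# Toy PageRank via power iteration over a hand-made four-page link graph.
links = {
    "home":    ["about", "blog"],
    "about":   ["home"],
    "blog":    ["home", "about", "contact"],
    "contact": ["home"],
}

damping = 0.85                  # commonly cited damping factor
n = len(links)
rank = {page: 1.0 / n for page in links}  # start with equal rank

for _ in range(50):
    # Every page keeps a small baseline, then receives shares of rank
    # from each page that links to it.
    new_rank = {page: (1 - damping) / n for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

In this toy graph the home page ends up with the highest rank, because every other page links back to it: the intuition behind "more links from trusted pages means more PageRank."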


Static Analysis

Basic SEO Factors for Instant Analysis:
  • Domain Extension
  • Page Loading Time
  • Page Rank
  • Alexa Rank
  • Google, Bing & Yahoo Indexed Pages and Back Links
  • Domain Age
  • Canonical Redirection
  • Title & Meta Tags
  • Header Tags
  • Image Tags
  • XML sitemap
  • HTML sitemap
  • Robots.txt
  • Google Analytics account
  • Google Webmaster Tools account
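Several of the on-page factors above (title, meta tags, header tags) can be pulled out of a page programmatically. Here is a minimal Python sketch of such an instant check; the `OnPageAudit` class name and the sample HTML are invented for the example.

```python
from html.parser import HTMLParser

class OnPageAudit(HTMLParser):
    """Extracts the <title>, meta description, and first <h1> text --
    a few of the instant-analysis factors listed above."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1 = ""
        self._current = None  # which text-bearing tag we are inside

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")
        elif tag in ("title", "h1"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == "title":
            self.title += data
        elif self._current == "h1":
            self.h1 += data

# A hard-coded page standing in for one fetched during analysis.
html = """<html><head><title>Example SEO Page</title>
<meta name="description" content="A short demo description.">
</head><body><h1>Welcome</h1></body></html>"""

audit = OnPageAudit()
audit.feed(html)
print(audit.title, "|", audit.meta_description, "|", audit.h1)
```

A fuller audit tool would also flag missing or duplicate tags, overly long titles, and images without alt text.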

SEO Process Tips to Rank

White hat SEO tips to follow, according to the latest search engine algorithms in 2011:
  1. Research and pick the right keywords which drive quality traffic, leads and conversions
  2. Analyze competitors who are running similar businesses successfully, and check out the methods they follow and the keywords they optimize for
  3. Pick the right domain extension based on the geo-targeted business, like ‘.in’ for India, ‘.com’ for the USA, ‘.co.uk’ for the UK, etc.
  4. Plan a user- and search-engine-friendly design and navigation with good link architecture
  5. Hire a professional content writer to write good-quality content that catches users’ as well as search engines’ attention
  6. Plan different sub-sections with relevant content for the most targeted keywords
  7. Make sure the site has no broken links, timed-out URLs, high loading times, etc. Tools like Xenu and browser extensions like PageSpeed and YSlow will help us develop a technically clean site.
  8. Use a robots.txt file to block all unnecessary pages from being indexed by the search engines.
  9. Create XML & HTML sitemaps for users and search engines
  10. Submit the XML sitemap through the search engines’ webmaster tools accounts and fix any errors reported
  11. Build only quality back links using white hat link-building methods.
  12. Analyze traffic, and plan or change strategies to drive quality traffic that will convert to sales
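For the robots.txt and XML sitemap steps above, minimal examples look like the following. The paths and the example.com domain are placeholders; adjust them to the site in question.

```
# robots.txt -- block pages that should not be indexed,
# and point crawlers at the sitemap
User-agent: *
Disallow: /admin/
Disallow: /search-results/
Sitemap: https://www.example.com/sitemap.xml
```

A matching one-entry XML sitemap, in the sitemaps.org format that webmaster tools accept:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2013-10-02</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```

Note that robots.txt only discourages crawling; pages linked from elsewhere can still appear in the index, so sensitive pages need stronger controls.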