Common search engine principles
To understand SEO, you need to be aware of the architecture of search engines. They all contain the following main components:
Spider – a browser-like program that downloads web pages.
Crawler – a program that automatically follows all of the links on each web page.
Indexer – a program that analyzes web pages downloaded by the spider and the crawler.
Database – storage for downloaded and processed pages.
Results engine – extracts search results from the database.
Web server – a server that is responsible for interaction between the user and the other search engine components.
Specific implementations of search mechanisms may differ. For example, the Spider+Crawler+Indexer group of components might be implemented as a single program that downloads web pages, analyzes them and then uses their links to find new resources. However, the components listed here are inherent to all search engines, and the SEO principles remain the same.
Spider. This program downloads web pages just like a web browser does. The difference is that a browser displays the information on each page (text, graphics, etc.), while a spider has no visual components and works directly with the page's underlying HTML code. You may already know that standard web browsers have an option to view a page's source HTML.
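To make the idea concrete, here is a minimal spider sketch in Python. It only fetches the raw HTML of a page, which is exactly what a spider works with; the example URL, the fetch() helper name and the User-Agent string are illustrative assumptions, not part of any real search engine.

# Minimal spider sketch (assumptions: example URL, illustrative helper name).
from urllib.request import urlopen, Request

def fetch(url):
    # A polite spider identifies itself with a User-Agent header.
    request = Request(url, headers={"User-Agent": "TinySpider/0.1"})
    with urlopen(request, timeout=10) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        # The spider sees raw HTML, not a rendered page.
        return response.read().decode(charset, errors="replace")

print(fetch("https://example.com/")[:200])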
Crawler. This program finds all links on each page. Its task is to determine where the spider should go next, either by evaluating the links it finds or by working from a predefined list of addresses. The crawler follows these links and tries to find documents not already known to the search engine.
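A crawler can be sketched as a loop over a frontier of unvisited addresses. The snippet below is only an illustration: it reuses the hypothetical fetch() helper from the spider sketch and limits itself to a handful of pages.

# Minimal crawler sketch: follow links to documents the engine does not know yet.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    # Collects the href value of every <a> tag on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=10):
    frontier = list(seed_urls)   # addresses waiting to be visited
    seen = set(seed_urls)        # documents already known to the engine
    pages = {}
    while frontier and len(pages) < max_pages:
        url = frontier.pop(0)
        html = fetch(url)        # the spider downloads the page
        pages[url] = html
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)
            if absolute not in seen:   # only queue unknown documents
                seen.add(absolute)
                frontier.append(absolute)
    return pages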
Indexer. This component parses each page and analyzes the various elements, such as text, headers, structural or stylistic features, special HTML tags, etc.
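As a rough illustration (real indexers are far more sophisticated about headers, tags and page structure), an indexer can be reduced to stripping the markup, tokenizing the text and recording which words occur in which documents:

# Minimal indexer sketch: build an inverted index (word -> url -> occurrence count).
import re
from collections import defaultdict

TAG_RE = re.compile(r"<[^>]+>")      # crude tag stripper, for illustration only
WORD_RE = re.compile(r"[a-z0-9]+")

def index_pages(pages):
    inverted_index = defaultdict(lambda: defaultdict(int))
    for url, html in pages.items():
        text = TAG_RE.sub(" ", html).lower()
        for word in WORD_RE.findall(text):
            inverted_index[word][url] += 1
    return inverted_index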
Database. This is the storage area for the data that the search engine downloads and analyzes. Sometimes it is called the index of the search engine.
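To show what the storage step might look like, here is a sketch that persists the inverted index from the previous example in SQLite. The table layout is purely illustrative; real search engine databases are distributed, compressed and heavily optimized.

# Minimal database sketch: persist the inverted index so it can be queried later.
import sqlite3

def save_index(inverted_index, path="index.db"):
    connection = sqlite3.connect(path)
    connection.execute(
        "CREATE TABLE IF NOT EXISTS postings (word TEXT, url TEXT, count INTEGER)"
    )
    connection.execute("DELETE FROM postings")   # rebuild from scratch each run
    for word, urls in inverted_index.items():
        for url, count in urls.items():
            connection.execute(
                "INSERT INTO postings VALUES (?, ?, ?)", (word, url, count)
            )
    connection.commit()
    connection.close()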
Results Engine. The results engine ranks pages. It determines which pages best match a user's query and in what order they should be listed, according to the search engine's ranking algorithms. It follows that page rank is a valuable and interesting property, and any SEO specialist is most interested in it when trying to improve a site's search results. In this article, we will discuss the SEO factors that influence page rank in some detail.
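Real ranking algorithms weigh many factors and are kept secret by the search engines, so the sketch below only stands in for the idea: it scores each page by how often the query words occur in it, using the inverted index from the earlier examples, and lists the pages in descending order of that score.

# Minimal results engine sketch: term-frequency scoring as a stand-in for real ranking.
def search(query, inverted_index, limit=10):
    scores = {}
    for word in query.lower().split():
        for url, count in inverted_index.get(word, {}).items():
            scores[url] = scores.get(url, 0) + count
    # Best-matching pages first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:limit]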
Web server. The search engine's web server usually hosts an HTML page with an input field where the user can type the search query he or she is interested in. The web server is also responsible for displaying the search results to the user as an HTML page.
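Finally, a toy web server can tie the pieces together. The handler below assumes the search() function and inverted_index from the previous sketches already exist; it serves a query form and renders the ranked results as an HTML page.

# Minimal web server sketch: a query form plus an HTML results page.
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        results = search(query, inverted_index) if query else []
        rows = "".join(f"<li><a href='{url}'>{url}</a> (score {score})</li>"
                       for url, score in results)
        body = ("<html><body>"
                "<form action='/' method='get'>"
                "<input name='q'><input type='submit' value='Search'></form>"
                f"<ol>{rows}</ol></body></html>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), SearchHandler).serve_forever()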
That's it for Common Search Engine Principles. Hopefully this article has been useful to you, and see you in the next post.