Search engines “crawl” websites, moving quickly from one page to the next like a hyperactive speed-reader. They make copies of the pages, which are stored in what is called the “index”, a massive ledger of the web.
When someone searches, the search engine flips through this ledger, finds all the relevant pages, and then picks out what it thinks is the best one to show first. To be found, you have to be in the book. To be in the book, you have to be crawled.
Each site is allocated a crawl budget, an approximate amount of time or number of pages a search engine will crawl each day, based on the relative trust and authority of the site. Larger sites may seek to improve their crawl efficiency to ensure that the “right” pages are being crawled more often. Using robots.txt, internal link structures, and specifically telling search engines not to crawl pages with certain URL parameters can all improve crawl efficiency.
For the most part, though, crawl problems can be easily avoided. In addition, it’s good practice to use sitemaps, both HTML and XML, to make it easy for search engines to crawl your site. You’ll find more about sitemaps and overcoming potential crawl problems in these chapters:
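As a sketch, a robots.txt file can keep crawlers away from parameter-driven URLs so the crawl budget is spent on pages that matter. The domain, paths, and parameter names below are hypothetical:

```text
# Hypothetical robots.txt served at https://example.com/robots.txt
User-agent: *
# Skip faceted-navigation URLs created by filter parameters
Disallow: /*?color=
Disallow: /*?sort=
# Tell crawlers where the XML sitemap lives
Sitemap: https://example.com/sitemap.xml
```

Note that robots.txt is a crawling directive, not an indexing guarantee; it simply asks well-behaved crawlers not to spend time on those URLs.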
SEO: Submitting Sitemaps
SEO: Crawling and Robots
SEO: Redirects and Mobile Sites
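To illustrate the XML flavor of sitemap mentioned above, a minimal file lists each URL you want crawled, optionally with a last-modified date. The URLs and date here are made up for the example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2017-01-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/products/red-dress</loc>
  </url>
</urlset>
```

An HTML sitemap, by contrast, is just an ordinary page of links that both visitors and crawlers can follow.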
Remember, “search engine friendly design” is also “human friendly design”!
Be mobile-friendly
More Google searches now happen on mobile devices than on desktop computers. With that in mind, it’s no surprise that Google rewards mobile-friendly sites with a chance at better rankings in mobile search, while sites that aren’t mobile-friendly may struggle to appear at all. Bing does the same.
So make your site mobile-friendly. You’ll improve your chances of search-ranking success and make your mobile visitors happy, too. Also, if you have an app, consider making use of app indexing and app linking, which both search engines offer.
For more about mobile-friendliness and app indexing, see these chapters:
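One common building block of a mobile-friendly (responsive) page is a viewport declaration in the page’s head, which tells mobile browsers to render the page at the device’s width instead of a zoomed-out desktop layout. This is a generic sketch, not a complete mobile strategy:

```html
<!-- In the <head> of a responsive page: render at device width,
     starting at 100% zoom -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```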
Google: Accelerated Mobile Pages / AMP
Google: App Indexing
Bing: App Linking
Mobile Marketing: App Indexing & App Search
SEO: Mobile Search
Duplication / canonicalization
Sometimes the good book, that search index, gets complicated. While crawling, search engines may find page after page of content that looks nearly identical, making it difficult to know which of the many pages should be returned for a given search. That’s not good.
It gets even worse when people actively link to different versions of the same page. Those links, indicators of trust and authority, are suddenly split between the versions. The result is a distorted (and lower) perception of the true value users have assigned to that page. That’s why canonicalization is so important.
You only want one version of a page to be available to search engines.
There are many ways duplicate versions of a page can be created. A site may have a www and a non-www version of the site instead of redirecting one to the other. An e-commerce site may let search engines index its paginated pages. But no one is searching for “red dresses, page 9.” Or filtering parameters may get appended to a URL, making it look (to a search engine) like a different page.
For all the ways URL bloat can be created accidentally, there are just as many ways to fix it. Implementing proper 301 redirects, using rel=canonical tags, managing URL parameters, and effective pagination strategies can all help you run a tight ship.
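The www versus non-www case above is usually fixed with a permanent (301) redirect at the server. As one sketch, on an Apache server with mod_rewrite enabled, an .htaccess rule like the following would send non-www requests to the www version; the domain is hypothetical and other servers (nginx, IIS) use their own syntax:

```apache
# Hypothetical .htaccess: permanently redirect example.com to www.example.com
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```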
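When a redirect isn’t practical, the rel=canonical tag mentioned above lets a duplicate page point search engines at the preferred version of itself. A minimal sketch, with a hypothetical URL:

```html
<!-- Placed in the <head> of every duplicate or parameterized version
     of the page, pointing at the one URL you want indexed -->
<link rel="canonical" href="https://www.example.com/red-dress">
```

Unlike a 301, the page still loads for visitors; the tag is a hint to search engines about which URL should receive the consolidated ranking signals.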
For more on duplication and canonicalization issues, see our chapter SEO: Duplicate Content.