Do you know exactly how blog posts make it to the search results when a user keys in a particular keyword or keyphrase?
If you cannot answer this question, it is time to learn more about search engines and how they work.
As a blogger, it is important that you know how search engines work so you can leverage Search Engine Optimization (SEO) and rank well in the top spots of the Search Engine Results Pages (SERP).
What Does SERP Mean?
SERPs, or search engine results pages, are the web pages that appear whenever you type a query into a search engine's search bar. Queries do not always appear as actual questions, which is why they are often referred to as keywords or keyphrases.
After you enter a query, search engines like Google present you with a SERP. These pages do not always look the same, because search engines like Google take several factors into consideration when choosing which website or blog page appears on any of the search engine results pages.
Note that SERPs do not only present organic results, meaning results that made it to the list because of SEO efforts. SERPs also present paid results, which are displayed because an advertiser paid for the spot. You can easily distinguish a paid result from an organic one by checking if there is an "Ad" label next to it. Paid results are part of another content marketing strategy called pay-per-click (PPC) advertising, which will be tackled in a separate post.
Crawling, Indexing, and Ranking
You can attribute three processes to a search engine: crawling, indexing, and ranking.
What do these terms mean and how exactly can you ensure that your blog post ends up on the top spots of search engine result pages, particularly when queries are made by your target audience?
The key here is making sure that your web pages are visible in the first place. In essence, the three processes work like this:
- looking for new web pages – crawling
- storing the web pages found into the database – indexing
- determining the quality of the web pages found – ranking
Let’s dive deep into these processes.
Even before a search is performed, web crawlers have already gathered billions of web pages into the Search index, ready to appear in the SERPs for a particular query.
Google begins the crawling process by sifting through past crawls and sitemaps provided by the website owners. Google’s web crawlers crawl the links found on these sources, particularly to find information on the following:
- new websites
- changes to existing sites
- dead links
Whatever information these web crawlers find, they bring back to Google for indexing.
There are instances when a website does not appear anywhere in the search results. In these instances, any of the following could be true:
- you have a brand new website that hasn’t been crawled yet
- no external website has linked to your site
- the navigation in your site is too complex
A robots.txt file gives Google directives on which parts of your website should or should not be crawled. You want to set this up properly so you keep Google from crawling pages that are not meant for public viewing.
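Here is a minimal sketch of what a robots.txt file might look like. The paths and sitemap URL are hypothetical; substitute your own site's structure.

```txt
# robots.txt — a hypothetical example
# Applies to all crawlers
User-agent: *

# Keep private areas out of the crawl
Disallow: /admin/
Disallow: /drafts/

# Everything else may be crawled
Allow: /

# Point crawlers to your sitemap (hypothetical URL)
Sitemap: https://www.example.com/sitemap.xml
```

The file lives at the root of your domain (e.g. `example.com/robots.txt`), and crawlers check it before fetching your pages.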
Your goal is also to resolve any of the issues mentioned above.
If robots.txt files give crawlers directives on which pages to crawl, meta tags are used to identify which pages should or should not be indexed.
The directives you can issue through meta tags are the following:
You use the "noindex" directive to trim the amount of data Google indexes by excluding certain pages of your site.
The follow/nofollow directives work on a page's links. There are links you can ask Google to follow, and there are links you want to mark with a "nofollow" directive. This is usually done to keep link equity from being passed on to other pages.
If you want to restrict search engines from saving a cached copy of a certain page on your website, that’s when you use the “noarchive” directive.
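The directives above are issued through a robots meta tag in a page's `<head>`. A sketch of how each might look (the combinations shown are illustrative):

```html
<!-- Keep this page out of the search index -->
<meta name="robots" content="noindex">

<!-- Ask crawlers not to follow the links on this page -->
<meta name="robots" content="nofollow">

<!-- Prevent search engines from saving a cached copy -->
<meta name="robots" content="noarchive">

<!-- Directives can also be combined -->
<meta name="robots" content="noindex, nofollow">
```

For a single link rather than a whole page, the nofollow directive goes on the link itself via `rel="nofollow"`.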
The Google Search index is over 100,000,000 gigabytes in size and currently holds hundreds of billions of web pages, keeping track of key signals found in the content of each page.
With the use of the Knowledge Graph, Google analyzes the information found so it can provide search results that not only match the keywords or keyphrases in your query but also provide the answers you need.
The web pages that make it to the top spots of SERPs do not appear by chance or by some sort of luck. As mentioned earlier, Google takes different factors into consideration. These factors are also called "ranking signals".
SEO agencies and content marketers may have plenty of success stories about earning high rankings for the websites they work on, but nobody knows exactly which factors Google weighs most heavily when deciding how a web page ranks in the organic results.
Google’s aim is to provide relevant results to queries. You can use this as a clue when working on optimizing your content. This is where linking comes into play in SEO.
Search engines use links to determine whether a website is reliable or not. Many SEO strategies revolve around link building: internal linking and backlink building.
Internal linking is all about connecting the pages within your website through anchor texts that are relevant to the page being linked to. Backlink building is when you get referrals from other websites.
Getting more links from other websites, especially reliable ones, is like telling Google your web pages are important enough.
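To make the two kinds of links concrete, here is a hypothetical sketch in HTML. The URLs and anchor texts are made up for illustration:

```html
<!-- Internal link: descriptive anchor text pointing to another
     page on the same site (hypothetical path) -->
<a href="/blog/what-is-seo">our beginner's guide to SEO</a>

<!-- Outbound link you do not want to pass link equity through,
     marked with the nofollow directive -->
<a href="https://example.com/sponsored-page" rel="nofollow">sponsored resource</a>
```

Descriptive anchor text helps search engines understand what the linked page is about, which is why "our beginner's guide to SEO" works better than "click here".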
Aside from link building, content quality also plays a critical role in SEO. The quality of your content is what secures engagement from your page visitors, which in turn validates the value of your web pages.
Unless you can validate through your content engagement that your web pages are helpful for the users, you will gradually lose your authority, and rank lower.