When Google launched on September 27, 1998, it introduced a revolutionary concept: page authority, based on the quantity and quality of the links pointing to a page. You cannot do SEO without being clear about what this concept is and how to work with it.

To understand how revolutionary this idea of using links as “votes” to measure the level of trust other pages deserve really was, suffice it to say that it improved search results so much that, in very little time, Google ousted the dominant search engine of the day: AltaVista.

Although “authority” is a very logical and intuitive concept whose essence remains the same as it was 20 years ago, time has not passed in vain and it has evolved a great deal.

The main change has been the move from a simple mathematical algorithm to an intelligent algorithm that understands far better the context and semantics surrounding those links and makes decisions based on that information.

If you want to do effective SEO, you have to know what these principles are. Here I will explain them in all the necessary detail.


How does “PageRank”, Google’s original authority metric, work?

The idea of assessing the level of trust a web page deserves logically requires a metric.

Originally, that metric was Google’s PageRank (PR), which assigns each page a value from 0 to 10 based on the level of trust it deserves according to Google’s criteria.

This metric still exists today, although Google no longer makes it public.

What is essential here is that it is a logarithmic scale, not a linear one. That is, in terms of the effort needed to move up, if going from PR 0 to PR 1 costs us 10, going from PR 1 to PR 2 would cost us 100, from PR 2 to PR 3, 1,000, and so on.
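To make that jump in effort tangible, here is a tiny sketch under the purely illustrative assumption that each extra PageRank point multiplies the required effort by 10 (Google’s real internal scale is not public):

```python
# Purely illustrative: assumes each PageRank point costs 10x more effort than
# the previous one. Google's real internal scale is not public.

BASE_COST = 10  # hypothetical effort to go from PR 0 to PR 1

for pr in range(1, 6):
    effort = BASE_COST ** pr
    print(f"Going from PR {pr - 1} to PR {pr} costs roughly {effort:,} 'effort' units")
```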

Currently, when ranking a page in the search results (“SERPs”), Google takes into consideration, above all, three criteria to decide which position to assign to it:

  1. How well the content on that page responds to the user’s search intent.
  2. The authority of the page and its domain (links).
  3. The user experience (deduced from a series of behavioral signals and other factors, such as loading speed, for example).

The crux of the matter is that, in all this, the authority factor continues to have a lot of weight and is what defines the level of difficulty of positioning your page for a certain search.

In other words: the difficulty depends on the authority of the pages competing with you. If you have very little authority and you are up against pages with significantly more, they will probably rank above you even if they answer the search worse than you do.

Therefore, as you can see, gaining authority has to be a fundamental objective for any website.

The idea behind Google’s original search engine algorithm

The basic idea behind the first version of Google was very simple: links between pages are used as a voting system, with a weighted calculation (a probability distribution) determining the weight of those votes.

Pages receive authority from other pages through their incoming links (those pointing to them) and distribute authority to other pages through their outgoing links (those leaving them).

In turn, the greater a page’s authority, the more authority it can transfer to others (the more weight its vote carries). This authority is often also called “strength” or “link juice”.

The tricky part is the mathematical model that governs how these amounts of authority are passed between pages.

I am not going to dig deeper into it because, in the end, it is transparent to you (what matters is the final result, not how it is computed). But if you are curious, here you can see the original paper written by Larry Page and Sergey Brin (the founders of Google) at Stanford University.
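If you prefer to see the idea in code, here is a minimal, purely educational sketch of the voting mechanism (a simplified power-iteration PageRank written for clarity, not Google’s actual implementation):

```python
# Simplified PageRank: an illustrative sketch of the original link-voting idea.
# Real search engines use many more signals; this shows only the link-based core.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with equal authority

    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:          # dangling page: spread its vote evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:                     # each outgoing link passes an equal share
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny example graph: A and C both "vote" for B, so B ends up with the most authority.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]}))
```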

And if you prefer a video explanation, here is one that presents the content of the paper in a much more visual way.

Milestones in the evolution of the Google PageRank algorithm

This simple mathematical algorithm based on link weights worked very well for a while but, as was bound to happen, after a few years more and more spammers and “clever” folks began trying to cheat it.

This started a battle between Google and spammers practising “Black Hat” SEO, the kind of SEO that violates Google’s guidelines and tries to trick the search engine into artificially moving web pages higher in the search results.

Typical examples of this type of Black Hat practice have been things like:

  • Keyword stuffing: artificially “inflating” texts with the keywords one wants to rank for.
  • Link farms: artificial websites whose only purpose is to generate outgoing links to the sites they want to push up in the search results.
  • Buying links: there is a “black market” for buying and selling links, a practice totally prohibited by Google.

You will also hear about “White Hat” SEO, which is the “good” SEO (the kind that respects Google’s terms), and “Grey Hat” SEO, which moves, as its name suggests, in a gray area where things are not 100% clear.

This battle is still in full swing today and has pushed Google to keep making its search engine smarter, combating these fraudulent practices with dozens of updates that detect them and penalize the sites that use them.

If you have been in online marketing for a while, you will have heard of Google Panda, Google Penguin or RankBrain, some of the most prominent and talked-about updates.

If you are interested in knowing more details about this, here you can find an exhaustive list of all the updates and the changes that each one has brought to date.

Here are a few examples to give you an idea:

  • The introduction of “nofollow” links in 2005, which tell Google not to follow the link or transfer authority through it. This helped a lot, for example, to control comment spam (a magnet for people trying to sneak in artificial links).
  • Penalized domains and “toxic” links (those coming from such sites). These links not only fail to transfer authority, they can even reduce it.
  • Thematic affinity between sites. In other words, a link from one car site to another car site is worth much more than the same link pointing to a site on a different topic.
  • Google no longer works in terms of keywords but in terms of search intent. That is, it understands much better what the user really wants with a certain search phrase (e.g. for “cheeseburger” it infers that the user is probably looking for a recipe).
  • Google now also analyzes user behavior. For example, if the result in position 4 consistently receives more clicks than the one in position 2, it may consider reordering them. Another example: if it detects that people systematically “bounce” from certain pages among the results (it interprets that they are not satisfied), those pages will drop positions.
  • Brand searches. If many people search for, say, “Citizen 2.0”, this increases the site’s authority, since Google understands it as a sign of being a reference, quality site.
  • Etc., etc., etc.

How to increase the authority of a domain and its pages

With what you have seen in the previous sections, the obvious question of how to increase the authority of your domain practically answers itself: by getting quality links to your pages.

What you have to be clear about is the following: depending on the visibility of your website, you will earn more or fewer links passively (the probability that someone links to one of your pages), and you can also obtain them actively by working at it, which is known as “link building”.

Here are two posts, one about link building and another that teaches you strategies (including paid options) to achieve maximum visibility, even if your website is still young:

And to round off these readings, here is a post that explores the most important ranking factors in the current version of Google:

How to know the authority of a domain and its pages?

At one time, knowing the authority Google assigned to a page was easy: PageRank was public. There were multiple tools (free browser extensions, etc.) to display the PR of any page.

But starting in 2013 Google began to restrict access to PageRank, and in 2016 it shut it down for good. Although there is no official explanation, the reason is probably that they considered that publishing this information encouraged Black Hat techniques.

However, it was not dramatic, because there were already alternatives to Google’s official PageRank on the market that even tried to improve on the original with finer scales (from 0 to 100), differentiating between the authority of a page and of the entire domain, and so on.

Now, you have to be very clear that all these alternative metrics are mere estimates, more or less reliable, but estimates after all. Official public PageRank data no longer exists.

These estimates are based on simulations of Google’s original algorithm and additional information about other relevant ranking factors that Google has been known to add in updates to its algorithm.

Let’s now look at the most relevant authority metrics and the companies that created them.

Calculate domain and page authority with Moz (DA and PA)

The “classic” reference in authority metrics, together with Majestic’s metrics (which we will see next), are the Page Authority (PA) and Domain Authority (DA) metrics from Moz, one of the leading companies in global SEO.

The good thing is that you can access these metrics completely free of charge thanks to the MozBar, a free browser extension that overlays them on each of the Google results, as you can see below:

Roughly speaking, these metrics try to replicate Google’s PageRank, but differentiate between the authority of a page and the authority of its domain (which would be something like the aggregate of the authority of all its pages).

Personally, they are the ones I use most in practice. They are simple and intuitive and, given that metrics of this kind can never be very precise by their very nature, I consider they reach a sufficient level of precision and consistency to be useful in practice.

Measure authority with Majestic’s Trust Flow (TF) and Citation Flow (CF)

As I said above, the other big alternative for estimating the authority of a page is Majestic’s Trust Flow (TF) and Citation Flow (CF) metrics.

Majestic introduced an interesting new idea with these metrics: relating the two gives us additional insight into the quality of a site’s links.

Now, these metrics cannot easily be used with free tools similar to MozBar because Majestic only provides them under a paid service.

Let’s see how the idea behind TF and CF works:

What is the Majestic Trust Flow (TF) and how does it work?

Trust Flow, a trademark of Majestic, is a quality-based score on a scale of 0 to 100. It relies on manual work in which Majestic collects a large set of “trusted seed” sites, based on a manual review of those websites.

TF measures authority by taking into account the number of hops (clicks) separating a given page from one or more of these “seed” pages.

That is, if page A links directly to page B, the relationship is direct. But it could be that A links to B and B links to C. In that case, authority is still transferred from A to C, but in a smaller amount, because the relationship is indirect.
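Majestic does not publish its exact formula, but a toy sketch of this “hops from trusted seeds” idea could look like the following; the seed list, the graph and the 50% decay per hop are my own assumptions for illustration only:

```python
from collections import deque

# Toy illustration of the "hops from trusted seeds" idea behind Trust Flow.
# The seed list, link graph and 0.5 decay per hop are invented for this example;
# Majestic's real data and formula are not public.

def hops_from_seeds(links, seeds):
    """Breadth-first search: minimum number of link hops from any seed page."""
    distance = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in distance:
                distance[target] = distance[page] + 1
                queue.append(target)
    return distance

def toy_trust_score(links, seeds, decay=0.5):
    """Trust halves with every extra hop away from a trusted seed (assumption)."""
    return {page: 100 * decay ** hops
            for page, hops in hops_from_seeds(links, seeds).items()}

links = {"seed": ["A"], "A": ["B"], "B": ["C"]}
print(toy_trust_score(links, seeds=["seed"]))
# {'seed': 100, 'A': 50.0, 'B': 25.0, 'C': 12.5}
```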

What is Majestic’s Citation Flow (CF) and how does it work?

Citation Flow, put simply, is Majestic’s metric that emulates PageRank, similar in that sense to Moz’s PA.

How to estimate authority with the Trust Flow / Citation Flow ratio

The truly differentiating contribution of these metrics comes when the two are combined. The basic idea is that the TF/CF ratio gives a more accurate estimate of the site’s authority.

For example:

Let’s first take a page with a TF of 40 and a CF of 40. That gives it a ratio of 1.

Now let’s take another page that also has a CF of 40, but a TF of only 10. In this case, the TF/CF ratio of that page would be 0.25.

According to Majestic, with the same CF value (its emulated PageRank) but a lower TF, this second page would have significantly less authority than the first, because the proportion of its links coming from sites Majestic identifies as “trustworthy” would be much smaller.

Put another way: the proportion of high-quality links is much higher in the first case and, therefore, the first site deserves more trust and should be considered more authoritative.
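The ratio calculation itself is trivial; just to make the two example pages above concrete:

```python
# Illustrative TF/CF ratio check using the two example pages above.
pages = {
    "page_1": {"TF": 40, "CF": 40},
    "page_2": {"TF": 10, "CF": 40},
}

for name, m in pages.items():
    ratio = m["TF"] / m["CF"]   # a higher ratio suggests a healthier link profile
    print(f"{name}: TF/CF ratio = {ratio:.2f}")
# page_1: TF/CF ratio = 1.00
# page_2: TF/CF ratio = 0.25
```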

If the ratios of all the pages of a website are represented in a graph like the following, we can also have a global vision of the authority of the website:

On paper, this idea of Majestic’s is very interesting, which leads many people to consider it a superior metric to Moz’s. Personally, however, I am not so convinced.

The biggest downside I see regarding the reliability of TF is the fact that it is based on manual work.

With the current size of the web (over a billion sites) and the speed at which it changes, is this manual seeding job really feasible and accurate enough for a mid-sized company like Majestic?

In addition, there is also criticism of the quality of Majestic’s link database compared to other major competitors such as Ahrefs.

In any case, it is not the purpose of this post to get into that debate; I simply wanted to raise these reservations about Majestic’s system. I encourage you to look for more information in other sources if you want to dig deeper.

Alternatives to check the authority of a website and the difficulty of a niche

Moz’s and Majestic’s metrics were the benchmark for many years, with hardly any alternatives. But over time several have emerged.

Ahrefs UR and DR metrics

This tool has gained a lot of popularity in recent years and has positioned itself as one of the largest in the SEO tools market.

It uses two metrics that, roughly speaking, can be compared to Moz’s PA and DA, also on a scale from 0 to 100, since they share a similar philosophy: UR (URL Rating), the equivalent of Moz’s PA, and DR (Domain Rating), the equivalent of Moz’s DA.

SEMrush Authority Metrics

SEMrush is, like Ahrefs, another leader in the SEO tools market. In fact, it is the tool we use on this blog and my favorite.

This tool also has its own metric, the Authority Score, which in turn is calculated from the Page Score, Domain Score and Trust Score, along with a few other metrics.

Without going into the nuances, and simply so we understand each other: although these metrics are based on SEMrush’s own methodology and algorithms, the idea is roughly a combination of the ideas we saw with Moz and Majestic.

Page Score would be similar to Moz’s PA and Domain Score to its DA. Trust Score is similar to Majestic’s TF, and all of this combined gives the Authority Score, which resembles the idea behind Majestic’s TF/CF ratio.

It should be added that SEMrush has some very interesting tools and metrics that feed into this, such as the Toxic Score, which comes from its “toxic” link detection tool (links from very low-quality domains, probably penalized by Google).

Ubersuggest metrics

One tool I can’t overlook is the relatively recent Ubersuggest.

Along with the MozBar, Ubersuggest is the only tool that provides these metrics for free, at least without the severe usage limitations of the freemium versions of paid tools such as SEMrush, Ahrefs or Majestic.

Although it cannot be compared with the feature set of SEMrush or Ahrefs, the truth is that Ubersuggest offers very complete functionality for a tool that, today (we’ll see how long that lasts…), is 100% free.

In this vein, it also has an authority metric, the DS (Domain Score), similar to the other domain authority metrics, although it does not have a page authority metric.

And what are page and domain authority really for?

All this is very interesting, but you may be wondering what it is really for and how we can use it for our own purposes.

The answer is simple: apart from letting us track the evolution of the authority of our site and its pages, these metrics help us analyze to what extent we can compete in Google for given searches, and which strategies to use to do so in the best possible way.

What is the weight of page and domain authority in web positioning?

As I mentioned at the beginning, links (and therefore a website’s authority) are among the most important ranking factors. So much so that, with little authority, you are going to have a really hard time in any search that is not low-competition.

Now, with that said, don’t despair: you can still do things if you are in a good topic niche (one in demand, i.e. with good search volume).

On the one hand, it is a matter of being humble and focusing on more specialized searches with less competition. This is known as the “long tail” strategy, and we will see it below with a concrete example.

On the other hand, also remember that links are not the only ranking criterion.

You will often find results from domains with a lot of authority that do not respond 100% to the search intent. Good opportunities are hiding here: respond exactly to the search intent.

What the various “Keyword Difficulty” metrics are and how to use them

To simplify estimating how hard it is to compete for a certain search (which is what this is ultimately about) by reducing that difficulty to a single value, almost all current SEO tools also have “keyword difficulty” metrics, which typically also move on a scale from 0 to 100.

In the case of Ahrefs, for example, by its own definition it “calculates the Ahrefs KD score by taking a weighted average of the number of domains linking to the current top 10 ranking pages and projecting the result on a log scale of 0 to 100.”

The details of the algorithm vary in each of the other tools (SEMrush, etc.), but the end goal is basically the same.

SEMrush, like Ahrefs, calls this metric KD, while Ubersuggest calls it SD (SEO Difficulty).
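To make the idea tangible, here is a rough, hypothetical sketch of a KD-style calculation inspired by the Ahrefs description above; the weights, scaling and example numbers are my own assumptions, not any tool’s actual formula:

```python
import math

# Hypothetical KD-style score: weighted average of referring domains of the
# current top-10 results, projected onto a 0-100 log scale.
# Weights and scaling are invented for illustration; real tools differ.

def keyword_difficulty(referring_domains, max_domains=10_000):
    # Give slightly more weight to the top positions (assumption).
    weights = [1.0 - 0.05 * i for i in range(len(referring_domains))]
    avg = sum(w * d for w, d in zip(weights, referring_domains)) / sum(weights)
    # Project onto 0-100 using a log scale.
    return round(100 * math.log10(1 + avg) / math.log10(1 + max_domains))

# Referring-domain counts for the pages ranking 1-10 (made-up numbers).
top10 = [850, 620, 540, 300, 280, 150, 90, 75, 40, 25]
print(keyword_difficulty(top10))  # prints a single 0-100 difficulty value
```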

Anyway, whichever one you use, don’t forget that this type of metric is absolute. That is, it does not take your own website’s authority into account, so you lose the comparative information you would get from contrasting the authority of your website (or of specific pages) with that of the competing pages.

How to analyze in which searches you are most likely to position yourself

We have now seen enough to apply everything we have learned to what really interests us: analyzing your chances of ranking for a certain search.

To keep it practical, I am going to explain it with a very simple example. If you want to see this part in more detail, you can also download our SEO eBook for beginners:

Let’s go with the example:

Let’s say you are the author of a very young gardening blog, with a Domain Authority of 10 according to Moz, and you would like to publish an article on caring for bonsai trees.

You have analyzed the niche with an SEO tool and seen that there are many specific searches matching this search intent, and together they add up to a total volume of thousands of searches per month. So far, so good for this idea.

The next step is to run the search “how to care for bonsai trees” in Google to analyze the level of competition. To do this, we are going to use the MozBar.

After doing so, you find this picture on the first page of results (which is the only one that really interests us):

Wow… things look bad. All the results respond quite exactly to the search intent and have a lot of page authority; some even come from domains with very high authority (uncomo.com), although this matters much less than page authority (as you can see from their positions).

Let’s not despair; remember what I said above: in these cases a “long tail” strategy usually offers a way out.

This SEO concept of the “long tail” refers to the large number of specialized variants (longer in word count) of general searches such as the one in the example.

In this case, for example, there is a long string of search variants depending on the type of bonsai: “how to care for a ficus bonsai”, “how to care for a carmona bonsai”, “how to care for a ginseng bonsai”… and so on, dozens of them.

You have plenty of experience with carmona bonsai, so you take a look to see whether there is enough search volume. If you used SEMrush, you would see the following results:

Good news: although the total volume is noticeably lower than for the more generic search, the different long-tail variants add up to almost 500 searches per month, which is still an interesting volume.

So let’s see what the competition looks like for this search intent. Searching for “how to take care of a carmona bonsai” we get:

Looking at the search results, at first glance you may fear that, once again, everything is full of websites with enough authority that you will not stand a chance against them.

But analyze the results carefully: have you noticed that only one of the results really responds well to the search intent (the highlighted one)?

If you look closely, the others are results with generic information about this type of bonsai; they do not specifically answer how to care for it.

Furthermore, notice that the highlighted result that responds well to the search has much lower PA and DA than the ones below it. According to Moz, it doesn’t even have a single link.

Why is this happening?

Well, because it is the only one that adequately responds to the search.

The result above it probably answers the question somewhere in its content (in a subsection, for example), even if this is not reflected in the title (that is, it is not the main topic of the content), but thanks to its high PA and DA (and possibly some other factor) it manages to beat the exact answer.

Probably with just a little more PA (between 15 and 20, I estimate) the exact result would already be on top.
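If you want to systematize this kind of screening a little (without replacing the manual judgment about search intent, which remains essential), a rough sketch could look like this; the PA margin, the verdict rule and the annotated data are invented purely for illustration:

```python
# Rough screening heuristic, not a rule: the search looks like an opportunity if
# few page-1 results match the intent exactly, and those that do have low PA.
# Intent matching is judged manually; thresholds and data are invented examples.

MY_DA = 10          # the example blog's Moz Domain Authority
PA_MARGIN = 15      # assumed: we can plausibly outrank exact-match pages up to DA + 15

# Manually annotated page-1 results (made-up numbers).
serp = [
    {"url": "result-1", "matches_intent": False, "pa": 35},
    {"url": "result-2", "matches_intent": True,  "pa": 12},
    {"url": "result-3", "matches_intent": False, "pa": 40},
]

exact_matches = [r for r in serp if r["matches_intent"]]
weak_exact_matches = [r for r in exact_matches if r["pa"] <= MY_DA + PA_MARGIN]

if not exact_matches or len(weak_exact_matches) == len(exact_matches):
    print("Possible opportunity: answer the intent exactly and you can compete.")
else:
    print("Hard search: strong pages already answer this intent exactly.")
```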

Also look at the KD values shown by SEMrush: they indicate relatively high levels of difficulty.

This is more or less consistent with the DAs of the results; however, it does not reflect the real situation, because there is hardly any content that accurately answers these searches and, moreover, the pages that do have very little authority.

Hence my criticism of such a simplistic metric: I think it trivializes the issue so much that it stops fulfilling its function in practice. That’s why I personally hardly ever use these metrics.

Conclusions

In short, in the end all this is quite logical and intuitive, don’t you think?

However, if after reading this post you are left with the feeling that there is also a lot of fuzziness in all this, congratulations: you have hit the nail on the head. There is indeed, and that is how you should take it.

In the example, everything fit together very neatly, but in practice you will more than once come across figures that do not make much sense to you.

Other times you will find that a piece of content that, according to all the metrics, should clearly rank without problems turns out not to rank at all, no matter what you do.

The fact is that we must not forget that, although this analysis is essential when you want to rank a piece of content, we are still working with estimates, not to mention that the ways of Mr. Google are often as inscrutable as those of any other divine being.

As for tools, although I am an absolute fan of SEMrush, especially its wonderful keyword research tool (the Keyword Magic Tool), when it comes to analyzing the level of competition I still use the MozBar almost exclusively.

The reason is that not only do the PA and DA metrics still (generally…) square with the search results for me, but I cannot conceive of doing this analysis anywhere other than on the Google results page itself.

The previous example showed this very clearly: it is not about shuffling PA and DA figures like an automaton; there is a large subjective component, a lot of “art”, in assessing to what extent the results respond to the search intent, often even having to go into the content of each result to fine-tune this assessment.

For all these reasons, in my personal experience the MozBar concept, an extension that integrates into the browser and blends into the Google results, has proven to be the best formula to date for this job.