
Discuss: .Net and .Info registrations struggling

Per Hosterstats.com, the total numbers of .Net and .Info registrations (in millions) have been as follows:

JUL 2018: 13.9 & 5.4
JUL 2017: 14.9 & 5.9
JUL 2016: 15.6 & 5.5
JUL 2015: 14.9 & 5.1
JUL 2014: 15.2 & 5.7
JUL 2013: 15.1 & 6.5
JUL 2012: 14.7 & 7.8
 
Thank you for providing some detail @jmcc. Just briefly, here are my thoughts...
  • I agree that starting the "possible" use with the Alexa 10M, although not as restrictive as 1M, must be taken into account when drawing conclusions. I suspect it biases against country code extensions, for reasons you indicate, and also against the newer ngTLDs, which have not yet had much time to place sites in the top 10M. That is why the point that W3Techs starts with the Alexa 10M was stressed several times in the thread, and in my associated blog post.
  • I still don't fully understand your comment about the Chinese market. Yes, I agree that that market, and the domain market in general I would say, is highly speculative. If domains are parked or redirected to a lander, they would not get counted in the W3Techs data at all. It seems to me that is exactly what I want in data that looks at actual web use, which is what I was doing.
  • I totally agree with your last point that web use is hard to obtain and changes over time, and that is exactly what I wanted to look at! So while stressing the limitations (e.g., the Alexa 10M starting list), W3Techs was the best open, free, long-term, broad data I could find. W3Techs is in the top 1,000 of the roughly 1.9 billion "websites" in the world, so I am far from alone in using their data! (I fully realize most users probably are accessing their detailed data on technical features of websites they visit, not simply overall use)
Anyway, thanks for providing some background for your critical comments on W3Techs. Perhaps we will have to agree to disagree, as I have heard nothing that changes my opinion that it was the best option for the kind of use study I wanted to do.

If any reader of this thread wants to suggest alternative data sources that cover many TLDs, are open and free, disclose their methodology, have at least 5 years of data for long term studies, and are from widely used / respected sources, I would be more than happy to do a comparison of the messages the data give.
 
I agree that starting the "possible" use with the Alexa 10M, although not as restrictive as 1M, must be taken into account when drawing conclusions. I suspect it biases against country code extensions, for reasons you indicate, and also against the newer ngTLDs, which have not yet had much time to place sites in the top 10M. That is why the point that W3Techs starts with the Alexa 10M was stressed several times in the thread, and in my associated blog post.
It is not a good sample and far from random. The ease with which Alexa can be gamed makes it a serious problem. In terms of running a statistical survey, a random sample is required. If a sample is not representative of a TLD then it is misleading. It is certainly so when it comes to ccTLDs.

I still don't fully understand your comment about the Chinese market. Yes, I agree that that market, and the domain market in general I would say, is highly speculative. If domains are parked or redirected to a lander, they would not get counted in the W3Techs data at all. It seems to me that is exactly what I want in data that looks at actual web use, which is what I was doing.
Web usage is generally expressed as a percentage of domain names in the TLD (for a full zone survey) or in the sample (for a statistical survey based on a random sample of domain names from the zone). So if domain names don't resolve to a working website, these can drag down the usage percentages. This is exactly what happened with the Chinese Bubble in .COM and .NET TLDs. Usage outside the Chinese market remained relatively stable. It was just that those millions of Chinese registrations were significant enough in volume to have effects on the usage figures for the gTLDs in which they appeared.
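To make that dilution effect concrete, here is a rough worked example in Python with made-up numbers (not figures from any actual survey):

Code:
# Hypothetical illustration of how speculative, non-resolving registrations
# drag down a TLD's usage percentage; all numbers are invented.
developed = 3_000_000                     # domains resolving to developed sites
other = 7_000_000                         # parked / no-content / non-resolving
before = developed / (developed + other) * 100            # 30.0%

surge = 2_000_000                         # speculative registrations, no sites
after = developed / (developed + other + surge) * 100     # 25.0%

print(f"usage: {before:.1f}% -> {after:.1f}%")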
I totally agree with your last point that web use is hard to obtain and changes over time, and that is exactly what I wanted to look at!
But unfortunately it is not good data and the methodology is not robust.

(I fully realize most users probably are accessing their detailed data on technical features of websites they visit, not simply overall use)
Again, that's another problem created by the use of Alexa data. It is not representative of technical features (server software, scripting languages etc) of websites in various TLDs.
Anyway, thanks for providing some background for your critical comments on W3Techs. Perhaps we will have to agree to disagree, as I have heard nothing that changes my opinion that it was the best option for the kind of use study I wanted to do.
The difference is that I do this kind of web usage measurement in real life. I understand that it was a blog post and that it is hard to get free and accurate data.

Just from a 2.24 million domain name statistical .EU survey in May 2018, the Content/No Content/Redirects percentages were 17.18% - 53.17% - 29.65% (including 2.75% HTTPS redirects).
The CNR percentages are based on approximately 28 different categories of usage.

A 10% .COM survey from April 2018 had a CNR of 19.50% - 56.75% - 23.75% (including 6.09% HTTPS redirects).

The .PRO full zone survey for April 2018 showed 13.19% - 62.30% - 24.51% (including 6.57% HTTPS redirects).
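For readers unfamiliar with how a Content/No Content/Redirect (CNR) breakdown like those above is produced, here is a minimal sketch. The category names and the bucket mapping below are hypothetical placeholders, not the roughly 28 categories the real surveys use:

Code:
from collections import Counter

# Hypothetical fine-grained usage categories mapped to the three CNR buckets.
CNR_BUCKET = {
    "developed_site": "Content",
    "ecommerce":      "Content",
    "ppc_parking":    "No Content",
    "holding_page":   "No Content",
    "no_website":     "No Content",
    "http_redirect":  "Redirect",
    "https_redirect": "Redirect",
}

def cnr_percentages(classified):
    """classified: iterable of (domain, category) pairs from a survey."""
    counts = Counter(CNR_BUCKET[cat] for _, cat in classified)
    total = sum(counts.values())
    return {b: 100.0 * counts[b] / total
            for b in ("Content", "No Content", "Redirect")}

# Tiny made-up sample:
sample = [("a.eu", "developed_site"), ("b.eu", "ppc_parking"),
          ("c.eu", "https_redirect"), ("d.eu", "no_website")]
print(cnr_percentages(sample))  # {'Content': 25.0, 'No Content': 50.0, 'Redirect': 25.0}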
 
Thank you for the additional reply @jmcc, although I feel a number of your arguments overlook that W3Techs does not use a sample-based approach.
  • You say "It is not a good sample and far from random. " Of course the Alexa 10M used as the starting point (not the final selection) by the W3Techs is not random (it is after all the 10M most visited websites). W3Techs does not use a sample approach. It is not logical to argue it is poor sampling when it is not a sample based approach!
  • You say "Web usage is generally expressed as a percentage of domain names in the TLD" That is only by those who are using a sample approach since they need to correct their data to project to the entire set had they been able to sample it all. It is not relevant to the W3Techs approach, since it is not a sample approach. They (and I in using their data) always stressed that it was based on the most popular sites. I did note in my writeup that might bias the data for some TLDs (I suspect mainly against new and newly popular extensions).
  • I totally accept, and always have, that there are arguments for and against each off the two approaches (sampling vs major website analysis). A most visited websites approach emphasizes those sites with a lot of traffic, whereas by sampling it is possible that a hugely important website is not sampled and data could be skewed as a result, despite the attempted correction factors. Public opinion polls are sampling based, and there have been famous cases where their predictions have been very wrong. On the other hand, the Alexa 10M, or any similar, list could, and probably at least occasionally is, skewed by attempts to make a few sites more popular than they genuinely are. This is partly mitigated by the fact that W3Techs don't use the actual ranking, just the list as a starting point, and they do adjust it for things like redirect traffic and subdomains. The sampling approach is more resource intensive (at least if the sample is large) and the task of sorting out how to do the adjustments (you have given us some idea of the correction complexity in your comments). I think there is no simple answer as to which is better (probably some combination using data from both is best).
I am not arguing that you should abandon your sample and correction based approach. Not at all. As in all big debates, I totally accept that each approach has virtues. No data set is perfect.

I do however feel comfortable with using the W3Techs data. I pointed out earlier that their data is widely used (top 1,000 of all global websites). I also, just now, checked how often their data is used in professional circles. I realize there will be occasional duplication, but a quick check on Google Scholar shows that W3Techs is cited in just over 2,500 scientific papers and studies. Obviously the W3Techs dataset is complex, covering many factors, and simple web use is not the only or major use, but the fact that their data is so widely used by computer science and public policy professionals is encouraging, at least to me.

I continue to feel that the W3Techs data can inform temporal studies of website use in different TLDs. I accept that you strongly feel that a sample based approach is the only, or at least the best, way to collect the data. I don't think we will settle the debate at NPs, and I don't plan to invest additional effort in detailed responses on this topic, as I think we have both already said what needed to be stressed.

Should you know of freely available website use data, with clear methodology stated, that covers most TLDs and has at least 5+ years of results available, I, and I am sure others, would welcome a link.

Thank you for the length, and tone, of your last two replies (the first one, not so much :xf.wink:).
Have a good day.

Bob
 
Thank you for the additional reply @jmcc, although I feel a number of your arguments overlook that W3Techs does not use a sample-based approach.
It is not a robust methodology and it provides inaccurate results. It is that simple.

You don't know how representative the W3Techs stuff is in terms of usage in a zone. It really is that simple. Web usage in a zone or a random sample of that zone is completely different to W3Techs's stuff.

You say "Web usage is generally expressed as a percentage of domain names in the TLD" That is only by those who are using a sample approach since they need to correct their data to project to the entire set had they been able to sample it all. It is not relevant to the W3Techs approach, since it is not a sample approach. They (and I in using their data) always stressed that it was based on the most popular sites. I did note in my writeup that might bias the data for some TLDs (I suspect mainly against new and newly popular extensions).
By professionals in the industry. There's a difference between what you see on W3Techs and real web usage surveys. Real web usage surveys are sample based and sometimes that sample is the entire zone. There is no fair comparison between a real web usage survey and W3Techs's stuff. To put it quite simply, W3Techs's stuff is not a web usage analysis and it is not reliable in terms of usage in any TLD.

I am not arguing that you should abandon your sample and correction based approach. Not at all. As in all big debates, I totally accept that each approach has virtues. No data set is perfect.
Full zone file surveys tend to be as close to perfect as one can get.

I do however feel comfortable with using the W3Techs data.
The problem is that the W3Techs stuff isn't accurate web usage data.

I also, just now, checked how often their data is used in professional circles. I realize there will be occasional duplication, but a quick check on Google Scholar shows that W3Techs is cited in just over 2,500 scientific papers and studies.
So? It typically means that the authors of most of those papers and studies don't understand the limitations of the W3Techs stuff. Measuring web usage is a highly specialised field. Most of the citations would be very shallow rather than any deep analysis of their methodology. They tend to concentrate on various technologies and webserver types. Netcraft would be far more accurate when it comes to the various types of webserver software in use ( https://news.netcraft.com/archives/2018/07/19/july-2018-web-server-survey.html ).

I continue to feel that the W3Techs data can inform temporal studies of website use in different TLDs.
Try using that argument with some registries and see how long they can keep a straight face without laughing. Registries tend to pay attention to their zones and many realise that those W3Techs figures do not provide accurate views of the usage in their TLDs.

I accept that you strongly feel that a sample based approach is the only, or at least the best, way to collect the data.
That's how it is done in the industry. A statistical sample is used where the zone is very large or where there are time constraints. Full zone surveys are more commonly used where the zonefile is available.
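As a rough sketch of what drawing a statistical sample from a zone can look like in practice (assuming a plain one-domain-per-line listing; real zone files contain resource records and need proper parsing, and this is not jmcc's actual tooling):

Code:
import math
import random

def sample_zone(zone_path, sample_size, seed=42):
    """Uniform random sample of domain names from a one-per-line listing."""
    with open(zone_path) as f:
        domains = [line.strip() for line in f if line.strip()]
    return random.Random(seed).sample(domains, min(sample_size, len(domains)))

def margin_of_error(share, sample_size, z=1.96):
    """Approximate 95% margin of error for an estimated usage share."""
    return z * math.sqrt(share * (1 - share) / sample_size)

# e.g. a 19.5% "content" share estimated from a 100,000-domain sample
print(f"+/- {100 * margin_of_error(0.195, 100_000):.2f} percentage points")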

The W3Techs stuff might seem impressive to those who don't understand web usage analysis but the reality is that a good web usage survey allows one to see problems in a TLD and also can be used to see which categories of domain names are most likely not to renew.

This aspect is not captured by the W3Techs stuff. It is actually possible to estimate the probability of domain names renewing. Specific categories of usage tend to have low renewal rates and most of the domain names that are deleted (>70%) have no associated website. There are even hosters/registrars that have patterns of high non-renewal rates. Combined with web usage surveys, this renewal analysis can provide far more information about the health, or otherwise, of a TLD.
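As a sketch of the kind of per-category renewal analysis described above (the categories, snapshots and numbers here are hypothetical, not jmcc's actual data or method):

Code:
from collections import defaultdict

def renewal_rate_by_category(usage_by_domain, later_zone):
    """usage_by_domain: {domain: usage_category} from a survey snapshot.
    later_zone: set of domains still present in a later zone snapshot."""
    tallies = defaultdict(lambda: [0, 0])        # category -> [renewed, total]
    for domain, category in usage_by_domain.items():
        tallies[category][1] += 1
        if domain in later_zone:
            tallies[category][0] += 1
    return {cat: 100.0 * renewed / total
            for cat, (renewed, total) in tallies.items()}

# Made-up example: parked domains renew far less often than developed sites.
usage = {"a.com": "developed_site", "b.com": "ppc_parking", "c.com": "ppc_parking"}
print(renewal_rate_by_category(usage, later_zone={"a.com"}))
# -> {'developed_site': 100.0, 'ppc_parking': 0.0}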

Should you know of freely available website use data, with clear methodology stated, that covers most TLDs and has at least 5+ years of results available, I, and I am sure others, would welcome a link.
There is no reliable, long-term, free web usage data available. Sometimes registries might publish some limited data but, as I said, measuring web usage is a complex task and how websites are used changes over time.
 
Jmcc is probably right on this; certainly in the past, Alexa results seemed very skewed. Quickly looking at the W3Techs table, some of those entries don't instinctively feel like they are in the right order, but it's hard to know for sure even in a single language like English alone. The big web companies like Google make the web seem very stable and seamless but underneath it's a lot more dynamic and chaotic with a large percentage of domains fluctuating between different states (classifications) even between monthly survey snapshots.
 
Quickly looking at the W3Techs table, some of those entries don't instinctively feel like they are in the right order, but it's hard to know for sure even in a single language like English alone.
The CNR for .UK was at 21.16% - 52.06% - 26.78%, with HTTPS redirects at 4.95%, in a recent statistical survey. Language as an indicator isn't reliable because English is used for global trade. This means that a site selling to a global market is more likely to use English than its local language.

The big web companies like Google make the web seem very stable and seamless but underneath it's a lot more dynamic and chaotic with a large percentage of domains fluctuating between different states (classifications) even between monthly survey snapshots.
Google does a good job, but underneath, the web is like a seething cauldron with new domain names bubbling up and other domain names disappearing each month. The new 2017 registrations renewal rate for .COM from April 2018 (the renewal cycle can take just over two months to play out) is 59.16%. Approximately 40.84% of .COM domains detected as new in the same period in 2017 did not renew. The blended renewal rate for .COM from ICANN's own data for the equivalent period was 74.12%. The blended renewal rate includes all domain names from 2017 and earlier that were up for renewal in that period which were not deleted. (Basically (Renewals/(Renewals+Deletions))*100)
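To make the blended formula concrete, here is a quick sketch with invented counts (not the actual April 2018 .COM figures):

Code:
def blended_renewal_rate(renewals, deletions):
    """Blended renewal rate as described above: renewals as a share of all
    domains that were up for renewal (renewed + deleted), in percent."""
    return renewals / (renewals + deletions) * 100.0

# Hypothetical counts chosen only to land near the quoted ~74% figure.
print(round(blended_renewal_rate(renewals=2_900_000, deletions=1_000_000), 2))
# -> 74.36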

Some sites do change categories even between monthly surveys. This may be due to no-content sites having developed websites uploaded, developed websites disappearing because their domain name hasn't been renewed, or websites getting compromised. There are also trends in some of the Chinese gTLDs where a site may change from a gambling or affiliate lander to simple PPC or an on-hold/expired PPC page (again, more common if the domain name is up for renewal).

The decline of .NET and some of the other non-core gTLDs started as far back as 2009 or so when Domain Tasting was stopped by ICANN's introduction of a restocking fee based on percentages of AGP deletes per month. It was compounded by the changes in PPC advertising (Google/Yahoo etc) revenue as well. Domain names that had been good for PPC revenue stopped making the minimum to justify renewal fees. There were various offers and discounts to promote these gTLDs but it wasn't until the Chinese Bubble kickstarted some interest in these gTLDs that they started gaining some registrations.

The problem is that each of these gTLDs is now split between a Chinese market and a Rest of World market. The RoW market is quite stable because a lot of the registrations are brand protection or veteran registrations. The blended .NET renewal rate was 74.38% but its one-year renewal rate was approximately 55.74%. The really weird thing is the performance of .MOBI. Everyone seems to write it off as a failed TLD but it is not. The one-year renewal rate for April 2018 is 62.11% with a blended renewal rate of 74.03%. Admittedly it is a much smaller TLD than others.

The growth in some of these non-core gTLDs is coming from Asian markets rather than RoW markets, but for a gTLD with a diversified set of markets and a smaller global footprint that can be quite effective. The problems arise when a gTLD, especially a new gTLD, becomes completely dependent on discounting to drive registrations. That's what has happened to some of the leading new gTLDs. Their renewal rates tend to be quite poor but it is the discounting that keeps the volume steady(ish).
 
Thanks @jmcc for the useful information re .com renewals:
The new 2017 registrations renewal rate for .COM from April 2018 (the renewal cycle can take just over two months to play out) is 59.16%. Approximately 40.84% of .COM domains detected as new in the same period in 2017 did not renew. The blended renewal rate for .COM from ICANN's own data for the equivalent period was 74.12%. The blended renewal rate includes all domain names from 2017 and earlier that were up for renewal in that period which were not deleted. (Basically (Renewals/(Renewals+Deletions))*100)

And this enlightening history with a clear indication of why and when the decline in certain TLDs started:
"The decline of .NET and some of the other non-core gTLDs started as far back as 2009 or so when Domain Tasting was stopped by ICANN's introduction of a restocking fee based on percentages of AGP deletes per month. It was compounded by the changes in PPC advertising (Google/Yahoo etc) revenue as well. Domain names that had been good for PPC revenue stopped making the minimum to justify renewal fees."

I don't doubt that the sort of zone file deep dive that your firm does is what is needed if one wants to predict things like which types of domains will probably renew and what future renewal rates will be (and your explanation of those types of uses of your data has been interesting). However, what I posted was not on the topic of registrations or projected renewals, or even total use of a TLD, but rather how widely TLDs are used in major websites. For that purpose, I feel that a broad survey like the W3Techs dataset can be informative, with its broad, open, and long-term statistics. Even while recognizing the possibility of some bias, using it is far better than the vague statements we often hear in arguments. You may say that what scholars and researchers use does not matter and that they don't really understand the business, but the wide use of W3Techs data, especially in scholarly settings, does hold weight in my mind.

But as I said last night, I am moving on to other things from this 'debate', as it seems to keep shifting to arguments based on uses different from those in the original post, leading us both to keep saying the same things over and over, which is pointless for us all.

I do have one last thing though, see next post... :xf.smile:
 
NOW I finally get it! :xf.grin:

The best way to have an unbiased sample of what domain name sales data really is, is to each day go to the entire zone file for every darn domain name there is. Check if ownership has changed since the day before, and if it has, then query the new and old owners to see if it really sold and if so for how much. That would be great. It would have no bias. It would even allow us to predict other things, like maybe the same person has some other domains they are planning to sell soon.

You see, NameBio data has biases, like some venues don't even ever report to it! They don't report sales under $100 either! These influence the average price and extension data. If we looked at every last change in registration we would get better data. No doubt. Now I see it!

Therefore absolutely no value to NameBio, and I plan to stop using it (and W3Techs website use stuff too while I am at it). Maybe the NPs staff should automatically delete every post that mentions NameBio? That would be a job, but well, needs to be done!

Yeah, I know there is nothing better than NameBio for domain name sales, but hey, better to just be vague than to use something that might have some biases. Also, using no data at all makes some arguments easier. :-P

Thanks. Much clearer now, and without ever touching NameBio again I will have a lot more time in my day too! :xf.wink:

[ps This is an attempt at humour. :xf.smile: I am not really giving up NameBio. My point is that data sources where we are told the methodology, and that are widely used and publicly accessible, can be useful, ARE useful, as long as we are conscious of the methodology, limitations and potential biases. NameBio is so valuable because it is available to everyone, and they are open about how the data is obtained.]
 