
What Are The Odds? Applied Probability For Domain Investing

Many skills can help you make better decisions as a domain name investor. One of them is quantitative decision making. While probability can never tell you definitively what to do, it can inform better decisions in some cases.

Applied probability can help answer questions such as: How many domains will I probably sell this year? Should this domain name be renewed? Is it better to renew names in advance of a forthcoming price increase?

Basic Ideas

A probability is a numerical estimate of the chance of something happening. For example, if you roll a normal dice, unless it is rigged in some way, the probability of any particular number being rolled is 1 chance in 6.

Normally we express probability on a scale from 0 to 1, with 0 meaning there is no chance the result will happen, and 1 meaning it is sure to happen. In the dice example, the probability of each number would be about 0.167.

Sometimes probabilities are expressed as percentages, from 0 to 100%, rather than from 0 to 1.

If we go back to the dice example, if we roll it twice, the odds of getting a 2 followed by another 2 will be (1/6)*(1/6), or 1 chance in 36. That is because the two rolls, for a fair dice, are independent events.

It is important to realize that probabilities are not always independent, though. For example, the probability of a domain name in a certain sector selling and the probability of a high sales price are probably related: if demand for a niche or sector goes up, both the chance of a name selling and its likely price increase.

A probability value may also change with time. For example, the probability of a metaverse domain name selling 10 years ago was probably lower than it is this year, while some sectors that were hot a few years ago are much less active in domain name sales now.

If you want to read more about probability, the Wikipedia entry on probability is well written.

The Sell-Through Rate, STR

The sell-through rate (STR) is simply the number of sales during a 12-month period divided by the average number of names in your portfolio during that year.
[Image: STR.png]

If you have a substantial portfolio, and have been investing in domain names for a number of years, you can calculate your annual sell-through rate (STR) each year.

Many include only sales above some price point in calculating their STR.

EXAMPLE – STR Calculation:
Let's say someone sets $600 as the minimum price for their STR calculation. If they sold 5 domain names above that price during the 12-month period, from a portfolio that ranged from 400 to 600 domain names during the year, averaging 500 names, then the annual STR = 5/500 = 0.01, which is 1%.
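For those who like to keep the arithmetic in a script, here is a minimal Python sketch of the same calculation, using the figures from the example above (the $600 minimum is applied when you count the sales, not inside the function):

```python
def sell_through_rate(sales_count, average_portfolio_size):
    """Annual STR: sales in a 12-month period divided by the average portfolio size."""
    return sales_count / average_portfolio_size

# Figures from the example: 5 sales above the $600 minimum,
# from a portfolio that averaged 500 names over the year.
print(f"STR = {sell_through_rate(5, 500):.2%}")  # 1.00%
```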

The STR is not a probability, but the probability of selling a domain name during a year is expected to be similar to your annual STR, unless something has changed in your domain investing approach.

As a new investor, you will not have an established personal STR from previous years' records. One approach is to use the domain name selling universe as a whole, making the rough assumption that your performance might be equal to the average of all names listed for sale. That is relatively easy to estimate, as shown below.

EXAMPLE – Industry-Wide STR:
Let’s look just at .com domain names and the last 5 years to be representative of current conditions. I am going to set the minimum price at $1000 and the maximum at $25,000. See below for a justification on using a maximum.​
With the NameBio interface you can readily find the number of sales over the 60-month period, 75,400, and the average price, $3061. But not all sales are in NameBio, since sales at several popular venues are not generally included. If we assume that 25% of the sales of $1000 plus are in NameBio, we can apply an adjustment factor of 4x, making an estimate of 67,860 sales per year.​
Using Dofo Advanced search, setting to only .com and BIN prices of $1000 to $25,000, there were 11,936,067 domain names listed for sale. But not every domain for sale gets in Dofo listings, and also there are some for sale at make offer, or higher prices, that will end up selling within our price range. On the other hand, there are some old listings still on marketplaces that are not truly for sale. Considering all these factors, I applied a net 1.3x correction factor, obtaining 15,516,887 domain names for sale.​
Now we can calculate the industry-wide annual .com STR for sales of $1000 to $25,000 = 67,860 / 15,516,887 = 0.0044 = 0.44%. The average sales price was $3061.
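Here is a minimal Python sketch of that final division, taking the adjusted figures arrived at above as given (the coverage and listing corrections are the rough assumptions from the example, not hard data):

```python
annual_sales_estimate = 67_860        # estimated .com sales $1,000-$25,000 per year, after the NameBio coverage adjustment
listings_estimate = 11_936_067 * 1.3  # Dofo .com BIN listings $1,000-$25,000, with the net 1.3x correction

industry_str = annual_sales_estimate / listings_estimate
print(f"Industry-wide .com STR: {industry_str:.2%}")  # about 0.44%
```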

This does not mean that will be your STR. Perhaps you have acquired very strong names, or use effective promotion of your names, and achieve a higher STR. On the other hand, many starting out in domain investing probably have a lower STR than the average. Your personal STR may be higher or lower than the industry average.

Why did I introduce the upper cutoff of $25,000? It has almost no impact on the STR, but the upper price cutoff does significantly affect the average price. Those 6-figure and up sales have a huge impact on the average. But most domain investors, particularly those just starting out, may never have a sale in 6 figures. In fact, many will not even have domain names priced above $25,000, so I thought it best to introduce an upper cutoff price to obtain a more representative average sales price.

How Many Domains Will I Probably Sell This Year?

Some promote domain name investing as easy and fast. This leads to unreasonable expectations that names will sell quickly at a good return on investment. The reality, for almost all domain name investors, is far different.

Let's look at it numerically. If you have, say, 50 domain names, how many sales can you expect in your first year? It is a pretty easy calculation: simply the probability that any one name sells during a 12-month period times the size of your portfolio.
[Image: AnnualSales.png]

Since you don’t have an established personal STR yet, I will assume that you are ‘average’ and the probability of one of your names selling is equal to the industry-wide STR calculated above, 0.44%.

EXAMPLE – How Many Sales From A 50 Name Portfolio?
Number of sales in 12 months = probability of sale for one name x number of domain names in portfolio.​
Number of annual sales = 0.0044 x 50 = 0.22
This means an investor with a 50-name 'average' .com portfolio has roughly 1 chance in 5 of having sold a name by the end of one full year, or will, on average, sell about one name every 4 to 5 years.
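A small Python sketch of the same estimate; it also computes the closely related chance of making at least one sale, which is one minus the probability that every name fails to sell (this assumes each name sells independently with the same 0.44% probability):

```python
p_sale = 0.0044      # assumed per-name annual probability of sale (industry-wide STR from above)
portfolio_size = 50

expected_sales = p_sale * portfolio_size
p_at_least_one_sale = 1 - (1 - p_sale) ** portfolio_size

print(f"Expected sales in 12 months: {expected_sales:.2f}")        # 0.22
print(f"Chance of at least one sale: {p_at_least_one_sale:.1%}")   # roughly 20%, about 1 chance in 5
```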

Let me stress again this is the industry-wide average number. If there is an important message, it is that you must strive to be better than average to find success in domain name investing. Also, unless you are very lucky, a lot of patience will be needed. It is hard waiting those many months before your first sale.

It should be stressed that while it is interesting to see how your STR compares with the industry average, the true measure of success is whether you are profitable. Some have a personal STR well below the average, but sell at great prices and are profitable. Others will sell at a higher rate than the industry average STR, but at prices, or with acquisition costs, that make them unprofitable overall.

Should I Renew This Domain Name?

We all face the issue of which domain names to renew. If we think about the question quantitatively, renewing makes sense when the annual probability of sale for the domain name, multiplied by the expected net return from a sale, is higher than the annual cost of holding the name.
[Image: HoldName.png]

EXAMPLE – Should I Renew This Name?
Let's say the probability of sale of a particular domain name is 0.25%, a bit less than the industry average. We estimate that the net return from a sale, if it happens, based on pricing, expected commission, acquisition cost, etc., would be $1500.
0.0025 x $1500 = $3.75​
Therefore it would not make sense to renew this particular domain name, assuming that the renewal fee was $9.00.​
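The renewal rule above reduces to a one-line comparison. Here is a minimal Python sketch with the example's figures; the function name is just for illustration:

```python
def worth_renewing(p_sale, expected_net_return, renewal_fee):
    """Renew when the expected annual return (probability x net return) exceeds the holding cost."""
    return p_sale * expected_net_return > renewal_fee

expected_value = 0.0025 * 1500
print(f"Expected annual return: ${expected_value:.2f}")     # $3.75
print(worth_renewing(0.0025, 1500, 9.00))                   # False -> do not renew on these numbers
```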

Note that the net return is your gross sales price minus commissions and other costs, and minus your acquisition cost.

I have not included parking revenue in the calculation. If your domain name earns enough parking revenue to cover renewal costs, that alone of course justifies keeping it.

If you have a large portfolio, and have been successful in domaining for some years, you can estimate the values for a domain name with more confidence than if you are just starting out.

I covered some factors to consider when deciding to renew a domain name in How To Decide What Domain Names To Renew.

Renew In Advance?

Verisign is raising the wholesale price on .com by another 7% at the end of this month. Some investors are renewing domain names in advance of the price increase. Others argue that if the name sells that is essentially lost money, and only renew near expiration.

Let's look at the situation quantitatively, using the annual probability of sale of the name, p, expressed on the usual 0 to 1 scale. If the name does not sell before the period when the renewal kicks in, you have saved the difference between the current and increased renewal rates; on average, that saving is (1 - p) times the cost difference. However, there is a chance that the name will sell quickly, and the amount spent on the early renewal is wasted; in that case, the expected loss is the current renewal fee times p. I summarize the two calculations below.
[Image: RenewEarly.png]

EXAMPLE – Renew In Advance Of Price Increase
You hold a .com name that you have decided you will want to keep for more than one year if it does not sell. To get a representative cost, I took the 5th best current .com renewal rate at TLD-list, which turned out to be $9.13. With a 7% increase, we expect the new retail renewal to be about $9.77.​
Let's say it is a good name, and you think the probability of sale in any one year is 10%, much better than the industry average. So there is 1 chance in 10 that the name sells during the next year and the renewal you did in advance goes to waste. If we multiply $9.13 x 0.10 = $0.91, that is the probabilistic loss.
Now let's look at what probability suggests we save by renewing early. There is a probability of 0.90 that we will need the renewal because the name did not sell. We save $0.64 by renewing in advance, so the expected saving is $0.64 x 0.90 = $0.58.
Since the saving, for these numbers, is less than the $0.91 calculated earlier, for this high probability of sale domain name it does not make sense to renew in advance.​
However, if instead we look at a more typical annual probability of sale, say 1%, the results are different. In this case $9.13 x 0.01 = $0.09 is the average loss from renewing names like this in advance. The expected average saving is now $0.64 x 0.99 = $0.63. Renewing in advance is a no-brainer in this case, if we have the funds and we are sure we want to keep the name long term.
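Here is a minimal Python sketch of the comparison worked through above. It contrasts the expected loss from renewing a name early that then sells against the expected saving on a name that does not sell; the prices and probabilities are the example's:

```python
def renew_early_edge(p_sale, current_fee, increased_fee):
    """Expected saving minus expected loss from renewing one year in advance.
    A positive result means early renewal is favourable on average."""
    expected_loss = p_sale * current_fee                             # name sells; the early renewal is wasted
    expected_saving = (1 - p_sale) * (increased_fee - current_fee)   # name doesn't sell; you dodge the price increase
    return expected_saving - expected_loss

print(f"10% name: {renew_early_edge(0.10, 9.13, 9.77):+.2f}")   # negative -> don't renew early
print(f" 1% name: {renew_early_edge(0.01, 9.13, 9.77):+.2f}")   # positive -> renewing early saves money on average
```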

Note this line of thinking does not apply to domain names that you do not feel confident are high enough quality to hold long-term.

Also, even when the calculation shows we save money, on average, by renewing in advance of a price increase, only you can say whether you have the cash flow to act on it. Even if you do have the funds, you need to ask if this is the best use of that money. The funds you put into advance renewals will not be available for growing your portfolio through acquisitions, which might yield a better return.

Final Thoughts

There are lots of other examples of ways to apply probability ideas to domain name investing decisions.

For example, does it make sense to hold domain names with high premium renewal rates?

What about transferring a name to save on renewal costs, which may keep the name off fast-transfer networks for a period of time, depending on the registrar? Do the savings justify the period when your chance of a sale will be reduced?

An interesting case is for .co and some new extensions where you can get the first year at a discounted rate compared to the regular renewal. The probability argument for this case may suggest the name is worthwhile to hold for one year, but not long term at the higher annual cost.

Perhaps the most important case is using probability estimates to decide whether to accept an offer, or hold out for a higher price, but with some chance the name will never sell.

What about a name you have decided to drop? Does probability suggest it is better to keep the domain name listed at retail price right to the end, or to liquidate for some return?

I may take up some of these other applications in a future article.

I hope you will share interesting ways you use quantitative thinking in your domain name decisions.
 
•••
So who can calculate the odds of rolling a 3 at least once in 2 tries with a dice?
 
•••
So who can calculate the odds of rolling a 3 at least once in 2 tries with a dice?
I believe that would be 1-(5/6)^2 = 0.306 or about 31%

In words, 5/6 chance you did not get a 3 the first try, and 5/6 chance you did not on second try, so probability not getting it either of those, you multiply the independent probabilities. Since those were probabilities that you did not get the result, use the 1- subtraction to get probability you did.

Note that as you try more and more the chance does go up, getting really close to 1, but never would quite get there.
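For anyone who wants to check the numbers, a tiny Python sketch of the same 1 - (5/6)^n idea, for a few values of n:

```python
from fractions import Fraction

# Chance of rolling at least one 3 in n tries of a fair six-sided die.
for n in (2, 10, 100):
    p = 1 - Fraction(5, 6) ** n
    print(n, float(p))   # n=2 gives 11/36, about 0.306; the chance climbs toward 1 as n grows
```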

Don't ask me to apply it to a practical domain problem, however. That would be the tough question! :xf.grin:

Bob
 
•••
Interesting twist on the discussion into whether mini-development is profitable.

I would love to give statistics on mini-site development in some other article, if someone can point me to a reliable source of recent data on how many such sites exist, how many make various amounts per year.

I think the skill set to be efficient and successful at development is different than the skill set to be successful at buying and selling domain names. There are undoubtedly some who are interested, and good, at both, but not many I would guess.

It seems to me that it might well be possible to monetize to the degree that it is worthwhile, but I also agree with those who say without showing us specific examples of sites they are sceptical.

I think the best bet is if you hold domain names suited to development that are in an area you are already passionate and knowledgeable in, and as in anything, finding where there is a hole in existing offerings on the web.

Even then, one should not under-estimate the effort in maintaining a stream of relevant content that is adequately researched and edited to the degree it is professionally presented.

It is unfortunate that Google so control the success of such enterprises, and how quickly the rules can change.

Bob
 
•••
I believe that would be 1-(5/6)^2 = 0.306 or about 31%

In words, 5/6 chance you did not get a 3 the first try, and 5/6 chance you did not on second try, so probability not getting it either of those, you multiply the independent probabilities. Since those were probabilities that you did not get the result, use the 1- subtraction to get probability you did.

Note that as you try more and more the chance does go up, getting really close to 1, but never would quite get there.

Don't ask me to apply it to a practical domain problem, however. That would be the tough question! :xf.grin:

Bob

If you give someone that math question you are normally going to get a 1/3 answer as it seems logical at first glance, but I believe your math is correct.

The bottom line with my comments, though, is that the larger the sample size, the more of each outcome you are likely to get. If you roll a dice 10 times you might not get a 6 even once; it is certainly possible.

If you roll it 1000 times, it is basically a statistical impossibility, in practice, not to get a 6 many times.
That is more the business model of large portfolios.

Or you can think of them like fishing hooks: if you have one hook in the water you might not catch anything. If you have 1000, the probability goes up dramatically.

Brad
 
•••
Well then here's a question. What is the applied probability of the average domainer earning more profit from: Sales or development? Seeking calculations not opinions.
I think that while the question may seem simple, there are embedded things that would make all the difference. It mentions the 'average domainer'. On the buy/sell side, there is enough wiggle room in interpreting the numbers (e.g. how many sales go unreported) that a case can probably be made for anything from losing money to a very slight profit (if no account is taken of invested time) as the average.

I really don't know a source of numbers for development. They probably exist, I just don't know them. I suspect that, as in most things with a low entry bar, the majority end up making a small amount if they don't fully value their time, and nothing if they do. But I know you asked for numbers. If someone in the development community can point to relevant statistics, that would be helpful.

Bob
 
•••
I believe that would be 1-(5/6)^2 = 0.306 or about 31%

In words, 5/6 chance you did not get a 3 the first try, and 5/6 chance you did not on second try, so probability not getting it either of those, you multiply the independent probabilities. Since those were probabilities that you did not get the result, use the 1- subtraction to get probability you did.

Note that as you try more and more the chance does go up, getting really close to 1, but never would quite get there.

Don't ask me to apply it to a practical domain problem, however. That would be the tough question! :xf.grin:

Bob

You are absolutely right. That is the beauty of getting a handle on probabilities. Once you master them, you can navigate the world of seemingly independent events.

Yes, each domain sale probability is mostly independent of the others (there is, of course, macro-economic correlation, correlation within niches, etc.).

But... If you look at it as Bob showed above, suddenly it appears completely different.

You have 99% chance that your 1 domain won't sell this year (full year).

Now, what are the chances that you have 0 sales with 2 domains (assume the name quality and the pricing result in a 1% STR for each)? (0.99)^2 = 98.01%. So now your chance of making at least one sale is 1.99%.

What if you have 100 names? (0.99)^100 = 36.6%. So with 100 names you are almost twice as likely (1 - 36.6% = 63.4%) to sell at least 1 name as you are to end up with 0.

If you have 300 names? Just 5% chance you won't make a sale in the year. Basically, with 300 decent names you are almost certain to make a sale.

And if you have 10,000 good and reasonably priced names, chances are you'll end up making around 100 sales (I'd estimate that you have an 80% chance of being within 80 to 120 sales and a 95% chance of being within 70 to 130. For exact numbers on the probability ranges, simulations would need to be run).
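A quick Python sketch of the kind of simulation mentioned above: it simulates many 10,000-name years at a 1% per-name chance of sale and reports how often the yearly total lands in those two ranges. The trial count is arbitrary.

```python
import random

def sales_in_one_year(n_names=10_000, p_sale=0.01):
    """Simulated number of sales in a year, treating each name as an independent 1% chance."""
    return sum(random.random() < p_sale for _ in range(n_names))

trials = 1_000
results = [sales_in_one_year() for _ in range(trials)]

print(f"Average sales per simulated year: {sum(results) / trials:.1f}")
print(f"Share of years with 80-120 sales: {sum(80 <= r <= 120 for r in results) / trials:.1%}")
print(f"Share of years with 70-130 sales: {sum(70 <= r <= 130 for r in results) / trials:.1%}")
```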
 
•••

That is useful information, especially when it comes to the mechanics of how larger portfolios operate.

Unless you have all terrible domains, ridiculous prices, or no promotion whatsoever you are highly likely to make many sales with a large enough portfolio.

Brad
 
•••
Thank you for your comments, @poweredbyme.

I agree that each domain name is unique. That is also true of all sorts of things that we apply probability to. Each sports event is unique to varying degrees, but probability is applied in establishing betting odds. All sorts of specialized things are unique, but insurers use statistics from somewhat similar events to, on average, estimate odds of various outcomes. Same when a doctor tells a patient under informed consent the chances of different outcomes from a procedure. Each person and operation is unique. But the doctor can tell me, based on people with somewhat similar physical characteristics and over many operations by different surgeons, what my odds are.

Let's look at startups. Each is unique: it has a different idea, different people, a different competitive landscape. One can look at a collection of many startups to see typical failure rates. That could be used to estimate the fraction of investments in startups that will pay off. Probability will not let you predict with certainty which startups will succeed, but, if done carefully, it will give an approximate number of how many will fail if you invest in 100 different startups.

Probability never tells you the precise outcome for any one event. It will never tell you voice.com will sell for $30 million, or if any one particular name will sell this week, this year, this decade or never.

As @bmugford replied above, the key idea is having a sufficiently large portfolio to apply probabilistic ideas to. The degree to which the portfolio is similar to the one on which the probability is calculated will determine how much, or little, confidence we have in the result.

For example, if only 3 single-letter .com were grandfathered, and you owned all 3, probability based on other domain sales probably would not be very helpful. And even in the best of times, it will not tell you the result for individual domain names. Simply because as you say, each name, and each seller and buyer, is unique.

Without similarities/correlation/relation between sequential events we can't calculate the probability of the next similar event(s). I will try to explain the reason below. I don't think there is similarity/relation between 2 or more domain sales. Each domain sale is statistically independent, in my opinion. We may observe some sort of dependencies if we sell many domains, but that would be a fallacy, because even if there is a relation between 2 or more sales, it is not enough to be sure. For example, if one observes the sunrise 10,000 times, s/he may find relations between the sunrise and many events that are unknown as of now. People who make many domain sales may notice some similarities, but that would be personal knowledge, the result of personal experience. I mean another person cannot experience the exact same events. Hence, another person cannot have that knowledge even if the number of sales of those 2 sellers is exactly the same.

Assuming that the dice is 'fair' it is indeed 1/36 and it is because the second event does not depend on the first. The chance of rolling that first 2 is 1/6. The chance when we roll again does not depend on the first roll result, so it too has a chance of 1/6 of being a 2. Independent probabilities are multiplied to establish an overall probability.

Wikipedia explains the Gambler's Fallacy like this: "the incorrect belief that, if a particular event occurs more frequently than normal during the past, it is less likely to happen in the future (or vice versa)". So someone with the gambler's fallacy would say, well, the last time I did not roll a 2, so there is a better chance of rolling a 2 this time. That is not true. Each time the chance is the same, 1/6, and by the multiplication rule for probabilities the chance that we roll a 2 the first time and a 2 the second time is (1/6)*(1/6).

Khan Academy has a good explanation of the multiplication rule for independent probabilities.
https://www.khanacademy.org/math/ap...iplication-rule/a/general-multiplication-rule

Bob

If the probability were 1/36, one would roll a 2 for the second time before the 37th roll. Similarly, the probability of rolling any one of the 6 possibilities of a dice on the first roll is not 1/6. That 1/6 possibility is hypothetical. In real-world experience, that outcome would occur with a lower or higher frequency than 1/6; exactly 1/6 would almost never happen. Those probabilities, probabilities between unrelated events, are in fact unknown. If one claims that the 1/6 and 1/36 probabilities are correct, s/he would easily fall into the gambler's fallacy. Then why do we calculate 1/6 and 1/36 if they are wrong? Because those calculations are hypothetically correct in a perfect world, aka ceteris paribus, but incorrect in the real world.
 
•••
You have 99% chance that your 1 domain won't sell this year (full year).

Now, what are the chances that you have 0 sales with 2 domains (assume the name quality and the pricing result in a 1% STR for each)? (0.99)^2 = 98.01%. So now your chance of making at least one sale is 1.99%.

What if you have 100 names? (0.99)^100 = 36.6%. So with 100 names you are almost twice as likely (1 - 36.6% = 63.4%) to sell at least 1 name as you are to end up with 0.

If you have 300 names? Just 5% chance you won't make a sale in the year. Basically, with 300 decent names you are almost certain to make a sale.
Really nice way to think about it, @Recons.Com.
I wish I had thought of a presentation along those lines as part of the original article. :oops:
Thank you so much for your logical and quantitative contributions on this topic, and clearly explained as well.
-Bob
 
•••
@redemo
It would be nice if you can share some of your 5 websites which are successfully making consistent revenue. This will give all of us a better picture of what you're saying and what it really looks like after going through your sites.
Thanks!
 
•••
@redemo
It would be nice if you can share some of your 5 websites which are successfully making consistent revenue. This will give all of us a better picture of what you're saying and what it really looks like after going through your sites.
Thanks!
Hi

sorry to interrupt this thread, but since an interruption was thrown in by another, then here is a previous answer given by the interrupter, to above question.
don't know if anything has been revealed since then.

https://www.namepros.com/threads/how-to-develop-domain-names-for-profit.1242996/page-2#post-8345348

as for probabilities,
if you only own 4 letter.com and 3 letter.org and some two-word.com, then probabilities of a sale are higher than having a wider variety of domains in mix of extensions.

but as always, it all depends on what you're holding.
such things like news events, new products, services etc could all change the probability.

imo...
 
•••
Hi

sorry to interrupt this thread, but since an interruption was thrown in by another, then here is a previous answer given by the interrupter, to above question.
don't know if anything has been revealed since then.

https://www.namepros.com/threads/how-to-develop-domain-names-for-profit.1242996/page-2#post-8345348

as for probabilities,
if you only own 4 letter.com and 3 letter.org and some two-word.com, then probabilities of a sale are higher than having a wider variety of domains in mix of extensions.

but as always, it all depends on what you're holding.
such things like news events, new products, services etc could all change the probability.

imo...

It's not of much use, then, for redemo to share all that without sharing the domain names.
 
•••
A probability is a numerical estimate of the chance of something happening.
Idea. Let's take a random domain name for sale at $100 and calculate the mathematical probability of earning $1000 over a year from selling, parking, developing with adverts or developing and selling. @Bob Hawkes you choose the domain name. I'll do the maths.
 
•••
STR is simply the number of sales during a 12 month period divided by the average number of names in your portfolio during that year

Let’s look just at .com domain names and the last 5 years to be representative of current conditions.

Now we can calculate the industry-wide annual .com STR for sales of $1000 to $25,000 = 67,860 / 15,516,887 = 0.0044 = 0.44%

I think there is some confusion here.

If STR is annual, why do we calculate it over 5-year data?

Moreover, we cannot reach a correct result with assumptions such as "Let's estimate that 25% is reflected in the market and multiply it by 4".

It is also a mystery whether Dofo's database records are correct, and how many years those 12 million .com domains have been renewed.

I think STR depends entirely on one's personal success and luck; while this rate is 2-3% for some, it may remain at 0% for others.
It's hard to pin this to a market-wide average.
 
•••
There's always going to be a margin of error in these types of calculations, but I think @Bob Hawkes has created a very useful topic for debate here. The more mathematical our decisions as domainers become, the more success the average domainer is likely to have. I don't treat a 1% or 2% S.T.R. as a guide, because there are too many factors: different keywords, different extensions, different global events and trends; the list is endless. Pretty sure a top domainer can have 100 domain names and easily sell 100% in one year, while another less experienced domainer can have 1000 domain names and not sell any in a year. Thanks for sharing these percentages, but I think more information is needed to make them relevant. I think these probability calculations are better suited to parked or developed domains, where you can have X traffic and X cost of advert action = X income for the domainer. Some might disagree. That's democracy.
 
•••
Important debate, even if we disagree on most points, and highly relevant to applied probability of profit from domain name investing.

Is buying and selling cars so easy? No. Do car traders make profits? Yes. Is mowing lawns so easy? No. Do gardeners make a profit? Yes. Is collecting cans all day so easy? No. Do can collectors make profits? Yes. I could go on. You're jumping from making a $90 profit over 365 days to being able to afford a Porsche 911 and it just doesn't sit well with the theme of this discussion. I'm not an expert, or a millionaire, just saying it works. You're saying it doesn't work. I'm simply disagreeing with you, but trying to stay on point.

You could eat an apple through a letterbox, what's your point?

Yes, I'm doing that. No, I'm not sharing domain names. I've had at least 5 messages from established members saying sharing domain names is a bad idea. I took their advice. You believe what you want. Again, you're saying it doesn't work and I'm saying it works.

I've answered every one of your points individually, out of courtesy and respect for the debate. That's called being very specific.

Well that's not going to happen so you may as well stop asking. I could give you examples from any industry, but you haven't asked for an example yet?

Well now you're jumping again. Share one single message public or private where I have pitched anything from which I stand to make personal gain. A single post. One sentence. You can't just make things up and expect to not be challenged.

End of the day, all I'm doing is trying to get a conversation going about how the average John Doe or Mary Jane domainer can end the misery and make some profit. This actually benefits you because a domainer might buy a domain name off you and develop it. Have you considered that?
I have done a bit of reading in the niche development groups. You never share your sites because a fellow developer can easily copy it, see where it’s ranking, write slightly better articles, look at the back link profile and try and gain the same back links and then all of this would (probably) put the new site at a higher ranking.
 
•••
I have done a bit of reading in the niche development groups. You never share your sites because a fellow developer can easily copy it, see where it’s ranking, write slightly better articles, look at the back link profile and try and gain the same back links and then all of this would (probably) put the new site at a higher ranking.
Spot on mate. Having a nice day?
 
•••
Personally, I do not track STR as described, for my portfolio. But what I do keep track of per domain sold, in addition to basic things like "sales price", "total sales amount" and "profit", is hold time, and number of days between consecutive sales. These numbers improve over the years, and tell me whether I'm on the right track.

For example, for my sold domains this year (2022, unfinished), the average number of days between consecutive sales is 17, and average hold time is 2.93 years. For 2021: 21 days between sales (avg) and 6.91 years hold time (avg). For 2020: 51 days between sales (avg) and 6.66 years hold time (avg).

My challenge is to further improve these numbers.
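For anyone who wants to track the same two numbers, here is a small Python sketch; the dates below are invented purely for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical sales log: (acquired, sold) dates, in order of sale.
sales = [
    (date(2015, 3, 1), date(2022, 1, 10)),
    (date(2019, 6, 15), date(2022, 2, 2)),
    (date(2020, 11, 5), date(2022, 2, 20)),
]

avg_hold_years = mean((sold - acquired).days / 365.25 for acquired, sold in sales)
gaps = [(later[1] - earlier[1]).days for earlier, later in zip(sales, sales[1:])]

print(f"Average hold time: {avg_hold_years:.2f} years")
print(f"Average days between consecutive sales: {mean(gaps):.0f}")
```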
 
•••
Personally, I do not track STR as described, for my portfolio. But what I do keep track of per domain sold, in addition to basic things like "sales price", "total sales amount" and "profit", is hold time, and number of days between consecutive sales. These numbers improve over the years, and tell me whether I'm on the right track.

For example, for my sold domains this year (2022, unfinished), the average number of days between consecutive sales is 17, and average hold time is 2.93 years. For 2021: 21 days between sales (avg) and 6.91 years hold time (avg). For 2020: 51 days between sales (avg) and 6.66 years hold time (avg).

My challenge is to further improve these numbers.
Hi

nice post!

to meet the challenge and further improve those numbers -
then one would have to sell domains more frequently to reduce number of days with no sales...
while also continue "replenishing.com" the portfolio with new acquisitions.

if that is or part of the challenge, and considering the increasing cost and competition, to acquire quality domains in aftermarket...
how would or do you, see that aspect as an additional hurdle or is it inconsequential?

Thanks

puff, puff...


imo...
 
•••
Hi

nice post!

to meet the challenge and further improve those numbers -
then one would have to sell domains more frequently to reduce number of days with no sales...
while also continue "replenishing.com" the portfolio with new acquisitions.

if that is or part of the challenge, and considering the increasing cost and competition, to acquire quality domains in aftermarket...
how would or do you, see that aspect as an additional hurdle or is it inconsequential?

Thanks

puff, puff...


imo...
Thanks @biggie!

In my situation, it's because I've gained more experience over the years and also because I've focused entirely on domain investing in recent years. Furthermore, I am stricter with removing bad domains from my portfolio.

In terms of new registrations in recent years --these are mostly hand-regs-- I have developed a method that allows me to make domain discoveries relatively easily, based on prefix and suffix word lists. This is done in a traditional bash script that does the magic. Eventually, a large CSV is created from which I select the most interesting domains for registration. One day I check domains of up to 7 characters, another day only 8, 9, or 10 characters, etc. With these combinations I usually fish in the same pond as the existing portfolios of NameFind, BD and HD. After all, they can't have everything (or can they? :xf.wink:)
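The prefix/suffix idea is easy to sketch in a few lines; the word lists and length cut-off below are invented for illustration (the actual script is described as bash, but the approach translates directly):

```python
from itertools import product

# Hypothetical prefix and suffix word lists; real lists would be much longer.
prefixes = ["smart", "cloud", "eco"]
suffixes = ["pay", "desk", "hub"]

candidates = sorted(
    prefix + suffix
    for prefix, suffix in product(prefixes, suffixes)
    if len(prefix + suffix) <= 8   # e.g. only keep names of 8 characters or fewer
)
print("\n".join(candidates))       # feed these into whatever availability check you use
```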

I've bought domains in the aftermarket in the past through BIN pricing, but I'm glad I don't participate in domain auctions at all. It would give me a lot of stress, especially with the increasing cost and competition, as you indicate.
 
•••
If readers of this post are looking to get into probability at a much deeper level, the author of this university-level textbook on probability for data science allows you to download it absolutely free. Before the download it will ask where you are and what course it is for but, as the instructor notes, just say self-study if you are using it outside a course. They are simply trying to track how widely it is used, and where. I am only in the early part, but it starts with the basic ideas of probability, then takes you much deeper.

The author, Stanley H. Chan, teaches at Purdue.

-Bob

https://probability4datascience.com/
 
•••
Let me summarize probability theory. In full generality, we are working on a set X with a measure; every measurable set has nonnegative measure, and the measure of the whole set X is 1. In finite sets it is easy: there are measurable sets, unions and complements (and so intersections) of measurable sets are also measurable by definition of a measure, and the measure of a union of disjoint measurable sets is the sum of their measures (additivity). In infinite sets the requirement is the same, except that instead of finite additivity we have additivity over a countable number of sets (the smallest infinite cardinal; using larger ones doesn't even make sense in this case), not just a finite number of sets.

Example: throw 2 dice. There are 36 possible outcomes, and by definition each outcome has probability 1/36.
Probabilities of other events are then defined by addition: the probability of getting the same number on both dice is 6 x 1/36 = 1/6.

Toss a coin an infinite number of times. What should the set of possibilities be? We have H and T, each with probability 1/2 (we should define it that way). We need to define a measure on the set of sequences of H, T like HTHHTTHTHHHT... The probability of any such singleton event must be zero, because it is 1/2 x 1/2 x ... = 0.
Why multiply? Because we want the tossings to be independent of each other. Each (cylindrical) event like "the third one is Tails AND the fifth one is Heads" should be measurable, and its probability is defined as above by multiplication.
What are the other measurable sets? Consider the Borel sets (the smallest sigma-algebra) containing such cylindrical sets; a unique measure can be defined on such sets by extending the definition above.
We don't need to understand exactly what Borel sets are, only that such a construction exists, and we can still make computations.

(The number of Borel sets is smaller than the number of all subsets (no 1-1 correspondence is possible in this case), so there are unmeasurable sets, but we can't explicitly describe them; everything we can directly define is measurable. So don't worry about encountering an unmeasurable set whose measure you have to compute.)

In the above model, replace H with 0 and T with 1, so we are working on sequences like 1001111010100... Then the probability that the average of the first n numbers tends to 1/2 (as n goes to infinity) is 1; intuitively one might think the probability of having a limit less than 1/2 should be 1/2, while it is actually zero. This is called the law of large numbers.
It is related to the Gauss/Normal curve, but you don't need that to prove it. Combination numbers like n choose k (pick k balls out of n balls) are concentrated near k = n/2, and you should expect a deviation on the order of the square root of n. This is clear from the central limit theorem, but can be shown directly in this case.

Random variable: a measurable real-valued function on a given probability space X. Measurable function means, in the measure theory sense, that the inverse image of an interval is measurable. For example, the event that a random variable f takes values in the interval (3,8) should be a measurable subset of X.
So you can talk about the percentage of people who are 5 feet to 6 feet tall. We can't allow such sets to be unmeasurable. (By the way, uncomputable doesn't mean unmeasurable, of course.)

("the same logic" appears in the whole math: Inverse image of a good set must be good, in measure theory, good=measurable. Also target space tend to be more important than the "domain". )

Independent random variables: f, g are independent if, for each pair of intervals (a,b) and (c,d), the probability of the event that f takes a value in (a,b) and g takes a value in (c,d) (so this is the intersection of two subsets of X) is equal to the probability that f takes a value in (a,b) TIMES the probability that g takes a value in (c,d) (the product of the probabilities of the sets we were intersecting above).
This is very intuitive; just look at the coin-tossing or dice-throwing case. We make the definitions so that things come out independent (for example, the value of the first coin and the value of the second coin must be independent of each other).

We usually don't pay much attention to the domain X, because we only care about the distributions, on the real line, of the random variables f: X --> R we are interested in, and of course their joint distributions (we may need to expand the space where the random variables are defined to accommodate several random variables at the same time).

Next, define the expectation E(.) of a random variable. It is the weighted sum of the possible outcomes in the finite case. In infinite spaces (or more generally), it is just a Lebesgue integral giving the same result in the finite case. The definition is easy once you know the basics of measure theory, but you can still intuitively know how to find expectations without knowing all the foundations. Expectation is like a center of mass (or the moment that defines it), so it is a linear thing as a formula (for a given random variable f, it is just a number E(f)).

Variance: how much a random variable deviates from its expected value. It is defined as the weighted average of the squares of the deviations, and in the general case it is an integral as above. It is a quadratic formula, namely Var(f) = E((f - E(f))^2).

Although variance is quadratic, the variance of a sum of independent random variables is the sum of their variances. For example, if f and g are independent, Var(f+g) = Var(f) + Var(g), but Var(f+f) = 4 Var(f), NOT 2 Var(f).
When things are nice enough, you get the CENTRAL LIMIT THEOREM for sums (or averages) of independent random variables. This means we have the Gaussian/Normal curve as the precise "weak limit" (limit in the sense of distribution).

Example: consider the space X of tossing a coin. We have H or T as the outcome, each with probability 1/2. Toss again: everything is the same, but we want these tossings to be independent. So we can't work on X, because no event on X has probability 1/4; we have 0, 1/2, and 1 as the possible probabilities (measures). We have 2 random variables "on X", each having the same properties on its own, but jointly we want to make them independent. How? Easy: use the product space X x X. It has a natural measure coming from X. f and g are redefined on X x X by ignoring the second or the first coordinate respectively (f(x, y) = f(x); by abuse of notation we use f for "the same" random variable on different spaces). When we create products of measure spaces we need Fubini's theorem, but for most people knowing the existence of the product space with its basic property that m(A x B) = m(A) x m(B), without proof, is more than enough.

Correlation coefficient: E((f - E(f)) · (g - E(g))) / sqrt(Var(f) · Var(g)). By the Cauchy-Schwarz inequality this is a number between -1 and 1. For independent random variables we get zero. For identical or (positively) proportional ones we get 1 (highly correlated). The arccosine of this number can naturally be considered as the angle between these random variables. In other words, we are defining a (semi-)inner product on the space of random variables, and this correlation coefficient is the cosine of the angle between two such random variables (defined unless f or g is constant almost everywhere, so as not to divide by zero).

Let me add Brownian motion / random walk to this list. It is about random movements. Someone goes one unit to the right or left at random. How far would he get after n steps? About the square root of n. Of course he can end up at +n or -n, but the probability that he arrives somewhere between a·sqrt(n) and b·sqrt(n) after n steps approaches, in the limit, the integral from a to b of c·exp(-d·x^2) (c, d some positive constants).
In financial markets, charts seem to move like Brownian motion. They are not smooth; they are like fractals (not exactly, because of nonlinear scaling). They move up and down sharply even in very small intervals, but this movement tends to be about as large as the square root of the size of the interval, not much smaller, not much bigger, in general.
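To make the last point concrete, here is a small Python sketch that simulates simple +1/-1 random walks and compares the typical end distance after n steps with the square root of n; the step counts and trial count are arbitrary:

```python
import random
from statistics import mean

def end_distance(n_steps):
    """Absolute end position of a simple +1/-1 random walk after n_steps."""
    return abs(sum(random.choice((-1, 1)) for _ in range(n_steps)))

for n in (100, 1_000, 10_000):
    average_distance = mean(end_distance(n) for _ in range(500))
    print(f"n={n:>6}: average |end position| = {average_distance:6.1f}, sqrt(n) = {n ** 0.5:6.1f}")
```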
 
•••

In short, you would need to toss a coin an infinite number of times to say the probability is 1/2. But in real life, everything measurable/countable is finite. Infinity is added to calculations as a hypothetical thing, an assumption. We know we can't, but we do it because we need to; we need to add the 'infinity' idea to the calculation. That's math.

In real life, the probabilities of independent events are unknown, at least to me and other economists. You mentioned some math studies, theories and names of some mathematicians above. But those studies and theories have no use, at least in economics/business. The reason is simple: you can't toss a coin an infinite number of times to prove its probability is 1/2. In real life, you can't know the probability of independent events. Furthermore, you can't know for sure which events are independent or dependent. Additionally, some events can be partially independent; some events may shift from being more independent to more dependent, or from 100% dependent to 100% independent, and vice versa. Why and how? In real life, there are factors involved in events, and those factors can change everything. In math and statistics, we assume those factors do not exist, ceteris paribus.

In real life, nothing measurable/countable can be 100% independent or 100% dependent. That's just another wrong assumption (ceteris paribus) in math or statistics. In fact, nothing is even perfectly countable. Nothing is perfect in real life, but everything is perfect in math and statistics. We classify nouns and financial assets as tangible or intangible, but we do it imperfectly. So the probabilities of independent events are unknown. If one claims the probability of independent events is known for sure, that claim would be the gambler's fallacy.

Your post is the most informative one on this topic. Thanks a lot, or thanks 100 times :) Can thanking be counted? We can't know for sure.
 
•••