What Is True And What Is Not?

How do we have confidence that something is true? That is a critical question in all things, including the domain world.

Last week in the NamePros Blog, I shared my experience with a new AI-powered appraisal tool at OceanfrontDomains.com. That tool does an incredible job of analyzing the structure of a name, brainstorming potential uses, and presenting the case for the value of the domain name. However, it has become increasingly clear that many of the comparator sales did not exist. I’ve made an update to that article to better reflect the frequency of errors. There is still much to like in the tool, but the level of comparator sales hallucinations in the current release is unsettling.

This article uses 'lie' simply to mean "something that is not true," with no implication that there was an intent to deceive, the so-called non-deceptionist viewpoint.

AI Can Lie Convincingly

One problem is that AI can lie really convincingly.

With OceanfrontDomains, you can add conditions to the name in the prompt. If you tell it not to include comparator sales, it won't. So I tried this prompt: duet.cc only include reliable comparator sales with NameBio references.

At first glance, it did follow my directive. The comparator sales were all in the same extension, of similar length, and, most importantly, all gave a NameBio price and date.

Just to be sure, I went to NameBio to check the data. The first comparator sale suggested was love in the .cc extension, selling for $13,500 in 2017. But there is no record of that sale on NameBio! Second on the list, it told me hero sold for $5,000, also in 2017. When I checked on NameBio, there is no record of that name ever selling in the .cc extension. Next it suggested that star sold for $6,000 in 2018, but it didn't, at least according to NameBio. The next comparator was eco, not a terribly apt comparator for my original query word duet, but that doesn't matter, since there is no sale listing on NameBio. The last comparator name, bet, does have a sale listed on NameBio! But OceanfrontDomains told me it sold for $6,000 with a 2018 NameBio listing, whereas it really has a NameBio-listed sale at $2,500 in 2011. So four of the comparator sales were not listed at all, and the fifth gave data inconsistent with NameBio. I found similar results for a few other names that I checked.

Update: In continued testing over the past two days, it appears that OceanfrontDomains appraisals no longer list any comparator sales as sourced from NameBio. This observation is based on a relatively small number of names rechecked, but it appears that none now claim NameBio as the source.

But it is so convincing. The sales prices the tool suggested are believable, and it gives a specific NameBio reference for each. Surely no one would lie about that, since it is so easily checked. But it did lie. Over and over. The danger in AI tools, not just in domains but in everything, is that they can lie so convincingly.

By the way, each run produces a somewhat different result, which is to be expected with AI models, so your results may not be identical to the ones reported above.

AI Lies A Lot – Across All Models

I sought more information on hallucinations, one term for information made up by AI tools. There are many articles that qualitatively state that AI frequently hallucinates, but I wanted to find an actual recent study that looked at multiple AI models.

I recommend that anyone interested in this topic read AI Hallucination: Comparison of the Most Popular LLMs (’25) by Cem Dilmegani at AIMultiple.com. That article defines hallucinations:
Hallucinations happen when an LLM produces information that seems real but is either completely made up or factually inaccurate.

The study investigated AI hallucination rates across 13 different LLMs, including versions of GPT, Claude, Grok, Llama, DeepSeek, Gemini and others. It found that hallucination rates ranged from 15% to 57%, with GPT 4.5 and Grok 3 among the best, and Gemini and GPT 4o the worst.

Even the ‘best’ LLMs had a disconcerting frequency of hallucinations. Note this was a small research study, limited to 60 questions with each LLM, and using one type of information resource.

The article also discusses the risks associated with hallucinations and how they come about, as well as steps for mitigation.

Better Accuracy Through RAG Tools

In researching this topic I learned a new term: Retrieval-Augmented Generation (RAG). In a different article, Best RAG tools: Embedding Models, Libraries and Frameworks, Cem Dilmegani writes:
Retrieval-Augmented Generation (RAG) is an AI method that improves large language model (LLM) responses by using external information sources. RAG provides current, reliable facts and lets users trace their origins, boosting transparency and trust in AI.
In the domain context, a RAG system might provide verifiable domain sales data.
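To make that concrete, here is a minimal sketch of the idea, entirely my own and not taken from either article: a hypothetical table of verified sales stands in for a trusted data source, and a placeholder ask_llm call stands in for whatever model is used. The point is that the model is only allowed to reason over records retrieved from the verified source, so every comparator it cites can be traced back to a real record.

```python
# Minimal RAG sketch (hypothetical data; ask_llm is a placeholder, not a real API).
# The model is only shown sales retrieved from a verified store, so every
# comparator it mentions can be traced back to an actual record.

VERIFIED_SALES = [  # imagine this loaded from a trusted sales database
    {"domain": "bet.cc", "price": 2500, "year": 2011, "source": "verified-db"},
    # ... more verified records ...
]

def retrieve_comparators(tld: str) -> list[dict]:
    """The 'retrieval' step: return only verified sales in the same extension."""
    return [s for s in VERIFIED_SALES if s["domain"].endswith("." + tld)]

def build_prompt(domain: str, records: list[dict]) -> str:
    """The 'augmentation' step: give the model the retrieved facts, and nothing else."""
    facts = "\n".join(f"- {r['domain']} sold for ${r['price']} in {r['year']}" for r in records)
    return (
        f"Appraise {domain}. Use ONLY the comparator sales listed below; "
        f"if none are relevant, say so rather than inventing sales.\n{facts}"
    )

prompt = build_prompt("duet.cc", retrieve_comparators("cc"))
# response = ask_llm(prompt)   # ask_llm is a placeholder for any LLM call
```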

Agentic Systems and Hallucinations

The high rate of hallucinations is of particular concern in the move toward autonomous or semi-autonomous agentic systems. Let’s say you have an agentic system ‘operate’ a retail business, making decisions on product lines, pricing, inventory, supply chains, marketing and more.

If part of the system has serious hallucinations, such as making up sales data for particular merchandise lines, disastrous results are possible. The move to agentic systems needs to be slow, with attention to robust systems to minimize and mitigate hallucinations in data.

The NamePros Blog covered agentic systems in the article Agent, Agentic and More: Domain Name Investment Opportunities.

For domain investors, these concerns can also lead to opportunity, though. Will there be demand for domain names suited to verification and accuracy in agentic systems or in AI more generally?

As noted in AI Hallucination: Comparison of the Most Popular LLMs (’25), hallucinations are a particular concern when AI is applied in critical systems such as healthcare, legal, and financial sectors, among others.

What Can We Learn From Science?

Most of my career before domains was in science, and I think there are lessons in validation and trust in results from science that could be applied in domains.

Research Details
In science the details of the experiment or research study must be included in the paper. That makes sure that we are clear on exactly what was found, but it also allows another group to replicate the experiment. Sometimes in domain names we are told in a vague way that an experiment supports some result, or shown results without details of the study, or even the actual numbers.

Statistical Significance
It is easy to be fooled by a result that is really nothing more than noise. I might tell you that my sell-through rate doubled when I did something. But without knowing whether that was going from 1 sale per year to 2, or from 500 to 1000, or how irregular my sales normally are, the statement means nothing.
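As a rough illustration (my own sketch, not from any cited study, and it assumes the SciPy library), a standard exact binomial test makes the point: under the "nothing changed" assumption, this year's sales should be about half of the two-year total, and the p-value tells you how surprising the split is.

```python
# Rough illustration: compare two yearly sale counts with an exact binomial test.
# Under the "nothing changed" hypothesis, this year's share of the combined
# total should be about 50%; a small p-value suggests a real change.
from scipy.stats import binomtest

def rate_change_pvalue(last_year: int, this_year: int) -> float:
    total = last_year + this_year
    return binomtest(this_year, total, p=0.5).pvalue

print(rate_change_pvalue(1, 2))       # 1 -> 2 sales: p is about 1.0, indistinguishable from noise
print(rate_change_pvalue(500, 1000))  # 500 -> 1000 sales: p is tiny, almost certainly a real change
```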

Peer Review
The heart of scientific validation is peer review. That simply means that prior to a result being published, several researchers who are expert in the field, but without affiliation to the authors, have carefully reviewed the results. While peer review can, and does, make mistakes, it is a critical component of validation.

Replication
Anything important in science will be replicated by multiple independent groups. There is competition in almost any niche, and that is good. Results that do not stand the test of being replicated by others will no longer have status.

Discussion
Almost all papers have a section called Discussion. That is the place where the significance and implications of the work are laid out, but it also includes a balanced look at how the research relates to other results, limitations in the research done, and ideas for next steps. I think we could benefit from full discussion commentary on domain experiments.

Community Review
Following peer review and acceptance, the study is published and becomes part of the scientific record. Journals properly guard their reputation, making sure to publish only deserving contributions to knowledge. Yes, sometimes things slip through, but most of the time, quality work gets published. Contrast this to some 'theory' widely shared on social media, possibly starting from a coincidence in noisy data or a faulty assumption.

Share Views
Most scientific results get discussed at scholarly meetings, both formally in paper presentations and informally at the event. It would be wonderful if the naming conferences added a component specifically for discussion of research at a scholarly level.

NamePros Role

While it is not a true scholarly mechanism in the academic sense, the NamePros community plays an important role in pressing for details, evaluating the significance of claims, sharing results, and discussion. We each play our part in making sure that happens.

Insist Multiple AI Sources Agree

A key part of the science validation process outlined above is that multiple routes support a finding. That is everything from details allowing replication of research studies, to multiple peer reviews supporting publication, to the broader community discussion processes.

I am surprised, given the high hallucination rates, that we do not insist that any AI result be supported by multiple independent paths. For example, if we had two different and independent LLMs each suggesting the same comparator sales, that would give us more confidence. It would seem easy to do this: have an agent that consulted two different AI environments and required consistency in order to include a result, along the lines of the sketch below.
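This is only a sketch of what I mean, with ask_model_a and ask_model_b as placeholders for two independent LLM services (they are not real APIs): it keeps only those comparator sales that both models report, with roughly matching prices.

```python
# Sketch of a consistency check across two independent models (placeholders only).
# ask_model_a / ask_model_b would call two different LLM providers and return
# claimed comparator sales as {domain: price} dictionaries.

def consistent_comparators(ask_model_a, ask_model_b, prompt: str) -> dict:
    sales_a = ask_model_a(prompt)
    sales_b = ask_model_b(prompt)
    agreed = {}
    for domain, price in sales_a.items():
        other = sales_b.get(domain)
        # keep only sales both models report, with prices within 10% of each other
        if other is not None and abs(price - other) <= 0.1 * price:
            agreed[domain] = price
    return agreed
```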

For that matter, would it not be trivial for a different AI tool to check the verifiable data? For example, if sales at a venue are listed, check that the data is correct. Could not an AI agent perform the check I did on comparator sales?
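Mechanically it seems straightforward. As far as I know NameBio does not offer a public API, so the lookup_sale function below is purely hypothetical, but an agent could run a check like this against whatever verified sales source it has access to before showing comparators to a user.

```python
# Hypothetical verification step: confirm each claimed comparator sale against a
# trusted sales record source before it is shown to the user.
# lookup_sale() is a placeholder for a verified data lookup, not a real API.

def verify_comparators(claimed: list[dict], lookup_sale) -> list[dict]:
    verified = []
    for sale in claimed:  # e.g. the tool's claim {"domain": "bet.cc", "price": 6000, "year": 2018}
        record = lookup_sale(sale["domain"])   # returns the real listing, or None if there is none
        if record and record["price"] == sale["price"] and record["year"] == sale["year"]:
            verified.append(sale)              # keep only claims that check out
    return verified
```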

Check Anything That Matters

Until that becomes commonplace, when you are using any result, whether AI generated or not, always check independently anything that matters to you.

I am increasingly worried that as a society we are trusting results generated by AI way more than is warranted.

I welcome comments in the discussion below on any aspects of this topic.


Updates:
1. May 5, 2025: I added one line in the introduction to make clear that the term 'lie' is used simply to mean something not true, with no implication about intent to deceive or not.
2. May 7, 2025: I have not tested extensively, but there seems to have been a recent change at OceanfrontDomains so that they no longer list comparator sales as being NameBio listed. Section updated to reflect this.



Special thanks to Cem Dilmegani who wrote a number of articles related to this topic. Two were cited in the article, and you can browse all his recent articles at this link.
 
The views expressed on this page by users and staff are their own, not those of NamePros.
Well, that's easy:
1. Everything written on the Internets is true.
2. If someone in real life makes a statement that can't be verified with a google query, it's false.
 
•••
Whatever my wife says... It's "true". Nothing else matters.
 
•••
Use common sense, if you have any, and you'll do well.
 
•••
I like Oceanfront because their appraisals exactly match my BINs
 
•••
As always, thanks Bob for another job well done!

Just a few thoughts — some of which you have already touched on:

Humans have free will; AI does not (and I don’t mean to get philosophical here). The responsibility for AI behavior, as much as possible, lies with its creator. While it’s true that users must exercise caution afterward as you encouraged, the onus for this particular program lies with those who developed it. The apparent complacency in allowing this to continue — especially at the expense of others — calls for reflection and accountability.

Another point: much of what you wrote applies just as well to how people share information. Most domain name platforms don’t provide full data transparency to the community. While that may be understandable from a commercial standpoint, it ultimately leaves us piecing together insights from selectively presented outputs. We can only hope their moral compass keeps them from shaping the data to fit a preferred narrative — because the truth is, we often don’t know what the truth is.
 
•••
thanks for sharing bob
 
•••
AI is not there yet for accurate information at all. I had two websites that I was writing content for. For one, I wanted a list of historical paintings. I then looked them up to get images for them and most didn't exist! The second was a very simple directive: tell me the names of some bands who were one-hit wonders. It then told me a lot of bands that were definitely not one-hit wonders. If it can't give me simple information, then it is useless.

I now only use it for ideas and as my personal cheerleader, as it tells me all my ideas are fantastic :ROFL:
 
•••
ty Mr. Bob. :)
 
•••
I like Oceanfront because their appraisals exactly match my BINs
Just for a laugh, they should match your/my hand regs. :ROFL::glasses:
 
•••

What Is True And What Is Not?


Hi

according to the president, only what he says is true and everything else is fake news.

sure, the thread is about ai and yes, ai can lie...
because it was built by humans, who also can lie.

the pendulum swings in ai's favor for now
but a back swing is coming

resistance, is not futile

imo....
 
•••
I think it's just another website that uses the ChatGPT API to get results. There are thousands of them online.

A serious domain AI needs to be built from scratch with domainers in mind, to work as intended.
 
•••
Thanks for your detailed article on AI and its role in misinformation.

A lie is a false statement made with the intent to deceive. In this case, it's not a lie because AI doesn’t actually know whether the comps are real or not. It has no awareness or intent to mislead anyone.

Intent requires consciousness or will, and AI doesn’t have that. It doesn’t have goals, desires, or self-awareness. It just generates responses based on patterns in the data it was trained on.

We learned the definition of a lie back in first grade. Intent matters. Understanding this, always double-check AI-generated information before relying on it.
 
•••
Well, my teenage daughters told me what they say is always true and my wife and I are always wrong :)
 
•••
We learned the definition of a lie back in first grade. Intent matters. Understanding this, always double-check AI-generated information before relying on it.
It is a good point. Thank you.

The question of intent and AI is, I think, not crystal clear, but it is attributed to the intent of those who create the AI. For example, if the prompt encourages it to use nonfactual information, then it could be considered intent, perhaps. From the reading I did, it seems that the newer LLM releases are even more attached to false information. That is, earlier models, when pushed to give a citation to support something, would tend to back down from pushing wrong information and admit to the deception. Newer AI releases seem to hold steadfast to the misinformation.

But your point is well taken: should that be called a lie or not, and can we ascribe intent to an AI at all, ever?
Thanks.

-Bob

BTW I looked to see how Merriam-Webster defines lie. It gives the following. Note that while 1a is very much in keeping with what you learned in elementary school, definitions 1b and 2 do not require the intent aspect. https://www.merriam-webster.com/dictionary/lie
1a: an assertion of something known or believed by the speaker or writer to be untrue with intent to deceive
He told a lie to avoid punishment.

b: an untrue or inaccurate statement that may or may not be believed true by the speaker or writer
the lies we tell ourselves to feel better
historical records containing numerous lies

2: something that misleads or deceives
 
•••
Straight from the horse's mouth, unless GPT is lying ... oh wait a minute, it can't lie ... or can it? I just hope they don't name the next GPT Arnold.

An AI, including language models like me, does not possess consciousness, beliefs, or intentions. Therefore, if an AI provides an answer that is not true, it cannot be said to be "lying" in the human sense because it lacks the capacity for intent to deceive.

When an AI generates a response that is inaccurate or incorrect, it is typically due to limitations in its training data, misunderstandings of the prompt, or inherent challenges in language processing, rather than a deliberate attempt to mislead. In this context, the concept of lying, which involves intent and awareness of truth, does not apply to AI.

So, while an AI can produce false information, it does not lie in the way a human might, as it cannot form intentions or have awareness of the truth or falsehood of its statements.
 
•••
Language evolves. Merriam-Webster has definitions of 'lie' that do not depend on intentions, without the AI part of it, for people too. See 1b and 2 in the previous quote from the dictionary.

Re intentions and AI, I have not read extensively, but most of what I came across in a cursory way is consistent with what you say: the AI itself does not have intentions, at least until AGI is achieved, some argue. But if the person who prompted or programmed it inserted something that led to incorrect information, in a sense the intention lies with them.

But my key point is that in the current dictionary definitions of 'lie', only one of the three definitions of lie as deception requires that there be intent to mislead in order to call something a lie.

-Bob
 
•••
The first definition of a lie, which emphasizes intent to deceive, is the only one that truly matters for adults. The other two definitions seem to trivialize the concept and can be used as excuses to avoid responsibility for defaming someone.
 
•••
Not surprising that there are varying views, since experts actively argue the point. This is the summary that Google AI provides when asked for a scholarly opinion on whether intent is needed for something to be a lie. I used 'lie' in the sense of the non-deceptionist view. I totally respect that others have different views.

-Bob

PS: I added a one-line update in the introduction to the article making clear that the term 'lie' is used in the simple sense of something not true, without any implication on intent to deceive or not.
[Attached image: LieIntention.png]
 