Tool: bulk nameserver lookup for deleted domains

Jonh Borin

Hello guys, is there a tool that gives bulk nameserver data for previously expired domains?
 
The point is, I just realized, this whole thing would have to be a byproduct of another product: bulk Whois data. You cannot know a domain's status without Whois big data. If this ever gets a chance, that is.

That is a BIG business.

Here's an example: https://www.whoisxmlapi.com/whois-database-download.php

They used to list prices (many hundreds or thousands and up). Now it's just "request a quote". No wonder.

So I'd need to build this first before doing the other (the byproduct). Side note: I know this field well; I've been working on it for a while (a few years), along with other tech.

But once that is built, I'm unsure whether I (or anyone else) would still be interested in building that domain research tool. It's simply... pennies vs. the real money, and our lifetime is limited.

Anyway, just thoughts.
 
Important, though: there is no guarantee a domain has been approved; it may just have been pointed via nameservers but never made it into SH.

This is what makes it a hard problem. I suspect most of what you'll find will just be pointed at the SH nameservers, but not approved.
 
Sure, I can't wait for RDAP. I'm sick and tired of working with Whois already, as each TLD is a separate nightmare and they tend to change things, so overnight your Whois parsing stops working.

However, as ICANN says:

"In the short term, RDAP will not replace web-based or port 43 WHOIS. Based on current policies and agreements, contracted parties will be required to implement RDAP in addition to (port 43) WHOIS and web-based WHOIS."

So Whois is still here to stay, with its limitations of course.
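
For the curious: RDAP is plain HTTPS + JSON, which is exactly why I want it. A minimal lookup sketch in Python, assuming the public rdap.org bootstrap redirector and example.com as a stand-in domain:

# RDAP lookup sketch. Assumes the public rdap.org bootstrap redirector,
# which forwards to the authoritative RDAP server for the TLD.
import json
import urllib.request

def rdap_lookup(domain):
    url = "https://rdap.org/domain/" + domain
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

data = rdap_lookup("example.com")                     # stand-in domain
print(data.get("status"))                             # registry status codes
print([e["eventAction"] for e in data.get("events", [])])

The same response shape for every TLD is the whole point, versus per-TLD Whois parsing.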

But again, you don't necessarily need Whois in this case; perhaps just for a tiny subset (your end records).

An SH-activated domain would have nameservers set anyway, which means it's captured via resolution and cached in our archive. And once a domain expires, it will either point to the registrar's parking page OR drop out of resolution completely. Both will be detected by our bot network and flagged as a change to look at. Edit: a Whois call isn't needed to detect either case.
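
To illustrate the detection logic, a stripped-down sketch (dnspython assumed; the real archive is a database, here just a dict):

import dns.resolver  # pip install dnspython

def current_ns(domain):
    # Live delegation, or [] when the domain dropped out of resolution.
    try:
        answers = dns.resolver.resolve(domain, "NS")
        return sorted(r.target.to_text().lower() for r in answers)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers):
        return []

def flag_changes(archive):
    # archive: {domain: last_cached_ns_list}. Yields every domain whose
    # delegation changed since the last pass (expired, parked, dropped).
    for domain, cached in archive.items():
        now = current_ns(domain)
        if now != cached:
            yield domain, cached, now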

Whois is just a bonus, and actually more needed for other applications. (Edit: it is still useful to the end user via our live Whois calls, to see when the domain they want to snap will finally drop; but they can check that on their registrar's Whois page as well.)

Furthermore, there will be a status check for live domains, via the web server for example.
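
That check can be as small as one HEAD request per domain; a sketch with the standard library only:

import urllib.request
import urllib.error

def http_status(domain, timeout=10):
    # A live site answers 200-ish; parked or dead domains tend to show
    # an error code, a lander redirect, or no web server at all.
    req = urllib.request.Request("http://" + domain + "/", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.getcode(), resp.geturl()   # final URL after redirects
    except urllib.error.HTTPError as e:
        return e.code, None                        # server answered with an error
    except (urllib.error.URLError, OSError):
        return None, None                          # nothing answering at all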

Things like Cloudflare are a problem in general, but for this particular case there is actually no problem.

These domains will have a common identifier: the SH (or similar) platform's nameservers. Just as everything put on sale via Afternic will use nsX.afternic.com, or nsY.dan.com, or whatnot.
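
Spotting those is then a plain suffix match on the NS records (sketch, reusing current_ns() from the earlier snippet; the suffix list is illustrative, not exhaustive):

MARKETPLACE_NS = (".afternic.com.", ".dan.com.")   # illustrative suffixes

def listed_on_marketplace(domain):
    # dnspython returns NS names with a trailing dot, hence the suffixes.
    return any(ns.endswith(sfx)
               for ns in current_ns(domain)
               for sfx in MARKETPLACE_NS)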

So again, you're pointing at real issues, but they really apply to other things.
 
ICANN is heading south, in multiple respects. But it's all about corporate benefit... what did we expect?

Do no evil, Google. (ICANN)

It is a nightmare to get right. Most sites never do the necessary design work at the start and end up trying to retrofit scalability.
Retrofitting most often doesn't work. With scaling, you either 1) have a shitload of money to start from scratch if you f*ed it up, or 2) start right from the beginning because you know what you are doing.

I spent years on the scaling aspect alone. That experience is precious.
It sounds like a set of sites rather than one single site.
There is only one site in this project, really. Dotible.

Everything else is servers running entirely via cron jobs and scripts, with databases connecting all the stuff together. It's all automated and requires close to no maintenance. They feel ... empty, yet busy. Say one supervisor task plants a flag, and then various bots get to work depending on the job.
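
A toy version of that flag-and-pickup pattern, with the database as the only coupling between supervisor and bots (sketch; the table and job names are made up):

import sqlite3

db = sqlite3.connect("jobs.db")
db.execute("""CREATE TABLE IF NOT EXISTS flags
              (id INTEGER PRIMARY KEY, job TEXT,
               status TEXT DEFAULT 'pending')""")

def plant_flag(job):
    # The supervisor only plants flags; it never talks to the bots.
    db.execute("INSERT INTO flags (job) VALUES (?)", (job,))
    db.commit()

def claim_flag(job):
    # Each cron-driven bot polls for its own job type and claims one
    # pending flag (simplified; real code needs stricter locking).
    row = db.execute("SELECT id FROM flags WHERE job=? AND status='pending' "
                     "LIMIT 1", (job,)).fetchone()
    if row:
        db.execute("UPDATE flags SET status='running' WHERE id=?", (row[0],))
        db.commit()
        return row[0]
    return None

plant_flag("recheck-zone")   # supervisor side
claim_flag("recheck-zone")   # bot side, normally a separate cron script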

I believe in a data-centric, data-driven, disconnected architecture. It's what it takes to scale correctly, in my opinion. People today over-complicate things a lot due to tech fluff and hype. There is of course useful new tech as well, such as NoSQL, but I don't need it that much.
The guy who started Majestic used to post back on Webmasterworld back in the day. Markus Frind, the guy who started Plenty Of Fish, also posted there and his description of how he set up the site is definitely worth reading even now. He took a completely different approach to design than the large dating sites at the time and beat them. The guy who built ZFbot used to post here on Namepros. Many of the largest websites start with one developer with an idea rather than well resourced teams.
How about Linus Torvalds? He started by posting about his hobby project... Linux... it's just a hobby... it will never grow as big as HP-UX or whatnot.
The simultaneous users number is always a concern. The key to this is a bit counter-intuitive, but it has to do with limiting the user's options. My HTML has been known to make grown web developers cry. :)
Don't worry about that. I do have half-old GUIs as well, and Dotible is nowhere near the fluff needed nowadays.

There will be a new interface at some point, but I'm keeping things rather simple for now.

Side note: you can improve performance a lot, but it depends on the tricks up your sleeve. Sometimes a significant architecture change is needed (and more servers / higher cost). Other times you can get away with little.

Dotible tasks can be extremely CPU-consuming, but I've been working on that. For example, the appraiser is nowhere near 500 lines of code or whatever; it took years to refine. Users want bulk access; well, I need code performance, not just server performance, in order to make a profit.

The guy who started Majestic used to post back on Webmasterworld back in the day. Markus Frind, the guy who started Plenty Of Fish, also posted there and his description of how he set up the site is definitely worth reading even now.

Ah, WMW ... those were some nice times. I felt bad when it basically died. But it was a sign of its time passing.

I used to run some large sites back then; one of mine was top 5 in my country. But over the years SEO changed, and I didn't want to go the PBN route, although it's the only way to get SEO performance nowadays.

Final note: domain numbers are increasing anyway, so I guess performance and scale will count much more in the coming years.

Question: do you fear your model / site is at risk due to the existential threat to Whois itself?
 
ICANN is heading south, in multiple respects. But it's all about corporate benefit... what did we expect?
It is a multistakeholder model with various constituencies having their input. It has its problems, but it could have been a lot worse.

There is only one site in this project, really. Dotible.
As you've outlined, that's the frontend to a lot of other processes.

Everything else is servers running entirely via cron jobs and scripts, with databases connecting all the stuff together. It's all automated and requires close to no maintenance. They feel ... empty, yet busy. Say one supervisor task plants a flag, and then various bots get to work depending on the job.
The idea of updating a large site in real time is unsettling, but it is your design and you understand it best.

How about Linus Torvalds? He started by posting about his hobby project... Linux... it's just a hobby... it will never grow as big as HP-UX or whatnot.
It started on Usenet, I think. Once it became somewhat stable, a lot of the Bulletin Board Systems downloaded it from the Internet (fun with FTP and Gopher and slow connections) and it took off from there. There wasn't much of a WWW at that time.

Don't worry about that. I do have half-old GUIs as well, and Dotible is nowhere near the fluff needed nowadays.
People need data, not distractions. Ideally, it might be best to offer a variety of data formats (HTML, CSV, TSV, etc.).
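
Serving several flat formats from the same rows is nearly free; something like this (sketch):

import csv
import io

def render(rows, header, fmt="csv"):
    # Same data, three serializations; the HTML is deliberately plain.
    if fmt in ("csv", "tsv"):
        buf = io.StringIO()
        w = csv.writer(buf, delimiter="," if fmt == "csv" else "\t")
        w.writerow(header)
        w.writerows(rows)
        return buf.getvalue()
    if fmt == "html":
        head = "".join("<th>%s</th>" % h for h in header)
        body = "".join("<tr>" + "".join("<td>%s</td>" % c for c in row)
                       + "</tr>" for row in rows)
        return "<table><tr>%s</tr>%s</table>" % (head, body)
    raise ValueError(fmt)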

Ah, WMW ... those were some nice times. I felt bad when it basically died. But it was a sign of its time passing.
Before Google decided to turn to the dark side. Matt Cutts used to post there.

I used to run some large sites back then; one of mine was top 5 in my country. But over the years SEO changed, and I didn't want to go the PBN route, although it's the only way to get SEO performance nowadays.
PBNs are remarkably obvious and Google could end them quickly if it were so motivated. They have unnatural social networks.

Final note: domain numbers are increasing anyway, so I guess performance and scale will count much more in the coming years.
The funny thing is that most of them are not used for websites. The new gTLDs effectively created a lot of one-hit-wonder registrations of randomly generated domain names that were registered once, dropped and never registered again. There are more deleted .COM domain names than there are active .COM domain names. Some registrations were by businesses or individuals. Some were speculative and a lot were junk (Domain Tasting).

Question: do you fear your model / site is at risk due to the existential threat to Whois itself?
HosterStats doesn't use WHOIS data. I had to provide WHOIS cover for a registry when it was moving servers/premises a long time ago and it was enough to make that decision obvious.

The idiocy of GDPR was only the start of the problems caused by the European Union and the European Commission. There's the NIS2 directive, which is even worse, and it seems to have been formulated by people who hadn't a clue about DNS or how it works. The directive wants the operator of every DNS service identified.

The ICANN CZDS serves the gTLD zone files now. It makes things a lot easier. I was on the advisory group for that, but the problem with the 90-day access renewal was not part of the specification for the CZDS. The provision of zone file access is part of the gTLD registry contracts (some of the registries may deny access on certain grounds). Most ccTLDs do not allow zone file access.
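
Zone files also make the nameserver-matching approach cheap at scale, since every delegation NS record is right there. A one-pass sketch over the usual gTLD zone file layout (one record per line):

def ns_map(zone_path):
    # CZDS gTLD zone files are plain text, e.g.:
    #   "example.com. 172800 in ns ns1.example.net."
    mapping = {}
    with open(zone_path) as f:
        for line in f:
            parts = line.lower().split()
            if len(parts) >= 5 and parts[3] == "ns":
                mapping.setdefault(parts[0], set()).add(parts[4])
    return mapping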

Regards...jmcc
 
@jmcc

I just realized I haven't replied to your last comment here, so here I go.

As you've outlined, that's the frontend to a lot of other processes.

True. But you said sites; well, there's only one site. Anyway, this is just about the terms used and the meaning we give them. You are right: there are a lot of other processes as well. Some are monitoring, others are queueing, and some do the cleanup afterwards.

The one difficult thing is to make "everyone" "behave" and stay in their place. Just like crowds, they are hard to control.

The idea of updating a large site in real time is unsettling, but it is your design and you understand it best.

I don't see it as unsettling at all. There are mechanisms in place to make sure things go well.

We'll see later whether a background load-and-switch process is needed. I have that prepared too, but I'm messing with the live workers currently. TBH, I've never had a DB crash in the past because of this, even under sheer concurrency. It might, however, slightly delay some larger client read jobs while data is still being written and some workers queue up over the same zone.
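
For reference, the load-and-switch itself can come down to one atomic rename once the staging copy is loaded. A sketch (MySQL-flavoured SQL through any DB-API connection; the table names are made up):

def load_and_switch(conn, bulk_load):
    # Build the fresh copy off to the side, then swap it in with a single
    # atomic metadata operation, so readers never see a half-written table.
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS zone_staging")
    cur.execute("CREATE TABLE zone_staging LIKE zone_live")
    bulk_load(cur)                 # caller bulk-inserts into zone_staging
    conn.commit()
    cur.execute("RENAME TABLE zone_live TO zone_old, "
                "zone_staging TO zone_live")
    cur.execute("DROP TABLE zone_old")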

People need data, not distractions. Ideally, it might be best to offer a variety of data formats (HTML, CSV, TSV, etc.).
True.

I'll check the site out. Who knows, maybe there's something I need too.
Before Google decided to turn to the dark side. Matt Cutts used to post there.
Never liked Matt Cutts, sorry.

Too much BS.

PBNs are remarkably obvious and Google could end them quickly if it were so motivated. They have unnatural social networks.

My only take on this is that they benefit from it somehow. Only deep-pocketed guys can truly build and maintain a serious PBN. Favoring the little guy? Nah, that doesn't pay as much.

HosterStats doesn't use WHOIS data.
That's good.

Hmm, I just realized that using Whois data, and especially reselling / providing Whois data, without the registrar's approval, even if obtained from a public Whois server, might be grounds for a serious lawsuit.

I'm probably going to stay out of it. For what I need, it's more or less okay without Whois.

Going to shoot you a DM with some questions, BTW.

The idiocy of GDPR was only the start of the problems caused by the European Union and the European Commission. There's the NIS2 directive, which is even worse, and it seems to have been formulated by people who hadn't a clue about DNS or how it works. The directive wants the operator of every DNS service identified.

I haven't even dared to look into the NIS2 directive yet. Has it been issued already?

The EU is going south, BTW, on so many levels. I'm in the EU, so I know this well. They only aggravate me, really. I spent countless hours reading the GDPR and working with our lawyers, as there's so much stuff in there that doesn't add up. In the end, it all became clear. I would sum it all up in just one word: bollocks.

The intention is good if you ask me, at least in principle. There's a need for better data protection; but this is the format they came up with? Those folks are only lawyers and politicians, and they have no clue whatsoever how tech works. I doubt they even asked the tech people a thing. They just decided: "Do this, I don't care how" and the result is NOT better data protection, but rather the hindering and damaging of serious businesses and services that important things might depend on.
 