As someone who works a lot with AI-related technologies, here are my thoughts.
1) YouTube, TikTok, IG, and others now require uploaders to check a box if their content is AI-generated. Many people who didn't care about AI-generated videos now find them annoying because the volume is insane and sometimes you can't tell it's AI. Recently, police in New Jersey went searching for monkeys on the loose because someone posted AI-generated images. The platforms obviously run their own checks as well and can ban you for not disclosing that your content is AI.
2) Music is facing the same issue. It wasn't a problem for a while, but now AI music is everywhere, and done right, it is impossible to detect. The reason is that human singers have been using AI for years to fix parts of a song, so when quality AI is used for a complete song, you cannot tell the difference. Spotify had to roll out new rules and technology because of this.
3) LinkedIn is full of AI posts. Instead of having you check a box if your content is AI, they actually offer (if you are a paid member) to have AI rewrite your post. I am including a screenshot:
--------------------
LinkedIn's user experience has, without a doubt, been degraded by the flood of AI posts. People don't like replying to AI posts (unless they're commenting "AI slop"), and now, when people suspect something is AI, they are less likely to engage. This will happen on every platform built around communication: people will avoid the AI posts, mistake legitimate posts for AI, and the overall experience will only degrade.
4) The problem with automated tools that check whether forum text is AI-written is that some people who don't speak the site's default language, or who simply want their question to sound better, use AI to help them write it. This is now common with email. I get emails from people I know, but I can tell they wrote a few words and asked AI (now built into many email clients) to expand them. It rarely helps communication; people are better off sticking to their own writing, even with bad grammar and minor misspellings.
5) Not all AI content is created equal. I see posts here that I know are AI; however, I can tell the person put in manual work to correct and rewrite the parts where the AI got something wrong about this industry. These posts carry value: they were guided by a human within the industry, and there is a real person with experience behind them. Should they be disallowed? I don't think so. Would it be good for the poster to mark them as AI? Probably. Not because they lack value, but because some people, old and young, don't want any AI content if given the choice. So in the future, I expect most major platforms will let users select "hide AI content" in their settings.
If someone writes a post themselves but uses AI to help generate a graphic or video to illustrate it, I think that is fine as long as the image or video helps the reader in some way. People should be responsible and disclose when something is AI.
I think we can all agree that bots that post and reply using AI should be banned from forums. My guess is that the reason we don't see much of that here on NP is that the mods are playing whack-a-mole all day. NP ranks very high in search, so I have no doubt the bots are showing up here in droves.
I believe that the more companies and platforms push AI, the stronger the desire will be to speak to an actual human for certain matters, especially personal experiences, business, challenges, etc.
The dedicated section on this forum for AI images was, I think, a great idea. It lets those interested in AI browse and submit such content, and it also keeps community members in the loop about the technology's capabilities.
My general thoughts about AI, not directly related to the OP's post:
With the release of Sora 2 and Veo 3.1, more people are waking up to the dangers of AI video. OpenAI knew this: they place three large watermarks on videos (many people remove them, and you can generate video without the watermark using the API, but only after providing OpenAI with a government ID). There are content creators producing rage-bait videos (fake AI-generated videos designed to trigger anger and comments) so they can make some money. Even with a clear tag under the video stating it is AI, a lot of people in the comments don't notice it.
AI-generated text is also full of errors. It makes a lot of things up (known as AI hallucinations). Recently, a lawyer was fined for using AI in a case; the AI had fabricated caselaw.
And if you think AI-generated text, images, and videos are creating a mess, just wait for the mess coming from "vibe-coded" websites and apps. When an experienced developer uses AI to generate code and reviews it, the result is usually faster work. Not always, but if the developer limits and controls exactly what the AI produces, it can be good. However, anyone who doesn't review the code, especially people with no experience, is risking the security of their users. For a static website with just some text and images, this is not a big issue. But for sites with forms, ordering systems, admin dashboards, or APIs that connect to different services, it will turn into a big disaster. These vibe-coded sites will make WordPress look secure.
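To make that concrete, here is a minimal, hypothetical sketch of the kind of bug that slips through when nobody reviews AI-generated code: a database lookup built by pasting user input straight into the SQL string. All the names (tables, functions) are illustrative, not from any real vibe-coded site.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern unreviewed generated code often produces:
    # user input is interpolated directly into the SQL string.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The reviewed fix: a parameterized query, so input is
    # always treated as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# A classic injection payload: the unsafe version leaks every row,
# while the parameterized version matches nothing.
payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # every user leaked
print(len(find_user_safe(conn, payload)))    # no match
```

A developer reviewing the output catches this in seconds; someone who has never heard of SQL injection ships it.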
I think that within the next 12 months, the negative effects of AI on the job market will become undeniable. First it is hitting people who made money from blogs, freelancing, and research. Next it will hit people with full-time jobs. The jobs AI creates will not be nearly enough to make up for the losses.
People who are quick to tell others to "embrace AI" probably haven't experienced enough of it.
For now, there is a race between global AI companies (mostly US- and China-based). No consideration at all is given to the downsides of this race.