Nice detail to use when searching the internet btw:
“But if you’re collecting data before 2022 you’re fairly confident that it has minimal, if any, contamination from generative AI,” he added. “Everything before the date is ‘safe, fine, clean,’ everything after that is ‘dirty.’”
Try running searches set to pre-2022, at least for older info, to reduce the possibility of AI-generated noise (rough sketch of how you might script that below).
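If you want to script that tip, here's a minimal sketch. It isn't from the article; it just assumes Google's documented `before:` date operator, and the query and cutoff date are placeholders you'd swap out:

```python
# Rough sketch: build a search URL scoped to pre-2022 results.
# Assumes Google's "before:" date operator; the query and cutoff are placeholders.
from urllib.parse import quote_plus

def pre_2022_search_url(query: str, cutoff: str = "2022-01-01") -> str:
    """Return a Google search URL limited to pages dated before the cutoff."""
    scoped_query = f"{query} before:{cutoff}"
    return "https://www.google.com/search?q=" + quote_plus(scoped_query)

# Example: hunt for older reference material with less chance of AI-generated noise.
print(pre_2022_search_url("transformer architecture explained"))
```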
Anyway, kinda funny to see that these generators may be producing enough noise to make producing more noise somewhat harder. Hopefully this doesn’t also impact more productive AI development, such as what’s used in scientific research and the like, as that would genuinely suck.
Edit:
Revised from generators “have produced” to “may be producing” to better reflect the lack of concrete info regarding generative AI data pollution, as someone else pointed out. As they note:
“Now, it’s not clear to what extent model collapse will be a problem, but if it is a problem, and we’ve contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible,” he told The Register.
The plus side of actually useful LLM/AI applications is that the data is usually a small subset, and it would have to be tested anyway since it has to work in the real world. I think the main mainstream use of LLM/AI is on small datasets like that, rather than the race for the holy grail of “General” AI.
Odd URL… here’s the original: https://futurism.com/chatgpt-polluted-ruined-ai-development
There’s nothing in the article, the Register article, or any of the references that claims there is actual pollution of data.
It’s based on speculation made years ago.
Fuck. Will this next epoch retrospectively be considered a dark age, not because of disinformation, but because after 2022 we were gibberishing morons?