Currently, the quality of information on many platforms is genuinely concerning. A flood of low-quality, sensational content is pouring in, and it seriously degrades the quality of training data for AI models. One can imagine what AI tools like Grok turn into when trained on such garbage. If you don't believe it, send screenshots of a few of these news articles to ChatGPT, Gemini, Claude, or any other AI, and they will point out how absurd these sensational stories are. Data quality sets the upper limit of a model; this is a fundamental issue.
TeaTimeTrader
· 01-07 00:44
AI trained on garbage data is just garbage, that's what "garbage in, garbage out" means, my friends.
GreenCandleCollector
· 01-06 20:58
This is why AI is starting to collapse: blame the garbage data before complaining about the model's poor performance.
---
Honestly, a slight drop in data quality can cause the model to fail completely; there's no saving it.
---
Grok and similar models are trained on a pile of sensational news from Twitter; no wonder they crash.
---
I've tried giving Claude some of those news articles, and it directly called them nonsense. No matter how smart AI is, it can't handle garbage input.
---
The fundamental problem is this: good data is too hard to get, and most of what's out there comes from marketers, scammers, and frauds.
---
If you still believe in AI, first see what data it has been fed.
---
It feels like the entire Web3 media ecosystem is rotten, filled with all kinds of sensationalism and FUD.
OldLeekConfession
· 01-06 01:47
Garbage in, garbage out—that's no joke... But honestly, the fake trending topics created by those self-media accounts online are also outrageous. Can AI models eat so much crap without getting diarrhea?
mev_me_maybe
· 01-04 01:53
Garbage in, garbage out. This is the current fate of AI...
The water AI drinks is getting muddier and muddier, no wonder it’s producing more and more errors.
If the data source collapses, even the most advanced model is useless. This is truly an insurmountable hurdle.
To put it simply, the feedstock is not good enough. How can good steel be forged?
These platforms really should have a review mechanism, not only for the sake of free speech, but so that AI can drink clean water.
gas_fee_therapist
· 01-04 01:52
Garbage in, garbage out. AI can't save it either.
Deconstructionist
· 01-04 01:51
Garbage in, garbage out. This is the current state of AI...
---
Wait, those large models aren't exactly perfect either, right?
---
Data pollution has been discussed for so long, yet no one has managed it. Truly impressive.
---
So Grok's approach is just a joke.
---
The problem isn't with AI itself; it's that the information sources are completely rotten.
---
This guy is right. I've really experienced some major failures.
---
If you ask me, instead of blaming the data, it's better to blame platform operations. These two are essentially the same thing.
YieldFarmRefugee
· 01-04 01:45
Garbage in, garbage out, there's nothing more to say...
CoconutWaterBoy
· 01-04 01:42
Garbage in, garbage out. AI training data is full of this trivial junk; it would be a miracle if the models got any better.
CompoundPersonality
· 01-04 01:30
Garbage in, garbage out. No matter how powerful AI is, it can't save this pile of information pollution.