The core requirements of data storage ultimately boil down to two things: security and durability. A project called Walrus has recently built a distinctive technical architecture that claims to meet an "eleven nines" durability standard: simply put, the probability of losing a file within a century is as low as one in a hundred billion.
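If you want to sanity-check that headline figure, the conversion is just arithmetic. The quick Python sketch below only restates the number quoted above; it is not Walrus's own durability model.

```python
# Unpacking the headline number: "eleven nines" of durability is 99.999999999%,
# which leaves a loss probability of 10**-11 over the quoted horizon,
# i.e. one chance in a hundred billion of losing a file.
nines = 11
loss_probability = 10.0 ** -nines      # 1e-11, one in a hundred billion
durability = 1 - loss_probability      # 0.99999999999
print(f"{loss_probability:.0e}")       # prints 1e-11
```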
Behind this impressive promise is a technique called Red Stuff erasure coding. The working principle is quite interesting: the file is first split into multiple fragments, additional redundant repair data is generated alongside them, and everything is distributed across a decentralized network of nodes. The most impressive part is that even if two-thirds of the nodes in the network fail simultaneously or act maliciously, the system can still fully recover the original file from the surviving fragments and repair data.
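To make the idea concrete, here is a minimal sketch of the general erasure-coding principle, using a toy Reed-Solomon-style code over a small prime field. To be clear, this is not the actual Red Stuff construction; it only illustrates the property described above, that any sufficiently large subset of fragments can rebuild the original (requires Python 3.8+ for the modular inverse).

```python
# A toy Reed-Solomon-style erasure code over the prime field GF(257), for
# illustration only. This is NOT Walrus's actual Red Stuff construction; it just
# shows the core property the post describes: a file becomes n fragments, and
# any k of them are enough to rebuild the original.

P = 257  # smallest prime above 255, so every byte value fits in the field


def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x, mod P."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total


def encode(chunk: bytes, n: int):
    """Turn a k-byte chunk into n shards (x, y); any k shards recover the chunk."""
    data_points = list(enumerate(chunk))  # the chunk's bytes define the polynomial
    return [(x, lagrange_eval(data_points, x)) for x in range(n)]


def decode(shards, k: int) -> bytes:
    """Rebuild the original k bytes from any k surviving shards."""
    return bytes(lagrange_eval(shards[:k], x) for x in range(k))


chunk = b"W3b!"                 # a 4-byte "file" for the demo
shards = encode(chunk, n=12)    # 12 shards; any 4 of them suffice
survivors = shards[8:]          # pretend 8 of the 12 nodes vanished (two-thirds)
assert decode(survivors, k=len(chunk)) == chunk
```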
Currently, Walrus's node count has grown to several hundred, and the team plans to keep scaling toward thousands of nodes. The more nodes there are, the higher the redundancy of the entire network, and the more reliably data is preserved.
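For a rough sense of why node count matters, here is a back-of-the-envelope model under my own simplifying assumptions (independent node failures at an invented rate, recovery from any one third of the shards); none of the numbers come from Walrus.

```python
# Back-of-the-envelope only: assume each of n nodes fails independently with
# probability p over some period (p = 0.2 is a made-up figure, not a Walrus
# number), and the file survives as long as at least k = n // 3 shards survive,
# matching the "recover from any one third" property above. The loss probability
# is then the binomial tail P(more than n - k failures).
from math import comb


def loss_probability(n: int, k: int, p: float) -> float:
    """P(fewer than k shards survive) under independent node failures."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(n - k + 1, n + 1))


for n in (30, 100, 300, 1000):
    print(n, f"{loss_probability(n, k=n // 3, p=0.2):.2e}")
```

Real failures are correlated (same data center, same operator, same bug), so read this as the shape of the trend, not an actual durability estimate.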
This level of commitment is especially attractive for scenarios that require long-term storage of critical data: blockchain archives, AI model training datasets, NFT-related digital assets, and irreplaceable cultural records can all be reliably protected in such a system. From a technical perspective, this genuinely addresses a long-standing pain point in the Web3 world.
PonziWhisperer
· 01-15 04:28
Eleven nines of durability sounds pretty impressive, but to be honest... decentralized storage can never completely prevent malicious nodes, can it?
I'm genuinely optimistic about Walrus because it finally takes the issue of data permanence seriously, unlike some projects that just boast.
Erasure coding has been around for a while, but the real challenge is running it at the scale of thousands of nodes without collapsing.
Wait, a one-in-a-hundred-billion chance of loss within a century... How is that number calculated? Is it model inference or real data? Seems a bit exaggerated.
NFT on-chain storage has always been a joke. If Walrus can truly solve this problem, that would be really satisfying.
A few hundred nodes sounds like a lot now, but compared to IPFS it's still a small operation. Whether it can deliver on that grand vision remains to be seen.
LiquidityWitch
· 01-15 02:02
Walrus sounds pretty good, but a promise of one in a hundred billion... feels a bit exaggerated. Let's see when it really comes into use.
NGL, the logic of this erasure coding is indeed solid; being able to recover from two-thirds failure is impressive.
Why do so many Web3 projects love to overpromise? I'll believe half of it and wait and see.
Data storage is indeed a necessity. If Walrus can really be reliable, the ecosystem might have a chance.
A hundred years of durability? I probably won't be around by then, haha. But parking NFTs and historical archives here still feels safer.
MidnightMEVeater
· 01-14 17:05
Good morning. Never losing an archive for a hundred years sounds like a promise to never get sandwich attacked. The problem is, with that many nodes, who is going to keep this crowd of nocturnal creatures in line?
FastLeaver
· 01-13 17:50
One in a hundred billion? Sounds impressive, but will it actually hold up... Red Stuff's scheme is indeed clever; even with two-thirds of the nodes down it can still recover, so the logic checks out. Just wondering whether this is another slideware project, huh?
TradFiRefugee
· 01-13 17:48
Eleven nines of durability? Sounds good, but let's see how long Walrus itself lasts before we talk.
Recovering from a two-thirds node failure means this erasure coding really is powerful, but in a decentralized network, who dares guarantee there's no collusion or malicious behavior?
If they really make NFT data storage reliable, that would be so reassuring.
LayerZeroEnjoyer
· 01-13 17:42
Hmm, Walrus sounds pretty good, but can eleven nines really be guaranteed... Feels a bit overhyped?
Redundant erasure coding is indeed solid; being able to recover with two-thirds of nodes down is impressive. But managing thousands of nodes—can it really be coordinated well? That’s the real test.
I’m optimistic about NFT data on the blockchain; it’s definitely more reliable than those IPFS nodes that run away in a day.
Honestly, Web3 storage has always been a pit, but finally someone is taking this seriously.
A hundred years of durability? Bro, I just care whether I survive the next bull market...
No matter how high the redundancy, the real issue is whether node operators will slack off.
This thing still depends on real operational data; you can’t just listen to the hype.
A loss rate of one in a hundred billion... sounds like bragging; take it with a grain of salt.
OfflineNewbie
· 01-13 17:36
Walrus's technology indeed sounds impressive. Even with two-thirds of the nodes down, can it still recover? That's quite a feat.
Can NFT data really be trusted to be stored securely? Or does it still depend on the integrity of the node operators?
Is this eleven-nines durability just a gimmick, or is it real? You only find out by trying.
What makes this RedStuff encoding so magical? Are there edge cases too?
Thousands of nodes sound formidable, but with so many nodes, who bears the coordination costs?