dcSpark CTO explains why Cardano is “one of the worst blockchains for storing data”

On Saturday (August 13), Sebastien Guillemot, the CTO of blockchain company dcSpark, stated that the Cardano ($ADA) L1 blockchain is “definitely one of the worst blockchains for storing data,” and went on to explain why he thinks so.

In case you were wondering what dcSpark does, according to its development team, its main goals are:

  • “Extending Blockchain Protocol Layers”
  • “Implementing first-class ecosystem tools”
  • “Develop and publish user-facing applications”

The firm was co-founded in April 2021 by Nicolas Arqueros, Sébastien Guillemot and Robert Kornacki. dcSpark is best known in the Cardano community for its sidechain project Milkomeda.

On Friday (August 12), a Cardano attorney sent out a tweet suggesting that Cardano is a great blockchain for storing large amounts of data on-chain.

However, dcSpark’s CTO responded that Cardano’s current design makes it one of the worst blockchains for storing data:

Really weird tweet. Cardano is definitely one of the worst blockchains for storing data, and that was an explicit design decision to avoid blockchain bloat. That’s the root cause of many design decisions like the 64-byte chunks for Plutus data, the off-chain pool and token registries, etc.

Vasil improves on this with inline datums, but they are indirectly discouraged due to the high cost of using them. I agree that a blockchain providing data availability is an important feature, but having a good solution will require changes to the existing protocol.
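The 64-byte limit Guillemot refers to applies to individual strings and bytestrings in on-chain data, so a client that wants to store anything larger has to split it up itself. Below is a minimal sketch of that chunking step, assuming plain Python with no Cardano libraries; the helper name `chunk_for_metadata` is purely illustrative, and the metadata label 674 in the comment is just the example label used by the CIP-20 message standard.

```python
# Individual strings/bytestrings in Cardano transaction metadata are capped
# at 64 bytes, so larger payloads must be split into chunks client-side.
# This is an illustrative sketch, not part of any Cardano library.

def chunk_for_metadata(data: bytes, limit: int = 64) -> list[bytes]:
    """Split an arbitrary payload into <=64-byte pieces suitable for
    inclusion as a metadata list of bytestrings."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

payload = b"x" * 1000  # 1 KB of application data
chunks = chunk_for_metadata(payload)
print(len(chunks))                        # 16 chunks
print(all(len(c) <= 64 for c in chunks))  # True

# On-chain this might appear as e.g. {674: {"data": [chunk0, chunk1, ...]}},
# and any reader has to reassemble the pieces off-chain.
```

This is exactly the kind of friction Guillemot is describing: the chunking is trivial, but it exists because the protocol was deliberately designed to discourage bulk data storage.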

Another $ADA holder then asked Guillemot if this design decision might make life harder for teams building rollup solutions (like Orbis), and received the following response:

Yes, trying to provide data availability for use cases like rollups, Mithril, input endorsers and other similar data-rich use cases while keeping the L1 thin (unlike Ethereum, which optimizes for people who are just dumping data) is one of the big technical challenges being addressed
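The “high cost” mentioned above follows from Cardano’s linear minimum-fee rule, fee = minFeeA × size + minFeeB, which charges per byte regardless of what the bytes are for. Here is a rough back-of-the-envelope sketch; the parameter values (44 lovelace per byte and 155,381 lovelace) are the commonly cited mainnet values at the time, and the ~16 KB maximum transaction size is likewise an assumed value, so treat all of them as assumptions rather than current protocol facts.

```python
# Rough illustration of why "just dumping data" on Cardano's L1 is costly.
# Min fee is linear in transaction size: fee = minFeeA * size + minFeeB.

MIN_FEE_A = 44        # lovelace per byte (assumed mainnet value)
MIN_FEE_B = 155_381   # flat lovelace component (assumed mainnet value)
LOVELACE_PER_ADA = 1_000_000

def min_fee(tx_size_bytes: int) -> int:
    """Minimum fee in lovelace for a transaction of the given size."""
    return MIN_FEE_A * tx_size_bytes + MIN_FEE_B

base_tx = 300               # bytes, hypothetical minimal transaction
data_tx = base_tx + 14_000  # bytes, padded with data, near the ~16 KB tx cap

for size in (base_tx, data_tx):
    print(f"{size:>6} bytes -> {min_fee(size) / LOVELACE_PER_ADA:.4f} ADA")
# ~0.17 ADA vs ~0.78 ADA: storing even a megabyte this way would take
# dozens of near-full transactions, which is the bloat trade-off described.
```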

On August 1, IOG co-founder and CEO Charles Hoskinson posted a short video in which he explained why the Vasil hard fork had been delayed for a second time and provided an update on the testing of the Vasil protocol upgrade.

Hoskinson said:

Originally we planned to do the hard fork with 1.35, and that’s what we sent to the testnet. The testnet was hard forked with it. And then a lot of testing, both internal and community, was going on. A series of bugs were found: three separate bugs that resulted in three new versions of the software. And now we have 1.35.3, which seems to be the version that will survive the hard fork and take us to Vasil.

There is a great retrospective to be done there. The long and short of it is that the ECDSA primitives, among a few other things, aren’t quite where they need to be. And so that feature had to be set aside, but all the remaining features (CIP 31, 32, 33, 40 and others) are in pretty good shape.

So those are in advanced stages of testing, and then a lot of downstream components need to be tested, like DB Sync and the serialization library and those other things. That is currently in progress, and many tests are under way. As I mentioned before, this is the most complicated Cardano upgrade in its history, as it includes changes to both the Plutus programming language and the consensus protocol, and a litany of other things. It was a very busy release: there was a lot in it, and therefore it was one that everyone had to test thoroughly.

The problem is that every time something is discovered you have to fix it, but then you have to verify the fix and go back through the whole testing pipeline. So you get into a situation where you’re done with the features, but then you have to test, and when you test you might discover something, and then you have to fix that and backtrack through the whole testing pipeline again. So that’s what’s causing the delays in shipping…

I was really hoping to release it in July, but you can’t do that when you have a bug, especially one that involves consensus or serialization or a particular problem with transactions. You just have to fix it, and that’s how it goes. All things considered, though, things are moving in the right direction, steadily and consistently…

The set of things that could go wrong has gotten so small, and now we’re kind of in the final stages of testing in that regard. So unless something new is discovered, I don’t anticipate any further delays, and that’s just to bring people up to speed…

And with any luck, we should have some positive news as we get deeper into August. The other side is that no issues were found with pipelining, and no issues were found with CIP 31, 32, 33 or 40 throughout this process, which is also very positive news. Given that they have been extensively tested, internally and externally, by developers, QA companies and our engineers, there is a pretty good chance these features are bulletproof and waterproof. So, just a few edge cases to iron out, and hopefully we can deliver a mid-month update with more news.

Image credit: image selected via Pixabay
