WEBVTT
00:00:00.580 --> 00:00:01.824
Hey everyone it's Alex.
00:00:01.864 --> 00:00:09.631
Kahaya from the Index Podcast. I want to tell you about Mantis, a groundbreaking platform that's simplifying the way we interact across blockchains.
00:00:09.631 --> 00:00:13.808
If you're a developer or just into DeFi, you'll want to pay attention.
00:00:13.808 --> 00:00:25.771
Mantis enables trust minimized transactions across different chains, letting you trade or execute actions seamlessly while getting the best possible outcome, all without the usual complexities.
00:00:25.771 --> 00:00:36.014
Imagine being able to move assets and settle transactions across blockchains easily, with maximum value extraction, all while staying secure and decentralized.
00:00:36.014 --> 00:00:39.244
That is what Mantis is bringing to the table.
00:00:39.968 --> 00:00:48.713
Mantis is an official sponsor of the Index Podcast, and their founder, Omar, and I regularly host a live stream series on X called Everything SVM.
00:00:48.713 --> 00:01:01.854
We have these live streams weekly, and if you want to keep up with what's happening in the Solana ecosystem, especially as it relates to the new innovative deployments of the Solana virtual machine, you should tune into this live stream.
00:01:01.854 --> 00:01:10.868
Check them out at mantis.app and follow them on X at Mantis, M-A-N-T-I-S. At the Index,
00:01:10.868 --> 00:01:18.983
we believe that people are worth knowing, and we thank Mantis for enabling us to tell the stories of the people who are building the future of the internet.
00:01:18.983 --> 00:01:56.917
We'll see you on the other side.
00:01:56.917 --> 00:02:11.143
Welcome to the Index Podcast. I'm your host, Alex Kahaya, along with Omar Zaki, and today I'm excited to have Nick White, Vice President of Celestia Labs. Celestia is a modular blockchain powering unstoppable apps with full stack customizability.
00:02:11.143 --> 00:02:12.486
Thanks for being here, Nick.
00:02:12.486 --> 00:02:14.451
I appreciate you taking the time today.
00:02:15.259 --> 00:02:15.721
Thanks, Alex.
00:02:15.721 --> 00:02:17.407
Yeah, it's a little bit of a mouthful.
00:02:17.407 --> 00:02:23.091
You know, unstoppable apps with full stack customizability, but you got through it. Excited to talk about all things SVM.
00:02:28.265 --> 00:02:46.430
Yeah, I feel like a lot of people may perceive Celestia and Solana to be kind of opposing forces in this modular versus monolithic architecture debate, but I think there's a lot of alignment between Solana and Celestia in terms of wanting to provide abundance, like abundant throughput, I mean, wanting to optimize the stack for performance.
00:02:46.430 --> 00:02:51.111
But then there are also some, like, divergences in philosophy and thinking and design.
00:02:51.111 --> 00:02:52.526
But, yeah, it's a pleasure to be here.
00:02:53.068 --> 00:02:59.491
It's really interesting to have watched this debate unfold really over like the last 12 months more intensely than it did the previous like couple of years.
00:02:59.491 --> 00:03:08.436
For Solana at least, like inside the Solana ecosystem, right. There are other ecosystems, like Cosmos, and Cosmos was, like, out of the gate, highly modular, right.
00:03:08.436 --> 00:03:15.390
I think like the natural maturation process of anything that's monolithic is to become modular.
00:03:15.390 --> 00:03:17.903
I think we've seen that happen just over time.
00:03:17.903 --> 00:03:25.180
Like if you look at Linux, one of the most widely used pieces of open source software, it kind of went the same direction.
00:03:25.180 --> 00:03:25.762
Eventually.
00:03:25.762 --> 00:03:26.944
I don't really know why that is.
00:03:26.944 --> 00:03:38.590
Perhaps we could talk about that, but before we do, maybe you can just jump in and just tell us a little bit about Celestia, for people who might not know exactly what it is and why you're working on that.
00:03:38.590 --> 00:03:41.842
Why do you get out of bed every day to work on Celestia?
00:03:42.484 --> 00:03:49.087
So Celestia is a modular blockchain network, which means that it doesn't do all the functions of a blockchain.
00:03:49.087 --> 00:03:56.567
It purposely leaves execution, which is where, like, transactions get processed and state gets updated.
00:03:56.567 --> 00:04:16.194
It leaves that to the developer to define and run as their own sort of application specific chain, and Celestia just focuses on the back end, if you will, of consensus and data availability, and the beauty of taking that approach is that Celestia can be extremely scalable and flexible as a result of that.
00:04:16.194 --> 00:04:35.629
So it's possible to scale execution in terms of, like you know, increasing the performance of a chain, of a single chain like Solana, but then it's much easier to just scale the consensus and data availability throughput of a single protocol, whereas, like for execution, I think at some point you have to basically go into a sharded model.
00:04:36.211 --> 00:04:44.684
The core innovation that Celestia uses is something called data availability sampling, and that's an innovation that came around in 2019, actually 2018, I think.
00:04:45.404 --> 00:05:06.380
It basically describes a way that you can increase the size of a block in a specific blockchain without actually making it harder to verify the chain.
00:05:06.401 --> 00:05:10.954
Finally, you know, the age-old debate in Bitcoin or even Ethereum of, like, how can we scale the system while staying decentralized, was solved in a sense by that innovation.
00:05:10.954 --> 00:05:29.430
You can think of Celestia as this very, very scalable base layer blockchain on which someone can launch their own app that can have full stack customizability, meaning that they're not just writing a smart contract, they can define the sequencing rules of their application, they can define the virtual machine or like anything like specific about the state machine.
00:05:29.430 --> 00:05:45.160
They have, like just way, way, way more control over the thing that they're building, versus, like, if you build on something like Ethereum L1 or Solana L1, a lot of those choices have been made for you, and so that's kind of what the modular stack is about.
00:05:45.160 --> 00:05:57.492
To me, it's about kind of an evolution of the AppChain thesis that originated in Cosmos but, I would say, executed in a much more scalable kind of practical way.
00:05:58.100 --> 00:06:01.786
I have so many questions but before I ask one just observation.
00:06:01.786 --> 00:06:15.514
I'm seeing in the Solana ecosystem is just this natural progression to modularity, and it's even being forced because there are just other people who want to take the open source code that is Solana and experiment with it and do different things.
00:06:15.514 --> 00:06:21.752
You have the Solana chain, which is powered by the guys that built Zen, the Zen protocol on ETH.
00:06:21.752 --> 00:06:25.348
They did simple things like they removed the voting fees.
00:06:25.348 --> 00:06:28.024
They made it proof of work instead of proof of stake.
00:06:28.024 --> 00:06:29.990
So they've modified the consensus.
00:06:29.990 --> 00:06:35.050
And then they've modified some of the core economics of the network and some of this stuff was easy.
00:06:35.050 --> 00:06:39.410
The removing of fees was like one line of code getting deleted from the validator.
00:06:39.410 --> 00:06:42.387
But you can't just YOLO that onto mainnet when you're a $100 billion market cap asset, right?
00:06:42.387 --> 00:06:49.329
Like you can't make those experiments happen.
00:06:50.100 --> 00:06:52.605
And then there's another company that we're working with at ABK.
00:06:52.605 --> 00:06:53.947
That's in this AI space.
00:06:53.947 --> 00:07:00.339
They're doing some really interesting things; they need the validators to be GPU-enabled for AI, and they're also altering the consensus.
00:07:00.339 --> 00:07:22.088
So there's an actual hardware change, which requires some software changes to make it work on the validator, as well as a consensus change. And so, long term, we think there needs to be a modular validator, right, where you can strip out all these components to define how it behaves, just so you can accomplish the same thing you guys are accomplishing at Celestia, which is more customizability, more flexibility for the developers who want to do things with it.
00:07:22.608 --> 00:07:36.124
If some or all of this gets merged onto mainnet, then for mainnet Agave or Firedancer, it's just going to mean that you get more people contributing to it, because it breaks it down from a million lines of code in one repo that's really hard to navigate, to,
00:07:36.124 --> 00:07:49.182
Oh, I just have to know this one specific thing about consensus or about what kind of hardware can be used on the network, and so I think it's a real net benefit, regardless of what your designs are.
00:07:49.182 --> 00:07:50.024
On scalability, I guess, is what I'm saying.
00:07:50.024 --> 00:07:55.684
The question that came to my mind, though, for you was, and this is gonna sound dumb: what is data availability?
00:07:55.684 --> 00:07:59.192
Explain that to me as if I'm, like you know, 12.
00:08:00.480 --> 00:08:04.952
It's a question that sounds dumb but is a very, very difficult question to answer.
00:08:07.108 --> 00:08:09.339
And it sounds dumb because data availability sounds like an intuitive term.
00:08:09.600 --> 00:08:12.026
It's like it's available, I can find it.
00:08:12.567 --> 00:08:13.930
Yeah, Is it stored somewhere?
00:08:13.930 --> 00:08:14.331
Kind of.
00:08:14.331 --> 00:08:18.091
So a lot of people mistake data availability for data storage.
00:08:18.091 --> 00:08:19.887
That's totally a misnomer.
00:08:19.887 --> 00:08:21.781
So data availability is completely different to data storage.
00:08:24.343 --> 00:08:30.910
The analogy I like to use is a difference between like a library and like a newspaper or like a news network.
00:08:30.910 --> 00:08:35.374
Right, a library is something where you want to store
00:08:35.374 --> 00:08:49.955
a book or an article or whatever, so in the future you can go and, like, pull that data, that article, that book off the shelf and read it again because you need it for some reason.
00:08:49.955 --> 00:08:51.881
Right, so that's a library, and that's data storage.
00:08:51.881 --> 00:09:11.206
A news network or data availability is something where there's some very vital information that you need to distribute and disseminate to a network of people who care about it and they want to make sure that that information has been shared and distributed and is public, and that's what data availability is.
00:09:11.206 --> 00:09:23.440
So it's a publishing network, and that's important for the security of any kind of verifiable computer, like a blockchain. Think about what's happening, right?
00:09:23.399 --> 00:09:30.952
You've defined a state machine, like the execution logic or the application logic for that computer.
00:09:30.952 --> 00:09:33.408
That application, right? Everyone knows what that is.
00:09:33.408 --> 00:09:39.352
Now everyone needs to also agree on what are the inputs that go into that state machine.
00:09:39.352 --> 00:09:41.826
Right, you have to have both of those things.
00:09:41.826 --> 00:09:44.662
The inputs need to be ordered and they also need to be public.
00:09:47.303 --> 00:09:51.267
Otherwise, me, as an independent observer, I can't actually know what's happening on this computer.
00:09:51.267 --> 00:09:52.847
And that's what blockchains are all about.
00:09:52.847 --> 00:09:58.312
They're all about transparency and verifiability by anyone, and that's what makes them so powerful.
00:09:58.312 --> 00:10:01.234
They're about proving things to each other.
00:10:01.234 --> 00:10:05.496
I can prove that I own this money and I'm sending it to you.
00:10:05.496 --> 00:10:11.761
So, anyway, data availability is that part.
00:10:11.761 --> 00:10:12.322
What are the inputs?
00:10:12.322 --> 00:10:18.984
And then the cool thing is the execution layer is the thing where you define the application logic and, because Celestia doesn't define that, you can define whatever you want.
00:10:18.984 --> 00:10:25.703
Whereas, like, Ethereum gives you flexibility, but it says this is sort of the programming environment you have to use, which is the EVM. It's kind of like having to use a Windows computer, but maybe you're a Mac person, right?
00:10:44.080 --> 00:10:45.567
and consensus, like you've got DA and consensus happening on Celestia.
00:10:45.567 --> 00:10:47.034
So it's getting the information, distributing it to the network.
00:10:47.034 --> 00:10:50.144
The network comes to agreement that it's true, this piece of data, this information is correct.
00:10:50.144 --> 00:10:59.576
And then whatever virtual machine, it could be EVM, it could be SVM, it could be the OmarVM, like whatever, just make an OmarVM.
00:10:59.576 --> 00:11:02.360
That is where the code gets executed.
00:11:02.360 --> 00:11:05.782
That's writing the thing to the ledger, right.
00:11:05.782 --> 00:11:07.903
But the ledger, like where does the ledger live?
00:11:07.903 --> 00:11:10.724
Does it live on the VM or does it live on the consensus?
00:11:10.724 --> 00:11:13.144
Like where's that database actually stored?
00:11:13.946 --> 00:11:18.648
That data is published right and it's stored on the full nodes of the network.
00:11:18.648 --> 00:11:27.412
However, there are archival nodes and then there are normal full nodes, and the full nodes only store all the data for 30 days.
00:11:27.412 --> 00:11:29.852
Anything after that they prune.
00:11:30.653 --> 00:11:32.214
This is on the Celestia network.
00:11:32.953 --> 00:11:51.355
Yes, and this is where another misconception comes in. A lot of people are stuck on thinking that data availability is data storage, and they're like, well, that data is not very available if it's gone after 30 days, and again, that's missing the point.
00:11:51.355 --> 00:12:03.190
It's like a newspaper: after a day or even a week, it's not going to print the same newspaper and distribute it around, right? It already has distributed that information, it's already been disseminated, and there are probably places that are storing it, like a library. There are libraries that keep archives of all the news, right?
00:12:03.190 --> 00:12:11.071
There are also storage solutions, whether those are indexers, or you could put all the data on Filecoin or Arweave or whatever.
00:12:11.071 --> 00:12:19.285
Or, you know, there are data services, like, let's say, Blockworks, analytics companies, block explorers, or just people running archive nodes.
00:12:19.547 --> 00:12:23.561
My own ambition is to set up my own archive node here in my office.
00:12:23.561 --> 00:12:28.509
I haven't gotten around to it yet, but that way I'll just have an answer of, like, where's the data stored?
00:12:28.509 --> 00:12:30.813
Well, it's stored literally like I have it right here.
00:12:30.813 --> 00:12:34.710
But anyway, hopefully that kind of answers the question of like where the data is.
00:12:35.221 --> 00:12:37.506
The interesting thing about Celestia, too, is the architecture.
00:12:37.506 --> 00:12:52.234
This is why it's very different if you build on it: you design everything completely differently, and one of those differences is that all the data in Celestia is committed.
00:12:52.234 --> 00:13:24.405
Each block is committed to in a special Merkle tree organized by namespace, and so if I want to know everything that's ever happened in a namespace, or if anything happened in this block, like any transactions were sent in that namespace, I can easily search for it.
00:13:24.405 --> 00:13:30.259
Someone can easily prove to me here's all the data, or there was no data in that block relevant to that namespace.
00:13:31.114 --> 00:13:40.158
So, anyway, that's one of the ways that you can kind of like query the blockchain and like get all the data that's relevant to you without being overwhelmed by everything else.
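The namespace idea Nick describes can be sketched as a toy data structure. This is hypothetical illustration only: a plain dictionary standing in for Celestia's actual namespaced Merkle tree, which additionally lets a node cryptographically prove both the returned data and its absence.

```python
# Toy sketch of namespaced block data (not Celestia's real structure):
# each blob published to a block is tagged with a namespace, and a
# query for a namespace returns exactly the relevant blobs -- possibly
# an empty list, which is itself a meaningful answer ("nothing for
# your app in this block").
from collections import defaultdict

class ToyBlock:
    def __init__(self):
        self._by_namespace = defaultdict(list)

    def publish(self, namespace: bytes, blob: bytes) -> None:
        self._by_namespace[namespace].append(blob)

    def query(self, namespace: bytes) -> list:
        return list(self._by_namespace[namespace])

block = ToyBlock()
block.publish(b"my-rollup", b"tx batch 1")
print(block.query(b"my-rollup"))   # only the data relevant to my app
print(block.query(b"other-app"))   # empty: nothing for me in this block
```

The point of the real namespaced Merkle tree is that both answers come with a compact proof, so you don't have to trust whoever served the query.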
00:13:40.970 --> 00:13:47.629
Omar, I'm curious, like as somebody who's built network extensions using Solana, like how can you apply this to what you guys are building?
00:13:47.629 --> 00:13:52.054
Or, just generally speaking, how do you think about some of these problems that Celestia is solving?
00:13:52.436 --> 00:14:00.255
One of the important things about DA is like it makes the process of running a roll up cheaper.
00:14:00.255 --> 00:14:05.991
Ethereum rollups like Eclipse use Celestia, for instance.
00:14:05.991 --> 00:14:09.000
I find that to be very interesting and particularly useful.
00:14:09.000 --> 00:14:11.629
Actually, I was going to turn this into a question.
00:14:11.629 --> 00:14:19.833
Can you actually go and, like, explain a little bit about how that process works, and why, you know, Celestia versus...
00:14:19.833 --> 00:14:24.201
Technically, Ethereum also has blobs as well.
00:14:24.201 --> 00:14:36.076
One, how does it make it cheaper to operate a rollup and post data to wherever you want to post it, like L1 on Ethereum, and two, why Celestia versus blobs?
00:14:37.118 --> 00:14:37.658
Great question.
00:14:37.658 --> 00:14:43.530
So data availability is like one of the bottlenecks for scaling.
00:14:43.530 --> 00:14:48.792
I would say, like, broadly, there are two bottlenecks for scaling applications, like decentralized applications.
00:14:48.792 --> 00:14:51.177
I mean, they're basically the two things that I described.
00:14:51.177 --> 00:14:55.024
One is execution and the other is like data availability and consensus.
00:14:55.024 --> 00:15:04.317
So the execution part scaling is hard, because it's like my little computer that I've implemented can only process so many transactions per unit time, right.
00:15:04.317 --> 00:15:11.562
And then if there's, you know, 10 million people trying to use this thing, I can't process all of them fast enough.
00:15:11.562 --> 00:15:13.274
And all of a sudden there's a congestion fee, right.
00:15:13.274 --> 00:15:20.299
You have to price that resource so that you can actually, you know, allocate it efficiently and fairly, right.
00:15:20.299 --> 00:15:35.461
So sometimes, you know, on rollups on Ethereum, there are spikes in fees, not because the data availability is expensive, but because there are too many people trying to buy this meme coin, let's say, and the rollup just can't process it fast enough.
00:15:36.524 --> 00:15:47.919
The other bottleneck is data availability, which is how much block space is there, like how many transactions can we publish and verify were published per unit time, essentially?
00:15:47.919 --> 00:15:53.113
So the way that we measure data availability, throughput is just bytes per second.
00:15:53.113 --> 00:16:03.018
So, like, Celestia's throughput as of tomorrow will be 1.33 megabytes per second, which honestly sounds really small.
00:16:03.018 --> 00:16:04.863
It's actually quite a bit of throughput.
00:16:04.863 --> 00:16:13.870
I'm trying to think of what the equivalent TPS is. I don't actually know off the top of my head, but probably like 10,000 or something like that.
00:16:13.870 --> 00:16:18.423
So like you can actually do a lot of throughput on that scale of DA.
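As a rough sanity check on the TPS ballpark above, here's the back-of-envelope arithmetic. The average transaction size is my own assumption (roughly what a simple transfer weighs), not a number from the episode.

```python
# Convert a DA throughput figure into an approximate TPS ceiling.
DA_THROUGHPUT_BYTES_PER_SEC = 1.33 * 1024 * 1024  # 1.33 MiB/s, as quoted above
AVG_TX_SIZE_BYTES = 130  # assumed: ballpark for a simple transfer

tps = DA_THROUGHPUT_BYTES_PER_SEC / AVG_TX_SIZE_BYTES
print(f"~{tps:,.0f} transactions per second")
```

With a ~130-byte transaction this lands right around the "probably like 10,000" figure mentioned above; a larger average transaction would proportionally lower the ceiling.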
00:16:18.423 --> 00:16:24.581
It's kind of obvious why it's hard to scale execution, right? Because you have to, you know, make your computer go faster and faster.
00:16:24.581 --> 00:16:28.017
That's why, you know, one of Solana's things is that they don't merklize the state.
00:16:33.110 --> 00:16:36.342
Well, I just want to put, like, a fine point on this for people listening who might not be engineers, but it really does boil down to physics.
00:16:36.342 --> 00:16:46.102
Like you have a piece of hardware, a physical computer, that is metal, right? And to steal Meltem Demirors' tagline that I love, it's where bits meet atoms, right?
00:16:46.102 --> 00:16:52.062
It's like there's only so much space in that piece of hardware for the bits to flow through.
00:16:52.062 --> 00:17:03.966
And you may realize it or not when you're using a computer, but when you start creating, like, a high-throughput system that's getting, you know, tens of thousands, hundreds of thousands of transactions, it just runs out of physical space, like it can't.
00:17:03.966 --> 00:17:10.521
It's not even about storage, it's literally like the pipes that things go through are not big enough, and that's the problem that Nick is talking about.
00:17:10.521 --> 00:17:15.705
And that's the thing with the Bitcoin debate you brought up, right? Like they just want to increase block sizes.
00:17:15.806 --> 00:17:30.800
Well, you know, if you increase block sizes, then those pipes need to be bigger, and that means the machines need to be bigger, and the thesis is that because the machines need to be bigger, they're more expensive and there are fewer of them, and therefore you become less decentralized, because the more machines you have, the more decentralized you are.
00:17:30.800 --> 00:17:34.766
So, like, the whole ecosystem is trying to get there, in theory, though different people take different approaches.
00:17:34.766 --> 00:17:43.613
Obviously, Solana takes a very different approach, because their machines are relatively huge, whereas you can supposedly run ETH on, like, a Raspberry Pi if you wanted to or something.
00:17:43.613 --> 00:17:46.076
I'm very visual, right?
00:17:46.076 --> 00:17:50.057
So if you like, look at a piece of hardware and just imagine like the things flowing through it.
00:17:50.057 --> 00:17:51.077
It just runs out of space.
00:17:51.499 --> 00:17:55.305
It's easy for people to forget, and when you're using a blockchain, what is a blockchain?
00:17:55.305 --> 00:18:06.913
It's just a bunch of different computers around the world talking to each other, but, like the computers, it's still a physical thing, you know, like it's actually happening somewhere, it's not just like this esoteric thing in the sky.
00:18:06.913 --> 00:18:12.904
And those computers are the things that constrain how much the overall network can process.
00:18:12.904 --> 00:18:20.612
So, anyway, execution is constrained by, like, how much compute do you have available to process those transactions?
00:18:20.612 --> 00:18:25.202
And data availability is kind of constrained by a few different things.
00:18:25.202 --> 00:18:32.295
One is just like the bandwidth, like how much do the nodes in the network, how much data can they, you know, send and receive to each other per unit time?
00:18:32.295 --> 00:18:48.760
But there's also this other element, right, which is, when you're scaling a distributed system, you can be stuck in this paradigm of, okay, every node has to redo all of the work of every other node.
00:18:48.760 --> 00:19:04.874
And that's kind of what traditional monolithic blockchains do. Like, every time there's an Ethereum block, every single Ethereum node downloads the entire block and re-executes it itself, right?
00:19:04.874 --> 00:19:13.915
And when you have a system like that, you can only process however much the slowest computer can process, so that it can actually keep up with the network.
00:19:13.915 --> 00:19:17.593
So that's why every blockchain has, like, some minimum specs.
00:19:17.593 --> 00:19:22.652
You need, like, this much disk, you need this much RAM, you need this much bandwidth.
00:19:22.652 --> 00:19:45.891
Whatever. The beauty of rollups and data availability sampling is that it breaks free from those constraints. So, like, rather than every node redoing everything, what a rollup does is it parallelizes the verification of the execution part, so I don't have to recompute every transaction that this rollup sequencer did, because he can just hand me a proof that he did it correctly.
00:19:45.891 --> 00:19:47.472
So then I just verify the proof.
00:19:47.472 --> 00:19:50.977
I don't have to verify that entire block or series of blocks.
00:19:51.718 --> 00:19:59.853
And the same thing is true of data availability; that's what data availability sampling does.
00:19:59.853 --> 00:20:08.622
So, typically, if I had to verify that the data behind a set of blocks is available, I would have to download each one of those blocks.
00:20:08.622 --> 00:20:10.694
There's no other way for me to do it.
00:20:10.694 --> 00:20:21.138
If it's, let's say, Solana, right, then I have to have a gigabit connection, I have to be using a shitload of bandwidth to be verifying the DA of that chain.
00:20:21.138 --> 00:20:35.719
And obviously, if you want to get to, like, you know, billions of users on chain in the future, then to verify that, you're going to have to have, like, Google-level pipes of bandwidth.
00:20:35.719 --> 00:20:37.522
So obviously that doesn't really scale.
00:20:38.830 --> 00:20:50.050
And the beauty of data availability sampling is that it provides a mechanism whereby you can verify that the data is available probabilistically, and kind of in parallel with everyone else in the network.
00:20:50.732 --> 00:21:15.258
So rather than me downloading the whole block, I download a very small portion of it, like a few kilobytes or maybe a megabyte or whatever of data, and everyone else downloads their own little sample, random sample of a few kilobytes or what have you, and collectively we can each have a statistical guarantee that that block was fully published and is fully available.
00:21:15.258 --> 00:21:24.987
So now, even if we had, you know, gigabits of data throughput, like, that's the amount of data flowing through this network.
00:21:24.987 --> 00:21:37.542
Even if I just have my little phone and I'm somewhere really remote with like a you know few megabits connection, I can easily verify every single block.
00:21:38.064 --> 00:21:45.510
The beauty of data availability sampling is that it unlocks the ability to scale block sizes independently of the nodes that need to verify the chain.
00:21:45.510 --> 00:21:51.864
That's the only way I think we can ever have blockchains that truly scale without becoming centralized.
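The statistical guarantee Nick describes can be sketched with a simplified model. This ignores the details of the 2D erasure coding; the roughly-25% threshold and the sample counts here are illustrative assumptions, not Celestia's exact parameters.

```python
# If a block producer withholds a fraction f of a block's chunks, each
# uniform random sample hits an available chunk with probability (1 - f),
# so the chance that all k samples look fine despite the withholding is
# (1 - f) ** k.
def prob_withholding_undetected(f: float, k: int) -> float:
    """Probability that k random samples all succeed even though a
    fraction f of the block's chunks is being withheld."""
    return (1 - f) ** k

# With erasure coding, roughly a quarter of the chunks must be withheld
# to make a block unrecoverable, so take f = 0.25 and watch how fast
# confidence grows with the number of samples:
for k in (10, 30, 100):
    print(f"{k:3d} samples -> miss probability {prob_withholding_undetected(0.25, k):.2e}")
```

Roughly speaking, each additional sample cuts the adversary's chance of fooling you by another constant factor, which is why a light client needs only a handful of tiny downloads per block to get a strong guarantee.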
00:21:53.990 --> 00:22:00.355
So can I just summarize and Omar and Nick correct me if I'm wrong here but essentially it does still boil down to like the hardware.
00:22:00.355 --> 00:22:07.826
You need fewer nodes on the network to get the same security benefits, because of the sampling piece, right?
00:22:07.826 --> 00:22:13.477
Or you can still have more nodes, but they can be smaller, don't have to be, like, as high-powered, yeah.
00:22:13.497 --> 00:22:16.321
So it still boils down to the hardware, except that,
00:22:16.321 --> 00:22:20.980
even in Celestia, these light nodes I'm talking about that just sample the blocks,
00:22:20.980 --> 00:22:35.481
there are some minimum hardware and bandwidth requirements we have, but they're really, really small. Like, I could pull them up in a second, but they're orders of magnitude lower than a full node.
00:22:35.481 --> 00:22:37.289
So there still are some hardware constraints.
00:22:37.289 --> 00:22:51.201
But the point is that by this magical process of the sampling you can make those requirements much smaller, first of all, and then, second of all, they don't scale proportionally to the block size.
00:22:53.788 --> 00:23:06.095
So let's say we 100x the block size, the amount of work and node requirements that you need as a participant does not grow by 100x, whereas if it's Ethereum or Solana, it basically does.
00:23:06.095 --> 00:23:11.491
The node requirements scale linearly with the amount of transactions you want to process, right?
00:23:11.673 --> 00:23:21.385
Yeah, I can definitely verify that that has happened. Obviously you can, and the thing is that that's not absolutely true, because obviously, in a monolithic system, you can still optimize things, you know.
00:23:21.385 --> 00:23:39.718
You can still optimize it to make more efficient use of the resources that you have, but there's always a limit to that and ultimately, once you've exhausted that optimization, then it's like okay, well, you want to process twice as many transactions, you need twice the bandwidth, twice the compute, otherwise, you know, it's just not physically possible.
00:23:39.718 --> 00:23:42.611
And the beauty of Celestia is that you don't.
00:23:42.611 --> 00:23:46.521
You can have, you know, a bigger blockchain.
00:23:46.521 --> 00:23:50.589
You can 10x the block size and you don't have to 10x the node requirements.
00:23:50.589 --> 00:23:52.991
But you do, and we can talk more about this.
00:23:52.991 --> 00:24:00.897
You do need more overall light nodes participating in the sampling process, because you're kind of parallelizing it.
00:24:00.897 --> 00:24:01.961
There's no free lunch.
00:24:01.961 --> 00:24:07.259
There still has to be more people doing work to verify a block.
00:24:07.259 --> 00:24:13.792
That's like 10 times the size, but at least it's not you individually having to do 10 times more work.
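That "no free lunch" trade-off can be put in rough numbers. All parameters below are illustrative assumptions, not Celestia's real values: per-node work stays fixed, but the number of light nodes needed for collective coverage grows with the block.

```python
# Rough model of the trade-off: per-node download stays constant as
# blocks grow, but the network needs proportionally more light nodes
# sampling so that, collectively, enough of the block gets covered.
SAMPLES_PER_NODE = 16    # fixed per-node work (assumed)
CHUNK_SIZE_BYTES = 512   # assumed chunk size
COVERAGE_FACTOR = 2      # collective over-sampling margin (assumed)

def light_nodes_needed(block_size_bytes: int) -> int:
    """Light nodes required so that total samples cover the block
    COVERAGE_FACTOR times over, at SAMPLES_PER_NODE each."""
    chunks = block_size_bytes // CHUNK_SIZE_BYTES
    return COVERAGE_FACTOR * chunks // SAMPLES_PER_NODE

for mb in (2, 20, 200):  # 10x the block -> ~10x the nodes, same per-node work
    print(f"{mb:4d} MB block -> {light_nodes_needed(mb * 1024 * 1024):,} light nodes")
```

The per-node burden (16 small samples, in this sketch) never changes; what scales with the block is the headcount of participants, which is exactly the point Nick is making.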
00:24:14.734 --> 00:24:29.881
One thing I've always thought about crypto in general is that there are going to be unintended positive consequences, for lack of a better word, innovations that come out of it, right? Like around cryptography, for example, like zero-knowledge proofs.
00:24:29.881 --> 00:24:39.749
I don't think there would have been nearly the amount of investment into ZKP tech if crypto hadn't existed, and I think it has other applications. And I think these same problems that we're talking about apply to AI.
00:24:39.749 --> 00:24:46.510
The other, like, most innovative, exciting thing happening in technology, you know, next to crypto, is AI.
00:24:46.510 --> 00:24:59.601
I think we can all agree on that, and they have the same problem: ever-increasing demand for compute and GPUs and power and throughput, and, like, needing to network these things together.
00:24:59.601 --> 00:25:12.895
You know, that's why you have, like, a Meta building a $65 billion data center the size of Manhattan, right? Because they need all those machines in the same room to coordinate together, literally wired up together, because of the bandwidth and throughput challenges.
00:25:12.895 --> 00:25:14.840
It's kind of all the same problem.
00:25:15.301 --> 00:25:24.362
And what's interesting, the reason I bring this up, what makes me think about this, is what you were saying about how Celestia solves the problem in one way, right? And Solana, you know, for better or for worse.
00:25:24.362 --> 00:25:30.141
You know, Toly is like, we can build this monolithic, single state machine and whatever.
00:25:30.141 --> 00:25:31.703
Fuck all you guys, we're going to do it this way.
00:25:31.703 --> 00:25:33.125
And that's not his attitude really.
00:25:33.125 --> 00:25:34.184
He, like, supports everybody.
00:25:34.244 --> 00:25:43.811
But what's interesting is, I think some of the optimizations they're making, the things they're having to do, like with Firedancer, for example, that's going to bleed into all the other tech too, and vice versa.
00:25:43.811 --> 00:25:56.853
Right, like I think there's just so much cross-pollination that can come from all this open-source software that's getting built. I mean, I know it's kind of a meme, but it really does accelerate what we're able to do on the internet and with this technology, period, which is kind of interesting.
00:25:56.853 --> 00:26:03.282
I think, obviously not the people on this call, but I do feel like our ecosystem kind of misses the boat on that a lot.
00:26:03.282 --> 00:26:12.722
You know, they're all arguing about ETH or whatever, and it's like, guys, come on, this is not pushing the envelope, the conversation we're having, you know, in that way.
00:26:16.190 --> 00:26:17.010
Omar, do you have any thoughts on that?
00:26:17.010 --> 00:26:19.154
Or just generally on what Nick was talking about?
00:26:19.154 --> 00:26:24.704
Yeah, I mean, in general, you know, even within DA there's somewhat of a tribalism also, right?
00:26:24.704 --> 00:26:31.804
Like you have people who want to use EigenDA, you have people who use Celestia, you have people who just post blobs.
00:26:31.804 --> 00:26:42.699
We're all really trying to do the same thing: a super scalable system in the cheapest possible way, with the most amount of customizability for an application.
00:26:42.699 --> 00:27:03.523
I think oftentimes what we see is just, you know, obviously, like, we come from the Cosmos ecosystem as well, and we always believed in the ability to customize things to the fullest extent, and obviously that's not possible on Solana, so that's why we built a Solana network extension for that specific purpose.
00:27:03.523 --> 00:27:14.393
Oftentimes, as Alex was saying, there is not a ton of cross-pollination happening across technology stacks, and I do want that to change, I think.
00:27:14.413 --> 00:27:36.044
But I think it is changing as well between the Solana and, sort of, the modular ecosystem, somehow, maybe eventually. I think that's one of the promises of modularity: that in theory you could, you know, mix and match a bunch of different technologies from a bunch of different ecosystems and kind of get the best of both worlds.
00:27:36.044 --> 00:27:36.811
Or like you don't have to.
00:27:36.811 --> 00:27:51.961
There's this kind of notion that we have at Celestia of the monolithic L1 loop, where it's basically like each time someone had a new idea, like, oh, let's do Move as a new execution environment.
00:27:51.961 --> 00:27:53.361
Or you know, I don't know this changed consensus and let's use hot stuff.
00:27:53.361 --> 00:28:06.134
Or, like, we want to do Narwhal and separate the mempool and transaction propagation from the actual block construction, and stuff like that. All these really cool innovations.
00:28:06.134 --> 00:28:18.663
But every time someone had these ideas, they had to go out and build an entirely new chain, and it just seems like a waste. And then it becomes isolated in their own little stack that no one else can use.
00:28:19.130 --> 00:28:32.030
And so if we build things in a more modular way, the hope is that we could actually reuse those components across different things, mix and match them and experiment with them in a way that accelerates innovation in multiple ways.
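The mix-and-match idea Nick describes can be sketched as swappable interfaces. This is a toy illustration of the design pattern, not any real framework's API; all class and function names here are hypothetical:

```python
from typing import Protocol

# Toy sketch of "modular" chain components: consensus and execution are
# separate interfaces, so trying a new execution environment (a Move VM,
# say) or a new consensus (HotStuff-style) doesn't mean building a whole
# new chain -- just another class implementing the same interface.

class Consensus(Protocol):
    def order(self, txs: list[str]) -> list[str]: ...

class Execution(Protocol):
    def apply(self, state: dict, tx: str) -> dict: ...

class FifoConsensus:
    """Trivial stand-in: orders transactions first-in, first-out."""
    def order(self, txs: list[str]) -> list[str]:
        return list(txs)

class KvExecution:
    """Trivial stand-in: interprets 'key=value' transactions."""
    def apply(self, state: dict, tx: str) -> dict:
        key, _, value = tx.partition("=")
        return {**state, key: value}

def run_block(consensus: Consensus, execution: Execution, txs: list[str]) -> dict:
    """Order transactions, then apply them to an empty state."""
    state: dict = {}
    for tx in consensus.order(txs):
        state = execution.apply(state, tx)
    return state

print(run_block(FifoConsensus(), KvExecution(), ["a=1", "b=2"]))
```

Reusing a component across stacks then means reimplementing nothing but the glue, which is the "escape the monolithic L1 loop" argument in miniature.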