What is the future of open-source development and AI Superintelligence? This week on The Index, we sit down with Dr. Steven Waterhouse, Founder & CEO of Nazare Ventures, to explore the challenges and opportunities within decentralized AI frameworks.
Join us as we venture into the world of AI infrastructure that’s pushing beyond traditional applications. Dr. Waterhouse will discuss the essential shift towards innovative models that move away from the typical AI tech stack, emphasizing decentralized solutions that merge cost-effectiveness with scalability.
Discover the expansive potential of open-source AI and how it mirrors the monumental shift away from proprietary systems seen during the dot-com era.
As we consider the broader consequences of AI, we examine the societal and ethical dilemmas posed by the rapid advancements in AI technologies. Tune in to gain insights from the frontier of AI innovation and decentralization, a conversation particularly relevant for developers in the open-source community.
Nazare Ventures: https://nazare.io/
Show Links
The Index
X Channel
YouTube
Host - Alex Kehaya
Producer - Shawn Nova
00:00 - Exploring AI Ventures With Dr. Waterhouse
08:53 - Revolutionizing AI Infrastructure and Decentralization
12:39 - Revolutionizing Open Source AI Infrastructure
19:04 - Decentralized AI and Crypto Innovations
28:36 - The Future of AI Superintelligence
WEBVTT
00:00:00.300 --> 00:00:03.111
Hey everyone, it's Alex Kehaya from the Index Podcast.
00:00:03.111 --> 00:00:09.345
I want to tell you about Mantis, a groundbreaking platform that's simplifying the way we interact across blockchains.
00:00:09.345 --> 00:00:13.490
If you're a developer or just into DeFi, you'll want to pay attention.
00:00:13.490 --> 00:00:25.474
Mantis enables trust-minimized transactions across different chains, letting you trade or execute actions seamlessly while getting the best possible outcome, all without the usual complexities.
00:00:25.474 --> 00:00:35.709
Imagine being able to move assets and settle transactions across blockchains easily, with maximum value extraction, all while staying secure and decentralized.
00:00:35.709 --> 00:00:38.963
That is what Mantis is bringing to the table.
00:00:39.685 --> 00:00:48.447
Mantis is an official sponsor of the Index Podcast, and their founder, Omar, and I regularly host a new live stream series on X called Everything SVM.
00:00:48.447 --> 00:01:01.606
We have these live streams weekly, and if you want to keep up with what's happening in the Solana ecosystem, especially as it relates to the new innovative deployments of the Solana Virtual Machine, you should tune into this live stream.
00:01:01.606 --> 00:01:07.736
Check them out at mantisapp and follow them on X at Mantis.
00:01:07.736 --> 00:01:15.221
M-A-N-T-I-S. At the Index,
00:01:15.221 --> 00:01:18.709
we believe that people are worth knowing, and we thank Mantis for enabling us to tell the stories of the people who are building the future of the internet.
00:01:18.709 --> 00:01:20.093
We'll see you on the other side.
00:01:37.944 --> 00:01:41.227
Welcome to the Index Podcast, hosted by Alex Kehaya.
00:01:41.246 --> 00:01:56.477
Plug in as we explore new frontiers with entrepreneurs and builders. And welcome to the Index.
00:01:56.477 --> 00:02:07.548
This week, I'm excited to be joined by Dr. Steven Waterhouse, founder and CEO of Nazare Ventures, an investment firm focused on early-stage companies in AI infrastructure.
00:02:07.548 --> 00:02:16.461
And if you've been a listener of this show before, you've actually heard me talk about Dr. Steven Waterhouse, because he was the CEO of Orchid Labs when I was there.
00:02:16.461 --> 00:02:21.497
It's really where I got my start in crypto, so super happy to have you here and thanks for coming on the show.
00:02:21.497 --> 00:02:21.699
Thanks.
00:02:21.722 --> 00:02:23.516
Alex, great to be on the show and great to see you again.
00:02:26.248 --> 00:02:34.131
Yeah, it's good to see you and I'm really interested to talk more about Nazare and your new venture and kind of your thesis behind that.
00:02:34.131 --> 00:02:43.465
I know it focuses primarily on AI, it sounds like, but before we get into that, you know, we were kind of chatting about this before the show: you don't really need to be doing another venture firm.
00:02:45.948 --> 00:02:47.991
So I'm really curious to kind of dig into why you're doing this.
00:02:47.991 --> 00:02:52.015
What gets you out of bed in the morning to keep hustling and build this firm?
00:02:59.020 --> 00:03:01.181
Well, I mean without getting too existential around the need to have things to do.
00:03:01.181 --> 00:03:04.705
I didn't mention earlier, but I recently had another kid, so I have a six-month-old now.
00:03:05.104 --> 00:03:06.606
What? Congratulations.
00:03:06.887 --> 00:03:07.766
I have more things to do.
00:03:07.766 --> 00:03:08.247
I've got three now.
00:03:08.487 --> 00:03:19.777
That's the biggest project I have on my plate, but, as you mentioned, I was involved with Orchid, founder and CEO, and then, before that, started Pantera Capital in 2013.
00:03:19.777 --> 00:03:31.408
So I was trying to remember how many startups I've actually done, or kind of things I've gone from zero to one with. It's around 10 or so now, in a career of 30 years or so. And, as you said, like, why do another one?
00:03:31.408 --> 00:03:33.448
Why start a venture fund from scratch?
00:03:34.699 --> 00:03:40.068
Well, I learned a long time ago that I'm not very good at working for people, so that means I have to do my own thing.
00:03:40.068 --> 00:03:41.967
And then the question is well, why do something?
00:03:41.967 --> 00:03:53.728
And the other thing I learned is that it's really fun to work on interesting things and to work with interesting people, and that is what gets me up in the morning, that sort of thrill of discovering new things.
00:03:53.728 --> 00:04:14.292
And I've done a lot of things where I've started things, taking them from zero to one, and that's what I hope to offer people when I work with them as an investor: to help them, teach them things I know, work with them through the good times and the bad times. And I also, you know, have a seat on what's hopefully a rocket ship when we invest.
00:04:14.539 --> 00:04:24.249
As far as AI, I did my PhD in what, well, we didn't call it AI back then, it was just kind of machine learning and neural networks.
00:04:24.249 --> 00:04:34.672
I worked on recurrent neural networks and mixtures of experts and applied it to speech recognition in the 90s at Cambridge, and then I worked with NASA for a few years when I moved out to the States in '97.
00:04:34.672 --> 00:04:47.423
But I mostly turned my back on AI as a category, really, when internet advertising started, because I felt like it was really about kind of monetizing people and mining humans.
00:04:47.423 --> 00:04:50.891
Right, it was like how do you build a better advertising model?
00:04:50.891 --> 00:04:59.187
How do you show you're personalizing? But really you're just kind of trying to monetize, and that was interesting inasmuch as making money is interesting.
00:04:59.208 --> 00:04:59.588
But it wasn't.
00:04:59.588 --> 00:05:04.531
It wasn't the thing that got me super excited about AI. And then that came back later.
00:05:04.531 --> 00:05:18.809
So the last couple of years, when I've seen the growth of agents and things that are really empowering people... I mean, apart from the fact that it's just crazy science fiction, like maybe we build an AGI, it's wild, there are real practical applications of these things.
00:05:18.809 --> 00:05:23.286
It's growing faster than anything ever has, and so it's just hard to ignore.
00:05:23.286 --> 00:05:25.911
So I didn't ignore it, I got into it.
00:05:29.500 --> 00:05:36.425
There's a couple of stories I tell about you from when we first started at Orchid, and one of them is the first thing that you told me to do, and I benefited from this massively. It was literally my first day on the job.
00:05:36.425 --> 00:05:38.411
I had just been hired, but really this was kind of a test, I always felt like.
00:05:38.411 --> 00:05:46.274
You asked me to look at the top 50 tokens by market cap, and you weren't even specific about the deliverable.
00:05:46.274 --> 00:05:59.800
You were just like, go figure it out, research it. And I came back to you 48 hours later with a 50-page report, and you were blown away at how much research I produced.
00:05:59.800 --> 00:06:04.149
I learned so much from digging into the space in that way.
00:06:04.269 --> 00:06:05.754
I went to every single website.
00:06:05.754 --> 00:06:07.322
I read all their white papers.
00:06:07.322 --> 00:06:09.869
I did background research on the founders.
00:06:09.869 --> 00:06:12.624
I went really deep, literally within a week.
00:06:12.624 --> 00:06:18.687
I was like, oh, these are the themes of crypto: ZKPs, this is an area of research.
00:06:18.687 --> 00:06:21.081
There's different cryptography topics, scalability.
00:06:21.081 --> 00:06:22.923
This is back in 2016, 2017.
00:06:22.923 --> 00:06:29.274
So you remember what all the different narratives were back then and I was thinking about that this morning because I actually had to do that work.
00:06:29.274 --> 00:06:32.016
Like I actually did the work and read everything.
00:06:32.016 --> 00:06:37.701
But now I could just drop links into ChatGPT and be like, build me a report.
00:06:37.701 --> 00:06:40.069
It would just do it, you know, immediately.
00:06:40.069 --> 00:06:42.271
So it gives you superpowers.
00:06:43.038 --> 00:06:44.446
Yeah, it gives you superpowers.
00:06:44.446 --> 00:06:48.502
And I had a similar thing the other day.
00:06:48.502 --> 00:06:49.023
I wanted it to extract something.
00:06:49.023 --> 00:06:53.432
I was trying to figure out how to extract emails from a folder in Gmail.
00:06:53.432 --> 00:07:02.300
It was like, give me everyone I've sent emails to. And there used to be tools to do this, but then I realized that these tools just really weren't getting updated and developed anymore.
00:07:02.300 --> 00:07:10.980
Because after I sat down and figured out why it was so hard to do, I realized, well, I'll just ask ChatGPT to do it, and it did it.
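(For the curious: a minimal sketch of the kind of one-off script ChatGPT might produce for this, assuming the Gmail folder has been exported via Google Takeout as an mbox file; the filename is hypothetical.)

```python
import mailbox
from email.utils import getaddresses

# Assumes a Google Takeout export of the folder; the path is hypothetical.
mbox = mailbox.mbox("Sent.mbox")

recipients = set()
for msg in mbox:
    # Collect every To/Cc address on each message.
    for _name, addr in getaddresses(msg.get_all("To", []) + msg.get_all("Cc", [])):
        if addr:
            recipients.add(addr.lower())

for addr in sorted(recipients):
    print(addr)
```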
00:07:12.002 --> 00:07:14.127
There's always this theory of how do you train a great lawyer?
00:07:14.127 --> 00:07:26.622
Well, you make them do the grunt work for five, ten years, and then, at the end of that, they're a great lawyer or, you know, a great research analyst or a great financial analyst or even a great trader. And we are somehow losing that skill set.
00:07:26.622 --> 00:07:27.824
I don't know.
00:07:27.824 --> 00:07:31.021
I think time will tell as to whether or not there's something lost there.
00:07:31.021 --> 00:07:44.406
Whether there's some kind of inherent skill that you need and some instinct that you have, or whether we will, as you say, just kind of get superpowers and it'll be kind of irrelevant, like, who cares if you don't know how to do X or Y, it's just, you know, something you do.
00:07:45.428 --> 00:07:48.254
Yeah, like I can't do spreadsheets, I suck at like Excel.
00:07:48.254 --> 00:07:55.624
If you ask me to sit down and like build a financial model, I literally could not do it, which sounds funny because I'm an entrepreneur and I'm a CEO now, but it's not my strength.
00:07:55.624 --> 00:08:12.442
I needed to make a model the other day and I had no one to help me with it; for some reason people were just too busy and I just needed to get it done. So I literally used ChatGPT to write a Python script for me, and I know how to use the command line, and it debugged the whole thing for me.
00:08:12.442 --> 00:08:20.990
It took me about 20 minutes and I had a bunch of errors and stuff and then all of a sudden I had a spreadsheet on my desktop that the script just made.
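(Illustratively, the kind of throwaway script that workflow produces: a minimal sketch using only the standard library, with every number and filename invented; the actual generated script would of course vary.)

```python
import csv

# Toy revenue projection; all inputs are made up for illustration.
starting_revenue = 10_000.0   # month-one revenue
monthly_growth = 0.15         # 15% month-over-month
months = 12

rows = [("month", "revenue")]
revenue = starting_revenue
for month in range(1, months + 1):
    rows.append((month, round(revenue, 2)))
    revenue *= 1 + monthly_growth

# Write a CSV that any spreadsheet app can open.
with open("model.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```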
00:08:21.370 --> 00:08:25.242
I think everybody can agree that the pace of innovation is kind of crazy and it's really hard to keep up with.
00:08:25.242 --> 00:08:27.644
And I'm kind of curious what your thesis is.
00:08:27.644 --> 00:08:32.927
You primarily do seed, right, through Nazare? So you're looking at the earliest-stage ideas.
00:08:32.927 --> 00:08:34.129
What are you looking for?
00:08:34.129 --> 00:08:40.933
What are the areas of innovation that you like, tip of the spear, really exciting things that you think are going to have outsized outcomes?
00:08:42.014 --> 00:08:45.157
It's a bit counterintuitive, I guess, or a bit contradictory in some ways.
00:08:45.157 --> 00:08:55.205
I'm looking at not AI applications, so everything that's not an application.
00:08:55.205 --> 00:08:55.927
So it's basically infrastructure.
00:08:55.927 --> 00:09:03.461
In crypto, people are saying no more infrastructure, but in AI I'm saying, actually I think we need to take a look at the infrastructure aspects and I'm also not doing hardware.
00:09:03.461 --> 00:09:06.471
So it's kind of like everything in between, I mean, the squishy software stuff.
00:09:06.471 --> 00:09:14.615
So that includes things like fundamental model algorithms; I'm looking at alternatives to the current LLM stack.
00:09:14.615 --> 00:09:26.809
I'm looking at how to scale existing systems, and so that brings to bear things like, well, trying to find faster ways to train, more flexible ways to train, more flexible ways to execute.
00:09:26.809 --> 00:09:37.913
We're looking at decentralized architectures, which is kind of where the crypto side comes in: incentivized decentralized systems, whether it's inference or the new work that's happening in training, which is fascinating.
00:09:37.913 --> 00:09:39.044
I'm happy to chat more about that.
00:09:39.799 --> 00:09:52.888
Decentralized agents fit into this idea too: cooperative agents, small agents, the idea that a collection of lots of smaller agents would be more efficient and perhaps more cost-effective than one large monolithic agent.
00:09:52.888 --> 00:09:54.856
Edge computing is part of this framework.
00:09:54.856 --> 00:09:59.509
There's a lot of different things, and really, I sometimes call it MACA.
00:09:59.509 --> 00:10:00.912
It's like, Make AI Cheap Again.
00:10:00.912 --> 00:10:10.028
So a lot of my contemporaries and funds which are looking more at a sort of crypto AI, or even founders, you hear them talk about it.
00:10:10.028 --> 00:10:14.447
The first thing they say is, big AI is bad, therefore let's democratize it.
00:10:14.847 --> 00:10:22.655
And I've spent so long in crypto at this point that I can't invest on a thesis which is like, X is bad, therefore do Y, because no one cares.
00:10:22.655 --> 00:10:25.586
What matters is do these things make more money?
00:10:25.586 --> 00:10:26.368
Do they save more money?
00:10:26.368 --> 00:10:27.150
Are they more efficient?
00:10:27.150 --> 00:10:29.919
Do they do something that you couldn't have done before otherwise?
00:10:29.919 --> 00:10:36.692
And I think we're all maturing in this kind of crypto ecosystem to realize that you can't just trade on narratives.
00:10:36.692 --> 00:10:41.490
Well, you can, it's called meme coins, but you can't just go trade on that and invest on narratives.
00:10:41.490 --> 00:10:45.490
You have to have a more fundamental thought process.
00:10:45.650 --> 00:10:49.287
So that's been the idea so far.
00:10:49.287 --> 00:10:55.807
We did a first close and we've started to invest in a few companies, and they're scattered around the globe.
00:10:55.807 --> 00:10:59.143
We have one in LA, we have one in Zurich, we have one in Miami.
00:10:59.143 --> 00:11:08.214
I'm increasingly spending more time on the West Coast, because it's the sort of mecca of AI at this point again, or like it's a mecca of something again. San Francisco's back one more time.
00:11:10.058 --> 00:11:15.385
But there's a lot of stuff happening online where, like, I'm plugged into Discord servers all day and live on Twitter, et cetera.
00:11:16.179 --> 00:11:21.341
I really see your thesis and vision here too, and it's the thing that I've been most interested in.
00:11:21.341 --> 00:11:27.322
The comment you make about the benefits, the core value prop, is really accurate.
00:11:27.322 --> 00:11:35.375
I think that there are going to be some that happen to be better than the centralized systems, but that's a side effect of a better product.
00:11:35.375 --> 00:11:39.124
That happens with, like, DePIN-type things.
00:11:39.124 --> 00:11:43.240
Right, the go-to-market for the GPUs plays into some kind of token economic thing.
00:11:43.240 --> 00:11:48.020
For DePIN, I think it's got to come down to being better, faster, cheaper and stronger.
00:11:48.643 --> 00:11:54.725
I do think, though, that there is a real reason why you need token economics to achieve that right.
00:11:54.725 --> 00:12:02.307
So if you look at Elon Musk, he builds, in 19 days, a 100,000-GPU cluster in a data center.
00:12:02.307 --> 00:12:04.296
Nobody can do that right.
00:12:04.296 --> 00:12:11.388
How is your two-person dev team going to get that kind of compute power at any reasonable price?
00:12:11.388 --> 00:12:12.629
It's not impossible.
00:12:12.629 --> 00:12:17.988
They probably won't get that kind of compute power using some kind of DePIN network, but they could get pretty close.
00:12:17.988 --> 00:12:33.910
They could get something that's pretty powerful to get them off the ground, and that could be valuable: the idea that token-economics-powered networks can power open-source LLMs.
00:12:33.910 --> 00:12:39.065
That the world can then own, and that doesn't get centralized into Facebook, Google, X.
00:12:39.065 --> 00:12:41.938
From a narrative standpoint, I do think there is an existential risk there.
00:12:43.081 --> 00:13:13.910
I agree. In the last seven years of work, I'm not contradicting all the talks I gave and the narrative around that; I completely agree. The thing I've come to realize is that those things should be the punchline, not the driving force. Meaning, if we have open-source AI, if we have decentralized training systems, for example, let's just take training for a minute, because you mentioned it at the beginning, then it will fundamentally end up democratized.
00:13:13.910 --> 00:13:27.903
Because if it's open and it's easy for people to use and get access to these things, as you say, like the decentralized architectures, whether they are token-incentivized or not, the token incentives just get them up there faster and create a better economic model for it.
00:13:27.903 --> 00:13:33.643
And you know, it's kind of hard to pay thousands of people over Stripe. Not impossible, but it's hard to sign them up and so on.
00:13:33.643 --> 00:13:41.581
But you just kind of want to say, okay, I think it's kind of easier, like the financial systems are easier, and then there's some benefit to getting into it and then there's some benefit to staking it, et cetera.
00:13:41.581 --> 00:13:58.293
We understand how these things work, but really, fundamentally, it's about the open-source aspect of it and the fact that currently there's been such an incredible lock-in by NVIDIA, and that's really what you should focus on.
00:13:58.312 --> 00:14:47.311
The InfiniBand component for these systems, the networking stack and the software networking stack, is all so intertwined that the thing it most resembles to me is when I was working at Sun Microsystems in the early 2000s, when Sun was at that transition point, which it didn't really achieve, trying to go from this very sort of proprietary stack of chips, operating systems, applications and hardware, right? Because they ran the internet; those systems were the things that ran all the dot-com servers. And then all of a sudden it wasn't, and all of a sudden it was a collection of thousands of basically PC-grade servers connected over gigabit Ethernet. And that idea, that that would happen all of a sudden, was unthinkable during the dot-com boom.
00:14:47.311 --> 00:14:48.724
It's like that's just never going to happen.
00:14:48.724 --> 00:14:58.899
But there were these people murmuring around and people writing sort of articles about like how you could do this thing with these open source tools, and I think something similar will happen in the AI space and that's one of the things we're really focused on.
00:15:00.485 --> 00:15:00.905
Interesting.
00:15:00.905 --> 00:15:03.923
Have you heard of Bagel?
00:15:03.923 --> 00:15:04.825
Yeah, I know Bagel.
00:15:04.825 --> 00:15:12.163
Yeah, I'm an advisor to them and I've been working with them pretty closely and they're raising another round so we can talk about that.
00:15:12.163 --> 00:15:13.765
It might be too late-stage for you.
00:15:13.765 --> 00:15:16.929
I don't know if you do like Series A or not, but they're really interesting.
00:15:16.929 --> 00:15:30.557
And just by way of example, to talk about some of the stuff we're talking about here: they have a way to train both open-source or proprietary models with data that can be protected, like the source of the data, using FHE (fully homomorphic encryption).
00:15:30.557 --> 00:15:31.299
That's like their primary thing.
00:15:31.299 --> 00:15:37.861
The other piece that they do is around the economics, right, especially as it relates to open source LLMs.
00:15:37.962 --> 00:15:40.347
So I didn't realize what parameters were.
00:15:40.347 --> 00:15:44.452
I didn't really understand this stuff and I learned by talking to Bidhan about it.
00:15:44.452 --> 00:16:03.171
But basically, you can have a proprietary set of parameters that kind of defines how the model behaves when you feed it data, and if I want to append to those parameters, that behavior, or change it, I can attach my own parameters to the same model and modify them.
00:16:03.171 --> 00:16:04.355
This is how Bagel works, right.
00:16:04.355 --> 00:16:18.049
And then if someone goes and uses my version of that model with my parameters, they pay me, but the other guy, who started the model and made the original parameters that gave it
00:16:18.049 --> 00:16:26.587
whatever behavior it has, also gets paid. And so it's a way to compensate people for contributing to and improving the various use cases around this.
00:16:26.587 --> 00:16:29.725
The thing is, that is really, really interesting.
00:16:29.725 --> 00:16:33.702
It took a long time for me to wrap my head around that because I just didn't understand the basic concepts.
00:16:33.702 --> 00:16:35.379
You're talking about fine-tuning the weights, right?
00:16:35.458 --> 00:16:36.522
Yes, so you kind of yeah.
00:16:36.581 --> 00:16:42.000
Yeah, exactly, exactly, and that's just not something that's possible in current systems.
00:16:42.000 --> 00:16:43.264
Right, like they don't do that.
00:16:50.195 --> 00:16:50.636
They don't build that.
00:16:50.636 --> 00:16:51.198
The incentives aren't there.
00:16:51.198 --> 00:16:52.041
I like what those guys are doing.
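(To make the mechanics concrete: a toy sketch of the attach-and-attribute idea as described in the conversation, not Bagel's actual code or API. The served model is the base author's parameters plus an attached fine-tune delta, and each payment is split between the two contributors; the merge rule, the 70/30 split and all numbers are invented.)

```python
# Toy illustration of attaching fine-tuned parameters to a base model
# and paying both contributors; everything here is hypothetical.

base_weights = {"layer1": 0.52, "layer2": -0.13}   # original author's parameters
my_delta = {"layer1": 0.04, "layer2": 0.02}        # my attached fine-tune

# Conceptually a LoRA-style merge: serve base + delta.
served = {k: v + my_delta.get(k, 0.0) for k, v in base_weights.items()}

def settle(payment: float, base_share: float = 0.7) -> dict:
    """Split one inference payment between the base author and the fine-tuner."""
    return {
        "base_author": round(payment * base_share, 4),
        "fine_tuner": round(payment * (1 - base_share), 4),
    }

print(served)        # the merged parameters actually used for inference
print(settle(1.00))  # {'base_author': 0.7, 'fine_tuner': 0.3}
```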
00:16:52.041 --> 00:17:12.089
The big challenge with so-called crypto AI systems right now, and I speak as an investor in a few of them, is that there's quite an impedance mismatch between the crypto world and the AI world, in the sense that most people in AI think we're crazy, right, and they see crypto as mostly just a friction to getting people to use systems.
00:17:12.089 --> 00:17:23.289
That's largely because I think AI came up during a time where we were dealt and dealt ourselves some of our worst body blows.
00:17:23.289 --> 00:17:42.106
So if you think about the period of time when the generative AI systems were kicking in, like Stability AI coming online and the earlier versions of ChatGPT, it was just in the wake of FTX, and there were people exiting crypto to go work on AI startups and just being like, I'm done, I'm out.
00:17:42.106 --> 00:17:49.368
You know, I wouldn't have been in this industry, in the crypto industry, for 11 years if it had all just been scam artists.
00:17:49.368 --> 00:17:59.680
I mean, any bleeding-edge technology or industry, especially when you add money to it, is going to have its fair share of odd characters and scam artists, et cetera.
00:17:59.680 --> 00:18:07.261
But I've always been fascinated by the technology and, especially, by the people; some of the smartest people I know have been working on this.
00:18:08.395 --> 00:18:32.718
This kind of idea that we can fuse these worlds together has to be approached carefully, and I think one idea is, in a similar way to how, you know, Aave and Uniswap and the people at Compound and Curve, et cetera, built out DeFi as this really self-contained system that's now starting to build bridges to the existing financial world.
00:18:32.718 --> 00:18:35.405
You've got real world assets bridging over with Aave.
00:18:35.405 --> 00:18:39.405
You've got components of Uniswap being cited in JP Morgan papers.
00:18:39.405 --> 00:18:48.167
Maybe crypto AI needs to build something like DeFi as a self-contained system that people build models from, people build applications from.
00:18:48.167 --> 00:18:51.622
That has its own kind of economics, its own kind of system.
00:18:51.622 --> 00:19:03.663
Before we get too excited about trying to go fix big AI, because big AI doesn't care; they're going to do their thing and people are going to be fine, make lots of money, et cetera, et cetera.
00:19:04.365 --> 00:19:06.558
I'll give you another weird idea, though: the other area
00:19:06.558 --> 00:19:18.747
I think that, maybe, let's not call it couture, but kind of decentralized AI may come in. You and I did a lot of work on this in the Orchid days, but one of the real powers of decentralization is to route around systems of control.
00:19:18.747 --> 00:19:27.047
So it's this idea of disintermediating existing things, and some of the existing things may be like, okay, I want to do uncensored stuff.
00:19:27.047 --> 00:19:42.535
So now we have, like, Venice AI, we have a bunch of the guys at Nous Research building uncensored language models, or unbiased language models, and so that's one aspect where you can imagine a decentralized architecture is beneficial: you want to do stuff that you're just not allowed to do in your country, or some other reasons.
00:19:42.535 --> 00:19:46.061
But then another reason is regulatory arbitrage.
00:19:46.061 --> 00:19:49.909
It's the idea of, like, well, hang on a second, I'm just not allowed to do this.
00:19:49.909 --> 00:19:53.875
Why can't I do what I want to do?
00:19:54.436 --> 00:19:58.487
Now, the EU AI Act, and I live in Portugal, so I'm quite familiar with some of these rules.
00:19:58.487 --> 00:20:07.260
The EU AI Act, which is the European Union's AI bill, defines all generative AI startups or companies as high-risk.
00:20:07.260 --> 00:20:20.892
If you're a high-risk company, there is a stack of documents that you have to adhere to, including having a compliance officer, a manual, a this, a that. You can imagine: think crypto regulations, but gone crazy, right?
00:20:20.892 --> 00:20:39.598
So one of my predictions is that all these AI companies look a lot like crypto companies in a few years: they've got domiciles in Switzerland, they're trying to do open source, they're doing decentralized architectures, because otherwise they actually can't play, especially in Europe.
00:20:39.598 --> 00:21:01.411
The California bill is a bit better, but the regulators are having a field day in this space, applying a lot of misunderstanding and poor knowledge to an area that is so powerful and has so much capability and so much possibility to improve economies for countries, and yet they are running scared. Well, it's an interesting impact from a go-to-market standpoint.
00:21:01.431 --> 00:21:04.921
But I want to go back to your previous comment about, like, the DeFi of AI.
00:21:04.921 --> 00:21:10.316
I actually have thought about this too and I'm curious if you have any like specific examples of what that might look like.
00:21:10.316 --> 00:21:20.324
But in my head it's like the crypto-native or degen, you know, use case that can still get pretty big market caps because of the flow of capital. I just need to understand: what are some ideas?
00:21:21.315 --> 00:21:28.741
Well, so Bittensor, which is one of the first ones to really come at this from a sort of fair-launch model, and Morpheus is another one.
00:21:28.741 --> 00:21:32.567
I think there's some glimpses of what might be there.
00:21:32.567 --> 00:21:46.997
And now with some of these weird little crypto agents plugged into Twitter, these kinds of weird little stacks that people are building, there's a bunch of them running around on Solana and Base and other places, and they're like kind of autonomous.
00:21:47.258 --> 00:21:51.263
They're kind of not autonomous, there are people pulling the strings, but that's fine.
00:21:51.263 --> 00:21:56.115
They're almost like glimpses into a weird future, I think.
00:21:56.115 --> 00:22:00.280
More interestingly, and it's not that that's not interesting:
00:22:01.202 --> 00:22:06.490
there's this one guy, a guy called Plinius the Exploiter, I think his name is, on Twitter.
00:22:06.490 --> 00:22:13.539
He just raised some money from launching what looks like a meme coin.
00:22:13.539 --> 00:22:17.771
He's been publishing exploits on Twitter of all the different models.
00:22:17.771 --> 00:22:21.426
So he shows you how to jailbreak the AI.
00:22:21.426 --> 00:22:36.577
It's kind of like, you give it a particular prompt and it goes into, like, uncensored mode, or you manage to get around it, and then meanwhile the AI developers, you know, try and fix it and get better, and it's all this funny game, almost like cat and mouse.
00:22:36.577 --> 00:22:46.406
He has now raised money to build a project to continue to do work in this space, and it's going to be somehow tied into the token economics of this meme coin.
00:22:46.406 --> 00:22:49.223
That really got me thinking:
00:22:49.223 --> 00:22:52.881
well, this is real, and this guy wasn't a crypto guy.
00:22:52.881 --> 00:22:55.894
He just did it because he's like I don't know how to raise money for this from VCs.
00:22:55.894 --> 00:23:00.007
I'm basically going to go out and essentially do like a little sort of ICO for my idea.
00:23:00.007 --> 00:23:16.582
I could see more things like that, more situations where you have independent researchers that want to figure out some way to fund their idea or fund their project, and then do it through these crypto-economic mechanisms, especially if there is a loss of regulation, or especially if there's a misuse around.
00:23:16.682 --> 00:23:19.107
You know, maybe it's hard to get funding for things in the future.
00:23:19.107 --> 00:23:31.931
Maybe people want to experiment and research. But I think the real power will come when we have a stack built around these decentralized AI sets of tools that allows people to build applications easily and quickly.
00:23:31.931 --> 00:23:33.460
Now they may not be as good.
00:23:33.460 --> 00:23:37.222
The language models may not be as powerful, they may not be as fully fledged.
00:23:37.222 --> 00:24:02.441
Maybe it doesn't have reasoning yet, or maybe it does soon, but I think that will build a smaller but increasingly powerful ecosystem which is perhaps unlike DeFi. And there's a common criticism of crypto, which is that it's just this sort of self-referential thing; it's like DeFi just serves the same people who go to Devcon and, you know, Breakpoint, right, they're all hanging out in the same gang.
00:24:02.441 --> 00:24:14.125
But I think that if we can start getting applications built in AI from these decentralized stacks, they'll actually start reaching a lot of people, because, you know, it's AI; people love this stuff and they want to use it and access it and so on.
00:24:15.234 --> 00:24:20.907
The challenge is finding things that aren't just copies of existing things, because you have to do something new.
00:24:20.907 --> 00:24:22.862
You have to do something more powerful, more interesting.
00:24:22.862 --> 00:24:25.474
Maybe it ties in prediction markets. Maybe it does things that, you know, only crypto does well.
00:24:30.575 --> 00:24:42.338
I think the success of the prediction markets during the election was fascinating, but yeah, there's like some drama and some scam and all these things associated with it, but that's like probably one of the most successful applications outside of trading that we've seen.
00:24:42.338 --> 00:24:45.467
I guess NFTs too, but this was just like are you kidding?
00:24:45.467 --> 00:24:46.976
This is front page every day.
00:24:48.477 --> 00:25:00.522
Well, I think it's really different than NFTs, though, because NFTs, for me still, and as somebody who built a company in the NFT space, I'm still kind of skeptical of what their long-term value is to the world. Like, where's the value
00:25:00.522 --> 00:25:02.784
prop? Versus, like, Polymarket, man.
00:25:02.784 --> 00:25:16.294
That's very clear, like extremely clear: they predicted the election. And this is one of the interesting things about Solana too, and I know Polymarket's not on Solana, but one of Toly's things is, like, information crossing the planet at the speed of light.
00:25:16.294 --> 00:25:22.057
That's financialized, the news getting from one end of the planet to the other end in like 400 milliseconds.
00:25:22.057 --> 00:25:23.548
It's tied to money.
00:25:23.969 --> 00:25:29.755
That's what Polymarket is, that's what a prediction market is, and I think you're right, there's got to be some kind of application.
00:25:29.755 --> 00:26:00.326
My theory is that one of these things, like Bagel or these different ways of training and making an AI more powerful, with token economics tied around that whole process, will be a layer that an application gets built on. And when one of these layers gets an application on it, then it's going to be like a big boom for some kind of crypto-native use case, right, and there'll be an economy built around this very strong open-source LLM, or pieces of different models, whatever it might be.
00:26:00.326 --> 00:26:14.492
But then the end use case, the pump.fun or the Polymarket of AI in Web3, it's hard to predict. But I think this thing you were talking about, that guy that raised money for his research, it's those things, as a seed investor, you're going to notice.
00:26:14.492 --> 00:26:20.791
If you feel like you're kind of looking at it sideways trying to figure out what it is, it's probably something to pay attention to, right?
00:26:26.292 --> 00:26:28.356
Like, especially in a space like this, it's amazing.
00:26:28.356 --> 00:26:34.426
He's just publishing exploits all the time, and then at some point a couple of months ago, Marc Andreessen sent him some money.
00:26:34.426 --> 00:26:43.272
In the same way that Marc Andreessen sent the Truth Terminal some money, because he was just saying, hey, I want some help.
00:26:43.272 --> 00:26:45.786
You know, I need to fund some of my work, and so on.
00:26:45.786 --> 00:26:48.433
Marc's just like, here's a couple of Bitcoin, go do it, go fund it.
00:26:48.433 --> 00:26:53.028
That's when I should have found the guy and invested.
00:26:53.028 --> 00:26:53.630
That was it.
00:26:53.630 --> 00:26:54.772
That was it. Missed it.
00:26:54.772 --> 00:26:55.713
But that happens.
00:26:55.934 --> 00:26:57.818
But you know a lot of this is pattern recognition.
00:26:57.818 --> 00:27:09.098
Right, that's what I've learned as an investor.
00:27:09.098 --> 00:27:15.516
It takes a unique perspective in every market and sometimes you got to look at a lot of opportunities to start seeing it.
00:27:15.516 --> 00:27:21.773
And then the next time you'll get that itch, you know, that little voice in the back of your head will be like, this smells like the last one we saw.
00:27:21.773 --> 00:27:23.190
And then you just deploy some capital.
00:27:33.765 --> 00:27:51.128
I think the thing I've got from that pattern recognition, and also experience, is, paradoxically, that the more investments I do, the more stuff I do, and the more I explore this space in particular, the more kind of humble I find myself trying to be, or finding myself forced to be. There's just such an incredible amount to learn, and yes, you offer things and you offer experience and so on.
00:27:51.128 --> 00:28:08.615
But I try to approach it almost like the first time. I'm like, okay, let me try and understand this completely and really dig in and really explore each thing carefully, rather than just kind of assume I know something, because you've got to keep discovering.
00:28:09.476 --> 00:28:14.173
We have only a couple of minutes left, and I always ask the same question: what have I not asked you that I should have asked?
00:28:15.664 --> 00:28:17.492
Well, we had the election already, right, so that was easy.
00:28:17.492 --> 00:28:19.808
Yeah, we talked about that.
00:28:19.808 --> 00:28:21.133
Yeah, we did that.
00:28:21.133 --> 00:28:23.546
No, I was just saying if this was pre-election, we could have played that game.
00:28:23.546 --> 00:28:26.346
I don't know where the bull market's going.
00:28:26.346 --> 00:28:30.249
We talked about that before the call, so I can't really tell you where that's going.
00:28:30.249 --> 00:28:31.430
I think they're all very unique.
00:28:31.430 --> 00:28:35.913
I think the dynamics of this space in crypto have changed quite significantly.
00:28:35.913 --> 00:28:37.973
You could ask me about superintelligence and AGI.
00:28:37.973 --> 00:28:39.693
That'd be kind of a fun conversation.
00:28:40.375 --> 00:28:43.477
Explain what that is, and then tell me if we're all going to die because of...
00:28:43.977 --> 00:28:44.896
Terminator-esque.
00:28:44.916 --> 00:28:45.396
Skynet.
00:28:45.498 --> 00:28:45.698
Yeah.
00:28:45.698 --> 00:29:02.228
So AGI, or artificial general intelligence, is the idea that you get to this point where the systems that we're building start to self-improve, and then they kind of take off and sort of accelerate past us.
00:29:02.228 --> 00:29:07.378
And there's various subcamps within the AI world which talk about either acceleration or deceleration,
00:29:07.378 --> 00:29:08.730
as in, we should slow things down.
00:29:11.633 --> 00:29:21.647
And in the last year, I think, there was a petition signed by a lot of researchers, and actually also Elon Musk and other people, saying let's slow down the pace of innovation, let's slow things down.
00:29:21.647 --> 00:29:26.028
And there's other people who say, no, let's just accelerate as fast as you can and just get to this thing.
00:29:26.028 --> 00:29:28.737
It doesn't matter, we're going to figure it out, everything's gonna be fine, YOLO, right?
00:29:28.737 --> 00:29:43.905
And this is a lot of where you see these regulatory bodies coming in, trying to understand what's going on, but listening to what should be very smart people, like Sam Altman, saying, yeah, you should actually be careful, because this stuff's really powerful.
00:29:43.905 --> 00:29:46.289
We don't quite know what you could do with it or what might happen with it.
00:29:46.289 --> 00:29:49.453
And so the usual question is, like, when's the AGI coming?
00:29:49.453 --> 00:29:59.326
So the guys at OpenAI believe there are, I think, five steps, and we're two steps along the way, and the next step is reasoning and planning, which they are working on.
00:29:59.326 --> 00:30:03.182
Now there are preview releases of the new o1 models.
00:30:03.201 --> 00:30:11.951
Turns out, the way you get reasoning and planning is you make the AI, you make the network, think for longer, and when you let it think for longer, it does better on reasoning and planning.
00:30:11.951 --> 00:30:16.828
If it just answers too quickly, it tends to make mistakes on these more complex problems.
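(A toy numerical illustration of that "think for longer" effect, not OpenAI's method: if a model gets a hard question right 60% of the time, sampling several answers and majority-voting, an approach known as self-consistency, beats answering once. All probabilities here are invented.)

```python
import random
from collections import Counter

def one_attempt() -> str:
    # Toy "model": answers a hard problem correctly 60% of the time.
    return "right" if random.random() < 0.6 else "wrong"

def think_longer(samples: int) -> str:
    # Sample several independent attempts and take the majority answer.
    votes = Counter(one_attempt() for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
trials = 10_000
for n in (1, 5, 25):  # odd sample counts avoid ties
    accuracy = sum(think_longer(n) == "right" for _ in range(trials)) / trials
    print(f"{n:>2} samples -> {accuracy:.1%} correct")
# More samples (more 'thinking') lifts accuracy well above single-shot 60%.
```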
00:30:16.828 --> 00:30:23.811
And then you get into all sorts of weird ideas of philosophy, like, what does it mean if you have something superintelligent?
00:30:23.811 --> 00:30:24.554
Is it conscious?
00:30:24.994 --> 00:30:26.587
Is it something we should be concerned about?
00:30:27.029 --> 00:30:28.032
What are its goals?
00:30:28.032 --> 00:30:29.215
Like, does it have goals?
00:30:29.215 --> 00:30:31.069
Does it just do what you tell it to?
00:30:31.069 --> 00:30:37.913
Like, is it automatically going to follow some science fiction dystopian narrative where it's going to turn around, like you said, like try and kill us?
00:30:39.705 --> 00:30:48.592
So where I land on this is as follows: I think that attempts to try and slow things down are fundamentally flawed and, at the same time, regions of the world will slow things down.
00:30:52.410 --> 00:30:55.339
And they'll slow things down for different reasons.
00:30:55.339 --> 00:31:03.008
There's this funny example from when I moved to Portugal, which has a different development pace to lots of places in the world, for other reasons.
00:31:03.008 --> 00:31:05.678
But one of the things that happened is this.
00:31:05.818 --> 00:31:08.144
I said, hey, I want to rent a garage space for my car.
00:31:08.144 --> 00:31:09.925
And they're like okay, cool.
00:31:09.925 --> 00:31:11.307
I was like, okay, here's my credit card.
00:31:11.307 --> 00:31:12.468
They're like, oh no, that's not how it works here.
00:31:12.468 --> 00:31:13.387
And I'm like what do you mean?
00:31:13.387 --> 00:31:29.616
In San Francisco, I just give them my credit card, I sign a contract, bye-bye, never talk to you again. Here, they cut an invoice, and then the invoice has an IBAN on it.
00:31:29.616 --> 00:31:37.141
I send them the money, the bank sends the proof to me, which I then have to send to the guy who asked for it, and so on and so forth.
00:31:37.141 --> 00:31:40.923
So I'm busy, so I have an assistant, so my assistant does it, so my assistant then works with the bank.
00:31:40.942 --> 00:31:50.797
Yeah, I was going to say, sounds like something for someone else to do. And there's this chain of people in the way, and when I complained about this initially, I said, this is ridiculous, what are you doing?
00:31:50.797 --> 00:31:53.240
And they're like, oh, if we automated this, what would all the people do?
00:31:53.240 --> 00:32:03.096
And so that attitude of, what would all the people do, will, I think, be the point where many jurisdictions in the world just say, no, no, this is too fast, we're going to slow this down.
00:32:03.096 --> 00:32:07.791
You can't use AI here, you can't automate that much. So we'll naturally slow things down.
00:32:08.192 --> 00:32:18.031
But also, the thing that's been found is that trying to give AI systems constraints when you train them turns out to actually have weird effects.
00:32:18.092 --> 00:32:28.451
And so we saw this situation with Google Gemini, where they tried to add in... I don't think they were trying to, like, you know, have people from, you know, sort of, the Philippines look like Founding Fathers.
00:32:28.451 --> 00:32:35.647
They were really trying to make it biased in a way that would more accurately reflect a cross-section of society.
00:32:35.647 --> 00:32:47.171
But the problem is that when you train systems with these biases and these rules built in, it turns out you get worse results in general, like overall worse results, and the system learns things that you don't want it to.
00:32:47.171 --> 00:32:50.037
So I think, fundamentally, it's flawed.
00:32:50.037 --> 00:32:59.618
I don't think the AI is going to turn around and kill us, overall. I think that assumes a hubris about our understanding of intelligence, which I don't think we have yet.
00:32:59.618 --> 00:33:08.049
So assuming that something is going to go kill us just because we don't understand it, you know, seems a bit weird. So I think that's wrong.
00:33:08.049 --> 00:33:08.490
I got it.
00:33:09.232 --> 00:33:20.317
Fingers crossed, hope for the best. 2001: A Space Odyssey has been playing on repeat in my head this whole time you've been talking, and it's like the red dot: I'm sorry, Dr. Waterhouse, I can't do that.
00:33:21.767 --> 00:33:42.696
Yeah, one of the interesting things about this in my mind is that some regions will run at the wall to accelerate this, and the real risk there is that the power they get from that, the upside for them, way outweighs their risk or the planet's risk, and they will end up with unimaginable power.
00:33:42.696 --> 00:33:54.294
And actually, the conspiracy theorist in me thinks that there already is AGI, like there's some very wealthy individuals or countries that already have access to it, you know. And what does that mean, you know, for the rest of the world?
00:33:54.605 --> 00:34:14.644
I don't think so yet because, assuming it's based on the same stack of technology that we've built so far, you'd notice if a cluster that big had been built, and I don't think anyone's got the GPU chips that NVIDIA has by their own means, so NVIDIA would know, if you know what I mean.
00:34:14.644 --> 00:34:19.393
So, like, I don't know, maybe it's a Manhattan Project-like race.
00:34:19.804 --> 00:34:21.652
It's the same power as nuclear weapons.
00:34:22.686 --> 00:34:32.456
It's that big, but also the same power as nuclear fusion, in the sense that it could actually transform the planet. A hundred percent. A hundred percent.
00:34:33.217 --> 00:34:34.298
Well, thanks so much for coming on.
00:34:34.298 --> 00:34:38.413
It's been good to connect, and it was a super interesting conversation. Thank you.
00:34:45.327 --> 00:34:47.277
You just listened to the Index Podcast with your host, Alex Kehaya.
00:34:49.224 --> 00:34:56.239
If you enjoyed this episode, please subscribe to the show on Apple, Spotify or your favorite streaming platform.
00:34:56.239 --> 00:34:58.753
New episodes are available every Friday.
00:34:58.753 --> 00:35:02.349
Thanks for tuning in.