Leaders Shaping the Digital Landscape
June 25, 2024

The Practical AI Solution Architecture

Tune in to Tech Leaders Unplugged for a special AI-filled episode. Host Wade Erickson interviewed David Hood, the CEO of 42RobotsAI. They discussed the backwards way almost everyone is trying to build AI-powered solutions, shedding light on common pitfalls and how to avoid them.

This is a great opportunity to hear from a leading expert in AI innovation. Whether you're deeply involved in tech or just interested in the future of AI, this episode promises valuable insights.

Key Takeaways:

  • Common pitfalls in AI development and how to avoid them.
  • Effective strategies for building AI-powered solutions.
  • Insights into the future of AI from an industry expert.
Transcript

Wade Erickson (00:13):

Welcome everyone, to another episode of Tech Leaders Unplugged. I'm Wade Erickson, I'll be the host. And our guest that we're getting unplugged with today is David Hood, CEO and founder of 42RobotsAI. Thanks, David, for joining us.

David Hood (00:31):

Thanks for inviting me, Wade.

Wade Erickson (00:33):

Yeah, our topic today is the practical AI solution architecture. So why don't we kick it off, David: introduce yourself, tell us a little bit about your company, and then we'll jump into the topic after that.

David Hood (00:49):

Yeah, I can go right into the topic afterwards. So I'm the CEO of 42RobotsAI. I have 30-plus years coding in 30-plus languages. I remember the first time I coded, on a TI-89, instead of paying attention in math class. I'm a lot more of a backend coder than a frontend coder; I don't really like frontend coding. I do have a degree in mechanical engineering and industrial engineering, and a lot of experience processing large data sets with code, which is aligned with our product: basically a personal medical researcher AI agent that we're building. We're close to having a demo of that. And if you want, I can jump right into the architecture.

Wade Erickson (01:24):

Yeah. So, the practical AI solution architecture. I think a lot of folks are trying to just wrap their heads around this. I talk to a lot of leadership, and with the hype we've seen over the last year with ChatGPT exploding, of course, AI's been around a long time, and there are so many different flavors of AI. When companies want to get AI involved in their products or services, oftentimes it's a search for what's the best type of AI and machine learning to add. So having this architectural conversation is foundational to getting an appreciation for what AI can do in many different scenarios, and applying it where you get maybe the best value, the best return on investment. Because it all costs money, and there's a lot that goes into it. So: how to spend wisely versus just chasing the idea of AI. Let's jump into that. Tell me a little bit about the topic. How did it form in your mind?

David Hood (02:42):

Yeah, actually, it was a long time in the working, because I've identified this for over a year, and I just thought everybody else saw it. Then earlier this year, even talking within my company about how to build AI-powered solutions, I realized that this is not how almost anybody is thinking about it. In fact, I think everybody's thinking about it in a way that is extremely limiting, and you can see that in the way they talk about it. First of all, let me state that this is specifically LLM-focused, large language model focused. There have been a lot of visual AI models in the past that can look at stuff and identify stuff; this is specifically about the large language models, the new capabilities that enabled ChatGPT.

David Hood (03:29):

And this creates a whole bunch of problems. If you think about it, it makes sense that there are a lot of challenges here, because prior to this, basically all coding was completely deterministic, non-random, and now we're injecting randomness in there. That changes a lot. So I'll start with the wrong way to think about it, the way most people are thinking about it. Once I tell you this, when you listen to others, even the big leaders in this industry, you'll start to hear them talk about it like this: the model is the solution, and the solution is the model. This is a very limiting belief. They basically put the large language model at the center, and then there's a little bit of window dressing around it.

David Hood (04:11):

Sometimes they call that the application layer, which to me implies that it's either thin or it's kind of discrete and separate from the AI itself. To me, this is backwards. What I've found, and what has been obvious to me for over a year based on trying to build real-world solutions and talking to a lot of different companies about their own solutions, is that you actually have to think about it the other way around: you have classic deterministic coding that uses AI, or large language models, at key places within the code to solve critical problems or to create new functionality that you couldn't build before. And it's not just one model. In our solution, we use AI in over a dozen different places, and only like two of them are chat.
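
The architecture David describes, deterministic code calling the model only at key points, might be sketched roughly like this. `call_llm` is a hypothetical stand-in for any real model API; everything around it is ordinary, testable, non-random code:

```python
# Sketch of "deterministic code first" with an LLM at one key step.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (OpenAI, Llama, etc.).
    return f"[model output for: {prompt[:30]}]"

def clean_input(raw: str) -> str:
    # Deterministic pre-processing: the model never sees raw input.
    return " ".join(raw.split()).strip()

def extract_fields(text: str) -> dict:
    # One narrow, targeted LLM use -- not a chat interface.
    return {"summary": call_llm(f"Summarize in one line: {text}")}

def pipeline(raw: str) -> dict:
    text = clean_input(raw)
    if not text:                   # deterministic guard, no model needed
        return {"error": "empty input"}
    result = extract_fields(text)  # LLM used only at this key step
    result["length"] = len(text)   # deterministic post-processing
    return result
```

The point of the structure is that the guard, the cleaning, and the post-processing stay cheap, fast, and predictable; only `extract_fields` pays the cost and accepts the randomness of a model call.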

David Hood (04:59):

And there are a bunch of other nuanced ways in which we use AI, and you have to do a lot of pre-processing. It's not just about the model; the models are there to solve the problems you couldn't otherwise solve. Yes, there are a lot of new things you can do with LLMs that couldn't be done before. But this is kind of the realization I had: why is this happening? Why don't people see this in a more practical way? Because I've gone to some AI conferences and talked to some of the people who are actually building real-world, effective, practical solutions, and they go, yeah, that's correct: most of the solutions are about 20 to 50% AI, and the rest is classic deterministic coding.

David Hood (05:44):

And a lot of this classic deterministic coding is new code that AI actually cannot build, because it's there to leverage AI. My theory behind why this is happening has two parts. Number one, OpenAI and the big companies have an incentive to sell you on the LLMs they produce being more than they really are. But I think there's also a bit of a tech-bro thing going on. What do little boys love? Ninjas, dragons, and robots. So you've got a bunch of people saying the robots are here, you know, Elon Musk saying AGI is coming next year, not because it's practical, but because they want it to be there.

David Hood (06:25):

I'll also expand on this a little, especially if you're looking to build an AI-powered solution for your company. Yes, these are magic boxes that do amazing things. But what you get when you think about the model as the solution is that you try to shove as much of the solution into the magic box as you can, when in actuality, because the models are unreliable, there's randomness in there, and they're slow and expensive, you actually want to pull as much of the solution out of the model as you can. Additionally, when you're looking for opportunities, this technology is useful across so many different business processes that you don't have to find the perfect solution where you fully automate something.

David Hood (07:12):

In fact, if you look at the way Tesla measures their automation, I think it's level zero to level five, you want to do something more like level two to level three for most solutions, because you can get 90-plus percent of the benefit at level two or level three. Getting to 99 or 100 percent often takes orders of magnitude more work, if it can even be done, especially with some of the limitations of LLMs. And maybe part of this is also that my history with coding is very old school. I don't really understand the software stack as well, because back in my day, you just loaded up C++ and it was all in one thing.

David Hood (07:56):

And I think the mindset for how to build solutions is sort of a very no-code-approximation type of framework. That's why, to me, it's oversimplified: they're trying to put the AI in its box, they're trying to put the rest of the code in its box, and that's it. You give it the input you want, you put it in the magic box, and you get the output you want. Or you do that in a few different steps. That's still, to me, a very limiting kind of structure. And even when we built some AI agents, the more I hear people talk about AI agents, the more I'm just like: it's just software.

David Hood (08:33):

It's just software that has a large language model chat in there, and then it can do a whole bunch of other things. But you get this framework where, again, the model is at the center. I heard one guy with a very popular, very big company, well known in this space, say that an AI agent is basically an LLM on a loop. And to me, I was like, oh God, that's such an awful architecture. You're going to be very limited if that's how you think about it. And very few companies talk about this. There's only one company I've really seen talk about it a little bit, and that's Palantir. I think they're one of the few companies actually looking at this from a practical mindset. I can't remember what they call it, but they do say there's going to need to be some support structure of code to help the AI become effective.

David Hood (09:24):

So, a couple of things that help you understand this a little more deeply. I highly recommend Yann LeCun, Meta's chief AI scientist. He has some good YouTube videos out there talking about why LLMs can't reach AGI by themselves. This to me is super obvious. So just so you know, the robots aren't coming for everybody anytime soon. It's not going to happen, especially from just an LLM. You could have GPT-27; if that's just an LLM, there's not enough there. He has some computer science arguments for why it wouldn't work, and philosophical ones. My argument, and this goes back to my industrial engineering days, is a systems argument: I don't think there are enough pieces of the puzzle to make something like a sense of being. And when you look at how LLMs really work, it starts to add up to why some of these things are happening.

David Hood (10:14):

It's weird, because for a long time people have said, we don't know why hallucination happens. To me, it's been obvious: there's randomness there. It picks the next chunk of words, and if it picks a low-probability one, it can just go off into la-la land. That's why it hallucinates. Or it just doesn't have something in the training data on that particular thing; it can also be a combination of the two. If you think about it, all it's doing is predicting the next token. When it goes somewhere bad, it can't go, oh, that was bad, and circle back around, unless you put it into a bigger system, in which case you can do that. You can take an AI output and have another AI evaluate it.

David Hood (10:51):

A lot of times it's more complicated than that in our solutions: you take that part, maybe you parse this out, you process this, you add some more information here, and then you do something. There's a lot of nuance in that structure. But effectively, one of the major things that doesn't really get talked about, a critical part of the large language models, and potentially just as important if not more important, is that they figured out how to turn blocks of text into math, so that they can compare two different chunks of text and see how closely relevant they are. This is part of how the LLM works: when you put in your chunk of text, it's looking in its training data set for what's relevant.

David Hood (11:37):

That's not exactly true, but it's kind of looking for the relevant parts of its training data set to go find things. This is also where you get vector databases, which are a huge development. These are incredibly useful. It's a new type of database where essentially you can put a bunch of text into it, and then when an incoming text arrives, you can pop off, even if there are a hundred thousand records in there, the top five most relevant semantically. It's not just keyword matching: if you type in "yellow fruit," it can pull off "banana," or if you type in "banana," it could pull off other closely related fruits. So that's a key part of it.
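
A toy version of that top-k semantic lookup: real vector databases use learned embeddings and approximate nearest-neighbor search, but the mechanics can be illustrated with a bag-of-words `embed` and cosine similarity, both of which are stand-ins here:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Fake embedding: word counts instead of a learned vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, records: list[str], k: int = 5) -> list[str]:
    # Pop off the k most similar records, like a vector DB query.
    q = embed(query)
    return sorted(records, key=lambda r: cosine(q, embed(r)), reverse=True)[:k]
```

With a real embedding model, "yellow fruit" and "banana" would land close together even with no words in common; the bag-of-words stub only captures literal overlap, which is exactly the limitation embeddings remove.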

David Hood (12:22):

Fine-tuning is very valuable. Some people say you can replace it with what's called retrieval-augmented generation, which I alluded to a little bit: you need a vector database, and essentially, when a user input comes in, you pop off the most relevant chunks from the database and put them in the context window to give to the AI model as relevant context. Fine-tuning is sometimes a different thing, just so you know, but it's really useful. And this is where, for most companies, I don't think you need to train a model from scratch. It's very expensive and very challenging. A lot of times, with a very small amount of fine-tuning data, we're talking sometimes between a hundred and a thousand good-quality sample inputs and outputs, you can get an incredibly powerful result.
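
The retrieval-augmented generation flow described, pop off relevant chunks and put them in the context window, could be sketched as follows. Retrieval here is naive word overlap standing in for a real vector-database query:

```python
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Stand-in for a vector DB: rank stored chunks by word overlap.
    words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(words & set(c.lower().split())),
                  reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Prepend retrieved context so the model answers from it.
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```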

David Hood (13:07):

So I really like fine-tuning, especially when prompt engineering doesn't do it. When you're on a budget, you can do fine-tuning for a tiny fraction of the cost, less than 1%, and get basically the same kind of output you would get if you built something from scratch. And with Llama 3 and Meta's open-source models, this becomes even easier. This is actually one of the huge factors in this whole thing. I would be concerned if I were an investor in OpenAI, because if we look ahead a little bit to something like Llama 4, I think what will happen is that for 95-plus percent of use cases, Llama 4 will do the job for you. When you think about it, that puts these big model companies that have invested billions of dollars in a lot of trouble, because they're just not needed for most use cases.
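
Preparing a fine-tuning set of the size mentioned (a few hundred input/output pairs) is mostly data plumbing. A rough sketch of writing the pairs as JSON lines in a chat-style record shape; the exact schema varies by provider, so check your provider's documentation rather than relying on this shape:

```python
import json

def to_record(user_input: str, ideal_output: str) -> dict:
    # One training example: the input and the output you want it to learn.
    return {"messages": [{"role": "user", "content": user_input},
                         {"role": "assistant", "content": ideal_output}]}

def write_jsonl(pairs: list[tuple[str, str]], path: str) -> int:
    # Emit one JSON object per line, the common fine-tuning file format.
    with open(path, "w") as f:
        for user_input, ideal_output in pairs:
            f.write(json.dumps(to_record(user_input, ideal_output)) + "\n")
    return len(pairs)
```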

David Hood (13:59):

And on top of that, the implication is that the most valuable skill set in the world isn't training these AI models; it's actually knowing how to use them. Basically, AI engineering: how do you take the AI and put it in a practical solution? It's not just about training the model or fine-tuning the model; it's about the system as a whole and how all the pieces fit together. A lot of people are so focused on the chat that they don't realize there are so many other new capabilities that come about. What's central to our product, and what's been clear to me, is that there are new orders of magnitude, like a thousand-x, ten-thousand-x, a million-x more capability to analyze large data.

David Hood (14:47):

And then the semantic relevance is really huge with the vector databases. Being able to summarize text: you couldn't do this before, and it's actually really useful in a variety of ways. You can go the other way and expand text. You can extract data. You can use it to triage: do we go here, here, or here? And then categorization, all of this. There's so much focus on the chat that a lot of this is getting ignored. I think that explains most of it. I'm ready for the questions.

Wade Erickson (15:15):

Yeah. So it's interesting you bring up vector databases and the semantic space. Back in probably around 1998, I was dealing with, this was back in the heyday of the resume databases, and the challenge was that a lot of people were having to read them. These were the early days, when they were largely keyword searching. And I worked with a friend who's in Dallas as well, you probably know him, he's one of the big AI gurus in town here. We were working together to build, and Google didn't even have this search capability then, something semantic and vector-based. Mm-Hmm. <Affirmative> It was called semantic reasoning back then. You'd type in the word Java, and that is a very different term for a barista than it is for a programmer.

Wade Erickson (16:10):

And so it was all about the vector spacing and those kinds of things. Those were early, early days. We applied some of those technologies to build something much smarter: basically, you could put the job description in, then look against resume databanks, and it would look for those vector relationships and find resumes with a higher probability of matching the job description. And that was, like I said, 1998.

David Hood (16:47):

Wow.

Wade Erickson (16:48):

So it was a long time ago. And of course people think this is all new stuff. That's almost 30 years ago, right? 25 at least. And it's exciting, because like you said, these things have only gotten better in those 25 years.

David Hood (17:07):

And it

Wade Erickson (17:08):

It has been around a long time, and I think the excitement is great. It adds more dollars to the space. It's always been a great opportunity. Five years ago, if you'd put the kind of budgets that are being thrown at it now in front of a CEO, they would have doubted it, you know? Mm-Hmm. <Affirmative> Because of some of the other things that came to mind with AI five years ago versus today. So, as we talked about, programming and the linear-path kind of result sets we're used to with traditional programming: as a testing company, that's where the challenge is. How do you test these AI products in an automated manner? I think a lot of it is going to be applying AI against AI; I think that's going to be somewhat of the result, you know? But as you think about your product in the AI space, how do you think about testing? Is it closer to game testing, which is still very, very manual? Or do you feel there are some approaches that might help with automated testing of AI products?

David Hood (18:21):

I think there is potential to build a testing checker to some degree. But I think it's going to be a lot of manual work; at least that's the way I see it right now. The good news is that you don't necessarily need to look through a thousand examples to tune the right prompt. You can usually tell pretty quickly with maybe, you know, the same input ten times. This is something I've been pushing really hard with my team, because I don't think they saw this. There's a ton of work, because remember, we're not just using AI in one place, we're using it in many places. A lot of those need to be tuned, and there are unlimited combinations.

David Hood (18:59):

So this is a little bit of an art, and I've spent a lot of time practicing it. Prompt engineering is part of AI engineering, but it's less like what you put into ChatGPT and more about fitting it within software: you need a consistent output. That's really important. Getting it structured the right way is part of what you're trying to do, but so is getting it to find the things you want, or to give the types of answers you want. This requires tons of testing. At least at first, you're going to need a lot of manual testing. You're going to need a lot of samples of a person manually seeing what the input is and what the output is; then the input and output actually become the input to the checker.
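
The "same input ten times" check might be automated along these lines. `run_prompt` is a hypothetical placeholder for a real model call, and the check only asserts that every run produces a parseable output with a stable shape, which is the consistency property described above:

```python
import json

def run_prompt(text: str) -> str:
    # Placeholder: a real call would return model output that may vary run to run.
    return json.dumps({"category": "medical", "input_len": len(text)})

def check_consistency(text: str, runs: int = 10) -> bool:
    # Re-run the same input and verify the output shape never drifts.
    required = {"category", "input_len"}
    for _ in range(runs):
        try:
            out = json.loads(run_prompt(text))  # must always parse
        except json.JSONDecodeError:
            return False
        if set(out) != required:                # must keep the same keys
            return False
    return True
```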

David Hood (19:41):

And the output is the feedback, the thoughts, the final response, and the correct output from the user. You could potentially build something for a narrow use case, but the problem is that once you've built that, you already have the right prompt, and now you need to move on to a different prompt. Especially when it comes to chat: how many different types of prompts can come in? Basically unlimited. How many different variations are there? Now, you can narrow that down. For example, ours is medical, so at the very beginning the first thing it does is triage. There's a triage check that says this is a medical question, or a customer support question, or something else. And then there are a bunch of other pre-checks like that for specific types of responses. So you can filter: okay, it's a medical question.

David Hood (20:23):

Okay, well, that narrows the band of different possibilities. But still, how many different questions can somebody ask in the medical space? Again, almost unlimited. So I do think that, in general, you can't just throw a tool at this if you want to do it right. Especially when you think about, and this is what I think Palantir mentioned, but where I think they should go even further: there's a lot of new code that needs to be built, and a lot of testing of the prompts that needs to be built. Presumably you're building something that hasn't been done before, so it makes sense that there isn't really good testing for it yet. Yeah, some of those pre-checks can be handled by an LLM, Jacob.
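
The triage pre-check described, classify first and then route down a narrower path, in miniature. The keyword classifier here is a stand-in for what would really be an LLM classification call:

```python
def triage(question: str) -> str:
    # Stand-in classifier: a real system would ask an LLM for the category.
    q = question.lower()
    if any(w in q for w in ("symptom", "diagnosis", "medication", "pain")):
        return "medical"
    if any(w in q for w in ("refund", "billing", "account", "login")):
        return "support"
    return "other"

def route(question: str) -> str:
    # Deterministic routing: each category gets its own narrower pipeline.
    return {"medical": "medical pipeline",
            "support": "support pipeline"}.get(triage(question), "fallback")
```

Narrowing the band first means each downstream prompt only has to handle its own category, which is what makes the testing problem tractable at all.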

Wade Erickson (21:03):

Yeah, that was a question that came in. And that's where, you know, AI testing AIs, you're leveraging things in a smart manner. So, we talked a little bit about people who want to develop products that include AI. Oftentimes it takes a lot of discovery, a lot of questions about whether it's even the right thing to do, because sometimes the best answer is to do something else. So tell me a little bit about developing your product, and I understand there's going to be a demo available soon. Mm-Hmm. <affirmative> What's the role of CX/UX and user research in your product development process? How do you find those volunteers who are the voice of the customer and can provide relevant information, so you can build a product other people will use? How do you go after the market?

David Hood (22:12):

Yeah, I'll answer this at a high level; one of my co-founders is actually the UI/UX expert. What we've done is user research: we've surveyed people in detail. We do have a doctor, one of our investors, who's helping us review the outputs and tell us what she would want for specific people. A lot of our solution actually happens on the back end, in how we process the data. We have multiple buckets of long-term memory, and each bucket has different layers, so how you work with the data is really critical here. But then there's a lot of research and development that needs to be done on how to make it as easy and convenient as possible for people to get into the software and use it the way they need to.

David Hood (22:58):

How do we get the data from them? Because we're very data-hungry: the more data we can get about a person, the more personalized the response, the better. But yeah, it's just going to be a long-term, consistent back and forth of trying some stuff and giving it to users and volunteers. We do have volunteers, mostly people we know who are interested. We had a previous product, and we pivoted at the beginning of last month. We told our customers: hey, we're shutting this down because we think we can add a ton more value to society with this, here's what we're trying to do, and here's a wait list if you want to be part of it. We sold them on the base idea. And there were some really good positive responses. One person said, oh my goodness, this is exactly what I was looking for, please put me on this list; I have this condition and this condition, so please add me. That was a really good sign. We've gotten a few responses like that.

Wade Erickson (23:54):

Great. Great. Alright, so the last question I had: of course, the 42. The Hitchhiker's Guide answer to the question of life is what comes to mind for most everybody when they hear 42. And there are a couple of other things. There's the Diophantine equation, I don't know if I'm pronouncing that right, where 42 was the last standout between one and a hundred, and they finally solved it in 2019 with a ton of supercomputers. Where did the 42 come from? Did it come from there, or did you have some other ideas that helped shape the name of your company?

David Hood (24:33):

I'd say 70, 80% came from thinking that AI is the answer to life, everything, and the universe, and yes, we're fans of The Hitchhiker's Guide to the Galaxy. Actually, when we brought in two co-founders along with myself late last year, I realized after the fact that all three of us were 42 when we got started. But then there are some other mathematical properties of 42. There was this nerd movie with a genius where the answer to one of the questions was 42. It's just a fun number in a lot of ways. So it was a little tongue in cheek, but there's also some additional meaning there. There's a certain threshold that I think we can meet, and we're the robots, sort of. Mm-Hmm. <Affirmative> So if we get to a certain point, we're going to be successful. And right now with our product, it's kind of like you have 42 entities out there poring over your data every day, looking for new data for you every day. So there's kind of a mathematical threshold that we're implying in the number.

Wade Erickson (25:45):

Great. Cool. Alright, so this is the part of the show where I like to pivot a little bit and talk about you personally, your background. Looking at your profile, I think this is the second company you've started, and a lot of the folks who watch the show maybe have ideas of starting their own company. I think I've had four or five companies; some were mildly successful, none fantastically successful. I'm hoping the next one will be. But sometimes the government comes in and changes the regulations on you and squashes the company that was supposed to be a million-dollar baby. Things like that come along. So buyer beware: when you start your own company, it's in your hands. Mm-Hmm. <affirmative> So tell me a little bit about the pivot. You've been working in tech for a long time. Mm-Hmm. <Affirmative> What gave you the courage to jump out on your first, or your second one? What was that pivot, that bifurcation in life's path, to jump into this?

David Hood (26:49):

Yeah. So I had a good job at a big company before I started my previous business. But I just didn't fit very well into it. I didn't have a lot of autonomy, I don't like bureaucracy very much, and there's just a lot of nonsense you have to deal with. Now, you still have to deal with nonsense on your own, but at least you're making your own choices. <laugh> And there were scenarios where one year I did an amazing job and got a poor review because of some nonsense, and then one year I did a terrible job and got a good review. My pay and my review seemed completely disconnected from my performance.

David Hood (27:27):

And that was just wacky. When you own your own business, if you don't perform, you don't make money, or your business fails. So there's that part of it. I'm also kind of an independent thinker. What I did was look ahead, and I thought to myself: if I'm still working for this company, still complaining to myself, still saying this doesn't make me happy, and it's 15 years later, whose fault is it but my own? At some point you've got to take responsibility for your own life and say, what can I do to do better? And go for it. Even if it failed, I don't think I would have regretted it. In fact, it did get to the point towards the end where I was not really happy being there, and I was looking for other things.

David Hood (28:09):

And then ChatGPT came out, and we ended up playing with it. I was like, oh my goodness, this is a new development; there are new capabilities here. So I spent about three months just studying up on AI and LLMs, how they work and what they can do, brainstorming and planning. About three months in, I said: this is it. This is the moment I've been waiting for. It's right up my alley, with my experience understanding complex business systems and processing large data sets with code. The perfect moment for me, the perfect skill set. It feels like I've been practicing for this moment for 30 years. So I started 42 Robots AI, and we went out and tried to build some stuff.

David Hood (28:48):

Our first product was more like an aggregator for AI with some productivity tools. You could use multiple different models: Claude, OpenAI's models, Midjourney. There were some productivity and efficiency features. But looking ahead, I saw some dead ends. What it really came down to was two experiences I had over a short period, around March. I was talking to somebody whose son has a long-term condition, and she said, well, what if AI could help with that? I started thinking about it. Then a couple of days later, a family member of mine got a really bad diagnosis, and I started thinking about it a lot more.

David Hood (29:31):

When I really thought deeply about it, the question became: making it a little more productive for people to use AI, versus saving somebody's life? I think there are worlds of difference in value to humans between those two. So even though we had to take a couple of steps back and turn away customers who were paying for the old product, I think it's the right play for us. And I'm really excited, because there are so many things in the medical space where one doctor over here does this test, another doctor over there does that test, and they never get connected. So many people are just spinning their wheels trying to talk to a doctor, and the doctor only has five minutes.

David Hood (30:07):

That's all they can give you. Or a new study comes out tomorrow and you have no process for finding it. Somebody was telling me about a wealthy family they know who actually hired a doctor to go out every day, search for new findings, and process the data that's personally relevant to them. That's what I think we can do here, and I think we're uniquely qualified to do it: most people are looking at the architecture wrong, so they're very limited, and on top of that there's my personal experience processing large data sets with code. I think we have a real advantage we can use to help people. So I'm excited. I can't wait for the first person to say, you saved my life, or you made my life better. I'm going to be doing a little dance when that happens.

Wade Erickson (31:00):

That's great. So dissatisfaction with work life, and being purpose-driven: that seems to be a pretty common trend in why people jump out of the comfort zone of a job, even when they're not completely satisfied. You can get a lot of satisfaction outside of work, and my recommendation to any young person coming up is: absolutely do not look for your purpose in your nine-to-five, unless you're a doctor or something and already know. Look for it outside of the job, and then maybe that could become your job. It's so hard to get everything in line, happiness in your job, satisfaction, purpose, all of that, when really the job is to make money and pay the bills, right? Purpose can be found in many other places.

Wade Erickson (31:49):

Alright, well, we're at the top of the hour here. I wanted to mention we actually have another show tomorrow; normally we only do one show a week. Tomorrow's show is with Walta AR Ramen. Her title, a long one, is DevOps Automation, GTM Product Skills, Adoption and Growth at IBM. Looking forward to that. The topic is "Stumbled into Testing as a Developer," so we're going to talk a little bit about testing in the development process with her. So look forward to having you there. It's at a slightly different time: 9:30 AM Pacific instead of 9:00. So thank you so much, David. Great topic. We've had quite a few shows with AI flavored into them, and it's nice to have some that step back, look at it a little holistically, and are honest and transparent, exactly as you were.

Wade Erickson (32:46):

Hey, AI is not the center of the universe; it's a component of your product. So think about the problems you're solving, and then look at how you can design with AI and use it. And like you said, don't shove stuff into it; peel stuff out where it's deterministic. You'll get a much more predictable product experience than if you have all this randomness coming out, and it doesn't take long before people notice. Think about when ChatGPT came out: for the first couple of months, what made the news was the crazy hallucination stuff, not the 95% that was solving people's day-to-day problems. It was, oh, look at this stupid response that came up. Remember? So anyway, thanks again, talk soon, and I appreciate your time. Everybody else, join us live for the show tomorrow. Have a good day.

 

David Hood

CEO

David has coded in 30+ languages, mostly self-taught, over the last 30 years. He excels at creatively building and debugging backend algorithms, especially when it comes to using that code to process large amounts of data. He has a Master's in Industrial Engineering and worked as an Industrial Engineer at Texas Instruments for six years. This work required him to understand large, messy, human-integrated data sets and to analyze that data with code to identify opportunities for systems improvements. Prior to founding 42 Robots AI in 2023, David ran an SEO business for 11 years that involved data analysis and B2B selling. He's a passionate technologist with a unique vision for how certain aspects of AI will play out.