May 20, 2025

When Should You Trust a Machine With Your Life?

Could AI robots be better at diagnosing you than your actual doctor?

Healthcare AI sounds promising until you realize it can't tell the difference between your seasonal allergies and that weird parasite crawling up places it shouldn't be. We sit down with an actual MD, Dr. Ajit Barron-Dhillon, to discuss the terrifying reality of AI in healthcare, why digital diagnosis might make us dumber, and how insurance companies could weaponize medical AI to deny you even more coverage.

Three main benefits from this episode:

  1. Learn the real limitations of AI in medical diagnosis and when you should still see an actual human
  2. Discover how AI could be useful as a bridge between therapy appointments for mental health support
  3. Understand the potential dangers of automated healthcare in our profit-focused medical system

Check out this episode of The Fit Mess to discover if robots will be replacing your doctor anytime soon.

Topics Discussed

  • AI's accuracy in medical diagnosis compared to human doctors (77% vs 66%)
  • The dangers of relying on AI for self-diagnosis instead of seeking professional help
  • How regional and seasonal factors impact medical diagnoses that AI might miss
  • The potential for insurance companies to use AI to minimize costs and deny coverage
  • Using AI as a mental health resource between therapy sessions
  • The importance of human clinical observation in catching rare or unusual conditions
  • How AI might make healthcare more automated but potentially less personalized
  • The challenges of AI interpreting medical data like blood work and lab results correctly
  • The "Google effect" where people search for information that confirms what they want to hear
  • The potential for collaborative approaches between AI and human medical professionals

----


MORE FROM THE FIT MESS: Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok. Subscribe to The Fit Mess on YouTube.

Join our community in the Fit Mess Facebook group

----

1
00:00:00,157 --> 00:00:06,963
In recent studies, AI diagnosed patients correctly 77% of the time, better than human
doctors.

2
00:00:07,138 --> 00:00:09,969
but should you trust your health to an algorithm?

3
00:00:09,969 --> 00:00:15,205
Today we're joined by one of our oldest friends who happens to be an actual working
doctor, Dr.

4
00:00:15,205 --> 00:00:25,445
Ajit Barron-Dhillon. We'll talk about why AI is not ready to be trusted to offer medical advice, how
insurance companies could use it against you, and why some conditions only human doctors

5
00:00:25,445 --> 00:00:26,230
can spot.

6
00:00:26,230 --> 00:00:36,607
And he shares a mind blowing story about a mysterious illness no algorithm could have
possibly diagnosed correctly, proving why we still need human expertise in medicine.

7
00:00:45,416 --> 00:00:46,638
Okay, this is the fit mess.

8
00:00:46,638 --> 00:00:47,388
Thanks so much for listening.

9
00:00:47,388 --> 00:00:50,580
We talk about all things sort of AI and wellness and mental health related.

10
00:00:50,580 --> 00:00:59,803
And today we're joined by an actual doctor with actual knowledge about computers and
stuff, and no, Ajit, that doesn't make you better than me.

11
00:01:06,706 --> 00:01:10,536
I think you were born with like a like a syringe and a computer in your hands, right?

12
00:01:10,536 --> 00:01:11,396
Yeah, exactly.

13
00:01:11,396 --> 00:01:16,122
I just don't want anyone calling me thinking I'm their support after the podcast.

14
00:01:16,904 --> 00:01:18,555
Because it...

15
00:01:18,555 --> 00:01:19,822
You can call me, give me...

16
00:01:19,822 --> 00:01:24,543
Just, you know, please leave a decent review after a brief survey for me.

17
00:01:24,543 --> 00:01:26,575
Yeah.

18
00:01:26,936 --> 00:01:28,316
Jason's here as well.

19
00:01:29,516 --> 00:01:30,016
That's right.

20
00:01:30,016 --> 00:01:31,136
That's right.

21
00:01:31,396 --> 00:01:32,296
All right.

22
00:01:32,296 --> 00:01:39,356
So, G, this is something Jason and I have been talking about a little bit here the last
few weeks on the show: just how much AI is being integrated into sort of all things

23
00:01:39,356 --> 00:01:40,556
health and wellness.

24
00:01:40,796 --> 00:01:44,616
Again, you're an actual doctor, you are on the front lines of this stuff.

25
00:01:44,616 --> 00:01:49,704
How much of this is a part of what you do when it comes to treating the patients that you
work with?

26
00:01:50,136 --> 00:02:04,816
I work these days in a very complementary and alternative realm, but I still have
to do standardized intakes that require some knowledge.

27
00:02:05,316 --> 00:02:11,896
Now, usually what I see out of AI is that people are looking for a diagnosis or looking
for a differential or something like that.

28
00:02:11,896 --> 00:02:18,576
So it's kind of like they're done with WebMD, they're done with just a generic Google
search.

29
00:02:18,995 --> 00:02:25,518
Now they want this and this, and what's funny is that Google has trained us to see the very
top first.

30
00:02:25,538 --> 00:02:27,869
It's how they do their ads, everything.

31
00:02:27,869 --> 00:02:39,994
So I think that when I search for something, I hate seeing an AI-generated response
because it reiterates itself 10 times if you read the entire thing.

32
00:02:39,994 --> 00:02:42,015
And it's not that knowledgeable.

33
00:02:42,015 --> 00:02:44,526
It doesn't give the differential for the patient.

34
00:02:44,526 --> 00:02:47,753
um

35
00:02:47,753 --> 00:02:49,166
It still needs to be worked on.

36
00:02:49,166 --> 00:02:53,292
It's scary enough where it could get really good if it starts asking questions back.

37
00:02:53,474 --> 00:02:54,514
You know?

38
00:02:55,197 --> 00:02:56,449
Like, what type are you?

39
00:02:56,449 --> 00:02:59,885
Well, in that case, you know, you may have this.

40
00:02:59,885 --> 00:03:00,901
You know?

41
00:03:00,901 --> 00:03:12,357
Yeah, so along those lines, I mean, if people are using AI to try to understand their health
care in context, and it becomes this choose-your-own-adventure piece, would AI actually,

42
00:03:12,357 --> 00:03:21,002
with better prompts inside of it, actually ask you questions more effectively to try to
explore deeper on a topic?

43
00:03:21,222 --> 00:03:28,376
Is that something, as a medical professional, you think we should be looking at and that
patients should actually be paying attention to?

44
00:03:28,453 --> 00:03:34,623
Or is it still too nascent and too early at this point to really hope that it gets the
prompt engineering stuff?

45
00:03:36,982 --> 00:03:38,384
That's a great question.

46
00:03:39,750 --> 00:03:41,253
I think it's too early.

47
00:03:42,811 --> 00:03:56,761
And is that because the amount of information that's in the system itself is not actually
sanitized and doesn't produce results that are reliably of value, and results in

48
00:03:56,761 --> 00:04:06,028
human beings kind of having the Dunning-Kruger effect, where they think, I know shit that I
really don't, and apply practical medical advice to their own physical bodies and

49
00:04:06,028 --> 00:04:10,171
essentially start to biohack on themselves with limited controls.

50
00:04:10,902 --> 00:04:11,962
Yeah.

51
00:04:13,182 --> 00:04:21,122
For the most part, what physicians do is before we do blood tests or do any kind of scans
and all that, you have experience.

52
00:04:21,122 --> 00:04:30,262
That experience kind of lets you look at a patient and say, I can clearly see the reason
your face is hurting is because you have a bullet in it right now.

53
00:04:31,082 --> 00:04:33,162
So it's not like my face hurts.

54
00:04:33,382 --> 00:04:36,055
So we can actually do this.

55
00:04:36,055 --> 00:04:39,350
It's pretty obvious what the diagnosis is going to be.

56
00:04:39,350 --> 00:04:46,450
Yeah, but an AI-generated response, I mean, that thing isn't even giving blood tests.

57
00:04:46,450 --> 00:04:51,490
Now again, if there's an interaction where it's like, hey, do you have a copy of your
latest blood report?

58
00:04:51,490 --> 00:04:52,430
And when was it?

59
00:04:52,430 --> 00:04:53,710
And things like that.

60
00:04:53,710 --> 00:05:00,510
If it becomes more interactive, it can definitely be something that could come
pretty damn close.

61
00:05:00,901 --> 00:05:02,057
Yeah, so...

62
00:05:05,035 --> 00:05:12,672
You and I have a personal relationship where you help me with my health in a meaningful
way, which is a big reason why we've invited you on the show, because we both interacted

63
00:05:12,672 --> 00:05:19,247
with you on a personal level and on a business level in different degrees, but also at a
medical level.

64
00:05:19,247 --> 00:05:29,165
And because of that, some of the things that you told me to do were things that my
naturopath, who I really like and really value his opinion, he did not quite go through

65
00:05:29,165 --> 00:05:33,829
and ask the same, ask the same type of data to be collected.

66
00:05:33,863 --> 00:05:38,443
that you asked for when you were going through trying to fix kind of my metabolic health
issues.

67
00:05:39,623 --> 00:05:50,863
Is it that the AI systems themselves are trained on classic Western styles of data and
information collection that's much more compliant with trying to reduce the cost of

68
00:05:50,863 --> 00:06:01,583
overall health care, to try to fit into this model that insurance companies, private payers,
and these socialized medicine systems need to actually kind of narrow these costs.

69
00:06:01,583 --> 00:06:03,811
They don't ask for some of these

70
00:06:03,811 --> 00:06:04,581
other tests?

71
00:06:04,581 --> 00:06:12,285
Do you think it's more that the models themselves have kind of pigeonholed themselves in
this way because they've actually been trained to try to minimize the amount of things

72
00:06:12,285 --> 00:06:12,665
they ask?

73
00:06:12,665 --> 00:06:20,669
Or do you think it's more that they just don't know right now and they're an unreliable
arbiter of information?

74
00:06:20,669 --> 00:06:29,443
And I'm going to specifically talk about testosterone because you actually asked me to
test multiple different types of testosterone to go through to understand my testosterone,

75
00:06:29,443 --> 00:06:31,053
my free testosterone.

76
00:06:31,199 --> 00:06:35,580
my total available testosterone and then my albumin levels were part of that as well.

77
00:06:35,580 --> 00:06:39,152
And my naturopath looked at me and goes, yeah, we totally should test this.

78
00:06:39,152 --> 00:06:42,863
Like he didn't even think twice as another medical professional.

79
00:06:42,863 --> 00:06:45,743
And now it's part of my regular testing routine.

80
00:06:46,304 --> 00:06:52,946
And as a medical professional, he should have asked, he probably should have asked that
also, but he wasn't doing that because he was focusing on the pieces.

81
00:06:52,946 --> 00:06:54,967
And you just have these different approaches.

82
00:06:54,967 --> 00:06:56,447
When I think about AI,

83
00:06:57,433 --> 00:07:05,900
Is AI another approach, or is it an accumulation of minimal data sets that's trying to treat
symptomatic problems as they exist today?

84
00:07:06,142 --> 00:07:07,603
God, that was long-winded.

85
00:07:07,738 --> 00:07:13,730
No, I don't think AI should treat or diagnose anything at the moment.

86
00:07:13,730 --> 00:07:15,071
Okay, at all.

87
00:07:15,411 --> 00:07:19,422
Also, I don't know your naturopath, but it almost sounded like it could have been
Dr.

88
00:07:19,422 --> 00:07:20,368
Nick Riviera, right?

89
00:07:20,368 --> 00:07:22,374
It's just like, that's, that's a great idea, man.

90
00:07:22,374 --> 00:07:30,177
Let's do that, you know. But, um, I respect naturopaths, in many cases, more
than I do a lot of MDs.

91
00:07:30,177 --> 00:07:37,329
So, um, because they're using the good stuff that allopathic physicians sometimes
ignore, you know,

92
00:07:37,427 --> 00:07:46,207
So that's kind of actually where I'm kind of, you know, I think I'm a little different
with things because I consider myself an allopath as well as a naturopath, you know.

93
00:07:46,207 --> 00:07:52,767
You guys have heard my terrible analogy or, you know, like, okay, I'm gonna do it again.

94
00:07:53,267 --> 00:07:55,947
You know, if you got the cold, drink some oranges, get some rest.

95
00:07:56,207 --> 00:07:57,667
That's naturopathic.

96
00:07:57,667 --> 00:07:59,727
But if you have syphilis, man, I'm gonna give you some penicillin.

97
00:07:59,727 --> 00:08:01,847
That's allopathic, okay?

98
00:08:02,987 --> 00:08:05,487
So that's where...

99
00:08:06,031 --> 00:08:10,632
you know, it's too early for anything.

100
00:08:10,632 --> 00:08:20,055
Again, if there's a good interaction, and in fact, there are great interactions that
already exist, like when you go to like women's health websites or men's health websites,

101
00:08:20,055 --> 00:08:23,456
you know, which are predominantly like, dude, I need to get horny and I'm bald.

102
00:08:23,456 --> 00:08:26,096
Okay, so it's like one of those things, okay?

103
00:08:26,196 --> 00:08:33,050
And the same thing for women, and those are actually, most of it is done just through like
a wizard.

104
00:08:33,050 --> 00:08:41,812
you know, just like, you know, 30 questions, upload an ID, and then in some cases, a brief
talk with a physician to say, yeah, dude, I think this could help you, okay?

105
00:08:41,812 --> 00:08:46,753
uh Or just an email or something.

106
00:08:46,753 --> 00:08:48,064
I mean, it's gotten that simple.

107
00:08:48,064 --> 00:08:57,396
So uh those examples exist where you can actually, you know, get a good solid sort of
aggregate, okay?

108
00:08:57,396 --> 00:08:59,596
And then the physician will verify that.

109
00:09:00,077 --> 00:09:01,737
Okay, so.

110
00:09:02,031 --> 00:09:05,167
I don't think that AI right now is at that level yet.

111
00:09:05,167 --> 00:09:09,634
I think it's getting there, and it could be dangerous too, very dangerous.

112
00:09:23,586 --> 00:09:09,472
OK.

113
00:09:09,472 --> 00:09:11,495
I have a much dumber question for you.

114
00:09:12,278 --> 00:09:16,536
So, I mean, you guys know this stuff better than I do.

115
00:09:16,536 --> 00:09:21,890
And that's part of why I'm here is I'm trying to understand it and incorporate it into my
life in ways that are not going to, you know.

116
00:09:21,890 --> 00:09:23,140
send me down the wrong path.

117
00:09:23,140 --> 00:09:30,963
But I mean, my understanding of AI is that it's basically scraping the internet for a
whole bunch of information and connecting a bunch of dots and regurgitating it back to

118
00:09:30,963 --> 00:09:31,083
you.

119
00:09:31,083 --> 00:09:38,385
So instead of you having to read 10 websites, it read 10 of them for you and wrote a
really stupid paragraph and left out a lot of key information that you would have gathered

120
00:09:38,385 --> 00:09:40,905
had you, you know, scrolled down 10 spots.

121
00:09:40,985 --> 00:09:50,228
So I mean, if my understanding of it is correct, if I'm anywhere near close to that, not
much has changed like

122
00:09:50,287 --> 00:09:56,609
we're still Googling, you know, how come there's blood running out of my butthole or
whatever's going on and then we're finding the results and putting them all together and

123
00:09:56,609 --> 00:10:01,581
going, my God, I have whatever terrible disease it just diagnosed me with.

124
00:10:01,581 --> 00:10:04,202
And then we run screaming to the doctor to verify it.

125
00:10:04,762 --> 00:10:12,675
If we're still using it in that way, I don't think it's necessarily dangerous unless it's
the person who, you know, finds the result that's like inject bleach into your body and

126
00:10:12,675 --> 00:10:15,606
it's going to cure it because that's what Donald Trump said five years ago.

127
00:10:16,247 --> 00:10:20,278
Is that kind of the danger that you worry about is sort of that bad information?

128
00:10:20,506 --> 00:10:23,158
Yeah, um here's what I think AI is.

129
00:10:23,158 --> 00:10:28,852
AI is kind of like, you know, why trouble yourself and read this entire page?

130
00:10:28,852 --> 00:10:31,274
Let AI sum it up for you real quick.

131
00:10:31,274 --> 00:10:36,318
Okay, so it's like, it's making us dumber and that I think is dangerous, you know?

132
00:10:36,318 --> 00:10:48,113
Like I love watching this one scene out of uh Parks and Recreation where Chris Pratt is
trying to figure out why Leslie Knope is sick.

133
00:10:48,113 --> 00:10:51,194
and he's on his computer and he says, I think I know what you have.

134
00:10:51,638 --> 00:10:54,257
You might have a network connectivity problem.

135
00:10:55,877 --> 00:10:57,302
Such a great scene.

136
00:10:57,302 --> 00:11:00,243
you know, it's dangerous.

137
00:11:00,243 --> 00:11:02,724
It's a good place to get good ideas.

138
00:11:02,724 --> 00:11:12,116
But see, for someone who's like, I used to see this all the time, like people who are
trying to avoid going to the hospital, for example, when they absolutely need to go to the

139
00:11:12,116 --> 00:11:17,437
hospital, they will Google whatever they can to give them a better sense of assurance.

140
00:11:17,437 --> 00:11:22,568
Like, my God, this sneeze is just nothing, okay?

141
00:11:22,568 --> 00:11:24,529
I don't have brain cancer, you know?

142
00:11:24,529 --> 00:11:26,029
um

143
00:11:27,306 --> 00:11:28,937
That's where it can be dangerous.

144
00:11:28,937 --> 00:11:31,048
We want people to go get checked out.

145
00:11:31,048 --> 00:11:32,118
We want people to do that.

146
00:11:32,118 --> 00:11:34,579
But it's also good to have an understanding though as well.

147
00:11:34,579 --> 00:11:37,210
Hey, what are the treatment options now since I've been diagnosed?

148
00:11:37,210 --> 00:11:41,712
And in that regard, it's really helpful so I can say, well, what are my options?

149
00:11:41,712 --> 00:11:44,239
Who is against chemotherapy?

150
00:11:44,239 --> 00:11:45,474
Who's against this?

151
00:11:45,474 --> 00:11:51,036
So those things, obviously just like anything, just like the internet from the inception,
is good and bad.

152
00:11:51,337 --> 00:11:52,987
Helpful, not helpful.

153
00:11:54,117 --> 00:12:03,717
What's your take on, we talked about this a few episodes ago, there was a study done
about, basically they set up these trials where patients were walking into doctor's office

154
00:12:03,717 --> 00:12:09,677
X, interacting with a robot, answering all the questions, and then going and seeing a real
doctor, answering all their questions.

155
00:12:09,677 --> 00:12:17,317
And supposedly, according to the results, the robots got it right 77% of the time, the
actual human doctors got it right 66% of the time.

156
00:12:17,758 --> 00:12:25,452
Man, that isn't a Theranos uh type of, uh no, man, is that really a thing of the robot?

157
00:12:25,453 --> 00:12:27,994
What did the robot diagnose the person with?

158
00:12:28,034 --> 00:12:29,835
Was it something?

159
00:12:30,956 --> 00:12:31,434
Okay.

160
00:12:31,434 --> 00:12:32,016
I don't have it up.

161
00:12:32,016 --> 00:12:33,603
I should have actually pulled it up in front of me.

162
00:12:33,603 --> 00:12:35,414
Maybe I could get AI to bring that up for me.

163
00:12:35,414 --> 00:12:38,108
em

164
00:12:38,108 --> 00:12:39,688
I would love to read that.

165
00:12:40,568 --> 00:12:43,245
All I can say is...

166
00:12:43,245 --> 00:12:47,589
if like the things that people go to the doctor for are so common.

167
00:12:48,130 --> 00:12:56,668
Like, when you were in a situation seeing patients sort of all day long, is it
that, like, is it a lot of people that all get the flu at the same time, and they all come

168
00:12:56,668 --> 00:12:59,061
knocking on the door going, I don't know what's wrong with me.

169
00:12:59,061 --> 00:12:59,661
I feel terrible.

170
00:12:59,661 --> 00:13:02,064
Well, guess what, everyone has the flu right now.

171
00:13:02,064 --> 00:13:06,768
Like, does something like that factor into this?

172
00:13:07,064 --> 00:13:10,830
Uh, yeah, it does seasonal stuff, obviously where your location is.

173
00:13:10,830 --> 00:13:14,986
So it's like, you know, winter time, you know that flu, you know, cases are going to rise.

174
00:13:14,986 --> 00:13:27,954
So yeah, there are seasonal, environmental, and regional variations that help everyday
primary care physicians come to a better answer of what this could be, in addition to blood

175
00:13:27,954 --> 00:13:30,768
tests and things like that, if one needs to get that done.

176
00:13:30,768 --> 00:13:38,811
So when we were talking, just before we hit record, we were talking about sort of how much
you interact with AI and you mentioned that there are times when, you know, you'll use it

177
00:13:38,811 --> 00:13:43,754
and search things up, but then have to like verify the facts or you know, the information
that's been given to you.

178
00:13:43,754 --> 00:13:44,794
Like, what is that?

179
00:13:44,794 --> 00:13:45,925
What has your experience been with that?

180
00:13:45,925 --> 00:13:49,465
How much of it are you seeing is just like completely just wrong?

181
00:13:49,465 --> 00:13:52,245
Do you find most of the time you're like, damn, that's spot on?

182
00:13:52,245 --> 00:13:54,038
Like, what's your experience been?

183
00:13:54,038 --> 00:13:56,450
it's definitely spot on, okay?

184
00:13:56,450 --> 00:14:09,999
Sometimes, but a lot of times it's a very broad, uh it's a broad answer and you need to
kind of dig into the specifics of what it could be.

185
00:14:09,999 --> 00:14:15,522
um A great example is like an upper respiratory infection.

186
00:14:15,522 --> 00:14:16,983
That could be a...

187
00:14:18,518 --> 00:14:23,342
you know, a viral infection, could be bacterial infection, it could be an allergy, you
know?

188
00:14:23,402 --> 00:14:28,707
And so that is where AI can't help you too much.

189
00:14:28,707 --> 00:14:30,558
You would need to have that trained eye.

190
00:14:30,558 --> 00:14:35,662
But for the most part, you know, it should be able to give you a few varying factors of
what it does.

191
00:14:35,662 --> 00:14:45,561
In fact, I think I saw something one time that was something like a cough, like it could
be a cough, and then I think it cascaded down to uh just something terrible.

192
00:14:45,561 --> 00:14:46,521
uh

193
00:14:47,233 --> 00:14:51,593
But it gave all the key, again, I forgot what it actually was.

194
00:14:52,373 --> 00:14:55,813
I think it was a type of cancer, but it gave varying answers.

195
00:14:55,813 --> 00:15:06,553
But I think all of it should be coupled with, if you're looking for a diagnosis, just use
this as a stepping stone to go to the hospital, to go to your physician and get checked

196
00:15:06,553 --> 00:15:06,881
out.

197
00:15:07,149 --> 00:15:11,787
You know what helps me think clearly enough to tell the difference between good and
questionable health advice?

198
00:15:11,787 --> 00:15:12,860
Magic Mind.

199
00:15:12,860 --> 00:15:14,222
And their promise is one you can trust.

200
00:15:14,222 --> 00:15:16,910
They only use the world's best ingredients.

201
00:15:16,910 --> 00:15:21,260
They test every component rigorously and inspect every bottle by hand.

202
00:15:21,260 --> 00:15:29,805
And unlike most companies that make you mail back a half-empty bottle for a refund,
Magic Mind offers a no-questions-asked 100% money-back guarantee.

203
00:15:29,805 --> 00:15:33,140
No need to return the bottles, though why would you want to?

204
00:15:33,140 --> 00:15:34,961
This stuff actually works.

205
00:15:34,961 --> 00:15:46,667
Get your mental performance shot at magicmind.com forward slash fitmas20 and use the code
fitmas20 for 20 % off because your brain deserves better than WebMD and robot doctors.

206
00:15:46,852 --> 00:15:57,610
So I am doing another search at the same time that we're talking to try to look at the way
that AI spits back data sets.

207
00:15:58,892 --> 00:16:11,162
And part of what I am seeing on a consistent basis is that AI goes through and looks at, I
can prompt engineer the hell out of my data sets.

208
00:16:11,162 --> 00:16:16,896
And I can give it all kinds of parameterization and I can lock all kinds of component
pieces in and I'm trying to compare it.

209
00:16:17,080 --> 00:16:19,803
with some of the data sets that I have and some of the information that I have.

210
00:16:19,803 --> 00:16:23,939
It's like, you brought up a really good point, you know: here's my lab data.

211
00:16:23,939 --> 00:16:26,922
ChatGPT, looking at my lab data, what can you tell me?

212
00:16:28,868 --> 00:16:30,899
Format not compatible.

213
00:16:32,219 --> 00:16:42,523
That's what it tells me because I'm uploading a PDF of my blood work and it's like, ah, I
can't really tell because this might be this and this might be that.

214
00:16:43,124 --> 00:16:51,788
Form data gets confused, field data gets mixed up, and suddenly what looks like a normal
level for one thing is slightly in the wrong spot.

215
00:16:51,788 --> 00:16:54,229
So this is a data indexing problem.

216
00:16:54,229 --> 00:16:57,750
The exact same problem that database engineers have.

217
00:16:57,750 --> 00:16:59,811
trying to run these things through.

218
00:16:59,892 --> 00:17:08,887
Now it's not that I don't think the data itself is of value and available and it can make
these pieces work, but I think like you said, Ajit, a human eye looking at this thing

219
00:17:08,887 --> 00:17:14,301
would have seen my data and gone, there's the problem.

220
00:17:14,461 --> 00:17:19,975
And I could look at my data because I've done this long enough and I'm not trained, but I
guess I'm tuned to look at stuff at this point.

221
00:17:19,975 --> 00:17:22,536
And I can go, there's the problem.

222
00:17:22,788 --> 00:17:23,728
But the AI can't.

223
00:17:23,728 --> 00:17:26,818
And this has to do, I think, with the fact that the models are still learning.

224
00:17:26,818 --> 00:17:29,560
They're still trying to understand these things in context.

225
00:17:29,560 --> 00:17:37,722
Now, that being said, if I take a specific data field in there and I say, here are my
albumin levels.

226
00:17:37,722 --> 00:17:42,763
And here's what it says is normal and what's abnormal and all these different things.

227
00:17:42,763 --> 00:17:45,084
And I say, give me information on it.

228
00:17:45,084 --> 00:17:46,745
It's really good at that.

229
00:17:46,745 --> 00:17:52,218
But I can also see that it's querying the internet and going out there and compiling these
things across multiple different sources.

230
00:17:52,218 --> 00:17:53,671
And I can go through and I can look at that.

231
00:17:53,671 --> 00:17:57,670
And on top of that, I can go through and I can click on the link to the data source
itself.

232
00:17:57,670 --> 00:18:03,922
And it actually says, this link was created for AI-generated content.

233
00:18:05,976 --> 00:18:15,697
I think what we're seeing is that the companies that do the smart things are
already trying to take this into consideration and lock these things into place to try to

234
00:18:15,697 --> 00:18:17,540
make them better over time.

235
00:18:17,540 --> 00:18:24,488
And it doesn't supplant the physician piece, but I think it's a clue and an Easter egg
that we keep following.

236
00:18:24,488 --> 00:18:27,090
It's going to eventually reveal that.

237
00:18:28,548 --> 00:18:37,563
We don't know what we're doing at scale and in mass because these guys keep saying, I'm
making this stuff to answer the questions for generative AI.

238
00:18:37,563 --> 00:18:46,898
At what point do physicians have to create their own bank of information that's not
generated by AI, for AI to spit back responses?

239
00:18:46,898 --> 00:18:56,878
Because as far as I can tell, the internet seems to keep getting swept up in all these
kinds of weird motions and we keep deleting information and

240
00:18:56,878 --> 00:19:07,676
things and a lot of the index information that was available for like NIH is suddenly gone
because we have a new administration that's come in and anytime the word race or equity or

241
00:19:07,676 --> 00:19:16,343
anything like that shows up, they delete it, even though those are all terms that are used for good
reasons. And then AI is further convoluting that data by crunching those pieces going

242
00:19:16,343 --> 00:19:17,274
across.

243
00:19:17,274 --> 00:19:23,218
How are physicians going to be able to adjust and work in this space with so much
inconsistency in the

244
00:19:24,622 --> 00:19:30,583
Well, I don't think physicians, so long as there are human physicians, will have to deal with
that too much.

245
00:19:30,583 --> 00:19:38,976
I think they're going to be dealing more with the really cool technology that's coming out
in the world of blood chemistries, urine analysis, scans of the body.

246
00:19:38,976 --> 00:19:42,817
uh That stuff is getting so amazing.

247
00:19:42,817 --> 00:19:45,107
That's the stuff that's going to give you your definitive answers.

248
00:19:45,107 --> 00:19:46,938
And I think that's what they're going to be leaning towards.

249
00:19:46,938 --> 00:19:52,900
um I think part of AI learning is that it's going to understand certain population groups
too.

250
00:19:52,900 --> 00:19:55,312
And that might be based on your IP address,

251
00:19:55,856 --> 00:19:56,932
Tell me more about that.

252
00:19:56,932 --> 00:19:59,479
How does my IP address play into that?

253
00:19:59,530 --> 00:20:06,161
Like, let's say you, me, and Jason have an IP address and it's located, you know, in
Yemen.

254
00:20:06,502 --> 00:20:11,583
They're gonna be like, yeah dude, those guys are probably gonna die from terrorism, okay?

255
00:20:11,723 --> 00:20:13,163
You know, something like that.

256
00:20:13,163 --> 00:20:23,706
Whereas, like, you know, just think of an IP address as being something regional, short of
it being firewalled and, you know, VPN'd and all that, okay?

257
00:20:23,706 --> 00:20:27,497
But, you know, you could, like, just like how you use Yelp, right?

258
00:20:27,497 --> 00:20:29,387
Like, you know, you're on your phone,

259
00:20:29,448 --> 00:20:31,670
If you have your settings right, it already knows your location.

260
00:20:31,670 --> 00:20:44,191
So whether it be location, GPS, or IP address, as long as there's no type of mirroring or
safety mechanism in place, you could pretty much get an idea of what it's like.

261
00:20:44,191 --> 00:20:53,539
I'm sure it could see what kind of metabolic cases are in certain areas by certain
hospitals, how many dialysis centers are in one area and stuff.

262
00:20:53,539 --> 00:20:56,096
So there's a lot of good predictability.

263
00:20:56,096 --> 00:21:05,236
You know, like I think there was something, I'm sure you guys have read something like
this, areas that have a Whole Foods market tend to be a little bit healthier than places

264
00:21:05,236 --> 00:21:08,116
that just have a Piggly Wiggly or something like that, you know?

265
00:21:08,423 --> 00:21:09,303
Right.

266
00:21:09,727 --> 00:21:10,928
Yeah, you bring up a good point.

267
00:21:10,928 --> 00:21:15,281
mean, the idea of understanding geolocation based upon these health indexes, but not just
that.

268
00:21:15,281 --> 00:21:23,668
Like if I'm using a browser and I'm using, like, ChatGPT to go through and do this
information, I go through and actually tell it to use history or not use history.

269
00:21:23,668 --> 00:21:28,044
So if I tell it to not use history, it's going to try to do everything from scratch,
assuming I'm somebody brand new.

270
00:21:28,044 --> 00:21:36,517
If I tell it to use history, it'll use contextual clues to try to figure out who I am,
including things like browser ID, login information,

271
00:21:37,189 --> 00:21:39,880
location of other devices that are running this application set.

272
00:21:39,880 --> 00:21:47,882
Like it tries to track all those pieces and pull them down and it uses inference to try to
make sense of it in context.

273
00:21:48,622 --> 00:21:53,303
The same way humans do in their own local geographies.

274
00:21:54,044 --> 00:22:04,697
That being said, if I'm a physician, how soon do you think it's going to be before
physicians start taking this type of metadata that's produced by apps and produced by

275
00:22:04,697 --> 00:22:06,375
other interactive tools

276
00:22:06,375 --> 00:22:08,126
and incorporating that into their practice?

277
00:22:08,126 --> 00:22:17,773
Like, I mean, you and I work together on a fitness product that takes information from
smartwatches, and it can include lat-long information, geolocation, movement, all those

278
00:22:17,773 --> 00:22:18,823
pieces.

279
00:22:19,104 --> 00:22:28,290
But doctors aren't really using that data today because they think it's kind of novel in
the way that it's approaching those pieces, or maybe they don't trust the source of it.

280
00:22:28,290 --> 00:22:31,372
But it's clear that we're machining things up a lot more.

281
00:22:31,372 --> 00:22:36,435
And the only way that you can actually handle the volume of data that's coming from the
telemetry perspective

282
00:22:36,465 --> 00:22:39,677
is to have these tools in place to be able to make sense of it.

283
00:22:40,078 --> 00:22:50,697
Do you think doctors are gonna be forced to start taking these types of data sets into
practice and into consideration when doing diagnosis and understanding their

284
00:22:50,697 --> 00:22:55,919
patients, or are we still just in that novel phase where, you know, we've just got to wait
and see?

285
00:22:55,919 --> 00:22:59,790
I think we're in the novel phase; it's the first part.

286
00:22:59,790 --> 00:23:10,693
I definitely see something like that happening not necessarily as a way to benefit the
patient, but rather as a way for these asset houses that have been purchasing systems

287
00:23:10,693 --> 00:23:11,903
to essentially keep costs down.

288
00:23:11,903 --> 00:23:18,695
So I think it's going to be kind of like a very automated chat thing.

289
00:23:18,695 --> 00:23:22,896
You know, when you call customer service for like an airline or something.

290
00:23:22,896 --> 00:23:26,622
the first thing that they say is, you know, hey, you can find us on the web.

291
00:23:26,622 --> 00:23:29,367
And I'm like, yeah, dude, I know I'm calling you for a different reason right now.

292
00:23:29,367 --> 00:23:30,249
Okay.

293
00:23:30,249 --> 00:23:35,196
And so with that, I think

294
00:23:39,535 --> 00:23:52,240
I think that physicians will use it, but they may actually also be
forced to use it in the style that the overarching corporation would like them to.

295
00:23:52,400 --> 00:23:57,802
And that could redefine litigation, that could redefine malpractice.

296
00:23:58,195 --> 00:24:08,866
In other words, I can see a world where something today you could be sued for, and 10
years from now they're like, no, we've rewritten that, that's not a suable thing.

297
00:24:10,498 --> 00:24:14,598
I can't really give an example off the top of my head, but, well, actually, I
can.

298
00:24:14,598 --> 00:24:19,998
I mean, there are certain things that you can see in insurance that are paid for, partially
paid for, or just not covered whatsoever.

299
00:24:20,978 --> 00:24:29,298
And I think that would probably be the lean, where insurance companies and hospitals
may start working together and using AI as an aggregate.

300
00:24:29,298 --> 00:24:39,634
Because the thing is, if they use that as a tool to minimize costs and give, I don't
know, okay, well...

301
00:24:40,270 --> 00:24:47,514
That is where the consumer can go Google it themselves and get pretty much the same
answer as well, which is also right.

302
00:24:47,855 --> 00:24:56,440
So it's like, what my physician is now basically bound to do for me, which, you know, is what
they're telling him or her to do, is the same information I'm able to Google.

303
00:24:56,440 --> 00:24:57,931
So it must be right.

304
00:24:59,856 --> 00:25:08,316
You're both touching on something: the more Jason and I talk about this
stuff, the more terrified I get about the power that it wields.

305
00:25:08,316 --> 00:25:17,994
And just the connection of this information to the insurance companies who are denying
stuff that they should absolutely not be denying, you know, every single day to have some

306
00:25:17,994 --> 00:25:19,024
sort of

307
00:25:19,406 --> 00:25:28,224
sharing of this information that I think will just continue to disqualify more and more of
these costs, because the insurance companies' sole purpose is to find ways to stop

308
00:25:28,224 --> 00:25:29,185
paying for these things.

309
00:25:29,185 --> 00:25:31,767
And I think this is only going to make it easier for them.

310
00:25:31,767 --> 00:25:40,054
And so in a system where, you know, it isn't socialized medicine, it is very much a
massive for-profit institution.

311
00:25:40,054 --> 00:25:42,055
It's terrifying

312
00:25:42,658 --> 00:25:50,971
how far this could go in really harming people's lives, in more ways than it actually
will benefit them, I think.

313
00:25:53,660 --> 00:25:55,962
I would like to switch topics slightly.

314
00:25:56,212 --> 00:25:57,128
Okay.

315
00:25:58,331 --> 00:26:00,353
AI for mental health.

316
00:26:01,416 --> 00:26:09,909
Using AI as a counselor or a friendship or a relationship. As a physician...

317
00:26:10,752 --> 00:26:12,293
What do you think of that?

318
00:26:12,714 --> 00:26:14,285
Dude, you all saw 2001, man.

319
00:26:14,285 --> 00:26:16,438
Okay, that's what I think of that.

320
00:26:16,438 --> 00:26:17,259
Okay.

321
00:26:17,259 --> 00:26:18,460
I'm sorry, Dave.

322
00:26:18,460 --> 00:26:21,322
I'm afraid you won't get your antibiotics, you know.

323
00:26:22,172 --> 00:26:23,016
Hahaha

324
00:26:24,259 --> 00:26:27,854
You know, that's a...

325
00:26:30,325 --> 00:26:40,339
Again, I can see a collaborative approach with AI, where it's not just that you type in one
thing. It's like, okay, here's a great example.

326
00:26:40,699 --> 00:26:50,543
Whenever anyone Googles the best way to kill themself, the first thing that pops up is
"there is help for you," there are suicide prevention lines. First, they're not

327
00:26:50,543 --> 00:26:52,623
answering your question.

328
00:26:52,664 --> 00:26:58,265
And so that's kind of like what I can see happening.

329
00:26:59,284 --> 00:27:01,039
There will be fail-safes in place.

330
00:27:01,039 --> 00:27:05,009
Or at least the ethical internet should have them, for stuff like that.

331
00:27:05,009 --> 00:27:07,164
In the same manner of suicide prevention.

332
00:27:07,950 --> 00:27:09,541
It's remarkable how much this is coming up.

333
00:27:09,541 --> 00:27:15,822
We just talked about this a few weeks ago, and, you know, when we started talking about
this in medicine, I was like, I would be curious about it as a mental health thing.

334
00:27:15,822 --> 00:27:24,025
And so I've actually used it as sort of a stopgap therapist a few times, just when I'm
feeling a little stuck, you know, ruminating on something, and I'll just kind of dump

335
00:27:24,025 --> 00:27:33,873
it out there, and it asked incredibly profound questions and offered advice that, like, a
typical human therapist might not, right? Like I was

336
00:27:33,873 --> 00:27:38,155
like really hanging on to a stupid softball thing related to my kid's game.

337
00:27:38,155 --> 00:27:46,019
And like a normal therapist wouldn't necessarily go, Oh, well, here are some drills that
you can work on with your kid that might help her be better and help you feel like you've

338
00:27:46,019 --> 00:27:46,636
contributed, right?

339
00:27:46,636 --> 00:27:54,303
Like, my therapist would not go, oh, I know these five drills, you should totally try
these, these would help. They would go, hey, Google some stuff to get better, right?

340
00:27:54,303 --> 00:27:59,685
So like, for me, we've talked about using it as a bridge between

341
00:27:59,965 --> 00:28:07,267
your actual appointment with your actual therapist, you know, when you just need
somewhere to dump those thoughts and have someone sort of thoughtfully reflect back your

342
00:28:07,267 --> 00:28:08,207
experience.

343
00:28:08,207 --> 00:28:09,858
And for me, it's been helpful.

344
00:28:09,858 --> 00:28:18,050
And I was just reading again this morning more articles from people that are like, my God,
I tried this as a therapist, and I was terrified at the results because it was so good.

345
00:28:18,050 --> 00:28:24,741
I think this is a thing that is becoming more and more popular because more and more
people need help and don't know where else to go.

346
00:28:24,741 --> 00:28:27,782
And as we create these relationships,

347
00:28:27,792 --> 00:28:33,034
with this artificial intelligence, it's becoming kind of a natural go-to that like, I've
got a question.

348
00:28:33,034 --> 00:28:34,847
I wonder if this can answer that.

349
00:28:35,484 --> 00:28:37,470
Yeah.

350
00:28:37,936 --> 00:28:39,004
In regard

351
00:28:39,004 --> 00:28:46,370
to what you're saying, it speaks to that collaborative approach. There can be,
you know, a combination with a therapist, especially if you take those threads, right, if

352
00:28:46,370 --> 00:28:50,473
you save them, bring them into therapy and go, here's, here's kind of where my head was
at.

353
00:28:50,473 --> 00:28:54,155
And here's the questions that I got back and what I did with them.

354
00:28:54,155 --> 00:28:55,026
What do you think of that?

355
00:28:55,026 --> 00:29:00,200
And the therapist can go, you know, that was really terrible advice, you should have done
this, or that's not bad, right?

356
00:29:00,200 --> 00:29:01,801
Yeah, yeah.

357
00:29:01,801 --> 00:29:13,439
You know, I think these models, what's funny is that, let's say something
totally automated happens in the world of mental health, okay, where the consumer is

358
00:29:13,439 --> 00:29:14,389
better.

359
00:29:15,000 --> 00:29:21,234
I mean, you can't argue that it may be terrible behavioral, you know, therapy.

360
00:29:21,294 --> 00:29:25,777
But if the person says, you know, well, I don't want to kill myself anymore.

361
00:29:25,777 --> 00:29:29,052
Or you know what, I think elementary school kids should live.

362
00:29:29,052 --> 00:29:36,117
You know, I think that's a positive thing, you know, but
there's no way of me validating that.

363
00:29:36,117 --> 00:29:48,495
That's a scary thing, you know, because already too many improper prescriptions for
antidepressants and, you know, antipsychotics are written, and too many sleep

364
00:29:48,495 --> 00:29:49,655
medications are prescribed.

365
00:29:49,655 --> 00:29:50,425
It's.

366
00:29:51,847 --> 00:29:53,427
You know, I.

367
00:29:53,948 --> 00:29:58,629
I think that could be a really scary thing, you know, but again.

368
00:29:58,629 --> 00:30:05,744
if it's collaborative, if there is a human that can just check off like, that makes sense.

369
00:30:05,744 --> 00:30:07,539
I see validity.

370
00:30:07,539 --> 00:30:12,949
Medicine is just one of those things where you can automate almost anything, right?

371
00:30:12,949 --> 00:30:25,477
And medicine is all right too, but there's nothing that's gonna take away that actual
clinical and physical diagnosis.

372
00:30:26,792 --> 00:30:27,640
Mm-hmm.

373
00:30:27,640 --> 00:30:34,016
A couple of years ago here at UCSF, a girl had come back from somewhere in Asia.

374
00:30:34,938 --> 00:30:48,189
And she was experiencing cramping, urethral discharge, something. She got checked, had
an STD panel, had a cancer panel, but nothing was turning up at all.

375
00:30:48,604 --> 00:30:49,366
Mm.

376
00:30:50,909 --> 00:30:57,134
But there was one physician there who happened to be Chinese and was from the area that
this girl had come back from.

377
00:30:57,134 --> 00:31:00,557
And was like, I'm gonna get you tested for something else.

378
00:31:00,557 --> 00:31:01,848
I'm curious.

379
00:31:02,289 --> 00:31:07,516
And sure enough, it was an STD, but it was a thing.

380
00:31:07,516 --> 00:31:13,429
It was almost like a parasite, or like a bug that gets transmitted only in that one region
of the world.

381
00:31:13,429 --> 00:31:15,580
So why the hell would we be testing for that here?

382
00:31:15,580 --> 00:31:17,530
And why, if you type stuff in,

383
00:31:17,530 --> 00:31:20,691
you know, you would get a completely different, you know, an incorrect response.

384
00:31:20,691 --> 00:31:26,632
You know, the AI would probably say, oh, maybe you've got a dermoid cyst going on in your
pelvic region or some shit like that.

385
00:31:26,632 --> 00:31:32,174
And and when in fact, like, no, dude, it's some crazy South Asian bug that crawled up
someone's dick.

386
00:31:32,174 --> 00:31:34,904
OK, so like, that's really what it was.

387
00:31:34,904 --> 00:31:35,654
You could Google this.

388
00:31:35,654 --> 00:31:39,675
I swear, the world is full of insane things.

389
00:31:40,076 --> 00:31:45,467
So can you imagine if I said that actually you might have an insect that crawled up your
dick?

390
00:31:45,467 --> 00:31:46,100
OK.

391
00:31:46,100 --> 00:31:51,119
I would fire my real doctor and hire that robot doctor immediately because that robot
doctor is great.

392
00:31:51,119 --> 00:31:52,001
Yeah.

393
00:31:52,229 --> 00:31:53,644
I don't disagree.

394
00:31:54,969 --> 00:31:59,294
Is there anything else, Jason, you want to talk about here before we wrap this one up?

395
00:32:00,016 --> 00:32:00,413
Sure.

396
00:32:00,413 --> 00:32:05,972
We need to do a couple of episodes and a few rounds of this because I think that there's more
to learn and more to discover.

397
00:32:05,972 --> 00:32:20,266
I really, really am interested in hearing Ajit's perspective on some of the things
that we've found over time in our interactions with AI and the medical space.

398
00:32:21,027 --> 00:32:22,486
And yeah, this has been great.

399
00:32:22,486 --> 00:32:26,018
My perspectives only stand out due to my analogies.

400
00:32:26,018 --> 00:32:26,839
Okay?

401
00:32:26,901 --> 00:32:28,112
The analogies are gold.

402
00:32:28,112 --> 00:32:30,203
They're gold, absolutely.

403
00:32:30,203 --> 00:32:42,050
Ajit, if somebody wants to connect with you, get your medical advice, whatever you do,
I always hesitate to ask you where people can contact you, because you do 4,000 things.

404
00:32:42,050 --> 00:32:48,375
So if you want someone to be able to contact you or follow you or any of the work you do,
the floor is yours.

405
00:32:48,375 --> 00:32:50,616
Well, I don't have a professional Instagram.

406
00:32:50,616 --> 00:32:54,619
My wife does, but I've got my own Instagram.

407
00:32:54,719 --> 00:33:01,564
I guess for right now, I assume we'll do another episode, where we'll have more time
now since I know how the routine is.

408
00:33:02,645 --> 00:33:12,692
I mean, my Instagram is macdaddyachit, but I don't think that would be... I don't think I
should get DMs on that thing saying, hey dude, I think I got colorectal cancer.

409
00:33:12,692 --> 00:33:14,072
What do you think?

410
00:33:14,315 --> 00:33:16,475
Yeah, I think something crawled up my dick, dude.

411
00:33:17,194 --> 00:33:22,781
Or like, I think something crawled up my husband's dick, because he's a total asshole right
now, you know?

412
00:33:23,483 --> 00:33:25,335
Well, then you got the diagnosis, okay?

413
00:33:25,335 --> 00:33:27,348
So, yeah, yeah.

414
00:33:27,348 --> 00:33:30,311
I'll get something set up, for sure.

415
00:33:30,773 --> 00:33:32,474
Something professional, ugh.

416
00:33:33,640 --> 00:33:35,321
Ajit, thank you so much for your time.

417
00:33:35,321 --> 00:33:39,104
We, of course, will do this again because we have 4,000 more questions, both dumb and smart.

418
00:33:39,104 --> 00:33:40,791
So we'll get to all of them next time.

419
00:33:40,791 --> 00:33:41,173
I love it.

420
00:33:41,173 --> 00:33:42,106
Thanks for having me.

421
00:33:42,106 --> 00:33:43,307
I love seeing you guys.

422
00:33:44,065 --> 00:33:45,386
All right, that's all the time we have for today.

423
00:33:45,386 --> 00:33:47,247
I want to give a huge thanks to our friend, Dr.

424
00:33:47,247 --> 00:33:53,052
Ajit Barron-Dhillon, for sharing his insights on AI and healthcare, his perspective, and
those analogies.

425
00:33:53,052 --> 00:33:54,933
Always absolutely gold.

426
00:33:59,861 --> 00:34:01,841
about how AI is changing medicine and mental health support.

427
00:34:01,841 --> 00:34:07,061
Before we go, I want to remind you to check out Magic Mind if you're looking for a mental
performance booster that actually works.

428
00:34:07,061 --> 00:34:08,741
Magic Mind has been a game changer for me.

429
00:34:08,741 --> 00:34:15,581
It helps sharpen my focus, boost my energy and reduce stress without the crash that I
expect from coffee or other energy drinks.

430
00:34:15,581 --> 00:34:18,321
Head over to magicmind.com/fitmas20.

431
00:34:18,321 --> 00:34:21,621
Use the promo code fitmas20 to get 20% off.

432
00:34:21,621 --> 00:34:25,361
Again, magicmind.com promo code fitmas20.

433
00:34:25,367 --> 00:34:31,711
And finally, if you enjoyed today's conversation, please hit subscribe, leave us a review,
and share this episode with someone who might benefit from it.

434
00:34:31,711 --> 00:34:32,242
Thanks again.

435
00:34:32,242 --> 00:34:34,556
We'll see you next week at thefitmess.com.


Dr. Ajit Dhillon

Co-founder and CEO of the Euromed Institute.

Dr. Dhillon is a trained critical-care physician from California. He is also the co-founder and CEO of the Euromed Institute.