Announcer:
0:00
Welcome to MedEvidence, where we help you navigate the truth behind medical research with unbiased, evidence-proven facts. Hosted by cardiologist and top medical researcher, Dr. Michael Koren.
Dr. Michael Koren:
0:11
Hello, I'm Dr. Michael Koren, and I'm here to lead another session of MedEvidence in our series of two docs speaking with each other, and I'm really fortunate to have a kindred spirit here: Dr. Christopher Labos, from Montreal, Canada. He and I are kindred spirits because we're both cardiologists, we both believe in evidence-based medicine, and we both have a way of critical thinking, and of describing our critical thinking, such that we engage patients and help people understand the truth behind the data, which is the mantra here at MedEvidence. So, Christopher, thanks for being part of this program. Why don't you let people know a little bit more about yourself? You have a fascinating background; I enjoyed learning a little bit about you, so share that with the audience.
Dr. Christopher Labos:
0:59
Sure, so my name is Christopher Labos. I like long walks in the rain, cold autumn days by the fire drinking hot coffee
Dr. Michael Koren:
1:08
and pina coladas right?
Dr. Christopher Labos:
1:11
So, yeah, I'm a cardiologist by training. That was my clinical training. And then something really shocking happened to me during my clinical training, when I was going through residency, I kept asking people. I was like I was really interested in research and I was like, well, how do you do research? Like how do you do it? And they were like, oh, you know you have to recruit patients, like okay, but then how do you do it? Like how do you actually analyze the data? Once you do this chart review, like what happens?
Dr. Christopher Labos:
1:33
And I realized that for a lot of people, they would collect all this data, they would fill out an Excel sheet, it would go into this black box, and then all of these magical numbers would come out, and nobody really knew what they meant. And I started saying, well, how do I learn how to do research? And one of the cardiologists at McGill, where I did my training, also had a PhD in epidemiology. And he said, well, if you want to do that, you should do a degree in epidemiology so you can understand how it works. And so after I finished my residency, I went back to school to do a master's in epi, which was really fun, because everybody in the class was 10 years younger than me, and they were there with their laptops and I was, you know, taking notes in my Star Wars notebook.
Dr. Christopher Labos:
2:16
I really had to get up to date, because I hadn't been in class for like a decade at that point. And so I started learning about epidemiology and statistics, and I was like, oh God.
Dr. Christopher Labos:
2:28
They did not prepare you for this, right? They did not prepare you for this at all. And it was a real awakening, an eye-opening experience, because I ended up learning a lot about how research works, but I also learned a lot about how research goes badly.
Dr. Christopher Labos:
2:44
The light bulb moment for me was, we were in class and we were learning about things like absolute risk reduction versus relative risk reduction, and I was like, this is really interesting. And then this article came out, one of the studies that came out of Framingham, I think, and it was one of these fairly typical things: they sent out a food questionnaire, it was about strawberries and berries, and they found that women who ate berries were at a 22% reduced risk of cardiovascular disease, which is, you know, the kind of study you see all the time. And I was looking at it, and with my newfound statistical knowledge I said, let me crunch the numbers. And it turned out that you had to take like 86,000 women and force-feed them strawberries every day for two years to prevent one non-fatal heart attack.
Dr. Christopher Labos:
3:27
Right, right. And so, you know, when you frame it that way, it's probably not the most important public health intervention we're ever going to accomplish, right?
Dr. Michael Koren:
3:36
yeah.
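The absolute-versus-relative risk arithmetic behind the strawberry example can be sketched in a few lines of Python. The baseline risk used below is an illustrative assumption, not a number from the study; the point is just that a 22% relative reduction in a rare outcome yields a tiny absolute benefit and a very large number needed to treat.

```python
# A sketch of absolute vs. relative risk reduction and number needed to treat.
# The baseline risk is a made-up illustrative figure, not from the study discussed.

def risk_stats(baseline_risk, relative_risk_reduction):
    """Return (absolute risk reduction, number needed to treat)."""
    treated_risk = baseline_risk * (1 - relative_risk_reduction)
    arr = baseline_risk - treated_risk  # absolute risk reduction
    nnt = 1 / arr                       # people treated to prevent one event
    return arr, nnt

# A 22% relative risk reduction sounds impressive, but if the baseline risk
# over the study period is only 0.05%, the absolute benefit is tiny.
arr, nnt = risk_stats(baseline_risk=0.0005, relative_risk_reduction=0.22)
print(f"ARR = {arr:.5%}, NNT = {nnt:,.0f}")
```

Multiplying the ARR by 100 answers the question "how many people out of 100 will benefit?"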
Dr. Christopher Labos:
3:36
But I was like, I need to do something about this, and I wanted to get involved with science communication and the media. One of my colleagues, his parents were journalists, and I said, what do you think I should do? He's like, well, why don't you just send a letter to the editor of the local newspaper? I was like, would they publish that? And they were like, absolutely. And so I wrote a letter to the editor about the whole strawberry thing and absolute risk reduction, and why we need to understand food research better. And their response was, oh, this is really interesting, can you cut out 3,000 words, and then we'll publish it in the newspaper? And they did. And I was like, so excited. I got about seven copies.
Dr. Christopher Labos:
4:09
I got copies for my parents and a scrapbook, and I emailed them like, can I write others? And they were like, sure, we're not going to pay you, but go nuts. And so I started writing letters to the editor, and then that became a regular column, and then the column became radio segments, and the radio segments became TV segments. And so I started doing that regularly.
Dr. Christopher Labos:
4:34
And then, you know, COVID hit, and all of a sudden you needed medical experts. And because I had a track record, and they knew they could rely on me, and because I had a good camera and microphone setup, like you're seeing now, I started doing the COVID updates. A lot of it at the beginning was not really infectious disease knowledge; it was about interpreting data and knowing how to separate the wheat from the chaff, right. And, you know, it was just about telling people, I wouldn't really rely on this, this is kind of sketchy information, this is where the good information is coming from. And so a lot of it sort of exploded during COVID. And then I started doing a whole bunch of other things for Medscape, and it just started branching out into all kinds of science communication venues.
Dr. Michael Koren:
5:15
Nice, beautiful. Well, you do a terrific job, so thank you for helping to educate the public. So let's jump into some of the things that you mentioned. You mentioned statistics, so we'll jump into statistics, because I think that's really important.
Dr. Christopher Labos:
5:30
I think we're going to lose all the viewership right now. We're going to lose it. We're going to talk about statistics. Click.
Dr. Michael Koren:
5:34
Right, right. Well, I was going to hold back on that until the end of our conversation, but while we still have people's attention, maybe we should get into it a little bit. So I like to start with the Mark Twain line you might have heard: that there are three types of mendacities, lies, damned lies, and statistics. And you brought up a good example of it, where you read something, oh, a 22% reduction, and it turns out that's a few patients out of more than 80,000 people, so obviously not that meaningful. And you're getting into the concept of relative risk reduction, which is the 22%, versus absolute risk reduction, which is the number of people who are actually affected. So that is a great point, a fabulous point, that people don't always understand particularly well, but it's a good question to ask. And the easiest way to ask about absolute risk reduction is simply: how many people out of 100 will benefit from this?
Dr. Christopher Labos:
6:25
Yeah, and that's an important point. And listen, not to become too nihilistic: there is a reason why relative risk reduction is used in most scientific research. That's how the math works out, right. And I think this is also important, because it would be very easy for me to come on shows like this and say all medical research is wrong. And there have been some people during COVID, very prominent physicians, who were like, everybody is wrong except for me, I would do a much better job than Anthony Fauci. And it's like, yeah, calm down there. Yeah, exactly.
Dr. Christopher Labos:
7:25
So there is a reason why relative risk reduction is used. There is a reason why odds ratios are sort of the standard and risk ratios are used less, even though odds ratios are not as useful clinically, because they don't have the same type of interpretation. There's a reason why that stuff became the default. There's a reason why frequentist statistics became the default over Bayesian statistics.
Dr. Michael Koren:
7:26
Yeah, exactly, yeah. So let's just define some of those things for the audience a little bit. An odds ratio is sometimes used because it's the odds of one thing happening versus the odds of it not happening, but it accentuates benefits or harms, so from a practical standpoint, it can make something look much better than it in fact is. Relative risk is going to be less accentuating, and absolute risk reduction is going to be the most conservative way of looking at what the true benefit is.
Dr. Michael Koren:
7:51
So just giving people that concept.
Dr. Christopher Labos:
7:54
Yeah, and the fascinating thing is that all of these measures are correct, right? It's not as if one is wrong; it's just that they don't mean what you think they mean. Most people think the words odds and risk are synonyms. They're not, right? If you had a die and you rolled it, and I asked you, what is the risk that you're going to roll a four, you would say it's one in six, because there are six sides and one of them is the one you want. But what are the odds of rolling a four? Most people don't know the answer to that question. The odds of rolling a four on a die are one in five, because it's one positive event divided by the five negative events, the five sides you don't care about, right? Both of those are correct. It's just that our brains are not hardwired to think in terms of odds.
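The dice example can be checked in a couple of lines; the sketch below just restates the arithmetic from the conversation in Python:

```python
# Risk (probability) vs. odds of rolling a four on a fair six-sided die.
favorable = 1    # one face shows a four
unfavorable = 5  # five faces do not
total = favorable + unfavorable

risk = favorable / total        # 1/6: event over all outcomes
odds = favorable / unfavorable  # 1/5: event over non-events

print(f"risk = {risk:.3f}, odds = {odds:.3f}")

# The two are interconvertible, which is why both are "correct":
assert abs(odds - risk / (1 - risk)) < 1e-12
assert abs(risk - odds / (1 + odds)) < 1e-12
```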
Dr. Christopher Labos:
8:38
Statisticians think in terms of odds, bookies think in terms of odds, right, but the general public doesn't. So if you publish a paper with an odds ratio, you might get an odds ratio of three, where if you were to compute the risk ratio, the risk ratio would be two; the odds ratio is always going to be more extreme than the risk ratio. And there's a hilarious example of that in cardiology, actually, where they went to, I think it was an American Heart Association meeting, one of the major conferences, and they asked cardiologists, would you send this patient for an angiogram? And they answered yes, yes, yes, yes, yes.
Dr. Christopher Labos:
9:14
But they found that there was a discrepancy; they were having actors read scripts, and there was a discrepancy between the white actors and the Black actors. And people said, well, you know, there's a racial bias in cardiology that we need to address, which is a fair point, and it's actually a true statement. But then somebody said, yeah, but you reported the odds ratio. The odds ratio was 0.6, like a 40% difference between the white and the Black actors, but if you look at the risk ratio, it's actually much less pronounced. It's still an issue, but it's less dramatic. And so it's really important to understand the significance of the statistics that you are using, because, you know, there's lies, there's damned lies, and there's statistics. Statistics can be misused in the same way that anything can, and so it's important to have a good mathematical grounding so that you know what you're talking about, because you could unintentionally make a problem seem worse than it is; the odds ratio exaggerates the effect a little bit more than the relative risk does.
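A small two-by-two example makes the point concrete. The counts below are made up for illustration (they are not from the angiogram study); they just show how an odds ratio lands further from 1 than the corresponding risk ratio:

```python
# Odds ratio vs. risk ratio from a 2x2 table (hypothetical counts).
# a, b: group 1 with / without the outcome; c, d: group 2 with / without.

def or_and_rr(a, b, c, d):
    risk1 = a / (a + b)
    risk2 = c / (c + d)
    rr = risk1 / risk2              # risk ratio
    odds_ratio = (a / b) / (c / d)  # odds ratio
    return odds_ratio, rr

# Say 40 of 100 patients in one group are referred, vs. 60 of 100 in the other.
o, r = or_and_rr(40, 60, 60, 40)
print(f"OR = {o:.2f}, RR = {r:.2f}")  # OR = 0.44, RR = 0.67
```

Reading the OR of 0.44 as a "56% difference" overstates what the RR of 0.67 describes, which is the same trap discussed above for the 0.6 odds ratio.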
Dr. Michael Koren:
10:29
And the most conservative concept again is the absolute risk reduction, and so that's a great take-home message for most physicians, quite frankly, and for patients, and so those are the kind of questions that you should be asking, or at least thinking about when you look at a study result. So you also use the term frequentist versus Bayesian and we had a little conversation about that before and I think, again, it's a foreign concept for most physicians, quite frankly, and maybe it's worth a couple of statements on that. Do you want to dig into that? We'll dig into that a little bit together.
Dr. Christopher Labos:
11:01
Sure. I mean, it's going to take about an hour, so, like, settle in.
Dr. Michael Koren:
11:05
Yeah, we want the three-minute version.
Dr. Christopher Labos:
11:07
Okay, okay. I assumed this was like the Joe Rogan podcast and we could go for like four hours.
Dr. Michael Koren:
11:10
Yeah, we don't smoke pot during these broadcasts.
Dr. Christopher Labos:
11:13
Okay, fair enough, sorry.
Dr. Michael Koren:
11:16
It's not legal yet in Florida.
Dr. Christopher Labos:
11:19
Oh, okay, interesting, there you go there you go.
Dr. Michael Koren:
11:21
Another reason to come down to Florida.
Dr. Christopher Labos:
11:23
Yeah, okay, I'll give you a three-minute version of what Bayesian statistics is. For anybody who's interested, there's a great book, I think I got it as a gift, called The Theory That Would Not Die. So if you're interested in a non-mathematical explanation of what Bayesian statistics is, with historical background and some very simple examples, I think it's a great book. It's very accessible. I can't remember the author; I read it quite a few years ago and thought it was really good.
Dr. Christopher Labos:
11:48
Bayesian statistics is named after Bayes, who was an English reverend, of all things. It's a way of looking at data. The way most statistics works is, you have a theory and then you try to disprove it, right? So when you take the p-value, which is the statistic that we use in most medical research analysis, well, almost nobody can give you the correct definition of a p-value, because it's so convoluted. It's basically this, and I'll see if I can get it in one shot: the p-value is the probability that you would observe the data that you did, or more extreme data, if the null hypothesis were true, that is, if there was no difference between the two medications that you were studying. If there was no difference, the probability that you would see the data you got in your experiment is the p-value. So if the p-value is very, very small, you say, it's very unlikely that I would have gotten this data if there were really no difference.
Dr. Michael Koren:
12:55
Right, yes, you said that beautifully. That was perfect. And then the convention, of course, is that we talk about a p-value of less than 0.05 as meaning significance, meaning that the likelihood that a result that extreme occurred by chance is less than 1 in 20. But we know there are a lot of caveats to that, and a lot of discussion around that concept. But that was beautiful. I love that.
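What a p-value measures can also be illustrated with a simulation. Everything below is hypothetical: two trial arms with the same true event rate (so the null hypothesis is true by construction), counting how often chance alone produces a difference at least as large as some observed one:

```python
# Simulating the p-value's definition: assuming no real difference between
# two groups, how often does chance produce a difference this large or larger?
# All numbers are illustrative, not from any real trial.
import random

random.seed(0)
n, true_rate = 200, 0.10   # 200 patients per arm, identical true event rate
observed_diff = 0.05       # the difference our hypothetical trial reported

trials = 10_000
extreme = 0
for _ in range(trials):
    a = sum(random.random() < true_rate for _ in range(n)) / n
    b = sum(random.random() < true_rate for _ in range(n)) / n
    if abs(a - b) >= observed_diff:
        extreme += 1

p_value = extreme / trials  # fraction of "null worlds" at least this extreme
print(f"simulated p-value ~ {p_value:.3f}")
```

If that fraction is tiny, the observed difference would be surprising in a world where the treatment does nothing, which is exactly the statement unpacked above.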
Dr. Christopher Labos:
13:16
Yeah, and here's the thing: that p-value threshold is very arbitrary. I mean, it goes back to Ronald Fisher, who just kind of picked the number, like, I don't know, 5%, and it just became standard for no particular reason, right. And there's no reason why we can't lower that threshold. But the whole premise of that field of statistics, which is the standard across most of medical research, frequentist statistics, is that it tells you how likely your data would be if the theory were true, or if the theory were untrue, depending on how you want to look at it. But that's not what we care about. We want to know whether the theory is true or not.
Dr. Christopher Labos:
13:52
So Bayesian statistics goes about it the other way. Rather than starting with a premise about the theory and then looking at the data, you look at the data and develop your theory, and it asks: given the data that you have in front of you, what is the probability that your theory is true? It's a different way of analyzing the data, and it's actually a much more logical way to interpret it, because it's sort of the difference between sensitivity and specificity on one hand and positive and negative predictive value on the other, right? For people who are familiar with these terms from a basic primer in epidemiology: the sensitivity of a test says, of all the people who are sick, how many of them will test positive. That's the sensitivity. And specificity says, of all the people who are well, how many of them will test negative.
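Sensitivity and specificity belong to the test, but the predictive values, the quantities a clinician actually wants, also depend on how common the disease is. A short sketch with illustrative numbers (not from any particular test):

```python
# From sensitivity/specificity to predictive values via Bayes' theorem.
# The test characteristics and prevalence below are illustrative assumptions.

def predictive_values(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence               # sick and test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # well but test positive
    true_neg = specificity * (1 - prevalence)         # well and test negative
    false_neg = (1 - sensitivity) * prevalence        # sick but test negative
    ppv = true_pos / (true_pos + false_pos)  # P(sick | positive test)
    npv = true_neg / (true_neg + false_neg)  # P(well | negative test)
    return ppv, npv

# A decent test applied to a disease only 5% of patients have:
ppv, npv = predictive_values(sensitivity=0.90, specificity=0.80, prevalence=0.05)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 19.1%, NPV = 99.3%
```

Even with 90% sensitivity, most positives here are false positives, because the disease is rare.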
Dr. Michael Koren:
14:33
Right. So, Christopher, from a clinical medicine standpoint, we usually use a Bayesian approach. So, for example, somebody comes to the emergency room because they're short of breath, they're coughing, they have a fever. We do a chest X-ray, we think we see pneumonia on the chest X-ray, and we diagnose that patient with pneumonia. But we know that that chest X-ray is an imperfect tool.
Dr. Michael Koren:
14:56
But in this setting, what it does is move us from the presumption of pneumonia to, quote, the official diagnosis of pneumonia, although there could still be some level of doubt. The flip side is, if that patient came in having fractured their foot, with absolutely no symptoms, and you got a chest X-ray that comes back and says pneumonia, you'd be very skeptical about it. The exact same chest X-ray, but you're just moving the needle in terms of what the truth is, not actually deciding that that is the truth. That's how clinical medicine works, whereas clinical trials work in the frequentist way that you were explaining, where you say, okay, this is the null hypothesis, this is our hypothesis, and we're going to enroll a number of patients who help us determine whether it's true or not true, based on this standard of evidence.
Dr. Christopher Labos:
15:46
Yeah, and I think what's fascinating is that there is a shift to make statistics more Bayesian, and in fact there's nothing that stops you from using Bayesian statistics in a clinical trial. I've seen a few of them. They're not very common, and that's because a lot of people are unfamiliar with Bayesian statistics, and it's also because, up until like the 1980s, the computers we had were just not powerful enough to do Bayesian analysis. The great wonder of frequentist statistics is that you can literally do it with pencil and paper, and that's what a lot of people do.
Dr. Christopher Labos:
16:17
Just to give you another example, you mentioned the X-ray; another perfect example is stress testing, right? Just because the stress test is abnormal does not mean the patient has heart disease. It really depends on the pre-test probability. If you have a positive stress test in a young woman with no risk factors, well, that's almost certainly a false positive, whereas if you have a positive stress test in a middle-aged man who's a smoker and has high blood pressure, well then it's very likely that he has heart disease. And that's the problem with frequentist statistics: it does not incorporate the pre-test probability into the analysis, whereas Bayesian statistics does. And that's why it's so fascinating to me, and much more intuitive, because Bayesian statistics is how we practice medicine, and yet frequentist statistics is how we analyze data, and I think a lot of people don't fully appreciate why that is so conflicting.
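The stress-test point can be put in numbers. With made-up but plausible test characteristics held fixed, the post-test probability of disease after a positive result swings dramatically with the pre-test probability:

```python
# Same test, different patients: post-test probability after a positive result
# depends heavily on the pre-test probability. All figures are illustrative.

def post_test_probability(sens, spec, pre_test):
    p_positive = sens * pre_test + (1 - spec) * (1 - pre_test)  # P(test positive)
    return sens * pre_test / p_positive                         # P(disease | positive)

sens, spec = 0.85, 0.75  # assumed stress-test sensitivity and specificity
for label, pre in [("young woman, no risk factors", 0.02),
                   ("middle-aged male smoker with hypertension", 0.60)]:
    post = post_test_probability(sens, spec, pre)
    print(f"{label}: pre-test {pre:.0%} -> post-test {post:.0%}")
# The low-risk positive stays around 6%; the high-risk positive jumps to ~84%.
```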
Dr. Michael Koren:
17:13
Yeah, it's confusing. It's really confusing for people, from a number of standpoints. The flip side is that people will bring in a report that says, you know, "rule out lung cancer" in a 20-year-old who's a non-smoker, because they saw something on an X-ray that got them upset, and you're not acting upset. That's what the report says, but physicians have the insight not to be particularly concerned about it, based on this Bayesian concept. So, really fascinating stuff. Let's take a break now; I also want to talk to you a little bit about your book, which I thought was fascinating, and a few examples from the book about how easy it is to misinterpret data and how important it is to understand these concepts so that you can make good medical decisions, which is really what our MedEvidence platform is all about.
Announcer:
18:02
Thanks for joining the MedEvidence podcast. To learn more, head over to MedEvidence.com or subscribe to our podcast on your favorite podcast platform.