playlist | file_name | content
---|---|---
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 13_Machine_Learning_for_Mammography.txt | ADAM YALA: OK, great. Well, thank you for the great setup. So for this section, I'm gonna talk about some of our work in interpreting mammograms for cancer. Specifically it's going to go into cancer detection and triage mammograms. Next, we'll talk about our technical approach in breast cancer risk. And then finally close up in the many, many different ways to mess up and the way things can go wrong, and how does it [INAUDIBLE] clinical implementation. So let's kind of look more closely at the numbers of the actual breast cancer screening workflow. So as Connie already said, you might see something like 1,000 patients. All of them take mammograms. Of that 1,000, on average maybe 100 get called back for additional imaging. Of that 100, something like 20 will get biopsied. And you end up with maybe five or six diagnoses of breast cancer. So one very clear thing you see about problems when you look at this funnel is that way over 99% of people that you see in a given day are cancer-free. So your actual incidence is very low. And so there's kind of a natural question that can come up. What can you do in terms of modeling, if you have an even OK cancer detection model, to raise the incidence of this population by automatically reading a portion of the population as healthy? Does everybody just follow that broad idea? OK. That's enough head nods. So the broad idea here is you're going to train the cancer detection model to try to find cancer as well as we can. Given that, we're going to try to say, what's a threshold on a development set such that we can kind of say below the threshold no one has cancer. And if we use that at test time, simulating clinical implementation, what would that look like? And can we actually do better by doing this kind of process? And the kind of broad plan of how I'm gonna talk about this-- I'm gonna do this for the next project as well. Of course, we're going to talk about the kind of dataset collection and how we think about, like, you know, what is good data and how do we think about that. Next, the actual methodology, and go into the general challenges when you're modeling mammograms for any computer vision task, specifically in cancer, and also, obviously, risk. And lastly, how we thought about the analysis and some kind of objectives there. So to kind of dive into it, we took consecutive mammograms. I'll get back into this later. This is actually quite important. We took consecutive mammograms from 2009 to 2016. This started off with about 280,000 mammograms. And once we kind of filtered-- so at least one year of follow-up, we ended up with this final setting where we had 220,000 mammograms for training and about 26,000 for development and testing. And the way we labeled it, it all comes down to, is this a positive mammogram or not? We didn't look at what cancers were caught by the radiologists. We'd say, you know, was a cancer found by any means within a year? And where we looked was through the radiology EHR and the Partners-- kind of five-hospital registry. And there we were trying to say-- if in any way we can tell a cancer occurred, let's mark it as such, regardless of whether it was caught on MRI or at some kind of later stage. And so the thing we're trying to do here is just mimic the real world of how we're trying to catch cancer. 
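To make the labeling rule above concrete, here is a minimal sketch, in Python/pandas, of how one might derive the "cancer within one year, found by any means" label by joining exams against a pooled diagnosis table. The file names and column names are hypothetical, not the actual MGH pipeline.

```python
import pandas as pd

# Hypothetical inputs: one row per screening exam, and one row per cancer
# diagnosis pooled from the radiology EHR and the Partners registry.
exams = pd.read_csv("mammogram_exams.csv", parse_dates=["exam_date"])      # patient_id, exam_id, exam_date
diagnoses = pd.read_csv("cancer_diagnoses.csv", parse_dates=["dx_date"])   # patient_id, dx_date

merged = exams.merge(diagnoses, on="patient_id", how="left")
days_to_dx = (merged["dx_date"] - merged["exam_date"]).dt.days

# An exam is positive if any diagnosis, caught by any means, falls within a year.
merged["positive"] = days_to_dx.between(0, 365)
labels = merged.groupby("exam_id")["positive"].any().astype(int)
```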
And finally, an important detail: we always split by patient so that your results aren't just memorizing that this specific patient didn't have cancer. If you had some overlap, that's some bad bias to have. OK. That's pretty simple. Now let's go into the modeling. It's going to kind of follow two chunks. One chunk is going to be on the kind of general challenges, and it's kind of shared between the variety of projects. And next is going to be kind of more specific analysis for this project. So kind of a general question you might be asking, I have some image. I have some outcome. Obviously, this is just image classification. How is it different from ImageNet? Well, it's quite similar. Most lessons are shared. But there are some key differences. So I gave you two examples. One of them is a scene in my kitchen. Can anyone tell me what the object is? This is not a particularly hard question. AUDIENCE: [Intermingled voices] Dog. Bear. ADAM YALA: Right. AUDIENCE: Dog. ADAM YALA: It is almost all of those things. So that is my dog, the best dog. OK. So can anyone tell me, now that you've had some training with Connie, if this mammogram indicates cancer? Well, it does. And this is unfair for a couple of reasons. Let's go into, like, why this is hard. It's unfair in part because you don't have the training. But it's actually a much harder signal to learn. So first let's kind of delve into it. In this kind of task, the image is really huge. So you have something like a 3,200 by 2,600 pixel image. This is a single view of a breast. And in that, the actual cancer they're looking for might be 50 by 50 pixels. So intuitively your signal to noise ratio is very different. Whereas in an ImageNet image, my dog is like the entire image. She's huge in real life and in that photo. And the image itself is much smaller. So not only are the images much smaller, but the relative size of the object in there is much larger. To kind of further compound the difficulty, the pattern you're looking for inside the mammogram is really context-dependent. So if you saw that pattern somewhere else in the breast, it doesn't indicate the same thing. And so you really care about where in this kind of global context this comes out. And if you kind of take the mammogram at different times with different compressions, you would have this kind of non-rigid morphing of the image that's much more difficult to model. Whereas that's a more or less context-independent dog. You see that kind of frame kind of anywhere, you know it's a dog. And so it's a much easier thing to learn in a traditional computer vision setting. And so the core challenge here is that the image is both too big and too small. So if you're looking at just the number of cancers we have, the cancer might be less than 1% of the mammogram and about 0.7% of your images have cancers. Even in this data set, which is from 2009 to 2016 at MGH, a massive imaging center, in total across all of that, you will still have less than 2,000 cancers. And this is super tiny compared to regular object classification data sets. And this is looking at over a million images if you look at all the four views of the exams. And at the same time, it's also too big. So even if I downsample these images, I can only really fit three of them on a single GPU. And so this kind of limits the batch size I can work with. 
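Going back to the split-by-patient detail at the top of this passage: a minimal sketch, assuming scikit-learn and the hypothetical `exams` table from the previous snippet, of a split in which no patient appears in more than one partition.

```python
from sklearn.model_selection import GroupShuffleSplit

# Group by patient so the model can never "memorize" that a particular
# patient is cancer-free across train and test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, heldout_idx = next(splitter.split(exams, groups=exams["patient_id"]))
train_exams = exams.iloc[train_idx]
heldout_exams = exams.iloc[heldout_idx]
# A second GroupShuffleSplit over heldout_exams would separate development from test.
```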
And whereas, for comparison, if I took just the regular ImageNet size, I could fit batches of 128 easily, happy days, and do all this parallelization stuff, and it's just much easier to play with. And finally, the actual data set itself is quite large. And so you have to do some-- there are nuisances to deal with in terms of, like, just setting up your server infrastructure to handle these massive data sets, and also be able to train efficiently. So you know, the core challenge here across all of these kind of tasks is, how do we make this model actually learn? The core problem is that our signal to noise ratio is quite low. So training ends up being quite unstable. And there are a couple of simple levers you can play with. The first lever is often the deep learning initialization. Next, we're gonna talk about kind of the optimization or architecture choice and how this compares to what people often do in the community, including in a recent paper from yesterday. And then finally, we're gonna talk about something more explicit for the triage idea and how we actually use this model once it's trained. OK. So before I go into how we made these choices, I'm just going to say what we chose to give you context before I dive in. So we used ImageNet initialization. We use a relatively large batch size-ish of 24. And the way we do that is by taking 4 GPUs and just stepping a couple of times before doing an optimizer step. So you do a couple rounds of backprop first to accumulate those gradients before doing optimization. And you sample balanced batches at training time. And for the backbone architecture we use ResNet-18. It's just kind of, like, fairly standard. OK. But as I said before, one of the first key decisions is how do you think about your initialization? So this is a figure of ImageNet initialization versus random initialization. It's not any particular experiment. I've done this across many, many times. It's always like this. Where if you use ImageNet initialization, your loss drops immediately, both in train loss and development loss, and you actually learn something. Whereas when you do random initialization, you kind of don't learn anything. And your loss kind of bounces around the top for a very long time before it finds some region where it quickly starts learning. And then it will plateau again for a long time before quickly starting to learn. And to kind of give some context, about 50 epochs takes on the order of, like, 15, 16 hours. And so to wait long enough to even see if random initialization could perform as well is beyond my level of patience. It just takes too long, and I have other experiments to be running. So this is more of an empirical observation that the ImageNet initialization learns immediately. And there's some kind of question of why is this? Our theoretical understanding of this is not that strong. We have some intuitions of why this might be happening. We don't think it's anything about this particular filter of this dog being really great for breast cancer. That's quite implausible. But if you look into a lot of the earlier research in terms of the right kind of random initialization for things like ReLU networks, a lot of focus was on does the activation pattern not blow up as you go further down the line. One of the benefits of starting with a pre-trained network is that a lot of those kind of dynamics are already figured out for a specific task. And so shifting from that to other tasks has seemed to be not that challenging. 
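To make the training recipe just described concrete (ImageNet-initialized ResNet-18, balanced sampling, and an effective batch of about 24 built up by accumulating gradients over several small per-GPU batches), here is a hedged PyTorch sketch. The dataset object, label array, learning rate, and accumulation factor are illustrative assumptions, not the actual code behind the paper.

```python
import torch
import torchvision
from torch.utils.data import DataLoader, WeightedRandomSampler

# ImageNet-pretrained backbone with a single cancer/no-cancer logit.
# Grayscale mammograms are assumed replicated to three channels so the
# pretrained first-layer filters can be reused unchanged.
model = torchvision.models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model = model.cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Balanced batches: weight each image inversely to its class frequency so
# positives show up in every batch despite well-under-1% incidence.
labels = torch.as_tensor(train_labels)                  # 0/1 per training image (hypothetical)
class_counts = torch.bincount(labels).float()
weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(train_dataset, batch_size=3, sampler=sampler)   # ~3 images fit per GPU

accumulation_steps = 8                                  # 8 x 3 = effective batch of 24
optimizer.zero_grad()
for step, (images, targets) in enumerate(loader):
    logits = model(images.cuda()).squeeze(1)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets.float().cuda())
    (loss / accumulation_steps).backward()              # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                                # one weight update per "large" batch
        optimizer.zero_grad()
```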
Another possible area of explanation is actually in the BatchNorm statistics. So if you remember, we can only fit three images per GPU. And the way BatchNorm is implemented across every deep learning library that I know of, it computes statistics independently per GPU to minimize the kind of inter-GPU communication. And so it's also less able to kind of estimate these from scratch. But if you're starting with the BatchNorm statistics from ImageNet and just slowly shifting them over, it might also result in some stability benefits. But in general, a true deeper theoretical understanding, as I said, still eludes us. And it isn't something I can give too many conclusions about, unfortunately. OK. So that's initialization. And if you don't get this right, kind of nothing works for a very long time. So if you're gonna start a project in this space, try this. Next, another important decision that, if you don't do it, kind of breaks, is your optimization/architecture choice. So as I said before, kind of a core problem in stability here is this idea that our signal to noise ratio is really low. And so a very common approach throughout a lot of the prior work, and things I actually have tried myself before, is to say, OK, let's just break down this problem. We can train at a patch level first. We're going to take just subsets of a mammogram in this little bounding box, have it annotated for radiology findings like benign masses or calcifications and things of that sort. We're going to pre-train on that task to have this kind of pixel level prediction. And then once we're done with that, we're going to fine tune that initialized model across the entire image. So you kind of have this two-stage training procedure. And actually, another paper that came out just yesterday does the exact same approach with some slightly different details. But one of the things we wanted to investigate is if you just-- oh, and the base architecture that's always used for this, there are quite a few valid options of things that just get reasonable performance on ImageNet, things like VGG, Wide ResNets and ResNets. In my experience, they all performed fairly similarly. So it's kind of a speed/benefit trade-off. And there's an advantage to using fully convolutional architectures, because if you have fully connected layers that assume a specific dimensionality, you can convert them to convolutional layers, but it's just more convenient to start with a fully convolutional architecture. It's going to be resolution invariant. Yes. AUDIENCE: In the last slide when you do patches-- ADAM YALA: Yes. AUDIENCE: How do you label every single patch? Are they just labeled with a global label? Or do you have to actually look and catch, and figure out what's happened? ADAM YALA: So normally what you do is you have positive patches labeled. And then you randomly sample other patches. So from your annotation-- so, for example, a lot of people do this on public data sets like the Florida DDSM dataset that has some entries, of like, here are benign masses, benign calcs, malignant calcs, et cetera. What people do then is take those annotations. They will randomly select other patches and say, if it's not there, it's negative. And I'm going to call it healthy. And then they'll say, if this bounding box overlaps with a patch by some margin, call it the same label. So they do this heuristically. And other data sets that are proprietary also kind of play with a similar trick. In general, they don't actually label every single pixel accordingly. 
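As a rough illustration of the patch-labeling heuristic just described, here is a sketch with a hypothetical box format and an illustrative overlap threshold; it is not taken from any particular dataset's tooling.

```python
def patch_label(patch_box, lesion_boxes, min_overlap=0.5):
    """A sampled patch inherits a lesion's label if it overlaps that lesion's
    annotated bounding box by at least `min_overlap` of the patch area;
    otherwise it is called healthy (label 0). Boxes are (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = patch_box
    patch_area = max(0, px2 - px1) * max(0, py2 - py1)
    for (x1, y1, x2, y2), label in lesion_boxes:
        inter_w = max(0, min(px2, x2) - max(px1, x1))
        inter_h = max(0, min(py2, y2) - max(py1, y1))
        if patch_area > 0 and (inter_w * inter_h) / patch_area >= min_overlap:
            return label        # e.g. benign mass, benign calc, malignant calc, ...
    return 0                    # no annotated finding overlaps: treated as healthy
```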
But there are relatively minor differences in how people do this. And the results are fairly similar, regardless. Yes. AUDIENCE: When you go from the patch level to the full image, if I understand correctly, the architecture hasn't quite changed because it's just convolution over a larger-- ADAM YALA: Exactly. So the end thing right before we do the prediction is normally-- ResNet, for example, does a global average pool, channel-wise across the entire feature map. And so they just-- for the patch level they take in an image that's 250 by 250, do the global average pool across that to make the prediction. And when they just go up to the full resolution image, now you're taking a global average pool over a 3,000 by 2,000. AUDIENCE: And presumably there might be some scaling issue that you might need to adjust. Do you do any of that? Or are you just-- ADAM YALA: So you feed it in at the full resolution the entire time. So you just-- do you see what I mean? So you're taking a crop. So the resolution isn't changing. So the same filter map should be able to kind of scale accordingly. But if you do things like average pooling, then you're kind of-- any one thing that has a very high activation will get averaged down lower. And so, for example, in our work, we use max pooling to kind of get around that. Any other questions? But if this looks complicated, have no worries, because we actually think it's totally unnecessary. And this is the next slide. So good for you. So as I said before, the core problem here is signal to noise. So one obvious thing to kind of think about is, like, OK. Maybe doing SGD with a batch size of three when the lesion is less than 1% of the image is a bad idea. If I just take less noisy gradients by increasing my batch size, which means using more GPUs and taking more steps before doing the weight update, we actually find that the need to do this actually goes away completely. So these are experiments I did on the publicly available data set a while back while we were figuring this out. If you take this kind of [INAUDIBLE] architecture and fine tune with a batch size of 2, 4, 10, 16, and compare that to just a one-stage training where you just do the [INAUDIBLE] from the beginning, initialized with ImageNet, as you use different batch sizes, you quickly start to close the gap on the development AUC. And so for all the experiments that we do broadly, we find that we actually get reasonably stable training by just using a batch size of 20 and above. And this kind of comes down to, if you use a batch size of one, it's just particularly unstable. Another detail is that we always sample balanced batches. Because otherwise you'd be sampling, like, 20 batches before you see a single positive sample. You just don't learn anything. Cool. So that is like, if you do that, you don't do anything complicated. You don't do any fancy cropping or anything of that sort, or, like, dealing with the VGG annotations. We found that actually using the VGG annotations for this task is not actually helpful. OK. No questions? Yes. AUDIENCE: So with the larger batch sizes you don't use the magnified patches? ADAM YALA: We don't. We just take the whole image from the beginning. Pretend you just see the annotation at the whole image level, cancer or not within a year. It's a much simpler setup. AUDIENCE: I don't get it. That's the same thing I thought you said you couldn't do for memory reasons. ADAM YALA: Oh. 
So you just-- instead of-- so normally when you train the network, the most common approach is you do backprop and then step. Because if you do backprop several times, you're accumulating the gradients, at least in PyTorch. And then you can do the step afterwards. So instead of doing the whole batch at one time, you just do it serially. So there you're just trading time for space. The minimum, though, is you have to fit at least a single image per GPU. And in our case we can fit three. But to make this actually scale, we use four GPUs at a time. Yes. AUDIENCE: How much is the trade-off with time? ADAM YALA: So if I'm gonna make the batch size any bigger, I would only do it in increments of, let's say, 12, because that's how much I can fit within my set of GPUs at the same time. But to control the size of the experiment, you want to have kind of the same number of gradient updates per experiment. So if I want to use a batch size of 48, all my experiments, instead of taking about half a day, take about a day. And so there's kind of, like, this natural trade-off as you go along. So one of the things I mentioned at the very end is we're considering some adversarial approach for something. And one of the annoying things about that is that if I have five discriminator steps, oh my god. My experiment-- it'll take three days per experiment. And the [INAUDIBLE] update of someone that's trying to design a better model becomes really slow when the experiments start taking this long. Yes. AUDIENCE: So you said the annotations did not help with the training. Is that because the actual cancer itself is not really different from the dense tissue, and the location of that matters, and not the actual granularity of the-- what is the reason? ADAM YALA: So in general when something doesn't help, there's always kind of like a possibility of two things. One thing is that the whole image signal kind of subsumes that smaller scale signal. Or there is a better way to do it that I haven't found that would help. And telling those apart is very hard. As of now, the task we're [INAUDIBLE] on is whole image classification. And so on that task it's possible that the kind of surrounding context-- so when you do a patch with an annotation, you kind of lose the context in which it appears. So it's possible that just by looking at the whole context every time, it's as good-- you don't get any benefit from kind of the zoomed-in boxes. However, we're not evaluating on kind of an object detection type of evaluation metric, i.e., how well we are catching the box. And if we were, we'd probably have much better luck with using the VGG annotation. Because you might be able to tell some of those discriminations by, like, this looks like a breast that's likely to develop cancer at all. And the ability of the model to do that is part of why we can do risk modeling. Which is going to be the kind of the last bit of the talk. Yes. AUDIENCE: So do you do the object detection after you identify whether there's cancer or not? ADAM YALA: So as of now we don't do object detection, in part because we're framing the problem as triage. So there are quite a few toolkits out there to draw more boxes on the mammogram. But the insight is that if there are 1,000 things to look at, drawing more boxes per image just means 2,000 things to look at. And it isn't necessarily the problem we're trying to look at. There's quite a bit of effort there. And it's something we might look into later in the future. 
But it's not the focus of this work. Yes. AUDIENCE: So Connie was saying that the same pattern appearing in different parts of the breast can mean different things. But when you're looking at the entire image at once, I would worry intuitively about whether the convolutional architecture is going to be able to pick that up or whether-- because you were looking for a very small cancer on a very large image. And then you were looking for the significance of that very small cancer in different parts of the image or in different contexts of the image. And I'm just-- I mean, it's a pleasant surprise that this works. ADAM YALA: So there are kind of like two pieces that can help explain that. So the first is that if you look at, like, the receptive fields of the final feature map at the very end of the network, each of those summarizes, through these convolutions, a fairly sizable part of the image. And so each pixel at the very end ends up covering something like a 50 by 50 patch of the image. And so each part does summarize this local context decently well. And when you do the max at the very end, you get some not perfect but OK global summary of what is the context of this image. So something like, let's say, some of the lower dimensions can summarize, like, is this a dense breast or kind of some of the other pattern information that might tell you what kind of breast this is. Whereas any one of them can tell you this looks like a cancer given its local context. So you do have some level of summarization, both because of the channel-wise max at the end, and because each point, through the many, many convolutions of different strides, gives you some of that summary effect. OK, great. I'm going to jump forward. So we've talked about how to make this learn. It's actually not that tricky if we just do it carefully and tune. Now I'll talk about how to use this model to actually deliver on this triage idea. So some of my choices again: ImageNet initialization is going to make your life a happier time. Use bigger batch sizes. And architecture choice doesn't really matter if it's convolutional. And the overall setup that we use through this work and across many other projects is we're training independently per image. Now this is a harder task because you don't actually have the-- you're not taking any of the other views, you're not taking prior mammograms. But this is for kind of harder reasons than that. We're going to get the prediction for the whole exam by taking the maximum across the different images. So if I say this breast has cancer, the exam has cancer. So you should get it checked up. And at each development epoch we're going to evaluate the ability of the model to do the triage task, which I'll step into in a second. And we're going to kind of take the best model that can do triage. So you're always kind of like, your true end metric is what you're measuring during training. And you're going to do model selection and kind of hyperparameter tuning based on that. And the way we're going to do triage and our goal here is to mark as many people as healthy as possible without missing a single cancer that we would have caught. So intuitively, we kind of take all the cancers that the radiologists would have caught, look at the probability of cancer across these images, and just take the minimum of those and call that the threshold. That's exactly what we do. 
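A minimal sketch of the exam-level aggregation and threshold selection just described: per-image probabilities are reduced to one exam score with a max, and the triage threshold is the lowest development-set score among cancers the radiologists themselves caught. Variable names are hypothetical.

```python
def exam_scores(image_probs, exam_ids):
    """Aggregate per-image cancer probabilities into one score per exam by
    taking the maximum over that exam's images."""
    scores = {}
    for prob, exam in zip(image_probs, exam_ids):
        scores[exam] = max(scores.get(exam, 0.0), prob)
    return scores

# Threshold on the development set: the minimum model score among exams where
# the radiologists actually caught the cancer, so nothing they would have
# found gets read as healthy.
dev_scores = exam_scores(dev_image_probs, dev_exam_ids)
threshold = min(dev_scores[e] for e in radiologist_caught_cancer_exams)

# At test time, exams scoring below the threshold are marked cancer-free.
test_scores = exam_scores(test_image_probs, test_exam_ids)
triaged_out = [e for e, s in test_scores.items() if s < threshold]
```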
And another detail that's quite relevant: often if you want these models to output a reasonable probability, like this is the probability of cancer, and you train on 50/50 sampled batches, by default your model thinks that the average incidence is 50%. So it's crazily confident all the time. So to calibrate that, one really simple trick is you do something called Platt's method, where you basically just fit, like, a two-parameter sigmoid, or just a scale and a shift, on the development set to make it actually fit the distribution. That way the average probability you would expect actually fits the incidence. And you don't get these kind of crazy off-kilter probabilities. OK. So analysis. The objectives of what we were trying to do here are kind of similar across all the projects. One, does this thing work? Two, does this thing work across all the people it's supposed to work for? So we did a subgroup analysis. First we looked at the AUC of this model. So the ability to discriminate cancer or not. We did it across races, across age groups, and density categories at MGH. And finally, how does this relate to the radiologists' assessments? And if we actually used this at test time on the test set, what would have happened? Kind of a simulation before a full clinical implementation. So overall AUC here was 82 with a confidence interval from 80 to 85. And we did our analysis by age. We found that the performance was pretty similar across every age group. What's not shown here is the confidence intervals. So for example-- but the kind of key core takeaway here is that there was no noticeable gap by age group. We repeated this analysis by race, and we saw the same trend again. The performance kind of ranged generally around 82. And in places where the gap was bigger, the confidence interval was bigger accordingly due to smaller sample sizes, cause MGH is 80% white. We saw the exact same trend by density. The outlier here is very dense breasts. But there are only like 100 of those in the test set. So this confidence interval actually goes from like, 60 to 90. So as far as we know, for the other three categories, it is very much within the confidence intervals and very similar, once again, around 82. OK. So we have a decent idea that this model seems, at least with the population at MGH, to actually serve the relevant populations that exist, as far as we know so far. The next question is, how does the model assessment relate to the radiologists' assessment? So to look at that, we looked at, on the test set, if you look at the radiologists' true positives, false positives, true negatives, false negatives, where do they fall within the model distribution of percentile risk? And if it is below the threshold, we color it in this kind of cyan color. And if it's above the threshold, we color it in this purple color. So this is kind of triage, not triage. The first thing to notice-- this is the true positives-- is that there is like a pretty kind of steep drop-off. And so there was only one true positive that fell below the threshold in a test set of 26,000 exams. So none of this difference was statistically significant. And the vast majority of them are in kind of this top 10%. But you kind of see, like, there's a clear trend here that they kind of get piled up towards the higher percentages. Whereas if you look at the false positive assessments, this trend is much weaker. So you still see that there is some correlation, that there are going to be more false positives at the higher amounts, but it's much less stark. 
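Going back to the calibration detail at the top of this passage: a minimal sketch of Platt's method, fitting a two-parameter sigmoid (a scale and a shift on the logit) on the development set so that probabilities from a model trained on 50/50 batches are rescaled toward the true incidence. This assumes NumPy/SciPy and is illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(dev_logits, dev_labels):
    """Fit p = sigmoid(a * logit + b) by minimizing negative log-likelihood on
    the development set; returns a function mapping logits to calibrated
    probabilities."""
    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a * dev_logits + b)))
        p = np.clip(p, 1e-7, 1.0 - 1e-7)
        return -np.mean(dev_labels * np.log(p) + (1 - dev_labels) * np.log(1 - p))
    a, b = minimize(nll, x0=np.array([1.0, 0.0])).x
    return lambda logits: 1.0 / (1.0 + np.exp(-(a * logits + b)))
```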
And this actually means that a lot of the radiologists' false positives we actually place below the threshold. And so because these assessments are not completely concordant, and we're not just modeling what the radiologist would have said, we get an anticipated benefit of actually reducing the false positives significantly by virtue of this disagreement. And finally, kind of aiding that further, if you look at the true negative assessments, there is not that much trending in where they fall within this. So it shows that they're kind of picking up on different things, and where they disagree gives both areas to improve and ancillary benefits, because now we can reduce false positives. This directly leads into simulating the impact. So one of the things we did, we just said, OK. If, retrospectively on the test set, as a simulation before we truly plug it in, people didn't read below the triage threshold-- so we can't catch any more cancer this way, but we can reduce false positives-- what would have happened? So at the top we have the original performance. So this is looking at 100% of mammograms; sensitivity was 90.6 with specificity of 93. And in the simulation the sensitivity dropped, not significantly, to 90.1, but specificity significantly improved to 93.7 while looking at 81% of the mammograms. So this is like promising preliminary data. But to reevaluate this and go forward, our next step-- let's see if-- oh. I'm going to get to that in a second. Our next step is we need to do clinical implementation to really figure out-- because there's a core assumption here that people read it the same way. But if you have this higher incidence, what does that mean? Can you focus more on the people that are more suspicious? And is the right way to do this just a single threshold to not read? Or a double-ended one, where the most suspicious go to the seniors, cause they're much more likely to have cancer. And so there is quite a bit of exploration here to say, given we have these tools that give us some probability of cancer-- it's not perfect, but it gives us something-- how well can we use that to improve care today? So as a quiz, can you tell which of these will be triaged? So this is no cherry-picking. I randomly picked four mammograms that were below and above the threshold. Can anyone guess which side-- left or right-- was triaged? This is not graded, Chris, so you know. AUDIENCE: Raise your hand for-- ADAM YALA: Oh yeah. Raise your hand for the left. OK. Raise your hand for right. Here we go. Well done. Well done. OK. And then the next step, as I said before, is we need to kind of push to the clinical implementation, because that's where the rubber hits the road. We identify whether there are any biases we didn't detect. And we need to say, can we deliver this value? So the next project is on assessing breast cancer risk. So this is the same mammogram I showed you earlier. It was diagnosed with breast cancer in 2014. It's actually my advisor, Regina's. And you can see that in 2013 you see it's there. In 2012 it looks much less prominent. And five years before, you're really looking at breast cancer risk. So if you can tell from an image whether it is going to be healthy for a long time, you're really trying to model what's the likelihood of this breast developing cancer in the future. Now modeling breast cancer risk, as Connie said earlier, is not a new problem. It's been a quite researched one in the community. 
And the more classical approach is to look at other kinds of global health factors-- the person's age, their family history, whether or not they've had menopause, and kind of any other of these facts we can sort of say are markers of their health-- to try to predict whether this person is at risk of developing breast cancer. People have thought that the image contains something before. The way they've thought about this is through this kind of subjective breast density marker. And the improvements seen from this are kind of marginal, from 61 to 63. And as before, the kind of sketch we're going to go through is dataset collection, modeling, and analysis. In dataset collection we followed a very similar template. We took consecutive mammograms from 2009 to 2012, and we took outcomes from the EHR, once again, and the Partners registry. We didn't do exclusions based on race or anything of that sort, or implants. But we did exclude negatives without follow-up. So if someone didn't have cancer in three years but disappeared from the system, we didn't count them as negatives, so that we have some certainty in both the modeling and the analysis. And as always, we split patients into train, dev, test. The modeling is very similar. It's the same kind of templated lessons as from triage, except we experimented with a model that's only the image. And for the sake of analysis, a model that's the image model I just talked to you about before, concatenated with those traditional risk factors at the last layer and trained jointly. Does that make sense for everyone? So I'm going to call those ImageOnly and Image+RF, or hybrid. OK. Cool? Our kind of goals for the analysis: as before, we want to see does this model actually serve the whole population? Is it going to be discriminative across race, menopause status, family history? And how does it relate to kind of classical notions of risk? And are we actually doing any better? And so just diving directly into that, assuming there are no questions. Good. Just to kind of remind you, this is the kind of the setting. One thing I forgot to mention-- that's why I had the slide here to remind me-- is that we excluded cancers from the first year from the test set. So there is truly a negative screening population. That way we kind of disentangle cancer detection from cancer risk. OK. Cool. So Tyrer-Cuzick is the kind of prior state-of-the-art model. It's a model based out of the UK. Its developer is someone named Sir Cuzick, who was knighted for this work. It's very commonly used. So that one had an AUC of 62. Our image-only model had an AUC of about 68. And the hybrid one had an AUC of 70. So you know, what does this kind of AUC gain give you when you're using a risk model? What it gives you is the ability to build better high-risk and low-risk cohorts. So in terms of looking at high-risk cohorts, our best model placed about 30% of all the cancers in the population in the top 10%, and 3% of all the cancers in the bottom 10%, compared to 18 and 5 for the prior state of the art. And so what this enables you to do, if you're going to say that this 10% should actually qualify for MRI, you can start fighting this problem that the majority of people that get cancer don't get MRI, and the majority of people that get it don't need it. It's all about, does your risk model actually place the right people into the right buckets. Now, we saw that this trend of outperforming the prior state of the art held across races. 
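To illustrate the Image+RF ("hybrid") architecture described above, here is a hedged PyTorch sketch: image features from a ResNet-18 backbone concatenated with traditional risk factors at the last layer and trained jointly. Layer sizes and the risk-factor encoding are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torchvision

class HybridRiskModel(nn.Module):
    """Image backbone features concatenated with traditional risk factors
    (age, family history, menopause status, ...) before the final layer."""
    def __init__(self, num_risk_factors, num_outputs=1):
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=True)
        feature_dim = backbone.fc.in_features                            # 512 for ResNet-18
        self.features = nn.Sequential(*list(backbone.children())[:-1])   # drop the fc head
        self.head = nn.Linear(feature_dim + num_risk_factors, num_outputs)

    def forward(self, image, risk_factors):
        x = self.features(image).flatten(1)          # (batch, 512)
        x = torch.cat([x, risk_factors], dim=1)      # append risk factors
        return self.head(x)

# Usage sketch: model = HybridRiskModel(num_risk_factors=10)
#               logits = model(images, risk_factor_tensor)
```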
And one of the things that was kind of astonishing was that though Tyrer-Cuzick performed reasonably on white women, which makes sense because it was developed only using white women in the UK, it was worse than random [INAUDIBLE] for African-American women. And so this kind of emphasizes the importance of this kind of analysis, to make sure that the kind of data that you have is reflective of the population that you're trying to serve, and actually doing the analysis accordingly. So we saw that our model kind of held across races, and as well we see this trend across pre- and postmenopausal women and with and without family history. One thing we did in terms of a more granular comparison of performance: if we just look at kind of like the risk thirds for our model and the Tyrer-Cuzick model, what's the trend that you see in the cases where which one is right is kind of ambiguous? And what I show in these boxes is the cancer incidence in that part of the population. So the darker the box, the higher the incidence. And on the right-hand side are just random images from cases that fit within those boxes. Does that make sense for everyone? Great. So a clear trend that you see is that, for example, if TCv8 calls you high risk but we call you low, that is a lower incidence than if we called you medium and they called you low. So you kind of see this straight column-wise pattern showing that discrimination truly does follow the deep learning model and not the classical approach. And looking at the random images that were selected in cases where we disagree, it supports the notion that it's not just that one column is the most dense, crazy-dense-looking breasts; there's something more subtle it's picking up that's actually indicative of breast cancer risk. In a kind of very similar analysis, if we look just by traditional breast density as labeled by the original radiologist on the development set or on the test set, we end up seeing the same trend, where if someone is non-dense and we call them high risk, they're at much higher risk than someone that is dense that we call low risk. And as before, the kind of real next step here to make this truly valuable and truly useful is actually implementing it clinically, seamlessly, prospectively, and with more centers and kind of more populations, to see does this work and does it deliver the kind of benefits that we care about. And seeing what is the lever of change once you know that someone is high risk. Perhaps MRI, perhaps more frequent screening. And so this is the kind of gap between having a useful technology on the paper side to an actual useful technology in real life. So I am moving on schedule. So now I'm gonna talk about how to mess up. And it's actually quite interesting. There are, like, so many ways. And I've fallen into them a few times myself, and it happens. And kind of following the sketch, you can mess up in dataset collection. That's probably the most common by far. You can mess up in modeling, which I'm doing right now. And it's very sad. And you can mess up in analysis, which is really preventable. So in dataset collection, enriched data sets are the kind of the most common thing you see in this space. You find a public data set, and it's most likely going to be like 50-50 cancer, not cancer. And oftentimes these datasets can have some sort of bias in the way they were collected. So it might be that you have negative cases from fewer centers than you have positive cases. 
Or they're collected from different years. And actually, this is something we ran into earlier in our own work. Once upon a time, Connie and I were in Shanghai for the opening of a cancer center there. And at that time we had all the cancers from the MGH dataset, about 2,000. But the mammograms were still being collected annually from 2012-- from 2009. So at that time, we only had, like, half of the negatives by year, but all of the cancers. And all of a sudden-- you know, I came up with a slightly more complicated model, as one often does. I looked at several images at the same time. And my AUC went up to, like, 95. And I was, like, bouncing off the walls. And then, you know, I had some suspicion of, like, wait a second. This is too high. This is too good. And we realized that all these numbers were kind of a myth. But if you do these kinds of case-control things, oftentimes, unless you're very careful about the way the dataset was constructed, you can easily run into these issues. And your testing set won't protect you from that. And so having a clean dataset that truly follows the kind of spectrum we expect to use it in-- i.e., a natural distribution, collected through routine clinical care-- is important to say whether it will behave as we actually want when it's used. In general, some of this you can think through from first principles. But it kind of stresses the importance of actually testing this prospectively and in external validation, to try to see does this work when I take away some of the biases in my dataset, and being really careful about that. The common approach of just controlling by age or by density is not enough when the model can catch really fine-grained signals. How to mess up in modeling. So there have been adventures in this space as well. One of the things I've recently discovered is that the actual mammography device that the image was captured on-- so you saw a bunch of mammograms, probably from different machines-- has an unexpected impact on the model. So the actual probability distribution-- the distribution of cancer probabilities by the model-- is not independent of the device. That's something we're working through now-- we actually ran into this while working on clinical implementation-- with this kind of conditional adversarial training setup to try to rectify this issue. It's important. So this is much harder to catch based on first principles. But it's important to think through as you kind of really start rolling out your implementations. These issues pop up easily, and they're harder to avoid. And lastly, the one that I think is probably the most important is messing up in analysis. So it's quite common in the prior work in this field-- yes. AUDIENCE: With the adversarial up there, just to understand what you do, do you have a discriminator that predicts the machine? And then you train against that? ADAM YALA: So my answer is going to be two parts. One, it doesn't work as well as I want it to yet. So really, who knows? But my best hunch, in terms of what's been done before for other kinds of work, specifically in radio signals, is they use a conditional adversarial. So you feed the discriminator both the label and the image representation, and you try to predict out the device, to take away the information that's not just contained within the label distribution. 
And that's been shown to be very helpful for people trying to do [INAUDIBLE] detection based off of Wi-Fi-- or not Wi-Fi-- but, like, radio waves. And the [INAUDIBLE], but also, it seems to be the most common approach I've seen in the literature. So it's something that I'm going to try soon. I haven't implemented it. It's just GPU time and kind of waiting to queue up the experiment. And the last part in terms of how to mess up is this kind of analysis. One thing that's common is people assume that synthetic experiments are kind of the same thing as clinical implementation. Like, people do reader studies very often. And it's quite common to see that when you do reader studies, it doesn't actually-- like, you might find that computer-aided detection makes a huge difference in reader studies. And Connie actually showed it was harmful in real life. And it's important to, kind of, do these real-world experiments so that we can say what is happening and assess the real benefit that is expected. And a hopefully less common mistake nowadays is that oftentimes people exclude all inconvenient cases. So there was a paper that just came out yesterday on cancer detection that used a kind of patch-based architecture-- and if you read more closely into their details, they excluded all women with breasts that they considered too small by some threshold, for, like, modeling convenience. But that might disproportionately affect specifically Asian women in that population. And they didn't do a subgroup analysis for all the different races, so it's hard to know what is happening there. If your population is mostly white, which it is at MGH and at a lot of the centers where these algorithms have been developed, reporting the average that you see isn't enough to really validate that. And so you can have things like the Tyrer-Cuzick model that are worse than random and especially harmful for African-American women. And so, guarding against that-- you can do a lot of that based on first principles. But some of these things you can only really find out by actively monitoring, to say, is there any subpopulation that I didn't think about a priori that could be harmed? And finally, so I talked about clinical deployments. We've actually done this a couple times. And I'm going to switch over to Connie real soon. In general, what you want to do is you want to make it as easy as possible for the in-house IT team to use your tool. We've gone through this with-- not like-- I don't-- depends on how you count. It's like once for density and then like three times at the same time. But I spent, like, many hours sitting there. And the broad way that we set it up so far is we just have a kind of Docker container to manage a web app that holds the model. This web app has kind of a back-end processing toolkit. So the kind of steps that all of our deployments follow under a unified framework: the IT application will get some images out of the PACS system. It will send them over to our application. We're going to convert them to PNG in the way that we expect, because we kind of encapsulate this functionality. Run them through the models, send it back, and then write it back to the EHR. One of the things I ran into was that they didn't actually know how to use things like HTTP, because it's not actually normal within their infrastructure. 
And so being cognizant that some of these more, like, tech-standard things, like just HTTP requests and responses and stuff, are less standard inside of their infrastructure, and kind of looking up how to actually do these things in, like, C Sharp, or whatever language they have, has been really what's enabled us to unblock these things and actually plug it in. And that is it for my part. So I'm gonna hand it back-- oh, yes. AUDIENCE: So you're writing stuff in the IT application in C Sharp to do API requests? ADAM YALA: So they're writing it. I just meet with them to tell them how to write it. But yes. So like, in general, like, there are libraries. So like, the entire environment is in Windows. And Windows has very poor support for lots of things you would expect it to have good support for. So there was, like, if you wanted to send HTTP requests with, like, a multipart form and just put the images in that form, apparently that has bugs in it in, like, Windows, whatever version they use today. And so that vanilla version didn't work. Docker for Windows also has bugs. And I had to set up this kind of logging function for them to, like, automatically tail the logs inside the container. And it just doesn't work in Docker for Windows. AUDIENCE: [INAUDIBLE] questions because he is short on time. ADAM YALA: Yeah. So we can get to this at the end. I want to hand off to Connie. If you have any questions, grab me after. |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 16_Reinforcement_Learning_Part_1.txt | PROFESSOR: Hi, everyone. We're getting started now. So this week's lecture is really picking up where last week's left off. You may remember we spent the last week talking about causal inference. And I told you how, for last week, we were going to focus on a one-time setting. Well, as we know, lots of medicine has to do with multiple sequential decisions across time. And that'll be the focus of this whole week's worth of discussions. And as I thought about really what should I teach in this lecture, I realized that the person who knew the most about this topic, in the general area of the medical field, was in fact a postdoctoral researcher in my lab. FREDRIK D. JOHANSSON: Thanks. I'll take it. AUDIENCE: Global [INAUDIBLE]. FREDRIK D. JOHANSSON: It's very fair. PROFESSOR: And so I invited him to come to us today and to give this as an invited lecture. And this is Fredrik Johansson. He'll be a professor at Chalmers, in Sweden, starting in September. FREDRIK D. JOHANSSON: Thank you so much, David. That's very generous. Yeah, so as David mentioned, last time we looked a lot at causal effects. And that's where we will start in this discussion, too. So I'll just start with this reminder, here-- we essentially introduced four quantities last time, or in the last two lectures, as far as I know. We had two potential outcomes, which represented the outcomes that we would see of some treatment choice under the various choices. So, the two different choices-- 1 and 0. We had a set of covariates, x, and a treatment, t. And we were interested in, essentially, what is the effect of this treatment, t, on the outcome, y, given the covariates, x. And the effect that we focused on that time was the conditional average treatment effect, which is exactly the difference between these potential outcomes-- conditioned on the features. So the whole last week was about trying to identify this quantity using various methods. And the question that didn't come up so much-- or one question that didn't come up too much-- is how do we use this quantity? We might be interested in it just in terms of its absolute magnitude. How large is the effect? But we might also be interested in designing a policy for how to treat our patients based on this quantity. So today, we will focus on policies. And what I mean by that, specifically, is something that takes into account what we know about a patient and produces a choice or an action as an output. Typically, we'll think of policies as depending on medical history, perhaps which treatments they have received previously, what state is the patient currently in. But we can also base it purely on this number that we produced last time-- the conditional average treatment effect. And one very natural policy is to say, pi of x is equal to the indicator function representing whether this CATE is positive. So if the effect is positive, we treat the patient. If the effect is negative, we don't. And of course, positive will be relative to the usefulness of the outcome being high. But yeah, this is a very natural policy to consider. However, we can also think about much more complicated policies that are not just based on this number-- the quality of the outcome. We can think about policies that take into account legislation or cost of medication or side effects. We're not going to do that today, but that's something that you can keep in mind as we discuss these things. 
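In notation, the two quantities being discussed here can be written as below (assuming, as in the lecture, that a larger outcome is better):

```latex
\mathrm{CATE}(x) \;=\; \mathbb{E}\!\left[\, Y(1) - Y(0) \mid X = x \,\right],
\qquad
\pi(x) \;=\; \mathbb{1}\!\left[\, \mathrm{CATE}(x) > 0 \,\right].
```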
So David mentioned, we should now move from the one-step setting, where we have a single treatment acting at a single time and we only have to take into account the state of a patient once, basically. And we will move from that to the sequential setting. And my first example of such a setting is sepsis management. So, sepsis is a complication of an infection, which can have very disastrous consequences. It can lead to organ failure and ultimately death. And it's actually one of the leading causes of deaths in the ICU. So it's of course important that we can manage and treat this condition. When you start treating sepsis, the primary target-- the first things you should think about fixing-- is the infection itself. If we don't treat the infection, things are going to keep being bad. But even if we figure out the right antibiotic to treat the infection that is the source of the septic shock or the septic inflammation, there are a lot of different conditions that we need to manage. Because the infection itself can lead to fever, breathing difficulties, low blood pressure, high heart rate-- all these kinds of things that are symptoms, but not the cause in themselves. But we still have to manage them somehow so that the patient survives and is comfortable. So when I say sepsis management, I'm talking about managing such quantities over time-- over a patient's stay in the hospital. So, last time-- again, just to really hammer this in-- we talked about potential outcomes and the choice of a single treatment. So we can think about this in the septic setting as a patient coming in-- or a patient already being in the hospital, presumably-- and is presenting with breathing difficulties. So that means that their blood oxygen will be low because they can't breathe on their own. And we might want to put them on mechanical ventilation so that we can ensure that they get sufficient oxygen. We can view this as a single choice. Should we put the patient on mechanical ventilation or not? But what we need to take into account here is what will happen after we make that choice. What will be the side effects of this choice going further? Because we want to make sure that the patient is comfortable and in good health throughout their stay. So today, we will move towards sequential decision making. And in particular, what I alluded to just now is that decisions made in sequence may have the property that choices early on rule out certain choices later. And we'll see an example of that very soon. And in particular, we'll be interested in coming up with a policy for making decisions repeatedly that optimizes a given outcome-- something that we care about. It could be minimize the risk of death. It could be a reward that says that the vitals of a patients are in the right range. We might want to optimize that. But essentially, think about it now as having this choice of administering a medication or an intervention at any time, t-- and having the best policy for doing so. OK, I'm going to skip that one. OK, so I mentioned already one potential choice that we might want to make in the management of a septic patient, which is to put them on mechanical ventilation because they can't breathe on their own. A side effect of doing so is that they might suffer discomfort from being intubated. The procedure is not painless, it's not without discomfort. So something that you might have to do-- putting them on mechanical ventilation-- is to sedate the patient. 
So this is an action that is informed by the previous action, because if we didn't put the patient on mechanical ventilation, maybe we wouldn't consider them for sedation. When we sedate a patient, we run the risk of lowering their blood pressure. So we might need to manage that, too. So if their blood pressure gets too low, maybe we need to administer vasopressors, which artificially raise the blood pressure, or fluids or anything else that takes care of this issue. So just think of this as an example of choices cascading, in terms of their consequences, as we roll forward in time. Ultimately, we will face the end of the patient's stay. And hopefully, we managed the patient in a successful way so that their response or their outcome is a good one. What I'm illustrating here is that, for any one patient in our hospitals or in the health care system, we will only observe one trajectory through these options. So I will show this type of illustration many times, but I hope that you can realize the scope of the decision space here. Essentially, at any point, we can choose a different action. And usually, the number of decisions that we make in an ICU setting, for example, is much larger than we could ever test in a randomized trial. Think of all of these different trajectories as being different arms in a randomized controlled trial that you want to compare the effects or the outcomes of. It's infeasible to run such a trial, typically. So one of the big reasons that we are talking about reinforcement learning today and talking about learning policies, rather than causal effects in the setup that we did last week, is because the space of possible action trajectories is so large. Having said that, we now turn to trying to find, essentially, the policy that picks this orange path here-- that leads to a good outcome. And to reason about such a thing, we need to also reason about what is a good outcome? What is good reward for our agent, as it proceeds through time and makes choices? Some policies that we produce as machine learners might not be appropriate for a health care setting. We have to somehow restrict ourself to something that's realistic. I won't focus very much on this today. It's something that will come up in the discussion tomorrow, hopefully. And also the notion of evaluating something for use in the health care system will also be talked about tomorrow. AUDIENCE: Thursday. FREDRIK D. JOHANSSON: Sorry, Thursday. Next time. OK, so I'll start by just briefly mentioning some success stories. And these are not from the health care setting, as you can guess from the pictures. How many have seen some of these pictures? OK, great-- almost everyone. Yeah, so these are from various video games-- almost all of them. Well, games anyhow. And these are good examples of when reinforcement learning works, essentially. That's why I use these in this slide here-- because, essentially, it's very hard to argue that the computer or the program that eventually beat Lee Sedol. I think it's in this picture but also, later, Go champions, essentially. In the AlphaGo picture in the top left, it's hard to argue that they're not doing a good job, because they clearly beat humans here. But one of the things I want you to keep in mind throughout this talk is what is different between these kinds of scenarios? And we'll come back to that later. And what is different to the health care setting, essentially? So I simply added another example here, that's why I recognize it. 
So there was recently one that's a little bit closer to my heart, which is AlphaStar. I play StarCraft. I like StarCraft, so it should be on the slide. Anyway, let's move on. Broadly speaking, these can be summarized in the following picture. What goes into those systems? There's a lot more nuance when it comes to something like Go. But for the purpose of this class, we will summarize them with a slide. So essentially, one of the three quantities that matters for a reinforcement learning is the state of the environment, the state of the game, the state of the patient-- the state of the thing that we want to optimize, essentially. So in this case, I've chosen Tic-tac-toe here. We have a state which represents the current positions of the circles and crosses. And given that state of the game, my job as a player is to choose one of the possible actions-- one of the free squares to put my cross in. So I'm the blue player here and I can consider these five choices for where to put my next cross. And each of those will lead me to a new state of the game. If I put my cross over here, that means that I'm now in this box. And I have a new set of actions available to me for the next round, depending on what the red player does. So we have the state, we have the actions, and we have the next state, essentially-- we have a trajectory or a transition of states. And the last quantity that we need is the notion of a reward. That's very important for reinforcement learning, because that's what's driving the learning itself. We strive to optimize the reward or the outcome of something. So if we look at the action to the farthest right here, essentially I left myself open to an attack by the red player here, because I didn't put my cross there. Which means that, probably, if the red player is decent, he will put his circle here and I will incur a loss, essentially. So my reward will be negative, if we take positive to be good. And this is something that I can learn from going forward. Essentially, what I want to avoid is ending up in this state that's shown in the bottom right here. This is the basic idea of reinforcement learning for video games and for anything else. So if we take this board analogy or this example and move to the health care setting, we can think of the state of a patient as the game board or the state of the game. We will always call this St in this talk. The treatments that we prescribe or interventions will be At. And these are like the actions in the game, obviously. The outcomes of a patient-- could be mortality, could be managing vitals-- will be as the rewards in the game, having lost or won. And then up at the end here, what could possibly go wrong. Well, as I alluded to before, health is not a game in the same sense that a video game is a game. But they share a lot of mathematical structure. So that's why I make the analogy here. These quantities here-- S, A, and R-- will form something called a decision process. And that's what we'll talk about next. This is the outline for today and Thursday. I won't get to this today, but this is the talks we're considering. So a decision process is essentially the world that describes the data that we access or the world that we're managing our agent in. Very often, if you've ever seen reinforcement learning taught, you have seen this picture in some form, usually. Sometimes there's a mouse and some cheese and there's other things going on, but you know what I'm talking about. But there are the same basic components. 
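To make the S, A, R notation concrete, here is a minimal Python sketch of a single transition; the field values are illustrative placeholders, not anything from the slides:

```python
from typing import NamedTuple, Any

class Transition(NamedTuple):
    """One step of a decision process: state, action, reward, next state."""
    state: Any       # S_t: e.g., a board position, or a patient's current measurements
    action: Any      # A_t: e.g., a move in the game, or a treatment decision
    reward: float    # R_t: e.g., +1/-1 for a win/loss, or a score on the patient's vitals
    next_state: Any  # S_{t+1}: the state the environment transitions to

# A trajectory is just an ordered list of such transitions for one game or one patient stay.
trajectory = [
    Transition(state="board_0", action="cross_center", reward=0.0, next_state="board_1"),
    Transition(state="board_1", action="cross_corner", reward=-1.0, next_state="board_2"),
]
```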
So there's the concept of an agent-- let's think doctor for now-- that takes actions repeatedly over time. So this t here indicates an index of time and we see that essentially increasing as we spin around this wheel here. We move forward in time. So an agent takes an action and, at any time point, receives a reward for that action. And that would be Rt, as I said before. The environment is responsible for giving that reward. So for example, if I'm the doctor, I'm the agent, I make an action or an intervention to my patient, the patient will be the environment, and, essentially, responds to my intervention. The state here is the state of the patient, as I mentioned before, for example. But it might also be a state more broadly than the patient, like the settings of the machine that they're attached to or the availability of certain drugs in the hospital or something like that. So we can think a little bit more broadly around the patient, too. I said partially observed here, in that I might not actually know everything about the patient that's relevant to me. And we will come back a little bit later to that. So there are two different formalizations that are very close to each other, which is when you know everything about s and when you don't. We will, for the longest part of this talk, focus on the case where we know everything that is relevant about the environment. OK, to make this all a bit more concrete, I'll return to the picture that I showed you before, but now put it in context of the paper that you read. Was that the compulsory one? The mechanical ventilation? OK, great. So in this case, they had an interesting reward structure, essentially. The thing that they were trying to optimize was the reward related to the vitals of the patient. But also whether they were kept on mechanical ventilation or not. And the idea of this paper is that you don't want to keep a patient unnecessarily on mechanical ventilation, because it has the side effects that we talked about before. So at any point in time, essentially, we can think about taking a patient on or off-- and also dealing with the sedatives that are prescribed to them. So in this example, the state that they considered in this application included the demographic information of the patient, which doesn't really change over time. Their physiological measurements, ventilator settings, consciousness level, the dosages of the sedatives they use, which could be an action, I suppose-- and a number of other things. And these are the values that we have to keep track of, moving forward in time. The actions concretely included whether to intubate or extubate the patient, as well as administering and dosing the sedatives. So this is, again, an example of a so-called decision process. And essentially, the process is the distribution of these quantities that I've been talking about over time. So we have the states, the actions, and the rewards. They all traverse or they all evolve over time. And the law of how that happens is the decision process. I mentioned before that we will be talking about policies today. And typically, there's a distinction between what is called a behavior policy and a target policy-- or there are different words for this. Essentially, the thing that we observe is usually called a behavior policy. By that, I mean if we go to a hospital and watch what's happening there at the moment, that will be the behavior policy. And I will denote that mu. So that is what we have to learn from, essentially.
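As a rough illustration of what a state, an action, and a reward might look like in the ventilation-weaning example, here is a hypothetical encoding; the variable names, ranges, and reward are assumptions for illustration, not the paper's actual feature set:

```python
# Hypothetical encoding of one time step for the ventilation-weaning example.
state_t = {
    "age": 67, "gender": "F",                  # demographics (roughly static over the stay)
    "spo2": 0.93, "resp_rate": 22, "map": 71,  # physiological measurements
    "fio2": 0.40, "peep": 5,                   # ventilator settings
    "rass": -2,                                # consciousness / sedation level
    "propofol_rate": 30.0,                     # current sedative dosing
}

action_t = {
    "ventilation": "keep_intubated",           # or "extubate"
    "sedative_dose": 25.0,                     # next sedative rate to set
}

# A reward might score whether vitals are in range and penalize staying on the ventilator.
def reward(state, action):
    in_range = 0.90 <= state["spo2"] <= 1.00 and 60 <= state["map"] <= 100
    on_vent_penalty = 0.1 if action["ventilation"] == "keep_intubated" else 0.0
    return (1.0 if in_range else 0.0) - on_vent_penalty
```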
So decision processes so far are incredibly general. I haven't said anything about what this distribution is like, but the absolutely dominant restriction that people make when they study decision processes is to look at Markov decision processes. And these have a specific conditional independence structure that I will illustrate in the next slide-- well, I'll just define it mathematically here. It says, essentially, that all of the quantities that we care about-- the states (I guess that should say state), rewards, and the actions-- only depend on the most recent state and action. If we observe an action taken by a doctor in the hospital, for example-- to make a Markov assumption, we'd say that this doctor did not look at anything that happened earlier in time or any other information than what is in the state variable that we observe at that time. That is the assumption that we make. Yeah? AUDIENCE: Is that an assumption you can make for health care? Because in the end, you don't have access to the real state, but only about what's measured about the state in health care. FREDRIK D. JOHANSSON: It's a very good question. So the nice thing in terms of inferring causal quantities is that we only need the things that were used to make the decision in the first place. So the doctor can only act on such information, too. Unless we don't record everything that the doctor knows-- which is also the case. So that is something that we have to worry about for sure. Another way to lose information, as I mentioned, that is relevant for this is if we look to-- What's the opposite of far? AUDIENCE: Near. FREDRIK D. JOHANSSON: Too near back in time, essentially. So we don't look at the entire history of the patient. And when I say St here, it doesn't have to be the instantaneous snapshot of a patient. We can also include history there. Again, we'll come back to that a little later. OK, so the Markov assumption essentially looks like this. Or this is how I will illustrate it, anyway. We have a sequence of states here that evolve over time. I'm allowing myself to put some dots here, because I don't want to draw forever. But essentially, you could think of this pattern repeating-- where the previous state goes into the next state, the action goes into the next state, and the action and state go into the reward. This is the world that we will live in for this lecture. Something that's not allowed under the Markov assumption is an edge like this, which says that an action at an early time influences an action at a later time. And specifically, it can't do so without passing through a state, for example. It very well can have an influence on At by this trajectory here, but not directly. That's the Markov assumption in this case. So you can see that if I were to draw the graph of all the different measurements that we see during a state, essentially there are a lot of arrows that I could have had in this picture that I don't have. So it may seem that the Markov assumption is a very strong one, but one way to ensure that the Markov assumption is more likely is to include more things in your state, including summaries of the history, et cetera, that I mentioned before. An even stronger restriction of decision processes is to assume that the states over time are themselves independent. So this goes by different names-- sometimes under the name contextual bandits. But the bandits part of that itself is not so relevant here. So let's not go into that name too much.
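Written out, the Markov assumption described above amounts to conditional independence statements of roughly this form (a sketch of the factorization, not the slide's exact notation):

```latex
p(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \dots, s_0, a_0) = p(s_{t+1} \mid s_t, a_t)
p(r_t \mid s_t, a_t, \text{everything earlier}) = p(r_t \mid s_t, a_t)
\pi(a_t \mid s_t, \text{everything earlier}) = \pi(a_t \mid s_t)
```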
But essentially, what we can say is here, the state at a later time point is not influenced directly by the state at a previous time point, nor the action of the previous time point. So if you remember what you did last week, this looks like basically T repetitions of the very simple graph that we had for estimating potential outcomes. And that is indeed mathematically equivalent. If we assume that this S here represents the state of a patient and all patients are drawn from the same process, essentially. So that S0, S1, et cetera, up to St are all i.i.d. draws of the same distribution. Then we have, essentially, a model for t different patients with a single time step or single action, instead of them being dependent in some way. So we can see that by going backwards through my slides, this is essentially what we had last week. And we just have to add more arrows to get to whatever we have this week, which indicates that last week was a special case of this-- just as David said before. It also hints at the reinforcement learning problem being more complicated than the potential outcomes problem. And we'll see more examples of that later. But, like with causal effect estimation that we did last week, we're interested in the influences of just a few variables, essentially. So last time we studied the effect of a single treatment choice. And in this case, we will study the influence of these various actions that we take along the way. That will be the goal. And it could be either through an immediate effect on the immediate reward or it can be through the impact that an action has on the state trajectory itself. I told you about the world now that we live in. We have these Ss and As and Rs. And I haven't told you so much about the goal that we're trying to solve or the problem that we're trying to solve. Most RL-- or reinforcement learning-- is aimed at optimizing the value of a policy or finding a policy that has a good return, a good sum of rewards. There are many names for this, but essentially a policy that does well. The notion of well that we will be using in this lecture is that of a return. So the return at a time step t, following the policy, pi, that I had before, is the sum of the future rewards that we see if we were to act according to that policy. So essentially, I stop now. I ask, OK, if I keep on doing the same as I've done through my whole life-- maybe that was a good policy. I don't know. And keep going until the end of time, how well will I do? What is the sum of those rewards that I get, essentially? That's the return. The value is the expectation of such things. So if I'm not the only person, but there is the whole population of us, the expectation over that population is the value of the policy. So if we take patients as a better analogy than my life, maybe, the expectation is over patients. If we act on every patient in our population the same way-- according to the same policy, that is-- what is the expected return over those patients? So as an example, I drew a few trajectories again, because I like drawing. And we can think about three different patients here. They start in different states. And they will have different action trajectories as a result. So we're treating them with the same policy. Let's call it pi. But because they're in different states, they will have different actions at the same times. So here we take a 0 action, we go down. Here, we take a 0 action, we go down. That's what that means here. The specifics of this are not so important.
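In code, the return of one trajectory and the value of a policy (as an average over trajectories collected under that policy) are roughly the following; the reward numbers are made up:

```python
def discounted_return(rewards, gamma=1.0):
    """G_t for t = 0: the (discounted) sum of rewards along one trajectory."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

def policy_value(trajectories, gamma=1.0):
    """V(pi): the average return over trajectories collected by acting according to pi."""
    returns = [discounted_return(rewards, gamma) for rewards in trajectories]
    return sum(returns) / len(returns)

# Three hypothetical patients treated under the same policy:
trajectories = [
    [-0.04, -0.04, 1.0],          # reached a good terminal state quickly
    [-0.04, -0.04, -0.04, 1.0],   # took one step longer
    [-0.04, -1.0],                # ended badly
]
print(policy_value(trajectories))  # average return across the three patients
```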
But what I want you to pay attention to is that, after each action, we get a reward. And at the end, we can sum those up and that's our return. So each patient has one value for their own trajectory. And the value of the policy is then the average value of such trajectories. So that is what we're trying to optimize. We have now a notion of good and we want to find a pi such that V pi up there is good. That's the goal. So I think it's time for a bit of an example here. I want you to play along in a second. You're going to solve this problem. It's not a hard one. So I think you'll manage. I think you'll be fine. But this is now yet another example of a world to be in. This is the robot in a room. And I've stolen this slide from David, who stole it from Peter Bodik. Yeah, so credits to him. The rules of this world says the following-- if you tell the robot, who is traversing this set of tiles here-- if you tell the robot to go up, there's a chance he doesn't go up, but goes somewhere else. So we have the stochastic transitions, essentially. If I say up, he goes up with point a probability and somewhere else with uniform probability, say. So 0.8 up and then 0.2-- this is the only possible direction to go in if you start here. So 0.2 in that way. There's a chance you move in the wrong direction is what I'm trying to illustrate here. There's no chance that they're going in the opposite direction. So if I say right here, it can't go that way. The rewards in this game is plus 1 in the green box up there, minus 1 in the box here. And these are also terminal states. So I haven't told you what that is, but it's essentially a state in which the game ends. So once you get to either plus 1 or minus 1, the game is over. For each step that the robot takes, it incurs 0.04 negative reward. So that says, essentially, that if you keep going for a long time, your reward would be bad. The value of the policy will be bad. So you want to be efficient. So basically, you can figure out-- you want to get to the green thing, that's one part of it. But you also want to do it quickly. So what I want you to do now is to essentially figure out what is the best policy, in terms of in which way should the arrows point in each of these different boxes? Fill in the question mark with an arrow pointing in some direction. We know the different transitions will be stochastic, so you might need to take that into account. But essentially, figure out how do I have a policy that gives me the biggest expected reward? And I'll ask you in a few minutes if one of you is brave enough to put it on the board or something like that. AUDIENCE: We start the discount over time? FREDRIK D. JOHANSSON: There's no discount. AUDIENCE: Can we talk to our neighbor? FREDRIK D. JOHANSSON: Yes. It's encouraged. [INTERPOSING VOICES] FREDRIK D. JOHANSSON: So I had a question. What is the action space? And essentially, the action space is always up, down, left, or right, depending on if there's a wall or not. So you can't go right here, for example. AUDIENCE: You can't go left either. FREDRIK D. JOHANSSON: You can't go left, exactly. Good point. So each box at the end, when you're done, should contain an arrow pointing in some direction. All right, I think we'll see if anybody has solved this problem now. Who thinks they have solved it? Great. Would you like to share your solution? AUDIENCE: Yeah, so I think it's going to go up first. FREDRIK D. JOHANSSON: I'm going to try and replicate this. Ooh, sorry about that. OK, you're saying up here? 
AUDIENCE: Yeah. The basic idea is you want to reduce the chance that you're ever adjacent to the red box. So just do everything you can to stay far from it. Yeah, so attempt to go up and then once you eventually get there, you just have to go right. FREDRIK D. JOHANSSON: OK. And then? AUDIENCE: [INAUDIBLE]. FREDRIK D. JOHANSSON: OK. So what about these ones? This is also part of the policy, by the way. AUDIENCE: I hadn't thought about this. FREDRIK D. JOHANSSON: OK. AUDIENCE: But those, you [INAUDIBLE],, right? FREDRIK D. JOHANSSON: No. AUDIENCE: Minus 0.04. FREDRIK D. JOHANSSON: So discount usually means something else. We'll get to that later. But that is a reward for just taking any step. If you move into a space that is not terminal, you incur that negative reward. AUDIENCE: So if you keep bouncing around for a really long time, you incur a long negative reward. FREDRIK D. JOHANSSON: If we had this, there's some chance I'd never get out of all this. And very little chance of that working out. But it's a very bad policy, because you keep moving back and forth. All right, we had an arm somewhere. What should I do here? AUDIENCE: You could take a vote. FREDRIK D. JOHANSSON: OK. Who thinks right? Really? Who thinks left? OK, interesting. I don't actually remember. Let's see. Go ahead. AUDIENCE: I was just saying, that's an easy one. FREDRIK D. JOHANSSON: Yeah, so this is the part that we already determined. If we had deterministic transitions, this would be great, because we don't have to think about the other ones. This is what Peter put on the slide. So I'm going to have to disagree with the vote there, actually. It depends, actually, heavily on the minus 0.04. So if you increase that by a little bit, you might want to go that way instead. Or if you decrease-- I don't remember. Decrease, exactly. And if you increase it, you might get something else. It might actually be good to terminate. So those details matter a little bit. But I think you've got the general idea. And especially I like that you commented that you want to stay away from the red one, because if you look at these different paths. You go up there and there-- they have the same number of states, but there's less chance you end up in the red box if you take the upper route. Great. So we have an example of a policy and we have an example of a decision process. And things are working out so far. But how do we do this? As far as the class goes, this was a blackbox experiment. I don't know anything about how you figured that out. So reinforcement learning is about that-- reinforcement learning is try and come up with a policy in a rigorous way, hopefully-- ideally. So that would be the next topic here. Up until this point, are there any questions that you've been dying to ask, but haven't? AUDIENCE: I'm curious how much behavioral biases could play into the first Markov assumption? So for example, if you're a clinician who's been working for 30 years and you're just really used to giving a certain treatment. An action that you gave in the past-- that habit might influence an action in the future. And if that is a worry, how one might think about addressing it. FREDRIK D. JOHANSSON: Interesting. I guess it depends a little bit on how it manifests, in that if it also influenced your most recent action, maybe you have an observation of that already in some sense. It's a very broad question. What effect will that have? Did you have something specific in mind? 
AUDIENCE: I guess I was just wondering if it violated that assumption, that an action of the past influenced an action-- FREDRIK D. JOHANSSON: Interesting. So I guess my response there is that the action didn't really depend on the choice of action before, because the policy remained the same. You could have a bias towards an action without that being dependent on what you gave as action before, if you know what I mean. Say my probability of giving action one is 1, then it doesn't matter that I give it in the past. My policy is still the same. So, not necessarily. It could have other consequences. We might have reason to come back to that question later. Yup. AUDIENCE: Just practically, I would think that a doctor would want to be consistent. And so you wouldn't, for example, want to put somebody on a ventilator and then immediately take them off and then immediately put them back on again. So that would be an example where the past action influences what you're going to do. FREDRIK D. JOHANSSON: Completely, yeah. I think that's a great example. And what you would hope is that the state variable in that case includes some notion of treatment history. That's what your job is then. So that's why state can be somewhat misleading as a term-- at least for me, I'm not American or English-speaking. But yeah, I think of it as too instantaneous sometimes. So we'll move into reinforcement learning now. And what I had you do on the last slide-- well, I don't know which method you use, but most likely the middle one. There are three very common paradigms for reinforcement learning. And they are essentially divided by what they focus on modeling. Unsurprisingly, model-based RL focused on-- well, it has some sort of model in it, at least. What you mean by model in this case is a model of the transitions. So what state will I end up in, given the action in the state I'm in at the moment? So model-based RL tries to essentially create a model for the environment or of the environment. There are several examples of model-based RL. One of them is G-computation, which comes out of the statistic literature, if you like. And MDPs are essentially-- that's a Markov decision process, which is essentially trying to estimate the whole distribution that we talked about before. There are various ups and downsides of this. We won't have time to go into all of these paradigms today. We will actually focus only on value-based RL today. Yeah, you can ask me offline if you are interested in model-based RL. The rightmost one here is policy-based RL, where you essentially focus only on modeling the policy that was used in the data that you observed. And the policy that you want to essentially arrive at. So you're optimizing a policy and you are estimating a policy that was used in the past. And the middle one focuses on neither of those and focuses on only estimating the return-- that was the G. Or the reward function as a function of your actions and states. And it's interesting to me that you can pick any of the variables-- A, S, and R-- and model those. And you can arrive at something reasonable in reinforcement learning. This one is particularly interesting, because it doesn't try to understand how do you arrive at a certain return based on the actions in the states? It's just optimize the policy directly. And it has some obvious-- well, not obvious, but it has some downsides, not doing that. OK, anyway, we're going to focus on value-based RL. And the very dominant instantiation of value-based RL is Q-learning. 
I'm sure you've heard of it. It is what drove the success stories that I showed before, the Go and the StarCraft and everything. G-estimation is another example of this, which, again, has come from the statistics literature. But we'll focus on Q-learning today. So Q-learning is an example of dynamic programming, in some sense. That's how it's usually explained. And I just wanted to check-- how many have heard the phrase dynamic programming before? OK, great. So I won't go into details of dynamic programming in general. But the general idea is one of recursion. In this case, you know something about what is a good terminal state. And then you want to figure out how to get there and how to get to the state before that and the state before that and so on. That is the recursion that we're talking about. The end state that is the best here is fairly obvious-- that is the plus 1 here. The only way to get there is by stopping here first, because you can't move from here since it's a terminal state. Your only bet is that one. And then we can ask what is the best way to get to 3, 1? How do we get to the state before the best state? Well, we can say that one way is go from here. And one way from here. And as we got from the audience before, this is a slightly worse way to get there than from there, because here we have a possibility of ending up in minus 1. So then we recurse further and essentially, we end up with something like this that says-- or what I tried to illustrate here is that the green boxes-- I'm sorry for any colorblind members of the audience, because this was a poor choice of mine. Anyway, this bottom side here is mostly red and this is mostly green. And you can follow the green color here, essentially, to get to the best end state. And what I used here to color this in is this idea of knowing how good a state is, depending on how good the state after that state is. So I knew that plus 1 is a good end state over there. And that led me to recurse backwards, essentially. So the question, then, is how do we know that that state over there is a good one? When we have it visualized in front of us, it's very easy to see. And it's very easy because we know that plus 1 is a terminal state here. It ends there, so those are the only states we need to consider in this case. But more generally, how do we learn what is the value of a state? That will be the purpose of Q-learning. If we have an idea of what is a good state, we can always do that recursion that I explained very briefly. You find a state that has a high value and you figure out how to get there. So we're going to have to define now what I mean by value. I've used that word a few different times. I say recall here, but I don't know if I actually had it on a slide before. So let's just say this is the definition of value that we will be working with. I think I had it on a slide before, actually. This is the expected return. Remember, this G here was the sum of rewards going into the future, starting at time, t. And the value, then, of this state is the expectation of such returns. Before, I said that the value of a policy was the expectation of returns, period. And the value of a state and the policy is the value of that return starting in a certain state. We can stratify this further if we like and say that the value of a state-action pair is the expected return, starting in a certain state and taking an action, a. And after that, following the policy, pi. This would be the so-called Q value of a state-action pair-- s, a.
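In symbols, the quantities just described are roughly the following (a discount factor gamma may weight later rewards; the lecture's worked example sets it to 1):

```latex
G_t = R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + \dots \quad \text{(the return from time } t\text{)}
V^{\pi}(s) = \mathbb{E}_{\pi}\left[ G_t \mid S_t = s \right] \quad \text{(the value of state } s \text{ under policy } \pi\text{)}
Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[ G_t \mid S_t = s, A_t = a \right] \quad \text{(the Q value of the pair } s, a\text{)}
```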
And this is where Q-learning gets its name. So Q-learning attempts to estimate the Q function-- the expected return starting in a state, s, and taking action, a-- from data. Q-learning is also associated with a deterministic policy. So the policy and the Q function go together in this specific way. If we have a Q function, Q, that tries to estimate the value of a policy, pi, the pi itself is the arg max according to that Q. It sounds a little recursive, but hopefully it will be OK. Maybe it's more obvious if we look here. So Q, I said before, was the value of starting in s, taking action, a, and then following policy, pi. This is defined by the decision process itself. The best pi, the best policy, is the one that has the highest Q. And this is what we call a Q-star. Well, that is not what we call Q-star, that is what we call little q-star. Q-star, the best estimate of this, is obviously the thing itself. So if you can find a good function that assigns a value to a state-action pair, the best such function you can get is the one that is equal to little q-star. I hope that wasn't too confusing. I'll show on the next slide why that might be reasonable. So Q-learning is based on a general idea from dynamic programming, which is the so-called Bellman equation. There we go. This is an instantiation of Bellman optimality, which says that the best state-action value function has the property that it is equal to the immediate reward of taking action, a, in state, s, plus this, which is the maximum Q value for the next state. So we're going to stare at this for a bit, because there's a bit here to digest. Remember, q-star assigns a value to any state-action pair. So we have q-star here, we have q-star here. This thing here is supposed to represent the value going forward in time after I've made this choice, action, a, in state, s. If I have a good idea of how good it is to take action, a, in state, s, it should both incorporate the immediate reward that I get-- that's RT-- and how good that choice was going forward. So think about mechanical ventilation, as I said before. If we put a patient on mechanical ventilation, we have to do a bunch of other things after that. If none of those other things lead to a good outcome, this part will be low. Even if the immediate return is good. So for the optimal q-star, this quantity holds. We know that-- we can prove that. So the question is how do we find this thing? How do we find q-star? Because q-star is not only the thing that gives you the optimal policy-- it also satisfies this equality. This is not true for every Q function, but it's true for the optimal one. Questions? If you haven't seen this before, it might be a little tough to digest. Is the notation clear? Essentially, here you have the state that you are arriving at the next time. A prime is the parameter of this here, or the argument to this. You're taking the best possible q-star value in the state that you arrive at after. Yeah, go ahead. AUDIENCE: Can you instantiate an example you have on the board? FREDRIK D. JOHANSSON: Yes. Actually, I might do a full example of Q-learning in a second. Yes, I will. I'll get to that example then. Yeah, I was debating whether to do that. It might take some time, but it could be useful. So where are we? So what I showed you before-- the Bellman equality. We know that this holds for the optimal thing.
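One common way of writing the Bellman optimality condition being described, with the expectation taken over the next state reached after taking a in s:

```latex
Q^{*}(s, a) = \mathbb{E}\left[ R_t + \gamma \max_{a'} Q^{*}(S_{t+1}, a') \;\middle|\; S_t = s, A_t = a \right]
\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a) \quad \text{(the corresponding deterministic optimal policy)}
```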
And if there is an equality that is true at an optimum, one general idea in optimization is this so-called fixed-point iteration that you can do to arrive there. And that's essentially what we will do to get to a good Q. So a nice thing about Q-learning is that if your states and action spaces are small and discrete, you can represent the Q function as a table. So all you have to keep track of is, how good is a certain action in a certain state? Or all actions in all states, rather? So that's what we did here. This is a table. I've described to you the policy here, but what we'll do next is to describe the value of each action. So you can think of a value of taking the right one, bottom, top, and left, essentially. Those will be the values that we need to consider. And so what Q-learning can do with discrete states is to essentially start from somewhere, start from some idea of what Q is-- could be random, could be 0. And then repeat the following fixed-point iteration, where you update your former idea of what Q should be, with its current value plus essentially a mixture of the immediate reward for taking action, At, in that state, and the future reward, as judged by your current estimate of the Q function. So we'll do that now in practice. Yeah. AUDIENCE: Throughout this, where are we getting the transition probabilities or the behavior of the game? FREDRIK D. JOHANSSON: So they're not used here, actually. Value-based RL-- I didn't say that explicitly, but it doesn't rely on knowing the transition probabilities. What you might ask is where do we get the S and the As and the Rs from? And we'll get to that. How do we estimate these? We'll get to that later. Good question, though. I'm going to throw a very messy slide at you. Here you go. A lot of numbers. So what I've done now here is a more exhaustive version of what I put on the board. Each little triangle here represents the Q value for the state-action pair. So this triangle is, again, for the action right if you're in this state. So what I've put on the first slide here is the immediate reward of each action. So we know that any step will cost us minus 0.04. So that's why there's a lot of those here. These white boxes here are not possible actions. Up here, you have a 0.96, because it's 1, which is the immediate reward of going right here, minus 0.04. These two are minus 1.04 for the same reason-- because you arrive in minus 1. OK, so that's the first step and the second step done. We initialize Qs to be 0. And then we picked these two parameters of the problem, alpha and gamma, to be 1. And then we did the first iteration of Q-learning, where we set the Q to be the old version of Q, which was 0, plus alpha times this thing here. So Q was 0, that means that this is also 0. So the only thing we need to look at is this thing here. This also is 0, because the Qs for all states were 0, so the only thing we end up with is R. And that's what populated this table here. Next timestep-- I'm doing Q-learning now in a way where I update all the state-action pairs at once. How can I do that? Well, it depends on the question I got there, essentially. What data do I observe? Or how do I get to know the rewards of the S&A pairs? We'll come back to that. So in the next step, I have to update everything again. So it's the previous Q value, which was minus 0.04 for a lot of things, then plus the immediate reward, which was this RT. And I have to keep going.
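The update being applied at each sweep is, in code, roughly the following; this is a minimal sketch that sweeps over a batch of observed (s, a, r, s', done) tuples, rather than updating all pairs of a known deterministic world at once as on the slide:

```python
from collections import defaultdict

def tabular_q_learning(transitions, actions, alpha=1.0, gamma=1.0, n_sweeps=20):
    """Fixed-point iteration on a table of Q values.

    `transitions` is a list of (s, a, r, s_next, done) tuples; `actions` is the set
    of possible actions. With alpha = gamma = 1 this matches the lecture's example.
    """
    Q = defaultdict(float)  # Q[(s, a)] initialized to 0
    for _ in range(n_sweeps):
        for s, a, r, s_next, done in transitions:
            future = 0.0 if done else max(Q[(s_next, a2)] for a2 in actions)
            target = r + gamma * future
            Q[(s, a)] += alpha * (target - Q[(s, a)])  # the Q-learning update
    return Q
```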
So the dominant thing for the table this time was that the best Q value for almost all of these boxes was minus 0.04. So essentially I will add the immediate reward plus that almost everywhere. What is interesting, though, is that here, the best Q value was 0.96. And it will remain so. That means that the best Q value for the adjacent states-- we look at this max here and get 0.96 out. And then add the immediate reward. Getting to here gives me 0.96 minus 0.04 for the immediate reward. And now we can figure out what will happen next. These values will spread out as you go further away from the plus 1. I don't think we should go through all of this, but you get a sense, essentially, of how information is moved from the plus 1 and away. And I'm sure that's how you solved it yourself, in your head. But this makes it clear why you can do that, even if you don't know where the terminal states are or what the values of the state-action pairs are. AUDIENCE: Doesn't this calculation assume that if you want to move in a certain direction, you will move in that direction? FREDRIK D. JOHANSSON: Yes. Sorry. Thanks for reminding me. That should have been in the slide, yes. Thank you. I'm going to skip the rest of this. I hope you forgive me. We can talk more about it later. Thanks for reminding me, Pete, there, that one of the things I exploited here was that I assumed just deterministic transitions. Another thing that I relied very heavily on here is that I can represent this Q function as a table. I drew all these boxes and I filled the numbers in. That's easy enough. But what if I have thousands of states and thousands of actions? That's a large table. And not only is it a large table for me to keep in memory-- it's also very bad for me statistically. If I want to observe anything about a state-action pair, I have to do that action in that state. And if you think about treating patients in a hospital, you're not going to try everything in every state, usually. You're also not going to have infinite numbers of patients. So how do you figure out what is the immediate reward of taking a certain action in a certain state? And this is where function approximation comes in. Essentially, if you can't represent your Q function as a table, either for statistical reasons or for memory reasons, let's say, you might want to approximate the Q function with a parametric or with a non-parametric function. And this is exactly what we can do. So we can draw now an analogy to what we did last week. I'm going to come back to this, but essentially instead of doing this fixed-point iteration that we did before, we will try and look for a function Q theta that is equal to R plus gamma max Q. Remember before, we had the Bellman equality? We said that q-star S, A is equal to R S, A, let's say, plus gamma max A prime q-star S prime A prime, where S prime is the state we get to after taking action A in state S. So the only thing I've done here is to take this equality and make it instead a loss function on the violation of this equality. So by minimizing this quantity, I will find something that has approximately the Bellman equality that we talked about before. This is the idea of fitted Q-learning, where you substitute the tabular representation with a function approximation, essentially. So just to make this a bit more concrete, we can think about the case where we have only a single step. There's only a single action to make, which means that there is no future part of this equation here.
This part goes away, because there's only one stage in our trajectory. So we have only the immediate reward. We have only the Q function. Now, this is exactly a regression equation in the way that you've seen it when estimating potential outcomes. RT here represents the outcome of doing action A and state S. And Q here will be our estimate of this RT. Again, I've said this before-- if we have a single time point in our process, the problem reduces to estimating potential outcomes, just the way we saw it last time. We have curves that correspond outcomes under different actions. And we can do regression adjustment, trying to find an F such that this quantity is small so that we can model each different potential outcomes. And that's exactly what happens with the fitted Q iteration if you have a single timestep, too. So to make it even more concrete, we can say that there's some target value, G hat, which represents the immediate reward and the future rewards that is the target of our regression. And we're fitting some function to that value. So the question we got before was how do I know the transition matrix? How do I get any information about this thing? I say here on the slide that, OK, we have some target that's R plus future Q values. We have some prediction and we have an expectation of our transitions here. But how do I evaluate this thing? The transitions I have to get from somewhere, right? And another way to say that is what are the inputs and the outputs of our regression? Because when we estimate potential outcomes, we have a very clear idea of this. We know that y, the outcome itself, is a target. And the input is the covariates, x. But here, we have a moving target, because this Q hat, it has to come from somewhere, too. This is something that we estimate as well. So usually what happens is that we alternate between updating this target, Q, and Q theta. So essentially, we copy Q theta to become our new Q hat and we iterate this somehow. But I still haven't told you how to evaluate this expectation. So usually in RL, there are a few different ways to do this. And either depending on where you coming from, essentially, these are varyingly viable. So if we look back at this thing here, it relies on having tuples of transitions-- the state, the action, the next state, and the reward that I got. So I have to somehow observe those. And I can obtain them in various ways. A very common one when it comes to learning to play video games, for example, is that you do something called on-policy exploration. That means that you observe data from the policy that you're currently optimizing. You just play the game according to the policies that you have at the moment. And the analogy in health care would be that you have some idea of how to treat patients and you just do that and see what happens. That could be problematic, especially if you've got that policy-- if you randomly initialized it or if you got it for some somewhere very suboptimal. A different thing that we're more, perhaps, comfortable with in health care, in a restricted setting, is the idea of a randomized trial, where, instead of trying out some policy that you're currently learning, you decide on a population where it's OK to flip a coin, essentially, between different actions that you have. The difference between the sequential setting and the one-step setting is that now we have to randomize a sequence of actions, which is a little bit unlike the clinical trials that you have seen before, I think. 
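Before turning to the last way of obtaining such transitions, here is a sketch of the fitted Q iteration described above, run on a batch of logged transitions; the regression model stands in for the table, and the target is recomputed each iteration from the previous fit (the "moving target"). The random-forest choice and the array layout are illustrative assumptions, not the lecture's specific implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fitted_q_iteration(S, A, R, S_next, done, n_actions, gamma=0.99, n_iters=50):
    """Fitted Q iteration on a batch of offline transitions.

    S, S_next: 2-D arrays of state features; A: integer action indices;
    R: rewards; done: 1 where the episode ended after the transition, else 0.
    """
    X = np.hstack([S, np.eye(n_actions)[A]])  # features for the observed (state, action) pairs
    q_model, targets = None, R.copy()         # first regression target is just the reward
    for _ in range(n_iters):
        q_model = RandomForestRegressor(n_estimators=50).fit(X, targets)
        # Evaluate max_a' Q(s', a') under the current model for every next state.
        q_next = np.column_stack([
            q_model.predict(np.hstack([S_next, np.eye(n_actions)[np.full(len(S_next), a)]]))
            for a in range(n_actions)
        ])
        targets = R + gamma * (1 - done) * q_next.max(axis=1)  # the moving target
    return q_model
```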
The last one, which is the most studied one when it comes to practice, I would say, is the one that we talk about this week-- is off-policy evaluation or learning, in which case you observe health care records, for example. You observe registries. You observe some data from the health care system where patients have already been treated and you try to extract a good policy based on that information. So that means that you see these transitions between state and action and the next state and the reward. You see that based on what happened in the past and you have to figure out a pattern there that helps you come up with a good action or a good policy. So we'll focus on that one for now. The last part of this talk will be about, essentially, what we have to be careful with when we learn with off-policy data. Any questions up until this point? Yeah. AUDIENCE: So if [INAUDIBLE] getting there for the [INAUDIBLE],, are there any requirements that has to be met by [INAUDIBLE],, like how we had [INAUDIBLE] and cause inference? FREDRIK D. JOHANSSON: Yeah, I'll get to that on the next set of slides. Thank you. Any other questions about the Q-learning part? A colleague of mine, Rahul, he said-- or maybe he just paraphrased it from someone else. But essentially, you have to see RL 10 times before you get it, or something to that effect. I had the same experience. So hopefully you have questions for me after. AUDIENCE: Human reinforcement learning. FREDRIK D. JOHANSSON: Exactly. But I think what you should take from the last two sections, if not how to do Q-learning in detail, because I glossed over a lot of things. You should take with you the idea of dynamic programming and figuring out, how can I learn about what's good early on in my process from what's good late? And the idea of moving towards a good state and not just arriving there immediately. And there are many ways to think about that. OK, we'll move on to off-policy learning. And again, the set-up here is that we receive trajectories of patient states, actions, and rewards from some source. We don't know what these sources necessarily-- well, we probably know what the source is. But we don't know how these actions were performed, i.e., we don't know what the policy was that generated these trajectories. And this is the same set-up as when you estimated causal effects last week, to a large extent. We say that the actions are drawn, again, according to some behavior policy unknown to us. But we want to figure out what is the value of a new policy, pi. So when I showed you very early on-- I wish I had that slide again. But essentially, a bunch of patient trajectories and some return. Patient trajectories, some return. The average of those, that's called a value. If we have trajectories according to a certain policy, that is the value of that policy-- the average of these things. But when we have trajectories according to one policy and want to figure out the value of another one, that's the same problem as the covariate adjustment problem that you had last week, essentially. Or the confounding problem, essentially. The trajectories that we draw are biased according to the policy of the clinician that created them. And we want to figure out the value of a different policy. So it's the same as the confounding problem from the last time. And because it is the same as the confounding from last time, we know that this is at least as hard as doing that. We have confounding-- I already alluded to variance issues. 
And you mentioned overlap or positivity as well. And in fact, we need to make the same assumptions but even stronger assumptions for this to be possible. These are sufficient conditions. So, under certain circumstances, you don't need them. I should say, these are fairly general assumptions that are still strict-- that's how I should put it. So last time, we looked at something called strong ignorability. I realize the text is pretty small here. Can you see in the back? Is that OK? OK, great. So strong ignorability said that the potential outcomes-- Y0 and Y1-- are conditionally independent of the treatment, t, given the set of variables, x, or the variable, x. And that's saying that it doesn't matter if we know what treatment was given. We can figure out just based on x what would happen under either treatment arm, whether we should treat this patient with t equals 0 or t equals 1. We had an idea of-- or an assumption of-- overlap, which says that any treatment could be observed in any state or any context, x. That's what that means. And that is only to ensure that we can estimate at least a conditional average treatment effect at x. And if we want to estimate the average treatment effect in a population, we would need to have that for every x in that population. So what happens in the sequential case is that we need even stronger assumptions. There's some notation I haven't introduced here and I apologize for that. But there's a bar here over these Ss and As-- I don't know if you can see it. That usually indicates in this literature that you're looking at the sequence, up to the index here. So all the states up until t have been observed and all the actions up until t minus 1. So in order for the best policy to be identifiable-- or the value of a policy to be identifiable-- we need this strong condition. So the return of a policy is conditionally independent of the current action, given everything that happened in the past. This is weaker than the Markov assumption, to be clear, because there, we said that anything that happens in the future is conditionally independent, given the current state. So this is weaker, because we now just need to observe something in the history. We need to observe all confounders in the history, in this instance. We don't need to summarize them in S. And we'll get back to this in the next slide. Positivity is the real difficult one, though, because what we're saying is that at any point in the trajectory, any action should be possible in order for us to estimate the value of any possible policy. And we know that that's not going to be true in practice. We're not going to consider every possible action at every possible point in the health care setting. There's just no way. So what that tells us is that we can't estimate the value of every possible policy. We can only estimate the value of policies that are consistent with the support that we do have. If we never see action 4 at time 3, there's no way we can learn about a policy that does that-- that takes action 4 at time 3. That's what I'm trying to say. So in some sense, this is stronger, just because of how sequential settings work. It's more about the application domain than anything, I would say. In the next set of slides, we'll focus on sequential randomization or sequential ignorability, as it's sometimes called. And tomorrow, we'll talk a little bit about the statistics involved in or resulting from the positivity assumption and things like importance weighting, et cetera. Did I say tomorrow? I meant Thursday.
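One common way to write the two sequential conditions just described, where the bars denote histories up to the given index; this approximates rather than reproduces the slide's notation:

```latex
% Sequential ignorability (randomization): the return under a candidate action sequence is
% conditionally independent of the action actually taken, given the observed history
G_t(\bar{a}) \;\perp\!\!\!\perp\; A_t \mid \bar{S}_t, \bar{A}_{t-1}
% Sequential positivity (overlap): every action has nonzero probability in every reachable history
p\left(A_t = a \mid \bar{S}_t = \bar{s}, \bar{A}_{t-1} = \bar{a}\right) > 0 \quad \text{whenever } p(\bar{s}, \bar{a}) > 0
```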
So, a last recap on the potential outcome story. This is a slide-- I'm not sure if he showed this one, but it's one that we used in a lot of talks. And it, again, just serves to illustrate the idea of a one-timestep decision. So we have here, Anna. A patient comes in. She has high blood sugar and some other properties. And we're debating whether to give her medication A or B. And to do that, we want to figure out what would be her blood sugar under these different choices a few months down the line? So I'm just using this here to introduce you to the patient, Anna. And we're going to talk about Anna a little bit more. So treating Anna once, we can represent with this causal graph that you've seen a lot of times now. We had some treatment, A, we had some state, S, and some outcome, R. We want to figure out the effect of this A on the outcome, R. Ignorability in this case just says that the potential outcomes under each action, A, are conditionally independent of A, given S. And so we know that ignorability and overlap are sufficient conditions for identification of this effect. But what happens now if we add another time point? OK, so in this case, if I have no extra arrows here-- I just have completely independent time points-- ignorability clearly still holds. There are no links going from A to R, none from S to R, et cetera. So ignorability is still fine. If Anna's health status in the future depends on the actions that I take now, here, then the situation is a little bit different. So this is now not the completely independent actions that I make, but the actions here influence the state in the future. So we've seen this. This is a Markov decision process, as you've seen before. This is very likely in practice. Also, if Anna, for example, is diabetic, as we saw in the example that I mentioned, it's likely that she will remain so. This previous state will influence the future state. These things seem very reasonable, right? But now I'm trying to argue about the sequential ignorability assumption. How can we break that? How can we break ignorability when it comes to the sequential case, say? If you have an action here-- so the outcome at a later point depends on an earlier choice. That might certainly be the case, because we could have a delayed effect of something. So if we measure, say, a lab value which could be in the right range or not, it could very well depend on medication we gave a long time ago. And it's also likely that the reward could depend on a state which is much earlier, depending on what we include in that state variable. We already have an example, I think, from the audience on that. So actually, ignorability should have a big red cross over it, because it doesn't hold there. And it is, luckily, on the next slide. Because there are even more arrows that we can have, conceivably, in the medical setting. The example that we got from Pete before was, essentially, that if we've tried an action previously, we might not want to try it again. Or if we knew that something worked previously, we might want to do it again. So if we had a good reward here, we might want to do the same thing twice. And this arrow here says that if we know that a patient had a symptom earlier on, we might want to base our actions on it later. We know that the patient had an allergic reaction at some point, for example. We might not want to use that medication at a later time. AUDIENCE: But you can always put everything in a state. FREDRIK D. JOHANSSON: Exactly.
So this depends on what you put in the state. This is an example where I should introduce these arrows to show that, if I haven't got that information here, then I introduce this dependence. So if I don't have the information about allergic reaction or some symptom before in here, then I have to do something else. So exactly that is the point. If I can summarize history in some good way-- if I can compress all of these four variables into some variable H that stands for the history, then I have ignorability, with respect to that history, H. This is your solution and it introduces a new problem, because history is usually a really large thing. We know that history grows with time, obviously. But usually we don't observe patients for the same number of time points. So how do we represent that for a program? How do we represent that to a learning algorithm? That's something we have to deal with. You can pad history with 0s, et cetera, but if you keep every timestep and repeat every variable in every timestep, you get a very large object. That might introduce statistical problems, because now you have much more variance if you have more variables, et cetera. So one thing that people do is that they look some amount of time backwards-- so instead of just looking at one timestep back, you now look at a length k window. And your state essentially grows by a factor, k. And another alternative is to try and learn a summary function. Learn some function that is relevant for predicting the outcome that takes all of the history into account, but has a smaller representation than just t times the variables that you have. But this is something that needs to happen, usually. Most health care data, in practice-- you have to make choices about this. I just want to stress that that's something you really can't avoid. The last point I want to make is that unobserved confounding is also a problem that is not avoidable just by summarizing history. We can introduce new confounding. That is a problem, if we don't summarize history well. But we can also have unobserved confounders, just like we can in the one-step setting. One example is if we have an unobserved confounder in the same way as we did before. It impacts both the action at time 1 and the reward at time 1. But of course, now we're in the sequential setting. The confounding structure could be much more complicated. We could have a confounder that influences an early action and a late reward. So it might be a little harder for us to characterize what is the set of potential confounders? So I just wanted to point that out and to reinforce that this is only harder than the one-step setting. So we're wrapping up now. I just want to end on a point about the games that we looked at before. One of the big reasons that these algorithms were so successful in playing games was that we have full observability in these settings. We know everything from the game board itself-- when it comes to Go, at least. We can debate that when it comes to the video games. But in Go, we have complete observability of the board. Everything we need to know for an optimal decision is there at any time point. Not only can we observe it through the history, but in the case of Go, you don't even need to look at history. We certainly have Markov dynamics with respect to the board itself. You don't ever have to remember what a move was earlier on, unless you want to read into your opponent, I suppose. But that's a game theoretic notion we're not going to get into here.
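Going back to the length-k window idea mentioned above, a small sketch of that construction with zero padding; the feature values are made up:

```python
import numpy as np

def window_state(history, t, k, n_features):
    """Build a fixed-size state from the last k observation vectors up to time t.

    `history` is a list of per-timestep feature vectors for one patient; steps earlier
    than what is available are zero-padded so every window has the same shape.
    """
    window = history[max(0, t - k + 1): t + 1]
    pad = [np.zeros(n_features)] * (k - len(window))
    return np.concatenate(pad + [np.asarray(v, dtype=float) for v in window])

# Example: 4 visits with 3 features each, a window of length k=3 taken at t=1
history = [[0.1, 7.2, 98.0], [0.0, 7.4, 97.0], [0.2, 7.1, 99.0], [0.1, 7.3, 96.0]]
print(window_state(history, t=1, k=3, n_features=3))  # the first slot is zero-padded
```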
But more importantly, we can explore the dynamics of these systems almost limitlessly, just by simulation and self-play. And that's true regardless of whether you have full observability or not-- like in StarCraft, you might not have full observability. But you can try your things out endlessly. And contrast that with having, I don't know, 700 patients with rheumatoid arthritis or something like that. Those are the samples you have. You're not going to get new ones. So that is an enormous obstacle for us to overcome if we want to do this in a good way. The current algorithms are really inefficient with the data that they use. And that's why this limitless exploration or simulation has been so important for these games. And that's also why the games are the success stories of this. A last point is that typically for these settings that I put here, we have no noise, essentially. We get perfect observations of actions and states and outcomes and everything like that. And that's rarely true in any real-world application. All right. I'm going to wrap up. Tomorrow-- nope, Thursday-- David is going to talk more explicitly about what's going to happen if we want to do this properly in health care. We're going to have a great discussion, I'm sure, as well. So don't mind the slide. It's Thursday. All right. Thanks a lot. [APPLAUSE] |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 1_What_Makes_Healthcare_Unique.txt | [CLICK] DAVID SONTAG: So welcome to spring 2019 Machine Learning for Healthcare. My name is David Sontag. I'm a professor in computer science. Also I'm in the Institute for Medical Engineering and Science. My co-instructor today will be Pete Szolovits, who I'll introduce more towards the end of today's lecture, along with the rest of the course staff. So the problem. The problem is that healthcare in the United States costs too much. Currently, we're spending $3 trillion a year, and we're not even necessarily doing a very good job. Patients who have chronic disease often find that these chronic diseases are diagnosed late. They're often not managed well. And that happens even in a country with some of the world's best clinicians. Moreover, medical errors are happening all of the time, errors that if caught, would have prevented needless deaths, needless worsening of disease, and more. And healthcare impacts all of us. So I imagine that almost everyone here in this room have had a family member, a loved one, a dear friend, or even themselves suffer from a health condition which impacts your quality of life, which has affected your work, your studies, and possibly has led to a needless death. And so the question that we're asking in this course today is how can we use machine learning, artificial intelligence, as one piece of a bigger puzzle to try to transform healthcare. So all of us have some personal stories. I myself have personal stories that have led me to be interested in this area. My grandfather, who had Alzheimer's disease, was diagnosed quite late in his Alzheimer's disease. There aren't good treatments today for Alzheimer's, and so it's not that I would have expected the outcome to be different. But had he been diagnosed earlier, our family would have recognized that many of the erratic things that he was doing towards the later years of his life were due to this disease and not due to some other reason. My mother, who had multiple myeloma, a blood cancer, who was diagnosed five years ago now, never started treatment for her cancer before she died one year ago. Now, why did she die? Well, it was believed that her cancer was still in its very early stages. Her blood markers that were used to track the progress of the cancer put her in a low risk category. She didn't yet have visible complications of the disease that would, according to today's standard guidelines, require treatment to be initiated. And as a result, the belief was the best strategy was to wait and see. But unbeknownst to her and to my family, her blood cancer, which was caused by light chains which were accumulating, ended up leading to organ damage. In this case, the light chains were accumulating in her heart, and she died of heart failure. Had we recognized that her disease was further along, she might have initiated treatment. And there are now over 20 treatments for multiple myeloma which are believed to have life-lengthening effect. And I can give you four or five other stories from my own personal family and my friends, where similar things have happened. And I have no doubt that all of you have as well. So what can we do about it is the question that we want to try to understand in today's course. And don't get me wrong. Machine learning, artificial intelligence, will only be one piece of the puzzle. There's so many other systematic changes that we're going to have to make into our healthcare system. 
But let's try to understand what those AI elements might be. So let's start in today's lecture by giving a bit of a background on artificial intelligence and machine learning in healthcare. And I'll tell you why I think the time is right now, in 2019, to really start to make a big dent in this problem. And then I'll tell you about-- I'll give you a few examples of how machine learning is likely to transform healthcare over the next decade. And of course we're just guessing, but this is really guided by the latest and greatest in research, a lot of it happening here at MIT. And then we'll close today's lecture with an overview of what's different, what's unique about machine learning in healthcare. All of you have taken some machine learning course in the past, and so you know the basics of supervised prediction. Many of you have studied things like clustering. And you're certainly paying attention to the news, where you see news every single day about Google, Facebook, Microsoft's latest advances in speech recognition, computer vision, and so on. So what's really different about trying to apply these techniques in the healthcare domain? The answer is that there's a huge amount of difference, and there are a lot of subtleties to doing machine learning right here. And we'll talk about that throughout the whole entire semester. So to begin, this isn't a new field. Artificial intelligence in medicine goes back to the 1970s or sometime even in the '60s. One of the earliest examples of trying to use artificial intelligence for diagnosis was this MYCIN system developed at Stanford, where the goal was to try to identify bacteria that might cause infection and then to try to guide what would be the appropriate therapy for that bacteria. Now, it was found that this algorithm, this machine learning, this simple AI algorithm was able to propose a good therapy in 69% of cases, which at the time was better than the best or very good infectious disease experts. Now, it also had some other elements of it which were quite interesting. So those of you who've been following the latest in chat bots might find this a little prescient. So there's a dialog interface where the computer says, I am ready. The clinician responds, this is a 26-year-old male patient. The computer tests its understanding of what's going on. It says, the age of the patient is 26. The sex of the patient is male. The clinician writes, five days ago, he had respiratory tract infections. The computer asks, what is his name? Joe. The computer responds, my understanding is the name of the patient is Joe. Respiratory tract infection is one of the symptoms the patient had. Then the clinician writes, a couple of days before the admission, he had malaise, which is general tiredness. And the computer responds, please give me the date of admission. The clinician responds, March 12, 1979, and the computer again confirms that it's understood appropriately. And this is the preface to the later diagnostic stages. So the ideas of how AI can really impact medicine have been around a long time. Yet these algorithms, which have been shown to be very effective, even going back to the 1970s, didn't translate into clinical care. A second example, also equally impressive in its nature, was work from the 1980s in Pittsburgh, developing what is known as the INTERNIST-1 or Quick Medical Reference system. This was now used not for infectious diseases, but for primary care.
Here one might ask, how can we try to do diagnosis at a much larger scale, where patients might come in with one of hundreds of different diseases and could report thousands of different symptoms, each one giving you some view, noisy view, into what may be going on with a patient's health. And at a high level, they modeled this as something like a Bayesian network. It wasn't strictly a Bayesian network. It was a bit more heuristic at the time. It was later developed to be so. But at a high level, there were a number of latent variables or hidden variables corresponding to different diseases the patient might have, like flu or pneumonia or diabetes. And then there were a number of variables on the very bottom, which were symptoms, which are all binary, so the diseases are either on or off. And here the symptoms are either present or not. And these symptoms can include things like fatigue or cough. They could also be things that result from laboratory test results, like a high value of hemoglobin A1C. And this algorithm would then take this model, take the symptoms that were reported for the patient, and try to do reasoning over what action might be going on with that patient, to figure out what the differential diagnosis is. There are over 40,000 edges connecting diseases to symptoms that those diseases were believed to have caused. And this knowledge base, which was probabilistic in nature, because it captured the idea that some symptoms would only occur with some probability for a disease, took over 15 person years to elicit from a large medical team. And so it was a lot of effort. And even in going forward to today's time, there have been few similar efforts at a scale as impressive as this one. But again, what happened? These algorithms are not being used anywhere today in our clinical workflows. And the challenges that have prevented them from being used today are numerous. But I used a word in my explanation which should really hint at it. I used the word clinical workflow. And this, I think, is one of the biggest challenges. Which is that the algorithms were designed to solve narrow problems. They weren't necessarily even the most important problems, because clinicians generally do a very good job at diagnosis. And there was a big gap between the input that they expected and the current clinical workflow. So imagine that you have now a mainframe computer. I mean, this was the '80s. And you have a clinician who has to talk to the patient and get some information. Go back to the computer. Type in a structured data, the symptoms that the patient's reporting. Get information back from the computer and iterate. As you can imagine, that takes a lot of time, and time is money. And unfortunately, it prevents it from being used. Moreover, despite the fact that it took a lot of effort to use it when outside of existing clinical workflows, these systems were also really difficult to maintain. So I talked about how this was elicited from 15 person years of work. There was no machine learning here. It was called artificial intelligence because one tries to reason in an artificial way, like humans might. But there was no learning from data in this. And so what that means is if you then go to a new place, let's say this was developed in Pittsburgh, and now you go to Los Angeles or to Beijing or to London, and you want to apply the same algorithms, you suddenly have to re-derive parts of this model from scratch. 
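To give a feel for the kind of model just described, here is a toy, purely illustrative noisy-or sketch. The disease names, priors, and edge strengths below are made up, and real QMR-style inference over hundreds of diseases and thousands of findings is far more involved; the point is only the structure: each disease is a binary latent variable with a prior, each edge carries a probability that the disease, if present, turns a symptom on, and candidate diagnoses are scored by their posterior given the observed findings.

```python
from itertools import product

# Hypothetical toy knowledge base: priors over diseases and noisy-or
# "activation" probabilities from each disease to each symptom.
priors = {"flu": 0.05, "pneumonia": 0.01}
activation = {                      # P(symptom turned on | disease present)
    ("flu", "fever"): 0.9, ("flu", "cough"): 0.8,
    ("pneumonia", "fever"): 0.8, ("pneumonia", "cough"): 0.9,
    ("pneumonia", "chest_pain"): 0.7,
}
leak = 0.01                         # background probability of any symptom

def p_symptom_given_diseases(symptom, present):
    """Noisy-or: a symptom is off only if every cause fails to activate it."""
    p_off = 1 - leak
    for d in present:
        p_off *= 1 - activation.get((d, symptom), 0.0)
    return 1 - p_off

def posterior(findings):
    """Brute-force posterior over all disease subsets (fine for a toy model)."""
    diseases = list(priors)
    scores = {}
    for bits in product([0, 1], repeat=len(diseases)):
        present = {d for d, b in zip(diseases, bits) if b}
        score = 1.0
        for d in diseases:
            score *= priors[d] if d in present else 1 - priors[d]
        for symptom, observed in findings.items():
            p_on = p_symptom_given_diseases(symptom, present)
            score *= p_on if observed else 1 - p_on
        scores[frozenset(present)] = score
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

post = posterior({"fever": True, "cough": True, "chest_pain": False})
for hypothesis, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(sorted(hypothesis), round(p, 3))
```

Even in this toy version, every prior and every edge strength is a number somebody has to supply, which is exactly why porting a hand-built knowledge base to a new population means re-deriving large parts of it.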
For example, the prior probability of the diseases are going to be very different, depending on where you are in the world. Now, you might want to go to a different domain outside of primary care. And again, one has to spend a huge amount of effort to derive such models. As new medicine discoveries are made, one has to, again, update these models. And this has been a huge blocker to deployment. I'll move forward to one more example now, also from the 1980s. And this is now for a different type of question. Not one of how do you do diagnosis, but how do you actually do discovery. So this is an example from Stanford. And it was a really interesting case where one took a data-driven approach to try to make medical discoveries. There was a database of what's called a disease registry from patients with rheumatoid arthritis, which is a chronic disease. It's an autoimmune condition, where for each patient, over a series of different visits, one would record, for example, here it shows this is visit number one. The date was January 17, 1979. The knee pain, patient's knee pain, was reported as severe. Their fatigue was moderate. Temperatures was 38.5 Celsius. The diagnosis for this patient was actually a different autoimmune condition called systemic lupus. We have some laboratory test values for their creatinine and blood nitrogen, and we know something about their medication. In this case, they were on prednisone, a steroid. And one has this data at every point in time. This almost certainly was recorded on paper and then later, these were collected into a computer format. But then it provides the possibility to ask questions and make new discoveries. So for example, in this work, there was a discovery module which would make causal hypotheses about what aspects might cause other aspects. It would then do some basic statistics to check about the statistical validity of those causal hypotheses. It would then present those to a domain expert to try to check off does this make sense or not. For those that are accepted, it then uses that knowledge that was just learned to iterate, to try to make new discoveries. And one of the main findings from this paper was that prednisone elevates cholesterol. That was published in the Annals of Internal Medicine in 1986. So these are all very early examples of data-driven approaches to improve both medicine and healthcare. Now flip forward to the 1990s. Neural networks started to become popular. Not quite the neural networks that we're familiar with in today's day and age, but nonetheless, they shared very much of the same elements. So just in 1990, there were 88 published studies using neural networks for various different medical problems. One of the things that really differentiated those approaches to what we see in today's landscape is that the number of features were very small. So usually features which were similar to what I showed you in the previous slide. So structured data that was manually curated for the purpose of using in machine learning. And there was nothing automatic about this. So one would have to have assistants gather the data. And because of that, typically, there were very small number of samples for each study that were used in machine learning. Now, these models, although very effective, and I'll show you some examples in the next slide, also suffered from the same challenges I mentioned earlier. They didn't fit well into clinical workflows. It was hard to get enough training data because of the manual efforts involved. 
And what the community found, even in the early 1990s, is that these algorithms did not generalize well. If you went through this huge effort of collecting training data, learning your model, and validating your model at one institution, and you then take it to a different one, it just works much worse. OK? And that really prevented translation of these technologies into clinical practice. So what were these different domains that were studied? Well, here are a few examples. It's a bit small, so I'll read it out to you. They were studied in breast cancer, myocardial infarction, which is heart attack, lower back pain, used to predict psychiatric length of stay for inpatients, skin tumors, head injuries, prediction of dementia, understanding progression of diabetes, and a variety of other problems, which again are of the nature that we read about in the news today in modern attempts to apply machine learning in healthcare. The number of training examples, as mentioned, was very small, ranging from 39 to, in some cases, 3,000. Those are individuals, humans. And the networks, the neural networks, they weren't completely shallow, but they weren't very deep either. So these were the architectures: they might be 60 neurons, then 7, and then 6, for example, in each of the layers of the neural network. By the way, that sort of makes sense, given the type of data that was fed into it. So none of this is new, in terms of the goals. So what's changed? Why do I think that despite the fact that we've had what could arguably be called a failure for the last 30 or 40 years, we might actually have some chance of succeeding now? And the big differentiator, what I'll call now the opportunity, is data. So whereas in the past, much of the work in artificial intelligence in medicine was not data driven. It was based on trying to elicit as much domain knowledge as one can from clinical domain experts. In some cases, gathering a little bit of data. Today, we have an amazing opportunity because of the prevalence of electronic medical records, both in the United States and elsewhere. Now, here in the United States, for example, the story wasn't that way even back in 2008, when the adoption of electronic medical records was under 10% across the US. But then there was an economic disaster in the US. And as part of the economic stimulus package, which President Obama initiated, there was something like $30 billion allocated to hospitals purchasing electronic medical records. And this is already a first example that we see of policy being really influential in setting the stage for the types of work that we're going to be able to do in this course today. So money was then made available as incentives for hospitals to purchase electronic medical records. And as a result, the adoption increased dramatically. This is a really old number from 2015 of 84% of hospitals, and now today, it's actually much larger. So data is being collected in an electronic form, and that presents an opportunity to try to do research on it. It presents an opportunity to do machine learning on it, and it presents an opportunity to start to deploy machine learning algorithms, where rather than having to manually input data for a patient, we can just draw it automatically from data that's already available in electronic form. And so there are a number of data sets that have been made available for research and development in this space.
Here at MIT, there has been a major effort pioneered by Professor Roger Mark, in the ECS and Institute for Medical Engineering department, to create what's known as the PhysioNet or Mimic databases. Mimic contains data from over 40,000 patients and intensive care units. And it's very rich data. It contains basically everything that's being collected in the intensive care unit. Everything from notes that are written by both nurses and by attendings, to vital signs that are being collected by monitors that are attached to patients, collecting their blood pressure, oxygen saturation, heart rate, and so on, to imaging data, to blood test results as they're made available, and outcomes. And of course also medications that are being prescribed as it goes. And so this is a wealth of data that now one could use to try to study, at least study in a very narrow setting of an intensive care unit, how machine learning could be used in that location. And I don't want to under-emphasize the importance of this database, both through this course and through the broader field. This is really the only publicly available electronic medical record data set of any reasonable size in the whole world, and it was created here at MIT. And we'll be using it extensively in our homework assignments as a result. There are other data sets that aren't publicly available, but which have been gathered by industry. And one prime example is the Truven Market Scan database, which was created by a company called Truven, which was later acquired by IBM, as I'll tell you about more in a few minutes. Now, this data-- and there are many competing companies that have similar data sets-- is created not from electronic medical records, but rather from-- typically, it's created from insurance claims. So every time you go to see a doctor, there's usually some record of that that is associated to the billing of that visit. So your provider will send a bill to your health insurance saying basically what happened, so what procedures were performed, providing diagnoses that are used to justify the cost of those procedures and tests. And from that data, you now get a holistic view, a longitudinal view, of what's happened to that patient's health. And then there is a lot of money that passes behind the scenes between insurers and hospitals to corporate companies, such as Truven, which collect that data and then resell it for research purposes. And one of the biggest purchasers of data like this is the pharmaceutical industry. So this data, unfortunately, is not usually publicly available, and that's actually a big problem, both in the US and elsewhere. It's a big obstacle to research in this field, that only people who have millions of dollars to pay for it really get access to it, and it's something that I'm going to return to throughout the semester. It's something where I think policy can make a big difference. But luckily, here at MIT, the story's going to be a bit different. So thanks to the MIT IBM Watson AI Lab, MIT has a close relationship with IBM. And fingers crossed, it looks like we'll get access to this database for our homework and projects for this semester. Now, there are a lot of other initiatives that are creating large data sets. A really important example here in the US is President Obama's Precision Medicine Initiative, which has since been renamed to the All of Us Initiative. 
And this initiative is creating a data set of one million patients, drawn in a representative manner, from across the United States, to capture patients both poor and rich, patients who are healthy and have chronic disease, with the goal of trying to create a research database where all of us and other people, both inside and outside the US, could do research to make medical discoveries. And this will include data such as data from a baseline health exam, where the typical vitals are taken, blood is drawn. It'll combine data of the previous two types I've mentioned, including both data from electronic medical records and health insurance claims. And a lot of this work is also happening here in Boston. So right across the street at the Broad Institute, there is a team which is creating all of the software infrastructure to accommodate this data. And there are a large number of recruitment sites here in the broader Boston area where patients or any one of you, really, could go and volunteer to be part of this study. I just got a letter in the mail last week inviting me to go, and I was really excited to see that. So all sorts of different data is being created as a result of these trends that I've been mentioning. And it ranges from unstructured data, like clinical notes, to imaging, lab tests, vital signs. Nowadays, what we used to think about just as clinical data now has started to really come to have a very tight tie to what we think about as biological data. So data from genomics and proteomics is starting to play a major role in both clinical research and clinical practice. Of course, not everything that we traditionally think about healthcare data-- there are also some non-traditional views on health. So for example, social media is an interesting way of thinking through both psychiatric disorders, where many of us will post things on Facebook and other places about our mental health, which give a lens on our mental health. Your phone, which is tracking your activity, will give us a view on how active we are. It might help us diagnose early the variety of conditions as well that I'll mention later. So we have-- to this whole theme right now is about what's changed since the previous approaches at AI medicine. I've just talked about data, but data alone is not nearly enough. The other major change is that there has been decades' worth of work on standardizing health data. So for example, when I mentioned to you that when you go to a doctor's office, and they send a bill, that bill is associated with a diagnosis. And that diagnosis is coded in a system called ICD-9 or ICD-10, which is a standardized system where, for many, not all, but many diseases, there is a corresponding code associated with it. ICD-10, which was recently rolled out nationwide about a year ago is much more detailed than the previous coding system, includes some interesting categories. For example, bitten by a turtle has a code for it. Bitten by sea lion, struck by [INAUDIBLE].. So it's starting to get really detailed here, which has its benefits and its disadvantages when it comes to research using that data. But certainly, we can do more with detailed data than we could with less detailed data. Laboratory test results are standardized using a system called LOINC, here in the United States. Every lab test order has an associated code for it. I just want to point out briefly that the values associated with those lab tests are less standardized. Pharmacy, national drug codes should be very familiar to you. 
If you take any medication that you've been prescribed, and you look carefully, you'll see a number on it, and you see 0015347911, that number is unique to that medication. In fact, it's even unique to the brand of that medication. And there's an associated taxonomy with it. And so one can really understand in a very structured way what medications a patient is on and how those medications relate to one another. A lot of medical data is found not in the structured form, but in free text, in notes written by doctors. And these notes have, often, lots of mentions of symptoms and conditions in them. And one can try to standardize those by mapping them to what's called the Unified Medical Language System, which is an ontology with millions of different medical concepts in it. So I'm not going to go too much more into these. They'll be the subject of much discussion in this semester, but particularly in the next two lectures by Pete. But I want to talk very briefly about what you can do once you have a standardized vocabulary. So one thing you can do is you could build APIs, or Application Programming Interfaces, for now sending that data from place to place. And FHIR, F-H-I-R, is a new standard, which has widespread adoption now here in the United States for hospitals to provide data both for downstream clinical purposes but also directly to patients. And in this standard, it will use many of the vocabularies I mentioned to you in the previous slides to encode diagnoses, medications, allergies, problems, and even financial aspects that are relevant to the care of this patient. And for those of you who have an Apple phone, for example, if you open up Apple Health Records, it makes use of this standard to receive data from over 50 different hospitals. And you should expect to see many competitors to them in the future, because of the fact that it's now an open standard. Now other types of data, like the health insurance claims I mentioned earlier, are often encoded in a slightly different data model. One which my lab works quite a bit with is called OMOP, and it's being maintained by a nonprofit organization called OHDSI, the Observational Health Data Sciences and Informatics collaborative, pronounced "Odyssey." And this common data model gives a standard way of taking data from an institution, which might have its own intricacies, and really mapping it to this common language, so that if you write a machine learning algorithm once that reads in data in this format, you can then apply it somewhere else very easily. And the importance of these standards for translating what we're doing in this class into clinical practice really can't be overstated. And so we'll be returning to these things throughout the semester. So we've talked about data. We've talked about standards. And the third wheel is breakthroughs in machine learning. And this should be no surprise to anyone in this room. All right, we've been seeing time and time again, over the last five years, benchmark after benchmark being improved upon and human performance beaten by state-of-the-art machine learning algorithms. Here I'm just showing you a figure that I imagine many of you have seen, on the error rates on the ImageNet competition for object recognition. The error rates in 2011 were 25%. And even just a few years ago, performance already surpassed human level, with error rates under 5%. Now, the changes that have led to those advances in object recognition are going to have some parallels in healthcare, but only up to some point.
For example, there was big data, large training sets that were critical for this. There were algorithmic advances, in particular convolutional neural networks, that played a huge role. And there was open source software that was created, things like TensorFlow and PyTorch, which allow a researcher or industry worker in one place to very, very quickly build upon successes from other researchers in other places and then release the code, so that one can really accelerate the rate of progress in this field. Now, in terms of those algorithmic advances that have made a big difference, the ones that I would really like to point out because of their relevance to this course are learning with high-dimensional features. This was really the advance of the early 2000s, for example, with support vector machines and learning with L1 regularization as a type of sparsity. And then more recently, in the last six years, stochastic gradient descent-like methods for very rapidly solving these convex optimization problems, which will play a huge role in what we'll be doing in this course. In the last few years, there has been a huge amount of progress in unsupervised and semi-supervised learning algorithms. And as I'll tell you about much later, one of the major challenges in healthcare is that despite the fact that we have a large amount of data, we have very little labeled data. And so these semi-supervised learning algorithms are going to play a major role in being able to really take advantage of the data that we do have. And then of course the modern deep learning algorithms. Convolutional neural networks, recurrent neural networks, and ways of trying to train them. So those played a major role in the advances in the tech industry. And to some extent, they'll play a major role in healthcare as well. And I'll point out a few examples of that in the rest of today's lecture. So all of this coming together, the data availability, the advances in other fields of machine learning, and the huge amount of potential financial gain in healthcare and the potential social impact it could have, has not gone unnoticed. And there's a huge amount of industry interest in this field. These are just some examples, from names I think many of you are familiar with, like DeepMind Health and IBM Watson, to startup companies like Bay Labs and PathAI, which is here in Boston, all of which are really trying to build the next generation of tools for healthcare, now based on machine learning algorithms. There's been billions of dollars of funding in the recent quarters towards digital health efforts, with hundreds of different startups that are focused specifically on using artificial intelligence in healthcare. And the recognition that data is so essential to this process has led to an all-out purchasing effort to try to get as much of that data as you can. So for example, IBM purchased a company called Merge, which made medical imaging software and thus had accumulated a large amount of medical imaging data, for $1 billion in 2015. They purchased Truven for $2.6 billion in 2016. Flatiron Health, which is a company in New York City focused on oncology, was purchased for almost $2 billion by Roche, a pharmaceutical company, just last year. And there's several more of these industry moves. Again, I'm just trying to get you thinking about what it really takes in this field, and getting access to data is actually a really important one, obviously.
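Before moving on, here is a small sketch of the kind of sparse, high-dimensional modeling mentioned a moment ago: L1-regularized logistic regression over mostly-zero code indicators. The data below is synthetic, since none of the data sets named here are public in this form, and the specific sizes and regularization strength are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)

# Synthetic stand-in for claims-style data: 4,000 "patients", 1,000 binary
# indicators for diagnosis/medication codes, almost all of them zero.
X = (rng.rand(4000, 1000) < 0.02).astype(float)
true_w = np.zeros(1000)
true_w[:20] = 2.0                                   # only a few codes matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The L1 penalty drives most coefficients to exactly zero, which acts both
# as a statistical regularizer and as a crude form of interpretability.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

print("test AUC:", round(roc_auc_score(y_te, clf.decision_function(X_te)), 3))
print("nonzero coefficients:", int((clf.coef_ != 0).sum()), "out of", clf.coef_.size)
```

Swapping in a deep model here would be easy, but with a few thousand patients and sparse binary features, a regularized linear model like this is often the harder baseline to beat.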
So let's now move on to some examples of how machine learning will transform healthcare. To begin with, I want to really lay out the landscape here and define some language. There are a number of different players when it comes to the healthcare space. They're us, patients, consumers. They are the doctors that we go to, which you could think about as providers. But of course they're not just doctors, they're also nurses and community health workers and so on. There are payers, which provide the-- where there is-- these edges are really showing relationships between the different players, so our consumers, we often, either from our job or directly from us, we will pay premiums for a health insurance company, to a health insurance company, and then that health insurance company is responsible for payments to the providers to provide services to us patients. Now, here in the US, the payers are both commercial and governmental. So many of you will know companies like Cigna or Aetna or Blue Cross, which are commercial providers of healthcare, of health insurance, but there are also governmental ones. For example, the Veterans Health Administration runs one of the biggest health organizations in the United States, servicing our veterans from the department, people who have retired from the Department of Defense, which has the one of the second biggest health systems, the Defense Health Agency. And that is an organization where-- both of those organizations, where both the payer and the provider are really one. The Center for Medicare and Medicaid Services here in the US provides health insurance for all retirees in the United States. And also Medicaid, which is run at a state level, provides health insurance to a variety of individuals who would otherwise have difficulty purchasing or obtaining their own health insurance. And those are examples of state-run or federally run health insurance agencies. And then internationally, sometimes the lines are even more blurred. So of course in places like the United Kingdom, where you have a government-run health system, the National Health Service, you have the same system both paying for and providing the services. Now, the reason why this is really important for us to think about already in lecture one is because what's so essential about this field is figuring out where the knob is that you can turn to try to improve healthcare. Where can we deploy machine learning algorithms within healthcare? So some algorithms are going to be better run by providers, others are going to be better run by payers, others are going to be directly provided to patients, and some all of the above. We also have to think about industrial questions, in terms of what is it going to take to develop a new product. Who will pay for this product? Which is again an important question when it comes to deploying algorithms here. So I'll run through a couple of very high-level examples driven from my own work, focused on the provider space, and then I'll bump up to talk a bit more broadly. So for the last seven or eight years, I've been doing a lot of work in collaboration with Beth Israel Deaconess Medical Center, across the river, with their emergency department. And the emergency department is a really interesting clinical setting, because you have a very short period of time from when a patient comes into the hospital to diagnose what's going on with them, to initiate therapy, and then to decide what to do next. Do you keep them in the hospital? Do you send them home? 
If you-- for each one of those things, what should the most immediate actions be? And at least here in the US, we're always understaffed. So we've got limited resources and very critical decisions to make. So this is one example of a setting where algorithms that are running behind the scenes could potentially really help with some of the challenges I mentioned earlier. So for example, one could imagine an algorithm which builds on techniques like what I mentioned to you for an internist one or quick medical reference, try to reason about what's going on with the patient based on the data that's available for the patient, the symptoms. But the modern view of this shouldn't, of course, use binary indicators of each symptom, which have to be entered in manually, but rather all of these things should be automatically extracted from the electronic medical record or listed as necessary. And then if one could reason about what's going on with a patient, we wouldn't necessarily want to use it for a diagnosis, although in some cases, you might use it for an earlier diagnosis. But it could also be used for a number of other more subtle interventions, for example, better triage to figure out which patients need to be seen first. Early detection of adverse events or recognition that there might be some unusual actions which might actually be medical errors that you want to surface now and draw attention to. Now, you could also use this understanding of what's going on with a patient to change the way that clinicians interact with patient data. So for example, one can try to propagate best practices by surfacing clinical decision support, automatically triggering this clinical decision support for patients that you think it might be relevant for. And here's one example, where it says, the ED Dashboard, the Emergency Department Dashboard decision support algorithms have determined this patient may be eligible for the atria cellulitis pathway. Cellulitis is often caused by infections. Please choose from one of the options. Enroll in the pathway, decline-- and if you decline, you must include a comment for the reviewers. Now, if you clicked on enroll in the pathway, at that moment, machine learning disappears. Rather, there is a standardized process. It's an algorithm, but it's a deterministic algorithm, for how patients with cellulitis should be properly managed, diagnosed, and treated. That algorithm comes from best practices, comes from clinicians coming together, analyzing past data, understanding what would be good ways to treat patients of this type, and then formalizing that in a document. The challenge is that there might be hundreds or even thousands of these best practices. And in an academic medical center, where you have patients coming-- where you have medical students or residents who are very quickly rotating through the system and thus may not be familiar with which are the most appropriate clinical guidelines to use for any one patient in this institution. Or if you go to a rural site, where this academic nature of thinking through what the right clinical guidelines are is a little bit less of the mainstream, everyday activity, the question of which one to use when is very challenging. And so that's where the machine learning algorithms can come in. By reasoning about what's going on with a patient, you might have a good guess of what might be appropriate for this patient, and you use that to automatically surface the right clinical decisions for a trigger. 
Another example is by just trying to anticipate clinician needs. So for example, if you think that this patient might be coming in for a psychiatric condition, or maybe you recognize that the patient came in that triage and was complaining of chest pain, then there might be a psych order set, which includes laboratory test results that are relevant for psychiatric patients, or a chest pain order set, which includes both laboratory tests and interventions, like aspirin, that might be suggested. Now, these are also examples where these order sets are not created by machine learning algorithms. Although that's something we could discuss later in the semester. Rather, they're standardized. But the goal of the machine learning algorithm is just to figure out which ones to show when directly to the clinicians. I'm showing you these examples to try to point out that diagnosis isn't the whole story. Thinking through what are the more subtle interventions we can do with machine learning and AI and healthcare is going to be really important to having the impact that it could have. So other examples, now a bit more on the diagnosis style, are reducing the need for specialist consults. So you might have a patient come in, and it might be really quick to get the patient in front of an X-ray to do a chest X-ray, but then finding the radiologist to review that X-ray could take a lot of time. And in some places, radiologist consults could take days, depending on the urgency of the condition. So this is an area where data is quite standardized. In fact, MIT just released last week a data set of 300,000 chest x-rays with associated labels on them. And one could try to ask the question of could we build machine learning algorithms using the convolutional neural network type techniques that we've seen play a big role in object recognition to try to understand what's going on with this patient. For example, in this case, the prediction is the patient has pneumonia, from this chest X-ray. And using those systems, it could help both reduce the load of radiology consults, and it could allow us to really translate these algorithms to settings which might be much more resource poor, for example, in developing nations. Now, the same sorts of techniques can be used for other data modalities. So this is an example of data that could be obtained from an EKG. And from looking at this EKG, one can try to predict, does the patient have a heart condition, such as an arrhythmia. Now, these types of data used to just be obtained when you go to a doctor's office. But today, they're available to all of us. For example, in Apple's most recent watch that was released, it has a single-lead EKG built into it, which can attempt to predict if a patient has an arrhythmia or not. And there are a lot of subtleties, of course, around what it took to get regulatory approval for that, which we'll be discussing later in the semester, and how one safely deploys such algorithms directly to consumers. And there, there are a variety of techniques that could be used. And in a few lectures, I'll talk to you about techniques from the '80s and '90s which were based on trying to signal processing, trying to detect where are the peaks of the signal, look at a distance between peaks. And more recently, because of the large wealth of data that is available, we've been using convolutional neural network-based approaches to try to understand this data and predict from it. 
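As a rough illustration of the convolutional approach just mentioned, here is a toy one-dimensional CNN for a single-lead waveform. It runs on random signals, the layer sizes are arbitrary, and it is not the model behind any particular product; it only shows the overall shape: learned filters slide over the signal, pooling collapses the time axis, and a final layer produces rhythm-class logits.

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Tiny 1-D CNN: raw single-lead signal in, rhythm-class logits out."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # pool over time -> fixed size
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, samples)
        z = self.features(x).squeeze(-1)       # (batch, 32)
        return self.head(z)                    # logits over rhythm classes

# A 10-second strip at 300 Hz would be about 3,000 samples; here it's random.
model = ECGClassifier()
fake_batch = torch.randn(8, 1, 3000)
print(model(fake_batch).shape)                 # torch.Size([8, 2])
```

In practice, most of the work is in the labeling, the patient-level train/test splits, and the calibration, rather than in the architecture itself.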
Yet another example from the ER really has to do with not how do we care for the patient today, but how do we get better data, which will then result in taking better care of the patient tomorrow. And so one example of that, which my group deployed at Beth Israel Deaconess, and it's still running there in the emergency department, has to do with getting higher quality chief complaints. The chief complaint is usually a very short, two or three word quantity, like left knee pain, rectal pain, right upper quadrant, RUQ, abdominal pain. And it's just a very quick summary of why did the patient come into the ER today. And despite the fact that it's so few words, it plays a huge role in the care of a patient. If you look at the big screens in the ER, which summarize who the patients are and on what beds, they have the chief complaint next to it. Chief complaints are used as criteria for enrolling patients in clinical trials. They're used as criteria for doing retrospective quality research to see how we care for patients of a particular type. So it plays a very big role. But unfortunately, the data that we've been getting has been crap. And that's because it was free text, and it was sufficiently high dimensional that just attempting to standardize it with a big dropdown list, like you see over here, would have killed the clinical workflow. It would've taken way too much time for clinicians to try to find the relevant one. And so it just wouldn't have been used. And that's where some very simple machine learning algorithms turned out to be really valuable. So for example, we changed the workflow altogether. Rather than the chief complaint being the first thing that the triage nurse assigns when the patient comes in, it's the last thing. First, the nurse takes the vital signs, patient's temperature, heart rate, blood pressure, respiratory rate, and oxygen saturation. They talk to the patient. They write up a 10-word or 30-word note about what's going on with the patient. Here it says, "69-year-old male patient with severe intermittent right upper quadrant pain. Began soon after eating. Also is a heavy drinker." So quite a bit of information in that. We take that. We use a machine learning algorithm, a supervised machine learning algorithm in this case, to predict a set of chief complaints now drawn from a standardized ontology. We show the five most likely ones, and the clinician, in this case, a nurse, could just click one of them, and it would enter it in there. We also allow the nurse to type in part of a chief complaint. But rather than just doing a text matching to find words that match what's being typed in, we do a contextual autocomplete. So we use our predictions to prioritize what's the most likely chief complaint that contains that sequence of characters. And that way it's way faster to enter in the relevant information. And what we found is that over time, we got much higher quality data out. And again, this is something we'll be talking about in one of our lectures in this course. So I just gave you an example, a few examples, of how machine learning and artificial intelligence will transform the provider space, but now I want to jump up a level and think through not how do we treat a patient today, but how do we think about the progression of a patient's chronic disease over a period of years. It could be 10 years, 20 years. And this question of how do we manage chronic disease is something which affects all aspects of the healthcare ecosystem.
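To make the contextual autocomplete idea a little more concrete, here is a deliberately simplified sketch with a made-up four-entry ontology and tiny training set (the deployed system's ontology, features, and model were of course richer): a classifier scores every chief complaint given the triage note, and as the nurse types, candidates are filtered by the typed text but ranked by predicted probability rather than alphabetically.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training notes and their chief complaints; a real training set
# would have many thousands of examples and a much larger ontology.
notes = [
    "severe right upper quadrant pain after eating heavy drinker",
    "crushing substernal chest pain radiating to left arm diaphoresis",
    "twisted ankle playing soccer swelling",
    "fever productive cough shortness of breath",
]
complaints = ["Abdominal pain", "Chest pain", "Ankle injury", "Cough"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(notes, complaints)

def suggest(note: str, typed_prefix: str = "", top_k: int = 5):
    """Rank ontology entries by predicted probability, filtered by typed text."""
    probs = model.predict_proba([note])[0]
    ranked = sorted(zip(model.classes_, probs), key=lambda kv: -kv[1])
    typed = typed_prefix.lower()
    return [(c, round(p, 2)) for c, p in ranked if typed in c.lower()][:top_k]

note = "69 year old male with intermittent right upper quadrant pain after eating"
print(suggest(note))            # top suggestions before the nurse types anything
print(suggest(note, "ch"))      # typing "ch" filters and reranks the candidates
```

The payoff of a tool like this is mostly in the data it produces, and cleaner, standardized complaints then feed longer-horizon questions, like the chronic disease modeling just raised.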
It'll be used by providers, payers, and also by patients themselves. So consider a patient with chronic kidney disease. Chronic kidney disease, it typically only gets worse. So you might start with the patient being healthy and then have some increased risk. Eventually, they have some kidney damage. Over time, they reach kidney failure. And once they reach kidney failure, typically, they need dialysis or a kidney transplant. But understanding when each of these things is going to happen for patients is actually really, really challenging. Right now, we have one way of trying to stage patients. The standard approach is known as the EGFR. It's derived predominantly from the patient's creatinine, which is a blood test result, and their age. And it gives you a number out. And from that number, you can get some sense of where the patient is in this trajectory. But it's really coarse grained, and it's not at all predictive about when the patient is going to progress to the next stage of the disease. Now, other conditions, for example, some cancers, like I'll tell you about next, don't follow that linear trajectory. Rather, patients' conditions and the disease burden, which is what I'm showing you in the y-axis here, might get worse, better, worse again, better again, worse again, and so on, and of course is a function of the treatment for the patient and other things that are going on with them. And understanding what influences, how a patient's disease is going to progress, and when is that progression going to happen, could be enormously valuable for many of those different parts of the healthcare ecosystem. So one concrete example of how that type of prediction could be used would be in a type of precision medicine. So returning back to the example that I mentioned in the very beginning of today's lecture of multiple myeloma, which I said my mother died of, there are a large number of existing treatments for multiple myeloma. And we don't really know which treatments work best for whom. But imagine a day where we have algorithms that could take what you know about a patient at one point in time. That might include, for example, blood test results. It might include RNA seq, which gives you some sense of the gene expression for the patient, that in this case would be derived from a sample taken from the patient's bone marrow. You could take that data and try to predict what would happen to a patient under two different scenarios. The blue scenario that I'm showing you here, if you give them treatment A, or this red scenario here, where you give them treatment B. And of course, treatment A and treatment B aren't just one-time treatments, but they're strategies. So they're repeated treatments across time, with some intervals. And if your algorithm says that under treatment B, this is what's going to happen, then you might-- the clinician might think, OK. Treatment B is probably the way to go here. It's going to long-term control the patient's disease burden the best. And this is an example of a causal question. Because we want to know how do we cause a change in the patient's disease trajectory. And we can try to answer this now using data. So for example, one of the data sets that's available for you to use in your course projects is from the Multiple Myeloma Research Foundation. It's an example of a disease registry, just like the disease registry I talked to you about earlier for rheumatoid arthritis. And it follows about 1,000 patients across time, patients who have multiple myeloma. 
What treatments they're getting, what their symptoms are, and at a couple of different stages, very detailed biological data about their cancer, in this case, RNA seq. And one could attempt to use that data to learn models to make predictions like this. But such predictions are fraught with errors. And one of the things that Pete and I will be teaching in this course is that there's a very big difference between prediction and prediction for the purpose of making causal statements. And the way that you interpret the data that you have, when your goal is to do treatment suggestion or optimization, is going to be very different from what you were taught in your introductory machine learning algorithms class. So other ways that we could try to treat and manage patients with chronic disease include early diagnosis. For example, patients with Alzheimer's disease, there's been some really interesting results just in the last few years, here. Or new modalities altogether. For example, liquid biopsies that are able to do early diagnosis of cancer, even without having to do a biopsy of the cancer tumor itself. We can also think about how do we better track and measure chronic disease. So one example shown on the left here is from Dina Katabi's lab here at MIT and CSAIL, where they've developed a system called Emerald, which is using wireless signals, the same wireless signals that we have in this room today, to try to track patients. And they can actually see behind walls, which is quite impressive. So using this for the signal, you could install what looks like just a regular wireless router in an elderly person's home, and you could detect if that elderly patient falls. And of course if the patient has fallen, and they're elderly, it might be very hard for them to get back up. They might have broken a hip, for example. And one could then alert the caregivers, maybe if necessary, bring in emergency support. And that could have a long-term outcome for this patient which would really help them. So this is an example of what I mean by better tracking patients with chronic disease. Another example comes from patients who have type 1 diabetes. Type 1 diabetes, as opposed to type 2 diabetes, generally develops in patients at a very early age. Usually as children it's diagnosed. And one is typically managed by having an insulin pump, which is attached to a patient and can give injections of insulin on the fly, as necessary. But there's a really challenging control problem there. If you give a patient too much insulin, you could kill them. If you give them too little insulin, you could really hurt them. And how much insulin you give them is going to be a function of their activity. It's going to be a function of what food they're eating and various other factors. So this is a question which the control theory community has been thinking through for a number of years, and there are a number of sophisticated algorithms that are present in today's products, and I wouldn't be surprised if one or two people in the room today have one of these. But it also presents a really interesting opportunity for machine learning. Because right now, we're not doing a very good job at predicting future glucose levels, which is essential to figure out how to regulate insulin. 
And if we had algorithms that could, for example, take a patient's phone, take a picture of the food that a patient is eating, have that automatically feed into an algorithm that predicts its caloric content and how quickly that'll be processed by the body. And then as a result of that, think about when, based on this patient's metabolic system, when should you start increasing insulin levels and by how much. That could have a huge impact in quality of life of these types of patients. So finally, we've talked a lot about how do we manage healthcare, but equally important is about discovery. So the same data that we could use to try to change the way that algorithms are implemented could be used to think through what would be new treatments and make new discoveries about disease subtypes. So at one point later in the semester, we'll be talking about disease progression modeling, and we'll talk about how to use data-driven approaches to discover different subtypes of disease. And on the left, here, I'm showing you an example of a really nice study from back in 2008 that used a k-means clustering algorithm to discover subtypes of asthma. One could also use machine learning to try to make discoveries about what proteins, for example, are important in regulating disease. How can we differentiate at a biological level which patients will progress quickly, which patients will respond to treatment. And that of course will then suggest new ways of-- new drug targets for new pharmaceutical efforts. Another direction also studied here at MIT, by quite a few labs, actually, has to do with drug creation or discovery. So one could use machine learning algorithms to try to predict what would a good antibody be for trying to bind with a particular target. So that's all for my overview. And in the remaining 20 minutes, I'm going to tell you a little bit about what's unique about machine learning in healthcare, and then an overview of the class syllabus. And I do see that it says, replace lamp in six minutes, or power will turn off and go into standby mode. AUDIENCE: We have that one [INAUDIBLE].. DAVID SONTAG: Ah, OK. Good. You're hired. If you didn't get into the class, talk to me afterwards. All right. AUDIENCE: [INAUDIBLE]. DAVID SONTAG: [LAUGHS] We hope. So what's unique about machine learning healthcare? I gave you already some hints at this. So first, healthcare is ultimately, unfortunately, about life or death decisions. So we need robust algorithms that don't screw up. A prime example of this, which I'll tell you a little bit more about towards the end of the semester is from a major software error that occurred something like 20, 30 years ago in a-- in an X-ray type of device, where an overwhelming amount of radiation was exposed to a patient just because of a software overflow problem, a bug. And of course that resulted in a number of patients dying. So that was a software error from decades ago, where there was no machine learning in the loop. And as a result of that and similar types of disasters, including in the space industry and airplanes and so on, led to a whole area of research in computer science in formal methods and how do we design computer algorithms that can check that a piece of software would do what it's supposed to do and would not make-- and that there are no bugs in it. 
But now that we're going to start to bring data and machine learning algorithms into the picture, we are really suffering from a lack of good tools for doing similar formal checking of our algorithms and their behavior. And so this is going to be really important in the future decade, as machine learning gets deployed not just in settings like healthcare, but also in other settings of life and death, such as in autonomous driving. And it's something that we'll touch on throughout the semester. So for example, when one deploys machine learning algorithms, we need to be thinking about are they safe, but also how do we check for safety long-term? What are checks and balances that we should put into the deployment of the algorithm to make sure that it's still working as it was intended? We also need fair and accountable algorithms. Because increasingly, machine learning results are being used to drive resources in a healthcare setting. An example that I'll discuss in about a week and a half, when we talk about risk stratification, is that algorithms are being used by payers to risk stratify patients. For example, to figure out which patients are likely to be readmitted to the hospital in the next 30 days, or are likely to have undiagnosed diabetes, or are likely to progress quickly in their diabetes. And based on those predictions, they're doing a number of interventions. For example, they might send nurses to the patient's home. They might offer their members access to a weight loss program. And each of these interventions has money associated with it. They have a cost. And so you can't do them for everyone. And so one uses machine learning algorithms to prioritize who you give those interventions to. But because health is so intimately tied to socioeconomic status, one can think about what happens if these algorithms are not fair. It could have really long-term implications for our society, and it's something that we're going to talk about later in the semester as well. Now, I mentioned earlier that many of the questions that we need to study in the field don't have good labeled data. In cases where we know what we want to predict-- where there's a supervised prediction problem-- often we just don't have labels for the thing we want to predict. But also, in many situations, we're not interested in just predicting something. We're interested in discovery. So for example, when I talk about disease subtyping or disease progression, it's much harder to quantify what you're looking for. And so unsupervised learning algorithms are going to be really important for what we do. And finally, I already mentioned how many of the questions we want to answer are causal in nature, particularly when you want to think about treatment strategies. And so we'll have two lectures on causal inference, and we'll have two lectures on reinforcement learning, which is increasingly being used to learn treatment policies in healthcare. So all of these different problems that we've talked about result in our having to rethink how do we do machine learning in this setting. For example, because deriving labels for supervised prediction is very hard, one has to think through how we could automatically build algorithms to do what's called electronic phenotyping, to figure out automatically what the relevant labels are for a set of patients that one could then attempt to predict in the future.
We also often have very little data. For example, for some rare diseases, there might only be a few hundred or a few thousand people in the nation who have that disease. And some common diseases present in very diverse ways, and [INAUDIBLE] are very rare. Because of that, you have just a small number of patient samples to work with, even if you had all of the data in the right place. And so we need to think through how we can bring together domain knowledge, and how we can bring together data from other areas-- will everyone look over here now-- from other areas, other diseases, in order to learn something that we could then refine for the foreground question of interest. Finally, there is a ton of missing data in healthcare. So raise your hand if you've been seeing your current primary care physician for less than four years. OK. Now, this was an easy guess, because all of you are students, and you probably don't live in Boston. But here in the US, even after you graduate, you go out into the world, you have a job, and that job pays for your health insurance. And you know what? Most of you are going to go into the tech industry, and most of you are going to switch jobs every four years. And so your health insurance is going to change every four years. And unfortunately, data doesn't tend to follow people when you change providers or payers. So what that means is that for any one thing we might want to study, we tend not to have very good longitudinal data on those individuals, at least not here in the United States. That story is a little bit different in other places, like the UK or Israel, for example. Moreover, we also have a very bad lens on that healthcare data. Even if you've been going to the same doctor for a while, we tend to only have data on you when something has been recorded. So if you went to a doctor and had a lab test performed, we know the result of it. If you've never gotten your glucose tested, it's very hard, though not impossible, to figure out if you might be diabetic. So thinking about how we deal with the fact that there's a large amount of missing data, where that missing data has very different patterns across patients, and where there might be a big difference between train and test distributions, is going to be a major part of what we discuss in this course. And finally, the last example is censoring. I think I've said finally a few times. So censoring, which we'll talk about in two weeks, is what happens when you have data only for small windows of time. For example, suppose you have a data set where your goal is to predict survival: you want to know how long until a person dies. But you only have data on a person up to January 2009, and they haven't yet died by January 2009. Then that individual is censored. You don't know what would have happened; you don't know when they died. That doesn't mean you should throw away that data point. In fact, we'll talk about learning algorithms that can learn from censored data very effectively. There are also a number of logistical challenges to doing machine learning in healthcare. I talked about how important having access to data is, but one of the reasons-- there are others-- why getting large amounts of data into the public domain is challenging is that it's so sensitive. And removing identifiers, like names and social security numbers, from data that includes free-text notes can be very challenging.
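To give a sense of why, here is a deliberately naive sketch of what regex-based scrubbing looks like. The patterns and the example note are invented for illustration; a real de-identification system has to handle far more than this.

```python
import re

# An invented example note; not a real patient.
note = "Mr. John Smith (MRN 1234567) seen 3/14/2009; call 617-555-0199 with results."

# A few hand-written patterns for obvious identifiers.
patterns = [
    (r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]"),       # phone numbers
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),  # simple numeric dates
    (r"\bMRN\s*\d+\b", "[MRN]"),                 # medical record numbers
]

scrubbed = note
for pattern, placeholder in patterns:
    scrubbed = re.sub(pattern, placeholder, scrubbed)

print(scrubbed)
# Mr. John Smith ([MRN]) seen [DATE]; call [PHONE] with results.
```

The phone number, date, and record number get caught, but the patient's name sails straight through, and telling names apart from ordinary words, drug names, or eponymous diseases in free text is exactly the hard part. More sophisticated systems do much better, but none are perfect, which is part of why the data sharing agreements described next take so long to negotiate.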
And as a result, when we do research here at MIT, it typically takes us anywhere from a few months-- which has never happened-- to two years, which is the usual situation, to negotiate a data sharing agreement to get the health data to MIT to do research on. And of course then my students write code, which we're very happy to open source under an MIT license, but that code is completely useless, because no one can reproduce their results on the same data; they don't have access to it. So that's a major challenge to this field. Another challenge is the difficulty of deploying machine learning algorithms due to the challenge of integration. So you build a good algorithm. You want to deploy it at your favorite hospital, but guess what? That hospital has Epic or Cerner or Athena or some other commercial electronic medical records system, and that electronic medical records system is not built for your algorithm to plug into. So there is a big gap, a large amount of difficulty, in getting your algorithms into production systems, which we'll talk about as well during the semester. So the goals that Pete and I have for you are as follows. We want you to get intuition for working with healthcare data. And so the next two lectures after today are going to focus on what healthcare is really like, and what the data created by the practice of healthcare is like. We want you to get intuition for how to formalize healthcare problems as machine learning challenges. And that formalization step is often the trickiest part and something you'll spend a lot of time thinking through as part of your problem sets. Not all machine learning algorithms are equally useful. And so one theme that I'll return to throughout the semester is that despite the fact that deep learning is good for many speech recognition and computer vision problems, it actually isn't the best match for many problems in healthcare. And you'll explore that also as part of your problem sets, or at least one of them. And we also want you to understand the subtleties of robustly and safely deploying machine learning algorithms. Now, more broadly, this is a young field. For example, just about three years ago, the first conference on Machine Learning in Healthcare, by that name, was created. And new publication venues are being created every single day by Nature, the Lancet, and also machine learning journals, for publishing research on machine learning in healthcare. Because of the issues we talked about, like access to data and the lack of good benchmarks, reproducibility has been a major challenge. And this is again something that the field is only now starting to really grapple with. And so as part of this course, since so many of you are currently PhD students or will soon be PhD students, we're going to think through what some of the challenges for the research field are, and what some of the open problems are that you might want to work on, either during your PhD or during your future career. |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 5_Risk_Stratification_Part_2.txt | "[CLICK] [SQUEAK] [PAGES RUSTLING] [MOUSE DOUBLE-CLICKS] PROFESSOR: So today we'll be continuing alo(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 24_Robustness_to_Dataset_Shift.txt | "[SQUEAKING] [RUSTLING] [CLICKING] DAVID SONTAG: OK, so then today's lecture is going to be about da(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 25_Interpretability.txt | "PROFESSOR: OK, so the last topic for the class is interpretability. As you know, the modern machine(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 9_Translating_Technology_Into_the_Clinic.txt | "PETER SZOLOVITS: Fortunately, I have a guest today, Dr. Adam Wright, who will be doing an interview(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 22_Regulation_of_Machine_Learning_Artificial_Intelligence_in_the_US.txt | "PROFESSOR: All right. Let's get started. Welcome, ladies and gentlemen. Today it's my pleasure to i(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 18_Disease_Progression_Modeling_and_Subtyping_Part_1.txt | "DAVID SONTAG: So we're done with our segment on causal inference and reinforcement learning. And fo(...TRUNCATED) |
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019 | 8_Natural_Language_Processing_NLP_Part_2.txt | "PETER SZOLOVITS: All right. Let's get started. Good afternoon. So last time, I started talking abou(...TRUNCATED) |