diff --git "a/transcript/allocentric_SCPpM9i7GPU.txt" "b/transcript/allocentric_SCPpM9i7GPU.txt" new file mode 100644--- /dev/null +++ "b/transcript/allocentric_SCPpM9i7GPU.txt" @@ -0,0 +1,1579 @@ +[0.000 --> 3.960] All right, hello there, everybody. +[3.960 --> 5.680] This is my favorite part. +[5.680 --> 7.480] I don't know if you guys get to see this. +[7.480 --> 9.800] I get to see the participant numbers, +[9.800 --> 11.920] just sort of climb and roll in. +[11.920 --> 13.880] This is where I get to imagine you +[13.880 --> 17.720] streaming into the venue and taking your seats +[17.720 --> 19.600] and speaking to your neighbors +[19.600 --> 21.600] and getting all excited about this. +[21.600 --> 25.720] But as I will drag myself away from my favorite part +[25.720 --> 29.560] just to say hello, everyone and welcome. +[29.560 --> 32.880] Historically, today is National High Five Day. +[32.880 --> 35.560] But I think the pandemic has officially +[35.560 --> 38.920] replaced that with National Elbow Bump Day. +[38.920 --> 42.320] In case you didn't know, today is also McDonald's Day. +[42.320 --> 47.080] Today in 1955, Roy Croc opened the first McDonald's +[47.080 --> 49.120] in Deplanes, Illinois. +[49.120 --> 53.280] And while they are hands down the best frontries, +[53.280 --> 55.720] our New York cortex has been struggling +[55.720 --> 59.080] with whether or not we really should be eating them. +[59.120 --> 63.040] And so welcome, everyone, to skeptical and choir presents. +[63.040 --> 66.280] This is a series of live online presentations from experts +[66.280 --> 69.640] who are devoted to advancing science over pseudoscience, +[69.640 --> 71.880] media literacy, over conspiracy theories, +[71.880 --> 74.600] and critical thinking over magical thinking. +[74.600 --> 78.440] My name is Leon Lorde and I'm delighted to be your host. +[78.440 --> 80.760] I am a stand-up comedian and author, +[80.760 --> 85.440] and I'm also a co-host for the Point of Inquiry podcast. +[85.440 --> 86.880] If you are so inclined, +[86.880 --> 89.400] you can also find out more about me and my work +[89.400 --> 92.200] at veryfunnylady.com. +[92.200 --> 95.640] Now, before we get going, I have a few reminders. +[95.640 --> 100.440] The Center for Inquiry's Coronavirus Resource Center +[100.440 --> 104.760] continues doing the work of fact checking misinformation +[104.760 --> 107.120] and sharing reliable news at +[107.120 --> 110.080] CenterforInquiry.org slash coronavirus. +[110.080 --> 112.480] And some of the articles that they've curated this week +[112.480 --> 116.200] include a piece from Wired about a new type of Bill Gates +[116.200 --> 119.760] conspiracy theory that is going viral on Facebook. +[119.760 --> 121.720] I don't know where folks find the time. +[121.720 --> 122.840] I really don't. +[122.840 --> 125.480] And an article from the Medical Care Blog +[125.480 --> 130.320] about how COVID-19 scams are spreading just as fast as COVID-19, +[130.320 --> 135.560] so far to the tune of $397 million. +[135.560 --> 138.240] That's a lot of franchise. +[138.240 --> 141.200] And as always, I invite you to subscribe +[141.200 --> 143.240] to skeptical and choir magazine. +[143.240 --> 145.120] There are two ways to do that. +[145.120 --> 147.000] Both digital and print. +[147.000 --> 149.840] And the bonus, the print subscription also gives you access +[149.840 --> 151.520] to the digital version. +[151.520 --> 155.960] So you can get either and or both at skepticalenquiry.org. 
+[155.960 --> 159.120] And by all means, please, please, please mark your calendar
+[159.120 --> 161.920] for our next Skeptical Inquirer Presents,
+[161.920 --> 165.120] which is on Thursday, April 29th.
+[165.120 --> 169.000] Look, this is the one you don't want to miss, okay?
+[169.000 --> 171.360] Given the presentations that we've had here in the past
+[171.360 --> 174.120] and the questions that have proliferated on this topic,
+[174.120 --> 177.600] you will want to be here to hear Mick West talking
+[177.600 --> 180.440] about Escaping the Rabbit Hole,
+[180.440 --> 184.160] how to help your conspiracy theorist friend.
+[184.160 --> 186.120] Yes, this is the one, everybody.
+[186.120 --> 187.440] There's hope.
+[187.440 --> 190.280] Now, if you're new here, here's the deal.
+[190.280 --> 192.480] The flow of the evening is easy breezy.
+[192.480 --> 194.560] You keep doing whatever you're doing.
+[194.560 --> 197.920] I will introduce our guest, they razzle and dazzle,
+[197.920 --> 201.080] after which we will open it up for your questions.
+[201.080 --> 203.600] And at the bottom of your screen, you know the deal.
+[203.600 --> 207.080] You'll see this little Q&A button, that icon there.
+[207.080 --> 209.560] That's the place for you to type your questions
+[209.560 --> 213.080] in the form of a question, everybody.
+[213.080 --> 215.320] You don't need a CV for that.
+[215.320 --> 217.920] And if you miss any of the talk tonight,
+[217.920 --> 220.680] it is being recorded and will be available
+[220.680 --> 223.120] on skepticalinquirer.org.
+[223.120 --> 224.120] All right, everybody.
+[224.120 --> 226.440] So now off we go.
+[226.440 --> 231.200] I had the pleasure of meeting tonight's guest in 2019 at CSICon.
+[231.200 --> 234.120] And I can honestly say that my neocortex
+[234.120 --> 235.920] has not been the same since.
+[235.920 --> 239.880] Our guest was elected to the National Academy of Engineering
+[239.880 --> 240.920] in the early aughts.
+[240.920 --> 244.760] He directed the Redwood Neuroscience Institute,
+[244.760 --> 246.440] now located at UC Berkeley.
+[246.440 --> 249.560] He is a scientist and co-founder at Numenta,
+[249.560 --> 253.040] a research company focused on neocortical theory.
+[253.040 --> 257.120] Simply put, it's where neuroscience meets machine intelligence.
+[257.120 --> 260.080] He wrote the book On Intelligence,
+[260.080 --> 263.040] and A Thousand Brains: A New Theory of Intelligence,
+[263.040 --> 265.480] which is the subject of tonight's talk.
+[265.480 --> 270.440] He is widely known for founding Palm Computing and Handspring
+[270.440 --> 272.400] and is basically credited with starting
+[272.400 --> 275.080] the entire handheld computing industry.
+[275.080 --> 278.160] And I'd like to think I helped with that
+[278.160 --> 283.160] since I owned and loved the Palm III, Palm V, and the Palm m515,
+[284.880 --> 288.760] doing my part for retail, consumerism, and science.
+[288.760 --> 290.280] You're welcome.
+[290.280 --> 294.040] So with us tonight to talk about how the brain learns
+[294.040 --> 296.200] and why it sometimes gets it wrong,
+[296.200 --> 298.600] please welcome Jeff Hawkins.
+[298.600 --> 300.640] Jeff, you have the con.
+[301.760 --> 302.600] Thank you, Leighann.
+[302.600 --> 304.680] And that's a very, very kind introduction.
+[304.680 --> 307.840] Yeah, we met, as you said, about a year and a half ago
+[307.840 --> 311.440] at the CSICon conference in Las Vegas.
+[311.440 --> 316.120] And the talk I'm gonna give today is similar
+[316.120 --> 317.360] to the one I gave then.
+[317.360 --> 318.880] It's been slightly modified.
+[318.880 --> 320.760] But I want to tell a story about that conference
+[320.760 --> 325.760] because I spoke early in the morning on one of the days.
+[325.960 --> 329.600] And Richard Dawkins spoke at the end of the day.
+[329.600 --> 334.360] And he gave a beautiful and elegant talk as he always does.
+[334.360 --> 335.880] And so I approached him afterwards.
+[335.880 --> 339.240] I'd met Richard a few times, but I didn't really know him too well.
+[339.240 --> 340.520] And I asked him, I said,
+[340.520 --> 342.040] well, Richard, did you hear my talk in the morning?
+[342.040 --> 342.880] He goes, yeah, I did.
+[342.880 --> 344.760] I said, oh, I said, well, what did you think?
+[344.760 --> 345.760] I'm kind of nervous, you know. He says,
+[345.760 --> 347.200] oh, that was pretty good.
+[347.200 --> 350.440] And then I went bolder and I said,
+[350.440 --> 352.280] well, you know, I'm writing this book.
+[352.280 --> 354.680] And it's not done yet, but would you be
+[354.680 --> 356.040] so kind as to read an early draft?
+[356.040 --> 357.040] Would you be interested in that?
+[357.040 --> 358.920] And he said, yeah, I'll do that.
+[358.920 --> 362.480] So a month or so later I sent him a draft of this book
+[362.480 --> 363.720] I was writing.
+[363.720 --> 366.280] And a few weeks later, I wrote him to say,
+[366.280 --> 367.480] Richard, did you start reading it?
+[367.480 --> 368.320] And what do you think?
+[368.320 --> 369.160] He said, I think it's pretty good.
+[369.160 --> 370.640] I'm about halfway through.
+[370.640 --> 373.160] And so I said, okay, well, I said,
+[373.160 --> 375.960] would you consider writing a foreword for the book?
+[375.960 --> 378.160] And I'm doing this in an email exchange.
+[378.160 --> 380.080] And he says, yeah, I'll consider it.
+[380.080 --> 381.400] So a few weeks later, I wrote him again,
+[381.400 --> 382.840] I said, Richard, well, what do you think?
+[382.840 --> 383.680] He said, oh, I've written it.
+[383.680 --> 385.080] Here it is.
+[385.080 --> 388.680] And that book just came out last month,
+[388.680 --> 391.000] called A Thousand Brains: A New Theory of Intelligence.
+[391.000 --> 392.720] And the talk I'm going to give today
+[392.720 --> 395.960] covers part of that book.
+[395.960 --> 399.000] And Richard Dawkins wrote a very, very generous foreword
+[399.000 --> 399.920] to the book.
+[399.920 --> 402.720] So that's a little introduction, tying this back to
+[402.720 --> 404.520] the last time that Leighann and I met,
+[404.520 --> 409.000] and also to when we were at our last real, in-person conference.
+[409.000 --> 412.280] Okay, so I'm going to talk about brains today.
+[412.280 --> 415.840] And it's a pretty deep topic,
+[415.840 --> 418.600] but I'm going to make it as accessible as I can.
+[419.560 --> 421.360] It's going to require showing some images.
+[421.360 --> 423.160] So we're going to do a presentation on this.
+[423.160 --> 426.520] And hopefully everyone can follow along.
+[426.520 --> 430.080] And when we get to the end, we'll look forward to doing Q&A.
+[430.080 --> 432.240] So I'm going to now share my screen.
+[432.360 --> 434.080] We're going to get started with this.
+[434.080 --> 436.960] And hopefully this is all going to work as it's supposed to.
+[438.840 --> 441.600] Like that, and like that, and like that.
+[441.600 --> 444.600] And I assume everyone can see that.
+[444.600 --> 448.320] So as we said, that's the title of my talk.
+[448.320 --> 451.160] And as you said, I work for this company called Numenta,
+[451.160 --> 454.320] which does sort of neuroscience research.
+[454.320 --> 456.600] I run a research lab and we also do machine learning
+[456.600 --> 458.600] and AI work related to that.
+[458.600 --> 460.840] And I mentioned the new book, which is here,
+[460.840 --> 463.240] A Thousand Brains: A New Theory of Intelligence.
+[463.240 --> 465.240] And I shamelessly plug it right now
+[465.240 --> 467.760] and once again at the very end of the talk.
+[467.760 --> 469.520] So let's just jump right into it.
+[469.520 --> 470.640] Hopefully you recognize this.
+[470.640 --> 472.440] Everyone has one.
+[472.440 --> 474.720] At least everyone here has one.
+[474.720 --> 477.680] It's a picture or drawing of a human brain.
+[477.680 --> 482.040] And we can roughly divide that into two parts.
+[482.040 --> 484.600] The one part, the neocortex,
+[484.600 --> 486.400] is a big sheet of cells.
+[486.400 --> 490.160] It's about the size of a large dinner napkin.
+[490.160 --> 492.560] And it's about two and a half millimeters thick,
+[492.560 --> 494.560] maybe twice as thick as a dinner napkin.
+[494.560 --> 497.960] And it wraps around the rest of the brain and fills our skull.
+[497.960 --> 500.600] It's about 70% of the volume of our brain.
+[500.600 --> 502.160] And those little ridges and valleys
+[502.160 --> 505.720] are just from stuffing this sheet of cells into your skull.
+[505.720 --> 507.560] There's a lot of other parts of the brain.
+[507.560 --> 509.280] There are dozens of other regions.
+[509.280 --> 511.760] And we can just roughly call them older brain areas
+[511.760 --> 515.160] because they're mostly evolutionarily older.
+[515.160 --> 516.600] And they do lots of special things.
+[516.600 --> 518.320] And they occupy about 30% of the brain.
+[518.320 --> 519.520] And you can't see most of them.
+[519.520 --> 521.820] They're stuffed up inside, and the neocortex
+[521.820 --> 523.720] wraps around them.
+[523.720 --> 525.720] If we want to ask, what do these older brain areas do?
+[525.720 --> 527.720] Well, they take care of a lot of bodily functions
+[527.720 --> 530.400] such as breathing, digestion, reflex behaviors.
+[530.400 --> 533.840] Even things you might think you learned to do,
+[533.840 --> 536.000] you don't really: walking and running and chewing.
+[536.000 --> 538.880] These are things that your biology, your genes,
+[538.880 --> 539.960] know how to do.
+[539.960 --> 541.840] We're just born prematurely.
+[541.840 --> 543.520] And so we don't walk right away.
+[543.520 --> 545.880] Also in the older brain areas are things like our emotions.
+[545.880 --> 549.560] If you get angry or sad or someone becomes violent,
+[549.560 --> 551.880] that's part of these older brain areas.
+[551.880 --> 553.440] So we can roughly divide it like this.
+[553.440 --> 554.840] The neocortex, on the other hand,
+[554.840 --> 557.040] is everything we think about as intelligence.
+[557.040 --> 559.920] So anything you're conscious of, your conscious perceptions.
+[559.920 --> 562.200] When you look at something or see, or you're looking at the
+[562.200 --> 565.200] computer right now or looking around the room,
+[565.200 --> 567.560] you're hearing things, you're touching things,
+[567.560 --> 569.560] that's your neocortex doing that.
+[569.560 --> 571.400] It's also responsible for all language.
+[571.400 --> 573.760] And not just like spoken language, like I'm doing right now,
+[573.760 --> 575.000] but written language.
+[575.000 --> 578.400] Also the language of mathematics, sign language,
+[578.400 --> 581.520] the language of music; the neocortex creates it
+[581.520 --> 582.920] and understands it.
+[582.920 --> 585.520] So right now there are cells in my head that are spiking on
+[585.520 --> 587.720] and off, creating the movements of my lips
+[587.720 --> 590.080] and my voice box, which are creating my language.
+[590.080 --> 592.280] All things we might think of as cognition or thought
+[592.280 --> 594.240] or planning happen in the neocortex.
+[594.240 --> 596.480] So all the accomplishments that humans have made over the
+[596.480 --> 598.520] years, from engineering, math, science, literature,
+[598.520 --> 600.640] agriculture, you name it,
+[600.640 --> 602.480] that's a product of your neocortex.
+[602.480 --> 605.000] So the neocortex is a pretty amazing organ,
+[605.000 --> 608.640] and all mammals have one, but in humans it's particularly large
+[608.640 --> 611.680] relative to our body size, and we are particularly smart.
+[611.680 --> 614.800] So there's no question that that's due to the
+[614.800 --> 616.240] neocortex.
+[616.240 --> 619.080] Now, what's interesting about the neocortex is,
+[619.080 --> 621.520] although it generates a lot of our behaviors,
+[621.520 --> 624.040] like my speech right now and all the things we do
+[624.040 --> 625.920] day to day, none of the cells in the
+[625.920 --> 628.080] neocortex directly control any muscles.
+[628.080 --> 630.440] So the neocortex can't make any muscles move
+[630.440 --> 631.280] directly.
+[631.360 --> 633.160] None of these cells project to the muscles.
+[633.160 --> 634.280] The cells in the neocortex
+[634.280 --> 636.000] project to other areas of the brain,
+[636.000 --> 638.200] which then can make movement.
+[638.200 --> 641.320] And so it's not really in control all the time.
+[641.320 --> 642.360] So if you think about it, like,
+[642.360 --> 643.840] take something as simple as breathing.
+[643.840 --> 645.400] Well, you don't need to think about breathing.
+[645.400 --> 646.920] We just breathe. Whether you're asleep or in a
+[646.920 --> 648.080] coma, you breathe.
+[648.080 --> 649.560] I'm not thinking about breathing right now.
+[649.560 --> 650.720] I'm just talking.
+[652.440 --> 653.840] But we could, if I said, okay,
+[653.840 --> 656.760] we're gonna take two deep breaths and we're gonna hold our breath.
+[656.760 --> 658.120] Well, we could all do that, and that's the
+[658.120 --> 660.000] neocortex controlling that.
+[660.000 --> 661.920] But after a while, the old brain says,
+[661.920 --> 663.840] you know what, I'm gonna need some oxygen
+[663.840 --> 665.800] and we're just gonna go for it.
+[665.800 --> 667.760] And we're gonna breathe and you can't stop that.
+[667.760 --> 669.680] The same thing, just, you know, if you, +[669.680 --> 671.120] you might leave your house the morning saying, +[671.120 --> 673.160] I'm gonna eat only healthy food today. +[673.160 --> 675.800] And so you get to the break room and there's some old doughnuts +[675.800 --> 677.120] and you're saying, ah, I should need that. +[677.120 --> 679.200] But then your old brain smells it and looks at it and you do. +[679.200 --> 681.520] Anyway, that's because of these +[681.520 --> 683.800] emosimations and drives that the New York court text +[683.800 --> 686.280] is not in control of all the time. +[686.280 --> 689.360] So that's, that's a good place, a big part of who we are. +[689.360 --> 692.600] And why we don't always do good things. +[692.600 --> 694.520] Now, if we, we ask ourselves, +[694.520 --> 696.160] what does the New York court text do? +[697.240 --> 698.680] You know, I should stop for a second point now. +[698.680 --> 700.520] I think I'm hoping you're finding this interesting. +[700.520 --> 701.960] I think everyone should want to know +[701.960 --> 702.960] what their brain does. +[702.960 --> 704.600] I mean, we are our brain, right? +[704.600 --> 707.040] So, and this is what you are. +[707.040 --> 708.880] So I think, you know, always, +[708.880 --> 710.640] I think hope everyone's interested in this +[710.640 --> 712.360] because it's important to know who we are. +[712.360 --> 713.960] I think it's even critical. +[713.960 --> 714.880] Critical action. +[714.880 --> 716.960] Okay, so what does the New York court text do? +[716.960 --> 719.040] You might think it's like a computer and say, +[719.040 --> 721.200] oh, it gets some inputs and processes and access +[721.200 --> 722.040] and do something. +[722.040 --> 722.880] That's not right. +[722.880 --> 724.040] That's not the way to think about it. +[724.040 --> 725.040] The way to think about the New York court text +[725.040 --> 727.160] is it learns a model of the world. +[727.160 --> 730.640] It literally creates a model of the world in your head. +[730.640 --> 732.880] And, and let's just talk about that. +[732.880 --> 735.560] Yeah, first of all, everything you know about the world +[735.560 --> 737.000] is stored in this model. +[737.000 --> 739.200] So you've learned things feel like +[739.200 --> 740.640] and what they look like, what they sound like. +[740.640 --> 741.880] Even simple things. +[741.880 --> 743.520] You know, if you pick something up, +[743.520 --> 746.080] I often use a copy couple, I use that example today. +[746.080 --> 747.400] You have to learn what it's, +[747.440 --> 749.240] the surfaces feel like and what it, +[749.240 --> 750.960] and how it, and the sound it makes +[750.960 --> 752.960] when you put it on a table and so on. +[752.960 --> 754.120] You do this for everything, you know, +[754.120 --> 756.760] you have this complex model of the world +[756.760 --> 758.880] about everything looks and feels and sounds like. +[758.880 --> 760.400] We also have to learn where things are located. +[760.400 --> 762.560] Our brain is not just a list of things. +[762.560 --> 764.000] We know where we keep everything. +[764.000 --> 767.040] Where's, you know, where do I keep the knives in my kitchen? +[767.040 --> 769.480] Where is the chairs in my living room? +[769.480 --> 773.160] Where do I, you know, where is the buildings in my town? +[773.160 --> 774.000] And so on. +[774.000 --> 776.960] Everything has a location and the court text knows this. 
+[777.000 --> 779.040] We also have to learn how things change +[779.040 --> 780.520] when we interact with them. +[780.520 --> 782.840] Take a simple like a thing like a smartphone. +[782.840 --> 785.240] Well, it's an object you can feel it and look at it, +[785.240 --> 787.920] but when you touch it, the icons change or make sounds. +[787.920 --> 789.880] And you touch another icon and something else happens. +[789.880 --> 791.680] And you push the buttons and so on. +[791.680 --> 793.040] Something as simple as a stapler. +[793.040 --> 794.520] You know, what happens when you push it down? +[794.520 --> 796.200] How do you open up and change the stapler? +[796.200 --> 798.120] These seem very simple things, +[798.120 --> 800.440] but these have to be stored in your head someplace. +[800.440 --> 802.160] And of course, we learned, +[802.160 --> 806.360] couldn't have earned this number of conceptual things. +[806.360 --> 810.240] So we have tens of thousands of things about the world, +[810.240 --> 811.600] but we also know things like words. +[811.600 --> 813.240] Every one of us knows 40,000 words. +[813.240 --> 818.320] And we learn concepts like democracy and fear and humility. +[818.320 --> 820.240] And these are things that are somehow stored +[820.240 --> 822.000] in our head in this model. +[822.000 --> 824.720] What's the advantage of having a model? +[824.720 --> 827.600] A model allows you to do several things. +[827.600 --> 829.760] It allows you to know your current situation. +[829.760 --> 831.480] I can open my eyes, look around and say, +[831.480 --> 832.720] oh, I know where I am. +[832.720 --> 833.480] I can recognize it. +[833.480 --> 834.480] I can see it. +[834.480 --> 835.880] Or I might be been a new place, +[835.880 --> 836.720] but I'll know what it is. +[836.720 --> 837.560] Oh, this is a restaurant. +[837.560 --> 840.520] And I can see where the kitchen is and things like that. +[840.520 --> 843.640] But mostly important model lets us to predict the future. +[843.640 --> 845.000] And so we can predict a concept +[845.000 --> 846.000] that comes with deserved actions. +[846.000 --> 848.520] Like what would happen if I won't go down this hallway? +[848.520 --> 849.640] What would happen if I turned left? +[849.640 --> 851.120] What would see if I turned right? +[851.120 --> 853.400] And how to achieve goals? +[853.400 --> 857.280] And this way it really becomes an important part of our survival. +[857.280 --> 859.560] It allows us to say, given a model of the world, +[859.560 --> 862.520] how is it that I might achieve a particular goal, +[862.520 --> 864.880] whether something simple is getting a bite to eat +[864.880 --> 867.200] or something complex, like getting a promotion at work, +[867.200 --> 868.520] something like that. +[868.520 --> 871.080] Now, I found that a lot of people have a trouble understanding +[871.080 --> 872.280] what it means to have a model. +[872.280 --> 873.920] Like what do you mean a model of the world? +[873.920 --> 875.200] We're going to talk about it a bit. +[875.200 --> 877.280] But I've added a few slides here. +[877.280 --> 878.120] I want to talk about this, +[878.120 --> 880.280] just to give you a sense of what we're talking about a model. +[880.280 --> 882.480] Here's a picture of a model. +[882.480 --> 885.520] This is a model of a house that an architect might have made +[885.520 --> 887.000] at the physical model. 
+[887.000 --> 888.640] And the reason we do this is the people +[888.640 --> 891.560] build models like this is that you can imagine +[891.560 --> 894.320] what this building would look like from different angles. +[894.320 --> 896.600] You can imagine you could say, well, how far is it +[896.600 --> 897.920] from the driveway to the pool? +[897.920 --> 900.440] Or what will my views be from different directions? +[900.440 --> 903.160] And how would I plan to do things in this house? +[903.160 --> 904.640] And that's how we build a model. +[904.640 --> 906.560] But now this is a physical model. +[906.560 --> 909.720] And we clearly don't have physical models in our head. +[909.720 --> 911.880] But nowadays, a lot of models are built on a computer. +[911.880 --> 914.240] So here's another model of a house. +[914.240 --> 916.080] This is a computer model of a house. +[916.080 --> 918.800] And the same things, you can say, what does a house look like +[918.800 --> 920.920] from different positions, how many steps +[920.920 --> 921.760] are between things. +[921.760 --> 923.440] So I could say, OK, this is one location, +[923.440 --> 925.160] whatever I was looking down upon the house, +[925.160 --> 927.000] what was looking from a different angle. +[927.000 --> 929.520] Now, how are models like this created? +[929.520 --> 931.040] And that's, of course, in creating a computer. +[931.040 --> 934.720] The basic way that models like this are created in a computer +[934.720 --> 937.400] is they create reference frames for these things. +[937.400 --> 939.600] So you can think of it like the X, Y, and Z reference frames. +[939.600 --> 942.160] Hopefully you'll remember that from high school. +[942.160 --> 946.240] And you can locate things like where is the door relative +[946.240 --> 948.520] to where is the window and things like that. +[948.520 --> 951.400] And objects in the, I can model like this house. +[951.400 --> 954.920] Imagine there's a garage door on this model. +[954.920 --> 957.040] And it has its own models. +[957.040 --> 959.560] And when engineers create models like this, +[959.560 --> 962.960] they say, oh, there's a reference frame in green for the door, +[962.960 --> 964.760] the garage door, and it's a location +[964.760 --> 967.880] to the reference frame for the house, which is the blue hours. +[967.880 --> 970.280] Now, I imagine this because surprisingly, +[970.280 --> 973.360] this is very close to what's going on in your head. +[973.360 --> 976.120] And when I talk about you building models of the world +[976.120 --> 979.840] in your head, you're going to see it's something very much like this. +[979.840 --> 982.000] And it allows you, but you're doing it for everything, +[982.000 --> 982.720] not just your house. +[982.720 --> 985.160] You're doing it for your town, and every possession you have, +[985.160 --> 988.080] and everything you ever interact with. +[988.080 --> 990.200] So we're going to go back to the presentation here. +[990.200 --> 991.200] And we're going to say, OK, so we've +[991.200 --> 995.560] talked about what models are good for. +[995.560 --> 998.000] Now there's a surprising thing about our model in our head. +[998.000 --> 1002.320] And that the model in your head is actually your reality. +[1002.320 --> 1005.320] What you perceive about the world is the model. +[1005.320 --> 1007.080] You're actually not perceiving the real world. +[1007.080 --> 1008.240] You're perceiving the model. 
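To make the reference-frame idea just described concrete, here is a minimal sketch in Python. It is purely illustrative: the class, names, and coordinates are hypothetical, not Numenta's code. An object stores features at locations in its own reference frame, and a child object such as the garage door has its frame placed at a location inside the parent house's frame.

    # Minimal sketch of nested reference frames (illustrative only).
    class ObjectModel:
        def __init__(self, name):
            self.name = name
            self.features = {}   # location (x, y, z) in this object's frame -> feature
            self.children = []   # (child ObjectModel, offset of child's frame in this frame)

        def add_feature(self, location, feature):
            self.features[location] = feature

        def attach(self, child, offset):
            self.children.append((child, offset))

        def features_in_my_frame(self):
            # Yield (location, feature) pairs, translating child frames into this frame.
            for loc, feat in self.features.items():
                yield loc, feat
            for child, offset in self.children:
                for loc, feat in child.features_in_my_frame():
                    yield tuple(l + o for l, o in zip(loc, offset)), feat

    house = ObjectModel("house")
    house.add_feature((0, 0, 0), "front door")
    garage_door = ObjectModel("garage door")
    garage_door.add_feature((0, 1, 0), "handle")
    house.attach(garage_door, (5, 0, 0))   # the door's frame is located within the house's frame
    for loc, feat in house.features_in_my_frame():
        print(loc, feat)   # the handle comes out at (5, 1, 0) in house coordinates

The same bookkeeping of locations relative to nested frames is what the talk goes on to attribute to cortical columns.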
+[1008.240 --> 1011.040] I know this sounds weird, but it's true. +[1011.040 --> 1014.160] And I'll just give you a walkies a little bit about it. +[1014.160 --> 1017.600] You can think of your brain as being in this box, your skull. +[1017.600 --> 1019.520] And the inputs are coming from your senses, +[1019.520 --> 1021.680] like your eyes and your ears and your skin. +[1021.680 --> 1024.640] But how they enter the skull is these little fibers, +[1024.640 --> 1029.280] these nerve fibers, called axons, but the nerves. +[1029.280 --> 1031.480] And they just like a little wire. +[1031.480 --> 1033.240] And they have these spikes coming down the wires +[1033.240 --> 1034.440] just to call action potentials. +[1034.440 --> 1037.080] You might have heard of spikes of action potentials. +[1037.080 --> 1038.440] And now the interesting thing about it +[1038.440 --> 1039.960] is there's a bunch of these coming from the eye. +[1039.960 --> 1041.480] And there's a bunch of these coming about a million +[1041.480 --> 1043.280] from the eye, about a million from your skin, +[1043.280 --> 1045.280] and tens of thousands of years. +[1045.280 --> 1046.640] But they're all identical. +[1046.640 --> 1048.280] The fibers coming into your head are there +[1048.280 --> 1049.120] is no difference between them. +[1049.120 --> 1051.800] You can't look at a fiber and tell this one represents light, +[1051.800 --> 1053.120] this one represents touch or something. +[1053.120 --> 1054.800] That doesn't have that. +[1054.800 --> 1056.160] It's just these spikes. +[1056.160 --> 1057.880] And then your perception, so there +[1057.880 --> 1059.560] is no light entering head and there is no sound. +[1059.560 --> 1061.680] And your perception of what going on in the world +[1061.680 --> 1063.640] is built up from your model. +[1063.640 --> 1065.800] And it turns out things like color don't really exist +[1065.800 --> 1067.040] in the world. +[1067.040 --> 1068.800] I know you might think they do, but they don't. +[1068.800 --> 1070.240] There's frequency of light. +[1070.240 --> 1070.760] And so on. +[1070.760 --> 1072.240] So we live in this model. +[1072.240 --> 1074.000] The model is tied to the world. +[1074.000 --> 1076.240] It's learned from the world. +[1076.240 --> 1078.240] But we actually perceive the model +[1078.240 --> 1081.800] and it forms our basis of all of our beliefs. +[1081.800 --> 1084.400] And that's what this talk is going to be getting to, +[1084.400 --> 1087.600] is about our belief systems, how the brain creates beliefs. +[1087.600 --> 1091.280] So we don't, we clearly are model of the world +[1091.280 --> 1093.040] relates to the real world. +[1093.040 --> 1094.520] But it doesn't always get it right. +[1094.520 --> 1095.880] And come back to that moment. +[1095.880 --> 1098.920] But everything you believe, all your things you think +[1098.920 --> 1101.160] you know for certain are really part of this model. +[1101.160 --> 1103.960] And if that model's accurate, then they're accurate. +[1103.960 --> 1105.640] If it's not accurate, they're not accurate. +[1105.640 --> 1107.840] So this leads us to the question, how +[1107.840 --> 1110.720] is it that the cortex learns the model of the world? +[1110.720 --> 1116.200] This is the, gets the crux of what my team does in our research. +[1116.200 --> 1119.440] So I'm going to delve into some more neuroscience here. +[1119.440 --> 1120.680] I hope it's OK. +[1120.680 --> 1122.080] And I want to get, lose anyone here. 
+[1122.080 --> 1124.200] But I think it's really fascinating.
+[1124.200 --> 1126.360] So let's go on.
+[1126.360 --> 1127.360] Whoops.
+[1127.360 --> 1128.360] I need to click here.
+[1128.360 --> 1128.860] All right.
+[1128.860 --> 1131.360] So now the next thing we can say is the neocortex,
+[1131.360 --> 1136.200] although it looks very uniform from the outside,
+[1136.200 --> 1139.120] it just looks like, I'm going to try to get this smudge off my screen.
+[1139.120 --> 1139.960] Excuse me for a second.
+[1139.960 --> 1141.440] I got something on my screen.
+[1144.800 --> 1146.600] It looks very uniform.
+[1146.600 --> 1150.080] But it's actually divided into different functional regions.
+[1150.080 --> 1151.880] And you can see them here.
+[1151.880 --> 1154.180] So if you look at a human brain, there's
+[1154.180 --> 1156.320] some areas in the back of your head that are visual regions.
+[1156.320 --> 1158.520] There's areas on the side of your head that are auditory regions.
+[1158.520 --> 1160.920] And the somatosensory region is the touch region.
+[1160.920 --> 1164.160] So this is where the inputs from those different sensory
+[1164.160 --> 1165.560] organs come into the brain.
+[1165.560 --> 1168.080] And then there's other regions such as these
+[1168.080 --> 1170.680] on the side here, which are responsible for creating language.
+[1170.680 --> 1172.880] These are what's responsible for
+[1172.880 --> 1175.240] creating my speech right now and your listening.
+[1175.240 --> 1176.800] And then of course, there's a lot of other regions
+[1176.800 --> 1178.280] that do more high-level things.
+[1178.280 --> 1180.600] They're very difficult to characterize.
+[1180.600 --> 1184.720] Now, you might think that, OK, well, the visual regions
+[1184.720 --> 1185.440] are doing vision.
+[1185.440 --> 1188.080] The auditory regions are doing sound and hearing.
+[1188.080 --> 1190.280] And language regions are doing language and so on.
+[1190.280 --> 1191.920] You'd think, well, they must be operating
+[1191.920 --> 1193.000] on some different principle.
+[1193.000 --> 1195.080] They must be different in some ways,
+[1195.080 --> 1198.800] because sight and sound and language don't seem the same to us.
+[1198.800 --> 1200.120] But that's not the truth.
+[1200.120 --> 1202.280] And the truth is very surprising.
+[1202.280 --> 1205.320] People started looking at the neocortex
+[1205.320 --> 1207.480] about 120 years ago.
+[1207.480 --> 1210.840] And these are images from a famous scientist, Ramón y Cajal.
+[1210.840 --> 1213.480] He was the first person to look at the cells
+[1213.480 --> 1215.600] in the neocortex under a microscope.
+[1215.600 --> 1217.960] And he drew these drawings by hand.
+[1217.960 --> 1218.880] It's not a photograph.
+[1218.880 --> 1220.360] Those are hand drawings.
+[1220.360 --> 1221.200] He did thousands of them.
+[1221.200 --> 1223.280] He won a Nobel Prize for his work.
+[1223.280 --> 1225.080] And so what he's looking at here,
+[1225.080 --> 1227.360] he started looking at the neocortex.
+[1227.360 --> 1232.040] And these are pictures of what he saw through the microscope,
+[1232.040 --> 1234.800] a slice through the two-and-a-half-millimeter thickness.
+[1234.800 --> 1236.600] So this is like a slice of that dinner napkin,
+[1236.600 --> 1238.640] and a very small little slice.
+[1238.640 --> 1240.320] And what you see in the left picture,
+[1240.320 --> 1242.080] those little dots are the actual neurons.
+[1242.080 --> 1244.040] Now, there are far more than you can see here.
+[1244.040 --> 1246.160] This is a very small subset of them.
+[1246.160 --> 1248.600] But you can see that the neurons are,
+[1249.560 --> 1251.560] they have different sizes, they're different shapes,
+[1251.560 --> 1253.640] and they have different packing densities.
+[1253.640 --> 1256.600] And so you can imagine there's these layers going horizontally.
+[1256.600 --> 1258.320] And when people talk about the neocortex, they say
+[1258.320 --> 1259.960] it's composed of layers.
+[1259.960 --> 1261.880] The second drawing shows some of the connections
+[1261.880 --> 1263.040] between the neurons.
+[1263.040 --> 1265.000] And these are the axons and dendrites.
+[1265.000 --> 1266.600] And you can see these are mostly vertical,
+[1266.600 --> 1268.560] cutting across the two and a half
+millimeters, and information flows up and down.
+[1271.520 --> 1274.800] Now, this is 120-some years ago.
+[1274.800 --> 1277.080] There have been thousands of neuroscience papers,
+[1277.080 --> 1279.840] scientific papers, published on the neocortex
+[1279.840 --> 1281.720] and its architecture.
+[1281.720 --> 1283.440] It's an incredible amount of data
+[1283.440 --> 1285.520] that has been collected over the decades.
+[1285.520 --> 1287.880] And it's hard to even comprehend
+[1287.880 --> 1289.480] all the information we know about it.
+[1289.480 --> 1293.200] But people like myself, we try to organize this information
+[1293.200 --> 1295.680] and try to figure out, okay, what are the different types of cells,
+[1295.680 --> 1298.800] how are they organized, what are the connections between them.
+[1298.800 --> 1300.280] Now, here's the amazing thing.
+[1300.280 --> 1301.920] We make these kinds of diagrams.
+[1301.920 --> 1306.920] But for the most part, these details are the same everywhere.
+[1306.920 --> 1309.960] If you look in a language area, a vision area, a touch area,
+[1309.960 --> 1312.120] if you look at a cat's brain, a rat's brain,
+[1312.120 --> 1314.360] a mouse's brain, they all have a neocortex,
+[1314.360 --> 1317.240] you're going to see an amazingly similar circuitry.
+[1317.240 --> 1319.240] There are some differences.
+[1319.240 --> 1320.840] Sometimes you'll see more of one cell type
+[1320.840 --> 1323.160] than others, some areas a little bit thicker or less thick, and so on.
+[1323.160 --> 1326.880] But it's this incredibly preserved detailed architecture.
+[1326.880 --> 1328.840] And this doesn't at first make any sense.
+[1328.840 --> 1331.240] How could all the different things that we do
+[1331.240 --> 1334.320] be built on the same detailed architecture?
+[1334.320 --> 1336.400] Well, the first person to figure that out,
+[1336.400 --> 1337.800] so that's the question we're going to answer.
+[1337.800 --> 1338.960] The first person to figure it out,
+[1338.960 --> 1340.880] or at least suggest a way of thinking about it,
+[1340.880 --> 1343.240] was this man, Vernon Mountcastle.
+[1343.240 --> 1346.160] And he was a neurophysiologist at Johns Hopkins University.
+[1346.160 --> 1350.360] He wrote a famous monograph, which is a sort of a small technical book,
+[1350.360 --> 1352.560] in which he made the following arguments.
+[1352.560 --> 1355.880] He said, all areas of the neocortex look the same
+[1355.880 --> 1358.280] because they perform the same intrinsic function,
+[1358.280 --> 1360.800] meaning internally they're doing the same thing.
+[1360.800 --> 1363.760] He says, why one region is visual and another is auditory
+[1363.760 --> 1364.880] is just what you connect it to.
+[1364.880 --> 1367.000] So he says, if you just take a bunch of neocortex
+[1367.000 --> 1369.040] and connect it to the eye, you're going to get vision.
+[1369.040 --> 1371.480] If you connect it to the ears, you're going to get hearing.
+[1371.480 --> 1374.040] If you connect it to the skin, you get touch sensation.
+[1374.040 --> 1376.840] If you connect the outputs of one section,
+[1376.840 --> 1378.080] of these different regions of the cortex,
+[1378.080 --> 1379.400] into other regions, those other regions
+[1379.400 --> 1382.560] might do things like high-level thinking or language.
+[1382.560 --> 1385.640] Which is an incredible idea.
+[1385.640 --> 1387.640] And then he said one thing further. He says,
+[1387.640 --> 1390.960] well, we can actually think of the cortex
+[1390.960 --> 1394.520] as divided into functional units that are the same.
+[1394.520 --> 1396.120] And he proposed that the functional unit
+[1396.120 --> 1399.040] is something called a cortical column,
+[1399.040 --> 1402.040] which you can think about as like a millimeter in area
+[1402.040 --> 1403.440] and two and a half millimeters thick.
+[1403.440 --> 1407.760] And so in this picture, this is a cartoon drawing
+[1407.760 --> 1410.240] of what the neocortex might look like.
+[1410.240 --> 1411.880] It's this sheet of neural tissue,
+[1411.880 --> 1413.360] composed of all these little columns,
+[1413.360 --> 1414.920] stacked side by side.
+[1414.920 --> 1418.440] In a human, we would have about 150,000 of these columns
+[1418.440 --> 1419.720] in our neocortex.
+[1419.720 --> 1421.400] Now, these columns are very complex.
+[1421.400 --> 1423.800] Each one has 100,000 neurons,
+[1423.800 --> 1426.720] about 500 million connections between neurons
+[1426.720 --> 1431.360] in each of these columns, and they're very complex entities.
+[1431.360 --> 1433.760] But we have 150,000 in our brain.
+[1433.760 --> 1435.960] Other animals have a different size neocortex.
+[1435.960 --> 1437.880] And they also have columns, but they just have
+[1437.880 --> 1438.960] different numbers of columns.
+[1438.960 --> 1442.520] So we've got more than they do in some ways, not always,
+[1442.520 --> 1443.040] but in some ways.
+[1443.040 --> 1445.880] That's why we're smarter than other animals.
+[1445.880 --> 1448.760] So now our question is, well, okay,
+[1448.760 --> 1450.120] what does a cortical column do?
+[1450.120 --> 1453.200] If we can figure out what one cortical column does,
+[1453.200 --> 1456.560] then we can figure out what all of them do.
+[1456.560 --> 1458.760] And so this is like a great scientific puzzle.
+[1458.760 --> 1461.320] Like what kind of function could a cortical column do
+[1461.600 --> 1463.720] that can explain vision and hearing and touch
+[1463.720 --> 1464.920] and all these other things?
+[1465.880 --> 1468.480] And how could you just make a brain out of lots of them,
+[1468.480 --> 1470.080] and how do they work together?
+[1470.080 --> 1470.920] That kind of thing.
+[1470.920 --> 1472.000] So that's what we study.
+[1472.000 --> 1474.600] And we've made a lot of progress understanding this.
+[1474.600 --> 1475.800] A great deal, actually.
+[1475.800 --> 1477.640] We think we have a pretty good idea of what's going on.
+[1477.640 --> 1480.040] I'm gonna share it with you right now.
+[1480.040 --> 1483.120] I'm gonna start with a little thought experiment. +[1483.120 --> 1487.200] And so to do that, I'm gonna stop sharing, +[1487.200 --> 1489.800] and you hopefully can see me again. +[1489.800 --> 1490.880] And this is the thought experiment +[1491.000 --> 1491.720] that actually happened. +[1491.720 --> 1493.960] This is how we had a real breakthrough here. +[1493.960 --> 1495.280] I was holding this cup. +[1495.280 --> 1497.800] This is a Nemento coffee cup. +[1497.800 --> 1499.160] I hope you can all see that. +[1499.160 --> 1501.880] And I was just idly playing with it. +[1501.880 --> 1503.640] And I had my finger on the side of the cup, +[1503.640 --> 1506.920] and I said, well, I'm touching this cup with my index finger. +[1506.920 --> 1508.920] And I said, if I move my finger up to this outlet cup, +[1508.920 --> 1510.280] I can predict what I'll feel. +[1510.280 --> 1512.520] I said, oh yeah, I'm gonna feel this rounded edge up here. +[1512.520 --> 1514.000] If I move my finger to the side of the cup, +[1514.000 --> 1517.000] I know my brain is gonna predict what I'm gonna feel this handle. +[1517.000 --> 1518.680] And if I move my finger to the bottom of the cup, +[1518.680 --> 1520.840] I know I'm gonna feel this unglazed rough area +[1520.840 --> 1522.440] at the bottom down here. +[1522.440 --> 1525.520] Now, I know I make this prediction because I can imagine it, +[1525.520 --> 1527.840] but I also know that the brain's always making predictions. +[1527.840 --> 1529.560] And if any of these predictions weren't true, +[1529.560 --> 1530.880] I would notice it. +[1530.880 --> 1532.000] But the question I asked him, +[1532.000 --> 1534.840] what does the brain need to know to make that prediction? +[1534.840 --> 1537.400] And the answer was at least partially pretty simple. +[1537.400 --> 1540.160] It said, well, first of all, I need an on touching this cup, +[1540.160 --> 1541.000] right? +[1541.000 --> 1542.000] That's because it matters when I'm touching +[1542.000 --> 1543.800] something else will feel different. +[1543.800 --> 1545.480] But it also needs my brain needs to know +[1545.480 --> 1548.600] where my finger is relative to the cup. +[1548.600 --> 1551.000] It needs to know the location of my finger on the cup. +[1551.000 --> 1552.680] And as I bat to move my finger, +[1552.680 --> 1555.920] it needs to know where my finger will be after it stops moving. +[1555.920 --> 1558.160] Because otherwise, it wouldn't be able to make that prediction. +[1558.160 --> 1559.880] So it needs to know the location where it is +[1559.880 --> 1562.480] and where it will be when it's done moving as I move. +[1562.480 --> 1563.680] If I move one way, I feel one thing, +[1563.680 --> 1566.080] if I move another way, I feel something else. +[1566.080 --> 1569.280] Now, this is actually something really hard for neurons to do. +[1569.280 --> 1571.080] How could they do this? +[1571.080 --> 1572.880] This knowing where my finger is, +[1572.880 --> 1574.480] there's nothing to do with where the cup is relative. +[1574.480 --> 1576.120] To me, it doesn't matter whether the cup is oriented +[1576.120 --> 1577.400] or hard on the sideways, +[1577.400 --> 1578.880] I make the same predictions. +[1578.880 --> 1580.920] So it's really, I need to know my finger's location +[1580.920 --> 1584.000] relative to this cup, where were the cup is in the world? +[1584.000 --> 1587.240] And that was a key insight that sort of +[1587.240 --> 1588.720] exploded the whole thing open. 
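As a toy illustration of what that prediction requires, suppose a learned object is nothing more than a mapping from locations in the object's reference frame to features. The names and coordinates below are made up for illustration; this is a sketch of the idea, not the cortical algorithm.

    # Predicting "what will I feel?" = update the location by the movement,
    # then look up the feature stored at the new location (illustrative only).
    cup_model = {
        (0, 5): "rounded rim",            # hypothetical locations in the cup's frame
        (2, 3): "smooth handle",
        (0, 0): "rough unglazed bottom",
    }

    def predict_feeling(object_model, finger_location, movement):
        new_location = (finger_location[0] + movement[0],
                        finger_location[1] + movement[1])
        return new_location, object_model.get(new_location, "no prediction stored")

    location = (0, 3)                                    # finger on the side of the cup
    location, prediction = predict_feeling(cup_model, location, movement=(0, 2))
    print(prediction)                                    # -> "rounded rim"

Note that nothing here depends on where the cup is in the room, only on where the finger is relative to the cup, which is the point of the thought experiment.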
+[1588.720 --> 1594.080] So we'll go back to the presentation here, hopefully.
+[1594.080 --> 1597.200] And oh, yeah, I can just like this.
+[1597.200 --> 1598.400] I should be able to do this and that.
+[1598.400 --> 1601.800] OK, so this ultimately led, over several years, to
+[1601.800 --> 1603.720] what we call the Thousand Brains Theory of Intelligence.
+[1603.720 --> 1606.880] And that's what the name of the book is, A Thousand Brains.
+[1606.880 --> 1609.120] So I'm going to explain what that means.
+[1609.120 --> 1611.120] So here is a picture of a cortical column.
+[1611.120 --> 1612.320] I know this is getting deep.
+[1612.320 --> 1613.920] I hope you can all follow this.
+[1613.920 --> 1615.920] But I think it's pretty cool.
+[1615.920 --> 1618.680] We'll come back up again in a little bit.
+[1618.680 --> 1620.000] So here's a picture of a cortical column.
+[1620.000 --> 1621.480] And imagine this one is getting input
+[1621.480 --> 1623.480] from the tip of my index finger touching that cup,
+[1623.480 --> 1625.760] like I just talked about.
+[1625.760 --> 1627.280] And it's just one column.
+[1627.280 --> 1628.840] And it's just getting input from a little part of the fingertip.
+[1628.840 --> 1631.000] Now, there's actually two things that come into this column.
+[1631.000 --> 1631.760] These are facts.
+[1631.760 --> 1633.840] This is not speculation.
+[1633.840 --> 1636.720] There is the actual sensation from the tip of your finger,
+[1636.720 --> 1638.040] what you're feeling.
+[1638.040 --> 1640.160] And then there's a movement command, which basically
+[1640.160 --> 1643.360] represents how the finger or the hand is moving.
+[1643.360 --> 1645.640] So the column gets to know what
+[1645.640 --> 1648.640] the finger is feeling, and which way the finger is moving.
+[1648.640 --> 1653.080] The internals of the column, this is a,
+[1653.080 --> 1654.760] I'm going to explain this in a very simple way.
+[1654.760 --> 1656.200] It's more complicated than I'm explaining it.
+[1656.200 --> 1659.200] But basically, this is the right idea.
+[1659.200 --> 1660.320] There's two things that are known.
+[1660.320 --> 1663.360] One is the column keeps track of the location of the finger
+[1663.360 --> 1664.960] in a reference frame relative to the cup,
+[1664.960 --> 1666.600] just like the reference frames I talked about
+[1666.600 --> 1668.640] with the house.
+[1668.640 --> 1672.320] And when I move my finger, the column updates the location
+[1672.320 --> 1674.560] of the finger in the reference frame of the cup.
+[1674.560 --> 1676.440] And then, of course, there's a sensation coming in.
+[1676.440 --> 1678.440] And that goes into another layer of cells.
+[1678.440 --> 1680.320] And the blue line here represents how
+[1680.320 --> 1683.120] you learn the shape or the feeling of the cup.
+[1683.120 --> 1686.960] It basically pairs what you sense with the location
+[1686.960 --> 1689.280] of what you sense.
+[1689.280 --> 1690.360] Now, think about this.
+[1690.360 --> 1693.320] I can learn an entire cup just by putting my finger on it.
+[1693.320 --> 1695.600] I can put my hand in a black box and touch something.
+[1695.600 --> 1697.920] And I move my hand around it and touch it with just one finger.
+[1697.920 --> 1701.680] I can learn the shape of this cup, or learn the shape of anything.
+[1701.680 --> 1704.080] And what you're doing is you're literally moving the finger
+[1704.080 --> 1707.400] and you're learning what you're sensing at each location.
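Here is the matching learning side as a toy sketch, under the same made-up assumptions and again not the actual cortical mechanism: each step updates the finger's location in the object's reference frame from the movement command, then pairs the sensed feature with that location.

    # Learning an object by touch: store (location -> sensed feature) pairs (illustrative only).
    def learn_by_touch(movements_and_sensations, start_location=(0, 0)):
        model = {}                       # location in the object's frame -> sensed feature
        location = start_location
        for movement, sensation in movements_and_sensations:
            location = (location[0] + movement[0], location[1] + movement[1])
            model[location] = sensation  # pair what you sense with where you sensed it
        return model

    # One finger exploring something in a black box (hypothetical movements and sensations):
    touches = [((0, 3), "smooth curved side"),
               ((0, 2), "rounded rim"),
               ((2, -2), "smooth handle"),
               ((-2, -3), "rough unglazed bottom")]
    cup = learn_by_touch(touches)
    print(cup)   # the object is just the collection of locations and what was sensed there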
+[1707.400 --> 1711.160] There is another layer of cells which represents the object,
+[1711.160 --> 1712.440] the cup itself.
+[1712.440 --> 1715.560] So the object, in this case the coffee cup,
+[1715.560 --> 1718.800] is essentially a collection of locations and sensations.
+[1718.800 --> 1720.440] It's like, what are all the things I'm
+[1720.440 --> 1722.800] feeling at these locations?
+[1723.760 --> 1725.640] And so you can learn a model of a cup.
+[1725.640 --> 1726.560] So this is pretty impressive.
+[1726.560 --> 1727.960] A single column now,
+[1727.960 --> 1729.840] this one little piece of the neocortex
+[1729.840 --> 1732.200] touching one little finger, can learn the entire model
+[1732.200 --> 1734.400] of a coffee cup, what it feels like.
+[1734.400 --> 1737.200] So the basic idea is, columns create reference
+[1737.200 --> 1739.200] frames for every object they know.
+[1739.200 --> 1740.760] And the reference frame is used in the same way
+[1740.760 --> 1741.600] I mentioned before.
+[1741.600 --> 1743.640] It specifies locations of features.
+[1743.640 --> 1745.480] It can be used to predict the outcomes of movements,
+[1745.480 --> 1747.720] like what will I feel if I move my finger?
+[1747.720 --> 1750.320] It allows us to plan and achieve goals, like how do I reach?
+[1750.320 --> 1752.840] If I want to reach and grab the handle, which way do I go?
+[1752.840 --> 1755.040] If I want to stick my finger into the liquid in the cup,
+[1755.040 --> 1756.200] which way do I go?
+[1756.200 --> 1759.920] And our modeling and simulations show that a single column
+[1759.920 --> 1765.080] can learn hundreds of objects in a very sophisticated way.
+[1765.080 --> 1767.400] So the second part of the Thousand Brains Theory
+[1767.400 --> 1769.400] is that there are thousands of complementary models
+[1769.400 --> 1770.320] for every object.
+[1770.320 --> 1772.160] If I said, well, where is the knowledge of a coffee
+[1772.160 --> 1772.920] cup in my brain?
+[1772.920 --> 1774.360] It's not one place.
+[1774.360 --> 1775.960] It's in many places, in many columns.
+[1775.960 --> 1778.520] It's not all the columns, but thousands of columns
+[1778.520 --> 1780.800] in the cortex know what a coffee cup feels like.
+[1780.800 --> 1782.760] Here's a simple way of thinking about it.
+[1782.760 --> 1786.400] Here is a picture of a hand touching the coffee cup.
+[1786.400 --> 1789.480] And let's say we're using three fingers at a time.
+[1789.480 --> 1791.480] So there are three fingers touching the cup.
+[1791.480 --> 1793.560] Each finger is at a different location.
+[1793.560 --> 1795.880] Each finger is feeling something different.
+[1795.880 --> 1797.560] And so each column is going to model it;
+[1797.560 --> 1799.640] each of the three columns associated with the fingertips,
+[1799.640 --> 1802.080] each is going to try to learn the model of that object.
+[1802.080 --> 1804.360] And they can do that.
+[1804.360 --> 1808.760] And so they do, but now there's something else that can happen.
+[1808.760 --> 1811.160] And because they're all touching the same object,
+[1811.160 --> 1813.440] in this case, they're all touching the coffee cup,
+[1813.440 --> 1815.560] they should agree on what the object is.
+[1815.560 --> 1818.080] They should say, yes, we all know this is a coffee cup.
+[1818.080 --> 1818.760] And they do.
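The next part of the talk describes the long-range connections that let the columns reach that agreement. As a toy picture of the voting itself, not Numenta's algorithm, imagine each column keeps the set of known objects consistent with the feature it is sensing, and the consensus is whatever survives the intersection.

    # Toy "voting": intersect each column's candidate objects (illustrative only).
    known_objects = {
        "coffee cup": {"rounded rim", "smooth handle", "rough unglazed bottom", "smooth curved side"},
        "soda can":   {"rounded rim", "smooth curved side", "metal tab"},
        "stapler":    {"hinged top", "flat base"},
    }

    def candidates(sensed_feature):
        return {name for name, feats in known_objects.items() if sensed_feature in feats}

    def vote(sensed_features):
        result = set(known_objects)
        for feature in sensed_features:       # one sensed feature per finger/column
            result &= candidates(feature)
        return result

    print(vote(["rounded rim"]))                    # ambiguous: {'coffee cup', 'soda can'}
    print(vote(["rounded rim", "smooth handle"]))   # consensus: {'coffee cup'}

One finger alone leaves ambiguity and has to keep moving; several fingers at once can settle it in a single grasp, which is the behavior described next.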
+[1818.760 --> 1821.240] There is a, there are these long range connections +[1821.240 --> 1824.200] in the cortex between certain layers, which we believe +[1824.200 --> 1824.920] are voting. +[1824.920 --> 1826.520] It's the ways where the columns are saying, +[1826.520 --> 1827.640] well, I'm touching an edge. +[1827.640 --> 1828.640] It might be this or that. +[1828.640 --> 1830.440] And sometimes I'm touching this surface. +[1830.440 --> 1831.440] It might be this or that. +[1831.440 --> 1833.080] And sometimes I'm saying, I'm touching something else. +[1833.080 --> 1833.880] I don't know what it is. +[1833.880 --> 1834.880] But they can all get together. +[1834.880 --> 1836.840] And they can say, the only thing that makes sense here +[1836.840 --> 1837.680] is a cup. +[1837.680 --> 1839.760] And they great quickly resolve this and say, +[1839.760 --> 1841.040] we're all touching a cup. +[1841.040 --> 1842.200] Now, you can think about this. +[1842.200 --> 1844.840] If I were to ask you to recognize an object by sticking your hand +[1844.840 --> 1847.120] in a black box and touching with one finger, +[1847.120 --> 1848.560] well, then you'd have to move your finger +[1848.560 --> 1850.440] around it a bit to feel what it looks like. +[1850.440 --> 1852.280] But if I grab it with my hand all at once, +[1852.280 --> 1854.800] often you can recognize it with a single grass. +[1854.800 --> 1856.360] That's because the columns are voting, +[1856.360 --> 1857.880] and they don't need to move, or at least, +[1857.880 --> 1860.000] they need to move less. +[1860.000 --> 1862.520] The same thing is happening with vision. +[1862.520 --> 1865.280] You may think vision feels like a different type of sensation, +[1865.280 --> 1867.280] but it's really not. +[1867.280 --> 1869.800] In this case, there is a, you could +[1869.800 --> 1872.600] think about the back of the retina is an array of sensors, +[1872.600 --> 1874.520] just like the skin is an array of sensors. +[1874.520 --> 1876.320] They happen to move together, but that's +[1876.320 --> 1878.120] not that important at this time. +[1878.120 --> 1880.760] And what happens is that each of those little patches +[1880.760 --> 1883.080] of your retina project to a cortical column +[1883.080 --> 1885.400] in your cortex. +[1885.400 --> 1887.040] So when you look at something, you're +[1887.040 --> 1888.880] not really looking at a picture of it. +[1888.880 --> 1890.800] You're actually looking at lots of little pieces, +[1890.800 --> 1893.080] each piece in your brain is keeping track of where it is +[1893.080 --> 1895.240] on the object you're looking at. +[1895.240 --> 1896.600] And when you look at something, you say, +[1896.600 --> 1898.280] oh, that's a cat or that's a dog. +[1898.280 --> 1900.120] All those columns vote together. +[1900.120 --> 1902.160] If you looked at the world through a straw, +[1902.160 --> 1903.640] so imagine you had a little skinny straw, +[1903.640 --> 1905.400] and you could only look at the world through a straw, +[1905.400 --> 1907.120] well, then you wouldn't be able to see much at once. +[1907.120 --> 1908.800] You'd only be activating a free column, +[1908.800 --> 1910.200] and you'd have to move the straw around, +[1910.200 --> 1911.960] just like you have to move your finger around. +[1911.960 --> 1915.480] So it's very analogous to what's going on with touch. +[1915.480 --> 1919.720] OK, so the interesting about this is, +[1919.720 --> 1921.520] what do we perceive? 
+[1921.520 --> 1923.880] It turns out that the only part of this system
+[1923.880 --> 1926.560] that we can perceive is the sort of voting
+[1926.560 --> 1928.720] layer, the agreement on the object, in some sense.
+[1928.720 --> 1931.160] So when you look out at the world, it seems stable.
+[1931.160 --> 1933.480] If I'm looking at a coffee cup, or I'm looking at a person,
+[1933.480 --> 1935.840] or I'm looking at a refrigerator, or whatever,
+[1935.840 --> 1937.760] my eyes are constantly moving.
+[1937.760 --> 1939.000] They're moving about three times a second.
+[1939.000 --> 1940.760] Everyone's eyes are moving about three times a second.
+[1940.760 --> 1942.640] They're jumping this way and that; they're called saccades.
+[1942.640 --> 1944.360] You're not aware of it at all.
+[1944.360 --> 1945.760] The world seems stable.
+[1945.760 --> 1947.840] You don't think that the inputs are changing.
+[1947.840 --> 1949.560] What's going on is the inputs are changing
+[1949.560 --> 1950.400] to your brain.
+[1950.400 --> 1951.760] These columns are all jumping around
+[1951.760 --> 1953.680] and looking at different locations, different sensed features.
+[1953.680 --> 1955.480] And yet, they're voting on the same thing.
+[1955.480 --> 1956.400] It's still the refrigerator.
+[1956.400 --> 1957.640] It's still your friend.
+[1957.640 --> 1960.120] And that is the only part of the brain we can perceive.
+[1960.120 --> 1962.040] We can only perceive the operation of the voting layer.
+[1962.040 --> 1964.640] And so we're not aware of most of what's going on in our brain.
+[1964.640 --> 1967.720] We're not able to perceive all these crazy things
+[1967.720 --> 1968.640] that are happening underneath.
+[1968.640 --> 1971.040] We're just perceiving the consensus vote
+[1971.040 --> 1972.600] of what's going on out there.
+[1972.600 --> 1974.320] And this is important because this is why we
+[1974.320 --> 1976.400] have a singular perception of the world.
+[1976.400 --> 1979.200] This is why we don't feel like we have thousands of models
+[1979.200 --> 1981.440] operating independently, voting, trying to figure out
+[1981.440 --> 1986.400] what's going on; it's because we only perceive what they agree on.
+[1986.400 --> 1988.920] And this is something that some people may have heard
+[1988.920 --> 1990.160] of, called the binding problem.
+[1990.160 --> 1992.800] Like how does the brain bind all our sensations together?
+[1992.800 --> 1994.240] It doesn't seem possible.
+[1994.240 --> 1997.160] And the answer to it is that the columns vote
+[1997.160 --> 2000.440] and we're only able to perceive the voting.
+[2000.440 --> 2002.280] So that's a pretty nice thing about that.
+[2002.280 --> 2006.000] OK, so let's talk about knowledge in general.
+[2006.000 --> 2008.080] And I'm going to claim here that all knowledge is stored
+[2008.080 --> 2008.960] in reference frames.
+[2008.960 --> 2012.160] So again, this is the idea that if the entire neocortex
+[2012.160 --> 2013.600] is working on the same principles,
+[2013.600 --> 2016.160] then everything we know must be stored this way.
+[2016.160 --> 2019.640] And so it's like everything is stored in reference frames.
+[2019.640 --> 2023.600] And now that's a sort of deduced conclusion, but we think it's true.
+[2023.600 --> 2025.880] So now we can think about a column as a generic column,
+[2025.880 --> 2027.800] let's say someplace else in the cortex, doing math
+[2027.800 --> 2030.120] or talking about history or something like that.
+[2030.120 --> 2031.920] And so you can think of it as, OK, well,
+[2031.920 --> 2032.960] there are inputs to this column.
+[2032.960 --> 2035.040] These inputs may not be coming from your senses.
+[2035.040 --> 2036.440] They're coming from other parts of the brain.
+[2036.440 --> 2038.600] They may be objects you've recognized or other things
+[2038.600 --> 2040.680] you've already perceived.
+[2040.680 --> 2042.120] But those are the inputs to this column
+[2042.120 --> 2043.440] in this other part of the brain.
+[2043.440 --> 2046.560] And the movements, in this case, are not physical movements
+[2046.560 --> 2047.920] of the body necessarily.
+[2047.920 --> 2050.480] It's sort of the same idea.
+[2050.480 --> 2052.120] It's like mentally moving through space.
+[2052.120 --> 2054.480] You can imagine mentally moving through your house
+[2054.480 --> 2054.960] right now.
+[2054.960 --> 2057.200] I can say, well, go in the front door and look right.
+[2057.200 --> 2058.560] And on the other side, what do you see?
+[2058.560 --> 2061.160] So what we're doing when we're accessing knowledge
+[2061.160 --> 2063.320] in our life is mentally moving through reference
+[2063.320 --> 2065.560] frames, recalling the facts that are stored
+[2065.560 --> 2066.720] at different locations.
+[2066.720 --> 2067.880] And you're not aware of this.
+[2067.880 --> 2070.280] You're not thinking, oh, my knowledge is stored in reference frames.
+[2070.280 --> 2071.640] No, you're not aware of that at all.
+[2071.640 --> 2072.480] It's just that some ideas,
+[2072.480 --> 2074.080] these concepts, pop into your head.
+[2074.080 --> 2076.480] But what's going on underneath is that you've
+[2076.480 --> 2078.720] stored all your knowledge of the world
+[2078.720 --> 2082.720] in these reference frames, and you're moving around and retrieving it.
+[2082.720 --> 2086.240] This is important because it makes knowledge actionable.
+[2086.240 --> 2089.560] Our knowledge of the world, again, is not like a list of facts.
+[2089.560 --> 2091.680] It's stuff we can think and reason about.
+[2091.680 --> 2096.960] We can say, oh, let's think about evolution.
+[2096.960 --> 2097.960] What do I know about it?
+[2097.960 --> 2099.000] How does it behave?
+[2099.000 --> 2101.600] What would happen if I changed the way
+[2101.600 --> 2104.600] this gene works, and so on. Knowledge becomes actionable
+[2104.600 --> 2106.440] because we've arranged it in reference frames,
+[2106.440 --> 2108.560] just like the knowledge about a coffee cup
+[2108.560 --> 2112.320] is actionable because the brain put it in a reference frame.
+[2112.320 --> 2115.360] And now, thinking is what occurs when your brain moves from
+[2115.360 --> 2117.760] location to location in reference frames.
+[2117.760 --> 2120.560] As we think during the day, these ideas sort of pop into our heads
+[2120.560 --> 2123.240] all day long, just constantly, while we're awake.
+[2123.240 --> 2126.280] And what's going on there is literally that the cells in your
+[2126.280 --> 2128.760] cortex are accessing different locations in these reference
+[2128.760 --> 2130.360] frames, moving from one location to another.
+[2130.360 --> 2133.000] And when the brain moves to a location in a reference frame, it recalls some fact,
+[2133.000 --> 2136.400] and that fact pops into your head.
+[2136.400 --> 2139.400] And that's what we do when we think.
+[2139.400 --> 2141.400] Now, here's an interesting thing.
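A minimal sketch of the claim that facts sit at locations in a reference frame and that thinking is moving from location to location, recalling what is stored there. The frame, the coordinates, and the facts below are invented for illustration; the cortical mechanism is of course far richer than a lookup table.

```python
# Toy "reference frame": facts stored at locations, recalled by mentally moving there.

house_frame = {
    (0, 0): "front door",
    (1, 0): "coat closet on the right",
    (2, 0): "hallway with the family photos",
    (2, 1): "kitchen, coffee maker on the counter",
}

def think(frame, path):
    """Visit a sequence of locations; whatever is stored at each one is recalled."""
    for location in path:
        fact = frame.get(location, "nothing stored here")
        print(f"at {location}: {fact}")

# "Go in the front door and look right" as a sequence of mental movements.
think(house_frame, [(0, 0), (1, 0), (2, 0), (2, 1)])
```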
+[2141.400 --> 2146.000] You can take the same facts and arrange them in different types
+[2146.000 --> 2147.120] of reference frames.
+[2147.120 --> 2150.400] And this can lead to different beliefs about those facts.
+[2150.400 --> 2153.280] So I give a simple example in the book.
+[2153.400 --> 2156.320] You can imagine I take a bunch of historical facts,
+[2156.320 --> 2157.760] things that happened.
+[2157.760 --> 2160.240] And I can say, OK, let's arrange them on a reference
+[2160.240 --> 2162.680] frame that looks like a timeline.
+[2162.680 --> 2163.800] That's a type of reference frame.
+[2163.800 --> 2165.760] It's a one-dimensional reference frame.
+[2165.760 --> 2168.360] And if I do that, I can see that two facts next to each other
+[2168.360 --> 2170.160] might be causally related in time.
+[2170.160 --> 2171.880] Like, oh, this one came right before that one.
+[2171.880 --> 2173.680] Maybe they're related.
+[2173.680 --> 2176.280] I can take the same set of facts and arrange them
+[2176.280 --> 2179.120] in a different reference frame; think about something like a map.
+[2179.120 --> 2181.240] And I put these events on a map.
+[2181.240 --> 2183.640] And now I can say, oh, these two facts
+[2183.640 --> 2185.920] happened right next to each other in space.
+[2185.920 --> 2187.240] Maybe they're causally related because they're
+[2187.240 --> 2187.840] next to each other.
+[2187.840 --> 2189.640] Or these events occurred next to mountains,
+[2189.640 --> 2192.080] and maybe they occurred because they're next to mountains.
+[2192.080 --> 2195.200] The point is, you can take the same set of facts
+[2195.200 --> 2197.480] and arrange them in different reference frames.
+[2197.480 --> 2199.160] We can all agree on the facts.
+[2199.160 --> 2201.800] But two people might view those facts differently.
+[2201.800 --> 2203.800] Now, it turns out that in the brain, these reference
+[2203.800 --> 2205.440] frames are not set in stone.
+[2205.440 --> 2207.120] They're discovered.
+[2207.120 --> 2208.400] They're part of the learning process.
+[2208.400 --> 2209.640] You're not aware you're doing this.
+[2209.640 --> 2210.880] But they're part of the learning process.
+[2210.880 --> 2213.280] So two people can take the same set of data,
+[2213.280 --> 2215.480] the same facts and information,
+[2215.480 --> 2218.360] and arrange them in different ways, in different reference
+[2218.360 --> 2219.360] frames.
+[2219.360 --> 2220.360] And they will make different predictions.
+[2220.360 --> 2223.640] And they'll have different beliefs about those things.
+[2223.640 --> 2226.240] That can be very good at times, because it can mean two people
+[2226.240 --> 2228.040] can look at the same things in different ways,
+[2228.040 --> 2229.280] and they can help each other.
+[2229.280 --> 2231.360] But it can also be a problem.
+[2231.360 --> 2232.760] Sometimes you can arrange things
+[2232.760 --> 2236.120] in a way that's not useful.
+[2236.120 --> 2239.360] Now, as I said a moment ago, some of your columns
+[2239.360 --> 2241.920] get direct input from the senses.
+[2241.920 --> 2243.720] And these columns, the ones I highlighted here,
+[2243.720 --> 2246.600] getting input from the skin and the eyes and the ears,
+[2246.600 --> 2250.800] when they build models, it's hard for those models to be wrong.
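A toy sketch of the point that the same facts, placed in different reference frames, suggest different relationships: under a one-dimensional timeline frame and a two-dimensional map frame, the same event ends up with different nearest neighbors. The events, years, and coordinates are all invented for illustration.

```python
# The same three facts, viewed through two different reference frames.

events = {
    "drought":   {"year": 1850, "place": (0.0, 0.0)},
    "migration": {"year": 1851, "place": (9.0, 9.0)},
    "flood":     {"year": 1900, "place": (0.5, 0.2)},
}

def nearest(name, distance):
    """Which other event sits closest to `name` under a given distance measure?"""
    others = [e for e in events if e != name]
    return min(others, key=lambda e: distance(events[name], events[e]))

def timeline(a, b):   # 1-D frame: how many years apart
    return abs(a["year"] - b["year"])

def map_2d(a, b):     # 2-D frame: squared distance between places
    return sum((x - y) ** 2 for x, y in zip(a["place"], b["place"]))

print(nearest("drought", timeline))  # migration: adjacent in time, maybe causal?
print(nearest("drought", map_2d))    # flood: adjacent in space, maybe related?
```

Nothing about the facts changed; only the frame did, and with it the relationships a reasoner would be tempted to read off.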
+[2250.800 --> 2254.000] So for example, if I touch a coffee cup and you touch a coffee
+[2254.000 --> 2257.720] cup, and one of us has built an incorrect model of the coffee cup,
+[2257.720 --> 2260.360] we very quickly find out, because our predictions don't
+[2260.360 --> 2261.440] work out right.
+[2261.440 --> 2264.120] If I think the coffee cup is round and you think it's square,
+[2264.120 --> 2265.640] well, one of us is going to be wrong.
+[2265.640 --> 2267.520] And as soon as we start touching the cup,
+[2267.520 --> 2269.880] one of us will find out that, hey, it's not round, or it isn't
+[2269.880 --> 2271.240] square.
+[2271.240 --> 2274.560] So these columns can verify their predictions all the time.
+[2274.560 --> 2276.540] So two people, no matter where they live in the world,
+[2276.540 --> 2278.880] will have similar ideas of what a coffee cup is like.
+[2278.880 --> 2279.880] If we give them a coffee cup, they'll
+[2279.880 --> 2281.960] form a similar model of it, or if they're given something like
+[2281.960 --> 2284.840] a cell phone, they'll form a similar model of it.
+[2284.840 --> 2287.480] But then there are a lot of parts of the brain,
+[2287.480 --> 2289.360] in the cortex, where the columns are getting
+[2289.360 --> 2290.680] input from other columns.
+[2290.680 --> 2292.800] They're not directly sensing anything.
+[2292.800 --> 2294.960] In fact, much of what we learn about the world
+[2294.960 --> 2297.360] is coming through language.
+[2297.880 --> 2301.400] If I learn something through language, I can't directly verify that it's
+[2301.400 --> 2302.400] correct.
+[2302.400 --> 2305.720] So we build a model of the world that may be consistent,
+[2305.720 --> 2308.320] but it may not accurately reflect the world.
+[2308.320 --> 2310.000] I'll give you two examples.
+[2310.000 --> 2312.520] For example, I've never
+[2312.520 --> 2314.400] been to the city of Havana.
+[2314.400 --> 2315.880] But I believe it exists.
+[2315.880 --> 2316.880] Why do I believe it exists?
+[2316.880 --> 2318.480] Because I've read about it.
+[2318.480 --> 2320.080] And people have told me about it.
+[2320.080 --> 2320.840] I've never been there.
+[2320.840 --> 2323.720] I've never verified that it exists.
+[2323.720 --> 2325.640] I've never even seen Cuba.
+[2325.680 --> 2328.840] But I believe it's there because I have read things about it.
+[2328.840 --> 2330.840] I don't believe that heaven exists.
+[2330.840 --> 2332.520] But some people do.
+[2332.520 --> 2334.560] And they've read about it, too.
+[2334.560 --> 2336.080] So who's right?
+[2336.080 --> 2336.960] Maybe we're both right.
+[2336.960 --> 2338.120] Maybe we're both wrong.
+[2338.120 --> 2340.040] This is an inherent problem
+[2340.040 --> 2341.920] with the way the brain works, with learning things
+[2341.920 --> 2343.800] through other people, through language.
+[2343.800 --> 2348.040] We can form very believable sets of knowledge
+[2348.040 --> 2350.600] that are consistent with each other,
+[2350.600 --> 2353.040] and yet they may be completely different, and wrong.
+[2353.040 --> 2354.640] Or someone could be wrong and someone could be right.
+[2354.640 --> 2356.560] And this is an inherent problem with
+[2356.560 --> 2357.800] the way the brain is designed.
+[2357.800 --> 2359.520] Of course, the scientific method is the way
+[2359.520 --> 2363.480] we have to suss out these false beliefs.
+[2363.480 --> 2365.600] We just keep looking for more evidence
+[2365.600 --> 2367.080] that contradicts our beliefs.
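A minimal sketch of why sensory models are self-correcting while beliefs acquired only through language are not: every movement over the object produces a prediction that the next input either confirms or contradicts. The cup features and the mistaken model are invented for illustration, not a claim about the brain's actual machinery.

```python
# Sensory learning comes with a built-in error check; hearsay does not.

world_cup = {"rim": "round", "side": "curved", "bottom": "flat"}   # what touch reports

your_model = {"rim": "square", "side": "flat", "bottom": "flat"}   # an incorrect model

def touch_and_correct(model, world):
    """Move over the object; every prediction error is noticed and fixed."""
    for part, felt in world.items():
        if model[part] != felt:
            print(f"prediction error at {part}: expected {model[part]}, felt {felt}")
            model[part] = felt   # the senses correct the model on the spot

touch_and_correct(your_model, world_cup)   # errors surface immediately

# A belief learned only through language has no loop like the one above to test it:
hearsay = {"Havana": "exists"}   # consistent and believable, but not directly checkable here
```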
+[2367.080 --> 2369.360] But if we don't do that, and if we're not exposed to it,
+[2369.360 --> 2371.680] well, that's what we're going to believe.
+[2371.680 --> 2375.960] And so those are two ways that false beliefs can arise.
+[2375.960 --> 2377.240] One is we can take information,
+[2377.240 --> 2378.600] arrange it in different reference frames,
+[2378.600 --> 2380.640] and end up with different beliefs about the data.
+[2380.640 --> 2382.280] And the other is that we can form beliefs
+[2382.280 --> 2385.200] based not on direct observation, but on language.
+[2385.200 --> 2387.480] And that's very useful in general,
+[2387.480 --> 2390.600] but it can also lead to false beliefs.
+[2390.600 --> 2397.200] So in summary, here is the basic way
+[2397.200 --> 2397.960] to think about this.
+[2397.960 --> 2400.040] The neocortex learns a model of the world.
+[2400.040 --> 2402.760] That's the first thing you need to know.
+[2402.760 --> 2406.120] We're beginning to understand exactly how this model works,
+[2406.120 --> 2408.440] in detail, how the neurons do this.
+[2408.440 --> 2409.600] It's a distributed model.
+[2409.600 --> 2411.080] So it's not a single model of the world.
+[2411.080 --> 2415.000] We have thousands and thousands of submodels for everything.
+[2415.000 --> 2417.000] And those models are built on reference frames
+[2417.000 --> 2418.080] in every cortical column.
+[2418.080 --> 2421.640] And that's how knowledge is structured, in reference frames.
+[2421.640 --> 2423.280] The brain's model is our reality.
+[2423.280 --> 2426.320] It's what we perceive and what we believe.
+[2426.320 --> 2430.000] And if it's wrong, what we believe is wrong.
+[2430.000 --> 2431.920] If it's right, what we believe is right.
+[2431.920 --> 2435.360] It turns out we can't really sense the entire world.
+[2435.360 --> 2437.200] We can't know what the world really is like.
+[2437.200 --> 2438.960] The world is much larger than what we can sense.
+[2438.960 --> 2440.240] We can only sense small parts of it.
+[2440.240 --> 2442.960] So in some sense, our model of the world
+[2442.960 --> 2446.560] is always an approximation of the real world,
+[2446.560 --> 2447.280] which is OK.
+[2447.280 --> 2448.320] It's usually pretty useful.
+[2448.320 --> 2450.080] But it can be wrong.
+[2450.080 --> 2452.080] And so our models, and our beliefs, can be wrong.
+[2452.080 --> 2453.800] And I've talked about two ways that can happen.
+[2453.800 --> 2456.840] You can arrange the same facts in different reference frames,
+[2456.840 --> 2459.640] which can lead to different beliefs about those facts.
+[2459.640 --> 2461.840] And you can rely on facts you get through language,
+[2461.840 --> 2463.800] not direct observation.
+[2463.800 --> 2466.400] And then, basically, we can build a model of the world
+[2466.400 --> 2468.600] that doesn't actually reflect the real world,
+[2468.600 --> 2470.720] but we can believe it anyway.
+[2470.720 --> 2471.640] So that's it.
+[2471.640 --> 2475.600] And here again, a plug for my book, let's see here.
+[2475.600 --> 2478.560] Do I have my slide?
+[2478.560 --> 2479.560] Oops, here we go.
+[2479.560 --> 2480.800] Plug for my book.
+[2480.800 --> 2483.560] I'd like to say that I don't really care about selling books,
+[2483.560 --> 2485.680] but I really care about selling these ideas.
+[2485.680 --> 2487.640] I want everybody to know this stuff.
+[2487.640 --> 2489.880] I think everyone should know these things.
+[2489.880 --> 2492.880] In the book, I argue that we should be teaching this kind
+[2492.880 --> 2495.280] of brain theory to kids in high school,
+[2495.280 --> 2498.440] in the same way we teach them about DNA and evolution,
+[2498.440 --> 2500.400] because I think it's important we all understand
+[2500.400 --> 2503.280] how it is we form beliefs about the world.
+[2503.280 --> 2505.240] The book covers a lot more than I talked about today.
+[2505.240 --> 2507.560] It has a whole section on AI and a whole section
+[2507.560 --> 2510.760] about the future of humanity, how to think about humanity.
+[2510.760 --> 2513.080] When you think about the brain as building a model of the world,
+[2513.080 --> 2516.160] and about how our knowledge is stored, you
+[2516.160 --> 2519.920] might think about the future in a way that's a little different
+[2519.920 --> 2522.440] than you might have thought about it in the past.
+[2522.440 --> 2523.520] So I'll leave it at that.
+[2523.520 --> 2524.800] And that's the end of my talk.
+[2524.800 --> 2528.120] And I think we're going to do Q&A now for those
+[2528.120 --> 2529.400] who'd like to do Q&A.
+[2529.400 --> 2532.800] I'm going to stop sharing my slides here.
+[2532.800 --> 2535.440] And we're back to looking at the beautiful Leighann.
+[2535.440 --> 2536.680] Oh, thank you.
+[2536.680 --> 2537.200] Thank you.
+[2537.200 --> 2541.720] Yes, we most certainly are going to do Q&A right now.
+[2541.720 --> 2543.200] But you said a couple things.
+[2543.200 --> 2545.680] Thank you for sharing that opening story
+[2545.680 --> 2546.720] about Richard Dawkins.
+[2546.720 --> 2550.080] One of the lessons that people should take away
+[2550.080 --> 2553.360] today is that if you don't ask, you don't get.
+[2553.360 --> 2556.360] You were brave enough to go and ask and get that.
+[2556.360 --> 2557.120] So yeah.
+[2557.120 --> 2557.960] I was nervous.
+[2557.960 --> 2559.000] I was nervous.
+[2559.000 --> 2560.000] Yeah.
+[2560.000 --> 2561.160] You were nervous.
+[2561.160 --> 2562.760] Oh my gosh, you of all people.
+[2562.760 --> 2566.320] That should inspire you guys.
+[2566.320 --> 2567.760] Shoot your shot.
+[2567.760 --> 2571.160] If you are a Hamilton fan, I love the example
+[2571.160 --> 2574.200] that you gave, that Havana exists but you've never been there,
+[2574.200 --> 2577.560] and how some people feel the same way about heaven.
+[2577.560 --> 2578.200] Me, I'm in the middle.
+[2578.200 --> 2580.120] I feel the same way about Hogwarts,
+[2580.120 --> 2583.680] though I have not been invited.
+[2583.680 --> 2585.280] I want to believe Hogwarts is real.
+[2585.280 --> 2585.760] I do.
+[2585.760 --> 2586.800] I want you to.
+[2586.800 --> 2590.000] I've been waiting for my owl now for a few years.
+[2590.000 --> 2593.480] And I hope the sorting hat puts me in Ravenclaw for sure.
+[2593.480 --> 2597.840] And I will say, the scariest thing I heard you say today:
+[2597.840 --> 2599.160] other animals have a neocortex.
+[2599.160 --> 2600.680] My cat has a neocortex.
+[2600.680 --> 2602.440] My cat's been telling me that this whole time,
+[2602.440 --> 2604.200] and I have not been believing him.
+[2604.200 --> 2606.960] So you have proved my cat correct.
+[2606.960 --> 2608.600] But anyway, thank you so much.
+[2608.600 --> 2610.000] And we do have several questions.
+[2610.000 --> 2612.600] But I have a few questions for you too.
+[2612.600 --> 2616.840] I want to ask: if all the areas of the brain
+[2616.840 --> 2620.440] are the same, I started to picture them like Legos,
+[2620.440 --> 2622.520] because Legos are more or less the same.
+[2622.520 --> 2625.160] Maybe I'm wrong there, I don't know.
+[2625.160 --> 2627.760] But are they interchangeable?
+[2627.760 --> 2631.040] Can they be used to repair a damaged section?
+[2631.040 --> 2632.800] Yes, they can.
+[2632.800 --> 2634.640] And they do.
+[2634.640 --> 2636.480] So first of all, just take someone,
+[2636.480 --> 2640.000] take people who are born blind, congenitally blind.
+[2640.000 --> 2643.200] The part of their cortex that does vision
+[2643.200 --> 2645.200] ends up doing something else.
+[2645.200 --> 2647.680] It says, OK, we're going to do touch.
+[2647.680 --> 2648.960] We're going to do a heightened sense of touch.
+[2648.960 --> 2650.560] We're going to do some other things.
+[2650.560 --> 2652.120] So that's one example.
+[2652.120 --> 2655.840] Another example: if you have damage or trauma,
+[2655.840 --> 2659.080] and there are some famous examples of this,
+[2659.080 --> 2661.680] let's say you have a stroke and part of your cortex,
+[2661.680 --> 2662.920] a small part of your cortex, dies.
+[2662.920 --> 2667.080] Not a big part, but a small part.
+[2667.080 --> 2670.000] Well, immediately you'll have some loss of function.
+[2670.000 --> 2673.040] And it's well known that over the next four months or so,
+[2673.040 --> 2675.920] you'll gain a lot of that function back.
+[2675.920 --> 2678.840] And what's going on, literally, is that the dead section is
+[2678.840 --> 2682.360] not becoming rejuvenated.
+[2682.360 --> 2685.440] What's happening is that the columns around it
+[2685.440 --> 2688.680] are taking over, and they're reassigning themselves
+[2688.680 --> 2690.320] to the function that was lost.
+[2690.320 --> 2692.600] And so they literally can say, well, I
+[2692.600 --> 2693.880] can take over that part,
+[2693.880 --> 2696.720] I can take over this, and there's sort of a competition going on.
+[2696.720 --> 2699.000] And then, finally, there was an interesting experiment
+[2699.000 --> 2703.360] that someone did, a researcher named Mriganka Sur.
+[2703.360 --> 2706.960] They took ferrets when they were in utero,
+[2706.960 --> 2708.240] before they were born,
+[2708.240 --> 2714.160] and they rewired the optic nerve from the eye
+[2714.160 --> 2715.480] into a part of the brain that, I think,
+[2715.480 --> 2718.520] did hearing or touch or something like that.
+[2718.520 --> 2720.880] And the animal grew up, and literally
+[2720.880 --> 2722.960] those parts of the brain took over the different function.
+[2722.960 --> 2724.920] So that proves Mountcastle's proposal.
+[2724.920 --> 2727.840] You can do anything.
+[2727.840 --> 2729.240] It's plug and play in some sense.
+[2729.240 --> 2730.920] It's not exactly like that.
+[2730.920 --> 2732.040] There are differences.
+[2732.040 --> 2734.600] There are reasons you wouldn't want to do this.
+[2734.600 --> 2737.360] It's not like, oh, you're just a bunch of Legos.
+[2737.360 --> 2739.320] But the basic idea is right.
+[2739.320 --> 2740.040] Thank you.
+[2740.040 --> 2740.520] Yeah.
+[2740.520 --> 2742.760] Actually, a couple of you, Mary Thrupp among them, sort of
+[2742.760 --> 2745.280] had the same question that you just answered,
+[2745.280 --> 2747.320] about what happens to the regions of the neocortex
+[2747.320 --> 2749.640] that are supposed to be set aside for vision and hearing
+[2749.640 --> 2751.320] if the person is born blind and deaf.
+[2751.320 --> 2753.160] So you're ahead of the curve.
+[2753.160 --> 2756.080] Because I was also wondering, you know,
+[2756.080 --> 2758.280] when you did the cup demonstration,
+[2758.280 --> 2761.800] is that how Helen Keller was able to learn?
+[2761.800 --> 2763.840] Well, what's interesting about Helen Keller,
+[2763.840 --> 2764.680] and fascinating, right,
+[2764.680 --> 2766.360] is that she was deaf and blind, right?
+[2766.360 --> 2768.840] That was a really profound deficit.
+[2768.840 --> 2771.920] She could only learn about the world through touch.
+[2771.920 --> 2773.840] And I mean touch, smell, and taste,
+[2773.840 --> 2775.680] but as humans we don't really rely on smell and taste
+[2775.680 --> 2776.880] very much.
+[2776.880 --> 2779.880] So mostly those are just for dating.
+[2779.880 --> 2783.120] And maybe you guys didn't know this.
+[2783.960 --> 2788.640] But she learned a model of the world just like
+[2788.640 --> 2790.240] your and my model of the world.
+[2790.240 --> 2792.200] Yes, she wouldn't know what color was.
+[2792.200 --> 2793.680] But she could walk around the world,
+[2793.680 --> 2795.920] she could speak, she gave lectures all around the world.
+[2795.920 --> 2799.800] She knew how things in the world work, just like you and I do.
+[2799.800 --> 2801.640] She knew what coffee cups were.
+[2801.640 --> 2804.760] And so it shows you that we can learn a model of the world
+[2804.760 --> 2806.440] through different senses,
+[2806.440 --> 2808.520] but we end up with the same kind of model.
+[2808.520 --> 2811.720] You know, because the model is what the world is to you.
+[2811.720 --> 2813.400] And so yes, you can learn a model through touch.
+[2813.400 --> 2814.920] You can learn a model through sight.
+[2814.920 --> 2817.840] And yet, learning through sight, I won't be able
+[2817.840 --> 2821.200] to detect, let's say, temperature or texture.
+[2821.200 --> 2824.120] And learning through touch, I can't learn colors.
+[2824.120 --> 2826.320] But other than that, we can learn the same models.
+[2826.320 --> 2828.680] It's an amazingly flexible system.
+[2828.680 --> 2829.480] Yeah.
+[2829.480 --> 2831.840] And I really hadn't thought of it that way.
+[2831.840 --> 2834.800] Which is why I'm here. I have the best job.
+[2834.800 --> 2837.800] Carolyn wants to know, how do the columns communicate
+[2837.800 --> 2838.640] with each other?
+[2838.640 --> 2839.440] Did you touch on that?
+[2839.440 --> 2840.200] Did I miss that?
+[2840.200 --> 2843.560] Yeah. Well, I talked about the voting, remember.
+[2843.560 --> 2846.360] Well, it's more complex than that, too.
+[2846.360 --> 2848.240] Columns send information to each other.
+[2848.240 --> 2851.200] So there are cells in each column that
+[2851.200 --> 2852.280] get input,
+[2852.280 --> 2854.440] and there are cells in each column that generate output.
+[2854.440 --> 2857.200] And so columns will send their output to other columns,
+[2857.200 --> 2858.400] and they'll get input from other columns.
+[2858.400 --> 2860.880] It's a very complicated system.
+[2860.880 --> 2863.040] I didn't really talk about that much.
+[2863.040 --> 2865.240] But you can just imagine them being wired together
+[2865.240 --> 2867.200] in a sort of a chain.
+[2867.200 --> 2868.400] So there are two types of connections.
+[2868.400 --> 2871.680] Like, it goes to region one, region two, region three.
+[2871.680 --> 2872.640] That's one type of connection.
+[2872.640 --> 2876.080] Those are what you might call the feedforward connections
+[2876.080 --> 2877.680] and the feedback connections.
+[2877.680 --> 2879.000] And then the columns are also connected
+[2879.000 --> 2880.120] through these voting layers.
+[2880.120 --> 2881.600] And that's what I talked about.
+[2881.600 --> 2883.840] So there are a lot of connections going back and forth.
+[2883.840 --> 2887.480] But still, the vast majority of connections
+[2887.480 --> 2891.120] in the brain, in the cortex, are within each column.
+[2891.120 --> 2894.480] The number of ones that go long distance
+[2894.480 --> 2896.160] to other places is still high, but it's
+[2896.160 --> 2897.600] a much smaller number.
+[2897.600 --> 2899.400] OK.
+[2899.400 --> 2903.640] Fred wanted to know, are frames stored in one column
+[2903.640 --> 2905.280] or multiple, or elsewhere?
+[2905.280 --> 2908.240] Because I think that the knowledge of the cup is everywhere.
+[2908.240 --> 2909.240] Yeah.
+[2909.240 --> 2913.200] Every column is creating its own frames.
+[2913.200 --> 2913.880] Oh, OK.
+[2913.880 --> 2915.680] Really, every column.
+[2915.680 --> 2918.440] And it's a very interesting way they do it.
+[2918.440 --> 2920.360] We actually know a lot about how they do this.
+[2920.360 --> 2923.200] But every column has its own set of reference
+[2923.200 --> 2924.040] frames.
+[2924.040 --> 2927.240] And the next column over has a different set of reference
+[2927.240 --> 2927.880] frames.
+[2927.880 --> 2930.200] They can coordinate.
+[2930.200 --> 2932.200] But basically, they create them independently.
+[2932.200 --> 2935.080] For those who are neuroscience geeks,
+[2935.080 --> 2936.920] there's a type of cell in the old brain
+[2936.920 --> 2939.880] called grid cells, grid cells
+[2939.880 --> 2940.400] and place cells.
+[2940.400 --> 2942.640] People got Nobel Prizes for discovering these.
+[2942.640 --> 2944.400] They're not in the neocortex.
+[2944.400 --> 2947.240] We know these cells create reference
+[2947.240 --> 2947.520] frames.
+[2947.520 --> 2948.880] We know this.
+[2948.880 --> 2951.000] And we've speculated that equivalent cells
+[2951.000 --> 2952.520] exist in the neocortex.
+[2952.800 --> 2955.280] So we know a lot about the actual neural mechanisms.
+[2955.280 --> 2957.800] Now, we speculated this five years ago.
+[2957.800 --> 2959.000] But now there's a lot of evidence
+[2959.000 --> 2961.440] that this is true; people are finding these grid cells, which
+[2961.440 --> 2964.840] are like reference-frame cells, throughout the neocortex.
+[2964.840 --> 2968.960] So that prediction is being verified right now.
+[2968.960 --> 2969.360] OK.
+[2969.360 --> 2970.280] Thank you.
+[2970.280 --> 2973.080] Larry wants to know, does our brain
+[2973.080 --> 2975.720] try to fill in the gaps in knowledge
+[2975.720 --> 2978.600] to form a more coherent theory or frame?
+[2978.600 --> 2979.640] Oh, absolutely.
+[2979.800 --> 2983.720] It's well known, for example.
+[2983.720 --> 2987.840] Everyone knows about the blind spot.
+[2987.840 --> 2988.760] You've done that trick.
+[2988.760 --> 2990.280] Like, there's a blind spot where your
+[2990.280 --> 2991.320] optic nerve leaves the eye.
+[2991.320 --> 2994.040] And so if you look at these two dots in front of you,
+[2994.040 --> 2996.080] you look at one dot, and the other dot disappears.
+[2996.080 --> 2998.560] Anyway, there are holes in your retina.
+[2998.560 --> 3000.720] Your retina is not like a camera.
+[3000.720 --> 3003.720] Your retina has these blood vessels going through it
+[3003.720 --> 3005.680] and all these holes in it.
+[3005.680 --> 3006.640] It's a real mess.
+[3006.640 --> 3008.400] And you're not aware of any of this, right?
+[3008.400 --> 3012.120] And the reason is because the filling in, if you will,
+[3012.120 --> 3013.280] is that voting layer, right?
+[3013.280 --> 3015.880] The vote, they all say, yeah, we're looking at a cat.
+[3015.880 --> 3020.960] And so you're not aware of it,
+[3020.960 --> 3022.480] because it goes back to what I said about sensing.
+[3022.480 --> 3023.360] Remember, I said you're not really
+[3023.360 --> 3025.440] perceiving the world, you're perceiving your model, right?
+[3025.440 --> 3025.640] Yes.
+[3025.640 --> 3027.560] Because what you're getting from your senses
+[3027.560 --> 3029.440] is this mess that's got holes and things
+[3029.440 --> 3030.680] missing all over the place.
+[3030.680 --> 3033.280] It's like when I touched the cup,
+[3033.280 --> 3034.680] I'm not touching all of the cup, right?
+[3034.680 --> 3035.720] I'm just touching a few places.
+[3035.720 --> 3038.200] But I perceive that the entire cup is there.
+[3038.200 --> 3040.000] And the same thing's going on with your vision.
+[3040.000 --> 3043.920] So yes, though it's less that it's filling in,
+[3043.920 --> 3045.480] and more that
+[3045.480 --> 3049.800] the model says the cup is solid.
+[3049.800 --> 3053.000] And therefore, that's what you perceive,
+[3053.000 --> 3054.800] when in reality the input coming into your brain
+[3054.800 --> 3057.240] is full of holes and noise; it's messy.
+[3057.240 --> 3058.880] But you're perceiving the model.
+[3058.880 --> 3061.280] You're not actually perceiving the real thing.
+[3061.280 --> 3063.680] I know that sounds weird, but it's true.
+[3063.680 --> 3066.240] No, listen, I'm here for the weird.
+[3066.240 --> 3067.600] I am.
+[3067.600 --> 3070.160] John Miniger, I hope I'm getting that right, John.
+[3070.160 --> 3073.200] How does consciousness fit into your models?
+[3073.200 --> 3076.080] Yeah, I wrote a chapter about this in the book.
+[3076.080 --> 3079.600] And I know consciousness is such a controversial topic
+[3079.600 --> 3083.600] that I snuck it into the machine intelligence section.
+[3083.600 --> 3085.920] I stuck it in with the Skynet stuff.
+[3085.920 --> 3088.400] I just asked myself, can a machine be conscious?
+[3088.400 --> 3091.760] And that somewhat defuses
+[3091.760 --> 3093.800] all these emotional arguments about consciousness.
+[3093.800 --> 3095.200] Because let's talk about a machine.
+[3095.200 --> 3097.040] Could it be conscious?
+[3097.040 --> 3099.680] And the problem with consciousness is there isn't really
+[3099.680 --> 3101.200] a good definition for it.
+[3101.200 --> 3103.960] And lots of people's thinking about it is all over the map.
+[3103.960 --> 3107.960] But I do address two aspects of it.
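A toy sketch of the filling-in point made a moment ago, namely perceiving the model rather than the raw input: the sensory array below has gaps, the model is checked and updated only where data actually arrives, but what gets read out as the percept is the model itself, which has no holes. The spot names and features are invented for illustration.

```python
# Raw input with holes versus the gap-free model that is actually perceived.

sensor_input = {"spot_1": "fur", "spot_2": None, "spot_3": "whisker", "spot_4": None}
model        = {"spot_1": "fur", "spot_2": "fur", "spot_3": "whisker", "spot_4": "ear"}

# The input only checks and corrects the model where it has data...
for spot, sensed in sensor_input.items():
    if sensed is not None and sensed != model[spot]:
        model[spot] = sensed

# ...but the percept is read out of the model, so it has no holes in it.
percept = model
print(percept)   # {'spot_1': 'fur', 'spot_2': 'fur', 'spot_3': 'whisker', 'spot_4': 'ear'}
```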
+[3107.960 --> 3110.080] And it's too much to answer here in the Q&A.
+[3110.080 --> 3113.480] But I talk about why, our theory does explain why,
+[3113.480 --> 3114.840] you have a sense of self.
+[3114.840 --> 3117.840] Why do you feel like you're present in the world?
+[3117.840 --> 3121.600] Why am I, how come, aren't I just a machine?
+[3121.600 --> 3124.680] Why do I feel like I'm not just a machine?
+[3124.680 --> 3127.320] And then the other thing, and I talked about this briefly,
+[3127.320 --> 3130.600] is, why do we perceive things, this thing called qualia?
+[3130.600 --> 3132.640] Why do I perceive something to be green?
+[3132.640 --> 3134.800] Green is not really a thing in the world.
+[3134.800 --> 3136.840] There's a light frequency, but that's not green.
+[3136.840 --> 3139.120] Why does it feel green to me?
+[3139.120 --> 3141.000] And you can explain these two things.
+[3141.000 --> 3145.080] At least I showed how one might explain them.
+[3145.080 --> 3147.680] And so yes, the theory does tell you about that.
+[3147.680 --> 3151.240] It's part of the model again.
+[3151.240 --> 3153.880] Green is part of the model of the world,
+[3153.880 --> 3155.480] and that's why it looks the way it does.
+[3155.480 --> 3156.920] So it's too much for today.
+[3156.920 --> 3159.080] But yes, I do get into it.
+[3159.080 --> 3161.680] And in the words of the immortal Kermit the Frog,
+[3161.680 --> 3164.080] it is not easy being green.
+[3164.080 --> 3167.000] For sure.
+[3167.000 --> 3170.920] I don't know if this gets too airy-fairy or woo,
+[3170.920 --> 3175.320] but are you making a distinction between the mind and the brain?
+[3175.320 --> 3177.120] No, it's all one.
+[3177.120 --> 3178.120] OK.
+[3178.120 --> 3184.600] So I think the CFI crowd should hopefully be unanimous on this.
+[3184.600 --> 3187.760] It's in our everyday language.
+[3187.760 --> 3191.560] All of us can't avoid saying things like, oh, I thought this today,
+[3191.560 --> 3193.320] as if the mind were something else.
+[3193.320 --> 3195.360] But in reality, of course, it's not like that.
+[3195.360 --> 3198.040] The brain, every thought you've ever had,
+[3198.040 --> 3198.800] is a brain state.
+[3198.800 --> 3200.040] It's neurons being active.
+[3200.040 --> 3202.000] Neurons being active are the brain states.
+[3202.000 --> 3203.920] I mean, your neurons are active when
+[3203.920 --> 3204.440] you're thinking.
+[3204.440 --> 3205.440] There is no separation.
+[3205.440 --> 3207.480] There's no dualism.
+[3207.480 --> 3208.480] It's all one thing.
+[3208.480 --> 3210.240] And there's nothing wrong with that.
+[3210.240 --> 3211.920] That's pretty cool.
+[3211.920 --> 3213.640] Some people are disappointed by that.
+[3213.640 --> 3214.640] Yeah, no.
+[3217.080 --> 3218.120] Gary wants to know,
+[3218.120 --> 3220.680] and this is, again, in that same category,
+[3220.680 --> 3224.720] any idea about dreams, about how they are created?
+[3224.720 --> 3225.240] Yeah, it's a great question.
+[3225.240 --> 3227.720] Aside from drinking vodka way too late,
+[3227.720 --> 3229.920] but that might just be a very personal experience.
+[3233.760 --> 3234.280] OK.
+[3234.280 --> 3236.840] So look, there's a lot of research on dreams.
+[3236.840 --> 3238.960] Our theory doesn't really say much about it.
+[3238.960 --> 3242.320] And so I should just leave it at that.
+[3242.320 --> 3244.120] Clearly, humans need to dream.
+[3244.120 --> 3244.880] It's important.
+[3244.880 --> 3247.680] It performs a biological function.
+[3247.680 --> 3248.840] There are some theories about that.
+[3248.840 --> 3250.600] The best theory I've heard is that we need to dream
+[3250.600 --> 3254.240] because it's actually a cleaning, a junk-cleaning process.
+[3254.240 --> 3255.960] It removes these chemicals in your brain.
+[3255.960 --> 3257.600] I think that's a pretty cool idea.
+[3257.600 --> 3260.240] But our theory itself doesn't say anything about that.
+[3260.240 --> 3262.240] It doesn't need to. It's more like, hey,
+[3262.240 --> 3265.400] how does a normal, healthy, awake brain work?
+[3265.680 --> 3267.960] We don't really talk too much about disease.
+[3267.960 --> 3270.800] We don't talk about things like dreaming,
+[3270.800 --> 3274.160] or why you have to eat certain foods, and things like that.
+[3274.160 --> 3275.640] It's more like, how does the information processing work?
+[3275.640 --> 3278.760] So our theory doesn't really say much about dreams.
+[3278.760 --> 3279.400] OK.
+[3279.400 --> 3280.760] Here it is.
+[3280.760 --> 3284.440] Huat Rose wants to know, and let me give
+[3284.440 --> 3287.520] full disclosure, this question is way above my pay
+[3287.520 --> 3287.840] grade.
+[3287.840 --> 3289.400] I don't even know what I'm saying here.
+[3289.400 --> 3292.280] How do you match a Bayesian model of knowledge
+[3292.280 --> 3294.840] with your highly distributed model of knowledge?
+[3295.520 --> 3297.680] And those are English words.
+[3297.680 --> 3299.040] Well, I don't know what a Bayesian model is.
+[3299.040 --> 3301.960] Bayesian, I think, is supposed to be capitalized.
+[3301.960 --> 3302.720] Yes, it is.
+[3302.720 --> 3306.440] So you can be excused for not knowing it.
+[3306.440 --> 3308.720] All right, that is a very technical question.
+[3308.720 --> 3313.360] Bayesian is named after the famous person, Bayes.
+[3313.360 --> 3316.560] It's a type of probabilistic framework.
+[3316.560 --> 3322.440] And some people think about the brain in probabilistic terms.
+[3322.440 --> 3327.520] Our theory is not averse to Bayesian ideas,
+[3327.520 --> 3330.360] but it's not really a Bayesian theory.
+[3330.360 --> 3334.600] And the way the brain represents uncertainty
+[3334.600 --> 3336.800] is not through probabilities.
+[3336.800 --> 3339.000] It's a technical topic, and I'm not
+[3339.000 --> 3341.960] going to talk about it further, but we don't believe
+[3341.960 --> 3345.240] that that's the right framework, although
+[3345.240 --> 3348.640] there are lots of things in cognition
+[3348.640 --> 3350.840] you can think of in terms of probabilities.
+[3350.840 --> 3353.840] But that's not really the right framework for our model.
+[3353.840 --> 3356.040] OK.
+[3356.040 --> 3360.040] Aina wants to know, and first of all, she said,
+[3360.040 --> 3361.840] thank you, this is fascinating.
+[3361.840 --> 3366.040] Can you please talk a little bit about AI and the human brain,
+[3366.040 --> 3369.160] and how AI acquires models from humans?
+[3369.160 --> 3370.240] Yeah, yeah.
+[3370.240 --> 3372.680] OK, well, first of all, there's a whole second section
+[3372.680 --> 3374.120] of the book; the book is in three sections,
+[3374.120 --> 3375.120] and there's a whole second section
+[3375.120 --> 3376.160] that's all about AI.
+[3376.160 --> 3377.160] OK.
+[3377.160 --> 3379.160] And I take some very unusual, I don't
+[3379.160 --> 3381.040] want to say controversial, but I would say I
+[3381.040 --> 3386.120] take some uncommon views about AI.
+[3386.120 --> 3389.640] And so the basic premise I have is that today's AI
+[3389.640 --> 3391.000] works on very, very different principles.
+[3391.000 --> 3392.840] It's nothing like what I just talked about.
+[3392.840 --> 3395.480] And I, and most AI researchers,
+[3395.480 --> 3397.600] don't think that today's AI is smart.
+[3397.600 --> 3398.880] It's not really intelligent.
+[3398.880 --> 3400.000] These systems are clever.
+[3400.000 --> 3401.480] They're really good pattern
+[3401.480 --> 3403.400] classifiers and so on, but they're not really
+[3403.400 --> 3405.880] smart, the way humans and animals are.
+[3405.880 --> 3410.320] But the brain represents a road map
+[3410.320 --> 3413.200] for how to build truly intelligent machines.
+[3413.200 --> 3415.440] Now, we can debate whether we should do that or not.
+[3415.440 --> 3418.440] But one of the key messages
+[3418.440 --> 3422.040] I want to get across is that when we build
+[3422.040 --> 3424.360] truly intelligent machines, and just, you know,
+[3424.360 --> 3426.160] disclaimer, I think we're going to be doing this
+[3426.160 --> 3428.920] big time in the latter part of this century,
+[3428.920 --> 3431.960] so we're going to have these things, it's going to be like crazy,
+[3431.960 --> 3435.280] but when we build these intelligent machines,
+[3435.280 --> 3436.760] we don't want to build the entire brain.
+[3436.760 --> 3438.360] We don't want to build the old parts of the human brain.
+[3438.360 --> 3439.840] We don't want them to be sitting around going,
+[3439.840 --> 3442.440] oh, I don't like that sound, you know, I'm lazy,
+[3442.440 --> 3443.440] I don't even know.
+[3443.440 --> 3447.800] So I think that was a scene out of an outtake of I, Robot.
+[3447.800 --> 3448.320] Yes.
+[3448.320 --> 3450.200] Yeah, maybe, I don't know.
+[3450.200 --> 3452.520] So you can build an intelligent machine
+[3452.520 --> 3454.360] by replicating the neocortex.
+[3454.360 --> 3457.160] And on its own, the neocortex,
+[3457.160 --> 3459.120] it can be smart, but it has no motivations.
+[3459.120 --> 3460.960] It's not like it's going to say, oh, human,
+[3460.960 --> 3461.640] you've created me,
+[3461.640 --> 3463.520] now I'm going to take over because I'm tired of being,
+[3463.520 --> 3464.720] you know, your slave.
+[3464.720 --> 3466.640] It's not going to do that.
+[3466.640 --> 3470.520] You have to give these intelligent machines some motivations
+[3470.520 --> 3473.640] or some drives, some things that they have to do.
+[3473.640 --> 3476.480] But they aren't going to be like us at all,
+[3476.480 --> 3478.320] unless we went out of our way to make them that way.
+[3478.320 --> 3479.880] And so, a lot of people
+[3479.880 --> 3482.640] think that AI is an existential threat to humanity.
+[3482.640 --> 3484.680] Like, we're going to create these superintelligent machines
+[3484.680 --> 3486.760] and they're going to kill us or enslave us and things
+[3486.760 --> 3487.520] like that.
+[3487.520 --> 3488.640] I don't believe that's true at all.
+[3488.640 --> 3490.440] And I walk through the arguments about that.
+[3490.440 --> 3492.440] So the whole section goes through all these issues
+[3492.440 --> 3494.200] about intelligent machines, what they'll
+[3494.200 --> 3496.960] be like, what they won't be like,
+[3496.960 --> 3499.720] and why today's AI is not. I have a chapter called "Why
+[3499.720 --> 3501.360] There Is No I in AI,"
+[3501.360 --> 3504.720] because today's AI is not intelligent.
+[3504.720 --> 3505.240] Got it.
+[3505.240 --> 3505.480] Got it.
+[3505.480 --> 3506.680] No, that is comforting.
+[3506.680 --> 3508.920] And you know where that's coming from.
+[3508.920 --> 3511.720] There's a bit of a Venn diagram and a crossover
+[3511.720 --> 3513.720] of folks who are here and folks who have watched way
+[3513.720 --> 3516.240] too many sci-fi movies.
+[3516.240 --> 3519.640] And we've been primed for how our phone is going
+[3519.640 --> 3520.240] to take us over.
+[3520.240 --> 3522.280] Yeah, but there are a lot of smart people out there
+[3522.280 --> 3525.000] claiming that the world is going to be overrun
+[3525.000 --> 3526.800] by intelligent robots, intelligent machines.
+[3526.800 --> 3528.800] I mean, a lot of people are saying this now,
+[3528.800 --> 3530.040] if you know what I mean.
+[3530.040 --> 3531.000] I think they're all wrong.
+[3531.000 --> 3533.920] And I make a very reasoned argument why.
+[3533.920 --> 3537.560] And yet another reason, shameless plug, for why you guys
+[3537.560 --> 3540.080] should get his book.
+[3540.080 --> 3542.800] Thank you so much for spending time with us
+[3542.800 --> 3545.080] and sharing your time and expertise.
+[3545.080 --> 3547.720] And I thank everyone in the audience for watching.
+[3547.720 --> 3550.560] And I'm so sorry I couldn't get to all of your questions.
+[3550.560 --> 3553.920] There is never enough time to do that.
+[3553.920 --> 3556.320] But I got to as many as I could.
+[3556.320 --> 3557.960] And I do want to assure everyone,
+[3557.960 --> 3561.640] if you've missed any of this, the recording
+[3561.640 --> 3565.760] of this event will be available tomorrow at skepticalinquirer.org.
+[3565.760 --> 3568.520] And a reminder, our next guest in this series
+[3568.520 --> 3570.680] will be here on Thursday, April 29th:
+[3570.680 --> 3574.080] Mick West, talking about escaping the rabbit hole,
+[3574.080 --> 3576.720] how to help your conspiracy theorist friend.
+[3576.720 --> 3580.520] I will be here, pen and paper in hand, taking notes.
+[3581.040 --> 3584.880] So my thanks, of course, to Skeptical Inquirer, CFI,
+[3584.880 --> 3587.480] our producer tonight, Mark Krijdler,
+[3587.480 --> 3589.520] and to you, the audience.
+[3589.520 --> 3592.640] And again, to you, Jeff, thank you for making the time
+[3592.640 --> 3593.880] and sharing your expertise.
+[3593.880 --> 3594.400] That was great.
+[3594.400 --> 3594.880] That was great.
+[3594.880 --> 3595.640] Thank you.
+[3595.640 --> 3597.080] And everybody, you know me.
+[3597.080 --> 3598.960] My name is Leighann Lord.
+[3598.960 --> 3599.840] Thank you.
+[3599.840 --> 3601.800] And good night.
+[3601.800 --> 3603.120] Good night, Jeff.
+[3603.120 --> 3604.360] Good night.