[speaker001:] Ah, so comfortable. [speaker006:] Smooth. [speaker001:] Mm hmm. Good. I know that he's going to like, Taiwan and other places to eat. So. [speaker004:] On? Am I on? I think I'm on? [speaker001:] Yep. Yep. [speaker002:] Yeah. [speaker004:] Good. Good. [speaker002:] Actually [disfmarker] [speaker001:] Bye. [speaker006:] I just had one of the most frustrating meetings of my career. [speaker004:] You a [speaker001:] It's definitely not the most frustrating meeting I've ever had. [speaker004:] You're [disfmarker] you remember you're being recorded at this point. [speaker006:] Yeah. [speaker005:] Yeah. [speaker001:] Oh, yeah, so, w we didn't yet specify with whom. [speaker005:] Right. [speaker001:] But um. [speaker005:] Uh, right. Uh. [speaker001:] So that's why Keith and I are going to be a little dazed for the first half m the meeting. [speaker006:] Huh. Yeah, I'm just gonna sit here and [speaker005:] Right. Yeah, I [disfmarker] I [disfmarker] I avoided that as long as I could for you guys, [speaker006:] growl. Yeah. [speaker005:] but, uh [disfmarker] [speaker001:] Mm hmm. [speaker006:] For which we thank you, by the way. [speaker001:] Are very appreciative, yeah. [speaker005:] Right. [speaker006:] I know you were [disfmarker] you were doing that, but, anyway. [speaker004:] Oh yeah, how di how d exactly did, uh, that paper lead to anti-lock brakes? [speaker006:] Oh, I could tell you had a rough day, man! [speaker004:] Nah. [speaker001:] What? [speaker004:] I love that story. [speaker006:] Yeah, it's a great story. [speaker003:] OK. [speaker006:] Oh my goodness. [speaker003:] Oh yeah, um, Liz suggested we could start off by uh, doing the digits all at the same time. [speaker001:] What? [speaker004:] All at the same time. I don't know if [disfmarker] I would get distracted and confused, probably. [speaker005:] e [speaker001:] Really? Do we have to like, synchronize? [speaker005:] Well, I think you're supposed to [disfmarker] [speaker006:] Are you being silly? [speaker005:] OK. We can do this. [speaker004:] Oh wait do we have t [speaker005:] Everybody's got different digits, right? [speaker006:] Uh. [speaker003:] Yep. [speaker004:] Yeah, do we have to time them at the same time or just overlapping [disfmarker] [speaker001:] You're kidding. [speaker005:] No. [speaker003:] No, no, just [disfmarker] just start whenever you want. [speaker005:] e yeah, the [speaker001:] And any rate? [speaker006:] Alright. [speaker005:] Well, they [disfmarker] they have s they have the close talking microphones for each of us, [speaker001:] Yeah, that's true. [speaker005:] so [disfmarker] [speaker006:] Alright. [speaker005:] yeah, there's separate channels. [speaker003:] Yeah. [speaker001:] OK. [speaker005:] So when I say [speaker006:] Just plug one ear. [speaker003:] Yeah. [speaker005:] OK. [speaker001:] You lose. [speaker006:] OK, bye! That was a great meeting! [speaker005:] Right. [speaker006:] So [vocalsound] Now, uh, why? [speaker004:] Alright. [speaker003:] Just to save time. [speaker006:] OK. [speaker003:] Doesn't matter for them. [speaker001:] Are we gonna start all our meetings out that way from now on? [speaker005:] No. [speaker001:] Oh. Too bad. I kinda like it. [speaker006:] Well, could we? [speaker004:] It's strangely satisfying. [speaker001:] Yeah. It's a ritual. [speaker004:] Are we to r Just to make sure I know what's going on, we're talking about Robert's thesis proposal today? Is that [speaker003:] We could. [speaker004:] true? [speaker001:] We are? [speaker003:] We might.
[speaker001:] OK. [speaker005:] Well, you [disfmarker] you had s you said there were two things that you might wanna do. [speaker004:] Is [disfmarker] [speaker005:] One was rehearse your i i talk [disfmarker] [speaker004:] Oh yes, and that too. [speaker003:] Not [disfmarker] not rehearse, I mean, I have just not spent any time on it, so I can show you what I've got, get your input on it, and maybe some suggestions, that would be great. And the same is true for the proposal. I will have time to do some revision and some additional stuff on various airplanes and trains. So, um. I don't know how much of a chance you had to actually read it [speaker001:] I haven't looked at it [speaker003:] because [disfmarker] [speaker001:] yet, [speaker003:] but you could always send me comments per electronic mail [speaker001:] but I will. [speaker003:] and they will be incorporated. [speaker001:] OK. [speaker003:] Um, [vocalsound] the [disfmarker] It basically says, well "this is construal", and then it continues to say that one could potentially build a probabilistic relational model that has some general, domain general rules how things are construed, and then the idea is to use ontology, situation, user, and discourse model to instantiate elements in the classes of the probabilistic relational model [pause] to do some inferences in terms of what is being construed as what [speaker001:] Hmm. [speaker003:] in our beloved tourism domain. But, with a focus on [speaker001:] Can I s [speaker006:] I think I need a copy of this, yes. [speaker001:] Sorry. [speaker003:] Hmm? [speaker006:] I is there an extra copy around? [speaker004:] OK, we can [disfmarker] we can [disfmarker] we can pass [disfmarker] pass my, uh [disfmarker] we can pass my extra copy around. [speaker001:] Uh. He sent it. OK. You can keep it. [speaker006:] Alrigh [speaker004:] Er, actually, my only copy, now that I think about it, [speaker001:] OK. [speaker004:] but. I already read half of it, [speaker003:] Um, I don't [disfmarker] I, uh [disfmarker] I don't need it. [speaker004:] so it's OK. [speaker006:] OK. [speaker003:] Um, actually this is the [disfmarker] the newest version after your comments, and [disfmarker] [speaker005:] Yeah, no I s I s I see this has got the castle in it, and stuff like that. [speaker003:] Yeah. [speaker005:] Yep. [speaker004:] Oh, maybe the version I didn't have that I [disfmarker] mine [disfmarker] the w did the one you sent on the email have the [disfmarker] [speaker005:] Yeah. [speaker004:] That was the most recent one? [speaker005:] Uh, yeah, I think so. [speaker004:] OK. [speaker003:] Yep. [speaker004:] Cuz I read halfway but I didn't see a castle thing. [speaker001:] I'm changing this. Just so you know. [speaker003:] Yeah, [speaker001:] But, anyway. [speaker003:] um, if you would have checked your email you may have received a note from Yees asking you to send me the, uh, up to d [speaker001:] Oh. Oh, sorry. OK. Sorry. [speaker003:] current formalism thing that you presented. [speaker001:] OK. I will. OK. OK. OK. [speaker003:] But for this it doesn't matter. But, uh [disfmarker] [speaker001:] We can talk about it later. That's not even ready, so. Um, OK! Go on t to, uh, whatever. [speaker003:] And [disfmarker] [speaker001:] I'm making changes. "Don't worry about that." OK. Mmm mmm. Oh! OK, sorry, go on. [speaker003:] And any type of comment whether it's a spelling or a syntax or [speaker001:] Mm hmm. [speaker006:] There's only one "S" in "interesting". [speaker003:] readability [disfmarker] Hmm? 
[speaker006:] There's only one "S" in "interesting". On page five. [speaker003:] Interesting. [speaker001:] Anyway. And y uh, email any time, but most usefully before [disfmarker] [speaker004:] The twenty first I'm assuming. [speaker001:] The twenty first? [speaker003:] Twenty ninth. [speaker005:] No, this is the twenty first. [speaker006:] That's [disfmarker] [speaker004:] What, today's the twenty first? [speaker006:] Well, better hurry up then! [speaker004:] Oh, man! [speaker003:] The twenty ninth. [speaker001:] Before the twenty ninth, OK. [speaker004:] OK. [speaker003:] That's when I'm meeting with Wolfgang Wahlster to sell him this idea. [speaker001:] Mm hmm. OK. [speaker003:] OK? Then I'm also going to present a little talk at EML, about what we have done here and so of course, I'm [disfmarker] I'm gonna start out with this slide, so the most relevant aspects of our stay here, and um, then I'm asking them to imagine that they're standing somewhere in Heidelberg and someone asks them in the morning [disfmarker] The Cave Forty Five is a [disfmarker] is a well known discotheque which is certainly not open at that [disfmarker] that time. [speaker006:] OK. [speaker003:] And so they're supposed to imagine that, you know, do they think the person wants to go there, or just know where it is? Uh, which is probably not, uh, the case in that discotheque example, or in the Bavaria example, you just want to know where it is. And so forth. So basically we can make a point that here is ontological knowledge but if it's nine [disfmarker] nine PM in the evening then the discotheque question would be, for example, one that might ask for directions instead of just location. Um, [vocalsound] and so forth and so forth. That's sort of motivating it. Then what have we done so far? We had our little bit of, um, um, SmartKom stuff, that we did, um, everth [speaker006:] Oh, you've got the parser done. Sorry. [speaker003:] That's the [disfmarker] not the construction parser. [speaker006:] OK. [speaker003:] That's the, uh, tablet based parser, [speaker001:] Easy parser. [speaker006:] OK. [speaker003:] and the generation outputter. [speaker004:] Halfway done? Yeah. [speaker003:] That's done. [speaker001:] Mmm. [speaker003:] You have to change those strategies, [speaker004:] OK. [speaker003:] right? [speaker004:] Yeah. [speaker003:] That's, ten words? [speaker004:] Well, i it, you know. Maybe twelve. [speaker003:] Twelve? OK. And, um, and Fey is doing the synthesis stuff as we speak. That's all about that. Then I'm going to talk about the data, you know these things about [disfmarker] uh, actually I have an example, probably. Two s Can you hear that? [speaker006:] Mm hmm. [speaker003:] Or should I turn the l volume on. [speaker001:] I could hear it. [speaker004:] I I can hear it. [speaker006:] I heard it. [speaker004:] They might not hear it in the [disfmarker] well maybe they will. I don't know. [speaker001:] This was an actual, um, subject? Ah. [speaker003:] Mm hmm. [speaker006:] Sounds like Fey. [speaker001:] Yeah. [speaker003:] But they're [disfmarker] they're mimicking the synthesis when they speak to the computer, [speaker006:] Oh, OK. [speaker003:] the [disfmarker] you can observe that all the time, [speaker006:] Oh really. [speaker003:] they're trying to match their prosody onto the machine. [speaker006:] Interesting. Oh, it's pretty slow. [speaker003:] Yeah, you have to [disfmarker] [speaker001:] Wh [speaker006:] The system breaking. [speaker001:] What is the s? Oh! [speaker003:] OK. 
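(An illustrative aside on the discotheque point above: the construal of a bare "Where is X?" turns on ontology facts, such as what kind of place X is and when it is open, combined with the situation model, here the time of day. Below is a minimal sketch in Python; the opening hours and the decision rule are made up for illustration and are not anything from the actual system.)

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    kind: str
    opens: int   # opening hour, 24-hour clock
    closes: int  # closing hour; may wrap past midnight

def is_open(e: Entity, hour: int) -> bool:
    # Handle hours that wrap past midnight (e.g. 21:00 to 04:00).
    if e.opens <= e.closes:
        return e.opens <= hour < e.closes
    return hour >= e.opens or hour < e.closes

def construe_where_question(e: Entity, hour: int) -> str:
    """Guess the intent behind 'Where is <e>?' from the situation model."""
    if is_open(e, hour):
        return "route-request"   # plausibly the speaker wants to go there now
    return "location-query"     # closed: probably just wants to know where it is

cave = Entity("Cave 45", "discotheque", opens=21, closes=4)
print(construe_where_question(cave, hour=9))   # location-query (morning)
print(construe_where_question(cave, hour=23))  # route-request (night)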
And so forth and so forth. Um, I will talk about our problems with the rephrasing, and how we solved it, and some preliminary observations, also, um, I'm not gonna put in the figures from Liz, but I thought it would be interesting to, uh, um, point out that it's basically the same. Um, as in every human-human telephone conversation, and the human-computer telephone conversation is of course quite d quite different from, uh, some first, uh, observations. Then sort of feed you back to our original problem cuz, uh [disfmarker] how to get there, what actually is happening there today, and then maybe talk about the big picture here, e tell a little bit [disfmarker] as much as I [pause] can about the NTL story. I [disfmarker] I wa I do wanna, um [disfmarker] I'm not quite sure about this, whether I should put this in, um, that, you know, you have these two sort of different ideas that are [disfmarker] or two different camps of people envisioning how language understanding works, and then, [vocalsound] talk a bit about the embodied and simulation approach favored here and as a prelude, I'll talk about monkeys in Italy. And, um, Srini was gonna send me some slides but he didn't do it, so from [disfmarker] but I have the paper, I can make a resume of that, and then I stole an X schema from one of your talks I think. [speaker001:] Oh. I was like, "where'd you get that?" [speaker006:] Yeah, that looks familiar. [speaker001:] OK. "Looks familiar." [speaker003:] I think that's Bergen, Chang, something, or the other. [speaker001:] Uh. [speaker005:] Whatever. [speaker001:] OK. [speaker003:] Um, and that's [disfmarker] now I'm not going to bring that. So that's basically what I have, so far, and the rest is for airplanes. So X schemas, then, I would like to do [disfmarker] talk about the construction aspect and then at the end about our Bayes net. [speaker001:] Mm hmm. [speaker003:] End of story. Anything I forgot that we should mention? Oh, maybe the FMRI stuff. Should I mention the fact that, um, we're also actually started [disfmarker] going to start to look at people's brains in a more direct way? [speaker005:] You certainly can. I mean I y I you know, I don't know [disfmarker] [speaker001:] You might just wanna like, tack that on, as a comment, to something. [speaker005:] Right, um. [speaker003:] "Future activities" something. [speaker005:] Well, the time to mention it, if you mention it, is when you talk about mirror neurons, then you should talk about the more recent stuff, about the kicking and, you know, the [disfmarker] [speaker003:] Yeah. [speaker005:] yeah, yeah [disfmarker] and [disfmarker] that the plan is to see to what extent the [disfmarker] you'll get the same phenomena with stories about this, so that [disfmarker] [speaker003:] Mm hmm. [speaker005:] and that we're planning to do this, um, which, we are. So that's one thing. Um. Depends. I mean, there is a, um, whole language learning story, OK? [speaker003:] Yeah. [speaker005:] which, uh, actually, i i even on your five layer slide, you [disfmarker] you've got an old one that [disfmarker] that leaves that off. [speaker003:] Yeah, I [disfmarker] I [disfmarker] I do have it here. [speaker001:] Hmm. [speaker005:] Yeah. [speaker003:] Um. And, of course, you know, the [disfmarker] the big picture is this bit. [speaker005:] Right. [speaker003:] But, you know, it would [disfmarker] But I don't think I [disfmarker] I am capable of [disfmarker] of do pulling this off and doing justice to the matter.
I mean, there is interesting stuff in her terms of how language works, so the emergentism story would be nice to be [disfmarker] you know, it would be nice to tell people how [disfmarker] what's happening there, plus how the, uh, language learning stuff works, [speaker005:] OK, so, so anyway, I [disfmarker] I agree that's not central. [speaker003:] but [disfmarker] [speaker005:] What you might wanna do is, [speaker001:] Mm hmm. [speaker005:] um, and may not, but you might wanna [disfmarker] this is [disfmarker] rip off a bunch of the slides on the anal there [disfmarker] the [disfmarker] there [disfmarker] we've got various i generations of slides that show language analysis, and matching to the underlying image schemas, and, um, how the construction and simulation [disfmarker] that ho that whole th [speaker003:] Yeah, th that [disfmarker] that's c that comes up to the X schema slide, [speaker001:] OK, right. [speaker003:] so basically I'm gonna steal that from Nancy, one of Nancy's st [speaker001:] OK, I can give you a more recent [disfmarker] if you want [disfmarker] well, that might have enough. [speaker003:] Uh, I [disfmarker] yeah, but I also have stuff you [disfmarker] trash you left over, [speaker005:] OK. [speaker003:] your quals and your triple AI. [speaker005:] The quals w the [disfmarker] the [disfmarker] the quals slides would be fine. [speaker001:] Yeah. [speaker005:] You could get it out of there, or some [speaker001:] Which I can even email you then, you know, like there probably was a little [disfmarker] few changes, not a big deal. Yeah, you could steal anything you want, I don't care. Which you've already done, obviously. So. [speaker003:] Well, I [disfmarker] I don't feel bad about it at all [speaker001:] Sorry No, you shouldn't. [speaker003:] because [disfmarker] because you are on the, uh, title. I mean on the [disfmarker] the, you're [disfmarker] that's [disfmarker] see, that's you. [speaker001:] Oh, that's great, that's great. [speaker005:] Yeah. [speaker001:] I'm glad to see propagation. [speaker005:] Yeah. [speaker001:] Mmm. [speaker003:] Hmm? Propagated? [speaker001:] Yes. [speaker003:] I mean I might even mention that this work you're doing is sort of also with the MPI in Leipzig, so. [speaker001:] It's [disfmarker] it's certainly related, um, [speaker003:] Because, um, EML is building up a huge thing in Leipzig. [speaker001:] might wanna say. Is it? [speaker003:] So it [disfmarker] It's on biocomputation. Would [disfmarker] [speaker005:] Yeah, it's different, this is the, uh, DNA building, or someth the double helix building. Yeah. [speaker003:] Yeah. [speaker001:] Kind of a different level of analysis. [speaker005:] The [disfmarker] yeah it was [disfmarker] it turns out that if [disfmarker] if you have multiple billions of dollars, y you can do all sorts of weird things, and [disfmarker] [speaker004:] Wait, they're building a building in the shape of DNA, [speaker001:] What? [speaker004:] is that what you said? [speaker005:] Roughly, yeah. [speaker006:] Oh! Oh boy! [speaker005:] Including cr cross bridges, [speaker001:] O [speaker005:] and [speaker001:] What? [speaker006:] That's brilliant! [speaker001:] Oh my god! [speaker006:] Hhh. [speaker005:] You d you really [disfmarker] now I I spent [disfmarker] the last time I was there I spent maybe two hours hearing this story which is, um [disfmarker] [speaker001:] Of what [speaker004:] Y You definitely wanna w don't wanna waste that money on research, [speaker001:] the building? [speaker004:] you know? 
[speaker005:] Right. [speaker004:] That's horrible. [speaker005:] Right. Well, no, no, y i there's infinite money. See you th you th you then fill it with researchers. [speaker001:] And give them more money. They just want a fun place for them to [disfmarker] to work. [speaker005:] Right. Right. [speaker006:] And everybody gets a trampoline in their office. [speaker003:] Well, the [disfmarker] the offices are actually a little [disfmarker] the, think of um, ramps, coming out of the double helix and then you have these half domes, glass half domes, and the offices are in [disfmarker] in the glass half dome. [speaker001:] Really? [speaker006:] Alright, let's stop talking about this. [speaker005:] Yeah. [speaker001:] Does it exist yet? [speaker005:] Yeah. [speaker003:] Uh, as a model. [speaker001:] They are w now building it? Hmm. [speaker003:] But I th [speaker005:] So, yeah, I think that's [disfmarker] that's a good point, th th that the date, the, uh, a lot of the [disfmarker] this is interacting with, uh, people in Italy but also definitely the people in Leipzig and the [disfmarker] the b the combination of the biology and the Leipzig connection might be interesting to these guys, yeah. OK. OK. Anyway! Enough of that, let's talk about your thesis proposal. [speaker003:] Yeah, if somebody has something to say. [speaker005:] Yep. [speaker006:] You might want to, uh, double check the spellings of the authors' names on your references, you had a few, uh, misspells in your slides, there. Like I believe you had "Jackendorf". [speaker005:] Um. [speaker006:] Uh, unless there's a person called "Jackendorf", [speaker005:] No, no, no. [speaker006:] yeah. [speaker001:] On that one? [speaker006:] But that's the only thing I noticed in there. [speaker001:] In the presentation? [speaker006:] In the presentation. [speaker001:] I'll probably [disfmarker] I c might have [disfmarker] I'll probably have comments for you separately, not important. Anyway. [speaker003:] Oh, in the presentation here. [speaker001:] Yeah, that's what he was talking about. [speaker006:] Yeah. [speaker003:] I was ac actually worried about bibtex. Uh. No, that's quite possible. That's copy and paste from something. [speaker005:] So I did note i i it looks like the, uh, metaphor didn't get in yet. [speaker003:] Uh, it did, there is a reference to Srini [disfmarker] [speaker005:] Well, s reference is one thing, the question is is there any place [disfmarker] Oh, did you put in something about, [speaker001:] Metonymy and metaphor here, right? [speaker005:] uh, the individual, we'd talked about putting in something about people had, uh [disfmarker] Oh yeah, OK. Good. I see where you have it. So the top of the second [disfmarker] of pa page two you have a sentence. [speaker003:] Mm hmm. [speaker005:] But, what I meant is, I think even before you give this, to Wahlster, uh, you should, unless you put it in the text, and I don't think it's there yet, about [disfmarker] we talked about is the, um, scalability that you get by, um, combining the constructions with the general construal mechanism. Is that in there? [speaker003:] Yeah, mmm. Um. [speaker005:] Uh, OK, so where [disfmarker] where is it, cuz I'll have to take a look. [speaker003:] Um, but I [disfmarker] I did not focus on that aspect but, um [disfmarker] Ehhh, um, it's just underneath, uh, um, that reference to metaphor. So it's the last paragraph before two. So on page two, um, the main focus [disfmarker] [speaker005:] Uh, OK. Yeah. 
[speaker003:] But that's really [disfmarker] [speaker001:] That's not about that, [speaker003:] Yeah. [speaker001:] is it? [speaker005:] No, it [disfmarker] it [disfmarker] it s says it but it doesn't say [disfmarker] it doesn't [disfmarker] it d it d [speaker003:] Why. [speaker005:] yeah, it doesn't give the punch line. [speaker003:] Mm hmm. [speaker005:] Cuz let me tell the gang what I think the punch line is, because it's actually important, which is, that, the constructions, that, uh, Nancy and Keith and friends are doing, uh, are, in a way, quite general but cover only base cases. And to make them apply to metaphorical cases and metonymic cases and all those things, requires this additional mechanism, of construal. And the punch line is, he claimed, that if you do this right, you can get essentially orthogonality, that if you introduce a new construction at [disfmarker] at the base level, it should com uh, interact with all the metonymies and metaphors so that all of the projections of it also should work. [speaker006:] Mm hmm. [speaker005:] And, similarly, if you introduce a new metaphor, it should then uh, compose with all of the constructions. [speaker006:] Mm hmm. Yeah. [speaker005:] And it [disfmarker] to the extent that that's true then [disfmarker] then it's a big win over anything that exists. [speaker004:] So does that mean instead of having tons and tons of rules in your context free grammar you just have these base constructs and then a general mechanism for coercing them. [speaker006:] Yeah. [speaker005:] Mm hmm. So that, you know, for example, uh, in the metaphor case, that you have a kind of direct idea of a source, path, and goal and any metaphorical one [disfmarker] and abstract goals and all that sort of stuff [comment] [disfmarker] you can do the same grammar. And it is the same grammar. [speaker004:] Mmm. [speaker005:] But, um, the trick is that the [disfmarker] the way the construction's written it requires that the object of the preposition for example be a container. Well, "trouble" isn't a container, but it gets constr construed as a c container. [speaker004:] Right. [speaker005:] Et cetera. So that's [disfmarker] that's where this, um, [speaker004:] So with construal you don't have to have a construction for every possible thing that can fill the rule. [speaker005:] Right. So's it's [disfmarker] it [disfmarker] it's a very big deal, i i in this framework, and the thesis proposal as it stands doesn't, um, I don't think, say that as clearly as it could. [speaker003:] No, it doesn't say it at all. No. Even though [disfmarker] [vocalsound] One could argue what [disfmarker] if there are basic cases, even. I mean, it seems like nothing is context free. [speaker005:] Oh, nothing is context free, but there are basic cases. That is, um, there are physical containers, there are physical paths, there [disfmarker] you know, et cetera. [speaker003:] But "walked into the cafe and ordered a drink," and "walked into the cafe and broke his nose," that's sort of [disfmarker] [speaker005:] Oh, it doesn't mean that they're unambiguous. I mean, a cafe can be construed as a container, or it can be construed you know as [disfmarker] as a obstacle, [speaker003:] Mmm. Yeah. [speaker006:] Uh huh. [speaker005:] or as some physical object. So there are multiple construals. And in fact that's part of what has to be done. This is why there's this interaction between the analysis and the construal. [speaker003:] Mm hmm. Yep. [speaker005:] The b the [disfmarker] the double arrow. 
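(An illustrative aside on the orthogonality point just made: base constructions type-check their role fillers directly, and a separate construal step coerces fillers that fail the check, so N constructions and M construals compose rather than requiring N x M special-case rules. The sketch below is my own simplification with invented type names and a toy lexicon, not the proposal's formalism.)

# Toy lexicon: each word's directly available semantic types.
BASE_TYPES = {
    "cafe": {"container", "physical-object"},
    "trouble": {"abstract-state"},
}

# Construal mappings (stand-ins for metaphor/metonymy maps): a filler
# of the source type may be construed as the target type.
CONSTRUALS = [
    ("abstract-state", "container"),   # e.g. STATES-ARE-CONTAINERS
    ("physical-object", "obstacle"),   # e.g. the "broke his nose" reading
]

def satisfies(word: str, required: str) -> bool:
    """Direct type check first, then one construal step if that fails."""
    types = BASE_TYPES.get(word, set())
    if required in types:
        return True
    return any(src in types and tgt == required for src, tgt in CONSTRUALS)

# The IN-construction requires its landmark to be a container:
print(satisfies("cafe", "container"))     # True: base case
print(satisfies("trouble", "container"))  # True: via construal ("in trouble")
print(satisfies("trouble", "obstacle"))   # False: no licensed construal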
[speaker003:] Yep. [speaker005:] So, uh, yeah, I mean, it doesn't magically make ambiguity go away. [speaker003:] No. [speaker005:] But it does say that, uh, if you walked into the cafe and broke your nose, then you are construing the cafe as an obstacle. [speaker003:] Mm hmm. [speaker005:] And if that's not consistent with other things, then you've gotta reject that reading. [speaker003:] Yep. [speaker004:] You con [disfmarker] you conditioned me with your first sentence, and so I thought, "Why would he walk into the cafe and then somehow break his nose?" [speaker006:] He slipped on the wet floor. [speaker004:] uh, oh, uh [disfmarker] [speaker005:] Right. [speaker003:] You don't find that usage, uh [disfmarker] uh, I checked for it in the Brown national corpus. [speaker005:] Yeah. [speaker003:] The "walk into it" never really means, w as in walked smack [disfmarker] [speaker005:] But "run into" does. [speaker003:] Yeah, but, y y if you find "walked smacked into the cafe" or "slammed into the wall" [disfmarker] [speaker005:] Yeah, no, but "run into" does. [speaker003:] Mm hmm. [speaker005:] Because you will find "run into," uh, [speaker004:] Cars run into telephone poles all the time. [speaker005:] well, or "into the cafe" for that m you know [disfmarker] [speaker003:] Right. [speaker005:] "His car ran into the cafe." [speaker003:] Yeah. Or you can run into an old friend, or run. [speaker005:] Well, you can "run into" in that sense too. But, uh, [speaker001:] Yeah, "run into" might even be more impact sense than, you know, container sense. [speaker005:] Right. [speaker006:] Depends. [speaker005:] But [disfmarker] Like, "run into an old friend", it probably needs its own construction. I mean, uh, you know, George would have I'm sure some exa complicated ex reason why it really was an instance of something else [speaker001:] Mm hmm. Mm hmm. [speaker005:] and maybe it is, but, um, there are idioms and my guess is that's one of them, but, um [disfmarker] I don't know. [speaker001:] All contact. I mean, there there's contact that doesn't [disfmarker] social contact, whatever. I mean. [speaker005:] Uh. [speaker006:] Sudden surprising contact, right? [speaker005:] Yeah, but it's [disfmarker] it's [disfmarker] it's [disfmarker] it's [disfmarker] Right. i Yeah, it's more [disfmarker] [speaker001:] Forceful. [speaker006:] But of course, no, i i I mean it has a life of its own. It's sort of partially inspired by the spatial [disfmarker] [speaker005:] Well, this is this motivated [disfmarker] but yeah [disfmarker] [speaker006:] Yeah. [speaker005:] oh yeah, mo for sure, motivated, but then you can't parse on motivated. [speaker006:] Yeah. Yeah. Right. [speaker005:] Uh, [speaker001:] Too bad. [speaker004:] You should get a T shirt that says that. [speaker005:] OK. [speaker001:] There's [disfmarker] there's lots of things you could make T shirts out of, but, uh, this has gotten [disfmarker] I mean wh We don't need the words to that. [speaker003:] Pro probably not your marks in the kitchen, today. [speaker001:] What? [speaker003:] Not [disfmarker] not your marks. [speaker001:] Oh, no no no no no no no no no, we're not going there. OK. [speaker006:] In other news. [speaker005:] OK, so, um, anything else you want to ask us about the thesis proposal, you got [disfmarker] [speaker003:] Well, [speaker005:] We could look at a particular thing and give you feedback on it. 
[speaker003:] Well there [disfmarker] actually [disfmarker] the [disfmarker] i what would have been really nice is to find an example for all of this, uh, from our domain. So maybe if we w if we can make one up [pause] now, that would be c incredibly helpful. [speaker001:] So, w where it should illustrate [speaker005:] OK. [speaker003:] How [disfmarker] [speaker001:] uh [disfmarker] wh when you say all this, do you mean, like, I don't know, the related work stuff, as well as, mappings? [speaker005:] Right [disfmarker] right [disfmarker] r [speaker003:] w Well we have, for example, a canonical use of something and y it's, you know, we have some constructions and then it's construed as something, and then we [disfmarker] we may get the same constructions with a metaphorical use that's also relevant to the [disfmarker] to the domain. [speaker005:] OK, f let's [disfmarker] let's suppose you use "in" and "on". I mean, that's what you started with. [speaker003:] Mm hmm. [speaker005:] So "in the bus" and "on the bus," um, that's actually a little tricky in English because to some extent they're synonyms. OK. [speaker003:] I had two hours w with George on this, [speaker005:] OK, what did he say. [speaker003:] so it, [speaker001:] Did you? [speaker003:] um [disfmarker] Um. [speaker001:] Join the club. [speaker005:] Right. Oh, h that's [disfmarker] [speaker003:] "On the bus" is a m is a metaphorical metonymy that relates some meta path metaphorically and you're on [disfmarker] on that path and th w I mean it's [disfmarker] he [disfmarker] there's a platform notion, [speaker005:] Yeah, I [disfmarker] I believe all that, it's just [disfmarker] [speaker003:] right? "he's on the [disfmarker] standing on the bus waving to me." [speaker005:] Yeah. [speaker003:] But th the regular as we speak "J Johno was on the bus to New York," [speaker005:] Yeah. Yeah. [speaker003:] um, uh, he's [disfmarker] that's, uh, what did I call it here, the transportation schema, something, [speaker005:] Yeah. [speaker003:] where you can be on the first flight, on the second flight, [speaker005:] Yeah. [speaker003:] and you can be, you know, on the wagon. [speaker005:] Right. So [disfmarker] so that [disfmarker] that may or may not be what you [disfmarker] what you want to do. I mean you could do something much simpler [speaker003:] Yeah. [speaker005:] like "under the bus," or something, where [disfmarker] [speaker003:] But it's [disfmarker] it's [disfmarker] unfortunately, this is not really something a tourist would ever say. So. [speaker005:] Well, unless he was repairing it or something, but yeah. [speaker003:] Yeah. But um. [speaker005:] Uh, but OK. [speaker003:] So in terms of the [disfmarker] this [disfmarker] [speaker001:] I see. [speaker003:] We had [disfmarker] we had [disfmarker] initially we'd [disfmarker] started discussing the "out of film." [speaker005:] Right. [speaker003:] And there's a lot of "out of" analysis, so, um, [speaker005:] Right. [speaker003:] could we capture that with a different construal of [disfmarker] [speaker001:] Yeah, it's a little [disfmarker] it's, uh [disfmarker] we've thought about it before, uh t uh [disfmarker] to use the examples in other papers, and it's [disfmarker] it's a little complicated. [speaker006:] Out of [disfmarker] out of film, in particular. [speaker001:] Cuz you're like, it's a state of [disfmarker] there's resource, [speaker006:] Yeah. [speaker001:] right, and like, what is film, the state [disfmarker] you know. You're out of the state of having film, right? 
and somehow film is standing for the re the resour the state of having some resource is just labeled as that resource. [speaker006:] It's [disfmarker] yeah, I mean, [speaker001:] I mean. [speaker006:] but [disfmarker] and plus the fact that there's also s [speaker001:] It's a little bit [disfmarker] [speaker006:] I mean, can you say, like, "The film ran out" you know, or, maybe you could say something like "The film is out" [speaker001:] Yeah, is film the trajector? [speaker006:] so like the [disfmarker] the film went away from where it should be, namely with you, or something, right? You know. The [disfmarker] the film [disfmarker] the film is gone, right? Um, I never really knew what was going on, I mean I [disfmarker] I find it sort of a little bit farfetched to say that [disfmarker] that "I'm out of film" means that I have left the state of having film or something like that, [speaker001:] It's weird. That [disfmarker] [speaker006:] but. [speaker005:] Uh. [speaker001:] Or, "having" is also, um, associated with location, [speaker006:] Yeah. Yeah. [speaker001:] right? so if the film left, you know [disfmarker] state is being near film. [speaker003:] So running [disfmarker] running out of something is different from being out of somewhere. [speaker005:] Or being out of something as, uh [disfmarker] as well. So "running out of it" definitely has a process aspect to it. [speaker001:] Mm hmm. But that's from run, yeah. [speaker006:] Mm hmm. [speaker005:] So, b that's OK, I mean [disfmarker] b but the difference [speaker001:] Yeah. [speaker003:] Is the d the final state of running out of something is being out of it. [speaker005:] is [disfmarker] [speaker006:] Yeah. [speaker005:] Right. [speaker001:] Yeah. So th [speaker006:] You got there. [speaker001:] That part is fine. [speaker006:] You got to out of it. [speaker005:] Yeah. [speaker006:] Yeah. [speaker005:] But, uh [disfmarker] [speaker006:] Hmm! [speaker005:] Yeah, so [disfmarker] so nob so no one has in [disfmarker] in [disfmarker] of the, uh, professional linguists, they haven't [disfmarker] [speaker001:] Uh. [speaker005:] there was this whole thesis on "out of". [speaker001:] There was? [speaker005:] Well, there [disfmarker] I thought [disfmarker] or there was a paper on it. [speaker001:] Who? [speaker006:] Out. [speaker005:] Huh? [speaker006:] There was one on [disfmarker] on "out" or "out of"? [speaker005:] There was a Well, it may be just "out". [speaker006:] OK. [speaker005:] Yeah. I think there was "over" but there was also a paper on "out". [speaker006:] Yeah, Lind Susan Lindner, right? [speaker005:] Or something. [speaker001:] Oh, yeah, you're right. Yeah. [speaker006:] The [disfmarker] the [disfmarker] "the syrup spread out"? That kind of thing? [speaker005:] Yeah, and all that sort of stuff. [speaker001:] Yeah. And undoubtably there's been reams of work about it in cognitive linguistics, [speaker005:] OK. But anyway. We're not gonna do that between now and next week. [speaker001:] but. Yeah. [speaker003:] Yeah. [speaker005:] OK. So, um [disfmarker] [speaker001:] It's not one of the y it's more straightforward ones [disfmarker] forward ones to defend, so you probably don't want to use it for the purposes [disfmarker] [speaker003:] Mm hmm. [speaker005:] Right. OK. [speaker001:] th these are [disfmarker] you're addressing like, computational linguists, right. Or [disfmarker] are you? [speaker003:] There's gonna be four computational linguists, [speaker001:] OK. But more emphasis on the computational? Or emphasis on the linguist? 
[speaker003:] computer it's [disfmarker] More [disfmarker] there's going to be the [disfmarker] just four computational linguists, by coincidence, but the rest is, whatever, biocomputing people and physicists. [speaker005:] No no no, but not for your talk. [speaker001:] Oh, OK. [speaker005:] I'm we're worrying about the th the thes [speaker003:] Oh, the thesis! [speaker005:] it's just for one guy. [speaker001:] Oh, I meant this, [speaker003:] That's [disfmarker] that's computa should be very computational, [speaker001:] you know, like [disfmarker] OK. [speaker005:] Yeah. [speaker003:] and, uh, someth [speaker001:] So I would try to [disfmarker] I would stay away from one that involves weird construal stuff. [speaker006:] Yeah. [speaker005:] Right. [speaker001:] You know, it's an obvious one [disfmarker] [speaker006:] Totally weird stuff. [speaker003:] I mean the [disfmarker] the old bakery example might be nice, [speaker001:] but, uh [disfmarker] [speaker003:] "Is there a bakery around here". [speaker006:] Yeah. [speaker003:] So if you c we really just construe it as a [disfmarker] [speaker001:] Around? [speaker003:] No, it's the bakery itself [disfmarker] is it a building? uh, that you want to go to? or is it something to eat that you want to buy? [speaker001:] Oh. Oh, oh yeah. Yeah, we've thought about that. Right. Right. [speaker003:] And then [disfmarker] [speaker001:] Nnn. No. What? "Bakery" can't be something you're gonna eat. [speaker005:] No, no. The question is d do you wanna [disfmarker] do you wanna construe [disfmarker] do you wanna constr strue [speaker006:] Sh [speaker004:] It's a speech act. [speaker005:] r Exactly. It's because do you wanna c do you want to view the bakery as a p a place that [disfmarker] that [disfmarker] i for example, if [disfmarker] y [speaker001:] Yeah. Where you can get baked goods. [speaker005:] Well th well, that's one. You want to buy something. But the other is, uh, yo you might have smelled a smell and are just curious about whether there'd be a bakery in the neighborhood, or, [speaker006:] Mm hmm. [speaker005:] um, pfff you know, you wonder how people here make their living, and [disfmarker] there're all sorts of reasons why you might be asking about the existence of a bakery [speaker006:] Yeah. [speaker005:] that doesn't mean, "I want to buy some baked goods." [speaker001:] OK. [speaker005:] But [vocalsound] um, those are interesting examples but it's not clear that they're mainly construal examples. [speaker006:] Yeah. [speaker001:] So it's a lot of pragmatics, there, that [speaker003:] Mmm. [speaker005:] There's all sorts of stuff going on. So [speaker001:] might be beyond what you want to do. [speaker005:] let's [disfmarker] so let's think about this from the point of view of construal. So let's first do a [disfmarker] So the metonymy thing is probably the easiest and a and actually the [disfmarker] Though, the one you have isn't quite [disfmarker] [speaker001:] You mean the s You mean "the steak wants to pay"? [speaker005:] N no not that one, that's [disfmarker] that's a [disfmarker] the sort of background. This is the t uh, page five. [speaker004:] About Plato and the book? [speaker005:] No. [speaker001:] Oh. [speaker003:] No. [speaker001:] Um. [speaker006:] Onward. [speaker005:] Just beyond that. [speaker001:] How much does it cost? [speaker003:] Where is the castle? [speaker005:] Yeah. [speaker006:] A castle. [speaker003:] How old is it? How much does it cost? [speaker004:] Oh. [speaker006:] Mm hmm. 
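(An illustrative aside on the castle example: one way to think about "Where is the castle? How old is it? How much does it cost?" is that the question type selects which role of the tourist-site frame the pronoun metonymically targets, so the cost question lands on the admission fee rather than a sale price, much as "Plato" can stand for a book by Plato. The frame and role names below, and the fee, are invented for the purpose.)

ONTOLOGY = {
    "castle": {
        "kind": "tourist-site",
        "admission-eur": 7.0,   # made-up admission fee
        "age-years": 800,       # made-up age
        "location": "hilltop above the old town",
    },
}

# Which frame role each question type picks out of a tourist site.
METONYMIC_TARGETS = {
    "where": "location",
    "how-old": "age-years",
    "how-much-cost": "admission-eur",  # cost construed as admission
}

def resolve(referent: str, question: str):
    """Follow the metonymic link from the referent to the queried role."""
    return ONTOLOGY[referent][METONYMIC_TARGETS[question]]

# "Where is the castle? How old is it? How much does it cost?"
print(resolve("castle", "where"))
print(resolve("castle", "how-old"))
print(resolve("castle", "how-much-cost"))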
[speaker001:] To go in, that's like [disfmarker] [speaker006:] Two hundred million dollars. [speaker005:] Right. It's not for sale. Uh. So [speaker006:] Yeah, I think that's a good example, actually. [speaker003:] S [speaker001:] Yeah, that's good. u [speaker003:] But as Nancy just su suggested it's probably ellipticus. [speaker001:] Ellipsis. [speaker003:] Huh. [speaker001:] Like, "it" doesn't refer to "thing," it refers to acti you know, j thing standing for activ most relevant activity for a tourist [disfmarker] you could think of it that way, but. [speaker005:] Yeah. [speaker006:] Well, shoot, isn't that [disfmarker] I mean, that's what [disfmarker] [speaker003:] Well, I mean, my argument here is [disfmarker] it's [disfmarker] it's [disfmarker] it's the same thing as "Plato's on the top shelf," [speaker006:] figuring that out is what this is about. [speaker001:] Yeah, yeah, no, I I agree. [speaker003:] I'm con you know, th that you can refer to a book of Plato by using "Plato," [speaker001:] Yeah. [speaker003:] and you can refer back to it, [speaker001:] No no, I [disfmarker] I'm agreeing that this is a good, um [disfmarker] [speaker003:] and so you can [disfmarker] Castles have [disfmarker] as tourist sites, have admission fees, so you can say "Where is the castle, how much does it cost?" Um. "How far is it from here?" [speaker006:] Mm hmm. [speaker003:] So, You're also not referring to the width of the object, or so, [speaker001:] Hmm. [speaker003:] www. [speaker001:] Mm hmm. [speaker006:] Mmm. [speaker005:] OK. Can we think of a nice metaphorical use of "where" in the tourist's domain? Um. [speaker006:] Hmm. [speaker005:] So you know it's [disfmarker] you [disfmarker] you can sometimes use "where" f for "when" [speaker006:] O [speaker005:] in the sense of, you know, um, where [disfmarker] wh where [disfmarker] where was, um, "where was Heidelberg, um, in the Thirty Years' War?" Or something. [speaker006:] Uh, yeah. [speaker003:] Mm hmm. [speaker005:] You know, or some such thing. Um. [speaker006:] Like what side were they on, or [disfmarker]? [speaker001:] What? [speaker005:] Yeah. Essentially, yeah. [speaker001:] OK. I was like, "Huh? It was here." Like [disfmarker] [comment] Um. [speaker005:] But anyway th so there are [disfmarker] there are cases like that. Um, [speaker001:] Ah! Or like its developmental state or something like that, you could [disfmarker] I guess you could get that. [speaker005:] Yeah. Um. [speaker001:] Um. [speaker006:] I mean, there's also things like [disfmarker] I mean, s um, I guess I could ask something like "Where can I find out about blah blah blah" in a sort of [disfmarker] doesn't nece I don't necessarily have to care about the spatial location, just give me a phone number and I'll call them or something like that? [speaker005:] Yeah. There certainly is that, yeah. You know, "Where could I learn its opening hours," or something. [speaker006:] Yeah. [speaker005:] But that's not metaphorical. [speaker001:] Hmm. [speaker005:] It's another [disfmarker] [speaker006:] Yeah. [speaker005:] So we're thinking about, um, or we could also think about, uh [disfmarker] [speaker003:] Well, I [disfmarker] I [disfmarker] I [disfmarker] [speaker005:] How about "I'm in a hurry"? [speaker001:] State. [speaker005:] It i But it's a state [disfmarker] and the [disfmarker] the issue is, is that [disfmarker] it may be just a usage, [speaker006:] Hmm? [speaker005:] you know, that it's not particularly metaphorical, I don't know. [speaker006:] Mmm. [speaker001:] Right. 
So you want a more exotic one [disfmarker] version of that. [speaker006:] Oh. [speaker005:] Yeah. Yeah, right. Ah! How about [speaker001:] I'm really into [disfmarker] [speaker005:] I [disfmarker] I [disfmarker] I [disfmarker] you know, "I'm in [disfmarker] I'm in a state of exhaustion"? or something like that, [speaker001:] Do you really say that? [speaker005:] which a tourist w Huh? [speaker001:] Would you really say that? [speaker005:] A st uh, well, you can certainly say, um, you know, "I'm in overload." [speaker006:] Yeah. [speaker005:] Tu stur tourists will often say that. [speaker004:] I I'm really into art. [speaker001:] Yeah, I was gonna say, like [disfmarker] [speaker004:] Uh [disfmarker] [speaker005:] Oh, you can do that? Really? Of course that's [disfmarker] that [disfmarker] that's definitely a, uh [disfmarker] [speaker006:] Fixed. [speaker001:] A fixed expression, yeah. [speaker005:] that's a, uh [disfmarker] Right. But. [disfmarker] [speaker001:] There're too [disfmarker] there're all sorts of fixed expressions I don't [disfmarker] like uh "I'm out of sorts now!" [speaker005:] Right. [speaker001:] Like [comment] "I'm in trouble!" [speaker003:] Well I [disfmarker] when, uh [disfmarker] just f u the data that I've looked at so far that rec [speaker005:] Yeah. [speaker003:] I mean, there's tons of cases for polysemy. [speaker005:] Right. [speaker003:] So, you know, mak re making reference to buildings as institutions, as containers, as build [speaker001:] Uh huh. [speaker005:] Right. [speaker003:] you know, whatever. Um, so ib in mus for example, in museums, you know, as a building or as something where pictures hang versus, you know, ev something that puts on exhibits, [speaker005:] Right. As an institution, [speaker003:] so forth. But [disfmarker] [speaker005:] yeah. [speaker003:] Um. [speaker001:] Why don't you want to use any of those? [speaker003:] Hmm? [speaker001:] So y you don't wanna use one that's [disfmarker] [speaker003:] Yeah, well [disfmarker] No, but this [disfmarker] that's what I have, you know, started doing. [speaker005:] The castle [disfmarker] the [disfmarker] that old castle one is sort of [disfmarker] [speaker003:] Metonymy, polysemy. [speaker004:] I love Van Gogh. [speaker005:] Yeah. Ah! [speaker001:] "I wanna go see the Van Gogh." [speaker006:] Oh geez. [speaker001:] Anyway, I'm sorry. [speaker003:] But I think the argument should be [disfmarker] uh, can be made that, you know, despite the fact that this is not the most met metaphorical domain, because people interacting with HTI systems try to be straightforward and less lyrical, [speaker005:] Yeah. [speaker003:] construal still is, uh, you know, completely, um, key in terms of finding out any of these things, so, um. [speaker005:] Right. So that's [disfmarker] that's [disfmarker] that's a [disfmarker] that's a reasonable point, that it [disfmarker] in this domain you're gonna get less metaphor and more metonymy. [speaker003:] We, uh [disfmarker] I [disfmarker] with a [disfmarker] I looked [disfmarker] with a student I looked at the entire database that we have on Heidelberg for cases of metonymy. [speaker005:] And polysemy, and stuff like that. Yeah. [speaker003:] Hardly anything. So not even in descriptions w did we find anything, um, relevant. [speaker006:] I have to go. [speaker005:] Alright. Yeah. [speaker003:] But OK this is just something we'll [disfmarker] we'll see, um, [speaker005:] Right. s See you. [speaker003:] and deal with. [speaker005:] OK, well. 
I guess if anybody has additional suggestions, [speaker003:] I mean maybe the "where is something" question as a whole, you know, can be construed as, u i locational versus instructional request. [speaker005:] w Yeah. [speaker003:] So, if we're not talk about the lexic [speaker001:] Location versus what? [speaker003:] instruction. [speaker001:] Instruction. Oh, directions? Yeah. [speaker005:] Sure. [speaker003:] Yeah. [speaker001:] Oh, I thought that was [disfmarker] definitely treated as an example of construal. Right? [speaker003:] Yeah but then you're not on the lexical level, that's sort of one level higher. [speaker001:] Oh, you want a lexical example. [speaker003:] But I don't need it. [speaker005:] Well, you might want both. [speaker003:] Mm hmm. [speaker001:] Yeah. [speaker003:] Also it would be nice to get [disfmarker] ultimately to get a nice mental space example, [speaker005:] We [disfmarker] [speaker003:] so, even temporal references are [disfmarker] just in the spatial domain are rare. [speaker005:] But it's [disfmarker] it's easy to make up plausible ones. You know. [speaker003:] When [disfmarker] when you're getting information on objects. [speaker005:] Right, [speaker003:] So, I mean [disfmarker] [speaker005:] you know [disfmarker] you know, where r Yeah. What color was this in [disfmarker] in in the nain nineteenth century. [speaker003:] Yeah. [speaker005:] What was this p instead of [disfmarker] wh what [disfmarker] you know [disfmarker] how was this painted, what color was this painted, um, was this alleyway open. [speaker003:] Yeah, maybe we can include that also in our second, uh, data run. [speaker005:] Uh. [speaker003:] We c we can show people pictures of objects and then have then ask the system about the objects and engage in conversation on the history and the art and the architecture and so forth. [speaker005:] Mm hmm. OK. So why don't we plan to give you feedback electronically. Wish you a good trip. All success. [speaker004:] For some reason when you said "feedback electronically" I thought of that [disfmarker] you ever see the Simpsons where they're [disfmarker] like the family's got the buzzers and they buzz each other when they don't like what the other one is saying? [speaker001:] Yeah. That's the [disfmarker] first one, I think. [speaker004:] It was a very early one. [speaker001:] The very very first one. [speaker004:] I don't know if it's the first one. [speaker001:] Mmm. Mmm. [speaker005:] So. OK. Doesn't look like it crashed. That's great. [speaker007:] So I think maybe what's causing it to crash is I keep starting it and then stopping it to see if it's working. And so I think starting it and then stopping it and starting it again causes it to crash. So, I won't do that anymore. [speaker002:] And it looks like you've found a way of uh mapping the location to the [disfmarker] without having people have to give their names each time? [speaker001:] Sounds like an initialization thing. [speaker002:] I mean it's like you have the [disfmarker] [speaker007:] No. [speaker002:] So you know that [disfmarker] I mean, are you going to write down [pause] that I sat here? [speaker007:] I'm gonna collect the digit forms and write it down. [speaker002:] OK. [speaker003:] Oh, OK. [speaker007:] So [disfmarker] So they should be right with what's on the digit forms. OK, so I'll go ahead and start with digits. u And I should say that uh, you just pau you just read each line an and then pause briefly. [speaker005:] And start by giving the transcript number. 
[speaker004:] Transcript [disfmarker] Uh. OK, OK. [speaker001:] Tran Oh sorry, go ahead. [speaker005:] So uh, you see, Don, the unbridled excitement of the work that we have on this project. [speaker008:] OK. [speaker005:] It's just uh [disfmarker] [speaker008:] Umh. [speaker005:] Uh, you know, it doesn't seem like a bad idea to have [comment] that information. [speaker007:] And I'm surprised I sort of [disfmarker] I'm surprised I forgot that, but uh I think that would be a good thing to add. [speaker005:] Yeah, I [disfmarker] I'd [disfmarker] I think it's some [speaker007:] After I just printed out a zillion of them. [speaker005:] Yeah, well, that's [disfmarker] Um, so I [disfmarker] I do have a [disfmarker] a an agenda suggestion. Uh, we [disfmarker] I think the things that we talk about in this meeting uh tend to be a mixture of uh procedural uh mundane things and uh research points and um I was thinking I think it was a meeting a couple of weeks ago that we [disfmarker] we spent much of the time talking about the mundane stuff cuz that's easier to get out of the way and then we sort of drifted into the research and maybe five minutes into that Andreas had to leave. So [vocalsound] uh I'm suggesting we turn it around and [disfmarker] and uh sort of we have [disfmarker] anybody has some mundane points that we could send an email later, uh hold them for a bit, and let's talk about the [disfmarker] the research y kind of things. Um, so um the one th one thing I know that we have on that is uh we had talked a [disfmarker] a couple weeks before um uh about the uh [disfmarker] the stuff you were doing with [disfmarker] with uh um uh l l attempting to locate events, we had a little go around trying to figure out what you meant by "events" but I think, you know, what we had meant by "events" I guess was uh points of overlap between speakers. But I th I gather from our discussion a little earlier today that you also mean uh interruptions with something else like some other noise. [speaker004:] Yeah. [speaker005:] Yes? [speaker004:] Uh huh. Yeah. [speaker005:] You mean that as an event also. So at any rate you were [disfmarker] you've [disfmarker] you've done some work on that [speaker004:] To right. [speaker005:] and um then the other thing would be it might be nice to have a preliminary discussion of some of the other uh research uh areas that uh we're thinking about doing. Um, I think especially since you [disfmarker] you haven't been in [disfmarker] in these meetings for a little bit, maybe you have some discussion of some of the p the plausible things to look at now that we're starting to get data, uh and one of the things I know that also came up uh is some discussions that [disfmarker] that uh [disfmarker] that uh Jane had with Lokendra uh about some [disfmarker] some [disfmarker] some um uh work about I [disfmarker] I [disfmarker] I d I [disfmarker] I don't want to try to say cuz I [disfmarker] I'll say it wrong, but anyway some [disfmarker] some potential collaboration there about [disfmarker] about the [disfmarker] about the [disfmarker] working with these data. [speaker003:] Oh. [speaker005:] So. [speaker003:] Sure. [speaker005:] So, uh. [speaker007:] You wanna just go around? [speaker005:] Uh. 
[pause] Well, I don't know if we [disfmarker] if this is sort of like everybody has something to contribute sort of thing, I think there's just just a couple [disfmarker] a couple people primarily um but um Uh, wh why don't [disfmarker] Actually I think that [disfmarker] that last one I just said we could do fairly quickly so why don't you [disfmarker] you start with that. [speaker002:] OK. Shall I [disfmarker] shall I just start? [speaker005:] Yeah, just explain what it was. [speaker002:] OK. Um, so, uh, he was interested in the question of [disfmarker] you know, relating to his [disfmarker] to the research he presented recently, um of inference structures, and uh, the need to build in, um, this [disfmarker] this sort of uh mechanism for understanding of language. And he gave the example in his talk about how [pause] um, e a I'm remembering it just off the top of my head right now, but it's something about how um, i "Joe slipped" you know, "John had washed the floor" or something like that. And I don't have it quite right, but that kind of thing, where you have to draw the inference that, OK, there's this time sequence, but also the [disfmarker] the [disfmarker] the causal aspects of the uh floor and [disfmarker] and how it might have been the cause of the fall and that um it was the other person who fell than the one who cleaned it and it [disfmarker] [comment] These sorts of things. So, I looked through the transcript that we have so far, [comment] and um, fou identified a couple different types of things of that type and um, one of them was something like uh, during the course of the transcript, um um, w we had gone through the part where everyone said which channel they were on and which device they were on, and um, the question was raised "Well, should we restart the recording at this point?" And [disfmarker] and Dan Ellis said, "Well, we're just so far ahead of the game right now [pause] we really don't need to". Now, how would you interpret that without a lot of inference? So, the inferences that are involved are things like, OK, so, how do you interpret "ahead of the game"? You know. So it's the [disfmarker] it's [pause] i What you [disfmarker] what you int what you draw [disfmarker] you know, the conclusions that you need to draw are that space is involved in recording, [speaker007:] Hmm, metaphorically. [speaker002:] that um, i that [pause] i we have enough space, and he continues, like "we're so ahead of the game cuz now we have built in downsampling". So you have to sort of get the idea that um, "ahead of the game" is sp speaking with respect to space limitations, that um that in fact downsampling is gaining us enough space, and that therefore we can keep the recording we've done so far. But there are a lot of different things like that. [speaker007:] So, do you think his interest is in using this as [pause] a data source, or [pause] training material, or what? [speaker005:] Well, I [disfmarker] I should maybe interject to say this started off with a discussion that I had with him, so um we were trying to think of ways that his interests could interact with ours [speaker007:] Mm hmm. [speaker005:] and um uh I thought that if we were going to project into the future when we had a lot of data, uh and um such things might be useful for that in or before we invested too much uh effort into that he should uh, with Jane's help, look into some of the data that we're [disfmarker] already have [speaker007:] Mm hmm. [speaker005:] and see, is there anything to this at all? 
Is there any point which you think that, you know, you could gain some advantage and some potential use for it. Cuz it could be that you'd look through it and you say "well, this is just the wrong [pause] task for [disfmarker] for him to pursue his [disfmarker]" [speaker007:] Wrong, yeah. [speaker005:] And [disfmarker] and uh I got the impression from your mail that in fact there was enough things like this just in the little sample that [disfmarker] that you looked at that [disfmarker] that it's plausible at least. [speaker002:] It's possible. Uh, he was [disfmarker] he [disfmarker] he [disfmarker] you know [disfmarker] We met and he was gonna go and uh you know, y look through them more systematically [speaker005:] Yeah. [speaker002:] and then uh meet again. [speaker005:] Yeah. [speaker002:] So it's, you know, not a matter of a [disfmarker] [speaker005:] Yeah. [speaker002:] But, yeah, I think [disfmarker] I think it was optimistic. [speaker005:] So anyway, that's [disfmarker] that's e a quite different thing from anything we've talked about that, you know, might [disfmarker] might [disfmarker] might come out from some of this. [speaker003:] But he can use text, basically. I mean, he's talking about just using text [speaker002:] That's his major [disfmarker] I mentioned several that w had to do with implications drawn from intonational contours [speaker003:] pretty much, or [disfmarker]? [speaker002:] and [pause] that wasn't as directly relevant to what he's doing. [speaker003:] OK. [speaker002:] He's interested in these [disfmarker] these knowledge structures, [speaker004:] Yeah, interesting. [speaker002:] inferences that you draw [pause] i from [disfmarker] [speaker005:] I mean, he certainly could use text, but we were in fact looking to see if there [disfmarker] is there [disfmarker] is there something in common between our interest in meetings and his interest in [disfmarker] in [disfmarker] in this stuff. So. [speaker007:] And I imagine that transcripts of speech [disfmarker] I mean text that is speech [disfmarker] probably has more of those than sort of prepared writing. I [disfmarker] I don't know whether it would or not, but it seems like it would. [speaker005:] I don't know, probably de probably depends on what the prepared writing was. But. [speaker002:] Yeah, I don't think I would make that leap, because i in narratives, you know [disfmarker] I mean, if you spell out everything in a narrative, it can be really tedious, [speaker007:] Mm hmm. [speaker002:] so. [speaker007:] Yeah, I'm just thinking, you know, when you're [disfmarker] when you're face to face, you have a lot of backchannel and [disfmarker] And [disfmarker] [speaker002:] Oh. That aspect. [speaker007:] Yeah. And so I think it's just easier to do that sort of broad inference jumping if it's face to face. I mean, so, if I just read that Dan was saying "we're ahead of the game" [comment] in that [disfmarker] in that context, [speaker002:] Well [disfmarker] Yeah. [speaker007:] I might not realize that he was talking about disk space as opposed to anything else. [speaker002:] I [disfmarker] you know, I [disfmarker] I had several that had to do with backchannels and this wasn't one of them. [speaker007:] Uh huh. [speaker002:] This [disfmarker] this one really does um m make you leap from [disfmarker] So he said, you know, "we're ahead of the game, w we have built in downsampling". [speaker007:] Mm hmm. 
[speaker002:] And the inference, i if you had it written down, would be [disfmarker] [speaker007:] I guess it would be the same. [speaker002:] Uh huh. But there are others that have backchannelling, it's just he was less interested in those. [speaker006:] Can I [disfmarker] Sorry to interrupt. Um, I f f f I've [disfmarker] [@ @] [comment] d A minute [disfmarker] uh, several minutes ago, I, like, briefly was [disfmarker] was not listening and [disfmarker] So who is "he" in this context? [speaker003:] Yeah, there's a lot of pronoun [disfmarker] [speaker006:] OK. So I was just realizing we've [disfmarker] You guys have been talking about "he" um for at least uh, I don't know, three [disfmarker] three four minutes without ever mentioning the person's name again. [speaker003:] I believe it. Yeah. Actually to make it worse, [comment] uh, Morgan uses "you" and "you" [speaker006:] So this is [disfmarker] this is [disfmarker] this is [disfmarker] gonna be a big, big problem if you want to later do uh, you know, indexing, or speech understanding of any sort. [speaker007:] It's in my notes. [speaker003:] with gaze and no identification, or [disfmarker] I just wrote this down. [speaker006:] You just wrote this? [speaker003:] Yeah, actually. Cuz Morgan will say well, "you had some ideas" [speaker004:] Yeah. [speaker003:] and he never said Li He looked [disfmarker] [speaker007:] Well, I think he's doing that intentionally, [speaker003:] Right, [speaker007:] aren't you? [speaker003:] so it's great. [speaker006:] Right. [speaker003:] So this is really great [speaker004:] Yeah. [speaker003:] because the thing is, because he's looking at the per even for addressees in the conversation, [speaker006:] Mm hmm. [speaker003:] I bet you could pick that up in the acoustics. Just because your gaze is also correlated with the directionality of your voice. [speaker005:] Uh huh. Could be. Yeah. That would be tou [speaker007:] Oh, that would be interesting. [speaker002:] Can we [speaker004:] Yeah. [speaker003:] Yeah, so that, I mean, to even know um when [disfmarker] Yeah, if you have the P Z Ms you should be able to pick up what a person is looking at from their voice. [speaker007:] Well, especially with Morgan, with the way we have the microphones arranged. I'm sort of right on axis and it would be very hard to tell. [speaker003:] Right. [speaker007:] Uh. [speaker002:] Oh, but you'd have the [disfmarker] [speaker003:] Put Morgan always like this and [disfmarker] [speaker002:] You'd have fainter [disfmarker] [speaker005:] Well, these [disfmarker] [speaker002:] Wouldn't you get fainter reception out here? [speaker007:] Sure, but I think if I'm talking like this? Right now I'm looking at Jane and talking, now I'm looking at Chuck and talking, I don't think the microphones would pick up that difference. [speaker003:] But you don't have this [disfmarker] this problem. [speaker002:] I see. [speaker003:] Morgan is the one who does this most. [speaker007:] So if I'm talking at you, or I'm talking at you. [speaker005:] I probably been affect No, I th I think I've been affected by too many conversations where we were talking about lawyers and talking about [disfmarker] and concerns about "oh gee is somebody going to say something bad?" and so on. [speaker007:] Lawyers. [speaker005:] And so I [disfmarker] so I'm [disfmarker] I'm tending to stay away from people's names even though uh [disfmarker] [speaker002:] I am too. 
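The gaze-from-acoustics conjecture just raised could be checked cheaply before anyone commits to it: during each of a speaker's turns, compare how much energy each distant (PZM) microphone picks up, and see whether that cross-channel pattern shifts when the speaker faces a different direction. The sketch below only illustrates that idea; the function name, the channel layout, and the assumption that head orientation shows up in the per-channel energy ratios are all hypothetical, not an existing tool.

import numpy as np

def channel_energy_profile(pzm_channels, start, end, sample_rate=16000):
    # Crude "direction signature" for one speaker turn.
    # pzm_channels: list of 1-D float numpy arrays, one per distant mike.
    # start, end: turn boundaries in seconds.
    # Returns per-channel RMS energies normalized to sum to 1, so that
    # profiles from different turns can be compared directly.
    lo, hi = int(start * sample_rate), int(end * sample_rate)
    rms = np.array([np.sqrt(np.mean(ch[lo:hi] ** 2)) for ch in pzm_channels])
    return rms / rms.sum()

If gaze really does modulate the directionality of the voice, profiles from turns where the same speaker faces different people should separate under a simple distance such as numpy.linalg.norm(profile_a - profile_b).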
[speaker003:] Even though you could pick up later on, just from the acoustics who you were t who you were looking at. [speaker002:] I am too. [speaker007:] And we did mention who "he" was. [speaker003:] Yeah. [speaker007:] Early in the conversation. [speaker005:] Yeah. [speaker006:] Right, but I missed it. But [disfmarker] it was uh [disfmarker] [speaker007:] Do [disfmarker] Sh Can I say [speaker005:] Yeah. [speaker003:] Yeah, yeah. [speaker007:] or [disfmarker] or is that just too sensitive? [speaker006:] Yeah. [speaker005:] Yeah. No no, there's [disfmarker] No no, it isn't sensitive at all. [speaker002:] Well [disfmarker] [speaker005:] I was just [disfmarker] I was just [disfmarker] I was overreacting just because we've been talking about it. [speaker002:] And in fact, it is [disfmarker] it is [disfmarker] it is sensitive. [speaker005:] It's OK to [disfmarker] [speaker003:] No, but that [disfmarker] it's interesting. [speaker002:] I [disfmarker] I came up with something from the Human Subjects people that I wanted to mention. I mean, it fits into the m area of the mundane, but they did say [disfmarker] You know, I asked her very specifically about this clause of how, um, you know, it says "no individuals will be identified," uh, in any publication using the data. OK, well, individuals being identified, let's say you have a [disfmarker] a snippet that says, "Joe s uh thinks such and such about [disfmarker] about this field, but I think he's wrongheaded." Now I mean, we're [disfmarker] we're gonna be careful not to have the "wrongheaded" part in there, but [disfmarker] but you know, let's say we say, you know, "Joe used to think so and so about this area, in his publication he says that but I think he's changed his mind," or whatever. [speaker005:] b But I [disfmarker] [speaker002:] Then the issue of [disfmarker] of being able to trace Joe, because we know he's well known in this field, and all this and [disfmarker] and tie it to the speaker, whose name was just mentioned a moment ago, can be sensitive. So I think it's really [disfmarker] really kind of adaptive and wise to not mention names any more than we have to because if there's a slanderous aspect to it, then how much do we wanna be able to have to remove? [speaker005:] Yeah, well, there's that. But I [disfmarker] I mean I think also to some extent it's just educating the Human Subjects people, in a way, because there's [disfmarker] If uh [disfmarker] You know, there's court transcripts, there's [disfmarker] there's transcripts of radio shows [disfmarker] I mean people say people's names all the time. So I think it [disfmarker] it can't be bad to say people's names. It's just that [disfmarker] i I mean you're right that there's more poten If we never say anybody's name, then there's no chance of [disfmarker] of [disfmarker] of slandering anybody, [speaker003:] But, then it won't [disfmarker] I mean, if we [disfmarker] if we [disfmarker] [speaker005:] but [disfmarker] [speaker007:] It's not a meeting. [speaker003:] Yeah. I mean we should do whatever's natural in a meeting if [disfmarker] if we weren't being recorded. [speaker005:] Yeah. Right, so I [disfmarker] So my behavior is probably not natural. So. [speaker003:] "If Person X [disfmarker]" [speaker002:] Well, my feeling on it was that it wasn't really important who said it, you know. [speaker005:] Yeah.
[speaker006:] Well, if you ha since you have to um go over the transcripts later anyway, you could make it one of the jobs of the [pause] people who do that to mark [speaker007:] Well, we t we t we talked about this during the anon anonymization. If we wanna go through and extract from the audio and the written every time someone says a name. [speaker006:] Right. [speaker007:] And I thought that our conclusion was that we didn't want to do that. [speaker005:] Yeah, we really can't. But a actually, I'm sorry. I really would like to push [disfmarker] finish this off. [speaker002:] I understand. No I just [disfmarker] I just was suggesting that it's not a bad policy p potentially. [speaker005:] So it's [disfmarker] [speaker002:] So, we need to talk about this later. [speaker005:] Yeah, I di I didn't intend it an a policy though. It was [disfmarker] it was just it was just unconscious [disfmarker] well, semi conscious behavior. [speaker002:] Uh huh. [speaker005:] I sorta knew I was doing it but it was [disfmarker] [speaker006:] Well, I still don't know who "he" is. [speaker005:] I [disfmarker] I do I don't remember who "he" is. [speaker003:] No, you have to say, you still don't know who "he" is, with that prosody. [speaker005:] Ah. Uh, we were talking about Dan at one point [comment] and we were talking about Lokendra at another point. [speaker006:] Oh. [speaker002:] Yeah, depends on which one you mean. [speaker005:] And I don't [disfmarker] I don't remember which [disfmarker] which part. [speaker003:] It's ambiguous, so it's OK. [speaker005:] Uh, I think [disfmarker] [speaker007:] Well, the inference structures was Lokendra. [speaker006:] But no. The inference stuff was [disfmarker] was [disfmarker] was Lokendra. OK. [speaker005:] Yeah. Yeah. Yeah. [speaker006:] That makes sense, yeah. [speaker005:] Um [disfmarker] [speaker003:] And the downsampling must have been Dan. [speaker007:] Yeah. [speaker005:] Good [disfmarker] Yeah. [speaker003:] It's an inference. [speaker005:] Yeah, you could do all these inferences, [speaker007:] Yeah. [speaker005:] yeah. Yeah. Um, I [disfmarker] I would like to move it into [disfmarker] into uh what Jose uh has been doing because he's actually been doing something. [speaker002:] Yeah. [speaker004:] Uh huh. [speaker005:] So. [vocalsound] Right. [speaker004:] OK. [speaker006:] As opposed to the rest of us. [speaker004:] Well [comment] [vocalsound] OK. I [disfmarker] I remind that me [disfmarker] my first objective eh, in the project is to [disfmarker] to study difference parameters to [disfmarker] to find a [disfmarker] a good solution to detect eh, the overlapping zone in eh speech recorded. But eh, [vocalsound] tsk, [comment] [vocalsound] ehhh [comment] In that way [comment] I [disfmarker] [vocalsound] I [disfmarker] [vocalsound] I begin to [disfmarker] to study and to analyze the ehn [disfmarker] the recorded speech eh the different session to [disfmarker] to find and to locate and to mark eh the [disfmarker] the different overlapping zone. And eh so eh I was eh [disfmarker] I am transcribing the [disfmarker] the first session and I [disfmarker] I have found eh, eh one thousand acoustic events, eh besides the overlapping zones, eh I [disfmarker] I [disfmarker] I mean the eh breaths eh aspiration eh, eh, talk eh, eh, clap, eh [disfmarker] [comment] I don't know what is the different names eh you use to [disfmarker] to name the [disfmarker] the [pause] n speech [speaker001:] Nonspeech sounds? [speaker004:] Yeah. 
[speaker007:] Oh, I don't think we've been doing it at that level of detail. So. [speaker004:] Yeah. Eh, [vocalsound] I [disfmarker] I [disfmarker] I do I don't need to [disfmarker] to [disfmarker] to mmm [vocalsound] [disfmarker] to m to label the [disfmarker] the different acoustic, but I prefer because eh I would like to [disfmarker] to study if eh, I [disfmarker] I will find eh, eh, a good eh parameters eh to detect overlapping I would like to [disfmarker] to [disfmarker] to test these parameters eh with the [disfmarker] another eh, eh acoustic events, to nnn [disfmarker] [vocalsound] to eh [disfmarker] to find what is the ehm [disfmarker] the false [disfmarker] eh, the false eh hypothesis eh, nnn, which eh are produced when we use the [disfmarker] the ehm [disfmarker] this eh parameter [disfmarker] eh I mean pitch eh, eh, difference eh, feature [disfmarker] [speaker007:] Mm hmm. [speaker001:] You know [disfmarker] I think some of these um that are the nonspeech overlapping events may be difficult even for humans to tell that there's two there. [speaker007:] So it was [disfmarker] [speaker004:] Yeah. [speaker001:] I mean, if it's a tapping sound, you wouldn't necessarily [disfmarker] or, you know, something like that, it'd be [disfmarker] it might be hard to know that it was two separate events. [speaker004:] Yeah. Yeah. Yeah. Yeah. [speaker007:] Well [disfmarker] You weren't talking about just overlaps [speaker004:] Ye [speaker007:] were you? You were just talking about acoustic events. [speaker004:] I [disfmarker] I [disfmarker] I [disfmarker] I t I t I talk eh about eh acoustic events in general, [speaker007:] Someone starts, someone stops [disfmarker] Yeah. [speaker004:] but eh my [disfmarker] my objective eh will be eh to study eh overlapping zone. [speaker001:] Oh. [speaker007:] Mm hmm. [speaker004:] Eh? [comment] n Eh in twelve minutes I found eh, eh one thousand acoustic events. [speaker005:] How many overlaps were there uh in it? No no, how many of them were the overlaps of speech, though? [speaker004:] How many? Eh almost eh three hundred eh in one session [speaker007:] Oh, God! [speaker004:] in five [disfmarker] eh in forty five minutes. [speaker007:] Ugh. [speaker001:] Three hundred overlapping speech [disfmarker] [speaker004:] Alm Three hundred overlapping zone. [speaker003:] Overlapping speech. [speaker004:] With the overlapping zone, overlapping speech [disfmarker] speech what eh different duration. [speaker001:] Mm hmm. [speaker005:] Sure. [speaker002:] Does this [disfmarker]? So if you had an overlap involving three people, how many times was that counted? [speaker004:] Yeah, three people, two people. Eh, um I would like to consider eh one people with difference noise eh in the background, be [speaker005:] No no, but I think what she's asking is [pause] if at some particular for some particular stretch you had three people talking, instead of two, did you call that one event? [speaker004:] Oh. Oh. Yeah. I consider one event eh for th for that eh for all the zone. This [disfmarker] th I [disfmarker] I [disfmarker] I con I consider [disfmarker] I consider eh an acoustic event, the overlapping zone, the period where three speaker or eh [disfmarker] are talking together. [speaker007:] Well [disfmarker] So let's [disfmarker] [speaker002:] For [speaker007:] So let's say me and Jane are talking at the same time, and then Liz starts talking also over all of us. How many events would that be? [speaker004:] So I don't understand. 
[speaker007:] So, two people are talking, [comment] and then a third person starts talking. [speaker004:] Yeah? [speaker007:] Is there an event right here? [speaker004:] Eh no. No no. For me is the overlapping zone, because [disfmarker] because you [disfmarker] you have s you have more one [disfmarker] eh, more one voice eh, eh produced in a [disfmarker] in [disfmarker] in a moment. [speaker007:] So i if two or more people are talking. [speaker005:] I see. OK. Yeah. So I think [disfmarker] Yeah. We just wanted to understand how you're defining it. [speaker004:] Yeah. [speaker005:] So then, in the region between [disfmarker] since there [disfmarker] there is some continuous region, in between regions where there is only one person speaking. [speaker004:] If Uh huh. [speaker005:] And one contiguous region like that you're calling an event. Is it [disfmarker] Are you calling the beginning or the end of it the event, [speaker004:] Uh huh. Yeah. [speaker005:] or are you calling the entire length of it the event? [speaker004:] I consider the [disfmarker] the, nnn [disfmarker] the nnn, nnn [disfmarker] eh, the entirety eh, eh, all [disfmarker] all the time there were [disfmarker] the voice has overlapped. [speaker005:] OK. [speaker004:] This is the idea. But eh I [disfmarker] I don't distinguish between the [disfmarker] the numbers of eh speaker. Uh, I'm not considering [vocalsound] eh the [disfmarker] the [disfmarker] ehm [vocalsound] eh, the fact of eh, eh, for example, what did you say? Eh at first eh, eh two talkers are uh, eh speaking, and eh, eh a third person eh join to [disfmarker] to that. For me, it's eh [disfmarker] it's eh, all overlap zone, with eh several numbers of speakers is eh, eh the same acoustic event. Wi but [disfmarker] uh, without any mark between the zone [disfmarker] of the overlapping zone with two speakers eh speaking together, and the zone with the three speakers. [speaker002:] That would j just be one. [speaker004:] It [disfmarker] One. One. [speaker002:] OK. [speaker004:] Eh, with eh, a beginning mark and the ending mark. [speaker005:] Got it. [speaker004:] Because eh [vocalsound] for me, is the [disfmarker] is the zone with eh some kind of eh distortion the spectral. I don't mind [disfmarker] By the moment, by the moment. [speaker007:] Well, but [disfmarker] But you could imagine that three people talking has a different spectral characteristic than two. [speaker004:] I [disfmarker] I don't [disfmarker] Yeah, but eh [disfmarker] but eh I have to study. [comment] What will happen in a general way, [speaker005:] Could. [speaker007:] So. You had to start somewhere. [speaker005:] Yeah. We just w [speaker004:] I [disfmarker] [vocalsound] I don't know what eh will [disfmarker] will happen with the [disfmarker] [speaker003:] So there's a lot of overlap. [speaker007:] Yep. [speaker003:] So. [speaker007:] That's a lot of overlap, yeah, [speaker005:] So again, that's [disfmarker] that's three [disfmarker] three hundred in forty five minutes that are [disfmarker] that are speakers, just speakers. [speaker007:] for forty five minutes. [speaker004:] Yeah? Yeah. Yeah. [speaker005:] Uh huh. OK. Yeah. [speaker002:] But a [disfmarker] a [disfmarker] a th [speaker005:] So that's about eight per minute. [speaker002:] But a thousand events in twelve minutes, that's [disfmarker] [speaker004:] Yeah, [pause] but [disfmarker] Yeah. [speaker003:] But that can include taps. [speaker005:] Uh. Yeah. 
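Jose's working definition, one event per contiguous stretch in which two or more voices overlap no matter how many speakers join or leave inside it, is precise enough to state in a few lines. A minimal sketch, assuming per-speaker speech segments are already available as (start, end) pairs in seconds; the function name and input format are illustrative, not part of any existing tool:

def overlap_zones(speaker_segments):
    # speaker_segments: one list of (start, end) pairs per speaker.
    # Returns (start, end) pairs for every maximal region where at least
    # two speakers talk at once: one "acoustic event" per region, with a
    # beginning mark and an ending mark, exactly as described above.
    edges = []
    for segments in speaker_segments:
        for start, end in segments:
            edges.append((start, +1))   # a voice comes on
            edges.append((end, -1))     # a voice goes off
    edges.sort()                        # at ties, ends sort before starts
    zones, active, zone_start = [], 0, None
    for time, delta in edges:
        active += delta
        if active >= 2 and zone_start is None:       # overlap begins
            zone_start = time
        elif active < 2 and zone_start is not None:  # overlap ends
            zones.append((zone_start, time))
            zone_start = None
    return zones

With a transcript converted to such segment lists, the three hundred overlaps counted for the forty five minute session could be recomputed and cross-checked automatically.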
[speaker004:] But [disfmarker] [speaker002:] Well, but a thousand taps in eight minutes is a l in twelve minutes is a lot. [speaker004:] General. [speaker003:] Actually [disfmarker] [speaker004:] I [disfmarker] I con I consider [disfmarker] I consider acoustic events eh, the silent too. [speaker002:] Silent. [speaker007:] Silence starting or silence ending [disfmarker] [speaker004:] Yeah, silent, ground to [disfmarker] bec to detect [disfmarker] eh because I consider acoustic event all the things are not eh speech. [speaker003:] Oh, OK. [speaker005:] Mm hmm. [speaker001:] Oh. [speaker004:] In ge in [disfmarker] in [disfmarker] in a general point of view. [speaker003:] Oh. [speaker005:] OK, so how many of those thousand were silence? [speaker003:] Alright. [speaker006:] Not speech [disfmarker] not speech or too much speech. [speaker004:] in the per Too much speech. [speaker005:] Right. So how many of those thousand were silence, silent sections? [speaker004:] Yeah. Uh silent, I [disfmarker] I [disfmarker] I [disfmarker] I don't [disfmarker] I [disfmarker] I haven't the [disfmarker] eh I [disfmarker] I would like to [disfmarker] to do a stylistic study [speaker005:] Yeah. [speaker004:] and give you eh with the report eh from eh the [disfmarker] the study from the [disfmarker] the [disfmarker] the session [disfmarker] one session. [speaker005:] Yeah. Yeah. [speaker004:] And I [disfmarker] I found that eh another thing. When eh [vocalsound] eh I w I [disfmarker] [vocalsound] I was eh look at eh nnn, the difference speech file, um, for example, eh if eh we use the ehm [disfmarker] the mixed file, to [disfmarker] to transcribe, the [disfmarker] the events and the words, I [disfmarker] I saw that eh the eh speech signal, collected by the eh this kind of mike [disfmarker] eh of this kind of mike, eh are different from the eh mixed signal eh, we eh [disfmarker] collected by headphone. [speaker007:] Yep. Right. [speaker004:] And [disfmarker] It's right. [speaker005:] Yeah. [speaker004:] But the problem is [vocalsound] the following. The [disfmarker] the [disfmarker] the [disfmarker] I [disfmarker] I [disfmarker] I knew that eh the signal eh, eh would be different, but eh the [disfmarker] the problem is eh, eh we eh detected eh difference events in the speech file eh collected by [disfmarker] by that mike uh qui compared with the mixed file. And so if [disfmarker] when you transcribe eh only eh using the nnn [disfmarker] the mixed file, it's possible [disfmarker] eh if you use the transcription to evaluate a different system, it's possible you eh [disfmarker] in the eh i and you use the eh speech file collected by the eh fet mike, to eh [disfmarker] to nnn [disfmarker] to do the experiments [pause] with the [disfmarker] the system, [speaker005:] Mm hmm. [speaker007:] Right. [speaker004:] its possible to evaluate eh, eh [disfmarker] or to consider eh acoustic events that [disfmarker] which you marked eh in the mixed file, but eh they don't appear in the eh speech signal eh collected by the [disfmarker] by the mike. [speaker007:] Right. The [disfmarker] the reason that I generated the mixed file was for IBM to do word level transcription, not speech event transcription. [speaker004:] Yeah. Yeah. Oh, it's a good idea. It's a good idea I think. [speaker007:] So I agree that if someone wants to do speech event transcription, that the mixed signals here [disfmarker] [speaker004:] Yeah. 
[speaker007:] I mean, if I'm tapping on the table, you it's not gonna show up on any of the mikes, but it's gonna show up rather loudly in the PZM. [speaker004:] Yeah. Yeah. Yeah. So and I [disfmarker] I [disfmarker] [vocalsound] I say eh that eh, eh, or this eh only because eh I c I [disfmarker] I [disfmarker] [vocalsound] in my opinion, it's necessary to eh [disfmarker] to eh [disfmarker] to put the transcription on the speech file, collected by the objective signal. [speaker007:] So. [speaker004:] I mean the [disfmarker] the [disfmarker] the signal collected by the [disfmarker] eh, the real mike in the future, in the prototype to [disfmarker] to eh correct the initial eh segmentation eh with the eh real speech [speaker005:] Mm hmm. The [disfmarker] the [disfmarker] the far field, yeah. [speaker004:] you have to [disfmarker] to analyze [disfmarker] you have to [disfmarker] to process. Because I [disfmarker] I found a difference. [speaker005:] Yeah, well, just [disfmarker] I mean, just in that [disfmarker] that one s ten second, or whatever it was, example that Adam had that [disfmarker] that we [disfmarker] we passed on to others a few months ago, there was that business where I g I guess it was Adam and Jane were talking at the same time and [disfmarker] and uh, in the close talking mikes you couldn't hear the overlap, and in the distant mike you could. So yeah, it's clear that if you wanna study [disfmarker] if you wanna find all the places where there were overlap, it's probably better to use a distant mike. [speaker006:] That's good. [speaker005:] On the other hand, there's other phenomena that are going on at the same time for which it might be useful to look at the close talking mikes, [speaker004:] Yeah. [speaker005:] so it's [disfmarker] [speaker003:] But why can't you use the combination of the close talking mikes, time aligned? [speaker007:] If you use the combination of the close talking mikes, you would hear Jane interrupting me, but you wouldn't hear the paper rustling. And so if you're interested in [disfmarker] [speaker003:] I [disfmarker] I mean if you're interested in speakers overlapping other speakers and not the other kinds of nonspeech, that's not a problem, [speaker005:] Some [comment] of it's masking [disfmarker] masked. [speaker004:] Yeah. [speaker001:] Were you interrupting him or was he interrupting you? [speaker005:] Right. [speaker007:] Right. [speaker003:] right? [speaker007:] Although the other issue is that the [pause] mixed close talking mikes [disfmarker] [speaker004:] Yeah. [speaker007:] I mean, I'm doing weird normalizations and things like that. [speaker003:] But it's known. [speaker004:] Yeah. [speaker007:] Yep. [speaker003:] I mean, the normalization you do is over the whole conversation [speaker007:] Right. [speaker003:] isn't it, over the whole meeting. [speaker007:] Yep. [speaker003:] So if you wanted to study people overlapping people, that's not a problem. [speaker004:] I [disfmarker] I [disfmarker] I think eh I saw the nnn [disfmarker] the [disfmarker] eh but eh I eh [disfmarker] I have eh any results. I [disfmarker] I [disfmarker] I saw the [disfmarker] the speech file collected by eh the fet mike, and eh eh signal eh to eh [disfmarker] to noise eh relation is eh low. [speaker005:] Mm hmm. [speaker004:] It's low. It's very low. You would comp if we compare it with eh the headphone. [speaker007:] Yep. 
[speaker004:] And I [disfmarker] I found that nnn [disfmarker] that eh, [vocalsound] ehm, pr probably, [speaker007:] Did [disfmarker] Did you [speaker004:] I'm not sure eh by the moment, but it's [disfmarker] it's probably that eh a lot of eh, [vocalsound] eh for example, in the overlapping zone, on eh [disfmarker] in [disfmarker] in several eh parts of the files where you [disfmarker] you can find eh, eh [vocalsound] eh, smooth eh eh speech eh from eh one eh eh talker in the [disfmarker] in the meeting, [speaker005:] Mm hmm. Mm hmm. [speaker004:] it's probably in [disfmarker] in that eh [disfmarker] in [disfmarker] in those files you [disfmarker] you can not find [disfmarker] you can not process because eh it's confused with [disfmarker] with noise. [speaker005:] Mm hmm. [speaker004:] And there are [vocalsound] a lot of I think. But I have to study with more detail. But eh my idea is to [disfmarker] to process only [pause] nnn, this eh [disfmarker] nnn, this kind of s of eh speech. Because I think it's more realistic. I'm not sure it's a good idea, but eh [disfmarker] [speaker007:] Well, it's more realistic but it'll [disfmarker] it'll be a lot harder. [speaker005:] No [disfmarker] i Well, it'd be hard, but on the other hand as you point out, if your [disfmarker] if i if [disfmarker] if your concern is to get uh the overlapping people [disfmarker] people's speech, you will [disfmarker] you will get that somewhat better. [speaker004:] Yeah. Mm hmm. Yeah. [speaker005:] Um, Are you making any use [disfmarker] uh you were [disfmarker] you were working with th the data that had already been transcribed. Does it uh [disfmarker] [speaker004:] With [disfmarker] By Jane. [speaker005:] Yes. [speaker004:] Yeah. [speaker005:] Now um did you make any use of that? See I was wondering cuz we st we have these ten hours of other stuff that is not yet transcribed. [speaker004:] Yeah. Yeah. [speaker005:] Do you [disfmarker] [speaker004:] The [disfmarker] the transcription by Jane, t eh i eh, I [disfmarker] I [disfmarker] I want to use to [disfmarker] to nnn, [vocalsound] eh to put [disfmarker] i i it's a reference for me. But eh the transcription [disfmarker] eh for example, I [disfmarker] I don't [disfmarker] I [disfmarker] I'm not interested in the [disfmarker] in the [disfmarker] in the words, transcription words, eh transcribed eh eh in [disfmarker] eh follow in the [disfmarker] [vocalsound] in the [disfmarker] in the speech file, but eh eh Jane eh for example eh put a mark eh at the beginning eh of each eh talker, in the [disfmarker] in the meeting, um eh she [disfmarker] she nnn includes information about the zone where eh there are eh [disfmarker] there is an overlapping zone. [speaker005:] Mm hmm. [speaker004:] But eh there isn't any [disfmarker] any mark, time [disfmarker] temporal mark, to [disfmarker] to c eh [disfmarker] to mmm [vocalsound] [disfmarker] e heh, to label [comment] the beginning and the end of the [disfmarker] of the [speaker005:] OK. Right, so she is [disfmarker] [speaker004:] ta I'm [disfmarker] I [disfmarker] I [disfmarker] I think eh we need this information to [speaker005:] Right. So the twelve [disfmarker] you [disfmarker] you [disfmarker] it took you twelve hours [disfmarker] of course this included maybe some [disfmarker] some time where you were learning about what [disfmarker] what you wanted to do, but [disfmarker] but uh, it took you something like twelve hours to mark the forty five minutes, your [speaker007:] Twelve minutes. [speaker004:] Twelve minutes. 
[speaker005:] s Twelve minutes! [speaker004:] Twelve minutes. Twelve. [speaker005:] I thought you did forty five minutes of [disfmarker] [speaker004:] No, forty five minutes is the [disfmarker] is the session, all the session. [speaker005:] Oh, you haven't done the whole session. [speaker002:] Oh. [speaker004:] Yeah, [speaker005:] This is just twelve minutes. [speaker004:] all is the [vocalsound] the session. [speaker005:] Oh. [speaker004:] Tw twelve hours of work to [disfmarker] [vocalsound] to segment eh and label eh twelve minutes from a session of part [disfmarker] of f [speaker005:] So [comment] let me back up again. So the [disfmarker] when you said there were three hundred speaker overlaps, [speaker004:] Yeah. [speaker005:] that's in twelve minutes? [speaker004:] No no no. I [disfmarker] I consider all the [disfmarker] all the session because eh I [disfmarker] I count the nnn [disfmarker] the nnn [disfmarker] the overlappings marked by [disfmarker] by Jane, [speaker005:] Oh, OK. OK. [speaker002:] Oh, I see. [speaker004:] in [disfmarker] in [disfmarker] in [disfmarker] in the [pause] fin in [disfmarker] in the [pause] forty five minutes. [speaker005:] So it's three hundred in forty five minutes, but you have [disfmarker] you have time uh, uh marked [disfmarker] twelve minute [disfmarker] the [disfmarker] the [disfmarker] the um overlaps in twelve minutes of it. [speaker004:] Yeah. [speaker005:] Got it. [speaker007:] Well, not just the overlaps, everything. [speaker006:] So, can I ask [disfmarker] [vocalsound] can I ask whether you found [disfmarker] uh, you know, how accurate uh Jane's uh uh labels were as far as [disfmarker] you know, did she miss some overlaps? or did she n? [speaker004:] But, by [disfmarker] by the moment, I [disfmarker] I don't compare, my [disfmarker] my temporal mark with eh Jane, but eh I [disfmarker] I want to do it. Because eh eh i per perhaps I have eh errors in the [disfmarker] in the marks, I [disfmarker] and if I [disfmarker] I compare with eh Jane, it's probably I [disfmarker] I [disfmarker] I can correct and [disfmarker] and [disfmarker] and [disfmarker] to get eh eh a more accurately eh eh transcription in the file. [speaker005:] Yeah. [speaker007:] Well, also Jane [disfmarker] Jane was doing word level. [speaker005:] Yeah. [speaker007:] So we weren't concerned with [comment] exactly when an overlap started and stopped. [speaker004:] Yeah. [speaker006:] Right. Right. I'm expect I'm not expecting [disfmarker] [speaker004:] Well [disfmarker] [speaker003:] Well, not only a word level, but actually I mean, you didn't need to show the exact point of interruption, [speaker004:] No, it's [disfmarker] [speaker003:] you just were showing at the level of the phrase or the level of the speech spurt, or [disfmarker] [speaker007:] Right. [speaker005:] Mm hmm. [speaker007:] Yep. [speaker004:] Yeah. [speaker002:] Well [disfmarker] [speaker004:] Yeah. [speaker002:] Well, yeah, b yeah, I would say time bin. So my [disfmarker] my goal is to get words with reference to a time bin, [pause] beginning and end point. [speaker003:] Yeah. [speaker004:] Yeah. Yeah. [speaker003:] Right. [speaker004:] Yeah. [speaker002:] And [disfmarker] and sometimes, you know, it was like you could have an overlap where someone said something in the middle, but, yeah, w it just wasn't important for our purposes to have it that [disfmarker] i disrupt that unit in order to have, you know, a the words in the order in which they were spoken, [speaker004:] Yeah. 
[speaker002:] it would have [disfmarker] it would have been hard with the interface that we have. [speaker007:] Right. [speaker002:] Now, my [disfmarker] a Adam's working on a of course, on a revised overlapping interface, [speaker004:] Uh huh. I [disfmarker] I [disfmarker] I think [disfmarker] It's [disfmarker] it's a good eh work, [speaker002:] but [disfmarker] [speaker004:] but eh I think we need eh eh more information. [speaker006:] No, of course. I expect you to find more overlaps than [disfmarker] than Jane [speaker002:] Yeah. [speaker007:] Always need more for [disfmarker] [speaker004:] No, no. I [disfmarker] I have to go to [disfmarker] [speaker002:] Yeah. [speaker006:] because you're looking at it at a much more detailed level. [speaker004:] I want eh [disfmarker] I wanted to eh compare the [disfmarker] the transcription. [speaker007:] But if it takes sixty to one [disfmarker] [speaker005:] I have [disfmarker] Well, I but I have a suggestion about that. Um, obviously this is very, very time consuming, and you're finding lots of things which I'm sure are gonna be very interesting, but in the interests of making progress, uh might I s how [disfmarker] how would it affect your time if you only marked speaker overlaps? [speaker004:] Only. [speaker005:] Yes. Do not mark any other events, [speaker004:] Yeah. [speaker005:] but only mark speaker [disfmarker] [speaker004:] Uh huh. [speaker005:] Do you think that would speed it up quite a bit? [speaker004:] OK. OK. I [disfmarker] I [disfmarker] I [disfmarker] I w I [disfmarker] I wanted to [disfmarker] [speaker005:] Do y do you think that would speed it up? Uh, speed up your [disfmarker] your [disfmarker] your marking? [speaker004:] nnn, I don't understand very. [speaker005:] It took you a long time [pause] to mark twelve minutes. [speaker004:] Yeah. Oh, yeah, yeah. [speaker005:] Now, my suggestion was for the other thirty three [disfmarker] [speaker004:] On only to mark [disfmarker] only to mark overlapping zone, but [disfmarker] [speaker005:] Yeah, and my question is, if you did that, if you followed my suggestion, would it take much less time? [speaker004:] Oh, yeah. [speaker005:] Yeah OK. [speaker004:] Sure. Yeah sure. [speaker005:] Then I think it's a good idea. Then I think it's a good idea, because it [speaker004:] Sure sure. Sure, because I [disfmarker] I need a lot of time to [disfmarker] to put the label or to do that. Yeah. [speaker005:] Yeah, I mean, we we know that there's noise. [speaker007:] And [speaker004:] Uh huh. [speaker005:] There's [disfmarker] there's uh continual noise uh from fans and so forth, and there is uh more impulsive noise from uh taps and so forth [speaker004:] Yeah. [speaker005:] and [disfmarker] and something in between with paper rustling. We know that all that's there and it's a g worthwhile thing to study, but obviously it takes a lot of time to mark all of these things. [speaker004:] Yeah. [speaker005:] Whereas th i I would think that uh you [disfmarker] we can study more or less as a distinct phenomenon the overlapping of people talking. [speaker004:] Uh huh. OK. [speaker005:] So. Then you can get the [disfmarker] Cuz you need [disfmarker] [speaker004:] OK. [speaker005:] If it's three hundred uh [disfmarker] i i it sounds like you probably only have fifty or sixty or seventy events right now that are really [disfmarker] [speaker004:] Yeah. 
[speaker005:] And [disfmarker] and you need to have a lot more than that to have any kind of uh even visual sense of [disfmarker] of what's going on, much less any kind of reasonable statistics. [speaker007:] Right. [speaker003:] Now, why do you need to mark speaker overlap by hand if you can infer it from the relative energy in the [disfmarker] [speaker007:] Well, that's [disfmarker] That's what I was gonna bring up. [speaker003:] I mean, you shouldn't need to do this p completely by hand, [speaker005:] Um, OK, yeah. So let's back up because you weren't here for an earlier conversation. [speaker003:] right? I'm sorry. [speaker005:] So the idea was that what he was going to be doing was experimenting with different measures such as the increase in energy, such as the energy in the LPC residuals, such as [disfmarker] I mean there's a bunch of things [disfmarker] I mean, increased energy is sort of an obvious one. [speaker003:] Mm hmm. In the far field mike. [speaker005:] Yeah. [speaker003:] Oh, OK. [speaker005:] Um, and uh, it's not obvious, I mean, you could [disfmarker] you could do the dumbest thing and get [disfmarker] get it ninety percent of the time. But when you start going past that and trying to do better, it's not obvious what combination of features is gonna give you the [disfmarker] you know, the right detector. So the idea is to have some ground truth first. And so the i the idea of the manual marking was to say "OK this, i you know, it's [disfmarker] it's really here". [speaker001:] But I think Liz is saying why not get it out of the transcripts? [speaker003:] What I mean is [pause] get it from the close talking mikes. [speaker005:] Uh, yeah. [speaker003:] A or ge get a first pass from those, [speaker005:] We t we t w we t we talked about that. [speaker003:] and then go through sort of [disfmarker] It'd be a lot faster probably to [disfmarker] [speaker006:] And you can [disfmarker] [speaker007:] Yeah, that's his, uh [disfmarker] [speaker005:] We [disfmarker] we [disfmarker] we talked about that. s But so it's a bootstrapping thing and the thing is, [speaker003:] Yeah, I just [disfmarker] [speaker005:] the idea was, i we i i we thought it would be useful for him to look at the data anyway, and [disfmarker] and then whatever he could mark would be helpful, [speaker003:] Right. [speaker005:] and we could [disfmarker] Uh it's a question of what you bootstrap from. You know, do you bootstrap from a simple measurement which is right most of the time and then you g do better, or do you bootstrap from some human being looking at it and then [disfmarker] then do your simple measurements, uh from the close talking mike. I mean, even with the close talking mike you're not gonna get it right all the time. [speaker003:] Well, that's what I wonder, because um [disfmarker] or how bad it is, [speaker005:] Well [speaker007:] I'm working on a program to do that, and [disfmarker] [speaker003:] be um, because that would be interesting especially because the bottleneck is the transcription. Right? I mean, we've got a lot more data than we have transcriptions for. We have the audio data, [speaker005:] Yeah. [speaker003:] we have the close talking mike, so I mean it seems like one kind of project that's not perfect, but [disfmarker] um, that you can get the training data for pretty quickly is, you know, if you infer from the close talking mikes where the on off points are of speech, [speaker005:] Right, we discussed that. [speaker003:] you know, how can we detect that from a far field?
[speaker007:] And [disfmarker] [speaker002:] Oh. [speaker007:] I've [disfmarker] I've written a program to do that, [speaker003:] OK, [speaker007:] and it, uh [disfmarker] [speaker003:] I'm sorry I missed the [disfmarker] [speaker005:] It's OK. [speaker007:] and [disfmarker] so [disfmarker] but it's [disfmarker] it's doing something very, very simple. It just takes a threshold, based on [disfmarker] on the volume, [speaker003:] Uh huh. [speaker006:] Or you can set the threshold low and then weed out the false alarms by hand. [speaker003:] Right, by hand. Yeah. [speaker006:] Yeah. [speaker007:] um, and then it does a median filter, and then it looks for runs. And, it seems to work, I've [disfmarker] I'm sort of fiddling with the parameters, to get it to actually generate something, and I haven't [disfmarker] I don't [disfmarker] what I'm working on [disfmarker] was working on [disfmarker] was getting it to a form where we can import it into the user interface that we have, [pause] into Transcriber. And so [disfmarker] I told [disfmarker] I said it would take about a day. I've worked on it for about half a day, [speaker008:] I have to go. [speaker007:] so give me another half day and I we'll have something we can play with. [speaker003:] OK. [speaker005:] See, this is where we really need the Meeting Recorder query stuff to be working, because we've had these meetings and we've had this discussion about this, and I'm sort of remembering a little bit about what we decided, [speaker003:] Right. I'm sorry. I just [disfmarker] [speaker005:] but I couldn't remember all of it. [speaker003:] It [speaker005:] So, I think it was partly that, you know, give somebody a chance to actually look at the data and see what these are like, partly that we have e some ground truth to compare against, you know, when [disfmarker] when he [disfmarker] he gets his thing going, [speaker007:] But [disfmarker] [speaker005:] uh, and [disfmarker] [speaker003:] Well, it's definitely good to have somebody look at it. I was just thinking as a way to speed up you know, the amount of [disfmarker] [speaker005:] That was [disfmarker] that was exactly the notion that [disfmarker] that [disfmarker] that we discussed. [speaker002:] Mm hmm. [speaker003:] OK. [speaker007:] Thanks. [speaker002:] Another thing we discussed was um that [disfmarker] [speaker005:] So. [speaker003:] It looks good. I'll be in touch. Thanks. [speaker005:] S See ya. Yeah. [speaker002:] Was that um there m [pause] there was this already a script I believe uh that Dan had written, [comment] that uh handle bleedthrough, I mean cuz you have this [disfmarker] this close [disfmarker] you have contamination from other people who speak loudly. [speaker007:] Yeah, and I haven't tried using that. It would probably help the program that I'm doing to first feed it through that. It's a cross correlation filter. So I [disfmarker] I haven't tried that, but that [disfmarker] If [disfmarker] It [disfmarker] it might be something [disfmarker] it might be a good way of cleaning it up a little. [speaker002:] So, some thought of maybe having [disfmarker] Yeah, having that be a preprocessor and then run it through yours. [speaker007:] Exactly. Yep. [speaker005:] But [disfmarker] but that's a refinement [speaker002:] That's what we were discussing. [speaker005:] and I think we wanna see [disfmarker] try the simple thing first, cuz you add this complex thing up uh afterwards that does something good y y yo you sort of wanna see what the simple thing does first. [speaker007:] Yep. 
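The detector described here (threshold the close-talking channel on energy, median filter the frame decisions, then look for runs) is simple enough to sketch in full. Everything below, the frame size, the threshold relative to the loudest frame, and the filter width, is an assumed parameter choice rather than the actual program; and as just discussed, a cross-correlation cleanup pass over the close-talking channels could be run first to suppress bleedthrough.

import numpy as np
from scipy.signal import medfilt

def speech_runs(x, sr, frame_ms=25, rel_threshold_db=-35.0, kernel=9):
    # x: one close-talking channel as a float numpy array; sr: sample rate.
    # Frames are marked speech when their energy clears a threshold set
    # relative to the loudest frame; the binary track is median filtered
    # to remove isolated flips; surviving runs become (start, end) seconds.
    hop = int(sr * frame_ms / 1000)
    frames = x[: len(x) // hop * hop].reshape(-1, hop)
    db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    voiced = medfilt((db > db.max() + rel_threshold_db).astype(float), kernel)
    runs, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start * hop / sr, i * hop / sr))
            start = None
    if start is not None:
        runs.append((start * hop / sr, len(voiced) * hop / sr))
    return runs

Feeding each channel's runs into an overlap merger like the one sketched earlier would generate candidate overlaps for a human to accept or reject, which is the bootstrapping order being argued for here.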
[speaker005:] But uh, having [disfmarker] having somebody have some experience, again, with [disfmarker] with uh [disfmarker] with marking it from a human standpoint, we're [disfmarker] I mean, I don't expect Jose to [disfmarker] to do it for uh f fifty hours of [disfmarker] [comment] of speech, but I mean we [disfmarker] [comment] if uh [disfmarker] if he could speed up what he was doing by just getting the speaker overlaps so that we had it, say, for forty five minutes, then at least we'd have three hundred examples of it. [speaker004:] Yeah. Sure. Sure. [speaker005:] And when [disfmarker] when uh Adam was doing his automatic thing he could then compare to that and see what it was different. [speaker003:] Oh yeah, definitely. [speaker001:] You know, I did [disfmarker] I did uh something almost identical to this at one of my previous jobs, and it works pretty well. I mean, i almost exactly what you described, an energy detector with a median filter, you look for runs. And uh, you know, you can [disfmarker] [speaker007:] It seemed like the right thing to do. [speaker001:] Yeah. I mean, you [disfmarker] you can get y I mean, you get them pretty close. [speaker007:] That was with zero literature search. [speaker001:] And so I think doing that to generate these possibilities and then going through and saying yes or no on them would be a quick way to [disfmarker] to do it. [speaker007:] That's good validation. [speaker001:] Yeah. [speaker002:] Is this proprietary? [speaker007:] Yeah, do you have a patent on it? [speaker001:] Uh. [comment] No. No. It was when I was working for the government. [speaker005:] Oh, then everybody owns it. It's the people. [speaker002:] Well, I mean, is this something that we could just co opt, or is it [disfmarker]? No. [speaker001:] Nah. [speaker002:] OK. [speaker005:] Well, i i i he's pretty close, anyway. I think [disfmarker] I think it's [disfmarker] [speaker001:] Yeah, he's [disfmarker] it [disfmarker] it doesn't take a long time. [speaker002:] Right. I just thought if it was tried and true, then [disfmarker] [comment] and he's gone through additional levels of [disfmarker] of development. [speaker007:] Just output. Although if you [disfmarker] if you have some parameters like what's a good window size for the median filter [disfmarker] [speaker001:] Oh! [comment] I have to remember. I'll think about it, and try to remember. [speaker006:] And it might be different for government people. [speaker007:] That's alright. [speaker005:] Yeah, good enough for government work, as they say. [speaker003:] They [disfmarker] they [disfmarker] [speaker001:] Di dif different [disfmarker] different bandwidth. [speaker006:] They [speaker007:] I was doing pretty short, you know, tenth of a second, [comment] sorts of numbers. [speaker006:] OK. [speaker005:] Uh, I don't know, it [disfmarker] if [disfmarker] if we want to uh [disfmarker] So, uh, maybe we should move on to other [disfmarker] other things in limited time. [speaker002:] Can I ask one question about his statistics? So [disfmarker] so in the tw twelve minutes, um, if we took three hundred and divided it by four, which is about the length of twelve minutes, i Um, I'd expect like there should be seventy five overlaps. [speaker005:] Yeah. [speaker002:] Did you find uh more than seventy five overlaps in that period, or [disfmarker]? [speaker004:] More than? [speaker002:] More than [disfmarker] How many overlaps in your twelve minutes? [speaker004:] How many? 
Eh, not [@ @] I Onl only I [disfmarker] I transcribe eh only twelve minutes from the [speaker005:] Yeah. [speaker004:] but eh I [disfmarker] I don't co eh [disfmarker] I don't count eh the [disfmarker] the overlap. [speaker002:] The overlaps. OK. [speaker004:] I consider I [disfmarker] I [disfmarker] The [disfmarker] the nnn [disfmarker] The [disfmarker] the three hundred is eh considered only you [disfmarker] your transcription. I have to [disfmarker] [vocalsound] to finish transcribing. So. [speaker007:] I b I bet they're more, because the beginning of the meeting had a lot more overlaps than [disfmarker] than sort of the middle. [speaker004:] Yeah. [speaker007:] Middle or end. [speaker004:] Yeah. [speaker007:] Because i we're [disfmarker] we're dealing with the [disfmarker] Uh, in the early meetings, [speaker002:] I'm not sure. [speaker007:] we're recording while we're saying who's talking on what microphone, [comment] and things like that, [speaker004:] Yeah. [speaker007:] and that seems to be a lot of overlap. [speaker004:] Yeah. [speaker002:] I think it's an empirical question. I think we could find that out. [speaker004:] Yeah. [speaker007:] Yep. [speaker002:] I'm [disfmarker] I'm not sure that the beginning had more. [speaker005:] So [disfmarker] so I was gonna ask, I guess about any [disfmarker] any other things that [disfmarker] that [disfmarker] that either of you wanted to talk about, especially since Andreas is leaving in five minutes, that [disfmarker] that you wanna go with. [speaker003:] Can I just ask about the data, like very straightforward question is where we are on the amount of data and the amount of transcribed data, just cuz I'm [disfmarker] I wanted to get a feel for that to sort of be able to know what [disfmarker] what can be done first and like how many meetings are we recording [speaker005:] Right so there's this [disfmarker] this [disfmarker] There's this forty five minute piece that Jane transcribed. [speaker003:] and [disfmarker] [speaker005:] That piece was then uh sent to IBM so they could transcribe so we have some comparison point. Then there's s a larger piece that's been recorded and uh put on CD ROM and sent uh to IBM. Right? And then we don't know. [speaker003:] How many meetings is that? [speaker007:] What's that? [speaker003:] Like [disfmarker] how many [disfmarker] [speaker005:] That was about ten hours, and there was about [disfmarker] [speaker003:] t ten [disfmarker] It's like ten meetings or something? [speaker007:] Yeah, something like that. [speaker003:] Uh huh. [speaker007:] And then [disfmarker] then we [speaker001:] Ten meetings that have been sent to IBM? [speaker003:] And [disfmarker] [speaker007:] Well, I haven't sent them yet because I was having this problem with the [pause] missing files. [speaker005:] Yeah. Oh. Oh, that's right, that had [disfmarker] those have not been sent. [speaker001:] H how many total have we recorded now, altogether? [speaker005:] We're saying about [pause] twelve hours. [speaker007:] About twelve [pause] by now. Twelve or thirteen. [speaker003:] Uh huh. And we're recording only this meeting, like continuously we're only recording this one now? or [disfmarker]? [speaker005:] No. [speaker007:] Nope. [speaker005:] No, so the [disfmarker] the [disfmarker] that's the [disfmarker] that's the biggest one [disfmarker] uh, chunk so far, [speaker003:] OK. [speaker001:] It was the morning one. [speaker005:] but there's at least one meeting recorded of uh the uh uh natural language guys. [speaker007:] Jerry. 
[speaker003:] Do they meet every week, [speaker005:] And then there [disfmarker] Uh, they do. [speaker003:] or every [disfmarker] [speaker005:] w w And we talked to them about recording some more and we're going to, uh, we've started having a morning meeting, today uh i starting a w a week or two ago, on the uh front end issues, and we're recording those, uh there's a network services and applications group here who's agreed to have their meetings recorded, [speaker003:] Great. [speaker005:] and we're gonna start recording them. They're [disfmarker] They meet on Tuesdays. We're gonna start recording them next week. So actually, we're gonna h start having a [disfmarker] a pretty significant chunk and so, you know, [vocalsound] Adam's sort of struggling with trying to get things to be less buggy, and come up quicker when they do crash and stuff [disfmarker] things like that, now that uh [disfmarker] [vocalsound] the things are starting to happen. So right now, yeah, I th I'd say the data is predominantly meeting meetings, but there are scattered other meetings in it and that [disfmarker] that amount is gonna grow uh so that the meeting meetings will probably ultimately [disfmarker] i if we're [disfmarker] if we collect fifty or sixty hours, the meeting meetings it will probably be, you know, twenty or thirty percent of it, not [disfmarker] not [disfmarker] not eighty or ninety. But. [speaker003:] So there's probably [disfmarker] there's three to four a week, [speaker007:] That's what we're aiming for. [speaker003:] that we're aiming for. [speaker005:] Yeah. [speaker003:] And they're each about an hour or something. [speaker005:] Yeah, yeah. [speaker007:] Although [disfmarker] Yeah. We'll find out tomorrow whether we can really do this or not. [speaker003:] So [disfmarker] [speaker005:] Yeah and th the [disfmarker] the other thing is I'm not pos [speaker003:] OK. [speaker005:] I'm sort of thinking as we've been through this a few times, that I really don't know [disfmarker] maybe you wanna do it once for the novelty, but I don't know if in general we wanna have meetings that we record from outside this group do the digits. [speaker007:] Right. [speaker005:] Because it's just an added bunch of weird stuff. [speaker003:] Yeah. [speaker005:] And, you know, we [disfmarker] we h we're highly motivated. Uh in fact, the morning group is really motivated cuz they're working on connected digits, so it's [disfmarker] [speaker007:] Actually that's something I wanted to ask, is I have a bunch of scripts to help with the transcription of the digits. [speaker005:] Yeah. [speaker007:] We don't have to hand transcribe the digits because we're reading them and I have those. [speaker005:] Yeah. [speaker007:] And so I have some scripts that let you very quickly extract the sections of each utterance. [speaker003:] Right. [speaker007:] But I haven't been ru I haven't been doing that. Um, if I did that, is someone gonna be working on it? [speaker005:] Uh, yeah, I [disfmarker] I think [speaker007:] I mean, is it something of interest? [speaker005:] definitely s so Absolutely. Yeah, whoever we have working on the acoustics for the Meeting Recorder are gonna start with that. [speaker007:] OK. I mean, I I'm [disfmarker] I'm interested in it, I just don't have time to do it now. [speaker006:] I was [disfmarker] these meetings [disfmarker] I'm sure someone thought of this, but these [disfmarker] this uh reading of the numbers would be extremely helpful to do um adaptation. [speaker007:] So Yep. Yep. [speaker006:] Um. 
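Since the digit strings are read from known prompts, "transcribing" them reduces to cutting each utterance out of the recording at its marked times and pairing it with the prompt text. The helper below is only a guess at what such a script might look like, using the standard-library wave module; the tab-separated marks file format (start, end, label per line, times in seconds) is an invented placeholder, not the format of the actual scripts.

import wave

def cut_segments(wav_path, marks_path, out_prefix):
    # Write one small WAV per marked utterance in the session recording.
    with wave.open(wav_path, "rb") as src:
        params = src.getparams()
        sr = src.getframerate()
        with open(marks_path) as marks:
            for n, line in enumerate(marks):
                start, end, label = line.rstrip("\n").split("\t")
                src.setpos(int(float(start) * sr))
                frames = src.readframes(int((float(end) - float(start)) * sr))
                with wave.open(f"{out_prefix}_{n:04d}.wav", "wb") as dst:
                    dst.setparams(params)  # nframes is fixed up on close
                    dst.writeframes(frames)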
[speaker003:] Actually I have o [speaker007:] I [disfmarker] I would really like someone to do adaptation. [speaker006:] Mm hmm. [speaker007:] So if we got someone interested in that, I think it would be great for Meeting Recorder. [speaker005:] Well [disfmarker] I mean, one of the things I wanted to do, uh, that I I talked to [disfmarker] to Don about, is one of the possible things he could do or m also, we could have someone else do it, is to do block echo cancellation, [speaker007:] Since it's the same people over and over. [speaker006:] Mm hmm. [speaker005:] to try to get rid of some of the effects of the [disfmarker] the [disfmarker] the far field effects. [speaker006:] Mm hmm. [speaker005:] Um, I mean we have [disfmarker] the party line has been that echo cancellation is not the right way to handle the situation because people move around, and uh, if [disfmarker] if it's [disfmarker] if it's uh not a simple echo, like a cross talk kind of echo, but it's actually room acoustics, it's [disfmarker] it's [disfmarker] it's [disfmarker] you can't really do inversion, [speaker006:] Mm hmm. [speaker005:] and even echo cancellation is going to uh be something [disfmarker] It may [disfmarker] you [disfmarker] Someone may be moving enough that you are not able to adapt quickly and so the tack that we've taken is more "let's come up with feature approaches and multi stream approaches and so forth, that will be robust to it for the recognizer and not try to create a clean signal". [speaker006:] Mm hmm. [speaker005:] Uh, that's the party line. But it occurred to me a few months ago that uh party lines are always, you know, sort of dangerous. It's good [disfmarker] [vocalsound] good to sort of test them, actually. And so we haven't had anybody try to do a good serious job on echo cancellation and we should know how well that can do. So that's something I'd like somebody to do at some point, just take these digits, take the far field mike signal, and the close uh mike signal, and apply really good echo cancellation. Um, there was a [disfmarker] have been some nice talks recently by [disfmarker] by Lucent on [disfmarker] on their b [speaker006:] Hmm. [speaker005:] the block echo cancellation particularly appealed to me, uh you know, not trying to change it sample by sample, but you have some reasonable sized blocks. [comment] And um, you know, th [speaker001:] W what is the um [disfmarker] the artifact you try to [disfmarker] you're trying to get rid of when you do that? [speaker006:] Ciao. [speaker005:] Uh so it's [disfmarker] it [disfmarker] you have a [disfmarker] a direct uh [disfmarker] Uh, what's the difference in [disfmarker] If you were trying to construct a linear filter, that would um [disfmarker] [speaker006:] I'm signing off. [speaker005:] Yeah. That would subtract off [comment] the um uh parts of the signal that were the aspects of the signal that were different between the close talk and the distant. You know, so [disfmarker] so uh um I guess in most echo cancellation [disfmarker] Yeah, so you [disfmarker] Given that um [disfmarker] Yeah, so you're trying to [disfmarker] So you'd [disfmarker] There's a [disfmarker] a distance between the close and the distant mikes so there's a time delay there, and after the time delay, there's these various reflections.
And if you figure out well what's the [disfmarker] there's a [disfmarker] a least squares algorithm that adjusts itself [disfmarker] adjusts the weight so that you try to subtract [disfmarker] essentially to subtract off uh different uh [disfmarker] different reflections. Right? So let's take the simple case where you just had [disfmarker] you had some uh some delay in a satellite connection or something and then there's a [disfmarker] there's an echo. It comes back. And you want to adjust this filter so that it will maximally reduce the effect of this echo. [speaker001:] So that would mean like if you were listening to the data that was recorded on one of those. Uh, just the raw data, you would [disfmarker] you might hear kind of an echo? And [disfmarker] and then this [disfmarker] noise cancellation would get [speaker005:] Well, I'm [disfmarker] I'm [disfmarker] I'm saying [disfmarker] That's a simplified version of what's really happening. [comment] What's really happening is [disfmarker] Well, when I'm talking to you right now, you're getting the direct sound from my speech, but you're also getting, uh, the indirect sound that's bounced around the room a number of times. OK? So now, if you um try to r you [disfmarker] To completely remove the effect of that is sort of impractical for a number of technical reasons, but I [disfmarker] but [disfmarker] not to try to completely remove it, that is, invert the [disfmarker] the room response, but just to try to uh uh eliminate some of the [disfmarker] the effect of some of the echos. Um, a number of people have done this so that, say, if you're talking to a speakerphone, uh it makes it more like it would be, if you were talking right up to it. So this is sort of the st the straight forward approach. You say I [disfmarker] I [disfmarker] I want to use this uh [disfmarker] this item but I want to subtract off various kinds of echos. So you construct a filter, and you have this [disfmarker] this filtered version uh of the speech um gets uh uh [disfmarker] gets subtracted off from the original speech. Then you try to [disfmarker] you try to minimize the energy in some sense. And so um [disfmarker] uh with some constraints. [speaker001:] Kind of a clean up thing, that [disfmarker] [speaker005:] It's a clean up thing. Right. [speaker001:] OK. [speaker005:] So, echo cancelling is [disfmarker] is, you know, commonly done in telephony, and [disfmarker] and [disfmarker] and it's sort of the obvious thing to do in this situation if you [disfmarker] if, you know, you're gonna be talking some distance from a mike. [speaker001:] When uh, I would have meetings with the folks in Cambridge when I was at BBN over the phone, they had a um [disfmarker] some kind of a special speaker phone and when they would first connect me, it would come on and we'd hear all this noise. [speaker005:] Yeah. [speaker001:] And then it was uh [disfmarker] And then it would come on and it was very clear, you know. [speaker005:] Right. So it's taking samples, it's doing adaptation, it's adjusting weights, and then it's getting the sum. So um, uh anyway that's [disfmarker] that's kind of a reasonable thing that I'd like to have somebody try [disfmarker] somebody look [disfmarker] And [disfmarker] and the digits would be a reasonable thing to do that with. I think that'd be enough data [disfmarker] plenty of data to do that with, and i for that sort of task you wouldn't care whether it was uh large vocabulary speech or anything. Uh. 
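The scheme just described, a tapped delay line whose weights are adapted by least squares so that a filtered copy of one signal is subtracted from another to minimize the residual energy, is the classic LMS family of adaptive filters. A minimal sample-by-sample sketch of the normalized variant follows (the block echo cancellation mentioned above updates the weights once per block of samples instead); the filter length and step size are assumed values, and this illustrates the textbook algorithm, not code from the project.

import numpy as np

def nlms_cancel(far, ref, n_taps=256, mu=0.1, eps=1e-8):
    # Subtract an adaptively filtered reference (e.g. a close-talking
    # mike) from the far-field signal; returns the residual signal.
    # far, ref: equal-length float numpy arrays.
    w = np.zeros(n_taps)                 # estimated echo-path weights
    out = np.zeros(len(far))
    for n in range(n_taps, len(far)):
        x = ref[n - n_taps:n][::-1]      # most recent reference samples
        y = w @ x                        # estimated echo component
        e = far[n] - y                   # residual after cancellation
        w += (mu / (x @ x + eps)) * e * x  # normalized LMS weight update
        out[n] = e
    return out

Run over the read digits, with a distant PZM channel as far and the matching close-talking channel as ref, this would be one way to test how far plain echo cancellation gets, which is exactly the experiment being proposed.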
[vocalsound] Um [speaker002:] Is Brian Kingsbury's work related to that, or is it a different type of reverberation? [speaker005:] Brian's [comment] Kingsbury's work is an example of what we did f f from the opposite dogma. Right? Which is what I was calling the "party line", which is that uh doing that sort of thing is not really what we want. We want something more flexible, uh i i where people might change their position, and there might be, you know [disfmarker] There's also um oh yeah, noise. So the echo cancellation does not really allow for noise. It's if you have a clean situation but you just have some delays, then we'll figure out the right [disfmarker] the right set of weights for your taps for your filter in order to produce the effect of those [disfmarker] those echoes. But um if there's noise, then the very signal that it's looking at is corrupted so that its decision about what the right [disfmarker] you know, right [disfmarker] right uh [disfmarker] delays are [disfmarker] is, uh [disfmarker] is [disfmarker] right delayed signal is [disfmarker] is [disfmarker] is [disfmarker] uh is incorrect. And so, in a noisy situation, um, also in a [disfmarker] in a situation that's very reverberant [disfmarker] [comment] with long reverberation times [comment] and really long delays, it's [disfmarker] it's sort of typically impractical. So for those kind of reasons, and also a [disfmarker] a c a complete inversion, if you actually [disfmarker] I mentioned that it's kind of hard to really do the inversion of the room acoustics. Um, that's difficult because um often times the [disfmarker] the um [disfmarker] [vocalsound] the system transfer function is such that when it's inverted you get something that's unstable, and so, if you [disfmarker] you do your estimate of what the system is, and then you try to invert it, you get a filter that actually uh, you know, rings, and [disfmarker] and uh goes to infinity. So it's [disfmarker] so there's [disfmarker] there's [disfmarker] there's that sort of technical reason, and the fact that things move, and there's air currents [disfmarker] I mean there's all sorts of [disfmarker] all sorts of reasons why it's not really practical. So for all those kinds of reasons, uh we [disfmarker] we [disfmarker] we sort of um, concluded we didn't want to in do inversion, and we're even pretty skeptical of echo cancellation, which isn't really inversion, and um we decided to do this approach of taking [disfmarker] uh, just picking uh features, which were [disfmarker] uh will give you more [disfmarker] something that was more stable, in the presence of, or absence of, room reverberation, and that's what Brian was trying to do. So, um, let me just say a couple things that I was [disfmarker] I was gonna bring up. Uh. Let's see. I guess you [disfmarker] you actually already said this thing about the uh [disfmarker] about the consent forms, which was that we now don't have to [disfmarker] So this was the human subjects folks who said this, [comment] or that [disfmarker] that [disfmarker]? [speaker002:] The a apparently [disfmarker] I mean, we're gonna do a revised form, of course. Um but once a person has signed it once, then that's valid for a certain number of meetings. She wanted me to actually estimate how many meetings and put that on the consent form. I told her that would be a little bit difficult to say. So I think from a s practical standpoint, maybe we could have them do it once every ten meetings, or something.
It won't be that many people who do it [pause] that often, but um just, you know, so long as they don't forget that they've done it, I guess. [speaker005:] OK. Um, back on the data thing, so there's this sort of one hour, ten hour, a hundred hour sort of thing that [disfmarker] that we have. We have [disfmarker] we have an hour uh that [disfmarker] that is transcribed, we have [disfmarker] we have twelve hours that's recorded but not transcribed, and at the rate we're going, uh by the end of the semester we'll have, I don't know, forty or fifty or something, if we [disfmarker] if this really uh [disfmarker] Well, do we have that much? [speaker003:] Not really. [speaker005:] Let's see, we have [disfmarker] [speaker003:] It's three to four per week. So that's what [disfmarker] You know, that [disfmarker] [speaker005:] uh eight weeks, uh is [disfmarker] [speaker003:] So that's not a lot of hours. [speaker005:] Eight weeks times three hours is twenty four, so that's [disfmarker] Yeah, so like thirty [disfmarker] thirty hours? [speaker001:] Three [disfmarker] Three hours. [speaker003:] Yeah. I mean, is there [disfmarker] I know this sounds [pause] tough but we've got the room set up. Um I was starting to think of some projects where you would use well, similar to what we talked about with uh energy detection on the close talking mikes. There are a number of interesting questions that you can ask about how interactions happen in a meeting, that don't require any transcription. So what are the patterns, the energy patterns over the meeting? And I'm really interested in this [vocalsound] but we don't have a whole lot of data. So I was thinking, you know, we've got the room set up and you can always think of, also for political reasons, if ICSI collected you know, two hundred hours, that looks different than forty hours, even if we don't transcribe it ourselves, [speaker005:] But I don't think we're gonna stop at the end of this semester. [speaker003:] so [disfmarker] [speaker005:] Right? So, I th I think that if we are able to keep that up for a few months, we are gonna have more like a hundred hours. [speaker003:] I mean, is there [disfmarker] Are there any other meetings here that we can record, especially meetings that have some kind of conflict in them [comment] or some kind of deci I mean, that are less well [disfmarker] I don't [disfmarker] uh, that have some more emotional aspects to them, or strong [disfmarker] [speaker007:] We had some good ones earlier. [speaker003:] There's laughter, um I'm talking more about strong differences of opinion meetings, maybe with manager types, or [disfmarker] [speaker007:] I think it's hard to record those. [speaker003:] To be allowed to record them? [speaker005:] Yeah, people will get [disfmarker] [speaker003:] OK. [speaker002:] It's also likely that people will cancel out afterwards. But I [disfmarker] but I wanted to raise the KPFA idea. [speaker003:] OK. Well, if there is, anyway. [speaker005:] Yeah, I was gonna mention that. [speaker007:] Oh, that's a good idea. That's [disfmarker] That would be a good match. [speaker005:] Yeah. So [disfmarker] Yeah. So I [disfmarker] I [disfmarker] uh, I [disfmarker] I'd mentioned to Adam, and [disfmarker] that was another thing I was gonna talk [disfmarker] uh, mention to them before [disfmarker] [comment] that uh there's uh [disfmarker] It [disfmarker] it oc it occurred to me that we might be able to get some additional data by talking to uh acquaintances in local broadcast media. 
Because, you know, we had talked before about the problem about using found data, [comment] that [disfmarker] that uh it's just set up however they have it set up and we don't have any say about it and it's typically one microphone, in a, uh, uh [disfmarker] or [disfmarker] and [disfmarker] and so it doesn't really give us the [disfmarker] the [disfmarker] the uh characteristics we want. Um and so I do think we're gonna continue recording here and record what we can. But um, it did occur to me that we could go to friends in broadcast media and say "hey you have this panel show, [pause] or this [disfmarker] you know, this discussion show, and um can you record multi channel?" And uh they may be willing to record it uh with [disfmarker] [speaker003:] With lapel mikes or something? [speaker005:] Well, they probably already use lapel, but they might be able to have it [disfmarker] it wouldn't be that weird for them to have another mike that was somewhat distant. [speaker003:] Right. [speaker005:] It wouldn't be exactly this setup, but it would be that sort of thing, and what we were gonna get from UW, you know, assuming they [disfmarker] they [disfmarker] they start recording, isn't [disfmarker] als also is not going to be this exact setup. [speaker003:] Right. No, I think that'd be great, [speaker005:] So, [comment] I [disfmarker] I [disfmarker] I [disfmarker] I was thinking of looking into that. [speaker003:] if we can get more data. [speaker005:] the other thing that occurred to me after we had that discussion, in fact, is that it's even possible, since of course, many radio shows are not live, [comment] uh that we could invite them to have like some of their [disfmarker] [comment] record some of their shows here. [speaker002:] Wow! [speaker003:] Well [disfmarker] Or [disfmarker] The thing is, they're not as averse to wearing one of these head mount [speaker007:] Right, as we are. [speaker003:] I mean, they're on the radio, right? So. [comment] Um, I think that'd be fantastic [speaker005:] Right. [speaker003:] cuz those kinds of panels and [disfmarker] Those have interesting [speaker005:] Yeah. [speaker003:] Th that's an [disfmarker] a side of style [disfmarker] a style that we're not collecting here, so [speaker005:] And [disfmarker] and the [disfmarker] I mean, the other side to it was the [disfmarker] what [disfmarker] which is where we were coming from [disfmarker] I'll [disfmarker] I'll talk to you more about it later [comment] is that [disfmarker] is that there's [disfmarker] there's uh [speaker003:] it'd be great. [speaker005:] the radio stations and television stations already have stuff worked out presumably, uh related to, you know, legal issues and [disfmarker] and permissions and all that. I mean, they already do what they do [disfmarker] do whatever they do. So it's [disfmarker] uh, it's [disfmarker] So it's [disfmarker] so it's another source. So I think it's something we should look into, you know, we'll collect what we collect here hopefully they will collect more at UW also and um [disfmarker] and maybe we have this other source. But yeah I think that it's not unreasonable to aim at getting, you know, significantly in excess of a hundred hours. I mean, that was sort of our goal. The thing was, I was hoping that we could [disfmarker] [@ @] in the [disfmarker] under this controlled situation we could at least collect, you know, thirty to fifty hours. And at the rate we're going we'll get pretty close to that I think this semester. 
And if we continue to collect some next semester, I think we should, uh [disfmarker] [speaker003:] Right. Yeah I was mostly trying to think, OK, if you start a project, within say a month, you know, how much data do you have to work with. And you [disfmarker] you wanna s you wanna sort of fr freeze your [disfmarker] your data for a while so um right now [disfmarker] and we don't have the transcripts back yet from IBM right? [speaker005:] Well, we don't even have it for this f you know, forty five minutes, that was [disfmarker] [speaker003:] Do [disfmarker] Oh, do we now? So um, not complaining, I was just trying to think, you know, what kinds of projects can you do now versus uh six months from now [speaker005:] Yeah. [speaker003:] and they're pretty different, because [speaker007:] Right. [speaker005:] Yeah. So I was thinking right now it's sort of this exploratory stuff where you [disfmarker] you look at the data, you use some primitive measures and get a feeling for what the scatter plots look like, [speaker003:] um [disfmarker] Right. [speaker005:] and [disfmarker] and [disfmarker] and uh [disfmarker] and meanwhile we collect, and it's more like yeah, three months from now, or six months from now you can [disfmarker] you can do a lot of other things. [speaker003:] Right, right. Cuz I'm not actually sure, just logistically that I can spend [disfmarker] you know, I don't wanna charge the time that I have on the project too early, before there's enough data to make good use of the time. And that's [disfmarker] and especially with the student [speaker007:] Right. [speaker003:] uh for instance this guy who seems [disfmarker] [speaker005:] Yeah. [speaker003:] Uh anyway, I shouldn't say too much, but um if someone came that was great and wanted to do some real work and they have to end by the end of this school year in the spring, how much data will I have to work with, with that person. And so it's [disfmarker] [speaker005:] i Yeah, so I would think, exploratory things now. Uh, three months from now [disfmarker] Um, I mean the transcriptions I think are a bit of an unknown cuz we haven't gotten those back yet as far as the timing, but I think as far as the collection, it doesn't seem to me l like, uh, unreasonable to say that uh in January, you know, ro roughly uh [disfmarker] which is roughly three months from now, we should have at least something like, you know, twenty five, thirty hours. [speaker003:] And we just don't know about the transcription part of that, [speaker005:] So that's [disfmarker] [speaker002:] Yeah, we need to [disfmarker] I think that there's a possibility that the transcript will need to be adjusted afterwards, [speaker003:] so. I mean, it [disfmarker] [speaker002:] and uh es especially since these people won't be uh used to dealing with multi channel uh transcriptions. [speaker003:] Right. [speaker005:] Yeah. [speaker002:] So I think that we'll need to adjust some [disfmarker] And also if we wanna add things like um, well, more refined coding of overlaps, then definitely I think we should count on having an extra pass through. I wanted to ask another a a aspect of the data collection. There'd be no reason why a person couldn't get together several uh, you know, friends, and come and argue about a topic if they wanted to, right? [speaker005:] If they really have something they wanna talk about as opposed to something [@ @] [disfmarker] I mean, what we're trying to stay away from was artificial constructions, but I think if it's a real [disfmarker] Why not? Yeah.
[speaker003:] I mean, I'm thinking, politically [disfmarker] [speaker007:] Stage some political debates. [speaker002:] You could do this, [speaker003:] Well yeah, [speaker002:] you know. You could. [speaker003:] or just if you're [disfmarker] if you ha If there are meetings here that happen that we can record even if we don't [pause] um have them do the digits, [comment] or maybe have them do a shorter [pause] digit thing [comment] like if it was, you know, uh, one string of digits, or something, they'd probably be willing to do. [speaker007:] We don't have to do the digits at all if we don't want to. [speaker003:] Then, having the data is very valuable, cuz I think it's um politically better for us to say we have this many hours of audio data, especially with the ITR, if we put in a proposal on it. It'll just look like ICSI's collected a lot more audio data. Um, whether it's transcribed or not um, is another issue, but there's [disfmarker] there are research questions you can answer without the transcriptions, or at least that you can start to answer. [speaker007:] Yep. [speaker002:] It seems like you could hold some meetings. [speaker003:] So. [speaker002:] You know, you and maybe Adam? You [disfmarker] you could [disfmarker] you could maybe hold some additional meetings, if you wanted. [speaker001:] Would it help at all [disfmarker] I mean, we're already talking about sort of two levels of detail in meetings. One is uh um without doing the digits [disfmarker] Or, I guess the full blown one is where you do the digits, and everything, and then talk about doing it without digits, what if we had another level, just to collect data, which is without the headsets and we just did the table mounted stuff. [speaker003:] Need the close talking mikes. [speaker001:] You do, OK. [speaker003:] I mean, absolutely, [speaker005:] Yeah. Yeah. [speaker003:] yeah. I'm really scared [disfmarker] [speaker007:] It seems like it's a big part of this corpus is to have the close talking mikes. [speaker003:] Um or at least, like, me personally? I would [disfmarker] [comment] I [disfmarker] couldn't use that data. [speaker001:] I see, OK. [speaker005:] Yeah. [speaker002:] I agree. [speaker003:] Um. [speaker002:] And Mari also, we had [disfmarker] This came up when she she was here. That's important. [speaker005:] Yeah, [speaker003:] So it's a great idea, [speaker005:] I [disfmarker] I [disfmarker] b By the [disfmarker] by the way, I don't think the transcriptions are actually, in the long run, such a big bottleneck. [speaker003:] and if it were true then I would just do that, but it's not that bad [disfmarker] like the room is not the bottleneck, and we have enough time in the room, it's getting the people to come in and put on the [disfmarker] and get the setup going. [speaker005:] I think the issue is just that we're [disfmarker] we're blazing that path. Right? And [disfmarker] and um [disfmarker] d Do you have any idea when [disfmarker] when uh the [disfmarker] you'll be able to send uh the ten hours to them? [speaker007:] Well, I've been burning two CDs a day, which is about all I can do with the time I have. [speaker005:] Yeah. Yeah. [speaker007:] So it'll be early next week. [speaker005:] Yeah, OK. So early next week we send it to them, and then [disfmarker] then we check with them to see if they've got it and we [disfmarker] we start, you know asking about the timing for it. [speaker007:] Yep.
[speaker005:] So I think once they get it sorted out about how they're gonna do it, which I think they're pretty well along on, cuz they were able to read the files and so on. [speaker007:] Yep. [speaker005:] Right? [speaker007:] Yeah, but [disfmarker] [speaker005:] Well [disfmarker] [speaker007:] Yeah, who knows where they are. [speaker001:] Have they ever responded to you? [speaker007:] Nope. [speaker005:] Yeah, but [disfmarker] You know, so they [disfmarker] they [disfmarker] they have [disfmarker] you know, they're volunteering their time and they have a lot of other things to do, [speaker003:] What if [disfmarker] [speaker007:] Yeah, you [disfmarker] we can't complain. [speaker005:] right? But they [disfmarker] But at any rate, they'll [disfmarker] I [disfmarker] I think once they get that sorted out, they're [disfmarker] they're making cassettes there, then they're handing it to someone who they [disfmarker] who's [disfmarker] who is doing it, and uh I think it's not going to be [disfmarker] I don't think it's going to be that much more of a deal for them to do thirty hours than to do one hour, I think. It's not going to be thirty [speaker007:] Yep. I think that's probably true. [speaker003:] Really? So it's the amount of [disfmarker] [speaker005:] It's [disfmarker] it's just getting it going. [speaker007:] It's pipeline, pipeline issues. [speaker003:] Right. What about these lunch meetings [disfmarker] [speaker007:] Once the pipeline fills. [speaker003:] I mean, I don't know, if there's any way without too much more overhead, even if we don't ship it right away to IBM even if we just collect it here for a while, [comment] to record you know, two or three more meetings a week, just to have the data, even if they're um not doing the digits, but they do wear the headphones? [speaker005:] But the lunch meetings are pretty much one person getting up and [disfmarker] [speaker003:] No, I meant, um, sorry, the meetings where people eat their lunch downstairs, maybe they don't wanna be recorded, but [disfmarker] [speaker007:] Oh, and we're just chatting? [speaker003:] Just the ch the chatting. [speaker007:] Yeah, we have a lot of those. [speaker003:] I actually [disfmarker] I actually think that's [pause] useful [pause] data, um [pause] the chatting, [speaker007:] Yeah, the problem with that is I would [disfmarker] I think I would feel a little constrained to [disfmarker] You know? Uh, some of the meetings [disfmarker] [speaker003:] but [disfmarker] OK. You don't wanna do it, cuz [disfmarker] OK. [speaker007:] You know, our "soccer ball" meeting? I guess none of you were there for our soccer ball meeting. [speaker003:] Alright. Alright, [comment] so I'll just throw it out there, [speaker007:] That was hilarious. [speaker003:] if anyone knows of one more m or two more wee meetings per week that happen at ICSI, um that we could record, I think it would be worth it. [speaker005:] Yeah. Well, we should also check with Mari again, because they [disfmarker] because they were really intending, you know, maybe just didn't happen, but they were really intending to be duplicating this in some level. So then that would double [pause] what we had. Uh. And there's a lot of different meetings at UW uh [disfmarker] I mean really m a lot more [comment] than we have here right cuz we're not right on campus, [speaker007:] Right. [speaker005:] so. [speaker001:] Is the uh, notion of recording any of Chuck's meetings dead in the water, or is that still a possibility?
[speaker005:] Uh, [vocalsound] they seem to have some problems with it. We can [disfmarker] we can talk about that later. Um, but, again, Jerry is [disfmarker] Jerry's open [disfmarker] So I mean, we have two speech meetings, one uh network meeting, uh Jerry was open to it but I [disfmarker] I s One of the things that I think is a little [disfmarker] a little bit of a limitation there is, I think, when the people are not involved uh in our work, we probably can't do it every week. You know? I [disfmarker] I [disfmarker] I [disfmarker] I think that [disfmarker] that people are gonna feel uh [disfmarker] are gonna feel a little bit constrained. Now, it might get a little better if we don't have them do the digits all the time. [speaker007:] Yep. [speaker005:] And the [disfmarker] then [disfmarker] so then they can just really sort of try to [disfmarker] put the mikes on and then just charge in and [disfmarker] [speaker003:] What if we give people [disfmarker] you know, we cater a lunch in exchange for them having their meeting here or something? [speaker002:] Well, you know, I [disfmarker] I do think eating while you're doing a meeting is going to be increasing the noise. [speaker003:] OK. Alright, alright, alright. [speaker002:] But I had another question, which is um, you know, in principle, w um, I know that you don't want artificial topics, but um it does seem to me that we might be able to get subjects from campus to come down and do something that wouldn't be too artificial. I mean, we could [disfmarker] political discussions, or [disfmarker] or something or other, [speaker003:] No, definitely. [speaker002:] and i you know, people who are [disfmarker] Because, you know, there's also this constraint. We d it's like, you know, the [disfmarker] the [disfmarker] uh goldibears [disfmarker] goldi goldilocks, it's like you don't want meetings that are too large, but you don't want meetings that are too small. And um [disfmarker] a and it just seems like maybe we could exploit the subj human subject p p pool, in the positive sense of the word. [speaker001:] Well, even [disfmarker] I mean, coming down from campus is sort of a big thing, but what about [speaker002:] We could pay subjects. [speaker001:] or what about people in the [disfmarker] in the building? [speaker003:] Yeah, I was thinking, there's all these other peo [speaker001:] I mean, there's the State of California downstairs, and [disfmarker] [speaker003:] Yeah. I mean [disfmarker] [speaker007:] I just really doubt that uh any of the State of California meetings would be recordable and then releasable to the general public. [speaker002:] Yeah. [speaker001:] Oh. [speaker003:] Mm hmm. [speaker007:] So I [disfmarker] I mean I talked with some people at the Haas Business School who are i who are interested in speech recognition [speaker003:] Alright, well. [speaker007:] and, they sort of hummed and hawed and said "well maybe we could have meetings down here", but then I got email from them that said "no, we decided we're not really interested and we don't wanna come down and hold meetings." So, I think it's gonna be a problem to get people regularly. [speaker001:] What about Joachim, maybe he can [disfmarker] [speaker005:] But [disfmarker] but we c But I think, you know, we get some scattered things from this and that. And I [disfmarker] I d I do think that maybe we can get somewhere with the [disfmarker] with the radio. Uh i I have better contacts in radio than in television, but [disfmarker] [speaker003:] Mm hmm.
[speaker001:] You could get a lot of lively discussions from those radio ones. [speaker007:] Yep. [speaker003:] Well, and they're already [disfmarker] they're [disfmarker] these things are already recorded, [speaker005:] Yeah. [speaker003:] we don't have to ask them to [disfmarker] even [disfmarker] and I'm not sure wh how they record it, but they must record from individual [disfmarker] [speaker005:] n Well [disfmarker] No, I'm not talking about ones that are already recorded. I'm talking about new ones because [disfmarker] because [disfmarker] because we would be asking them to do something different. [speaker003:] Why [disfmarker] why not? Well, we can find out. I know for instance Mark Liberman was interested uh in [disfmarker] in LDC getting [pause] data, [speaker005:] Right, that's the found data idea. [speaker003:] uh, and [disfmarker] [speaker005:] But what I'm saying is uh if I talk to people that I know who do these th who produce these things we could ask them if they could record an extra channel, let's say, of a distant mike. [speaker003:] Yeah. Mm hmm. [speaker005:] And u I think routinely they would not do this. So, since I'm interested in the distant mike stuff, I wanna make sure that there is at least that somewhere [speaker003:] Right. Great. OK. [speaker005:] and uh [disfmarker] But if we ask them to do that they might be intrigued enough by the idea that they uh might be e e willing to [disfmarker] the [disfmarker] I might be able to talk them into it. [speaker003:] Mm hmm. [speaker007:] Um. We're getting towards the end of our disk space, so we should think about trying to wrap up here. [speaker005:] OK. Well I don't [disfmarker] why don't we [disfmarker] why d u why don't we uh uh turn them [disfmarker] turn [speaker003:] That's a good way to end a meeting. [speaker007:] OK, leave [disfmarker] leave them on for a moment until I turn this off, cuz that's when it crashed last time. [speaker002:] Oh. That's good to know. [speaker005:] Turning off the microphone made it crash. Well [disfmarker] [speaker002:] That's good to know. [speaker005:] OK. [speaker003:] OK. So uh, he's not here, [speaker004:] So. [speaker003:] so you get to [disfmarker] [speaker004:] Yeah, I will try to explain the thing that I did this [disfmarker] this week [disfmarker] during this week. [speaker003:] Yeah. [speaker004:] Well eh you know that I work [disfmarker] I begin to work with a new feature to detect voice unvoice. [speaker005:] Mm hmm. [speaker004:] What I trying two MLP to [disfmarker] to the [disfmarker] with this new feature and the fifteen feature uh from the eh bus base system [speaker005:] The [disfmarker] the mel cepstrum? [speaker004:] No, satly the mes the Mel Cepstrum, the new base system [disfmarker] the new base system. [speaker005:] Oh the [disfmarker] OK, [speaker004:] Yeah, we [disfmarker] [speaker005:] the Aurora system. [speaker004:] yeah the Aurora system with the new filter, VAD or something like that. [speaker005:] OK. [speaker004:] And I'm trying two MLP, one one that only have t three output, voice, unvoice, and silence, [speaker003:] Mm hmm. [speaker004:] and other one that have fifty six output. The probabilities of the allophone. And I tried to do some experiment of recognition with that and only have result with [disfmarker] with the MLP with the three output. And I put together the fifteen features and the three MLP output. And, well, the result are li a little bit better, but more or less similar. 
[speaker003:] Uh, I [disfmarker] I'm [disfmarker] I'm slightly confused. [speaker005:] Hmm. [speaker003:] What [disfmarker] what feeds the uh [disfmarker] the three output net? [speaker004:] Voice, unvoice, and si [speaker003:] No no, what feeds it? What features does it see? [speaker004:] The feature [disfmarker] the input? The inputs are the fifteen [disfmarker] the fifteen uh bases feature. [speaker003:] Uh huh. [speaker004:] the [disfmarker] with the new code. And the other three features are R, the variance of the difference between the two spectrum, [speaker003:] Uh huh. [speaker004:] the variance of the auto correlation function, except the [disfmarker] the first point, because the highest value is R zero [speaker003:] Mm hmm. Mm hmm. Mm hmm. Mm hmm. [speaker004:] and also R zero, the first coefficient of the auto correlation function. That is like the energy with these three feature, [speaker003:] Right. [speaker004:] also these three feature. [speaker003:] You wouldn't do like R one over R zero or something like that? I mean usually for voiced unvoiced you'd do [disfmarker] yeah, you'd do something [disfmarker] you'd do energy [speaker004:] Yeah. [speaker003:] but then you have something like spectral slope, which is you get like R one ov over R zero or something like that. [speaker004:] Uh yeah. [speaker005:] What are the R's? I'm sorry I missed it. [speaker003:] R correlations. [speaker004:] No, [speaker005:] Oh. [speaker004:] R c No. Auto correlation? Yes, yes, the variance of the auto correlation function that uses that [speaker003:] Ye Well that's the variance, but if you just say "what is [disfmarker]" I mean, to first order, um yeah one of the differences between voiced, unvoiced and silence is energy. [speaker004:] Yeah, [speaker003:] Another one is [disfmarker] but the other one is the spectral shape. [speaker004:] I I'll [disfmarker] The spectral shape, yeah. [speaker003:] Yeah, and so R one over R zero is what you typically use for that. [speaker004:] No, I don't use that [disfmarker] I can't use [disfmarker] [speaker003:] No, I'm saying that's what people us typically use. [speaker004:] Mmm. [speaker003:] See, because it [disfmarker] because this is [disfmarker] this is just like a single number to tell you um "does the spectrum look like that or does it look like that". [speaker004:] Mm hmm. [speaker001:] Oh. R [disfmarker] R [disfmarker] R zero. [speaker003:] Right? [speaker004:] Mm hmm. [speaker003:] So if it's [disfmarker] if it's um [disfmarker] if it's low energy uh but the [disfmarker] but the spectrum looks like that or like that, it's probably silence. [speaker004:] Mm hmm. [speaker003:] Uh but if it's low energy and the spectrum looks like that, it's probably unvoiced. [speaker004:] Yeah. [speaker003:] So if you just [disfmarker] if you just had to pick two features to determine voiced unvoiced, you'd pick something about the spectrum like uh R one over R zero, um and R zero [speaker004:] Mm hmm, OK. [speaker003:] or i i you know you'd have some other energy measure and like in the old days people did like uh zero crossing counts. [speaker004:] Yeah, yeah. [speaker003:] Right. S S [speaker004:] Well, I can also th use this. [speaker003:] Yeah. Um, [speaker004:] Bec because the result are a little bit better but we have in a point that everything is more or less the similar [disfmarker] more or less similar. [speaker003:] Yeah. But um [speaker004:] It's not quite better.
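For concreteness, a sketch of the per-frame measures just discussed: R zero as the energy term, R one over R zero as a one-number spectral-slope cue, and the old-style zero-crossing count. The windowing details are illustrative, not taken from the actual front end under discussion.

```python
import numpy as np

def voicing_cues(frame):
    """Classic voiced/unvoiced/silence cues for one windowed frame.

    Returns (r0, r1_over_r0, zcr):
      r0         -- autocorrelation at lag 0, i.e. the frame energy
      r1_over_r0 -- normalized lag-1 autocorrelation; near +1 for the
                    low-pass-looking spectra of voiced speech, low or
                    negative for unvoiced frication
      zcr        -- zero crossings per sample; high for unvoiced
    """
    frame = frame - frame.mean()
    r0 = float(frame @ frame)
    r1 = float(frame[:-1] @ frame[1:])
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
    return r0, (r1 / r0 if r0 > 0.0 else 0.0), zcr
```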
[speaker003:] Right, but it seemed to me that what you were what you were getting at before was that there is something about the difference between the original signal or the original FFT and with the filter which is what [disfmarker] and the variance was one take uh on it. [speaker004:] Yeah, I used this too. [speaker003:] Right. But it [disfmarker] it could be something else. Suppose you didn't have anything like that. Then in that case, if you have two nets, alright, and this one has three outputs, and this one has f [speaker004:] Mm hmm. [speaker003:] whatever, fifty six, or something, [speaker004:] Mm hmm. [speaker003:] if you were to sum up the probabilities for the voiced and for the unvoiced and for the silence here, we've found in the past you'll do better at voiced unvoiced silence than you do with this one. So just having the three output thing doesn't [disfmarker] doesn't really buy you anything. [speaker004:] Yeah. [speaker003:] The issue is what you feed it. [speaker004:] Yeah, I have [disfmarker] yeah. [speaker003:] So uh [speaker005:] So you're saying take the features that go into the voiced unvoiced silence net and feed those into the other one, as additional inputs, rather than having a separate [disfmarker] [speaker004:] No [disfmarker] [speaker003:] w W well that's another way. [speaker004:] Yeah. [speaker003:] That wasn't what I was saying but yeah that's certainly another thing to do. No I was just trying to say if you b if you bring this into the picture over this, what more does it buy you? [speaker005:] Mmm. [speaker003:] And what I was saying is that the only thing I think that it buys you is um based on whether you feed it something different. And something different in some fundamental way. And so the kind of thing that [disfmarker] that she was talking about before, was looking at something uh ab um [disfmarker] something uh about the difference between the [disfmarker] the uh um log FFT uh log power uh and the log magnitude uh F F spectrum uh and the um uh filter bank. [speaker004:] Yeah. [speaker003:] And so the filter bank is chosen in fact to sort of integrate out the effects of pitch and she's saying you know trying [disfmarker] So the particular measure that she chose was the variance of this m of this difference, [speaker004:] Mm hmm. [speaker003:] but that might not be the right number. [speaker004:] Maybe. [speaker003:] Right? I mean maybe there's something about the variance that's [disfmarker] that's not enough or maybe there's something else that [disfmarker] that one could use, but I think that, for me, the thing that [disfmarker] that struck me was that uh you wanna get something back here, so here's [disfmarker] here's an idea. uh What about if you skip all the [disfmarker] all the really clever things, and just fed the log magnitude spectrum into this? [speaker004:] Ah [disfmarker] I'm sorry. [speaker003:] This is f You have the log magnitude spectrum, and you were looking at that and the difference between the filter bank and [disfmarker] and c c computing the variance. [speaker004:] Yeah. Mm hmm. Mm hmm. [speaker003:] That's a clever thing to do. What if you stopped being clever? [speaker004:] Mm hmm. [speaker003:] And you just took this thing in here because it's a neural net and neural nets are wonderful and figure out what they can [disfmarker] what they most need from things, and I mean that's what they're good at. [speaker004:] Yeah.
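A sketch of the check being proposed: collapse the fifty-six phone posteriors from the bigger net into voiced, unvoiced, and silence by summing, then score the collapsed posteriors at the frame level against the three-output net. The phone-to-class table here is a placeholder to be filled in from the actual phone set.

```python
import numpy as np

# Placeholder: class index per phone (0=voiced, 1=unvoiced, 2=silence);
# the real table would be filled in from the 56-phone set.
PHONE_TO_CLASS = np.zeros(56, dtype=int)

def collapse_to_vus(phone_post, phone_to_class=PHONE_TO_CLASS):
    """Sum per-frame phone posteriors into three class posteriors.

    phone_post -- (n_frames, 56) array of MLP outputs; since each row
    sums to one, the collapsed rows sum to one as well.
    """
    out = np.zeros((phone_post.shape[0], 3))
    for phone, cls in enumerate(phone_to_class):
        out[:, cls] += phone_post[:, phone]
    return out

# Frame accuracy against reference voiced/unvoiced/silence labels:
#   acc = np.mean(collapse_to_vus(post).argmax(axis=1) == labels)
```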
[speaker003:] So I mean you're [disfmarker] you're [disfmarker] you're trying to be clever and say what's the statistic that should [disfmarker] we should get about this difference but uh in fact, you know maybe just feeding this in or [disfmarker] or feeding both of them in [speaker005:] Hmm. [speaker003:] you know, another way, saying let it figure out what's the [disfmarker] what is the interaction, especially if you do this over multiple frames? [speaker004:] Mm hmm. [speaker003:] Then you have this over time, and [disfmarker] and both kinds of measures and uh you might get uh something better. [speaker004:] Mm hmm. [speaker003:] Um. [speaker005:] So [disfmarker] so don't uh [disfmarker] don't do the division, but let the net have everything. [speaker003:] That's another thing you could do yeah. Yeah. [speaker004:] Yeah. [speaker003:] Um. I mean, it seems to me, if you have exactly the right thing then it's better to do it without the net because otherwise you're asking the net to learn this [disfmarker] you know, say if you wanted to learn how to do multiplication. [speaker005:] Mm hmm. [speaker003:] I mean you could feed it a bunch of s you could feed two numbers that you wanted to multiply into a net and have a bunch of nonlinearities in the middle and train it to get the product of the output and it would work. But, it's kind of crazy, cuz we know how to multiply and you [disfmarker] you'd be you know much lower error usually [vocalsound] if you just multiplied it out. But suppose you don't really know what the right thing is. And that's what these sort of dumb machine learning methods are good at. So. Um. Anyway. It's just a thought. [speaker005:] How long does it take, Carmen, to train up one of these nets? [speaker004:] Oh, not too much. [speaker005:] Yeah. [speaker004:] Mmm, one day or less. [speaker005:] Hmm. [speaker003:] Yeah, it's probably worth it. [speaker001:] What are [disfmarker] what are your f uh frame error rates for [disfmarker] for this? [speaker004:] Eh fifty f six uh no, the frame error rate? Fifty six I think. [speaker001:] O [speaker003:] Is that [disfmarker] maybe that's accuracy? [speaker004:] Percent. The accuracy. [speaker001:] Fif fifty six percent accurate for v voice unvoice [speaker004:] Mm hmm. No for, yes f I don't remember for voice unvoice, [speaker001:] Oh, OK. [speaker004:] maybe for the other one. [speaker003:] Yeah, voiced unvoiced hopefully would be a lot better. [speaker001:] OK. [speaker004:] for voiced. I don't reme [speaker001:] Should be in nineties somewhere. [speaker004:] Better. Maybe for voice unvoice. [speaker001:] Right. [speaker004:] This is for the other one. I should [disfmarker] I can't show that. [speaker001:] OK. [speaker004:] But I think that fifty five was for the [disfmarker] when the output are the fifty six phone. [speaker001:] Mm hmm. [speaker004:] That I look in the [disfmarker] with the other [disfmarker] nnn the other MLP that we have are more or less the same number. Silence will be better but more or less the same. [speaker003:] I think at the frame level for fifty six that was the kind of number we were getting for [disfmarker] for uh um reduced bandwidth uh stuff. [speaker004:] I think that [disfmarker] I [disfmarker] I [disfmarker] I think that for the other one, for the three output, is sixty sixty two, sixty three more or less. [speaker001:] Mm hmm. [speaker003:] That's all? [speaker004:] It's [disfmarker] Yeah. [speaker003:] That's pretty bad. [speaker004:] Yeah, because it's noise also. [speaker003:] Aha!
[speaker001:] Oh yeah. [speaker004:] And we have [speaker003:] Aha! Yeah. [speaker004:] I know. [speaker003:] Yeah. OK. But even i in [disfmarker] Oh yeah, in training. Still. Uh. Well actually, so this is a test that you should do then. Um, if you're getting fifty six percent over here, uh that's in noise also, right? [speaker004:] Yeah, yeah, yeah. [speaker003:] Oh OK. If you're getting fifty six here, try adding together the probabilities of all of the voiced phones here and all of the unvoiced phones [speaker004:] will be [disfmarker] [speaker003:] and see what you get then. [speaker004:] Yeah. [speaker003:] I bet you get better than sixty three. [speaker004:] Well I don't know, but [disfmarker] I th I [disfmarker] I think that we [disfmarker] I have the result more or less. Maybe. I don't know. I don't [disfmarker] I'm not sure but I remember [@ @] that I can't show that. [speaker003:] OK, but that's a [disfmarker] That is a [disfmarker] a good checkpoint, you should do that anyway, [speaker004:] Yeah. [speaker003:] OK? Given this [disfmarker] this uh regular old net that's just for choosing for other purposes, uh add up the probabilities of the different subclasses and see [disfmarker] see how well you do. Uh and that [disfmarker] you know anything that you do over here should be at least as good as that. [speaker004:] Mm hmm. [speaker003:] OK. [speaker004:] I will do that. [speaker005:] The targets for the neural net, uh, they come from forced alignments? [speaker004:] But [disfmarker] Uh, [comment] no. [speaker001:] TIMIT canonical ma mappings. [speaker004:] TIMIT. [speaker005:] Ah! [speaker003:] Oh. So, this is trained on TIMIT. [speaker005:] OK. [speaker004:] Yeah. [speaker001:] Yeah, noisy TIMIT. [speaker003:] OK. [speaker004:] Yeah this for TIMIT. [speaker003:] But noisy TIMIT? [speaker004:] Noisy TIMIT. [speaker001:] Right. [speaker004:] We have noisy TIMIT with the noise of the [disfmarker] the TI digits. And now we have another noisy TIMIT also with the noise of uh Italian database. [speaker003:] I see. Yeah. Well there's gonna be [disfmarker] it looks like there's gonna be a noisy uh [disfmarker] some large vocabulary noisy stuff too. Somebody's preparing. [speaker005:] Really? [speaker003:] Yeah. I forget what it'll be, Resource Management, Wall Street Journal, something. Some [disfmarker] some read task actually, that they're [disfmarker] preparing. [speaker001:] Hmm! [speaker005:] For what [disfmarker] For Aurora? [speaker003:] Yeah. [speaker005:] Oh! [speaker003:] Yeah, so the uh [disfmarker] Uh, the issue is whether people make a decision now based on what they've already seen, or they make it later. And one of the arguments for making it later is let's make sure that whatever techniques that we're using work for something more than [disfmarker] than connected digits. [speaker005:] Hmm. [speaker003:] So. [speaker005:] When are they planning [disfmarker] When would they do that? [speaker003:] Mmm, I think late [disfmarker] uh I think in the summer sometime. [speaker005:] Hmm. [speaker003:] So. OK, thanks. [speaker004:] This is the work that I did during this date [speaker003:] Uh huh. [speaker004:] and also mmm I [disfmarker] H Hynek last week say that if I have time I can to begin to [disfmarker] to study well seriously the France Telecom proposal [speaker003:] Mm hmm. [speaker004:] to look at the code and something like that to know exactly what they are doing because maybe that we can have some ideas [speaker003:] Mm hmm.
[speaker004:] but not only to read the proposal. Look insi look i carefully what they are doing with the program [@ @] and I begin to [disfmarker] to work also in that. But the first thing that I don't understand is that they are using R the uh log energy that this quite [disfmarker] I don't know why they have some constant in the expression of the log energy. I don't know what that means. [speaker005:] They have a constant in there, you said? [speaker004:] Yeah. [speaker003:] Oh, at the front it says uh "log energy is equal to the rounded version of sixteen over the log of two" [speaker004:] This [disfmarker] Yeah. [speaker003:] Uh. [speaker004:] Then maybe I can understand. [speaker003:] uh times the [disfmarker] Well, this is natural log, and maybe it has something to do with the fact that this is [disfmarker] [speaker005:] Is that some kind of base conversion, [speaker003:] I [disfmarker] I have no idea. [speaker005:] or [disfmarker]? [speaker003:] Yeah, that's what I was thinking, but [disfmarker] but um, then there's the sixty four, Uh, [vocalsound] I don't know. [speaker004:] Because maybe they're [disfmarker] the threshold that they are using on the basis of this value [disfmarker] [speaker005:] Experimental results. [speaker001:] Mc McDonald's constant. [speaker004:] I don't know exactly, because well th I thought maybe they have a meaning. But I don't know what is the meaning of take exactly this value. [speaker003:] Yeah, it's pretty funny looking. [speaker005:] So they're taking the number inside the log and raising it to sixteen over log base two. [speaker003:] I don't know. Yeah, I [disfmarker] um Right. Sixteen over [comment] two. [speaker005:] Does it have to do with those sixty fours, or [disfmarker]? [speaker003:] Um. If we ignore the sixteen, the natural log of t one over the natural log of two times the natu I don't know. Well, maybe somebody'll think of something, but this is uh [disfmarker] It may just be that they [disfmarker] they want to have [disfmarker] for very small energies, they want to have some kind of a [disfmarker] [speaker004:] Yeah, the e The effect I don't [disfmarker] [@ @] I can understand the effect of this, no? because it's to [disfmarker] to do something like that. No? [speaker003:] Well, it says, since you're taking a natural log, it says that when [disfmarker] when you get down to essentially zero energy, this is gonna be the natural log of one, which is zero. [speaker004:] Mm hmm. [speaker003:] So it'll go down to uh to [nonvocalsound] the natural log being [disfmarker] So the lowest value for this would be zero. So y you're restricted to being positive. And this sort of smooths it for very small energies. Uh, why they chose sixty four and something else, that was probably just experimental. [speaker004:] Yeah. [speaker003:] And the [disfmarker] the [disfmarker] the constant in front of it, I have no idea. um [speaker004:] Well. I [disfmarker] I will look to try if I move this parameter in their code what happens, maybe everything is [disfmarker] Maybe their threshold are on basis of this. [speaker003:] uh [disfmarker] I mean [disfmarker] it [disfmarker] [vocalsound] they [disfmarker] they probably have some fi particular s fixed point arithmetic that they're using, [speaker004:] I don't know. [speaker005:] Yeah, I was just gonna say maybe it has something to do with hardware, [speaker003:] and then it just [disfmarker] [speaker005:] something they were doing. [speaker003:] Yeah.
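One reading of the expression being puzzled over, consistent with the interpretation reached here (zero at zero energy, positive otherwise, smoothed for very small energies), would be the following; this is a reconstruction from the spoken description, not a quote of the proposal, and the placement of the sixty-four in particular is uncertain.

```latex
\mathrm{logE} \;=\; \operatorname{round}\!\left(\frac{16}{\ln 2}\,
    \ln\!\Bigl(1 + \tfrac{1}{64}\,\textstyle\sum_{n} s(n)^{2}\Bigr)\right),
\qquad\text{where}\quad \frac{16}{\ln 2}\,\ln x \;=\; 16\,\log_{2} x .
```

Under this reading the leading constant just turns a natural log into a base-two log scaled by sixteen, i.e. an integer carrying four fractional bits of log2, which fits the fixed-point-hardware guess; and zero input energy gives ln(1) = 0, matching the point that the lowest value would be zero.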
Yeah, I mean that [disfmarker] they're s probably working with fixed point or integer or something. I think you're supposed to on this stuff anyway, and [disfmarker] and so maybe that puts it in the right realm somewhere. [speaker005:] Well it just, yeah, puts it in the right range, or [disfmarker] [speaker003:] Yeah. I think, given at the level you're doing things in floating point on the computer, I don't think it matters, would be my guess, [speaker004:] Mm hmm. [speaker003:] but. [speaker004:] I [disfmarker] this more or less anything [speaker003:] Yeah. OK, and wh when did Stephane take off? He took off [disfmarker] [speaker004:] I think that Stephane will arrive today or tomorrow. [speaker003:] Oh, he was gone these first few days, and then he's here for a couple days before he goes to Salt Lake City. [speaker004:] Mm hmm. [speaker003:] OK. [speaker004:] He's [disfmarker] I think that he is in Las Vegas or something like that. [speaker003:] Yeah. Yeah. So he's [disfmarker] he's going to ICASSP which is good. I [disfmarker] I don't know if there are many people who are going to ICASSP [speaker004:] Yeah. [speaker003:] so [disfmarker] so I thought, make sure somebody go. [speaker004:] Yeah. [speaker005:] Do [disfmarker] have [disfmarker] Have people sort of stopped going to ICASSP in recent years? [speaker003:] Um, people are less consistent about going to ICASSP and I think it's still [disfmarker] it's still a reasonable forum for students to [disfmarker] to present things. Uh, it's [disfmarker] I think for engineering students of any kind, I think it's [disfmarker] it's if you haven't been there much, it's good to go to, uh to get a feel for things, a range of things, not just speech. Uh. But I think for [disfmarker] for sort of dyed in the wool speech people, um I think that ICSLP and Eurospeech are much more targeted. [speaker005:] Mm hmm. [speaker003:] Uh. And then there's these other meetings, like HLT and [disfmarker] and uh ASRU [disfmarker] so there's [disfmarker] there's actually plenty of meetings that are really relevant to [disfmarker] to uh computational uh speech processing of one sort or another. [speaker005:] Mm hmm. [speaker003:] Um. So. I mean, I mostly just ignored it because I was too busy and [vocalsound] didn't get to it. So uh Wanna talk a little bit about what we were talking about this morning? [speaker001:] Oh! [speaker003:] Just briefly, or [pause] Or anything else? [speaker001:] um [pause] uh [pause] Yeah. So. I [disfmarker] I guess some of the progress, I [disfmarker] I've been getting a [disfmarker] getting my committee members for the quals. And um so far I have Morgan and Hynek, [vocalsound] Mike Jordan, and I asked John Ohala and he agreed. [speaker005:] Cool. [speaker001:] Yeah. Yeah. So I'm [disfmarker] I [disfmarker] I just need to ask um Malek. One more. Um. Tsk. Then uh I talked a little bit about [vocalsound] um continuing with these dynamic ev um acoustic events, and um [vocalsound] [vocalsound] we're [disfmarker] we're [disfmarker] we're [vocalsound] thinking about a way to test the completeness of a [disfmarker] a set of um dynamic uh events. Uh, completeness in the [disfmarker] in the sense that [vocalsound] um if we [disfmarker] if we pick these X number of acoustic events, [vocalsound] do they provide sufficient coverage [vocalsound] for the phones that we're trying to recognize [vocalsound] or [disfmarker] or the f the words that we're gonna try to recognize later on. 
And so Morgan and I were uh discussing [vocalsound] um s uh s a form of a cheating experiment [vocalsound] where we get [disfmarker] [vocalsound] um we have uh [vocalsound] um a chosen set of features, or acoustic events, and we train up a hybrid [vocalsound] um system to do phone recognition on TIMIT. So i i the idea is if we get good phone recognition results, [vocalsound] using um these set of acoustic events, [vocalsound] then [vocalsound] um that [disfmarker] that says that these acoustic events are g sufficient to cover [vocalsound] a set of phones, at least found in TIMIT. Um so i it would be a [disfmarker] [vocalsound] a measure of "are we on the right track with [disfmarker] with the [disfmarker] the choices of our acoustic events". Um, [vocalsound] So that's going on. And [vocalsound] also, just uh working on my [vocalsound] uh final project for Jordan's class, uh which is [disfmarker] [speaker003:] Actually, let me [disfmarker] Hold that thought. [speaker001:] Yeah. [speaker003:] Let me back up while we're still on it. [speaker001:] OK, sure. [speaker003:] The [disfmarker] the other thing I was suggesting, though, is that given that you're talking about binary features, uh, maybe the first thing to do is just to count and uh count co occurrences and get probabilities for a discrete HMM cuz that'd be pretty simple because it's just [disfmarker] Say, if you had ten [disfmarker] ten events, uh that you were counting, uh each frame would only have a thousand possible values for these ten bits, and uh so you could make a table that would [disfmarker] say, if you had thirty nine phone categories, that would be a thousand by thirty nine, and just count the co occurrences and divide them by the [disfmarker] the uh [disfmarker] uh uh occ uh count the co occurrences between the event and the phone and divide them by the number of occurrences of the phone, and that would give you the likelihood of the [disfmarker] of the event given the phone. And um then just use that in a very simple HMM and uh you could uh do phone recognition then and uh wouldn't have any of the issues of the uh training of the net or [disfmarker] I mean, it'd be on the simple side, but [speaker005:] Mm hmm. [speaker003:] uh um you know, if [disfmarker] uh uh the example I was giving was that if [disfmarker] if you had um onset of voicing and [disfmarker] and end of voicing as being two kinds of events, then if you had those a all marked correctly, and you counted co occurrences, you should get it completely right. [speaker005:] Mm hmm. [speaker003:] So. um [disfmarker] But you'd get all the other distinctions, you know, randomly wrong. I mean there'd be nothing to tell you that. So um [vocalsound] uh If you just do this by counting, then you should be able to find out in a pretty straightforward way whether you have a sufficient uh set of events to [disfmarker] to do the kind of level of [disfmarker] [vocalsound] of uh classification of phones that you'd like. So that was [disfmarker] that was the idea. And then the other thing that we were discussing was [disfmarker] was um [vocalsound] OK, how do you get the [disfmarker] your training data. [speaker005:] Mm hmm. [speaker003:] Cuz uh the [vocalsound] Switchboard transcription project uh uh you know was half a dozen people, or so working off and on over a couple years, and uh similar [disfmarker] [vocalsound] similar amount of data [vocalsound] to what you're talking about with TIMIT training. 
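A sketch of the counting experiment described above: pack each frame's binary events into one discrete symbol (ten events give 2^10 = 1024 values, the "thousand"), build the roughly thousand-by-thirty-nine count table, and normalize by the phone counts to get the likelihood of each event pattern given the phone. Array shapes and names are illustrative.

```python
import numpy as np

def event_likelihoods(events, phones, n_phones=39):
    """Count-based emission estimates for the simple discrete HMM.

    events -- (n_frames, n_events) 0/1 array of per-frame events
    phones -- (n_frames,) integer phone labels in [0, n_phones)
    Returns a (2**n_events, n_phones) table whose (s, p) entry is
    P(event pattern s | phone p): co-occurrence counts divided by
    the number of occurrences of the phone.
    """
    n_events = events.shape[1]
    symbols = events.astype(int) @ (1 << np.arange(n_events))  # pack bits
    table = np.zeros((2 ** n_events, n_phones))
    for s, p in zip(symbols, phones):
        table[s, p] += 1
    phone_counts = np.maximum(table.sum(axis=0, keepdims=True), 1)
    return table / phone_counts
```

These likelihoods can then drive a plain discrete HMM for phone recognition, with none of the issues of training a net.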
So, it seems to me that the only reasonable starting point is uh to automatically translate the uh current TIMIT markings into the markings you want. And uh [vocalsound] it won't have the kind of characteristic that you'd like, of catching funny kind of things that maybe aren't there from these automatic markings, [speaker005:] Mm hmm. [speaker003:] but [disfmarker] but uh it's uh [disfmarker] [speaker005:] It's probably a good place to start. [speaker003:] Yeah. [speaker005:] Yeah. [speaker003:] Yeah and a short [disfmarker] short amount of time, just to [disfmarker] again, just to see if that information is sufficient to uh determine the phones. [speaker005:] Mm hmm. Hmm. [speaker003:] So. [speaker005:] Yeah, you could even then [disfmarker] to [disfmarker] to get an idea about how different it is, you could maybe take some subset and you know, go through a few sentences, mark them by hand and then see how different it is from you know, the canonical ones, [speaker003:] Right. [speaker005:] just to get an idea [disfmarker] a rough idea of h if it really even makes a difference. [speaker003:] You can get a little feeling for it that way, [speaker005:] Yeah. [speaker003:] yeah that is probably right. I mean uh my [disfmarker] my guess would be that this is [disfmarker] since TIMIT's read speech that this would be less of a big deal, [speaker005:] Mm hmm. [speaker003:] if you went and looked at spontaneous speech it'd be more [disfmarker] more of one. [speaker005:] Right. Right. [speaker003:] And the other thing would be, say, if you had these ten events, you'd wanna see, well what if you took two events or four events or ten events or t and you know, and [disfmarker] and hopefully there should be some point at which [vocalsound] having more information doesn't tell you really all that much more about what the phones are. [speaker005:] Mm hmm. You could define other events as being sequences of these events too. [speaker003:] Uh, you could, but the thing is, what he's talking about here is a uh [disfmarker] a translation to a per frame feature vector, so there's no sequence in that, I think. I think it's just a [disfmarker] [speaker005:] Unless you did like a second pass over it or something after you've got your [disfmarker] [speaker003:] Yeah, but we're just talking about something simple here, yeah, to see if [disfmarker] [speaker005:] Yeah. Yeah, yeah. Yeah. I'm adding complexity. [speaker003:] Yeah. Just [disfmarker] You know. The idea is with a [disfmarker] with a very simple statistical structure, could you [disfmarker] could you uh at least verify that you've chosen features that [vocalsound] are sufficient. [speaker005:] Yeah. [speaker003:] OK, and you were saying something [disfmarker] starting to say something else about your [disfmarker] your class project, or [disfmarker]? [speaker001:] Oh. Yeah th Um. [speaker003:] Yeah. [speaker001:] So for my class project I'm [vocalsound] um [vocalsound] [vocalsound] I'm tinkering with uh support vector machines? something that we learned in class, and uh um basically just another method for doing classification. 
And so I'm gonna apply that to [vocalsound] um compare it with the results by um King and Taylor who did [vocalsound] um these um using recurrent neural nets, they recognized [vocalsound] um [vocalsound] a set of phonological features um and made a mapping from the MFCC's to these phonological features, so I'm gonna [vocalsound] do a similar thing with [disfmarker] [vocalsound] with support vector machines and see if [disfmarker] [speaker005:] So what's the advantage of support vector machines? What [disfmarker] [speaker001:] Um. So, support vector machines are [disfmarker] are good with dealing with a less amount of data [speaker005:] Hmm. [speaker001:] and um so if you [disfmarker] if you give it less data it still does a reasonable job [vocalsound] in learning the [disfmarker] the patterns. [speaker005:] Hmm. [speaker001:] Um and [vocalsound] um [speaker003:] I guess it [disfmarker] yeah, they're sort of succinct, and [disfmarker] and they [vocalsound] uh [speaker001:] Yeah. [speaker005:] Does there some kind of a distance metric that they use or how do they [disfmarker] for cla what do they do for classification? [speaker001:] Um. Right. So, [vocalsound] the [disfmarker] the simple idea behind a support vector machine is [vocalsound] um, [vocalsound] you have [disfmarker] you have this feature space, right? [speaker005:] Mm hmm. [speaker001:] and then it finds the optimal separating plane, um between these two different um classes, [speaker005:] Mm hmm. Mm hmm. [speaker001:] and um [vocalsound] and so [vocalsound] um, what it [disfmarker] i at the end of the day, what it actually does is [vocalsound] it picks [vocalsound] those examples of the features that are closest to the separating boundary, and remembers those [speaker005:] Mm hmm. [speaker001:] and [disfmarker] [vocalsound] and uses them to recreate the boundary for the test set. So, given these [vocalsound] um these features, or [disfmarker] or these [disfmarker] these examples, [pause] um, [pause] critical examples, [vocalsound] which they call support f support vectors, [vocalsound] then um [vocalsound] given a new example, [vocalsound] if the new example falls [vocalsound] um away from the boundary in one direction then it's classified as being a part of this particular class [speaker005:] Oh. [speaker001:] and otherwise it's the other class. [speaker005:] So why save the examples? Why not just save what the boundary itself is? [speaker001:] Mm hmm. Um. Hmm. Let's see. Uh. Yeah, that's a good question. I [disfmarker] [speaker003:] That's another way of doing it. [speaker001:] yeah. [speaker005:] Mmm. [speaker003:] Right? So [disfmarker] so it [disfmarker] I mean I [disfmarker] I guess it's [disfmarker] [speaker005:] Sort of an equivalent. [speaker003:] You know, it [disfmarker] it goes back to nearest neighbor [vocalsound] sort of thing, [speaker005:] Mm hmm. [speaker003:] right? Um, i i if [disfmarker] is it eh w When is nearest neighbor good? Well, nearest neighbor good [disfmarker] is good if you have lots and lots of examples. Um but of course if you have lots and lots of examples, then it can take a while to [disfmarker] to use nearest neighbor. There's lots of look ups. So a long time ago people talked about things where you would have uh a condensed nearest neighbor, where you would [disfmarker] you would [disfmarker] you would pick out uh some representative examples which would uh be sufficient to represent [disfmarker] to [disfmarker] to correctly classify everything that came in. [speaker005:] Oh. Mm hmm. 
[speaker003:] I [disfmarker] I think s I think support vector stuff sort of goes back to [disfmarker] [vocalsound] to that kind of thing. [speaker005:] I see. [speaker003:] Um. [speaker005:] So rather than doing nearest neighbor where you compare to every single one, you just pick a few critical ones, and [disfmarker] [speaker003:] Yeah. [speaker005:] Hmm. [speaker003:] And th the You know, um neural net approach uh or Gaussian mixtures for that matter are sort of [disfmarker] fairly brute force kinds of things, where you sort of [disfmarker] [vocalsound] you predefine that there is this big bunch of parameters and then you [disfmarker] you place them as you best can to define the boundaries, and in fact, as you know, [vocalsound] these things do take a lot of parameters and [disfmarker] and uh [vocalsound] if you have uh only a modest amount of data, you have trouble [vocalsound] uh learning them. Um, so I [disfmarker] I guess the idea to this is that it [disfmarker] it is reputed to uh be somewhat better in that regard. [speaker005:] Mm hmm. [speaker001:] Right. I it can be a [disfmarker] a reduced um [vocalsound] parameterization of [disfmarker] of the [disfmarker] the model by just keeping [vocalsound] certain selected examples. [speaker005:] Hmm. [speaker001:] Yeah. So. [speaker003:] But I don't know if people have done sort of careful comparisons of this on large tasks or anything. Maybe [disfmarker] maybe they have. I don't know. [speaker001:] Yeah, I don't know either. [speaker003:] Yeah. [speaker002:] S do you get some kind of number between zero and one at the output? [speaker001:] Actually you don't get a [disfmarker] you don't get a nice number between zero and one. You get [disfmarker] you get either a zero or a one. Um, uh there are [disfmarker] there are pap Well, basically, it's [disfmarker] it's um [vocalsound] you [disfmarker] you get a distance measure at the end of the day, and then that distance measure is [disfmarker] is um [disfmarker] [vocalsound] is translated to a zero or one. Um. [speaker003:] But that's looking at it for [disfmarker] for classification [disfmarker] for binary classification, [speaker001:] That's for classification, [speaker005:] And you get that for each class, you get a zero or a one. [speaker003:] right? [speaker001:] right. Right. [speaker003:] But you have the distances to work with. [speaker001:] You have the distances to work with, [speaker003:] Cuz actually Mississippi State people did use support vector machines for uh uh speech recognition and they were using it to estimate probabilities. [speaker001:] yeah. Yeah. Yeah, they [disfmarker] [vocalsound] they had a [disfmarker] had a way to translate the distances into [disfmarker] into probabilities with the [disfmarker] with the simple [vocalsound] um [vocalsound] uh sigmoidal function. [speaker003:] Yeah, and d did they use sigmoid or a softmax type thing? And didn't they like exponentiate or something [speaker001:] Um [pause] [vocalsound] Yeah, [speaker003:] and then [vocalsound] divide by the sum of them, [speaker001:] there's some [disfmarker] there's like one over one plus the exponential or something like that. [speaker003:] or [disfmarker]? Oh it [disfmarker] i Oh, so it is a sigmoidal. [speaker001:] Yeah. [speaker003:] OK. Alright. [speaker005:] Did the [disfmarker] did they get good results with that? 
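For reference, the "one over one plus the exponential" conversion just recalled is the standard sigmoid squashing of the SVM distance; a minimal version, with placeholder constants that would normally be fit on held-out data:

```python
import math

def distance_to_probability(d, A=-1.0, B=0.0):
    # Sigmoid mapping of a signed SVM distance d to a probability in (0, 1).
    # A and B are hypothetical constants, fit on held-out data in practice.
    return 1.0 / (1.0 + math.exp(A * d + B))
```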
[speaker003:] I mean, they're OK, I [disfmarker] I don't [disfmarker] I don't think they were earth [disfmarker] earth shattering, but I think that [vocalsound] uh this was a couple years ago, [speaker005:] Hmm. [speaker003:] I remember them doing it at some meeting, and [disfmarker] and um I don't think people were very critical because it was interesting just to [disfmarker] to try this and you know, it was the first time they tried it, so [disfmarker] [vocalsound] so the [disfmarker] you know, the numbers were not incredibly good [speaker005:] Hmm. [speaker003:] but there's you know, it was th reasonable. [speaker005:] Mm hmm. [speaker003:] I [disfmarker] I don't remember anymore. I don't even remember what the task was, it [comment] was Broadcast News, or [vocalsound] something. I don't know. [speaker005:] Hmm. [speaker001:] Right. [speaker002:] Uh s So Barry, if you just have zero and ones, how are you doing the speech recognition? [speaker001:] Oh I'm not do I'm not planning on doing speech recognition with it. I'm just doing [vocalsound] detection of phonological features. [speaker002:] Oh. OK. [speaker001:] So uh for example, [vocalsound] this [disfmarker] this uh feature set called the uh sound patterns of English [vocalsound] um is just a bunch of [vocalsound] um [vocalsound] binary valued features. Let's say, is this voicing, or is this not voicing, is this [vocalsound] sonorants, not sonorants, and [vocalsound] stuff like that. [speaker002:] OK. [speaker001:] So. [speaker005:] Did you find any more mistakes in their tables? [speaker001:] Oh! Uh I haven't gone through the entire table, [pause] yet. Yeah, yesterday I brought Chuck [vocalsound] the table and I was like, "wait, this [disfmarker] is [disfmarker] Is the mapping from N to [disfmarker] to this phonological feature called um" coronal ", is [disfmarker] is [disfmarker] should it be [disfmarker] shouldn't it be a one? or should it [disfmarker] should it be you know coronal instead of not coronal as it was labelled in the paper? So I ha haven't hunted down all the [disfmarker] all the mistakes yet, [speaker003:] Uh huh. [speaker001:] but [disfmarker] [speaker003:] But a as I was saying, people do get probabilities from these things, [speaker002:] OK. [speaker003:] and [disfmarker] and uh we were just trying to remember how they do, but people have used it for speech recognition, and they have gotten probabilities. So they have some conversion from these distances to probabilities. [speaker002:] OK. [speaker001:] Right, yeah. [speaker003:] There's [disfmarker] you have [disfmarker] you have the paper, right? The Mississippi State paper? [speaker001:] Mm hmm. Mm hmm. [speaker003:] Yeah, if you're interested y you could look, [speaker002:] And [disfmarker] OK. OK. [speaker001:] Yeah, [speaker003:] yeah. [speaker001:] I can [disfmarker] I can show you [disfmarker] I [disfmarker] yeah, [speaker005:] So in your [disfmarker] in [disfmarker] in the thing that you're doing, uh you have a vector of ones and zeros for each phone? [speaker001:] our [disfmarker] Mm hmm. Uh, is this the class project, or [disfmarker]? [speaker005:] Yeah. [speaker001:] OK. um [speaker005:] Is that what you're [disfmarker] [speaker001:] Right, [comment] Right, right f so for every phone there is [disfmarker] there is a um [disfmarker] a vector of ones and zeros [vocalsound] f uh corresponding to whether it exhibits a particular phonological feature or not. [speaker005:] Mm hmm. Mm hmm. 
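For concreteness, a toy version of the binary feature table under discussion, plus the closest-matching-phone lookup that comes up just below; the feature values are illustrative, not the corrected entries from the paper:

```python
# Sketch of SPE-style binary phonological feature vectors per phone.
FEATURES = ["voicing", "sonorant", "coronal"]
PHONE_TABLE = {
    "n": [1, 1, 1],   # the coronal entry being double-checked above
    "s": [0, 0, 1],
    "m": [1, 1, 0],
}

def nearest_phone(vec):
    """Pick the phone whose feature vector is closest in Hamming
    distance to a detected vector of ones and zeros."""
    return min(PHONE_TABLE,
               key=lambda p: sum(a != b for a, b in zip(PHONE_TABLE[p], vec)))
```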
And so when you do your wh I'm [disfmarker] what is the task for the class project? To come up with the phones? [speaker001:] Um [speaker005:] or to come up with these vectors to see how closely they match the phones, [speaker001:] Oh. Right, um to come up with a mapping from um MFCC's or s some feature set, [vocalsound] um to [vocalsound] uh w to whether there's existence of a particular phonological feature. [speaker005:] or [disfmarker]? Mm hmm. [speaker001:] And um yeah, basically it's to learn a mapping [vocalsound] from [disfmarker] [vocalsound] from the MFCC's to [vocalsound] uh phonological features. Is it [disfmarker] did that answer your question? [speaker005:] I think so. [speaker001:] OK. [speaker005:] I guess [disfmarker] I mean, uh [disfmarker] I'm not sure what you [disfmarker] what you're [disfmarker] what you get out of your system. [speaker001:] C [speaker005:] Do you get out a uh [disfmarker] a vector of these ones and zeros and then try to find the closest matching phoneme to that vector, [speaker001:] Mm hmm. [speaker005:] or [disfmarker]? [speaker001:] Oh. No, no. I'm not [disfmarker] I'm not planning to do any [disfmarker] any phoneme mapping yet. Just [disfmarker] [vocalsound] it's [disfmarker] it's basically [disfmarker] it's [disfmarker] it's really simple, basically a detection [vocalsound] of phonological features. [speaker005:] Uh huh. I see. [speaker001:] Yeah, and um [vocalsound] [vocalsound] cuz the uh [disfmarker] So King and [disfmarker] and Taylor [vocalsound] um did this with uh recurrent neural nets, [speaker005:] Yeah. Mm hmm. [speaker001:] and this i their [disfmarker] their idea was to first find [vocalsound] a mapping from MFCC's to [vocalsound] uh phonological features and then later on, once you have these [vocalsound] phonological features, [vocalsound] then uh map that to phones. [speaker005:] Mm hmm. [speaker001:] So I'm [disfmarker] I'm sort of reproducing phase one of their stuff. [speaker005:] Mmm. So they had one recurrent net for each particular feature? [speaker001:] Right. Right. Right. Right. [speaker005:] I see. I wo did they compare that [disfmarker] I mean, what if you just did phone recognition and did the reverse lookup. [speaker001:] Uh. [speaker005:] So you recognize a phone and whichever phone was recognized, you spit out its vector of ones and zeros. [speaker001:] Mm hmm. [speaker003:] I expect you could do that. [speaker005:] I mean uh [disfmarker] [speaker001:] Uh. [speaker003:] That's probably not what he's going to do on his class project. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] No. [speaker003:] So um have you had a chance to do this um thing we talked about yet with the uh [disfmarker] [speaker001:] Yeah. [speaker003:] um [speaker005:] Insertion penalty? [speaker003:] Uh. No actually I was going a different [disfmarker] That's a good question, too, but I was gonna ask about the [disfmarker] [vocalsound] the um [vocalsound] changes to the data in comparing PLP and mel cepstrum for the SRI system. [speaker005:] Uh. Well what I've been [disfmarker] "Changes to the data", I'm not sure I [disfmarker] [speaker003:] Right. So we talked on the phone about this, that [disfmarker] that there was still a difference of a [disfmarker] of a few percent [speaker005:] Yeah. Right. [speaker003:] and [vocalsound] you told me that there was a difference in how the normalization was done.
And I was asking if you were going to do [disfmarker] [vocalsound] redo it uh for PLP with the normalization done as it had been done for the mel cepstrum. [speaker005:] Mm hmm. Uh right, no I haven't had a chance to do that. What I've been doing is [vocalsound] uh [vocalsound] trying to figure out [disfmarker] [speaker003:] OK. [speaker005:] it just seems to me like there's a um [disfmarker] well it seems like there's a bug, because the difference in performance is [disfmarker] it's not gigantic but it's big enough that it [disfmarker] it seems wrong. [speaker003:] Yeah, I agree, [speaker005:] and [disfmarker] [speaker003:] but I thought that the normalization difference was one of the possibilities, [speaker005:] Yeah, but I don't [disfmarker] I'm not [disfmarker] [speaker003:] right? [speaker005:] Yeah, I guess I don't think that the normalization difference is gonna account for everything. So what I was working on is um just going through and checking the headers of the wavefiles, [speaker003:] OK. [speaker005:] to see if maybe there was a um [disfmarker] a certain type of compression or something that was done that my script wasn't catching. So that for some subset of the training data, uh the [disfmarker] the [disfmarker] the features I was computing were junk. [speaker003:] OK. [speaker005:] Which would you know cause it to perform OK, but uh, you know, the [disfmarker] the models would be all messed up. So I was going through and just double checking that kind of thing first, to see if there was just some kind of obvious bug in the way that I was computing the features. [speaker003:] Mm hmm. I see. OK. [speaker005:] Looking at all the sampling rates to make sure all the sampling rates were what [disfmarker] eight K, what I was assuming they were, [speaker003:] Yeah. [speaker005:] um [disfmarker] [speaker003:] Yeah, that makes sense, to check all that. [speaker005:] Yeah. So I was doing that first, before I did these other things, just to make sure there wasn't something [disfmarker] [speaker003:] Although really, uh uh, a couple three percent uh difference in word error rate uh [comment] could easily come from some difference in normalization, I would think. But [speaker005:] Yeah, and I think, hhh [disfmarker] [comment] I'm trying to remember but I think I recall that Andreas was saying that he was gonna run sort of the reverse experiment. Uh which is to try to emulate the normalization that we did but with the mel cepstral features. Sort of, you know, back up from the system that he had. I thought he said he was gonna [disfmarker] I have to look back through my [disfmarker] my email from him. [speaker003:] Yeah, he's probably off at [disfmarker] at uh his meeting now, [speaker005:] Yeah, he's gone now. Um. [speaker003:] yeah. Yeah. But yeah the [disfmarker] I sh think they should be [vocalsound] roughly equivalent, [speaker005:] But [disfmarker] [speaker003:] um I mean again the Cambridge folk found the PLP actually to be a little better. [speaker005:] Right. [speaker003:] Uh So it's [disfmarker] [vocalsound] um I mean the other thing I wonder about was whether there was something just in the [disfmarker] the bootstrapping of their system which was based on [disfmarker] but maybe not, since they [disfmarker] [speaker005:] Yeah see one thing that's a little bit um [disfmarker] I was looking [disfmarker] I've been studying and going through the logs for the system that um Andreas created.
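The header sanity check described a moment ago might look something like the following; the directory layout and the assumed 8 kHz, 16-bit mono target format are stand-ins:

```python
# Walk the training wavefiles and flag anything whose header disagrees
# with the assumed format (hypothetical paths and target format).
import glob
import wave

for path in glob.glob("train/**/*.wav", recursive=True):
    try:
        with wave.open(path, "rb") as w:
            if (w.getframerate(), w.getsampwidth(), w.getnchannels()) != (8000, 2, 1):
                print("suspect file:", path, w.getparams())
    except wave.Error as err:
        print("unreadable header:", path, err)
```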
And um his uh [disfmarker] the way that the [disfmarker] [vocalsound] [comment] S R I system looks like it works is that it reads the wavefiles directly, uh and does all of the cepstral computation stuff on the fly. [speaker003:] Right. Right. [speaker005:] And, so there's no place where these [disfmarker] where the cepstral files are stored, anywhere that I can go look at and compare to the PLP ones, so whereas with our features, he's actually storing the cepstrum on disk, and he reads those in. [speaker003:] Right. [speaker005:] But it looked like he had to give it [disfmarker] uh even though the cepstrum is already computed, he has to give it uh a front end parameter file. Which talks about the kind of uh com computation that his mel cepstrum thing does, [speaker003:] Uh huh. [speaker005:] so i I [disfmarker] I don't know if that [disfmarker] it probably doesn't mess it up, it probably just ignores it if it determines that it's already in the right format or something but [disfmarker] the [disfmarker] the [disfmarker] the two processes that happen are a little different. So. [speaker003:] Yeah. So anyway, there's stuff there to sort out. [speaker005:] Yeah. Yeah. [speaker003:] So, OK. Let's go back to what you thought I was asking you. [speaker005:] Yeah no and I didn't have a chance to do that. [speaker003:] Ha! Oh! You had the sa same answer anyway. [speaker005:] Yeah. Yeah. I've been um, [disfmarker] I've been working with um Jeremy on his project and then I've been trying to track down this bug in uh the ICSI front end features. [speaker003:] Uh huh. [speaker005:] So one thing that I did notice, yesterday I was studying the um [disfmarker] the uh RASTA code [speaker003:] Uh huh. [speaker005:] and it looks like we don't have any way to um control the frequency range that we use in our analysis. We basically [disfmarker] it looks to me like we do the FFT, um and then we just take all the bins and we use everything. We don't have any set of parameters where we can say you know, "only process from you know a hundred and ten hertz to thirty seven fifty". [speaker003:] Um [disfmarker] [speaker005:] At least I couldn't see any kind of control for that. [speaker003:] Yeah, I don't think it's in there, I think it's in the uh uh uh the filters. So, the F F T is on everything, but the filters um, for instance, ignore the [disfmarker] the lowest bins and the highest bins. And what it does is it [disfmarker] it copies [speaker005:] The [disfmarker] the filters? Which filters? [speaker003:] um The filter bank which is created by integrating over F F T bins. [speaker005:] Mm hmm. [speaker003:] um [speaker005:] When you get the mel [disfmarker] When you go to the mel scale. [speaker003:] Right. Yeah, it's bark scale, and it's [disfmarker] it [disfmarker] it um [disfmarker] it actually copies the uh um [disfmarker] the second filters over to the first. So the first filters are always [disfmarker] and you can s you can specify a different number of [vocalsound] uh features [disfmarker] different number of filters, I think, as I recall. So you can specify a different number of filters, and whatever [vocalsound] um uh you specify, the last ones are gonna be ignored. So that [disfmarker] that's a way that you sort of change what the [disfmarker] what the bandwidth is. Y you can't do it without I think changing the number of filters, [speaker005:] I saw something about uh [disfmarker] that looked like it was doing something like that, but I didn't quite understand it. 
[speaker003:] but [disfmarker] [speaker005:] So maybe [disfmarker] [speaker003:] Yeah, so the idea is that the very lowest frequencies and [disfmarker] and typically the veriest [comment] highest frequencies are kind of junk. [speaker005:] Uh huh. [speaker003:] And so um you just [disfmarker] for continuity you just approximate them by [disfmarker] [vocalsound] by the second to highest and second to lowest. [speaker005:] Mm hmm. [speaker003:] It's just a simple thing we put in. And [disfmarker] and so if you h [speaker005:] But [disfmarker] so the [disfmarker] but that's a fixed uh thing? There's nothing that lets you [disfmarker] [speaker003:] Yeah, [comment] I think that's a fixed thing. But see [disfmarker] see my point? If you had [disfmarker] [vocalsound] If you had ten filters, [vocalsound] then you would be throwing away a lot at the two ends. [speaker005:] Mm hmm. [speaker003:] And if you had [disfmarker] if you had fifty filters, you'd be throwing away hardly anything. [speaker005:] Mm hmm. [speaker003:] Um, I don't remember there being an independent way of saying "we're just gonna make them from here to here". [speaker005:] Use this analysis bandwidth or something. [speaker003:] But I [disfmarker] I [disfmarker] I don't know, it's actually been a while since I've looked at it. [speaker005:] Yeah, I went through the Feacalc code and then looked at you know just calling the RASTA libs [comment] and things like that. And I didn't [disfmarker] I couldn't see any wh place where that kind of thing was done. But um I didn't quite understand everything that I saw, [speaker003:] Yeah, see I don't know Feacalc at all. [speaker005:] so [disfmarker] Mm hmm. [speaker003:] But it calls RASTA with some options, and um [speaker005:] Right. [speaker003:] But I [disfmarker] I think in [disfmarker] I don't know. I guess for some particular database you might find that you could tune that and tweak that to get that a little better, but I think that [vocalsound] in general it's not that critical. I mean there's [disfmarker] [speaker005:] Yeah. [speaker003:] You can [disfmarker] You can throw away stuff below a hundred hertz or so and it's just not going to affect phonetic classification at all. [speaker005:] Another thing I was thinking about was um is there a [disfmarker] I was wondering if there's maybe um [vocalsound] certain settings of the parameters when you compute PLP which would basically cause it to output mel cepstrum. So that, in effect, what I could do is use our code but produce mel cepstrum and compare that directly to [disfmarker] [speaker003:] Well, it's not precisely. Yeah. [speaker005:] Hmm. [speaker003:] I mean, um, [vocalsound] um what you can do is um you can definitely change the [disfmarker] the filter bank from being uh a uh trapezoidal integration to a [disfmarker] a [disfmarker] a triangular one, [speaker005:] Mm hmm. [speaker003:] which is what the typical mel [disfmarker] mel cepstral uh filter bank does. [speaker005:] Mm hmm. [speaker003:] And some people have claimed that they got some better performance doing that, so you certainly could do that easily. But the fundamental difference, I mean, there's other small differences [disfmarker] [speaker005:] There's a cubic root that happens, right? [speaker003:] Yeah, but, you know, as opposed to the log in the other case.
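A toy filterbank showing the edge handling described above: the first and last filters are copies of their inner neighbors, so the effective analysis band is set indirectly by the number of filters. Everything here is illustrative, not RASTA's actual constants or bark spacing:

```python
import numpy as np

def toy_filterbank(n_filters, n_fft_bins):
    """Triangular filters with linearly spaced centers, for illustration."""
    centers = np.linspace(0, n_fft_bins - 1, n_filters + 2)
    bank = np.zeros((n_filters, n_fft_bins))
    for i in range(n_filters):
        lo, c, hi = centers[i], centers[i + 1], centers[i + 2]
        for b in range(n_fft_bins):
            if lo <= b <= c:
                bank[i, b] = (b - lo) / max(c - lo, 1e-9)
            elif c < b <= hi:
                bank[i, b] = (hi - b) / max(hi - c, 1e-9)
    # Edge handling as described: copy the second filter over the first
    # (and the second-to-last over the last), discarding the extreme bins.
    bank[0], bank[-1] = bank[1].copy(), bank[-2].copy()
    return bank
```

With ten filters the copied edges swallow a large slice of the spectrum; with fifty, hardly anything.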
I mean [vocalsound] the fundamental d d difference that we've seen any kind of difference from before, which is actually an advantage for the P L P i uh, I think, is that the [disfmarker] the smoothing at the end is auto regressive instead of being cepstral [disfmarker] uh, [comment] from cepstral truncation. So um it's a little more noise robust. [speaker005:] Hmm. [speaker003:] Um, and that's [disfmarker] that's why when people started getting databases that had a little more noise in it, like [disfmarker] like uh um Broadcast News and so on, that's why c Cambridge switched to PLP I think. [speaker005:] Mm hmm. [speaker003:] So um That's a difference that I don't [vocalsound] think we put any way to get around, since it was an advantage. [speaker005:] Mm hmm. [speaker003:] um [vocalsound] uh but we did [disfmarker] eh we did hear this comment from people at some point, that [vocalsound] um it uh they got some better results with the triangular filters rather than the trapezoidal. So that is an option in RASTA. [speaker005:] Hmm. [speaker003:] Uh and you can certainly play with that. But I think you're probably doing the right thing to look for bugs first. [speaker005:] Yeah just [disfmarker] it just seems like this kind of behavior could be caused by you know s some of the training data being messed up. [speaker003:] I don't know. Could be. [speaker005:] You know, you're sort of getting most of the way there, but there's a [disfmarker] So I started going through and looking [disfmarker] One of the things that I did notice was that the um log likelihoods coming out of the log recognizer from the PLP data were much lower, much smaller, than for the mel cepstral stuff, and that the average amount of pruning that was happening was therefore a little bit higher for the PLP features. [speaker003:] Oh huh! [speaker005:] So, since he used the same exact pruning thresholds for both, I was wondering if it could be that we're getting more pruning. [speaker003:] Oh! He [disfmarker] he [disfmarker] [vocalsound] He used the identical pruning thresholds even though the s the range of p of the likeli [speaker005:] Yeah. [speaker003:] Oh well that's [disfmarker] [vocalsound] That's a pretty good [comment] point right there. [speaker005:] Right. Right. [speaker003:] Yeah. [speaker005:] Yeah, so [disfmarker] [speaker003:] I would think that you might wanna do something like uh you know, look at a few points to see where you are starting to get significant search errors. [speaker005:] That's [disfmarker] Right. Well, what I was gonna do is I was gonna take um a couple of the utterances that he had run through, then run them through again but modify the pruning threshold and see if it you know, affects the score. [speaker003:] Yeah. Yeah. [speaker005:] So. [speaker003:] But I mean you could [disfmarker] uh if [disfmarker] if [disfmarker] if that looks promising you could, you know, r uh run [vocalsound] the overall test set with a [disfmarker] with a few different uh pruning thresholds for both, [speaker005:] Mm hmm. Right. [speaker003:] and presumably he's running at some pruning threshold that's [disfmarker] that's uh, you know [disfmarker] gets very few search errors but is [disfmarker] is relatively fast [speaker005:] Mm hmm. Right. 
I mean, yeah, generally in these things you [disfmarker] you turn back pruning really far, [speaker003:] and [disfmarker] [speaker005:] so I [disfmarker] I didn't think it would be that big a deal because I was figuring well you have it turned back so far that you know it [disfmarker] [speaker003:] But you may be in the wrong range for the P L P features for some reason. [speaker005:] Yeah. Yeah. Yeah. And the uh the [disfmarker] the run time of the recognizer on the PLP features is longer which sort of implies that the networks are bushier, you know, there's more things it's considering which goes along with the fact that the matches aren't as good. So uh, you know, it could be that we're just pruning too much. So. [speaker003:] Yeah. Yeah, maybe just be different kind of distributions and [disfmarker] and [speaker005:] Mm hmm. [speaker003:] yeah so that's another possible thing. [speaker005:] Mm hmm. [speaker003:] They [disfmarker] they should [disfmarker] really shouldn't [disfmarker] There's no particular reason why they would be exactly [disfmarker] behave exactly the same. [speaker005:] Mm hmm. Right. Right. [speaker003:] So. [speaker005:] So. There's lots of little differences. So. [speaker003:] Yeah. [speaker005:] Uh. [speaker003:] Yeah. [speaker005:] Trying to track it down. [speaker003:] Yeah. I guess this was a little bit off topic, I guess, [speaker005:] Yeah [speaker003:] because I was [disfmarker] I was thinking in terms of th this as being a [disfmarker] a [disfmarker] a [disfmarker] a core [vocalsound] item that once we [disfmarker] once we had it going we would use for a number of the front end things also. [speaker005:] Mm hmm. [speaker003:] So. um Wanna [disfmarker] [speaker002:] That's [disfmarker] as far as my stuff goes, [speaker003:] What's [disfmarker] what's on [disfmarker] [speaker002:] yeah, [speaker003:] Yeah. [speaker002:] well I [vocalsound] tried this mean subtraction method. Um. Due to Avendano, [vocalsound] I'm taking s um [vocalsound] six seconds of speech, um [vocalsound] I'm using two second [vocalsound] FFT analysis frames, [vocalsound] stepped by a half second so it's a quarter length step and I [disfmarker] [vocalsound] I take that frame and four f the four [disfmarker] I take [disfmarker] Sorry, I take the current frame and the four past frames and the [vocalsound] four future frames and that adds up to six seconds of speech. And I calculate um [vocalsound] the spectral mean, [vocalsound] of the log magnitude spectrum [pause] over that N. I use that to normalize the s the current center frame [vocalsound] by mean subtraction. And I then [disfmarker] then I move to the next frame and I [disfmarker] [vocalsound] I do it again. Well, actually I calculate all the means first and then I do the subtraction. And um [vocalsound] the [disfmarker] I tried that with HDK, the Aurora setup of HDK training on clean TI digits, and um [vocalsound] it [disfmarker] it helped um in a phony reverberation case um [vocalsound] where I just used the simulated impulse response um [vocalsound] the error rate went from something like eighty it was from something like eighteen percent [vocalsound] to um four percent. And on meeting rec recorder far mike digits, mike [disfmarker] on channel F, it went from um [vocalsound] [vocalsound] forty one percent error to eight percent error. [speaker005:] On [disfmarker] on the real data, not with artificial reverb? [speaker002:] Right. [speaker005:] Uh huh. 
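A rough reconstruction of the mean subtraction method as just described: 2-second log-magnitude analysis frames stepped by half a second, with the mean over the current frame plus the four past and four future frames (about six seconds of speech) subtracted from the center frame. The sampling rate and the use of scipy here are assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

def longterm_mean_subtraction(x, fs, frame_s=2.0, step_s=0.5, context=4):
    """Subtract a ~6 s running mean of the log magnitude spectrum."""
    nperseg = int(frame_s * fs)
    noverlap = nperseg - int(step_s * fs)
    _, _, Z = stft(x, fs, nperseg=nperseg, noverlap=noverlap)
    logmag = np.log(np.abs(Z) + 1e-10)
    phase = np.angle(Z)
    out = np.empty_like(logmag)
    n = logmag.shape[1]
    for i in range(n):  # mean over current frame +/- 4 neighbors
        lo, hi = max(0, i - context), min(n, i + context + 1)
        out[:, i] = logmag[:, i] - logmag[:, lo:hi].mean(axis=1)
    _, y = istft(np.exp(out) * np.exp(1j * phase), fs,
                 nperseg=nperseg, noverlap=noverlap)
    return y
```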
[speaker002:] And that [disfmarker] that was um [vocalsound] trained on clean speech only, which I'm guessing is the reason why the baseline was so bad. And [disfmarker] [speaker003:] That's ac actually a little side point is I think that's the first results that we have uh uh uh of any sort on the far field uh [disfmarker] on [disfmarker] on the far field data uh for [disfmarker] recorded in [disfmarker] in meetings. [speaker002:] Oh um actually um Adam ran the SRI recognizer. [speaker003:] Did he? On the near field, on the ne [speaker002:] On the far field also. He did one PZM channel and one PDA channel. [speaker003:] Oh did he? Oh! I didn't recall that. What kind of numbers was he getting with that? [speaker002:] I [disfmarker] [vocalsound] I'm not sure, I think it was about five percent error for the PZM channel. [speaker003:] Five. [speaker002:] f I think. Yeah. [speaker003:] So why were you getting forty one here? Is this [disfmarker] [speaker002:] Um. I [disfmarker] I'm g I'm guessing it was the [disfmarker] the training data. Uh, clean TI digits is, like, pretty pristine [vocalsound] training data, and if they trained [vocalsound] the SRI system on this TV broadcast type stuff, I think it's a much wider range of channels and it [disfmarker] [speaker003:] No, but wait a minute. I [disfmarker] I [disfmarker] I th [disfmarker] I think he [disfmarker] What am I saying here? Yeah, so that was the SRI system. Maybe you're right. Yeah. Cuz it was getting like one percent [disfmarker] [vocalsound] So it's still this kind of ratio. It was [disfmarker] it was getting one percent or something on the near field. Wasn't it? [speaker005:] Mm hmm, or it wa a it was around one. [speaker003:] Yeah. Yeah. I think it was getting around one percent for the near [disfmarker] for the n for the close mike. [speaker005:] Yeah. [speaker002:] Huh? [speaker003:] So it was like one to five [disfmarker] [speaker002:] OK. [speaker003:] So it's still this kind of ratio. It's just [disfmarker] yeah, it's a lot more training data. So So probably it should be something we should try then is to [disfmarker] is to see if [disfmarker] is [vocalsound] at some point just to take [disfmarker] i to transform the data and then [disfmarker] [vocalsound] and then uh use th use it for the SRI system. [speaker002:] b You me you mean um ta [speaker003:] So you're [disfmarker] so you have a system which for one reason or another is relatively poor, [speaker002:] Yeah. [speaker003:] and [disfmarker] and uh you have something like forty one percent error uh and then you transform it to eight by doing [disfmarker] doing this [disfmarker] this work. Um. So here's this other system, which is a lot better, but there's still this kind of ratio. It's something like five percent error [vocalsound] with the [disfmarker] the distant mike, and one percent with the close mike. [speaker002:] OK. [speaker003:] So the question is [vocalsound] how close to that one can you get [vocalsound] if you transform the data using that system. [speaker002:] r Right, so [disfmarker] so I guess this SRI system is trained on a lot of s Broadcast News or Switchboard data. [speaker003:] Yeah. [speaker002:] Is that right? Do you know which one it is? [speaker005:] It's trained on a lot of different things. Um. It's trained on uh a lot of Switchboard, Call Home, [speaker002:] Uh huh. [speaker005:] um a bunch of different sources, some digits, there's some digits training in there. [speaker002:] OK. 
O one thing I'm wondering about is what this mean subtraction method [vocalsound] um will do if it's faced with additive noise. [speaker001:] Hmm. [speaker002:] Cuz I [disfmarker] I [disfmarker] it's cuz I don't know what log magnitude spectral subtraction is gonna do to additive noise. [speaker003:] Yeah, well, it's [disfmarker] it's not exactly the right thing [speaker002:] That's [disfmarker] that's the [disfmarker] Uh huh. [speaker003:] but [vocalsound] uh [vocalsound] but you've already seen that cuz there is added noise here. [speaker002:] That's [disfmarker] that's [disfmarker] Yeah, that's true. [speaker003:] Yeah. [speaker002:] That's a good point. [speaker003:] So um [disfmarker] [speaker002:] OK, so it's then [disfmarker] then it's [disfmarker] it's [disfmarker] it's reasonable to expect it would be helpful if we used it with the SRI system and [speaker003:] Yeah, I mean, as helpful [disfmarker] I mean, so that's the question. Yeah, w we're often asked this when we work with a system that [disfmarker] that isn't [disfmarker] isn't sort of industry [disfmarker] industry standard great, [speaker002:] Uh huh. [speaker003:] uh and we see some reduction in error using some clever method, then, you know, will it work on a [disfmarker] [vocalsound] on a [disfmarker] on a good system. So uh you know, this other one's [disfmarker] it was a pretty good system. I think, you know, one [disfmarker] one percent word error rate on digits is [disfmarker] uh digit strings is not [vocalsound] uh you know stellar, but [disfmarker] but given that this is real [vocalsound] digits, as opposed to uh sort of laboratory [disfmarker] [speaker002:] Mm hmm. [speaker003:] Well. [speaker005:] And it wasn't trained on this task either. [speaker003:] And it wasn't trained on this task. Actually one percent is sort of [disfmarker] you know, sort of in a reasonable range. People would say "yeah, I could [disfmarker] I can imagine getting that". [speaker002:] Mm hmm. [speaker003:] And uh so the [disfmarker] the four or five percent or something is [disfmarker] is [disfmarker] is quite poor. [speaker002:] Mm hmm. [speaker003:] Uh, you know, if you're doing a uh [disfmarker] [vocalsound] a sixteen digit uh credit card number you'll basically get it wrong almost all the time. So. So. Uh, [vocalsound] um [speaker002:] Hmm. [speaker003:] a significant reduction in the error for that would be great. [speaker002:] Huh, OK. [speaker003:] And [disfmarker] and then, uh Yeah. So. Yeah. Cool. [speaker002:] Sounds good. [speaker003:] Yeah. Alright, um, I actually have to run. So I don't think I can do the digits, but um, [vocalsound] I guess I'll leave my microphone on? [speaker005:] Uh, yeah. [speaker003:] Yeah. Thank you. [speaker005:] Yep. Yeah. That'll work. [speaker003:] I can be out of here quickly. [comment] [comment] [vocalsound] [vocalsound] That's I just have to run for another appointment. OK, I t Yeah. I left it on. OK. [speaker001:] Hey, you're not supposed to be drinking in here dude. [speaker004:] OK. [speaker001:] Do we have to read them that slowly? OK. Sounded like a robot. Um, this is t [speaker003:] OK. [speaker001:] When you read the numbers it kind of reminded me of beat poetry. [speaker004:] I tried to go for the EE Cummings sort of feeling, but [disfmarker] [speaker001:] Three three six zero zero. Four two zero zero one seven. That's what I think of when I think of beat poetry. [speaker003:] Beat poetry. [speaker001:] You ever seen "So I married an axe murderer"? [speaker003:] Uh parts of it. 
[speaker004:] Mm hmm. [speaker001:] There's a part wh there's parts when he's doing beat poetry. [speaker003:] Oh yeah? [speaker001:] And he talks like that. That's why I thi That uh probably is why I think of it that way. [speaker004:] Hmm. No, I didn't see that movie. Who did [disfmarker] who made that? [speaker001:] Mike Meyers is the guy. [speaker004:] Oh. OK. [speaker001:] It it's his uh [disfmarker] it's his cute romantic comedy. That's [disfmarker] that's [disfmarker] That's his cute romantic comedy, yeah. The other thing that's real funny, I'll spoil it for you, is when he's [disfmarker] he works in a coffee shop, in San Francisco, and uh he's sitting there on this couch and they bring him this massive cup of espresso, and he's like "excuse me I ordered the large espresso?" [speaker004:] Uh. We're having, [vocalsound] a tiramisu tasting contest this weekend. [speaker001:] Wait [disfmarker] do are y So you're trying to decide who's the best taster of tiramisu? [speaker004:] No? Um. There was a [disfmarker] a [disfmarker] a fierce argument that broke out over whose tiramisu might be the best and so we decided to have a contest where those people who claim to make good tiramisu make them, [speaker001:] Ah. [speaker004:] and then we got a panel of impartial judges that will taste [disfmarker] do a blind taste [vocalsound] and then vote. [speaker001:] Hmm. [speaker004:] Should be fun. [speaker001:] Seems like [disfmarker] Seems like you could put a s magic special ingredient in, so that everyone know which one was yours. Then, if you were to bribe them, you could uh [disfmarker] [speaker004:] Mm hmm. Well, I was thinking if um [disfmarker] y you guys have plans for Sunday? We're [disfmarker] we're not [disfmarker] it's probably going to be this Sunday, but um we're sort of working with the weather here because we also want to combine it with some barbecue activity where we just fire it up and what [disfmarker] whoever brings whatever you know, can throw it on there. So only the tiramisu is free, nothing else. [speaker001:] Well, I'm going back to visit my parents this weekend, so, I'll be out of town. [speaker004:] So you're going to the west Bay then? No, south Bay? [speaker001:] No, the South Bay, yeah. [speaker004:] South Bay. [speaker003:] Well, I should be free, so. [speaker004:] OK, I'll let you know. [speaker003:] OK. [speaker001:] We are. Is Nancy s uh gonna show up? Mmm. Wonder if these things ever emit a very, like, piercing screech right in your ear? [speaker004:] They are gonna get more comfortable headsets. They already ordered them. OK. [speaker003:] Uh [disfmarker] [speaker004:] Let's get started. The uh [disfmarker] Should I go first, with the uh, um, data. Can I have the remote [vocalsound] control. Thank you. OK. So. On Friday we had our wizard test data test and um [vocalsound] these are some of the results. This was the introduction. I actually uh, even though Liz was uh kind enough to offer to be the first subject, I sort of felt that she knew too much, so I asked uh Litonya, just on the spur of the moment, and she was uh kind enough to uh serve as the first subject. [speaker002:] Mm hmm. [speaker004:] So, this is what she saw as part of [disfmarker] as uh for instr introduction, this is what she had to read [pause] aloud. Uh, that was really difficult for her and uh [disfmarker] [speaker003:] Because of l all the names, you mean?
[speaker004:] The names and um this was the uh first three tasks she had to [disfmarker] to master after she called the system, and um then of course the system broke down, and those were the l uh uh I should say the system was supposed to break down and then um these were the remaining three tasks that she was going to solve, with a human [disfmarker] Um. There are [disfmarker] here are uh the results. Mmm. And I will not [disfmarker] We will skip the reading now. D Um. And um. The reading was five minutes, exactly. And now comes the [disfmarker] This is the phone in phase of [disfmarker] [speaker003:] Wait, can I [disfmarker] I have a question. So. So there's no system, right? Like, there was a wizard for both uh [disfmarker] both parts, is this right? [speaker004:] Yeah. It was bo it both times the same person. [speaker003:] OK. [speaker004:] One time, pretending to be a system, one time, to [disfmarker] pretending to be a human, which is actually not pretending. [speaker003:] OK. And she didn't [disfmarker] [speaker004:] I should [disfmarker] [speaker003:] I mean. Well. Isn't this kind of obvious when it says "OK now you're talking to a human" and then the human has the same voice? [speaker004:] No no no. We u Wait. OK, good question, but uh you [disfmarker] you just wait and see. It's [disfmarker] You're gonna l learn. [speaker003:] OK. [speaker004:] And um the wizard sometimes will not be audible, Because she was actually [disfmarker] they [disfmarker] there was some uh lapse in the um wireless, we have to move her closer. [speaker001:] Is she mispronouncing "Anlage"? Is it "Anlaga" or "Anlunga" [speaker004:] They're mispronouncing everything, but it's [disfmarker] [speaker001:] OK. [speaker004:] This is the system breaking down, actually. "Did I call Europe?" So, this is it. Well, if we [disfmarker] we um [speaker002:] So, are [disfmarker] are you trying to record this meeting? [speaker004:] There was a strange reflex. I have a headache. I'm really sort of out of it. OK, the uh lessons learned. The reading needs to be shorter. Five minutes is just too long. Um, that was already anticipated by some people suggested that if we just have bullets here, they're gonna not [disfmarker] they're [disfmarker] subjects are probably not gonna [disfmarker] going to follow the order. And uh she did not. She [disfmarker] [speaker003:] Really? [speaker004:] No. She [disfmarker] she jumped around quite a bit. [speaker003:] Oh, it's surprising. [speaker002:] S so if you just number them "one", "two", "three" it's [speaker004:] Yeah, and make it sort of clear in the uh [disfmarker] [speaker002:] OK. Right. [speaker004:] Um. We need to [disfmarker] So that's one thing. And we need a better introduction for the wizard. That is something that Fey actually thought of a [disfmarker] in the last second that sh the system should introduce itself, when it's called. [speaker002:] Mm hmm. True. [speaker004:] And um, um, another suggestion, by Liz, was that we uh, through subjects, switch the tasks. So when [disfmarker] when they have task one with the computer, the next person should have task one with a human, and so forth. [speaker002:] Mm hmm. [speaker004:] So we get nice um data for that. Um, we have to refine the tasks more and more, which of course we haven't done at all, so far, in order to avoid this rephrasing, so where, even though w we don't tell the person "ask [pause] blah blah blah blah blah" they still try, or at least Litonya tried to um repeat as much of that text as possible. 
[speaker003:] Say exactly what's on there? Yeah. [speaker004:] And uh my suggestion is of course we [disfmarker] we keep the wizard, because I think she did a wonderful job, [speaker002:] Great. [speaker004:] in the sense that she responded quite nicely to things that were not asked for, "How much is a t a bus ticket and a transfer" so this is gonna happen all the time, we d you can never be sure. [speaker002:] Mm hmm. [speaker004:] Um. Johno pointed out that uh we have maybe a grammatical gender problem there with wizard. So um. [speaker001:] Yes. I wasn't [disfmarker] wasn't sure whether wizard was the correct term for [pause] uh "not a man". [speaker004:] But uh [disfmarker] [speaker003:] There's no female equivalent of [disfmarker] [speaker001:] Are you sure? [speaker003:] No, I don't know. Not that I know of. [speaker002:] Right. [speaker004:] Well, there is witch and warlock, and uh [disfmarker] [speaker003:] Yeah, that's what I was thinking, but [disfmarker] [speaker001:] Yeah, that's so [@ @]. [speaker002:] Right. Right. [speaker004:] OK. [speaker002:] Uh. [speaker004:] And um [disfmarker] So, some [disfmarker] some work needs to be done, but I think we can uh [disfmarker] And this, and [disfmarker] in case no [disfmarker] you hadn't seen it, this is what Litonya looked at during the uh [disfmarker] um while taking the [disfmarker] while partaking in the data collection. [speaker003:] Ah. [speaker002:] OK, great. So [pause] first of all, I agree that um we should hire Fey, and start paying her. Probably pay for the time she's put in as well. Um, do you know exactly how to do that, or is uh Lila [disfmarker] I mean, you know what exactly do we do to [disfmarker] to put her on the payroll in some way? [speaker004:] I'm completely clueless, but I'm willing to learn. [speaker002:] OK. Well, you'll have to. Right. So anyway, [speaker004:] N [speaker002:] um So why don't you uh ask Lila and see what she says about you know exactly what we do for someone in [speaker004:] Student type worker, [speaker002:] th [speaker004:] or [disfmarker]? [speaker002:] Well, yeah she's un she's not a [disfmarker] a student, she just graduated but anyway. [speaker004:] Hmm. [speaker002:] So i if [disfmarker] Yeah, I agree, she sounded fine, she a actually was [pause] uh, more uh, present and stuff than [disfmarker] than she was in conversation, so she did a better job than I would have guessed from just talking to her. [speaker004:] Yeah. [speaker002:] So I think that's great. [speaker004:] This is sort of what I gave her, so this is for example h how to get to the student prison, [speaker002:] Yeah. [speaker004:] and I didn't even spell it out here and in some cases I [disfmarker] I spelled it out a little bit um more thoroughly, [speaker002:] Right. [speaker004:] this is the information on [disfmarker] on the low sunken castle, and the amphitheater that never came up, and um, so i if we give her even more um, instruments to work with I think the results are gonna be even better. [speaker002:] Oh, yeah, and then of course as she does it she'll [disfmarker] she'll learn [@ @]. So that's great. Um [pause] And also if she's willing to take on the job of organizing all those subjects and stuff that would be wonderful. [speaker004:] Mmm. [speaker002:] And, uh she's [disfmarker] actually she's going to graduate school in a kind of an experimental paradigm, so I think this is all just fine in terms of h her learning things she's gonna need to know uh, to do her career. [speaker004:] Mmm. 
[speaker002:] So, I [disfmarker] my guess is she'll be r r quite happy to take on that job. And, so [disfmarker] [speaker004:] Yep. Yeah she [disfmarker] she didn't explicitly state that so. [speaker002:] Great. [speaker004:] And um I told her that we gonna um figure out a meeting time in the near future to refine the tasks and s look for the potential sources to find people. She also agrees that you know if it's all just gonna be students the data is gonna be less valuable because of that so. [speaker002:] Well, as I say there is this s set of people next door, it's not hard to [speaker004:] We're already [disfmarker] Yeah. [speaker002:] uh [disfmarker] [speaker004:] However, we may run into a problem with a reading task there. And um, we'll see. [speaker002:] Yeah. We could talk to the people who run it and um see if they have a way that they could easily uh tell people that there's a task, pays ten bucks or something, [speaker004:] Mm hmm. Yeah. [speaker002:] but um you have to be comfortable reading relatively complicated stuff. And [disfmarker] and there'll probably be self selection to some extent. [speaker004:] Mmm. Yep. [speaker002:] Uh, so that's good. Um. Now, [pause] I signed us up for the Wednesday slot, and part of what we should do is this. [speaker004:] OK. [speaker002:] So, my idea on that was [pause] uh, partly we'll talk about system stuff for the computer scientists, but partly I did want it to get the linguists involved in some of this issue about what the task is and all [disfmarker] um you know, what the dialogue is, and what's going on linguistically, because to the extent that we can get them contributing, that will be good. [speaker004:] Yep. [speaker002:] So this issue about you know re formulating things, maybe we can get some of the linguists sufficiently interested that they'll help us with it, uh, other linguists, if you're a linguist, but in any case, [speaker004:] Yep. [speaker002:] um, the linguistics students and stuff. So my idea on [disfmarker] on Wednesday is partly to uh [disfmarker] you [disfmarker] I mean, what you did today would [disfmarker] i is just fine. You just uh do "this is what we did, and here's the [pause] thing, and here's s some of the dialogue and [disfmarker] and so forth." But then, the other thing of course is we should um give the computer scientists some idea of [disfmarker] of what's going on with the system design, and where we think the belief nets fit in and where the pieces are and stuff like that. [speaker004:] Yep. [speaker002:] Is [disfmarker] is this [pause] make sense to everybody? Yeah. So, I don't [disfmarker] I don't think it's worth a lot of work, particularly on your part, to [disfmarker] to [disfmarker] to make a big presentation. I don't think you should [disfmarker] you don't have to make any new [pause] uh PowerPoint or anything. I think we got plenty of stuff to talk about. And, then um just see how a discussion goes. [speaker004:] Mm hmm. Sounds good. The uh other two things is um we've [disfmarker] can have Johno tell us a little about this [speaker002:] Great. [speaker004:] and we also have a l little bit on the interface, M three L enhancement, and then um that was it, I think. 
[speaker001:] So, what I did for this [disfmarker] this is [disfmarker] uh, a pedagogical belief net because I was [disfmarker] I [disfmarker] I took [disfmarker] I tried to conceptually do what you were talking about with the nodes that you could expand out [disfmarker] so what I did was I took [disfmarker] I made these dummy nodes called Trajector In and Trajector Out that would isolate the things related to the trajector. [speaker002:] Yep. [speaker001:] And then there were the things with the source and the path and the goal. [speaker002:] Yep. [speaker001:] And I separated them out. And then I um did similar things for our [disfmarker] our net to [disfmarker] uh with the context and the discourse and whatnot, um, so we could sort of isolate them or whatever in terms of the [disfmarker] the top layer. [speaker002:] Mm hmm. [speaker001:] And then the bottom layer is just the Mode. So. [speaker002:] So, let's [disfmarker] let's [disfmarker] Yeah, I don't understand it. Let's go [disfmarker] Slide all the way up so we see what the p the p very bottom looks like, or is that it? [speaker001:] Yeah, there's just one more node and it says "Mode" which is the decision between the [disfmarker] [speaker004:] Yeah. [speaker002:] OK, great. Alright. So [disfmarker] [speaker001:] So basically all I did was I took the last [pause] belief net [speaker002:] Mm hmm. [speaker001:] and I grouped things according to what [disfmarker] how I thought they would fit in to uh image schemas that would be related. And the two that I came up with were Trajector landmark and then Source path goal as initial ones. [speaker002:] Yep. Mm hmm. [speaker001:] And then I said well, uh the trajector would be the person in this case probably. [speaker002:] Right, yep. [speaker001:] Um, you know, we have [disfmarker] we have the concept of what their intention was, whether they were trying to tour or do business or whatever, [speaker002:] Right. [speaker001:] or they were hurried. That's kind of related to that. And then um in terms of the source, the things [disfmarker] uh the only things that we had on there I believe were whether [disfmarker] Oh actually, I kind of, [disfmarker] I might have added these cuz I don't think we talked too much about the source in the old one but uh whether the [disfmarker] where I'm currently at is a landmark might have a bearing on whether [disfmarker] [speaker004:] Mm hmm. [speaker001:] or the "landmark iness" of where I'm currently at. And "usefulness" is basi basically means is that an institutional facility like a town hall or something like that that's not [disfmarker] something that you'd visit for tourist's [disfmarker] tourism's sake or whatever. "Travel constraints" would be something like you know, maybe they said they can [disfmarker] they only wanna take a bus or something like that, right? And then those are somewhat related to the path, [speaker002:] Mm hmm. [speaker001:] so that would determine whether we'd [disfmarker] could take [disfmarker] we would be telling them to go to the bus stop or versus walking there directly. Um, "Goal". Similar things as the source except they also added whether the entity was closed and whether they have somehow marked that it was the final destination. Um, and then if you go up, Robert, Yeah, so [disfmarker] um, in terms of Context, what we had currently said was whether they were a businessman or a tourist or some other person.
Um, Discourse was related to whether they had asked about open hours or whether they asked about where the entrance was or the admission fee, or something along those lines. [speaker002:] Mm hmm. [speaker001:] Uh, Prosody I don't really [disfmarker] I'm not really sure what prosody means, in this context, so I just made up you know whether [disfmarker] whether what they say is [disfmarker] or h how they say it is [disfmarker] is that. [speaker002:] Right, OK. [speaker001:] Um, the Parse would be what verb they chose, and then maybe how they modified it, in the sense of whether they said "I need to get there quickly" or whatever. [speaker002:] Mm hmm. [speaker001:] And um, in terms of World Knowledge, this would just basically be like opening and closing times of things, the time of day it is, and whatnot. [speaker004:] What's "tourbook"? [speaker001:] Tourbook? That would be, I don't know, the "landmark iness" of things, [speaker004:] Mm hmm. [speaker001:] whether it's in the tourbook or not. [speaker002:] Ch ch ch ch. Now. Alright, so I understand what's [disfmarker] what you got. I don't yet understand [pause] how you would use it. So let me see if I can ask a s [speaker001:] Well, this is not a working Bayes net. [speaker002:] Right. No, I understand that, but [disfmarker] but um So, what [disfmarker] Let's slide back up again and see [disfmarker] start at the [disfmarker] at the bottom and Oop bo doop boop boop. Yeah. So, you could imagine w Uh, go ahead, you were about to go up there and point to something. [speaker001:] Well I [disfmarker] OK, I just [disfmarker] Say what you were gonna say. [speaker002:] Good, do it! No no, [speaker001:] OK. [speaker002:] go do it. [speaker001:] Uh [disfmarker] I [disfmarker] I'd [disfmarker] No, I was gonna wait until [disfmarker] [speaker002:] Oh, OK. So, so if you [disfmarker] if we made [disfmarker] if we wanted to make it into a [disfmarker] a real uh Bayes net, that is, you know, with fill [disfmarker] you know, actually f uh, fill it [@ @] in, then uh [disfmarker] [speaker001:] So we'd have to get rid of this and connect these things directly to the Mode. [speaker002:] Well, I don't [disfmarker] That's an issue. So, um [disfmarker] [speaker001:] Cuz I don't understand how it would work otherwise. [speaker002:] Well, here's the problem. And [disfmarker] and uh [disfmarker] Bhaskara and I was talking about this a little earlier today [disfmarker] is, if we just do this, we could wind up with a huge uh, combinatoric input to the Mode thing. And uh [disfmarker] [speaker001:] Well I [disfmarker] oh yeah, I unders I understand that, I just [disfmarker] uh it's hard for me to imagine how he could get around that. [speaker002:] Well, i But that's what we have to do. [speaker001:] OK. [speaker002:] OK, so, so, uh. There [disfmarker] there are a variety of ways of doing it. Uh. Let me just mention something that I don't want to pursue today which is there are technical ways of doing it, uh I I slipped a paper to Bhaskara and [disfmarker] about Noisy OR's and Noisy MAXes and there're ways to uh sort of back off on the purity of your Bayes net edness. [speaker001:] Mmm. [speaker002:] Uh, so. If you co you could ima and I now I don't know that any of those actually apply in this case, but there is some technology you could try to apply. [speaker001:] So it's possible that we could do something like a summary node of some sort that [disfmarker] [speaker002:] Yeah. [speaker001:] OK. [speaker002:] Yeah. And, um So. 
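The Noisy-OR trick just mentioned, in miniature: each parent contributes an independent chance of switching the effect on, so the parameters grow linearly in the number of parents instead of exponentially. All numbers here are made up:

```python
def noisy_or(parent_probs, active, leak=0.01):
    """P(effect = 1 | the given parents are active) under a Noisy-OR model.
    parent_probs[i] is the chance parent i alone turns the effect on;
    leak covers causes not modeled. Values are illustrative."""
    p_off = 1.0 - leak
    for i in active:
        p_off *= 1.0 - parent_probs[i]
    return 1.0 - p_off

print(noisy_or([0.8, 0.5, 0.3], active=[0, 2]))  # two of three parents on
```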
[speaker001:] So in that case, the sum we'd have [disfmarker] we [disfmarker] I mean, these wouldn't be the summary nodes. We'd have the summary nodes like where the things were [disfmarker] I guess maybe if thi if things were related to business or some other [disfmarker] [speaker002:] Yeah. So what I was gonna say is [disfmarker] is maybe a good idea at this point is to try to informally [disfmarker] [speaker001:] Yeah. [speaker002:] I mean, not necessarily in th in this meeting, but to try to informally think about what the decision variables are. So, if you have some bottom line uh decision about which mode, you know, what are the most relevant things. [speaker001:] Mmm. [speaker002:] And the other trick, which is not a technical trick, it's kind of a knowledge engineering trick, is to make the n [pause] each node sufficiently narrow that you don't get this combinatorics. So that if you decided that you could characterize the decision as a trade off between three factors, whatever they may be, OK? then you could say "Aha, let's have these three factors", OK? and maybe a binary version f for each, or some relatively compact decision node just above the final one. [speaker001:] Mmm. [speaker002:] And then the question would be if [disfmarker] if those are the things that you care about, uh can you make a relatively compact way of getting from the various inputs to the things you care about. So that y so that, you know, you can sort of try to do a knowledge engineering thing [speaker001:] OK. [speaker002:] given that we're not gonna screw with the technology and just always use uh sort of orthodox Bayes nets, then we have a knowledge engineering little problem of how do we do that. Um and [speaker001:] So what I kind of need to do is to take this one and the old one and merge them together? [speaker002:] "Eh eh eh." Yeah. [speaker001:] So that [disfmarker] [speaker002:] Well, mmm, something. I mean, so uh, Robert has thought about this problem f for a long time, cuz he's had these examples kicking around, so he may have some good intuition about you know, what are the crucial things. [speaker001:] Mmm. [speaker002:] and, um, I understand where this [disfmarker] the uh [disfmarker] this is a way of playing with this abs Source path goal trajector exp uh uh abstraction and [disfmarker] and sort of sh displaying it in a particular way. [speaker001:] Yeah. [speaker002:] Uh, I don't think our friends uh on Wednesday are going to be able to [disfmarker] Well, maybe they will. Well, let me think about whether [disfmarker] whether I think we can present this to them or not. Um, Uh, [speaker004:] Well, I think this is still, I mean, ad hoc. This is sort of th the second [vocalsound] version and I [disfmarker] I [disfmarker] I [disfmarker] look at this maybe just as a, you know, a [disfmarker] a [disfmarker] whatever, UML diagram or, you know, as just a uh screen shot, not really as a Bayes net as John [disfmarker] Johno said. [speaker001:] We could actually, y yeah draw it in a different way, in the sense that it would make it more abstract. [speaker004:] Yeah. But the uh [disfmarker] the [disfmarker] the nice thing is that you know, it just is a [disfmarker] is a visual aid for thinking about these things which has comple clearly have to be specified m more carefully and uh [speaker002:] Alright, well, le let me think about this some more, and uh see if we can find a way to present this to this linguists group that [disfmarker] that is helpful to them.
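The combinatorics being traded off here can be made concrete with made-up sizes: twelve binary inputs feeding a three-way Mode node directly, versus grouping them under three binary summary nodes of four parents each:

```python
# Hypothetical arithmetic: 12 binary inputs, a 3-way Mode decision.
direct = 3 * 2**12                    # Mode conditioned on all inputs at once
grouped = 3 * (2 * 2**4) + 3 * 2**3   # three summary CPTs, then a small Mode CPT
print(direct, "vs", grouped)          # 12288 vs 120 table entries
```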
[speaker004:] I mean, ultimately we [disfmarker] we may w w we regard this as sort of an exercise in [disfmarker] in thinking about the problem and maybe a first version of uh a module, if you wanna call it that, that you can ask, that you can give input and it it'll uh throw the dice for you, uh throw the die for you, because um I integrated this into the existing SmartKom system in [disfmarker] in the same way as much the same way we can um sort of have this uh [disfmarker] this thing. Close this down. So if this is what M three L um will look like and what it'll give us, um [disfmarker] And a very simple thing. We have an action that he wants to go from somewhere, which is some type of object, to someplace. [speaker002:] Mm hmm. [speaker004:] And this [disfmarker] these uh [disfmarker] this changed now only um, um [disfmarker] It's doing it twice now because it already did it once. Um, we'll add some action type, which in this case is "Approach" and could be, you know, more refined uh in many ways. [speaker002:] Mm hmm. Good. [speaker004:] Or we can uh have something where the uh goal is a public place and it will give us then of course an action type of the type "Enter". So this is just based on this one [disfmarker] um, on this one feature, and that's [disfmarker] that's about all you can do. And so in the f if this pla if the object type um here is [disfmarker] is a m is a landmark, of course it'll be um "Vista". And um this is about as much as we can do if we don't w if we want to avoid uh uh a huge combinatorial explosion where we specify "OK, if it's this and this but that is not the case", and so forth, it just gets really really messy. [speaker002:] OK, I'm sorry. You're [disfmarker] you're [disfmarker] [speaker004:] Hmm? [speaker002:] It was much too quick for me. OK, so let me see if I understand what you're saying. So, I [disfmarker] I do understand that uh you can take the M three L and add not [disfmarker] and it w and you need to do this, for sure, we have to add, you know, not too much about um object types and stuff, and what I think you did is add some rules of the style that are already there that say "If it's of type" Landmark ", then you take [disfmarker] you're gonna take a picture of it. " [speaker004:] Exactly. [speaker002:] F full stop, I mean, that's what you do. Ev every landmark you take a picture of, [speaker004:] Every public place you enter, and statue you want to go as near as possible. [speaker002:] you enter [disfmarker] You approach. OK. Uh, and certainly you can add rules like that to the existing SmartKom system. And you just did, right? [speaker004:] Yeah. [speaker002:] OK. [speaker004:] And it [disfmarker] it would do us no good. That [disfmarker] [speaker002:] Ah. [speaker004:] Ultimately. W [speaker002:] Well. So, s well, and let's think about this. Um, that's a [disfmarker] that's another kind of baseline case, that's another sort of thing "OK, here's a [disfmarker] another kind of minimal uh way of tackling this". Add extra properties, a deterministic rule for every property you have an action, "pppt!" You do that. Um, then the question would be Uh Now, if that's all you're doing, then you can get the types from the ontology, OK? because that's all [disfmarker] you're [disfmarker] all you're using is this type [disfmarker] the types in the ontology and you're done. [speaker004:] Hmm? [speaker002:] Right? So we don't [disfmarker] we don't use the discourse, we don't use the context, we don't do any of those things. [speaker004:] No. 
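The rules just demonstrated reduce to a single-feature lookup from object type to action type, which is why they exhaust what can be done before the combinatorial mess sets in. A sketch of that baseline using the three cases named above; the fallback for unknown types is a guess, since the meeting does not say what happens otherwise.

```python
# Deterministic baseline: action type is a function of the object type alone.
ACTION_BY_OBJECT_TYPE = {
    "public_place": "Enter",     # every public place you enter
    "landmark":     "Vista",     # every landmark you take a picture of
    "statue":       "Approach",  # a statue you go as near as possible to
}

def action_type(object_type):
    # Hypothetical default; the demo covers only the three types above.
    return ACTION_BY_OBJECT_TYPE.get(object_type, "Approach")

assert action_type("landmark") == "Vista"
```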
[speaker002:] Alright, but that's [disfmarker] but that's OK, and I mean it it's again a kind of one minimal extension of the existing things. And that's something the uh SmartKom people themselves would [disfmarker] they'd say "Sure, that's no problem [disfmarker] you know, no problem to add types to the ont" Right? [speaker004:] Yeah. No. And this is [disfmarker] just in order to exemplify what [disfmarker] what we can do very, very easily is, um we have this [disfmarker] this silly uh interface and we have the rules that are as banal as of we just saw, and we have our content. Now, the content [disfmarker] [speaker002:] Hmm. [speaker004:] I [disfmarker] whi which is sort of what [disfmarker] what we see here, which is sort of the Vista, Schema, Source, Path, Goal, whatever. [speaker002:] Yeah. Yeah. [speaker004:] This will um be um a job to find ways of writing down Image schema, X schema, constructions, in some [disfmarker] some form, and have this be in a [disfmarker] in a [disfmarker] in the content, loosely called "Constructicon". And the rules we want to throw away completely. And um [disfmarker] and here is exactly where what's gonna be replaced with our Bayes net, which is exactly getting the input feeding into here. This decides whether it's an whether action [disfmarker] the [disfmarker] the Enter, the Vista, or the whatever [speaker002:] Uh, "approach", you called it, I think this time. [speaker004:] uh Approach um construction should be activated, IE just pasted in. [speaker002:] That's what you said [disfmarker] Yeah, that's fine. Yeah, but [disfmarker] Right. But it's not construction there, it's action. Construction is a d is a different story. [speaker004:] Yeah. [speaker001:] Right. This is uh [disfmarker] so what we'd be generating would be a reference to a semantic uh like parameters for the [disfmarker] for the X schema? [speaker002:] For [disfmarker] for [disfmarker] for [disfmarker] Yes. [speaker001:] OK. [speaker002:] Yeah. So that [disfmarker] that uh i if you had the generalized "Go" X schema and you wanted to specialize it to these three ones, then you would have to supply the parameters. [speaker001:] Right. [speaker002:] And then uh, although we haven't worried about this yet, you might wanna worry about something that would go to the GIS and use that to actually get you know, detailed route planning. So, you know, where do you do take a picture of it and stuff like that. [speaker001:] Mm hmm. [speaker002:] But that's not [disfmarker] It's not the immediate problem. [speaker001:] Right. [speaker002:] But, presumably that [disfmarker] that [disfmarker] that functionality's there when [disfmarker] when we [disfmarker] [speaker001:] So the immediate problem is just deciding w which [disfmarker] [speaker004:] Aspects of the X schema to add. [speaker002:] Yeah, so the pro The immediate problem is [disfmarker] is back t t to what you were [disfmarker] what you are doing with the belief net. [speaker001:] Yeah. [speaker002:] You know, uh what are we going to use to make this decision [disfmarker] [speaker001:] Right and then, once we've made the decision, how do we put that into the content? [speaker002:] Yeah. Right. Right. Well, that [disfmarker] that actually is relatively easy in this case. [speaker001:] OK. [speaker002:] The harder problem is we decide what we want to use, how are we gonna get it? And that the [disfmarker] the [disfmarker] that's the hardest problem. 
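Once the decision is made, specializing the generalized "Go" X-schema amounts to supplying parameters, as described above. A hedged sketch of what that hand-off might look like; the slot names here are invented for illustration and are not the actual X-schema inventory.

```python
# Toy parameterization of a generalized "Go" schema. The slot names are
# illustrative assumptions, not the real X-schema parameters.
GO_SPECIALIZATIONS = {
    "Enter":    {"endpoint": "interior",      "needs_portal": True},
    "Vista":    {"endpoint": "viewpoint",     "needs_portal": False},
    "Approach": {"endpoint": "near_exterior", "needs_portal": False},
}

def specialize_go(action, source, goal):
    params = dict(GO_SPECIALIZATIONS[action])   # copy the template
    params.update({"schema": "Go", "source": source, "goal": goal})
    return params

print(specialize_go("Vista", "current location", "the castle"))
```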
So, the hardest problem is how are you going to get this information from some combination of the [disfmarker] what the person says and the context and the ontology. The h So, I think that's the hardest problem at the moment is [disfmarker] is [speaker001:] OK. [speaker002:] where are you gonna [disfmarker] how are you gonna g get this information. Um, and that's [disfmarker] so, getting back to here, uh, we have a d a technical problem with the belief nets that we [disfmarker] we don't want all the com [speaker001:] There's just too many factors right now. [speaker002:] too many factors if we [disfmarker] if we allow them to just go combinatorially. [speaker001:] Right. [speaker002:] So we wanna think about which ones we really care about and what they really most depend on, and can we c you know, clean this [disfmarker] this up to the point where it [disfmarker] [speaker001:] So what we really wanna do i cuz this is really just the three layer net, we wanna b make it [disfmarker] expand it out into more layers basically? [speaker002:] Right. We might. Uh, I mean that [disfmarker] that's certainly one thing we can do. Uh, it's true that the way you have this, a lot of the times you have [disfmarker] what you're having is the values rather than the variable. So uh [disfmarker] [speaker001:] Right. [speaker002:] OK? [speaker001:] So instead of in instead it should really be [disfmarker] just be "intention" as a node instead of "intention business" or "intention tour". [speaker002:] So you [disfmarker] Yeah, right, and then it would have values, uh, "Tour", "Business", or uh "Hurried". [speaker001:] Right. [speaker002:] But then [disfmarker] but i it still some knowledge design to do, about i how do you wanna break this up, what really matters. [speaker001:] Right. [speaker002:] I mean, it's fine. You know, we have to [disfmarker] it's [disfmarker] it's iterative. We're gonna have to work with it some. [speaker001:] I think what was going through my mind when I did it was someone could both have a business intention and a touring intention and the probabilities of both of them happening at the same time [disfmarker] [speaker002:] Well, you [disfmarker] you could do that. And it's perfectly OK [pause] to uh insist that [disfmarker] that, you know, th um, they add up to one, but that there's uh [disfmarker] that [disfmarker] that it doesn't have to be one zero zero. [speaker001:] Mmm. OK. [speaker002:] OK. So you could have the conditional p So the [disfmarker] each of these things is gonna be a [disfmarker] a [disfmarker] a probability. So whenever there's a choice, uh [disfmarker] so like landmark ness and usefulness, [speaker001:] Well, see I don't think those would be mutually [disfmarker] [speaker002:] OK [disfmarker] [speaker001:] it seems like something could both be [disfmarker] [speaker002:] Absolutely right. [speaker001:] OK. [speaker002:] And so that you might want to then have those b Th Then they may have to be separate. They may not be able to be values of the same variable. [speaker004:] Object type, mm hmm. [speaker002:] So that's [disfmarker] but again, this is [disfmarker] this is the sort of knowledge design you have to go through. Right. It's [disfmarker] you know, it's great [disfmarker] is [disfmarker] is, you know, as one step toward uh [disfmarker] toward where we wanna go. 
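The node-design point being made is easy to pin down: mutually exclusive alternatives become the values of one variable, whose probabilities sum to one without having to be one-zero-zero, while non-exclusive properties such as landmark-ness and usefulness must stay separate variables. A minimal illustration with made-up numbers.

```python
# One variable, three mutually exclusive values: a distribution summing to 1,
# but not forced to 1/0/0 -- mixed business-and-tour intentions are allowed.
intention = {"Tour": 0.5, "Business": 0.4, "Hurried": 0.1}
assert abs(sum(intention.values()) - 1.0) < 1e-9

# Non-exclusive properties are separate (here binary) variables, since an
# object can be both a landmark and useful at the same time.
p_landmark = 0.8   # P(object is a landmark)
p_useful   = 0.9   # P(object is useful for the current task)
```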
[speaker004:] Also it strikes me that we [disfmarker] we m may want to approach the point where we can sort of try to find a [disfmarker] uh, a specification for some interface, here that um takes the normal M three L, looks at it. Then we discussed in our pre edu [disfmarker] EDU meeting um how to ask the ontology, what to ask the ontology um the fact that we can pretend we have one, make a dummy until we get the real one, and so um we [disfmarker] we may wanna decide we can do this from here, but we also could do it um you know if we have a [disfmarker] a [disfmarker] a belief net interface. So the belief net takes as input, a vector, right? of stuff. And it [disfmarker] Yeah. And um it Output is whatever, as well. But this information is just M three L, and then we want to look up some more stuff in the ontology and we want to look up some more stuff in the [disfmarker] maybe we want to ask the real world, maybe you want to look something up in the GRS, but also we definitely want to look up in the dialogue history um some s some stuff. Based on we [disfmarker] we have uh [disfmarker] I was just made some examples from the ontology and so we have for example some information there that the town hall is both a [disfmarker] a [disfmarker] a building and it has doors and stuff like this, but it is also an institution, so it has a mayor and so forth and so forth and we get relations out of it and once we have them, we can use that information to look in the dialogue history, "were any of these things that [disfmarker] that are part of the town hall as an institution mentioned?", [speaker002:] Mm hmm. [speaker004:] "were any of these that make the town hall a building mentioned?", [speaker003:] Right. [speaker004:] and so forth, and maybe draw some inferences on that. So this may be a [disfmarker] a sort of a process of two to three steps before we get our vector, that we feed into the belief net, and then [disfmarker] [speaker002:] Yeah. I think that's [disfmarker] I think that's exactly right. There will be rules, but they aren't rules that come to final decisions, they're rules that gather information for a decision process. [speaker004:] Yeah. [speaker002:] Yeah, no I think that's [disfmarker] that's just fine. Uh, yeah. So they'll [disfmarker] they [disfmarker] presumably there'll be a thread or process or something that "Agent", yeah, "Agent", whatever you wan wanna say, yeah, that uh is rule driven, and can [disfmarker] can uh [disfmarker] can do things like that. And um there's an issue about whether there will be [disfmarker] that'll be the same agent and the one that then goes off and uh carries out the decision, so it probably will. My guess is it'll be the same basic agent that um can go off and get information, run it through a [disfmarker] a c this belief net that [disfmarker] turn a crank in the belief net, that'll come out with s uh more [disfmarker] another vector, OK, which can then be uh applied at what we would call the simulation or action end. So you now know what you're gonna do and that may actually involve getting more information. So on once you pull that out, it could be that that says "Ah! Now that we know that we gonna go ask the ontology something else." OK? Now that we know that it's a bus trip, OK? we didn't [disfmarker] We didn't need to know beforehand, uh how long the bus trip takes or whatever, but [disfmarker] but now that we know that's the way it's coming out then we gotta go find out more. [speaker004:] Mm hmm. [speaker002:] So I think that's OK. 
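The two-to-three-step gathering process just described, rules that collect information rather than make final decisions, can be pinned down as an interface sketch. A minimal version assuming a dummy ontology (the town-hall example above) and a dialogue history kept as a list of strings; all module names are placeholders.

```python
# Stub standing in for the real ontology until one exists.
def ontology_roles(entity):
    return {"town hall": ["building", "institution"]}.get(entity, [])

def mentioned_in_history(role_term, dialogue_history):
    # Were properties tied to this role mentioned in an earlier turn?
    return any(role_term in turn for turn in dialogue_history)

def build_feature_vector(m3l_entity, dialogue_history):
    roles = ontology_roles(m3l_entity)
    # Fixed feature slots, one per role the dummy ontology can assign.
    return [1.0 if r in roles and mentioned_in_history(r, dialogue_history)
            else 0.0
            for r in ["building", "institution"]]

history = ["is the institution open on sundays?"]
print(build_feature_vector("town hall", history))   # -> [0.0, 1.0]
```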
[speaker004:] Mm hmm. So this is actually, s if [disfmarker] if we were to build something that is um, and, uh, I had one more thing, the [disfmarker] it needs to do [disfmarker] Yeah. I think we [disfmarker] I [disfmarker] I can come up with a [disfmarker] a code for a module that we call the "cognitive dispatcher", which does nothing, [speaker002:] OK. [speaker004:] but it looks at complex object trees and decides how [disfmarker] are there parts missing that need to be filled out, there's [disfmarker] this is maybe something that this module can do, something that this module can do and then collect uh sub objects and then recombine them and put them together. So maybe this is actually some [disfmarker] some useful tool that we can use to rewrite it, and uh get this part, [speaker002:] Oh, OK. Uh. [speaker004:] then. Yeah. [speaker002:] I confess, I'm still not completely comfortable with the overall story. Um. I i This [disfmarker] this is not a complaint, this is a promise to do more work. So I'm gonna hafta think about it some more. Um. In particular [disfmarker] see what we'd like to do, and [disfmarker] and this has been implicit in the discussion, is to do this in such a way that you get a lot of re use. So. What you're trying to get out of this deep co cognitive linguistics is the fact that w if you know about source [disfmarker] source, paths and goals, and nnn [comment] all this sort of stuff, that a lot of this is the same, for different tasks. And that [disfmarker] uh there's [disfmarker] there's some [disfmarker] some important generalities that you're getting, so that you don't take each and every one of these tasks and hafta re do it. And I don't yet see how that goes. Alright. [speaker004:] There're no primitives upon which [pause] uh [speaker002:] u u What are the primitives, and how do you break this [disfmarker] [speaker004:] yeah. [speaker002:] So I y I'm just [disfmarker] just there saying eee [comment] well you [disfmarker] I know how to do any individual case, right? but I don't yet [disfmarker] see what's the really interesting question is can you use uh deep uh cognitive linguistics to [pause] get powerful generalizations. And [speaker004:] Yep. [speaker002:] um [speaker004:] Maybe we sho should we a add then the "what's this?" domain? N I mean, we have to "how do I get to X". Then we also have the "what's this?" domain, where we get some slightly different [disfmarker] [speaker003:] Right. [speaker002:] Could. Uh. [speaker004:] Um Johno, actually, does not allow us to call them "intentions" anymore. [speaker002:] Yeah. [speaker004:] So he [disfmarker] he dislikes the term. [speaker002:] Well, I [disfmarker] I don't like the term either, so I have n i uh i i y w i i It uh [disfmarker] [speaker004:] But um, I'm sure the "what's this?" questions also create some interesting X schema aspects. [speaker002:] Could be. [speaker004:] So. [speaker002:] I'm not a [disfmarker] I'm not op particularly opposed to adding that or any other task, I mean, eventually we're gonna want a whole range of them. [speaker004:] Mm hmm. [speaker003:] That's right. [speaker002:] Uh, I'm just saying that I'm gonna hafta do some sort of first principles thinking about this. I just at the moment don't know. [speaker004:] Mm hmm. [speaker002:] H No. Well, no the Bayes [disfmarker] the Bayes nets [disfmarker] The Bayes nets will be dec specific for each decision.
But what I'd like to be able to do is to have the way that you extract properties, that will go into different Bayes nets, be the [disfmarker] uh general. So that if you have sources, you have trajectors and stuff like that, and there's a language for talking about trajectors, you shouldn't have to do that differently for uh uh going to something, than for circling it, for uh telling someone else how to go there, [speaker004:] Getting out of [disfmarker] [speaker002:] whatever it is. So that [disfmarker] that, the [disfmarker] the decision processes are gonna be different What you'd really like of course is the same thing you'd always like which is that you have um a kind of intermediate representation which looks the same o over a bunch of inputs and a bunch of outputs. So all sorts of different tasks [pause] and all sorts of different ways of expressing them use a lot of the same mechanism for pulling out what are the fundamental things going on. And that's [disfmarker] that would be the really pretty result. And pushing it one step further, when you get to construction grammar and stuff, what you'd like to be able to do is say you have this parser which is much fancier than the parser that comes with uh SmartKom, i that [disfmarker] that actually uses constructions and is able to tell from this construction that there's uh something about the intent [disfmarker] you know, the actual what people wanna do or what they're referring to and stuff, in independent of whether it [disfmarker] about [disfmarker] what is this or where is it or something, that you could tell from the construction, you could pull out deep semantic information which you're gonna use in a general way. So that's the [disfmarker] You might. You might. You might be able to [disfmarker] to uh say that this i this is the kind of construction in which the [disfmarker] there's [disfmarker] Let's say there's a uh cont there [disfmarker] the [disfmarker] the land the construction implies the there's a con this thing is being viewed as a container. OK. So just from this local construction you know that you're gonna hafta treat it as a container you might as well go off and get that information. And that may effect the way you process everything else. So if you say "how do I get into the castle" OK, then um [disfmarker] Or, you know, "what is there in the castle" or [disfmarker] so there's all sorts of things you might ask that involve the castle as a container and you'd like to have this orthogonal so that anytime the castle's referred to as a container, you crank up the appropriate stuff. Independent of what the goal is, and independent of what the surrounding language is. [speaker004:] Mm hmm. [speaker002:] Alright, so that's [disfmarker] that's the [disfmarker] that's the thesis level [speaker004:] Mm hmm. [speaker002:] uh [disfmarker] [speaker004:] It's unfortunate also that English has sort of got rid of most of its spatial adverbs because they're really fancy then, in [disfmarker] in [disfmarker] for these kinds of analysis. But uh. [speaker002:] Well, you have prepositional phrases that [disfmarker] [speaker004:] Yeah, but they're [disfmarker] they're easier for parsers. [speaker002:] Right. [speaker004:] Parsers can pick those up but [disfmarker] but the [disfmarker] with the spatial adverbs, they have a tough time. Because the [disfmarker] mean the semantics are very complex in that. [speaker002:] Right. [speaker004:] OK, yeah? I had one more [pause] thing. I don't remember. I just forgot it again. No. 
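The container example lends itself to a toy demonstration of the orthogonality being asked for: one trigger, fired by the construction alone, independent of whether the task is route-finding or a "what's this?" question. The regular expression below is a crude stand-in for a construction parser, not a claim about how one would really work.

```python
import re

# Crude stand-in for a construction parser: these prepositions mark the
# landmark as being construed as a container.
CONTAINER = re.compile(r"\b(?:into|inside|in)\s+the\s+(\w+)")

def container_construals(utterance):
    return CONTAINER.findall(utterance)

for u in ["how do I get into the castle",
          "what is there in the castle"]:
    print(container_construals(u))   # both fire the same container trigger
```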
Oh yeah, b But an architecture like this would also enable us maybe to [disfmarker] to throw this away and [disfmarker] and replace it with something else, or whatever, so that we have [disfmarker] so that this is sort of the representational formats we're [disfmarker] we're [disfmarker] we're talking about that are independent of the problem, that generalize over those problems, and are oh, t of a higher quality than an any actual whatever um belief net, or "X" that we may use for the decision making, ultimately. Should be decoupled, yeah. OK. [speaker002:] Right. So, are we gonna be meeting here from now on? I'm [disfmarker] I'm happy to do that. We [disfmarker] we had talked about it, cuz you have th th the display and everything, that seems fine. [speaker004:] Yeah, um, Liz also asks whether we're gonna have presentations every time. I don't think we will need to do that but it's [disfmarker] [speaker002:] Right. [speaker004:] so far I think it was nice as a visual aid for some things and [disfmarker] and [disfmarker] [speaker002:] Oh yeah. No I [disfmarker] I think it's worth it to ass to meet here to bring this, and assume that something may come up that we wanna look at. [speaker004:] Yeah. [speaker002:] I mean. Why not. [speaker004:] And um. Yeah, that was my [disfmarker] [speaker002:] She was good. Litonya was good. [speaker004:] Yeah? The uh [disfmarker] um, she w she was definitely good in the sense that she [disfmarker] she showed us some of the weaknesses [speaker002:] Right. [speaker004:] and um also the um [disfmarker] [vocalsound] the fact that she was a real subject you know, is [disfmarker] is [disfmarker] [speaker002:] Right. Yeah, and [disfmarker] and [disfmarker] and [disfmarker] yeah and [disfmarker] and she took it seriously and stuff l No, it was great. [speaker004:] Yeah. [speaker002:] Yeah. [speaker004:] So I think that um [disfmarker] I mean, w Looking [disfmarker] just looking at this data, listening to it, what can we get out of it in terms of our problem, for example, is, you know, she actually m said [disfmarker] you know, she never s just spoke about entering, she just wanted to get someplace, and she said for buying stuff. Nuh? So this is definitely interesting, and [disfmarker] [speaker003:] Yeah, right. [speaker004:] Um, and in the other case, where she wanted to look at the stuff at the graffiti, also, of course, not in the sentence "How do you get there?" was pretty standard. Nuh? except that there was a nice anaphora, you know, for pointing at what she talked about before, and there she was talking about looking at pictures that are painted inside a wall on walls, so [speaker003:] Right. [speaker004:] Actually, you'd need a lot of world knowledge. This would have been a classical um uh "Tango", actually. Um, because graffiti is usually found on the outside and not on the inside, [speaker003:] Yeah. [speaker004:] but OK. So the mistake [comment] would have make a mistake [disfmarker] the system would have made a mistake here. [speaker003:] Yep. [speaker002:] Click? Alright. [speaker005:] OK, we're on. [speaker002:] OK. [speaker005:] So, I mean, everyone who's on the wireless check that they're on. [speaker006:] C we [disfmarker] [speaker007:] Alright. [speaker003:] I see. [speaker006:] Yeah. [speaker003:] Yeah. [speaker005:] OK, our agenda was quite short. [speaker002:] Oh, could you [pause] close the door, maybe? [speaker005:] Sure. [speaker002:] Yeah. 
[speaker005:] Two items, which was, uh, digits and possibly stuff on [disfmarker] on, uh, forced alignment, which Jane said that Liz and Andreas had in information on, [speaker006:] Mm hmm. [speaker005:] but they didn't, so. [speaker002:] I guess the only other thing, uh, for which I [disfmarker] [speaker006:] We should do that second, because Liz might join us in time for that. [speaker005:] OK. [speaker002:] Um. OK, so there's digits, alignments, and, um, I guess the other thing, [vocalsound] which I came unprepared for, uh, [vocalsound] is, uh, to dis s s see if there's anything anybody wants to discuss about the Saturday meeting. [speaker005:] Right. [speaker002:] So. Any [disfmarker] I mean, maybe not. [speaker005:] Digits and alignments. But [disfmarker] [speaker002:] Uh. [speaker006:] Talk about aligning people's schedules. [speaker002:] Yeah. [speaker005:] Yeah. [speaker003:] Mm hmm. [speaker002:] Yeah. I mean [disfmarker] Right. Yeah, I mean, it was [disfmarker] [speaker005:] Yeah, it's forced alignment of people's schedules. [speaker006:] Yeah. If we're very [disfmarker] [speaker004:] Forced align. [speaker002:] Yeah. [speaker006:] Yeah. [speaker002:] With [disfmarker] with [disfmarker] whatever it was, a month and a half or something ahead of time, the only time we could find in common [disfmarker] roughly in common, was on a Saturday. [speaker004:] Yeah. [speaker005:] Yep. [speaker002:] Ugh. [speaker006:] It's pretty sad. [speaker002:] Yeah. [speaker006:] Yeah. [speaker003:] Have [disfmarker] Have we thought about having a conference call to include him in more of [disfmarker] [vocalsound] in more of the meeting? I [disfmarker] I mean, I don't know, if we had the [disfmarker] if we had the telephone on the table [disfmarker] [speaker002:] No. But, h I mean, he probably has to go do something. [speaker006:] No, actually I [disfmarker] I have to [disfmarker] I have to shuttle [pause] kids from various places to various other places. [speaker002:] Right? [speaker003:] I see. OK. [speaker006:] So. [speaker002:] Yeah. [speaker006:] And I don't have [disfmarker] and I don't, um, have a cell phone [speaker004:] A cell phone? [speaker006:] so I can't be having a conference call while driving. [speaker003:] No. [comment] It's not good. [speaker002:] R r right. [speaker003:] That's not good. [speaker002:] So we have to [disfmarker] we [disfmarker] [speaker006:] Plus, it would make for interesting noise [disfmarker] background noise. [speaker005:] Yep. [speaker006:] Uh [disfmarker] [speaker002:] So we have to equip him with a [disfmarker] with a [disfmarker] [vocalsound] with a head mounted, uh, cell phone [speaker005:] Ye we and we'd have to force you to read lots and lots of digits, [speaker002:] and [disfmarker] [speaker006:] Oh, yeah. [speaker005:] so it could get real [disfmarker] [vocalsound] real car noise. [speaker004:] Yeah. [speaker006:] Oh, yeah. [speaker007:] Take advantage. [speaker004:] And with the kids in the background. [speaker006:] I'll let [disfmarker] I'd let [disfmarker] [speaker004:] Yeah. [speaker006:] I let, uh, my five year old have a try at the digits, eh. [speaker002:] Yeah. [speaker005:] So, anyway, I can talk about digits. Um, did everyone get the results or shall I go over them again? I mean that it was basically [disfmarker] the only thing that was even slightly surprising was that the lapel did so well. Um, and in retrospect that's not as surprising as maybe i it shouldn't have been as surprising as I [disfmarker] as [disfmarker] as I felt it was. 
The lapel mike is a very high quality microphone. And as Morgan pointed out, that there are actually some advantages to it in terms of breath noises and clothes rustling [pause] if no one else is talking. [speaker006:] Exactly. [speaker004:] Yeah. [speaker007:] Mm hmm. [speaker005:] Um, so, uh [disfmarker] [speaker007:] It's g it [disfmarker] [speaker002:] Well, it's [disfmarker] Yeah, sort of the bre the breath noises and the mouth clicks and so forth like that, the lapel's gonna be better on. [speaker004:] Or the cross talk. Yeah. [speaker002:] The lapel is typically worse on the [disfmarker] on clothes rustling, but if no one's rustling their clothes, [speaker005:] Right. I mean, a lot of people are just sort of leaning over and reading the digits, [speaker002:] it's [disfmarker] it's [disfmarker] [speaker005:] so it's [disfmarker] it's a very different task than sort of the natural. [speaker004:] Yeah. You don't move much during reading digits, I think. [speaker005:] So. [speaker002:] Yeah. Yeah. [speaker005:] Right. [speaker007:] Probably the fact that it picks up other people's speakers [disfmarker] other people's talking is an indication of that it [disfmarker] the fact it is a good microphone. [speaker004:] Yeah. [speaker005:] Right. [speaker002:] Right. So in the digits, in most [disfmarker] most cases, there weren't other people talking. [speaker007:] So. [speaker005:] Right. [speaker006:] D do the lapel mikes have any directionality to them? [speaker002:] So. There typically don't, no. [speaker006:] Because I [disfmarker] I suppose you could make some that have sort of [disfmarker] that you have to orient towards your mouth, and then it would [disfmarker] [speaker005:] They have a little bit, but they're not noise cancelling. So, uh [disfmarker] [speaker002:] They're [disfmarker] they're intended to be omni directional. [speaker005:] Right. [speaker006:] Mm hmm. [speaker002:] And th it's [disfmarker] and because you don't know how people are gonna put them on, you know. [speaker005:] Right. So, also, Andreas, on that one the [disfmarker] the back part of it should be right against your head. And that will he keep it from flopping aro up and down as much. [speaker006:] It is against my head. [speaker005:] OK. [speaker002:] Yeah. Um. Yeah, we actually talked about this in the, uh, front end meeting this morning, too. [speaker005:] Uh huh. [speaker002:] Much the same thing, and [disfmarker] and it was [disfmarker] uh, I mean, there the point of interest to the group was primarily that, um, [vocalsound] the, uh [disfmarker] the system that we had that was based on H T K, that's used by, you know, [pause] all the participants in Aurora, [vocalsound] was so much worse [vocalsound] than the [disfmarker] than the S R [speaker005:] Everybody. [speaker002:] And the interesting thing is that even though, [vocalsound] yes, it's a digits task and that's a relatively small number of words and there's a bunch of digits that you train on, [vocalsound] it's just not as good as having a [disfmarker] a l very large amount of data and training up a [disfmarker] a [disfmarker] a nice good big [vocalsound] HMM. Um, also you had the adaptation in the SRI system, which we didn't have in this. Um. So. Um. [speaker006:] And we know [disfmarker] Di did I send you some results without adaptation? [speaker005:] No. Or if you did, I didn't include them, [speaker002:] I s I think Stephane, uh, had seen them. 
[speaker005:] cuz it was [disfmarker] [speaker002:] So [disfmarker] [speaker006:] Yeah, I think I did, actually. So there was a significant loss from not doing the adaptation. [speaker002:] Yeah. [speaker006:] Um. A [disfmarker] a [disfmarker] a couple percent or some I mean [disfmarker] Well, I don't know it [disfmarker] Overall [disfmarker] Uh, I [disfmarker] I don't remember, but there was [disfmarker] [nonvocalsound] there was a significant, um, loss or win [comment] from adaptation [disfmarker] with [disfmarker] with adaptation. And, um, that was the phone loop adaptation. And then there was a very small [disfmarker] like point one percent on the natives [disfmarker] uh, win from doing, um, you know, adaptation to [pause] the recognition hypotheses. And [pause] I tried both means adaptation and means and variances, and the variances added another [disfmarker] or subtracted another point one percent. So, [vocalsound] it's, um [disfmarker] that's the number there. Point six, I believe, is what you get with both, uh, means and variance adaptation. [speaker005:] Right. [speaker002:] But I think one thing is that, uh, I would presume [disfmarker] Hav Have you ever t [vocalsound] Have you ever tried this exact same recognizer out on the actual TI digits test set? [speaker006:] This exact same recognizer? No. [speaker002:] It might be interesting to do that. Cuz my [disfmarker] my [disfmarker] cuz my sense, um [disfmarker] [speaker006:] But [disfmarker] but, I have [disfmarker] I mean, people [disfmarker] people at SRI are actually working on digits. [speaker005:] I bet it would do even slightly better. [speaker006:] I could [disfmarker] and they are using a system that's, um [disfmarker] you know, h is actually trained on digits, um, but h h otherwise uses the same, you know, decoder, the same, uh, training methods, and so forth, [speaker002:] Mm hmm. [speaker006:] and I could ask them what they get [pause] on TI digits. [speaker002:] Yeah, bu although I'd be [disfmarker] I think it'd be interesting to just take this exact actual system so that these numbers were comparable [speaker006:] Mm hmm. Well, Adam knows how to run it, [speaker002:] and try it out on TI digits. [speaker005:] Yeah. No problem. [speaker006:] so you just make a f [speaker002:] Yeah. Yeah. Yeah. Cuz our sense from the other [disfmarker] from the Aurora, uh, task is that [disfmarker] [speaker005:] And try it with TI digits? [speaker006:] Mm hmm. [speaker002:] I mean, cuz we were getting sub one percent [vocalsound] numbers on TI digits also with the tandem thing. [speaker006:] Mm hmm. Mmm. [speaker002:] So, [vocalsound] one [disfmarker] so there were a number of things we noted from this. One is, yeah, the SRI system is a lot better than the HTK [disfmarker] [speaker006:] Hmm. [speaker002:] this, you know, very limited training HTK system. [speaker006:] Mm hmm. [speaker002:] Uh, but the other is that, um, the digits [vocalsound] recorded here in this room with these close mikes, i uh, are actually a lot harder than the [pause] studio recording TI digits. [speaker006:] Mm hmm. [speaker002:] I think, you know, one reason for that, uh, might be that there's still [disfmarker] even though it's close talking, there still is some noise and some room acoustics. [speaker006:] Mm hmm. 
[speaker002:] And another might be that, uh, I'd [disfmarker] I would presume that in the studio, uh, uh, situation recording read speech that if somebody did something a little funny or n pronounced something a little funny or made a little [disfmarker] that they didn't include it, [speaker005:] They didn't include it. [speaker002:] they made them do it again. [speaker005:] Whereas, I took out [pause] the ones that I noticed that were blatant [disfmarker] that were correctable. [speaker002:] Mmm. Yeah. [speaker005:] So that, if someone just read the wrong digit, I corrected it. And then there was another one where Jose couldn't tell whether [disfmarker] I couldn't tell whether he was saying zero or six. [speaker002:] Yeah. [speaker005:] And I asked him and he couldn't tell either. [speaker009:] Hmm. [speaker005:] So I just cut it out. You know, so I just e edited out the first, i uh, word of the utterance. [speaker002:] Yeah. [speaker005:] Um, so there's a little bit of correction but it's definitely not as clean as TI digits. So my expectations is TI digits would, especially [disfmarker] I think TI digits is all [pause] American English. Right? So it would probably do even a little better still [speaker002:] Mm hmm. [speaker005:] on the SRI system, but we could give it a try. [speaker006:] Well. But [pause] remember, we're using a telephone bandwidth front end here, uh, on this, uh [disfmarker] on this SRI system, so, [vocalsound] um, I was [disfmarker] I thought that maybe that's actually a good thing because it [disfmarker] it gets rid of some of the [disfmarker] uh, the noises, um, you know, in the [disfmarker] the [disfmarker] below and above the [disfmarker] um, the, you know, speech bandwidth [speaker002:] Mm hmm. Mm hmm. [speaker006:] and, um, I suspect that to get sort of the last bit out of these higher quality recordings you would have to in fact, uh, use models that, uh, were trained on wider band data. And of course we can't do that or [disfmarker] [speaker005:] Wha what's TI digits? I thought t [speaker002:] It's wide band, yeah. It's [disfmarker] in [disfmarker] in fact, we looked it up [speaker005:] It is wide band. OK. [speaker002:] and it was actually twenty kilohertz sampling. [speaker005:] Oh, that's right. I [disfmarker] I did look that up. [speaker006:] Mm hmm. [speaker005:] I couldn't remember whether that was TI digits or one of the other digit tasks. [speaker002:] Yeah. [speaker006:] Right. But [disfmarker] but, I would [disfmarker] Yeah. It's [disfmarker] it's easy enough to try, just run it on [disfmarker] [speaker005:] Mm hmm. [speaker002:] Yeah. [speaker005:] So, Morgan, you're getting a little breath noise. [speaker002:] See w [speaker006:] Now, eh, does [disfmarker] one [disfmarker] one issue [disfmarker] [speaker005:] You might wanna move the mike down a little bit. [speaker006:] one issue with [disfmarker] with that is that [vocalsound] um, the system has this, uh, notion of a speaker to [disfmarker] which is used in adaptation, variance norm uh, you know, both in, uh, mean and variance normalization and also in the VTL [pause] estimation. [speaker002:] Mm hmm. [speaker006:] So [disfmarker] [speaker005:] Yeah, I noticed the script that extracted it. [speaker006:] Do y? Is [disfmarker]? So does [disfmarker] so th so does [disfmarker] does, um, [vocalsound] the TI digits database have speakers that are known? [speaker005:] Yep. Yep. 
[speaker006:] And is there [disfmarker] is there enough data or a comparable [disfmarker] comparable amount of data to [disfmarker] to what we have in our recordings here? [speaker005:] That I don't know. I don't know. I don't know how many speakers there are, and [disfmarker] and how many speakers per utterance. [speaker006:] OK. [speaker002:] Yeah. Well, the other thing would be to do it without the adaptation and compare to these numbers without the adaptation. That would [disfmarker] [speaker006:] Right. Uh, but I'm not so much worried about the adaptation, actually, than [disfmarker] than the, um, [vocalsound] um [disfmarker] the, uh, VTL estimation. [speaker005:] Right. [speaker006:] If you have only one utterance per speaker you might actually screw up on estimating the [disfmarker] the warping, uh, factor. So, um [disfmarker] [speaker005:] I strongly suspect that they have more speakers than we do. [speaker006:] Right. But it's not the amount of speakers, [speaker005:] So, uh [disfmarker] [speaker006:] it's the num it's the amount of data per speaker. [speaker005:] Right. So we [disfmarker] we could probably do an extraction that was roughly equivalent. [speaker006:] Right. Right. [speaker005:] Um. [speaker006:] So [disfmarker] [speaker005:] So, although I [disfmarker] I sort of know how to run it, there are a little [disfmarker] a f few details here and there that I'll have to [pause] dig out. [speaker006:] OK. The key [disfmarker] So th the system actually extracts the speaker ID from the waveform names. [speaker005:] Right. I saw that. [speaker006:] And there's a [disfmarker] there's a script [disfmarker] and that is actually all in one script. So there's this one script that parses waveform names and extracts things like the, um, speaker, uh, ID or something that can stand in as a speaker ID. So, we might have to modify that script to recognize the, um, speakers, [vocalsound] um, in the [disfmarker] in the, uh, um, [vocalsound] TI digits [pause] database. [speaker005:] Right. Right. And that, uh [disfmarker] [speaker006:] Or you can fake [disfmarker] you can fake [pause] names for these waveforms that resemble the names that we use here for the [disfmarker] for the meetings. [speaker005:] Right. [speaker006:] That would be the, sort of [disfmarker] probably the safest way to do [disfmarker] [speaker005:] I might have to do that anyway to [disfmarker] to do [disfmarker] because we may have to do an extract to get the [pause] amount of data per speaker about right. [speaker006:] Uh huh. [speaker005:] The other thing is, isn't TI digits isolated digits? [speaker006:] Right. [speaker005:] Or is that another one? I'm [disfmarker] I looked through a bunch of the digits t corp corpora, and now they're all blurring. [speaker002:] Mm hmm. [speaker005:] Cuz one of them was literally people reading a single digit. And then others were connected digits. [speaker002:] Yeah. Most of TI digits is connected digits, I think. [speaker005:] OK. [speaker002:] The [disfmarker] I mean, we had a Bellcore corpus that we were using. [speaker005:] Maybe it's the Bell Gram. [speaker002:] It was [disfmarker] [vocalsound] that's [disfmarker] that was isolated digits. [speaker005:] Bell Digits. Alright. [speaker006:] By the way, I think we can improve these numbers if we care to compr improve them [vocalsound] by, um, [vocalsound] not starting with the Switchboard models but by taking the Switchboard models and doing supervised adaptation on a small amount of digit data collected in this setting. 
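The "fake names" workaround mentioned here is straightforward to sketch: rename the TI digits waveforms so the existing name-parsing script finds a speaker field where it expects one. The underscore-delimited layout below is an assumption made for illustration; the actual script's naming convention is not specified in the meeting.

```python
# Hypothetical convention: the speaker ID is the second underscore-delimited
# field of the waveform name, standing in for the real script's format.
def fake_meeting_name(speaker_id, utterance_id):
    return f"tidigits_{speaker_id}_{utterance_id}.wav"

def speaker_from_name(waveform_name):
    return waveform_name.split("_")[1]

name = fake_meeting_name("ae", "o1a")          # -> "tidigits_ae_o1a.wav"
assert speaker_from_name(name) == "ae"
```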
[speaker002:] Um. [speaker005:] Yep. [speaker006:] Because that would adapt your models to the room acoustics and f for the far field microphones, you know, to the noise. And that should really improve things, um, further. And then you use those adapted models, which are not speaker adapted but sort of acous you know, channel adapted [disfmarker] [speaker005:] Channel adapted. [speaker006:] use that as the starting models for your speaker adaptation. [speaker002:] Yeah. [vocalsound] But the thing is, uh [disfmarker] I mean, w when you [disfmarker] it depends whether you're ju were just using this as a [disfmarker] [vocalsound] a starter task for [disfmarker] you know, to get things going for conversational or if we're really interested i in connected digits. And I [disfmarker] I think the answer is both. [speaker006:] Well, I don't know. [speaker002:] And for [disfmarker] for connected digits over the telephone you don't actually want to put a whole lot of effort into adaptation because [vocalsound] somebody [pause] gets on the phone and says a number and then you just want it. You don't [disfmarker] don't, uh [disfmarker] [speaker003:] This is [disfmarker] this [disfmarker] that one's better. [speaker006:] Right. [speaker003:] Mm hmm. [speaker006:] Um, but, you know, I [disfmarker] uh, my impression was that you were actually interested in the far field microphone, uh, problem, I mean. So, you want to [disfmarker] you want to [disfmarker] That's the obvious thing to try. [speaker003:] Oh. Oh. [speaker006:] Right? Then, eh [disfmarker] because you [disfmarker] you don't have any [disfmarker] [speaker002:] Right. [speaker003:] Yeah. [speaker006:] That's where the most m acoustic mismatch is between the currently used models and the [disfmarker] the r the set up here. [speaker002:] Right. [speaker006:] So. [speaker002:] Yeah. So that'd be anoth another interesting data point. [speaker006:] Mm hmm. [speaker002:] I mean, I [disfmarker] I guess I'm saying I don't know if we'd want to do that as the [disfmarker] as [disfmarker] [speaker005:] Other way. [speaker004:] Other way. [speaker005:] Liz [disfmarker] [speaker001:] Now you're all watching me. [speaker005:] It f it clips over your ears. [speaker001:] Alright. This way. [speaker005:] There you go. [speaker003:] If you have a strong fe if you have a strong preference, you could use this. [speaker001:] You're all watching. This is terrible. [speaker003:] It's just we [disfmarker] we think it has some spikes. So, uh, we [disfmarker] we didn't use that one. [speaker001:] I'll get it. [speaker003:] But you could if you want. [speaker002:] Yeah. At any rate, I don't know if w [speaker003:] I don't know. And Andre Andreas, your [disfmarker] your microphone's a little bit low. [speaker006:] It is? [speaker002:] Yeah. [speaker003:] Yeah. [speaker002:] I don't know if we wanna use that as the [disfmarker] [speaker006:] Uh. [speaker005:] Uh, it pivots. [speaker003:] So if you see the picture [speaker006:] I I [disfmarker] [speaker005:] It [disfmarker] it [disfmarker] like this. [speaker003:] and then you have to scr [speaker006:] I [disfmarker] I already adjusted this a number of times. [speaker005:] Eh. [speaker006:] I [disfmarker] I [speaker005:] Yeah, I think these mikes are not working as well as I would like. [speaker006:] can't quite seem to [disfmarker] Yeah, I think this contraption around your head is not [pause] working so well. [speaker002:] Too many adju too many adjustments. Yeah. 
Anyway, what I was saying is that I [disfmarker] I think I probably wouldn't want to see that as sort of like the norm, that we compared all things to. [speaker003:] That looks good. Yeah. [speaker002:] To, uh, the [disfmarker] to have [disfmarker] have all this ad all this, uh, adaptation. But I think it's an important data point, if you're [disfmarker] if [disfmarker] [speaker006:] Right. [speaker002:] Yeah. Um. The other thing that [disfmarker] that, uh [disfmarker] of course, what Barry was looking at was [disfmarker] was just that, the near versus far. And, yeah, the adaptation would get [vocalsound] th some of that. [speaker006:] Mm hmm. [speaker002:] But, I think even [disfmarker] even if there was, uh, only a factor of two or something, like I was saying in the email, I think that's [disfmarker] [vocalsound] that's a big factor. [speaker006:] Mm hmm. [speaker002:] So [disfmarker] [speaker005:] Liz, you could also just use the other mike if you're having problems with that one. [speaker002:] N [speaker003:] Well. [speaker001:] OK. [speaker003:] Yeah. This would be OK. We [disfmarker] we [disfmarker] we think that this has spikes on it, [speaker001:] It's this thing's [disfmarker] This is too big for my head. [speaker003:] so it's not as good acoustically, [speaker006:] Yeah, basically your ears are too big. [speaker003:] but [disfmarker] [speaker006:] I mean, mine are too. E th everybody's ears are too big for these things. [speaker001:] No, my [disfmarker] my [disfmarker] But this is too big for my head. [speaker006:] Uh [disfmarker] [speaker001:] So, I mean, [comment] [comment] it doesn't [disfmarker] you know, it's sit [speaker003:] Well, if you'd rather have this one then it's [disfmarker] [speaker001:] OK. [speaker002:] Yeah. [speaker005:] Oh, well. [speaker002:] It's [pause] great. [speaker005:] So the [disfmarker] To get that, uh, pivoted this way, it pivots like this. [speaker001:] No this way. [speaker005:] Yeah. [speaker001:] Yeah. [speaker005:] There you go. [speaker003:] And there's a screw that you can tighten. [speaker005:] And then it [disfmarker] Right. [speaker001:] Right. I already [pause] tried to get it close. [speaker003:] Good. [speaker005:] So if it doesn't bounce around too much, that's actually good placement. [speaker003:] That looks good. [speaker001:] OK. [speaker005:] But it looks like it's gonna bounce a lot. [speaker002:] So, where were we? Uh [disfmarker] [vocalsound] Yeah. [speaker003:] Yeah. [speaker005:] Digits. Adaptation. [speaker002:] Uh, adaptation, non adaptation, um, factor of two, um [disfmarker] [speaker006:] What k u By the way, wh what factor of two did you [disfmarker]? [speaker002:] Oh, yeah. I know what I was go w [speaker006:] I mean [disfmarker] [speaker002:] Oh, no, no. It's tha that [disfmarker] that we were saying, you know, well is [disfmarker] how much worse is far than near, you know. [speaker006:] Oh, th OK. [speaker002:] And I mean it depends on which one you're looking at, [speaker006:] That factor of two. Mm hmm. [speaker002:] but for the everybody, it's [vocalsound] little under a factor or two. Yeah. I [disfmarker] I know what I was thinking was that maybe, uh, i i we could actually t t try at least looking at, uh, some of the [disfmarker] the large vocabulary speech from a far microphone, at least from the good one. [speaker006:] Mm hmm. [speaker002:] I mean, before I thought we'd get, you know, a hundred and fifty percent error or something, [speaker006:] Mm hmm. 
[speaker002:] but if [disfmarker] [vocalsound] if, uh [disfmarker] if we're getting thirty five, forty percent or something, [vocalsound] u um [disfmarker] [speaker001:] Actually if you run, though, on a close talking mike over the whole meeting, during all those silences, you get, like, four hundred percent word error. [speaker002:] Mm hmm. Right. I understand. But doing the same kind of limited thing [disfmarker] [speaker001:] Or [disfmarker] or some high number. [speaker002:] Yeah, sure. Get all these insertions. But I'm saying if you do the same kind of limited thing [vocalsound] as people have done in Switchboard evaluations or as [disfmarker] a [speaker001:] Yeah. Where you know who the speaker is and there's no overlap? [speaker002:] Yeah. [speaker001:] And you do just the far field for those regions? [speaker002:] Yeah. The same sort of numbers that we got those graphs from. [speaker005:] Could we do exactly the same thing that we're doing now, but do it with a far field mike? [speaker002:] Right? Yeah, do it with one of [disfmarker] on [speaker005:] Cuz we extract the times from the near field mike, but you use the acoustics from the far field mike. [speaker001:] Right. I understand that. I just meant that [disfmarker] so you have [pause] three choices. There's, um [disfmarker] You can use times where that person is talking only from the transcripts but the segmentations were [disfmarker] were synchronized. Or you can do a forced alignment on the close talking to determine that, the you know, within this segment, these really were the times that this person was talking and elsewhere in the segment other people are overlapping and just front end those pieces. Or you can run it on the whole data, which is [disfmarker] which is, you know, a [disfmarker] [speaker002:] But [disfmarker] but [disfmarker] but how did we get the [disfmarker] how did we determine the links, uh, that we're testing on in the stuff we reported? [speaker001:] In the H L T paper we took [pause] segments that are channel [disfmarker] time aligned, which is now h being changed in the transcription process, which is good, and we took cases where the transcribers said there was only one person talking here, because no one else had time [disfmarker] any words in that segment and called that "non overlap". [speaker002:] And tha And that's what we were getting those numbers from. [speaker001:] Yes. [speaker002:] Right. [speaker001:] Tho good [disfmarker] the good numbers. The bad numbers were from [pause] the segments where there was overlap. [speaker002:] Well, we could start with the good ones. But anyway [disfmarker] so I think that we should try it once with [vocalsound] the same conditions that were used to create those, [speaker001:] Yeah. [speaker002:] and in those same segments just use one of the P Z [speaker001:] Right. So we [disfmarker] we can do that. [speaker002:] And then, you know, I mean, the thing is if we were getting, uh [disfmarker] what, thirty five, forty percent, something like that on [disfmarker] on that particular set, [speaker001:] Yeah. [speaker002:] uh, does it go to seventy or eighty? Or, does it use up so much memory we can't decode it? [speaker001:] Right. It might also depend on which speaker th it is and how close they are to the PZM? [speaker002:] Uh [disfmarker] [speaker001:] I don't know how different they are from each other. [speaker006:] You want to probably choose the PZM channel that is closest to the speaker. 
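The "good numbers" condition described above, time-aligned segments in which only one person has any words, is mechanical to reproduce. A sketch over a toy transcript layout; the record format is invented for illustration.

```python
# Toy transcript: one record per (segment, channel) with the words spoken.
records = [
    {"seg": 1, "channel": "A", "words": ["how", "do", "I", "get", "there"]},
    {"seg": 1, "channel": "B", "words": []},
    {"seg": 2, "channel": "A", "words": ["mixed"]},
    {"seg": 2, "channel": "B", "words": ["mixed", "signal"]},
]

def non_overlap_segments(records):
    by_seg = {}
    for r in records:
        by_seg.setdefault(r["seg"], []).append(r)
    # Keep segments where exactly one channel carries any words at all.
    return [seg for seg, recs in by_seg.items()
            if sum(bool(r["words"]) for r in recs) == 1]

print(non_overlap_segments(records))   # -> [1]; segment 2 is overlap
```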
[speaker001:] To be best [disfmarker] [speaker005:] For this particular digit ones, I just picked that one. [speaker004:] Yeah. [speaker001:] f [speaker002:] Well [disfmarker] [speaker001:] OK. [speaker006:] Oh, OK. [speaker001:] So we would then use that one, too, [speaker005:] So [disfmarker] [speaker002:] This is kind of central. [speaker001:] or [disfmarker]? [speaker002:] You know, it's [disfmarker] so i but I would [disfmarker] I'd pick that one. It'll be less good for some people than for other, but I [disfmarker] I'd like to see it on the same [disfmarker] exact same data set that [disfmarker] that we did the other thing on. [speaker005:] Actually [disfmarker] I sh actually should've picked a different one, [speaker002:] Right? [speaker005:] because [pause] that could be why the PDA is worse. Because it's further away from most of the people reading digits. [speaker004:] It's further away. Yeah. Yeah. [speaker002:] That's probably one of the reasons. [speaker003:] Hmm. Mm hmm. [speaker001:] Well, yeah. You could look at, I guess, that PZM or something. [speaker005:] Yep. [speaker002:] But the other is, it's very, uh [disfmarker] I mean, even though there's [disfmarker] I'm sure the f f the [disfmarker] the SRI, uh, front end has some kind of pre emphasis, it's [disfmarker] it's, uh [disfmarker] [vocalsound] still, th it's picking up lots of low frequency energy. [speaker006:] Mm hmm. [speaker002:] So, even discriminating against it, I'm sure some of it's getting through. Um. But, yeah, you're right. Prob a part of it is just the distance. [speaker001:] And aren't these pretty bad microphones? [speaker005:] Yep. [speaker001:] I mean [disfmarker] [speaker002:] Well, they're bad. But, I mean, if you listen to it, it sounds OK. You know? [speaker005:] Yeah. When you listen to it, uh, the PZM and the PDA [disfmarker] Yeah, th the PDA has higher sound floor [speaker002:] u Yeah. [speaker005:] but not by a lot. It's really pretty [disfmarker] uh, pretty much the same. [speaker001:] I just remember you saying you got them to be cheap on purpose. Cheap in terms of their quality. So. [speaker002:] Well, they're [pause] twenty five cents or so. [speaker005:] Th we wanted them to be [disfmarker] to be typical of what would be in a PDA. [speaker002:] Yeah. [speaker005:] So they are [disfmarker] they're not the PZM three hundred dollar type. [speaker001:] Mm hmm. [speaker005:] They're the twenty five cent, [speaker002:] Yeah. [speaker005:] buy them in packs of thousand type. [speaker002:] But, I mean, the thing is people use those little mikes for everything [speaker001:] I see. [speaker005:] Everything. [speaker002:] because they're really not bad. I mean, if you're not [vocalsound] doing something ridiculous like feeding it to a speech recognizer, they [disfmarker] they [disfmarker] [vocalsound] they [disfmarker] you know, you can hear the sou hear the sounds just fine. [speaker001:] Mm hmm. Right. [speaker002:] You know, it's [disfmarker] They [disfmarker] I mean, i it's more or less the same principles as these other mikes are built under, it's just that [pause] there's less quality control. They just, you know, churn them out and don't check them. Um. So. So that was [disfmarker] Yeah. So that was i interesting result. So like I said, the front end guys are very much interested in [disfmarker] in this is as [disfmarker] as well and [speaker006:] So [disfmarker] so, but where is this now? I mean, what's [disfmarker] where do we go from here? I mean, [speaker005:] Yeah. 
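The pre-emphasis referred to here is the standard first-order high-pass filter most front ends apply, which is exactly what discriminates against the low-frequency energy a distant microphone picks up. A minimal sketch; 0.97 is the conventional default coefficient, not a value taken from the SRI front end.

```python
def pre_emphasis(samples, alpha=0.97):
    """y[n] = x[n] - alpha * x[n-1]: boosts highs, attenuates lows."""
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(samples[n] - alpha * samples[n - 1])
    return out

# A constant (0 Hz) signal is almost entirely suppressed:
print(pre_emphasis([1.0, 1.0, 1.0, 1.0]))   # -> [1.0, ~0.03, ~0.03, ~0.03]
```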
That was gonna be my question. [speaker006:] we [disfmarker] so we have a [disfmarker] we have a [disfmarker] a system that works pretty well but it's not, you know, the system that people here are used to using [disfmarker] to working with. So what [disfmarker] what do we do now? [speaker002:] Well, I think what we wanna do is we want to [disfmarker] eh, and we've talked about this in other [pause] contexts [disfmarker] we want to [vocalsound] have the ability to feed it different features. [speaker006:] Mm hmm. [speaker002:] And then, um, [vocalsound] from the point of view of the front end research, it would be s uh, substituting for HTK. [speaker006:] OK. OK. [speaker002:] I think that's the key thing. And then if we can feed it different features, then we can try all the different things that we're trying there. [speaker006:] OK. Alright. [speaker002:] And then, um, uh, also Dave is [disfmarker] is thinking about using the data in different ways, uh, to [vocalsound] um, uh, explicitly work on reverberation [speaker006:] Mm hmm. [speaker002:] starting with some techniques that some other people have [pause] found somewhat useful, and [disfmarker] Yeah. [speaker006:] OK. So [disfmarker] so the key [pause] thing that's missing here is basically the ability to feed, you know, other features [vocalsound] i into the recognizer and also then to train the system. [speaker002:] Right. Right. [speaker006:] OK. And, uh, es I don't know when Chuck will be back but that's exactly what he [disfmarker] he's gonna [disfmarker] [speaker002:] H h He's [disfmarker] he's sort of back, but he drove for fourteen hours an and wasn't gonna make it in today. [speaker006:] Oh, OK. So, I think that's one of the things that he said he would be working on. Um. Just sort of t to make sure that [pause] we can do that [speaker005:] Yeah. [speaker006:] and [disfmarker] Um. [speaker002:] Yeah. Right. [speaker006:] It's [disfmarker] uh, I mean, the [disfmarker] the front end is f i tha that's in the SRI recognizer is very nice in that it does a lot of things on the fly but it unfortunately [pause] is not [pause] designed and, um [disfmarker] [vocalsound] like the, uh, ICSI system is, where you can feed it from a pipeline of [disfmarker] of the command. So, the [disfmarker] what that means probably for the foreseeable future is that you have to, uh, dump out, um [disfmarker] you know, if you want to use some new features, you have to dump them into individual files and [pause] give those files to the recognizer. [speaker005:] We do [disfmarker] we tend to do that anyway. [speaker006:] OK. [speaker005:] Oh. So, although you [disfmarker] you can pipe it as well, we tend to do it that way because that way you can concentrate on one block and not keep re doing it over and over. [speaker006:] Oh, OK. Alright. [speaker002:] Yeah. Yeah. So I've [disfmarker] I [disfmarker] [speaker005:] So tha that's exactly what the P file [pause] is for. [speaker002:] Yeah. [speaker006:] Yeah, the [disfmarker] the [disfmarker] the cumbersome thing is [disfmarker] is, um [disfmarker] is that you actually have to dump out little [disfmarker] little files. [speaker001:] Uh [disfmarker] [speaker006:] So for each segment that you want to recognize [vocalsound] you have to [pause] dump out [pause] a separate file. [speaker005:] Uh huh. [speaker006:] Just like i th like th as if there were these waveform segments, but instead you have sort of feature file segments. But, you know [disfmarker] So. [speaker002:] Cool. OK. 
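The dump-little-files interface just described is simple, if cumbersome: one feature file per segment, named so the recognizer can find it. A sketch of the loop; the flat little-endian float32 layout is a placeholder, since the recognizer's actual feature-file format is not given here.

```python
import struct

def dump_segment_features(segments, out_dir="."):
    """Write one file of little-endian float32 frames per segment
    (placeholder format, not the SRI recognizer's real one)."""
    for seg_id, frames in segments.items():
        with open(f"{out_dir}/{seg_id}.feat", "wb") as f:
            for frame in frames:                       # frame: list of floats
                f.write(struct.pack(f"<{len(frame)}f", *frame))

dump_segment_features({"seg001": [[0.1, 0.2], [0.3, 0.4]]})
```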
So the s the [disfmarker] the next thing we had on the agenda was something about alignments? [speaker001:] Oh. Yes, we have [disfmarker] I don't know, did you wanna talk about it, or [disfmarker]? I can give a [disfmarker] I was just telling this to Jane and [disfmarker] and [disfmarker] W we [disfmarker] we were able to get some definite improvement on the forced alignments by looking at them first and then realizing the kinds of errors [pause] that were occurring and um, some of the errors occurring very frequently are just things like the first word being moved to as early as possible in the recognition, which is a um, I think was both a [disfmarker] a pruning [pause] problem and possibly a problem with needing constraints on word locations. And so we tried both of these st things. We tried saying [disfmarker] I don't know, I got this [vocalsound] whacky idea that [disfmarker] just from looking at the data, that when people talk [pause] their words are usually chunked together. It's not that they say one word and then there's a bunch of words together. They're [comment] might say one word and then another word far away if they were doing just backchannels? But in general, if there's, like, five or six words and one word's far away from it, that's probably wrong on average. So, um [disfmarker] And then also, ca the pruning, of course, was too [disfmarker] too severe. [speaker006:] So that's actually interesting. The pruning was the same value that we used for recognition. And we had lowered that [disfmarker] we had used tighter pruning after Liz ran some experiments showing that, you know, it runs slower and there's no real difference in [disfmarker] [speaker005:] No gain. [speaker001:] Actually it was better with [disfmarker] slightly better or about th [speaker006:] Right. [speaker001:] it was the same with tighter pruning. [speaker006:] So for free recognition, this [disfmarker] the lower pruning value is better. You [disfmarker] [speaker001:] It's probably cuz the recognition's just bad en at a point where it's bad enough that [disfmarker] that you don't lose anything. [speaker006:] Correct. Right. Um, but it turned out for [disfmarker] for [disfmarker] to get accurate alignments it was really important to open up the pruning significantly. [speaker001:] Right. [speaker002:] Hmm. [speaker006:] Um [pause] because otherwise it would sort of do greedy alignment, um, in regions where there was no real speech yet from the foreground speaker. [speaker002:] Mm hmm. [speaker006:] Um, [vocalsound] so that was one big factor that helped improve things and then the other thing was that, you know, as Liz said the [disfmarker] we f enforce the fact that, uh, the foreground speech has to be continuous. It cannot be [disfmarker] you cannot have a background speech hypothesis in the middle of the foreground speech. You can only have background speech at the beginning and the end. [speaker001:] Yeah. I mean, yeah, it isn't always true, and I think what we really want is some clever way to do this, where, um, you know, from the data or from maybe some hand corrected alignments from transcribers that things like words that do occur just by themselves [pause] a alone, like backchannels or something that we did allow to have background speech around it [disfmarker] [speaker004:] Yeah. [speaker001:] those would be able to do that, [speaker003:] Sorry. [speaker001:] but the rest would be constrained. So, I think we have a version that's pretty good for the native speakers. 
I don't know yet about the non native speakers. And, um, we basically also made noise models for the different [disfmarker] sort of grouped some of the [pause] mouth noises together. Um, so, and then there's a background speech model. And we also [disfmarker] There was some neat [disfmarker] or, interesting cases, like there's one meeting where, [vocalsound] um, Jose's giving a presentation and he's talking about, um, the word "mixed [pause] signal" and someone didn't understand, uh, that you were saying "mixed" [disfmarker] I think, Morgan. [speaker008:] Yeah, yeah. [speaker001:] And so your speech ch was s saying something about mixed signal. And the next turn was a lot of people saying "mixed", like "he means mixed signal" or "I think it's mixed". And the word "mixed" in this segment occurs, like, a bunch of times. [speaker008:] Sh [speaker001:] And Chuck's on the lapel here, and he also says "mixed" but it's at the last one, and of course the aligner th aligns it everywhere else to everybody else's "mixed", [speaker008:] Yeah. [speaker001:] cuz there's no adaptation yet. So there's [disfmarker] [vocalsound] I think there's some issues about [disfmarker] u We probably want to adapt at least the foreground speaker. But, I guess Andreas tried adapting both the foreground and a background generic speaker, and that's actually a little bit of a f funky model. Like, it gives you some weird alignments, just because often the background speakers match better to the foreground than the foreground speaker. [speaker006:] Oh [disfmarker] [speaker004:] Yeah. [speaker008:] Oh. [speaker001:] So there's some things there, especially when you get lots of the same words, uh, occurring in the [disfmarker] [speaker006:] Well, the [disfmarker] I [disfmarker] I think you can do better by [vocalsound] uh, cloning [disfmarker] so we have a reject phone. And you [disfmarker] and what we wanted to try with [disfmarker] you know, once we have this paper written and have a little more time, [vocalsound] uh, t cloning that reject model and then one copy of it would be adapted to the foreground speaker to capture the rejects in the foreground, like fragments and stuff, and the other copy would be adapted to the background speaker. [speaker001:] Right. I mean, in general we actually [disfmarker] [speaker006:] And [disfmarker] [speaker001:] Right now the words like [pause] partial words are [pause] reject models [speaker006:] Mm hmm. [speaker001:] and you normally allow those to match to any word. But then the background speech was also a reject model, and so this constraint of not allowing rejects in between [disfmarker] you know, it needs to differentiate between the two. So just sort of working through a bunch of debugging kinds of issues. [speaker006:] Right. [speaker001:] And another one is turns, like people starting with [vocalsound] "well I think" and someone else is [pause] "well how about". So the word "well" is in this [disfmarker] in this [pause] segment multiple times, and as soon as it occurs usually the aligner will try to align it to the first person who says it. But then that constraint of sort of [disfmarker] uh, proximity constraint will push it over to the person who really said it in general. [speaker005:] Is the proximity constraint a hard constraint, or did you do some sort of probabilistic weighting distance, or [disfmarker]? [speaker006:] We [disfmarker] we didn't [disfmarker] No. [speaker001:] Right now it's a kluge. [speaker006:] We [disfmarker] w OK. 
We [disfmarker] it's straightforward to actually just have a [disfmarker] a penalty that doesn't completely disallows it but discourages it. But, um, we just didn't have time to play with, you know, tuning yet another [disfmarker] yet another parameter. [speaker005:] The ve level. Yeah. [speaker001:] Yeah. [speaker006:] And really the reason we can't do it is just that we don't have a [disfmarker] we don't have ground truth for these. So, [vocalsound] we would need a hand marked, um, [vocalsound] word level alignments or at least sort of the boundaries of the speech betw you know, between the speakers. Um, and then use that as a reference and tune the parameters of the [disfmarker] of the model, uh, to op to get the best [pause] performance. [speaker001:] Yeah. [speaker002:] G given [disfmarker] I [disfmarker] I mean, I wa I wa I was gonna ask you anyway, uh, how you assessed that things were better. [speaker006:] Mm hmm. [speaker001:] I looked at them. I spent two days [disfmarker] um, in Waves [disfmarker] [speaker002:] OK. [speaker001:] Oh, it was painful because [vocalsound] the thing is, you know the alignments share a lot in common, so [disfmarker] And you're [disfmarker] yo you're looking at these segments where there's a lot of speech. I mean, a lot of them have a lot of words. [speaker002:] Yeah. [speaker001:] Not by every speaker but by some speaker there's a lot of words. [speaker002:] Yeah. [speaker008:] Ju [speaker001:] No, not [disfmarker] I mean that if you look at the individual segments from just one person you don't see a lot of words, [speaker002:] Yeah. [speaker006:] Mm hmm. [speaker002:] Yeah. [speaker001:] but altogether you'll see a lot of words up there. And so the reject is also mapping and pauses [disfmarker] [speaker004:] Yeah. [speaker001:] So I looked at them all in Waves and just lined up all the alignments, and, at first it sort of looked like a mess and then the more I looked at it, I thought "OK, well it's moving these words leftward and [disfmarker]" You know, it wasn't that bad. It was just doing certain things wrong. So [disfmarker] But, I don't, you know, have time to l [comment] to look at all of them and it would be really useful to have, like, a [disfmarker] a transcriber who could use Waves, um, just mark, like, the beginning and end of the foreground speaker's real words [disfmarker] like, the beginning of the first word, the end of the last word [disfmarker] [speaker003:] Yeah. [speaker001:] and then we could, you know, do some adjustments. [speaker003:] I [disfmarker] OK. I have to ask you something, is i does it have to be Waves? Because if we could benefit from what you did, incorporate that into the present transcripts, [comment] that would help. [speaker006:] No. [speaker003:] And then, um, the other thing is, I believe that I did hand So. One of these transcripts was gone over by a transcriber and then I hand marked it myself so that we do have, uh, the beginning and ending of individual utterances. [speaker006:] Mm hmm. [speaker003:] Um, I didn't do it word level, but [disfmarker] but in terms [disfmarker] So I [disfmarker] so for [disfmarker] for one of the N S A groups. [speaker001:] Mm hmm. [speaker003:] And also I went back to the original one that I first transcribed and [disfmarker] and did it w uh, w uh, utterance by utterance for that particular one. So I think you do have [disfmarker] if that's a sufficient unit, I think that you do have hand marking for that. 
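A rough sketch of the soft version of the proximity constraint under discussion, assuming hypothesized word times are available; the gap threshold and penalty weight are invented placeholders that, as noted, would need tuning against hand-marked reference alignments.

```python
def proximity_penalty(word_times, max_gap=1.0, weight=5.0):
    """Penalize a foreground hypothesis whose words are not chunked together.

    word_times : list of (start_sec, end_sec) for one speaker's words, in order.
    Gaps longer than max_gap between consecutive words accrue a linear
    penalty; the hard constraint is the weight -> infinity limit.
    """
    penalty = 0.0
    for (_, prev_end), (next_start, _) in zip(word_times, word_times[1:]):
        gap = next_start - prev_end
        if gap > max_gap:
            penalty += weight * (gap - max_gap)
    return penalty

# A hypothesis with all words chunked together scores 0.0; one word
# floated far from the rest is discouraged but not disallowed.
```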
But it'd be wonderful to be able to [vocalsound] benefit from your Waves stuff. [speaker001:] Mm hmm. [speaker006:] We don't care what [disfmarker] what tool you use. [speaker001:] Yeah. I mean, if [disfmarker] if you can, um [disfmarker] if you wanna [disfmarker] [speaker003:] OK. I used it in Transcriber [speaker006:] U uh [disfmarker] [speaker003:] and it's [disfmarker] it's in the [disfmarker] [speaker001:] well, Jane and I were [disfmarker] just in terms of the tool, talking about this. I guess Sue had had some [pause] reactions. You know, interface wise if you're looking at speech, you wanna be able to know really where the words are. [speaker003:] Yeah, that's right. [speaker001:] And so, [vocalsound] we can give you some examples of sort of what this output looks like, [speaker003:] Middle of the word, or [disfmarker] [speaker001:] um, and see if you can in maybe incorporate it into the Transcriber tool some way, or [disfmarker] [speaker003:] Well, I th I'm thinking just ch e e incorporating it into the representation. [speaker001:] Um. [speaker003:] I mean, if it's [disfmarker] if it's [disfmarker] [speaker001:] You mean like [disfmarker] Yeah, word start insights. [speaker003:] if you have start points, if you have, like, time tags, [speaker001:] Right. [speaker003:] which is what I assume. Isn't that what [disfmarker] what you [disfmarker]? Well, see, Adam would be [disfmarker] [speaker006:] Yeah, whatever you use. I mean, we convert it to this format that the, um, NIST scoring tool unders uh, CTM. Conversation Time Marked file. [speaker001:] Yeah. [speaker006:] And [disfmarker] and then that's the [disfmarker] that's what the [disfmarker] [speaker005:] I think Transcriber, uh, outputs CTM. [speaker003:] If it [disfmarker]? OK. [speaker005:] I think so. [speaker003:] So you would know this more than I would. [speaker001:] Yeah. [speaker006:] Right. [speaker001:] So, I mean [disfmarker] [speaker003:] It seems like she [disfmarker] if she's g if she's moving time marks around, since our representation in Transcriber uses time marks, it seems like there should be some way of [disfmarker] of using that [disfmarker] benefitting from that. [speaker005:] Right. [speaker001:] Yeah, it wou the advantage would just be that when you brought up a bin you would be able [disfmarker] if you were zoomed in enough in Transcriber to see all the words, [speaker002:] Mm hmm. [speaker001:] you would be able to, like, have the words sort of located in time, if you wanted to do that. [speaker002:] So [disfmarker] so if we e e even just had a [disfmarker] a [disfmarker] It sounds like w we [disfmarker] [vocalsound] we almost do. [speaker001:] So. [speaker002:] Uh, if we [disfmarker] We have two. [speaker003:] We have two. [speaker002:] Yeah. Just ha uh, trying out [pause] the alignment [vocalsound] procedure that you have on that [speaker001:] Mm hmm. [speaker002:] you could actually get something, um [disfmarker] uh, uh, get an objective measure. [speaker006:] Mm hmm. [speaker002:] Uh [disfmarker] [speaker001:] You mean on [disfmarker] on the hand marked, um [disfmarker] So we [disfmarker] we only r hav I only looked at actually alignments from one meeting that we chose, [speaker002:] Yeah. [speaker001:] I think MR four, just randomly, um [disfmarker] And [disfmarker] [speaker006:] Actually, not randomly. 
[speaker001:] Not randomly [disfmarker] [speaker006:] We knew [disfmarker] we knew that it had these insertion errors from [disfmarker] [speaker001:] It had sort of [pause] average recognition performance in a bunch of speakers [speaker006:] Yeah. Yeah. [speaker001:] and it was a Meeting Recorder meeting. Um. But, yeah, we should try to use what you have. I did re run recognition on your new version of MR one. [speaker003:] Oh, good. [speaker001:] I [disfmarker] I mean the [disfmarker] the one with Dan [pause] Ellis in it [vocalsound] and Eric. [speaker003:] Good! Uh huh. Yeah, exactly. Yeah. Yeah. [speaker007:] I don't think that was the new version. [speaker001:] Um [disfmarker] That [disfmarker] Yeah, actually it wasn't the new new, [speaker003:] OK. [speaker001:] it was the medium new. But [disfmarker] but we would [disfmarker] we should do the [disfmarker] the latest version. [speaker003:] OK. [speaker007:] Yeah. You [disfmarker] did you adjust the [disfmarker] the utterance times, um, for each channel? [speaker001:] It was the one from last week. [speaker003:] Yes. Yes, I did. And furthermore, I found that there were a certain number where [disfmarker] [vocalsound] not [disfmarker] not a lot, but several times I actually [vocalsound] moved an utterance from [vocalsound] Adam's channel to Dan's or from Dan's to Adam's. So there was some speaker identif And the reason was because [vocalsound] I transcribed that at a point before [disfmarker] [vocalsound] uh, before we had the multiple audio available f so I couldn't switch between the audio. I [disfmarker] I transcribed it off of the mixed channel entirely, which meant in overlaps, I was at a [disfmarker] at a terrific disadvantage. [speaker001:] Right. Right. [speaker003:] In addition it was before the channelized, uh, possibility was there. And finally I did it using the speakers of my, um [disfmarker] [vocalsound] of [disfmarker] you know, off the CPU on my [disfmarker] on my machine cuz I didn't have a headphone. So it [@ @], like, I mean [disfmarker] [speaker001:] Right. [speaker003:] Yeah, I [disfmarker] I mean, i in retrospect [vocalsound] it would've been good to ha [vocalsound] have got I should've gotten a headphone. But in any case, um, thi this is [disfmarker] this was transcribed in a [disfmarker] in a, [vocalsound] uh, less optimal way than [disfmarker] than the ones that came after it, and I was able to [disfmarker] you know, an and this meant that there were some speaker identif identifications [speaker007:] Well, I know there were some speaker labelling problems, um, after interruptions. [speaker003:] which were changes. Yeah. Fixed that. [speaker007:] Is that what you're referring to? I mean, cuz there's this one instance when, for example, you're running down the stairs. [speaker003:] Oh, well [disfmarker] [speaker007:] I remember this meeting really well. [speaker004:] Yeah. [speaker007:] Right. [speaker001:] Don [disfmarker] Don has had [disfmarker] [vocalsound] He knows [disfmarker] he can just read it like a play. [speaker007:] It's a [disfmarker] Yeah, I've [disfmarker] I've [disfmarker] I'm very well acquainted with this meeting. Yeah, I can s [speaker004:] Yeah. [speaker001:] "And then she said, and then he said." [speaker007:] Yeah, I know it by heart. So, um, [vocalsound] there's one point when you're running down the stairs. [speaker003:] Uh oh. [speaker007:] Right? And, like, there's an interruption. You interrupt somebody, but then there's no line after that. 
For example, there's no speaker identification after that line. [speaker003:] Uh huh. [speaker007:] Is that what you're talking about? Or were there mislabellings as far as, like, the a Adam was [disfmarker]? [speaker003:] That was fixed, um, before [disfmarker] i i i I think I I think I understood that pretty [disfmarker] [speaker007:] Yeah. Cuz I thought I let you know about that. [speaker003:] Thank you for mentioning. [speaker007:] Yeah. [speaker003:] Yeah, no, tha that [disfmarker] That I think went away a couple of versions ago, [speaker007:] OK. But you're actually saying that certain, uh, speakers were mis mis identified. [speaker003:] but it's good to know. Yeah. So, with [disfmarker] under [disfmarker] um, uh, listening to the mixed channel, there were times when, as surprising as that is, I got Adam's voice confused with Dan's and vice versa [disfmarker] [speaker007:] OK. [speaker003:] not for long utterances, [speaker007:] OK. [speaker001:] Yeah. [speaker003:] but jus just a couple of places, [speaker002:] Mm hmm. [speaker003:] and embedde embedded in overlaps. The other thing that was w interesting to me was that I picked up a lot of, um, backchannels which were hidden in the mixed signal, which, you know, I mean, you c not [disfmarker] not too surprising. [speaker001:] Right. [speaker003:] But the other thing that [disfmarker] I [disfmarker] I hadn't thought about this, but I thou I wanted to raise this when you were [disfmarker] uh, with respect to also a strategy which might help with the alignments potentially, but that's [disfmarker] When I was looking at these backchannels, they were turning up usually [disfmarker] [vocalsound] very often in [disfmarker] w well, I won't say "usually" [disfmarker] but anyway, very often, I picked them up in a channel [vocalsound] w which was the person who had asked a question. S so, like, someone says "an and have you done the so and so?" And then there would be backchannels, but it would be the person who asked the question. Other people weren't really doing much backchannelling. And, you know, sometimes you have the [disfmarker] Yeah, uh huh. [speaker001:] Well, that's interesting. [speaker003:] I mean, i it wouldn't be perfect, [speaker001:] Yeah. [speaker003:] but [disfmarker] but it does seem more natural to give a backchannel when [disfmarker] when you're somehow involved in the topic, [speaker001:] No, that's really interesting. [speaker003:] and the most natural way is for you to have initiated the topic by asking a question. [speaker002:] Mm hmm. [speaker006:] Well, I think [disfmarker] [speaker001:] That's interesting. [speaker006:] No. I think it's [disfmarker] actually I think what's going on is backchannelling is something that happens in two party conversations. [speaker003:] Mm hmm. [speaker006:] And if you ask someone a question, you essentially initiating a little two party conversation. [speaker003:] Yeah. [speaker001:] Well, actu Yeah, when we looked at this [disfmarker] [speaker006:] So then you're [disfmarker] so and then you're expected to backchannel [speaker003:] Exactly. [speaker006:] because the person is addressing you directly and not everybody. [speaker003:] Exactly. Exactly my point. [speaker006:] Yeah. [speaker003:] An and so this is the expectation thing that [disfmarker] uh, uh, [speaker006:] Yeah. [speaker001:] Mm hmm. [speaker006:] Right. Right. 
[speaker003:] just the dyadic [disfmarker] But in addition, you know, if someone has done this analysis himself and isn't involved in the dyad, but they might also give backchannels to verify what [disfmarker] what the answer is that this [disfmarker] that the [disfmarker] the answerer's given [disfmarker] [speaker002:] H I tell you, I say [disfmarker] I say "uh huh" a lot, [speaker001:] Right. It's [disfmarker] [speaker003:] There you go. [speaker001:] Well, but it's interesting cuz, uh [disfmarker] [speaker002:] while people are talking to each other. [speaker003:] There you go. [speaker001:] But there are fewer [disfmarker] I think there are fewer "uh huhs". [speaker003:] Yeah. Yeah. [speaker001:] I mean, just from [disfmarker] We were looking at word frequency lists to try to find the cases that we would allow to have the reject words in between in doing the alignment. You know the ones we wouldn't constrain to be next to the other words. [speaker003:] Oh, yeah. [speaker001:] And "uh huh" is not as frequent as it sort of would be in Switchboard, if you looked at just a word frequency list of one word short utterances. And "yeah" is way up there, but not "uh huh". And so I was thinking thi it's not like [pause] you're being encouraged by everybody else to keep [pause] talking in the meeting. And uh, that's all, I I'll stop there, cuz I I think what you say makes a lot of sense. [speaker003:] Well, that's right. And that would [disfmarker] Well, an [speaker001:] But it was sort of [disfmarker] [speaker003:] And what you say is the [disfmarker] is the re uh, o other side of this, which is that, you know, so th there are lots of channels where you don't have these backchannels, w when a question has been asked and [disfmarker] and these [disfmarker] [speaker001:] Right. There's just probably less backchannelling in general, [speaker003:] Mm hmm. So that's good news, really. [speaker001:] even if you consider every other person altogether one person in the meeting, but we'll find out anyway. We were [disfmarker] I guess the other thing we're [disfmarker] we're [disfmarker] I should say is that we're gonna, um try [disfmarker] compare this type of overlap analysis to Switchboard, [speaker006:] And [speaker001:] where [disfmarker] and CallHome, where we have both sides, so that we can try to answer this question of, you know, [vocalsound] is there really more overlap in meetings or is it just because we don't have the other channel in Switchboard [speaker005:] Mm hmm. [speaker002:] Mm hmm. [speaker001:] and we don't know what people are doing. Try to create a paper out of that. [speaker002:] Yeah. I mean, y y you folks have probably [pause] already told me, but were [disfmarker] were you intending to do a Eurospeech submission, or [disfmarker]? [speaker001:] Um, you mean the one due tomorrow? [speaker002:] Yeah. [speaker001:] Yeah. Well, we're still, like, writing the scripts for doing the research, and we will [disfmarker] Yes, we're gonna try. [speaker003:] Mm hmm. [speaker001:] And I was telling Don, do not [pause] take this as an example of how people should work. [speaker002:] Do as I say, [speaker007:] That's r [speaker001:] So, [comment] we will try. [speaker002:] don't do as I do. Yeah. [speaker005:] Well [disfmarker] [speaker001:] It'll probably be a little late, [speaker005:] It is different. [speaker001:] but I'm gonna try it. [speaker005:] In previous years, Eurospeech only had the abstract due by now, not the full paper. [speaker007:] Right. [speaker001:] Right. 
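The word-frequency screening mentioned a moment ago might look something like this sketch: counting how often each item occurs as a complete one-word utterance, to pick the backchannel-like words that are allowed to sit away from the rest of the foreground speech in the alignment. The utterance input format here is an assumption, not the project's actual transcript files.

```python
from collections import Counter

def one_word_utterance_counts(utterances):
    """Count words occurring as complete single-word utterances.

    utterances : iterable of transcript strings, one utterance each.
    Frequent singletons ("yeah", "right", ...) are candidates for words
    allowed to float free in the alignment; two-token backchannels like
    "uh huh" would need the length check relaxed.
    """
    counts = Counter()
    for utt in utterances:
        tokens = utt.split()
        if len(tokens) == 1:
            counts[tokens[0].lower()] += 1
    return counts

# one_word_utterance_counts(utts).most_common(20) surfaces likely backchannels.
```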
[speaker005:] And so all our timing was off. I've given up on trying to do digits. I just don't think that what I have so far makes a Eurospeech paper. [speaker001:] Well, I'm no We may be in the same position, and I figured [vocalsound] we'll try, because that'll at least get us to the point where we have [disfmarker] We have this really nice database format that Andreas and I were working out that [disfmarker] It [disfmarker] it's not very fancy. It's just a ASCII line by line format, but it does give you information [disfmarker] [speaker006:] It's the [disfmarker] it's the spurt format. [speaker001:] It [disfmarker] Yeah, we're calling these "spurts" after Chafe. I was trying to find what's a word for [pause] a continuous region with [pause] pauses around it? [speaker003:] Hmm. [speaker002:] Yeah. I know that th the Telecom people use [disfmarker] use "spurt" for that. [speaker003:] Good. [speaker001:] They do? [speaker006:] Oh. [speaker002:] Yes. [speaker001:] Oh! Oh. [speaker002:] And that's [disfmarker] I mean, I [disfmarker] I was using that for a while when I was doing the rate of speech stuff, [speaker001:] I would jus [speaker002:] because I [disfmarker] because I looked up in some books and I found [disfmarker] OK, I wanna find a spurt [vocalsound] in which [disfmarker] [speaker001:] Ah, right! It's just, like, defined by the acoustics. [speaker002:] and [disfmarker] an because [disfmarker] cuz it's another question about how [pause] many pauses they put in between them. [speaker005:] Horrible. [speaker001:] Right. [speaker002:] But how fast do they do [pause] the words within the spurt? [speaker001:] Right. [speaker002:] Yeah. [speaker007:] you know "Burst" also? [speaker005:] It's gonna [disfmarker] [speaker001:] Well, that's what we were calling spurt, [speaker005:] Burst. [speaker007:] Isn't "burst" is used also? [speaker001:] so [disfmarker] [speaker005:] Spurt has the horrible name overloading with other [disfmarker] with hardware at ICSI. [speaker002:] Here. Just very locally, yeah. But [disfmarker] but that just [disfmarker] [speaker001:] Well, well, Chafe had this wor I think it was Chafe, or somebody had a [disfmarker] the word "spurt" originally, [speaker008:] Here [@ @] [disfmarker] [speaker001:] and so I [disfmarker] [speaker003:] Actually [disfmarker] [speaker001:] But tha that's good to know. Was thi it's Chafe? [speaker006:] So maybe we should talk [disfmarker] [speaker003:] Well, see, I know S Sue wrote about spurts of development. [speaker001:] Maybe it was Sue [disfmarker]? [speaker003:] But, in any case, I think it's a good term, [speaker001:] Y [speaker005:] Hmm! [speaker003:] and, uh [disfmarker] [speaker001:] So we have spurts and we have spurt ify dot shell and spurt ify [speaker002:] Yeah. [speaker003:] And ma maybe [disfmarker] maybe Chafe did. [speaker006:] Uh. [speaker002:] Yeah. [speaker006:] So s [speaker003:] I know [disfmarker] I know Ch Chafe dealt with [disfmarker] [speaker001:] And then it's got all [disfmarker] it's a verb now. [speaker007:] That's cool. [speaker006:] W uh, w [speaker003:] Chafe speaks about intonation units. [speaker001:] Yes. Right. [speaker003:] But maybe he speaks about spurts as well [speaker006:] We [speaker003:] and I just don't know. Yeah, go ahead. [speaker006:] So what we're doing [disfmarker] [speaker005:] I've heard "burst" also. [speaker006:] uh, this [disfmarker] this is just [disfmarker] maybe someone has s some [disfmarker] some ideas about how to do it better, [speaker007:] Mmm. 
[speaker006:] but we [disfmarker] So we're taking these, uh, alignments from the individual channels. We're [disfmarker] from each alignment we're producing, uh, one of these CTM files, which essentially has [disfmarker] it's just a linear sequence of words with the begin times for every word and the duration. [speaker003:] Great. [speaker006:] And [disfmarker] and [disfmarker] and of course [disfmarker] [speaker001:] It looks like a Waves label file almost. Right? [speaker006:] Right. But it has [disfmarker] one [disfmarker] the first column has the meeting name, [speaker001:] It's just [disfmarker] [speaker006:] so it could actually contain several meetings. Um. And the second column is the channel. Third column is the, um, start times of the words and the fourth column is the duration of the words. And then we're, um [disfmarker] OK. Then we have a messy alignment process where we actually insert into the sequence of words the, uh, tags for, like, where [disfmarker] where sentence [disfmarker] ends of sentence, question marks, um, [vocalsound] various other things. [speaker001:] Yeah. These are things that we had Don [disfmarker] [speaker006:] Uh. [speaker001:] So, Don sort of, um, propagated the punctuation from the original transcriber [disfmarker] [speaker006:] Right. [speaker001:] so whether it was, like, question mark or period or, [vocalsound] um, you know, comma and things like that, and we kept the [disfmarker] and disfluency dashes [disfmarker] uh, kept those in because we sort of wanna know where those are relative to the spurt overlaps [disfmarker] [speaker006:] Mm hmm. Right. So [disfmarker] so those are actually sort of retro fitted into the time alignment. [speaker001:] sp overlaps, or [disfmarker] [speaker006:] And then we merge all the alignments from the various channels and we sort them by time. And then there's a [disfmarker] then there's a process where you now determine the spurts. That is [disfmarker] Actually, no, you do that before you merge the various channels. So you [disfmarker] you id identify by some criterion, which is pause length [disfmarker] you identify the beginnings and ends of these spurts, and you put another set of tags in there to keep those straight. [speaker002:] Mm hmm. [speaker006:] And then you merge everything in terms of, you know, linearizing the sequence based on the time marks. And then [vocalsound] you extract the individual channels again, but this time you know where the other people start and end talking [disfmarker] you know, where their spurts start and end. And so you extract the individual channels, uh, one sp spurt by spurt as it were. Um, and inside the words or between the words you now have begin and end [pause] tags for overlaps. So, you [disfmarker] you basically have everything sort of lined up and in a form where you can look at the individual speakers and how their speech relates to the other speakers' speech. [speaker005:] Right. [speaker001:] Uh, I mean, I think that's actually really u useful also [speaker006:] And [disfmarker] [speaker001:] because even if you weren't studying overlaps, if you wanna get a transcription for the far field mikes, how are you gonna know which words from which speakers occurred at which times relative to each other? You have to be able to [pause] get a transcript like [disfmarker] like this anyway, just for doing far field recognition. So, you know, it's [disfmarker] it's sort of [disfmarker] [speaker006:] Yeah. 
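A condensed sketch of the pipeline just described, assuming the column layout given above (meeting name, channel, word start time, word duration, then the word itself); the pause threshold and the representation of spurt boundaries are placeholders for whatever the dozen actual scripts do.

```python
def read_ctm(path):
    """Parse CTM lines: meeting, channel, start_sec, dur_sec, word."""
    words = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue  # skip malformed or comment lines
            meeting, chan, start, dur, word = parts[:5]
            words.append((meeting, chan, float(start), float(dur), word))
    return words

def mark_spurts(channel_words, max_pause=0.5):
    """Group one channel's words into spurts: stretches of speech
    separated by pauses longer than max_pause seconds."""
    spurts, current = [], []
    for w in sorted(channel_words, key=lambda w: w[2]):
        prev_end = current[-1][2] + current[-1][3] if current else None
        if current and w[2] - prev_end > max_pause:
            spurts.append(current)
            current = []
        current.append(w)
    if current:
        spurts.append(current)
    return spurts

# Per the description: tag spurt boundaries per channel, merge all
# channels, sort by start time, then re-extract each speaker spurt
# by spurt, with overlap begin/end tags where other speakers intrude.
```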
[speaker001:] I thi it's just an issue we haven't dealt with before, how you time align things that are overlapping anyway. [speaker006:] So [disfmarker] [speaker003:] That's wonderful. [speaker005:] Well [disfmarker] [speaker006:] And [disfmarker] and we [disfmarker] [speaker001:] I mean, i I never thought about it before, [speaker005:] Y yes. [speaker006:] In [disfmarker] [speaker001:] but [disfmarker] [speaker005:] I mean, s when I came up with the original data [disfmarker] suggested data format based on the transcription graph, there's capability of doing that sort of thing in there. [speaker001:] Right. [speaker003:] Mm hmm. [speaker001:] But you can't get it directly from the transcription. [speaker006:] Right. [speaker003:] Yeah, that's right. [speaker006:] Well, this is [disfmarker] this is just [disfmarker] [speaker001:] Yeah, this is like a poor man's ver formatting version. But it's, you know [disfmarker] It's clean, it's just not fancy. [speaker005:] Right. [speaker006:] Well, there's lots of little things. [speaker001:] Um. [speaker006:] It's like there're twelve different scripts which you run and then at the end you have what you want. But, um, at the very last stage we throw away the actual time information. All we care about is whether [disfmarker] that there's a certain word was overlapped by someone else's word. So you sort of [disfmarker] at that point, you discretize things into just having overlap or no overlap. Because we figure that's about the level of analysis that we want to do for this paper. [speaker005:] Mm hmm. [speaker006:] But if you wanted to do a more fine grained analysis and say, you know, how far into the word is the overlap, you could do that. It's just [disfmarker] it'll just require more [disfmarker] [speaker001:] Yeah. Just [pause] sort of huge. [speaker006:] you know, slightly different [disfmarker] [speaker003:] What's interesting is it's exactly what, um, i in discussing with, um, Sue about this, [speaker001:] Yeah. [speaker003:] um, she, um, i i i indicated that that [disfmarker] you know, that's very important for overlap analysis. [speaker001:] Yeah. It's [disfmarker] it's nice to know, [speaker006:] Right. [speaker001:] and also I think as a human, like, I don't always hear these in the actual order that they occur. So I can have two foreground speakers, you know, Morgan an and [vocalsound] um, Adam and Jane could all be talking, and I could align each of them to be starting their utterance at the correct time, and then look where they are relative to each other, and that's not really what I heard. [speaker003:] And that's another thing she said. [speaker001:] Cuz it's just hard to do. [speaker003:] This is [disfmarker] This is Bever's [disfmarker] Bever's effect, when [disfmarker] where [disfmarker] In psy ps psycho linguistics you have these experiments where people have perceptual biases a as to what they hear, [speaker001:] Y Yeah. It's sort of [disfmarker] [speaker003:] that [disfmarker] that [disfmarker] Not the best [disfmarker] [speaker001:] Yeah, you sort of move things around until you get to a [pause] low information point and yo then you can bring in the other person. So it's [vocalsound] actually not even possible, I think, for any person to listen to a mixed signal, even equalize, and make sure that they have all the words in the right order. [speaker003:] Mm hmm. [speaker001:] So, I guess, we'll try to write this Eurospeech paper. [speaker003:] Superb. [speaker001:] I mean, we will write it. 
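For the final stage just mentioned, where the exact times are thrown away, discretizing to a per-word overlap flag could be as simple as interval intersection across channels; this sketch assumes the merged word tuples from the previous sketch and stands in for, rather than reproduces, the real scripts.

```python
def flag_overlaps(all_words):
    """Mark each word True if any word on another channel overlaps it in time.

    all_words : list of (meeting, chan, start, dur, word) across all channels.
    Returns (word_tuple, overlapped) pairs; times can be dropped afterwards.
    """
    flags = []
    for i, (_, chan, start, dur, _) in enumerate(all_words):
        overlapped = any(
            c2 != chan and s2 < start + dur and start < s2 + d2
            for j, (_, c2, s2, d2, _) in enumerate(all_words)
            if j != i
        )
        flags.append((all_words[i], overlapped))
    return flags
```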
Whether they accept it [pause] late or not, I don't know. Um, and the good thing is that we have [disfmarker] It's sort of a beginning of what Don can use to link the prosodic features from each file to each other. [speaker006:] Yeah. [speaker002:] Yeah. That's the good thing about these pape [speaker006:] Plus, mayb [speaker001:] So. [speaker008:] Hmm? [speaker001:] i You know, might as well. We I ju Otherwise we won't get the work done [comment] [vocalsound] on our deadline. [speaker006:] I don't know, m I mean, u u Jane likes to look at data. [speaker002:] Yeah. [speaker006:] Maybe, you know, you could [disfmarker] you could look at this format and see if you find anything interesting. I don't know. [speaker002:] Yeah. [speaker003:] Well, what I'm thinking is [disfmarker] [speaker002:] No, it's [disfmarker] that's the good thing about these pape paper deadlines and, uh, you know, class projects, and [disfmarker] [vocalsound] and things like that, [speaker001:] Yeah. [speaker006:] Yeah. [speaker003:] Yeah. [speaker006:] Mm hmm. [speaker001:] Right. [speaker003:] Well, my [disfmarker] [speaker006:] Well th th the other thing that [disfmarker] that [disfmarker] that yo that you usually don't tell your graduate students is that these deadlines are actually not that, um, you know, strictly enforced, [speaker002:] because you [disfmarker] you really get g [speaker003:] Yeah. [speaker001:] Forces you to do the work. [speaker002:] Yeah. [speaker001:] Exactly. [speaker005:] Strict. [speaker006:] because [pause] the [disfmarker] [speaker002:] Oh, now it's out in the public, this [disfmarker] this [disfmarker] this secret information. [speaker006:] because [disfmarker] [speaker003:] I think we can ha [speaker002:] Yeah. [speaker006:] bec b [vocalsound] Nah [disfmarker] [speaker001:] Right. [speaker005:] No. [speaker001:] So [disfmarker] [speaker002:] No. [speaker003:] Nah. [speaker006:] i Because these [disfmarker] the conference organizers actually have an interest in getting lots of submissions. [speaker001:] Right. [speaker006:] I mean, a [disfmarker] a monetary interest. [speaker005:] Right. [speaker002:] Yeah. [speaker006:] So [disfmarker] [vocalsound] Um. [speaker002:] Th that's [disfmarker] that's true. [speaker003:] And good ones, good ones, [speaker006:] And good submission [speaker003:] which sometimes means [pause] a little extra time. [speaker002:] That's [disfmarker] [speaker006:] Right. Well [disfmarker] [speaker002:] That's true. [speaker006:] That's another issue, but [disfmarker] [speaker002:] By th by the way, this is totally unfair, you may [disfmarker] you may feel, but the [disfmarker] the, uh [disfmarker] the morning meeting folks actually have an [disfmarker] an extra month or so. [speaker006:] Mm hmm. [speaker004:] Yep. [speaker005:] Yep. The Aurora [disfmarker] there's a special Aurora [disfmarker] [speaker006:] When [disfmarker] [speaker001:] Uh [disfmarker] [speaker002:] There's a special Aurora session and the Aurora pe people involved in Aurora have till Ma uh, early May [pause] or something to turn in their paper. [speaker001:] Oh. [speaker006:] Mmm. [speaker001:] Oh. [speaker006:] Mmm. Well, then you can just [disfmarker] [speaker001:] Oh, well maybe we'll submit to s [comment] [vocalsound] Actually [disfmarker] [speaker006:] Maybe you can submit the digits paper on e for the Aurora session. [speaker008:] Yeah. [speaker004:] Yeah. [speaker001:] Yeah. [speaker005:] Oh, I could! [speaker002:] I if it w [speaker001:] Yeah. [speaker005:] I could submit that to Aurora. 
[speaker006:] Yeah. [speaker002:] Well [disfmarker] [speaker005:] That would be pretty [disfmarker] pretty [disfmarker] [speaker002:] i it has [disfmarker] [speaker001:] Yeah. [speaker005:] S That wouldn't work. [speaker002:] No, it wouldn't work. [speaker005:] It's not Aurora. [speaker002:] It's [disfmarker] it's not the Aurora [disfmarker] I mean, it [disfmarker] it's [disfmarker] it's actually the Aurora task. [speaker001:] Maybe they'll get s [speaker005:] Aurora's very specific. [speaker001:] Well, maybe it won't be after this [vocalsound] deadline [pause] extension. [speaker002:] It [speaker006:] But [disfmarker] but the people [disfmarker] I mean, a [disfmarker] a paper that is not on Aurora would probably be more interesting at that point [speaker001:] Maybe they'll [disfmarker] [speaker006:] because everybody's so sick and tired of the Aurora task. [speaker004:] Yeah. [speaker005:] Oh, I thought you meant this was just the digits section. I didn't know you meant it was Aurora digits. [speaker002:] Yeah. [speaker006:] Well, no. If you [disfmarker] if you have [disfmarker] it's to [disfmarker] if you discuss some relation to the Aurora task, like if you use the same [disfmarker] [speaker002:] This is not the Aurora task. So they just do a little grep for [disfmarker] [speaker001:] Do [disfmarker] uh, d d [speaker006:] Um. [speaker001:] Do not [disfmarker] do not [disfmarker] we are not setting a good example. [speaker006:] Well, a relation other than negation, maybe, [speaker001:] This is not a [disfmarker] [speaker006:] um. So. I don't know. [speaker001:] Anyway. But the good thing is this does [disfmarker] [speaker005:] Well, I I don't know. I mean, you could [disfmarker] you could do a paper on [pause] what's wrong with the Aurora task by comparing it to [pause] other ways of doing it. [speaker006:] How well does an Aurora system do on [disfmarker] on [disfmarker] you know, on digits collected in a [disfmarker] in this environment? [speaker005:] Different way. Yeah. [speaker006:] Yeah. [speaker002:] Maybe. [speaker006:] Maybe. [speaker005:] Pretty hokey. [speaker002:] I think it's a littl little far fetched. Nah, I mean, the thing is Aurora's pretty closed community. [speaker005:] Yep. [speaker002:] I mean, you know, the people who were involved in the [disfmarker] [vocalsound] the only people who are allowed to test on that are people who [disfmarker] who made it above a certain threshold in the first round, [speaker006:] Mm hmm. [speaker005:] It's very specific. [speaker002:] uh [vocalsound] w in ninety nine and it's [disfmarker] it's sort of a [disfmarker] it's [disfmarker] [speaker006:] Well, that's maybe why they don't f know that they have a crummy system. [speaker002:] not like a [disfmarker] [speaker006:] I mean, a crummy back end. No, I mean [disfmarker] I mean, seriously, if you [disfmarker] if you have a very [disfmarker] No, I'm sorry. [speaker001:] Uh, [comment] "beep" [vocalsound] "bee" [speaker006:] No. I didn't mean anybody [disfmarker] any particular system. [speaker005:] I mean, th [speaker006:] I meant this H T K back end. If they [disfmarker] [speaker002:] Oh, you don't like HTK? [speaker008:] Yeah. [speaker006:] I don't h I don't have any stock in HTK or Entropic or anything. [speaker002:] No. I mean, this [disfmarker] it it's the HTK [pause] that is trained on a very limited amount of data. [speaker005:] It's d it's very specific. [speaker006:] Right. [speaker002:] Yeah. 
[speaker006:] But so, if you [disfmarker] But maybe you should, you know, consider more [disfmarker] using more data, or [disfmarker] I mean [disfmarker] [speaker002:] Oh, yeah. I [disfmarker] I really think that that's true. [speaker006:] If yo if you sort of hermetically stay within one task and don't look left and right, then you're gonna [disfmarker] [speaker002:] And they i i [speaker005:] But they [disfmarker] they had [disfmarker] [speaker002:] i But [disfmarker] [speaker005:] They had something very specific in mind when they designed it. Right? [speaker006:] Right. [speaker002:] Well, u i [speaker005:] And so [disfmarker] so you can [disfmarker] you can argue about maybe that wasn't the right thing to do, but, you know, they [disfmarker] they [disfmarker] they had something specific. [speaker002:] But, one of the reasons I have Chuck's messing around with [disfmarker] with the back end that you're not supposed to touch [disfmarker] [speaker006:] Mm hmm. [speaker002:] I mean, for the evaluations, yes, we'll run a version that hasn't been touched. [speaker006:] Mm hmm. [speaker002:] But, uh, one of the reasons I have him messing around with that, because I think it's sort of an open question that we don't know the answer to. People always say very glibly [vocalsound] that i if you s show improvement on a bad system, that doesn't mean anything, cuz it may not be [disfmarker] [vocalsound] show [disfmarker] uh, because, you know, it doesn't tell you anything about the good system. [speaker006:] Mm hmm. [speaker002:] And I [disfmarker] I've always sort of felt that that depends. You know, that if some peopl If you're actually are getting at something that has some [pause] conceptual substance to it, it will port. [speaker006:] Mm hmm. [speaker002:] And in fact, most methods that people now use were originally tried with something that was not their absolute [pause] best system at some level. But of course, sometimes it doesn't, uh, port. So I think that's [disfmarker] that's an interesting question. If we're getting [pause] three percent error on, uh, u uh, English, uh, nati native speakers, [vocalsound] um, using the Aurora system, and we do some improvements and bring it from three to two, [vocalsound] do those same improvements bring, uh, th you know, the SRI system from one point three to [disfmarker] you know, to [vocalsound] point eight? [speaker006:] Hmm. Mm hmm. [speaker005:] Zero. [speaker002:] Well. [speaker006:] Mmm. [speaker002:] You know, so that's [disfmarker] that's something we can test. [speaker006:] Right. [speaker002:] So. Anyway. [speaker006:] OK. [speaker002:] I think we've [disfmarker] [vocalsound] we've covered that one up extremely well. [speaker003:] Mm hmm. [speaker006:] Whew! [speaker002:] OK. So, um [disfmarker] Yeah. So tha so we'll [disfmarker] you know, maybe you guys'll have [disfmarker] have one. Uh, you [disfmarker] you and, uh [disfmarker] and Dan have [disfmarker] have a paper that [disfmarker] that's going in. [speaker004:] Yeah. Yeah. [speaker002:] You know, that's [disfmarker] that's pretty solid, on the segmentation [pause] stuff. [speaker004:] Yeah. I will send you the [disfmarker] the final version, [speaker002:] Yeah. And the Aurora folks here will [disfmarker] will definitely get something in on Aurora, [speaker004:] which is not [disfmarker] [speaker006:] Actually this [disfmarker] this, um [disfmarker] So, there's another paper. [speaker002:] so. [speaker006:] It's a Eurospeech paper but not related to meetings. But it's on digits. 
So, um, uh, a colleague at SRI developed a improved version of MMIE training. [speaker002:] Uh huh. [speaker006:] And he tested it mostly on digits because it's sort of a [disfmarker] you know, it doesn't take weeks to train it. [speaker002:] Right. [speaker006:] Um. And got some very impressive results, um, with, you know, discriminative, uh, Gaussian training. Um, you know, like, um, error rates [pause] go from [disfmarker] I don't know, in very noisy environment, like from, uh, uh [disfmarker] I for now I [disfmarker] OK, now I have the order of magnit I'm not sure about the order of magnitude. Was it like from ten percent to [vocalsound] eight percent or from e e you know, point [disfmarker] you know, from one percent to point eight percent? [speaker002:] H i it got [disfmarker] it got better. [speaker006:] I mean, it's a [disfmarker] It got better. [speaker004:] Yeah. [speaker006:] That's the important thing. [speaker002:] Yeah, yeah. [speaker006:] Yeah. [speaker005:] Hey, that's the same percent relative, [speaker006:] But it's [disfmarker] [speaker005:] so [disfmarker] [speaker006:] Yeah. Right. [speaker002:] Yeah. [speaker006:] It's, uh, something in [disfmarker] [speaker002:] Yeah. [speaker006:] Right. [speaker005:] Twenty percent relative gain. [speaker006:] Yeah. [speaker002:] Yeah. Yeah. Um, [vocalsound] let's see. I think the only thing we had left was [disfmarker] unless somebody else [disfmarker] Well, there's a couple things. Uh, one is [pause] anything that, um, [vocalsound] anybody has to say about Saturday? Anything we should do in prep for Saturday? Um [disfmarker] I guess everybody knows about [disfmarker] I mean, u um, Mari was asking [disfmarker] was trying to come up with something like an agenda and we're sort of fitting around people's times a bit. But, um, [vocalsound] clearly when we actually get here we'll [pause] move things around this, as we need to, but [disfmarker] [speaker004:] OK. [speaker002:] so you can't absolutely count on it. [speaker004:] Yeah. [speaker002:] But [disfmarker] but, uh [disfmarker] [speaker001:] Are we meeting in here probably or [disfmarker]? [speaker002:] Yeah. [speaker001:] OK. [speaker002:] That was my thought. I think this is [disfmarker] [speaker006:] Are we recording it? [speaker001:] Yeah. We won't have enough microphones, [speaker002:] u No. I [disfmarker] I hadn't in intended to. [speaker001:] but [disfmarker] There's no way. [speaker006:] OK. [speaker002:] We won we wanna [disfmarker] I mean, they're [disfmarker] there's gonna be, uh, Jeff, Katrin, Mari and two students. So there's five [pause] from there. [speaker005:] And Brian. [speaker006:] But you know th [speaker002:] And Brian's coming, so that's six. [speaker006:] Mm hmm. [speaker005:] And plus all of us. [speaker006:] Can use the Oprah mike. [speaker002:] Uh [disfmarker] [speaker001:] Depends how fast you can [pause] throw it. [speaker005:] It seems like too many [disfmarker] too much coming and going. [speaker001:] It's just [disfmarker] [speaker006:] Mm hmm. [speaker001:] Yeah. [speaker002:] Well [disfmarker] [speaker001:] We don't even have enough channel [disfmarker] [speaker006:] Because it would be a different kind of meeting, [speaker004:] Yeah. [speaker006:] that's what I'm [disfmarker] [speaker008:] Yeah. 
[speaker002:] Well [disfmarker] [speaker006:] But [disfmarker] [speaker002:] I hadn't [pause] really thought of it, [speaker006:] Maybe just [disfmarker] maybe not the whole day [speaker002:] but [disfmarker] [speaker006:] but just, you know, maybe some [disfmarker] I mean, [speaker002:] Maybe part of it. [speaker006:] part of it? [speaker002:] Maybe part of it. [speaker005:] Make everyone read digits. [speaker002:] At the same time. [speaker005:] At the same time. [speaker006:] Please. [speaker001:] At the same time. [speaker002:] Yeah. [speaker001:] We c [speaker002:] I don't know. Any [speaker001:] That's their initiation into our [speaker005:] Into our [disfmarker] our [disfmarker] our cult. [speaker001:] w Yeah, our [disfmarker] Yeah, our [disfmarker] [speaker006:] Maybe the sections that are not right afte you know, after lunch when everybody's still munching and [disfmarker] [speaker002:] OK. [speaker001:] So can you send out a schedule once you know it, jus? [speaker002:] Well [disfmarker] OK. Yeah. I guess I sent it around a little bit. [speaker001:] Is [disfmarker] is there a r? There's a res [speaker002:] But [disfmarker] [speaker001:] Is it changed now, or [disfmarker]? [speaker002:] I hadn't heard back from Mari after I [disfmarker] I u u uh, brought up the point abou about Andreas's schedule. So, [vocalsound] um, maybe when I get back there'll be [pause] some [disfmarker] some mail from her. [speaker001:] OK. [speaker002:] So, I'll make a [disfmarker] [speaker003:] I'm looking forward to seeing your representation. That'd be, uh [disfmarker] [speaker001:] And w we should get [pause] the two meetings from y [speaker003:] I'd like to see that. Yeah. [speaker001:] I mean, I know about the first meeting, um, but the other one that you did, the NSA one, which we [pause] hadn't done cuz we weren't running recognition on it, [speaker003:] Mm hmm. [speaker001:] because the non native speaker [disfmarker] there were five non native speakers. [speaker003:] Mm hmm. I see. Mm hmm. [speaker001:] But, it would be useful for the [disfmarker] to see what we get [pause] with that one. [speaker003:] Great. [speaker001:] So. [speaker003:] OK. It's, uh, two thousand eleven twenty one one thousand. [speaker001:] Yeah, three. Right. So [disfmarker] [speaker003:] Great. I sent email when I finished the [disfmarker] that one. [speaker001:] N S A three, I think. [speaker003:] That was sort of son Yeah, that's right. That's right. That's much simpler. [speaker001:] I don't know what they said but I know the number. [speaker002:] Th that part's definitely gonna confuse somebody who looks at these later. [speaker006:] Right. Um. [speaker002:] I mean, this is [disfmarker] we we're recording secret NSA meetings? [speaker006:] Not the [disfmarker] [speaker002:] I mean, it's [disfmarker] [speaker006:] Yeah. Uh. [speaker003:] Yeah. Not that NSA. [speaker006:] The [disfmarker] th the [disfmarker] [speaker002:] It's network services and applications. [speaker001:] They are hard to understand. [speaker006:] Wait. [speaker001:] They're very, uh, out there. [speaker006:] The [disfmarker] [speaker001:] I have no idea what they're talking about. [speaker006:] The, um [disfmarker] [speaker002:] Yeah. [speaker006:] th the other good thing about the alignments is that, um, it's not always the machine's fault if it doesn't work. So, you can actually find, um, [speaker001:] It's the person's fault. [speaker006:] problem [disfmarker] uh, proble [speaker001:] It's Morgan's fault. 
[speaker006:] You can find [disfmarker] You can find, uh, problems with [disfmarker] with the transcripts, [speaker002:] It's always Morgan's fault. [speaker006:] um, you know, [speaker005:] Oh. [speaker006:] and go back and fix them. [speaker001:] Yeah. [speaker006:] But [disfmarker] [speaker001:] Tha There are some cases like where the [disfmarker] the wrong speaker [disfmarker] uh, these ca Not a lot, but where the [disfmarker] the wrong person [disfmarker] the [disfmarker] the speech is addre attached to the wrong speaker and you can tell that when you run it. Or at least you can get [pause] clues to it. [speaker003:] Interesting. I guess it does w [speaker001:] So these are from the early transcriptions that people did on the mixed signals, like what you have. [speaker003:] Mm hmm. It also raises the possibility of, um, using that kind of representation [disfmarker] I mean, I don't know, this'd be something we'd wanna check, [comment] but maybe using that representation for data entry and then displaying it on the channelized, uh, representation, cuz it [disfmarker] I think that the [disfmarker] I mean, my [disfmarker] my preference in terms of, like, looking at the data is to see it [pause] in this kind of musical score format. [speaker001:] Mm hmm. [speaker003:] And also, s you know, Sue's preference as well. [speaker001:] Yeah, if you can get it to [disfmarker] [speaker003:] And [disfmarker] and [disfmarker] but, I mean, this [disfmarker] if this is a better interface for making these kinds of, uh, you know, lo clos local changes, then that'd be fine, too. I don't [disfmarker] I have no idea. I think this is something that would need to be checked. Yeah. [speaker002:] OK. Th the other thing I had actually was, I [disfmarker] I didn't realize this till today, but, uh, this is, uh, Jose's last day. [speaker008:] Is my last [disfmarker] my last day. [speaker005:] Yeah. [speaker001:] Oh! [speaker006:] Oh! [speaker003:] Oh. [speaker008:] My [disfmarker] [vocalsound] my last meeting [pause] about meetings. [speaker005:] You're not gonna be here tomorrow? Oh, that's right. [speaker008:] Yeah. [speaker005:] Tomorrow [disfmarker] [speaker008:] Because, eh, I leave, eh, the next Sunday. [speaker004:] The last meeting meeting? [speaker005:] It's off. [speaker008:] I will come back to home [disfmarker] to Spain. [speaker006:] Mm hmm. [speaker001:] Oh. [speaker002:] Yeah. I d so I [disfmarker] I jus [speaker006:] Mm hmm. [speaker008:] And I [disfmarker] I would like to [disfmarker] to [disfmarker] to say thank you very much, eh, to all people [pause] in the group and at ICSI, [speaker001:] Oh. [speaker006:] Mm hmm. [speaker005:] Yeah. It was good having you. [speaker006:] Mmm. [speaker001:] Yeah. [speaker008:] because I [disfmarker] I enjoyed [@ @] very much, [speaker006:] Mmm. [speaker008:] uh. And I'm sorry by the result of overlapping, because, eh, [vocalsound] I haven't good results, eh, yet but, eh, [vocalsound] I [disfmarker] [vocalsound] I pretend [comment] to [disfmarker] to continuing out to Spain, eh, during the [disfmarker] the following months, eh, because I have, eh, another ideas [speaker002:] Uh huh. [speaker008:] but, eh, I haven't enough time to [disfmarker] to [disfmarker] [vocalsound] with six months it's not enough to [disfmarker] [vocalsound] to [disfmarker] to research, [speaker005:] Yep. [speaker002:] Yeah. [speaker008:] eh, and e i I mean, if, eh, the topic is, eh, so difficult, uh, in my opinion, there isn't [disfmarker] [speaker002:] Yeah. 
Maybe somebody else will come along and will be, uh, interested in working on it and could start off from where you are also, you know. They'd make use of [disfmarker] of what you've done. [speaker008:] Yeah. Yeah. [speaker002:] Yeah. [speaker008:] But, eh, I [disfmarker] I will try to recommend, eh, at, eh, [vocalsound] the Spanish government but, eh, the following [@ @] scholarship, eh, eh, [vocalsound] eh, will be here [pause] more time, because eh, i in my opinion is [disfmarker] is better, [vocalsound] eh, for us [pause] to [disfmarker] to spend more time here and to work more time i i in a topic. No? [speaker002:] Yeah, it's a very short time. [speaker008:] But, uh [disfmarker] [speaker002:] Yeah. [speaker005:] Yeah, six months is hard. [speaker002:] Yeah. [speaker008:] Yeah. It is. [speaker005:] I think a year is a lot better. [speaker008:] Yeah. [speaker002:] Yeah. [speaker008:] It's difficult. You [disfmarker] e you have, eh [disfmarker] you are lucky, and you [disfmarker] you find a solution [comment] in [disfmarker] in [disfmarker] in some few tim uh, months, eh? OK. But, eh, I think it's not, eh, common. But, eh, anyway, thank you. Thank you very much. Eh, I [disfmarker] I bring the chocolate, eh, to [disfmarker] [vocalsound] [vocalsound] to tear, uh, with [disfmarker] with you, [speaker006:] Mmm. [speaker003:] Ah. [speaker001:] Oh. [speaker003:] Nice. [speaker008:] uh. I [disfmarker] I hope if you need, eh, something, eh, from us in the future, I [disfmarker] I will be at Spain, [vocalsound] to you help, uh. [speaker005:] Great. [speaker002:] Well. [speaker003:] Great. [speaker002:] Thank you, Jose. [speaker003:] Thank you. [speaker001:] Right. [speaker008:] And, thank you very much. [speaker006:] Have a good trip. [speaker002:] Yeah. [speaker008:] Thank you. [speaker003:] Yeah. [speaker006:] Keep in touch. [speaker002:] Yeah. OK. I guess, uh, unless somebody has something else, we'll read [disfmarker] read our digits [speaker005:] Digits? [speaker002:] and we'll get our [disfmarker] [speaker004:] Uh. [speaker005:] Are we gonna do them simultaneously [speaker004:] Oops. [speaker002:] get our last bit of, uh, Jose's [disfmarker] Jose [disfmarker] Jose's digit [disfmarker] [speaker005:] or [disfmarker]? [speaker008:] You [disfmarker] eh [disfmarker] [speaker002:] Uh, I'm sorry? [speaker008:] Ye ye you prefer, eh, to eat, eh, chocolate, eh, at the coffee break, eh, at the [disfmarker]? [vocalsound] Or you prefer now, before after [disfmarker]? [speaker003:] Well, we have a time [disfmarker] [speaker006:] No, we prefer to keep it for ourselves. [speaker004:] During [disfmarker] [speaker006:] Yeah, yeah. [speaker003:] Well, we have a s a time [disfmarker] time constraint. [speaker004:] during digits. [speaker002:] So keep it away from that end of the table. [speaker003:] Yeah. [speaker006:] Yeah. [speaker008:] Yeah. [speaker003:] Yeah. [speaker001:] Why is it that I can read your mind? [speaker005:] Well, we've gotta wait until after di after we take the mikes off. [speaker004:] No, no. [speaker005:] So are we gonna do digits simultaneously [speaker002:] Well? [speaker001:] You [disfmarker] This is our reward if we [pause] do our digi [speaker004:] Yeah. [speaker005:] or what? [speaker002:] Yeah. [speaker003:] OK. [speaker004:] Simultaneous digit chocolate task. [speaker008:] I [disfmarker] I think, eh, it's enough, eh, for more peopl for more people [pause] after. [speaker002:] We're gonna [disfmarker] we're gonna do digits at the same [disfmarker] [speaker001:] Oh. 
[speaker006:] Mmm! [speaker008:] But, eh [disfmarker] [speaker003:] That's nice. [speaker006:] Mm hmm. [speaker002:] Um. [speaker001:] Oh, thanks, Jose. [speaker003:] Wow. [speaker008:] To Andreas, the idea is [disfmarker] is good. [vocalsound] s To eat here. [speaker002:] Well [disfmarker] [speaker006:] Mmm. [speaker003:] Wow. [speaker006:] Oh. [speaker003:] Very nice. [speaker002:] Tha that's [disfmarker] that looks great. [speaker001:] Oh, wow. [speaker006:] Oh, yeah. Th it doesn't [disfmarker] it won't leave this room. [speaker002:] Alright, so in the interest of getting to the [disfmarker] [speaker001:] We could do digits while other people eat. [speaker004:] Yeah. Yeah. [speaker001:] So it's background crunching. [speaker008:] Yeah. [speaker006:] Mmm. [speaker008:] Is, eh, a [disfmarker] another acoustic event. [speaker003:] Nice. [speaker001:] We don't have background chewing. [speaker004:] Background crunch. Yeah. [speaker001:] No, we don't have any data with background eating. [speaker006:] Mmm. [speaker004:] Yeah. [speaker001:] I'm serious. You [speaker002:] She's [disfmarker] she's serious. [speaker005:] It's just the rest of the digits [disfmarker] the rest of the digits are very clean, [speaker001:] I am serious. [speaker002:] She is serious. [speaker006:] Mmm. [speaker008:] Are you [disfmarker]? Oh, they're clean. [speaker001:] Well [disfmarker]? [speaker005:] um, without a lot of background noise, [speaker004:] Yeah! [speaker001:] And it [disfmarker] [speaker005:] so I'm just not sure [disfmarker] [speaker001:] You have to write down, like, while y what you're [disfmarker] what ch chocolate you're eating cuz they might make different sounds, like n nuts [disfmarker] chocolate with nuts, chocolate without nuts. [speaker003:] Oh. [speaker004:] Crunchy frogs. [speaker002:] Um [disfmarker] [speaker006:] Chocolate adaptation. [speaker002:] Actually [disfmarker] [vocalsound] actually kind of careful cuz I have a strong allergy to nuts, so I have to sort of figure out one without th [speaker001:] That w Oh, yeah, they [disfmarker] they might. [speaker002:] It's hard to [disfmarker] hard to say. [speaker001:] Maybe those? They're so [disfmarker] [speaker002:] I don't know. [speaker001:] I don't know. [speaker002:] Um [disfmarker] [speaker001:] This is [disfmarker] You know, this is a different kind of speech, [speaker002:] Well [disfmarker] [speaker008:] Take [disfmarker] take several. [speaker001:] looking at chocolates, deciding [disfmarker] [speaker006:] Mmm. [speaker001:] you know, it's another style. [speaker006:] Mmm. [speaker002:] Yeah. I may [disfmarker] I may hold off. But if I was [disfmarker] eh, but maybe I'll get some later. [speaker006:] Mmm. [speaker002:] Thanks. Well [disfmarker] well, why don't we [disfmarker]? He [disfmarker] he's worried about a ticket. Why don't we do a simultaneous one? [speaker001:] OK. [speaker005:] OK. [speaker003:] OK. [speaker006:] Mmm. [speaker002:] Simultaneous one? OK. [speaker005:] Remember to read the transcript number, please. [speaker001:] And you laughed at me, too, f the first time I said that. [speaker006:] Right. [speaker008:] OK. [speaker004:] Oops. [speaker002:] I have to what? [speaker008:] Yeah. [speaker001:] You laughed at me, too, the first time I sa said [disfmarker] [speaker002:] I did, and now I love it so much. [speaker001:] You really shouldn't, uh, te [speaker005:] OK, everyone ready? 
[speaker001:] You have to sort of, um [disfmarker] Jose, if you haven't done this, you have to plug your ears while you're t talking [speaker002:] W wait [disfmarker] wait a minute [disfmarker] wait a minute. W we want [disfmarker] we want [disfmarker] [speaker001:] so that you don't get confused, I guess. [speaker002:] we want it synchronized. [speaker008:] Yeah. [speaker003:] Hey, you've done this before. Haven't you? [speaker001:] Yeah. Oh, you've done this one before? [speaker004:] That's [disfmarker] [speaker003:] You've read [pause] digits together with us, haven't you [disfmarker] [speaker001:] Together? [speaker003:] I mean, at the same time? [speaker008:] No. [speaker003:] Oh, you haven't! [speaker001:] I'm not [disfmarker] we [disfmarker] we [disfmarker] Oh, and you haven't done this either. [speaker002:] OK. [speaker003:] Oh, OK. [speaker004:] Oh, yeah. [speaker001:] I the first time is [pause] traumatic, [speaker002:] We Y [vocalsound] Yeah, [speaker001:] but [disfmarker] [speaker003:] Oh, and the groupings are important, [speaker002:] bu [speaker008:] Mmm. [speaker003:] so yo you're supposed to pause between the groupings. [speaker008:] The grouping. [speaker002:] Yeah. [speaker008:] Yeah. [speaker002:] OK. So, uh [disfmarker] [speaker006:] You mean that the [disfmarker] the grouping is supposed to be synchronized? [speaker005:] Yeah, sure. [speaker003:] No. [speaker002:] No, no. [speaker006:] No? No? [speaker002:] Synchronized digits. [speaker003:] No. [speaker001:] That'd be good. We we'll give everybody the same sheet [speaker006:] It's like a [disfmarker] like a Greek [disfmarker] like a Greek choir? [speaker001:] but they say different [disfmarker] [speaker006:] You know? Like [disfmarker] [speaker002:] Yes. [speaker005:] Hey, what a good idea. [speaker006:] Yeah. [speaker005:] We could do the same sheet for everyone. Have them all read them at once. [speaker001:] Well, different digits [speaker004:] Eh [disfmarker] [speaker001:] but same groupings. [speaker005:] Or [disfmarker] or just same digits. [speaker001:] So they would all be [disfmarker] [speaker005:] See if anyone notices. [speaker001:] Yeah. [speaker003:] Yeah. That'd be good. [speaker002:] There's so many possibilities. [speaker003:] And then [disfmarker] then we can sing them next time. [speaker002:] Uh. OK, why don't we go? Uh, one two three [disfmarker] Go! [speaker003:] OK. Mmm! [speaker002:] And Andreas has the last word. [speaker005:] Did you read it twice or what? [speaker001:] He's try No, he's trying to get good recognition performance. [speaker003:] He had the h [speaker008:] Yeah. Yeah. [speaker003:] He had the [disfmarker] the long form. [speaker006:] No. [speaker005:] And we're off. [speaker003:] Hmm. Testing channel two. [speaker005:] Two, two. [speaker003:] Two. [speaker005:] Two. Oh. [speaker004:] Hello? [speaker002:] Hmm? Yeah Thank You. OK Well, so Ralf and Tilman are here. [speaker006:] OK. Great. Great. [speaker002:] Made it safely. [speaker006:] So the [disfmarker] what w we h have been doing i they would like us all to read these digits. But we don't all read them but a couple people read them. [speaker001:] OK. [speaker006:] Uh, wanna give them all with German accents today or [disfmarker]? [speaker002:] Sure. [speaker006:] OK. [speaker002:] OK and the way you do it is you just read the numbers not as uh each single, so just like I do it. [speaker001:] Mm hmm. [speaker002:] OK. First you read the transcript number. [speaker004:] OK, [speaker002:] Turn. 
[speaker004:] uh [disfmarker] What's [disfmarker] [speaker006:] OK. Let's be done with this. OK. [speaker001:] OK. [speaker006:] this is Ami, who [disfmarker] And this is Tilman and Ralf. [speaker001:] Hi. Uh huh. Nice to meet you. [speaker004:] Hi. [speaker006:] Hi. OK. So we're gonna try to finish by five so people who want to can go hear Nancy Chang's talk, uh downstairs. [speaker001:] Hmm. [speaker006:] And you guys are g giving talks on tomorrow and Wednesday lunch times, [speaker001:] Yes. [speaker004:] Mmm. [speaker006:] right? That's great. OK so, do y do you know what we're gonna do? [speaker002:] I thought two things uh we'll introduce ourselves and what we do. And um we already talked with Andreas, Thilo and David and some lines of code were already written today and almost tested and just gonna say we have um again the recognizer to parser thing where we're working on and that should be no problem and then that can be sort of developed uh as needed when we get [disfmarker] enter the tourism domain. em we have talked this morning with the [disfmarker] with Tilman about the generator. and um There one of our diligent workers has to sort of volunteer to look over Tilman's shoulder while he is changing the grammars to English [speaker001:] S Mm hmm. [speaker002:] because w we have [disfmarker] we face two ways. Either we do a syllable concatenating um grammar for the English generation which is sort of starting from scratch and doing it the easy way, or we simply adopt the ah um more in depth um style that is implemented in the German system and um are then able not only to produce strings but also the syntactic parse uh not parse not the syntactic tree that is underneath in the syntactic structure which is the way we decided we were gonna go because A, it's easier in the beginning [speaker001:] Mm hmm. [speaker002:] and um it does require some [disfmarker] some knowledge of [disfmarker] of those grammars and [disfmarker] and [disfmarker] and some ling linguistic background. But um it shouldn't be a problem for anyone. [speaker006:] OK So That sounds good. Johno, are you gonna have some time t to do that uh w with these guys? [speaker005:] Sure. [speaker006:] cuz y you're the grammar maven. [speaker005:] OK. [speaker006:] I mean it makes sense, doesn't it? [speaker005:] Yeah. [speaker006:] Yeah Good. OK. So, I think that's probably the [disfmarker] the right way to do that. And an Yeah, so I [disfmarker] I actually wanna f to find out about it too, but I may not have time to get in. [speaker002:] the [disfmarker] the ultimate goal is that before they leave we [disfmarker] we can run through the entire system input through output on at least one or two sample things. And um and by virtue of doing that then in this case Johno will have acquired the knowledge of how to extend it. Ad infinitum. When needed, if needed, when wanted and so forth. [speaker006:] OK that sounds great. [speaker002:] And um also um Ralf has hooked up with David and you're gonna continue either all through tonight or tomorrow on whatever to get the er parser interface working. [speaker004:] Mmm. [speaker002:] They are thinning out and thickening out lattices and doing this kind of stuff to see what works best. [speaker004:] Mmm, yep. [speaker006:] Great. So, you guys enjoy your weekend? [speaker001:] Yes, very much so. [speaker004:] Yeah, very much [speaker006:] OK, before [disfmarker] before you got put to work? [speaker004:] Yeah [speaker006:] Great. 
OK, so that's [disfmarker] Sort of one branch is to get us caught up on what's going on. Also of course it would be really nice to know what the plans are, in addition to what's sort of already in code. [speaker001:] Yes. [speaker006:] and we can d I dunno w w was there uh a time when we were set up to do that? It probably will work better if we do it later in the week, after [pause] we actually understand uh better what's going on. [speaker001:] Yes. [speaker004:] Hmm. [speaker001:] Yeah. [speaker006:] So when do you guys leave? [speaker001:] Um we're here through Sunday, [speaker004:] Oh [speaker001:] so [speaker006:] Oh, OK, [speaker001:] All through Friday would be fine. [speaker006:] so [disfmarker] OK, So [disfmarker] so anyt we'll find a time later in the week to uh get together and talk about [pause] your understanding of what SmartKom plans are. [speaker001:] Mm hmm. [speaker006:] and how we can change them. [speaker001:] Yes. Sure. [speaker006:] Uh, [speaker002:] Should we already set a date for that? Might be beneficial while we're all here. [speaker006:] OK? um What [disfmarker] what does not work for me is Thursday afternoon. I can do earlier in the day on Thursday, or [pause] um [pause] most of the time on Friday, not all. [speaker002:] Thursday morning sounds fine? [speaker006:] Wha but, Johno, [speaker001:] Mm hmm. [speaker006:] what are your constraints? [speaker005:] um Thursday afternoon doesn't work for me, but [disfmarker] [speaker002:] Neither does Thursday morning, no? [speaker005:] Uh Thursday morning should be fine. [speaker006:] Eleven? [speaker001:] OK. [speaker006:] Eleven on Thursday? [speaker005:] I was just thinking I w I will [pause] have [pause] leavened by eleven. [speaker006:] Right. Right. This is then out of deference to our non morning people. [speaker001:] Mm hmm. OK. So at eleven? [speaker004:] Hmm. [speaker001:] Thursday around eleven? OK. [speaker006:] Yeah. And actually we can invite um Andreas as well. [speaker002:] Uh he will be in Washington, though. [speaker006:] Oh that's true. He's off [disfmarker] off on his trip already. [speaker002:] but um David is here and he's actually knows everything about the SmartKom recognizer. [speaker006:] Thilo. OK well yeah maybe we'll see if David could make it. That would be good. [speaker002:] OK so facing to [disfmarker] to what we've sort of been doing here um well for one thing we're also using this room to collect data. [speaker001:] Yeah obviously. [speaker002:] um um Not this type of data, no not meeting data but sort of [disfmarker] sort ah our version of a wizard experiment such [speaker001:] Oh, OK. [speaker002:] not like the ones in Munich but pretty close to it. [speaker001:] Mm hmm. [speaker002:] The major difference to the Munich ones is that we do it via the telephone [speaker001:] OK. [speaker002:] even though all the recording is done here and so it's a [disfmarker] sort of a computer call system that gives you tourist information [speaker001:] Mm hmm. [speaker002:] tells you how to get places. And it breaks halfway through the experiment and a human operator comes on. and part of that is sort of trying to find out whether people change their linguistic verbal behavior when first thinking they speak to a machine and then to a human. [speaker001:] Yeah. [speaker002:] and we're setting it up so that we can [disfmarker] we hope to implant certain intentions in people. For example um we have first looked at a simple sentence that "How do I get to the Powder Tower?" 
OK so you have the [disfmarker] castle of Heidelberg [speaker001:] OK. [speaker002:] and there is a tower and it's called Powder Tower. [speaker001:] Oh, OK. Yeah. [speaker002:] and um so What will you parse out of that sentence? Probably something that we specified in M three L, [speaker004:] Mmm. [speaker002:] that is [@ @] [comment] "action go to whatever domain, object whatever Powder Tower". And maybe some model will tell us, some GPS module, in the mobile scenario where the person is at the moment. And um we've sort of gone through that once before in the Deep Mail project and we noticed that first of all what are [disfmarker] I should've brought some slides, but what our [disfmarker] So here's the tower. Think of this as a two dimensional representation of the tower. And our system led people here, to a point where they were facing a wall in front of the tower. There is no entrance there, but it just happens to be the closest point of the road network to the geometric center Because that's how the algorithm works. So we took out that part of the road network as a hack and then it found actually the way to the entrance. which was now the closest point of the road network to [speaker001:] Yeah. [speaker002:] OK, geometric center. But what we actually observed in Heidelberg is that most people when they want to go there they actually don't want to enter, because it's not really interesting. They wanna go to a completely different point where they can look at it and take a picture. [speaker001:] Oh, OK. [speaker004:] Hmm. [speaker001:] Yeah. [speaker002:] And so what uh uh a s you s let's say a simple parse from a s from an utterance won't really give us is what the person actually wants. Does he wanna go there to see it? Does he wanna go there now? Later? How does the person wanna go there? Is that person more likely to want to walk there? Walk a scenic route? and so forth. There are all kinds of decisions that we have identified in terms of getting to places and in terms of finding information about things. And we are constructing [disfmarker] and then we've identified more or less the extra linguistic parameters that may f play a role. Information related to the user and information related to the situation. And we also want to look closely on the linguistic information that what we can get from the utterance. That's part of why we implant these intentions in the data collection to see whether people actually phrase things differently whether they want to enter in order to buy something or whether they just wanna go there to look at it. And um so the idea is to construct uh um suitable interfaces and a belief net for a module that actually tries to guess what the underlying intention [pause] was. And then enrich or augment the M three L structures with what it thought what more it sort of got out of that utterance. So if it can make a good suggestion, "Hey!" you know, "that person doesn't wanna enter. That person just wants to take a picture," cuz he just bought film, or "that person wants to enter because he discussed the admission fee before". Or "that person wants to enter because he wants to buy something and that you usually do inside of buildings" and so forth. These ah these types of uh these bits of additional information are going to be embedded into the M three L structure in an [disfmarker] sort of subfield that we have reserved. And if the action planner does something with it, great. 
If not you know, then that's also something um that we can't really [disfmarker] at least we [comment] want to offer the extra information. We don't really [disfmarker] um we're not too worried. [speaker001:] Mm hmm. [speaker004:] Hmm. [speaker002:] I mean [disfmarker] t s Ultimately if you have [disfmarker] if you can offer that information, somebody's gonna s do something with it sooner or later. That's sort of part of our belief. [speaker005:] What was he saying? [speaker002:] Um, for example, right now I know the GIS from EML is not able to calculate these viewpoints. So that's a functionality that doesn't exist yet to do that dynamically, [speaker001:] Mm hmm. [speaker002:] but if we can offer it that distinction, maybe somebody will go ahead and implement it. Surely nobody's gonna go ahead and implement it if it's never gonna be used, so. What have I forgotten about? Oh yeah, [speaker006:] Well th uh [speaker002:] how we do it, yeah that's the [speaker006:] No no. It's a good time to pause. I s I see [pause] questions on peoples' faces, so why don't [disfmarker] let's [disfmarker] let's [disfmarker] Let's hear [disfmarker] [speaker001:] Oh Well the obvious one would be if [disfmarker] if you envision this as a module within SmartKom, where exactly would that Sit? [speaker002:] um [disfmarker] so far I've thought of it as sort of adding it onto the modeler knowledge module. [speaker001:] That's the d [speaker004:] Hmm. [speaker001:] OK, yeah. [speaker002:] So this is one that already adds additional information to the [speaker001:] Makes perfect sense. Yes. [speaker004:] Hmm, ah. [speaker002:] but it could sit anywhere in the intention recognition I mean basically this is what intention recognition literally sort of can [disfmarker] [speaker001:] Well it's supposed to do. [speaker004:] Mmm. [speaker001:] Yeah [speaker006:] That's what it should do. Right, yeah. [speaker001:] Yeah. Yeah. [speaker004:] Huh. [speaker002:] Yeah. [speaker001:] Well f from my understanding of what the people at Philips were originally trying to do doesn't seem to quite fit into SmartKom currently so what they're really doing right now is only selecting among the alternatives, the hypotheses that they're given enriched by the domain knowledge and the um discourse modeler and so on. [speaker002:] Yeah. Yeah. [speaker001:] So if [disfmarker] if this is additional information that could be merged in by them. And then it would be available to action planning and [disfmarker] and others. [speaker002:] Yeah. the [disfmarker] [speaker006:] let's [disfmarker] let's That w OK that was one question. Is there other [disfmarker] other things that cuz [pause] we wanna not Pa pass over any [pause] you know, questions or concerns that you have. [speaker001:] Well there're [disfmarker] there're two levels of [disfmarker] of giving an answer and I guess on both levels I don't have any um further questions. [speaker004:] Mmm. Mmm. [speaker001:] uh the [disfmarker] the two levels will be as far as I'm concerned as [pause] uh standing here for the generation module [speaker004:] Mmm. [speaker001:] and the other is [disfmarker] is my understanding of what SmartKom uh is supposed to be [speaker006:] Right. [speaker001:] and I [disfmarker] I think that fits in perfectly [speaker006:] So [disfmarker] well, let me [disfmarker] Let me s [pause] expand on that a little bit from the point of view of the generation. [speaker004:] Hmm. [speaker001:] Yeah.
[speaker006:] So the idea is that we've actually got this all laid out an and we could show it to you ig um Robert didn't bring it today but there's a [disfmarker] a belief net which is [disfmarker] There's a first cut at a belief net that [disfmarker] that doesn't [disfmarker] it [disfmarker] isn't fully uh instantiated, and in particular some of the [disfmarker] the combination rules and ways of getting the [disfmarker] the conditional probabilities aren't there. But we believe that we have laid out the fundamental decisions in this little space [speaker001:] Mm hmm. [speaker006:] and the things that influence them. So one of the decisions is what we call this AVE thing. Do you want to um access, view or enter a thing. [speaker004:] Hmm. [speaker006:] So that's a a discrete decision. [speaker001:] Mm hmm. [speaker006:] There are only three possibilities and the uh [disfmarker] what one would like is for this uh, knowledge modeling module to add which of those it is and give it to the planner. [speaker001:] Mm hmm. [speaker006:] But, uh th the current design suggests that if it seems to be an important decision and if the belief net is equivocal so that it doesn't say that one of these is much more probable than the other, then an option is to go back and ask for the information you want. [speaker001:] Mm hmm. [speaker006:] Alright? Now there are two ways one can go [disfmarker] a imagine doing that. For the debugging we'll probably just have a [disfmarker] a drop down menu and the [disfmarker] while you're debugging you will just [disfmarker] OK. But for a full system, then one might very well formulate a query, give it to the dialogue planner and say this, you know ar are you know you [disfmarker] are you planning to enter? [speaker001:] Mm hmm. [speaker006:] Or whatever it [disfmarker] whatever that might be. So that's [disfmarker] under that model then, There would be a [disfmarker] uh [disfmarker] um a loop in which this thing would formulate a query, [speaker001:] Yes. [speaker006:] presumably give it to you. That would get expressed and then hopefully you know, you'd get an answer [pause] back. [speaker001:] Yep. [speaker006:] And that would of course [disfmarker] the answer would have to be parsed. [speaker004:] Mmm. [speaker006:] right [speaker004:] Yep. [speaker006:] and [disfmarker] OK so, [pause] th [pause] that uh, [speaker001:] Yes. [speaker006:] We probably won't do this early on, because the current focus is more on the decision making and stuff like that. [speaker001:] Yep. [speaker006:] But While we're on the subject I just wanted to give you a sort of head's up that it could be that some months from now we said "OK we're now ready to try to close that loop" in terms of querying about some of these decisions. [speaker001:] Mm hmm. Mm hmm. [speaker004:] Hmm. [speaker001:] Yep. So [disfmarker] my suggestion then is that you um look into the currently ongoing discussion about how the action plans are supposed to look like. And they're currently um Agreeing or [disfmarker] or in the process of agreeing on an X M L ification of um something like a state transition network of how dialogues would proceed. and [disfmarker] The [disfmarker] these um transition networks uh will be what the action planner interprets in a sense. [speaker006:] Hmm. D did you know this Robert? [speaker002:] uh Michael is doing that, right? [speaker001:] Well uh Marcus Lerkult is actually implementing that stuff and Marcus and Michael together are um leading the discussion there, [speaker002:] OK. 
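The AVE decision and the query-back loop just described can be pictured as follows. This is a toy sketch, not the actual belief net: the posterior values, the equivocality margin, and the function name are all invented. The point is only that the module either commits to one of access, view, or enter, or hands the dialogue planner a question instead of guessing silently.

```python
# Illustrative only: a decision over Access / View / Enter with a
# fallback clarification query when the belief net is equivocal.

def decide_ave(posterior, margin=0.25):
    """posterior: dict mapping 'access'/'view'/'enter' to probabilities."""
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] >= margin:
        return ("decide", best[0])
    # Equivocal: formulate a query for the dialogue planner instead.
    return ("ask", f"Are you planning to {best[0]} it, or to {runner_up[0]} it?")

print(decide_ave({"access": 0.15, "view": 0.60, "enter": 0.25}))  # clear enough
print(decide_ave({"access": 0.30, "view": 0.38, "enter": 0.32}))  # ask back
```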
[speaker001:] yeah. [speaker006:] So we ha we have to get in on that. [speaker002:] Mm hmm. [speaker001:] Yep. [speaker006:] because um partly those are like X schemas. [speaker004:] Mmm. [speaker001:] Definitely. [speaker006:] the transition diagrams. [speaker002:] Hmm. [speaker006:] And it may be that [disfmarker] that um we should early on make sure that they have the flexibility that we need. [speaker002:] Hmm. But they uh Have I understood this right? They [disfmarker] they govern more or less the [disfmarker] the dialogue behavior or the action [disfmarker] [speaker001:] Mm hmm. [speaker002:] It's not really what you do with the content of the dialogue but it's So, I mean there is this [disfmarker] this [disfmarker] this nice interf [speaker004:] uh, No, it's [disfmarker] it's also a quantrant uh uh [disfmarker] [speaker002:] i Is it [disfmarker] [speaker006:] So there's ac so there [disfmarker] th the word "action", OK, is [disfmarker] is what's ambiguous here. [speaker004:] I think. Hmm. [speaker001:] Yes. [speaker006:] So, um one thing is there's an actual planner that tells the person in the tourist domain now, per tells the person how to go, [speaker001:] OK. [speaker006:] " First go here, first go there [speaker004:] Mm hmm. [speaker006:] uh, you know, take a bus ", whatever it is. So that's that form of planning, and action, and a route planner and GIS, all sort of stuff. uh But I think that isn't what you mean. [speaker001:] No. No, in SmartKom terminology that's um called a function that's modeled by a function modeler. And it's th that's completely um encapsulated from th the dialogue system. That's simply a functionality that you give data as in a query and then you get back from that mmm, a functioning model um which might be a planner or a VCR or whatever. um some result and that's then [disfmarker] then used. [speaker006:] Well, OK, so that's what I thought. So action he action here means dia uh speech ac uh you know dialogue act. [speaker001:] Yeah, yeah. [speaker002:] Mmm. [speaker001:] Yeah, in that [disfmarker] in that sense yes, [speaker006:] Yeah. [speaker001:] dialogue act, yeah. [speaker006:] Um, I think tha I think it's not going to [disfmarker] I think that's not going to be good enough. I I don what uh [disfmarker] what I meant by that. So I think the idea of having a, you know, transition diagram for the grammar of conversations is a good idea. [speaker001:] Mm hmm. [speaker006:] OK? And I think that we do hav definitely have to get in on it and find out [disfmarker] OK. But I think that um when [disfmarker] so, when you get to the tourist domain it's not just an information retrieval system. [speaker001:] Mm hmm. [speaker006:] Right? [speaker001:] Clearly. [speaker006:] So this i this is where I think this [disfmarker] [speaker001:] Yes. [speaker006:] people are gonna have to think this through a bit more carefully. [speaker001:] Mm hmm. [speaker006:] So, if it's only like in [disfmarker] in the [disfmarker] in the film and T V thing, OK, you can do this. And you just get information and give it to people. But what happens when you actually get them moving and so forth and so on [speaker001:] Yep. [speaker006:] Uh, y y your [disfmarker] I d I think the notion of this as a self contained uh module you know th the functional module that [disfmarker] that interacts with [disfmarker] with where the tourism g stuff is going [comment] probably is too restrictive. [speaker001:] Yep. 
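For concreteness, an XML-ified state transition network of the sort Markus and Michael are said to be defining might be interpreted roughly as below. The states, events, and the dict encoding are assumptions made up for this sketch; the real SmartKom format was still being negotiated at the time.

```python
# Assumed shape of an interpretable dialogue transition network.

network = {
    "start": "greet",
    "states": {
        "greet": {"on": {"user_query": "collect_constraints"}},
        "collect_constraints": {"on": {"slots_filled": "present_result",
                                       "slot_missing": "ask_slot"}},
        "ask_slot": {"on": {"user_answer": "collect_constraints"}},
        "present_result": {"on": {"user_accept": "done",
                                  "user_reject": "collect_constraints"}},
        "done": {"on": {}},
    },
}

def step(state, event):
    # Stay in the current state on an unexpected event.
    return network["states"][state]["on"].get(event, state)

state = network["start"]
for event in ["user_query", "slot_missing", "user_answer", "slots_filled"]:
    state = step(state, event)
print(state)  # -> "present_result"
```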
[speaker006:] Now I dunno how much people have thought ahead to the tourist domain in this [speaker001:] Probably not enough, I mean an [disfmarker] another uh more basic point there is that the current um tasks and therefore th the concepts in this ac what's called the action plan and what's really the dialogue manager. [speaker006:] Yeah [speaker001:] um is based on slots that have to be filled and the um kind of values in these slots would be fixed things like the a time or a movie title or something like this [speaker006:] Mm hmm. Right. [speaker001:] whereas in the a um tourist domain it might be an entire route. [speaker006:] Indeed. [speaker001:] Set based, or even very complex structured information in these slots [speaker006:] Right. [speaker001:] and I'm not sure if [disfmarker] if complex slots of that type are really um being taken into consideration. [speaker006:] OK. Could you [disfmarker] could you put a message into the right place to see if we can at least ask that question? [speaker001:] So that's [disfmarker] that's really something we [speaker002:] Yep. [speaker001:] Mm hmm. [speaker002:] rea [speaker001:] I mean nothing's being completely settled there [speaker002:] yep [speaker001:] so this is really an ongoing discussion [speaker002:] Mm hmm yeah and um it might actually [speaker001:] and that's [speaker002:] OK ah also [disfmarker] because um again in in Deep Map we have faced and implemented those problems once already maybe we can even shuffle some know how from there to to Markus and Michael. [speaker001:] Mm hmm. Yes. [speaker004:] Mmm. [speaker001:] Yep. [speaker002:] And um mmm You don't know [disfmarker] OK th I'll [disfmarker] I'll talk to Michael it's what I do anyway. Who [disfmarker] How far is the uh the [disfmarker] the M three L specification for [disfmarker] for the la natural language input gone on the [disfmarker] the uh I haven't seen anything for the uh tourist path domain. [speaker004:] Yeah, it's [disfmarker] it's not defined yet. [speaker002:] And um you are probably also involved in that, [speaker004:] Um [disfmarker] Yeah. [speaker002:] right? uh together with the usual gang, um Petra and Jan [speaker004:] Mmm. Yeah, there's a meeting next next week I think [speaker002:] OK because That's [disfmarker] Those are the [disfmarker] I think the [disfmarker] the true key issues is how does the whatever comes out of the language input pipeline look like and then what the action planner does with it [disfmarker] and how that is uh specified. I didn't think of the internal working of the uh the action planner and the language [disfmarker] uh the function model as sort of relevant. Because what [disfmarker] what they take is sort of this [disfmarker] this fixed representation of a [disfmarker] of an intention. And that can be as detailed or as crude as you want it to be. [speaker001:] Mm hmm. [speaker002:] But um the internal workings of of the [disfmarker] whether you know there're dialogue [disfmarker] action planners that work with belief nets that are action planners that work with you know state automata. So that shouldn't really matter too much. I mean it does matter because it does have to keep track of you [disfmarker] we are on part six of r a route that consists of eight steps and so forth [speaker006:] Yeah, th there [disfmarker] there [disfmarker] I think there are a lot of reasons why it matters. [speaker001:] Right. 
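Ralf's point about slot expressiveness in one picture: a cinema-domain slot holds an atomic filler, while a tourist-domain slot may have to hold an entire structured route. The types and field names below are invented for illustration.

```python
# Invented illustration of atomic vs. structured slot values.
from dataclasses import dataclass

@dataclass
class RouteLeg:
    start: str
    end: str
    mode: str  # e.g. "walk", "bus", "taxi"

@dataclass
class Slot:
    name: str
    value: object = None  # an atomic string, a set, or a whole structure

# Fine for the film/TV domain: fixed atomic fillers.
title_slot = Slot("movie_title", "Run Lola Run")

# Tourist domain: the filler is itself a complex, structured object.
route_slot = Slot("route", [
    RouteLeg("castle gate", "Powder Tower viewpoint", "walk"),
    RouteLeg("Powder Tower viewpoint", "old bridge", "walk"),
])
```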
[speaker006:] OK, so that uh, for example, the i it's the action planner is going to take some spec and s make some suggestions about what the user should do. What the user says after that is going to be very much caught up with what the action planner told it. [speaker001:] Yes. [speaker006:] If the [disfmarker] If the parser and the language end doesn't know what the person's been told OK th it's you're making your life much more difficult than it has to be. [speaker002:] Yeah. [speaker006:] Right? So if someone says the best t to uh go there is by taxi, let's say. Now the planner comes out and says you wanna get there fast, take a taxi. OK. And the language end doesn't know that. OK, there's all sorts of dialogues that won't make any sense which would be just fine. [speaker001:] hmm [speaker006:] uh [speaker002:] Yeah. [speaker001:] That would b but that [disfmarker] I think that [disfmarker] that uh point has been realized and it's [disfmarker] it's not really um been defined yet but there's gonna be some kind of feedback and input from uh the action planner into all the analysis modules, telling them what to expect and what the current state of the discourse is. [speaker004:] Mmm. [speaker001:] Beyond what's currently being implemented which is just word lists. [speaker006:] Yeah, but this is not the st this is not just the state of the discourse. [speaker002:] Mm hmm. [speaker001:] Of [disfmarker] of special interest. [speaker006:] This is actually the state of the plan. That's why [speaker002:] Mm hmm. [speaker001:] Yes, Yes, Mm hmm yeah. [speaker006:] OK so it [disfmarker] z and s uh, It's great if people are already taking that into account. But One would have t have to see [disfmarker] see the details. [speaker001:] The specifics aren't really there yet. Yes. [speaker006:] Yeah. [speaker001:] So, there's work to do there. [speaker006:] So anyway, Robert, that's why I was thinking that [speaker002:] Mm hmm. [speaker006:] um I think you're gonna need [disfmarker] We talked about this several times that [disfmarker] that [disfmarker] the [disfmarker] the input end is gonna need a fair amount of feedback from the planning end. [speaker001:] hmm [speaker006:] In [disfmarker] in one of these things which are [disfmarker] are much more continuous than the [disfmarker] just the dialogue over movies and stuff. [speaker001:] Yeah. [speaker004:] Mmm. [speaker001:] And even on [disfmarker] on a more basic level the [disfmarker] the action planner actually needs to be able to have um an expressive power that can deal with these structures. [speaker006:] Hmm? [speaker001:] And not just um say um [disfmarker] um the dialogue um will consist of ten possible states and th these states really are fixed in [disfmarker] in a certain sense. [speaker006:] Would there be any chance of getting the terminology changed so that the dialogue planner was called a "dialogue planner"? [speaker001:] You have to [disfmarker] [speaker006:] Because there's this other thing The o There's this other thing in [disfmarker] in the tourist domain which is gonna be a route planner [speaker001:] That'd be nice. [speaker006:] or [disfmarker] It's really gonna be an action planner. And [comment] i it [disfmarker] [speaker001:] It oughta be called a [disfmarker] a dialogue manager. cuz that's what everybody else calls it. [speaker006:] I would think, yeah. [speaker004:] Mmm. [speaker006:] Huh? [speaker001:] Yeah. 
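A toy version of the feedback channel being argued for here: the planner publishes its plan state and expectations, not just word lists, and the analysis side uses them to rerank recognizer hypotheses. The message fields are assumptions, not an agreed SmartKom interface.

```python
# Assumed message shape for planner -> analysis feedback; purely illustrative.

feedback = {
    "plan_state": {"step": 6, "of": 8, "last_suggestion": "take_taxi"},
    "expected_moves": ["accept", "reject", "ask_cost", "ask_duration"],
    "salient_entities": ["taxi", "Powder Tower"],
}

def bias_hypotheses(hypotheses, feedback):
    """Prefer recognizer hypotheses that mention currently salient entities."""
    salient = set(feedback["salient_entities"])
    def score(hypothesis):
        return sum(1 for word in hypothesis.split() if word in salient)
    return sorted(hypotheses, key=score, reverse=True)

print(bias_hypotheses(
    ["how much does the taxi cost", "how much does the tax cut cost"],
    feedback))
```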
[speaker006:] So, s So what would happen if we sent a note saying "Gee we've talked about this and couldn't we change this uh th the whole word?" I have no idea how complicated these things are. [speaker002:] Probably close to impossible. [speaker001:] Depends on who you talk to how. We'll see. I'll go check, cause I completely agree. [speaker004:] Mmm. [speaker001:] Yeah, and I think this is just for historical reasons within uh, the preparation phase of the project and not because somebody actually believes it ought to be action planner. So if there is resistance against changing it, that's just because "Oh, We don't want to change things." That [disfmarker] that not deep reason [speaker006:] OK, anyway. I if [disfmarker] if that c in persists then we're gonna need another term. for the thing that actually does the planning of the uh routes and whatever we are doing for the tourist. [speaker002:] That's external services. [speaker006:] Yeah, but that's not g eh tha That ha has all the wrong connotations. it's [disfmarker] it sounds like it's you know stand alone. It doesn't interact, it doesn't That's why I'm saying. I think you can't [disfmarker] it's fine for looking up when T you know when the show's on TV. You go to th but I [disfmarker] I [disfmarker] I [disfmarker] I think it's really [disfmarker] really wrong headed for something that you [disfmarker] that has a lot of state, it's gonna interact co in a complicated way with the uh understanding parts. [speaker002:] Yeah. Yeah I think just the [disfmarker] the spatial planner and the route planner I showed you once the interac action between them among them in the deep map system [speaker006:] Right. [speaker002:] so [disfmarker] a printout of the communication between those two fills up I don't know how many pages and that's just part of how do I get to one place. [speaker001:] Hmm [speaker002:] It's really insane. and uh but um so this is um definitely a good point to get uh Michael into the discussion. Or to enter his discussion, actually. That's the way around. [speaker001:] Yeah, Marcus. [speaker002:] Markus Is he new in the [disfmarker] in the? [speaker001:] Wh where's? Yeah, he's [disfmarker] he started um I think January. [speaker004:] Yeah. [speaker001:] And he's gonna be responsible for the implementation of this action planner. Dialogue manager. [speaker002:] Is he gonna continue with the old [disfmarker] uh [disfmarker] thing? [speaker001:] No, no he's completely gonna rewrite everything. In Java. [speaker002:] OK. [speaker001:] OK so that's interesting. [speaker002:] Yes I was just [disfmarker] that's my next question [speaker001:] hmm [speaker002:] whether we're [disfmarker] we're gonna stick to Prolog or not. [speaker001:] No. No, that's gonna be phased out. [speaker006:] Yeah. [speaker002:] OK But I do think the [disfmarker] the function modeling concept has a certain [disfmarker] makes sense in a [disfmarker] in a certain light [speaker001:] Yeah. [speaker002:] because the action planner should not be [disfmarker] or the dialogue manager in that case should not um w have to worry about whether it's interfacing with um something that does route planning in this way or that way [speaker001:] Mm hmm. [speaker006:] I I totally agree. Sure. [speaker002:] huh, it j [speaker006:] Yeah I [disfmarker] I agree. There is [disfmarker] there's a logic to dialogue which [disfmarker] which is [disfmarker] is separable. I Yeah. 
[speaker002:] and it [disfmarker] cant [disfmarker] sort of formulate its what it wants in a [disfmarker] in a rather a abstract uh way, you know f "Find me a good route for this." [speaker006:] Mm hmm. [speaker002:] It doesn't really have to worry ab how route planner A or how route planner B actually wants it. So this is [disfmarker] seemed like a good idea. In the beginning. [speaker006:] It's tricky. It's tricky because one could well imagine [disfmarker] I think it will turn out to be the case that uh, this thing we're talking about, th the extended n uh knowledge modeler will fill in some parameters about what the person wants. One could well imagine that the next thing that's trying to fill out the detailed uh, route planning, let's say, will also have questions that it would like to ask the user. You could well imagine you get to a point where it's got a [disfmarker] a choice to make and it just doesn't know something. And so y you would like it t also be able to uh formulate a query. [speaker001:] Mm hmm. [speaker006:] And to run that back through uh. the dialogue manager and to the output module and back around. [speaker002:] hmm [speaker006:] And a I a a good design would [disfmarker] would allow that to happen. [speaker002:] a lot of, yeah [speaker004:] Mmm. [speaker006:] If [disfmarker] if you know if [disfmarker] if you can't make it happen then you [disfmarker] you do your best. [speaker001:] Yeah but that doesn't necessarily contradict um an architecture where there really is a pers a def well defined interface. [speaker006:] I totally agree. [speaker001:] and [disfmarker] and [speaker006:] But [disfmarker] but what it nee but th what the point is the in that case the dialogue manager is sort of event driven. So the dialogue manager may think it's in a dialogue state of one sort, and this [disfmarker] one of these planning modules comes along and says "hey, right now we need to ask a question". [speaker001:] Mm hmm. [speaker006:] So that forces the dialogue manager to change state. [speaker001:] Yes [speaker006:] OK. It could be y [speaker001:] Sure, ye yeah I [disfmarker] I think that's [disfmarker] that's the um concept that people have, [speaker006:] Yeah, yeah it [disfmarker] it [disfmarker] OK. [speaker001:] yep. And [disfmarker] and the [disfmarker] the underlying idea of course is that there is something like kernel modules with kernel functionality that you can plug uh certain applications like tourist information or um the home scenario with uh controlling a VCR and so on. And then extend it to an arbitrary number of applications eventually. So [disfmarker] wouldn't That's an additional reason to have this well defined interface [speaker006:] Oh, yeah, yeah. [speaker001:] and keep these things like uh tourist information external. And then call it external services. [speaker002:] Hmm. [speaker001:] But of course the [disfmarker] the more complex [disfmarker] [speaker002:] Yeah, there is another philosophical issue that I think you know you can [disfmarker] evade [speaker001:] yep. [speaker002:] but, at at least it makes sense to me that sooner or later uh [disfmarker] a service is gonna come and describe itself to you. and that's sort of what Srini is working on in [disfmarker] in [disfmarker] in the DAML uh project where um you [disfmarker] you find a GIS about [disfmarker] that gives you information on Berkeley, [speaker001:] Yeah. [speaker002:] and it's [disfmarker] it's gonna be there and tell you what it can do and how it wants to do things. 
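A minimal sketch of the event-driven arrangement described a moment ago: a planning module behind the well-defined interface can still raise a query event, forcing the dialogue manager into a clarification state. All class, state, and event names are invented.

```python
# Invented sketch: a planning module forces the dialogue manager
# into a clarification state by raising an event.

class DialogueManager:
    def __init__(self):
        self.state = "executing_plan"

    def on_event(self, event):
        if event["type"] == "need_user_input":
            self.state = "clarifying"
            return f"(to output module) {event['question']}"
        return None

class RoutePlanner:
    def plan(self, dialogue_manager):
        # The planner hits a choice it cannot make on its own:
        return dialogue_manager.on_event({
            "type": "need_user_input",
            "question": "Would you rather walk or take the bus?",
        })

dm = DialogueManager()
print(RoutePlanner().plan(dm))
print(dm.state)  # -> "clarifying"
```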
and so you can actually interface to such a system without ever having met it before and the function modeler and a self description of the um external service haggle it out [speaker001:] Hmm. [speaker002:] and you can use the same language core, understanding core to interface with planner A, planner B, planner C and so forth. [speaker001:] Hmm. [speaker004:] Mmm. [speaker002:] Which is, you know, uh [disfmarker] uh [disfmarker] utopian [disfmarker] completely utopian at the moment, but slowly, you know, getting into the realm of the uh contingent. [speaker001:] Hmm. [speaker002:] But we are facing of course much more um realistic problems. And language input for example, is of course uh crucial you know also when you do the sort of deep understanding analysis that we envision. um Then of course, the uh um, you know what is it [disfmarker] poverty of the stimulus, yet the m uh the less we get of that the better. and um so we [disfmarker] we're thinking, for example how much syntactic analysis actually happens already in the parser. [speaker004:] Hmm. [speaker002:] and whether one could interface to that potentially [speaker004:] Yeah, are there currently is uh no syntactic analysis but in the next release there will be some. [speaker002:] Hmm. [speaker006:] How's it [disfmarker] [speaker004:] unless and it's um uh you can access this [speaker006:] S so uh y we [disfmarker] we looked at the e current pattern matching thing. [speaker004:] Hmm. [speaker006:] And as you say it's just a surface pattern matcher. Uh, So what are [disfmarker] what are the plans roughly? [speaker004:] um it's to [disfmarker] to integrate and syntactic analysis. and um add some more features like segmentation. So then an utter more than one utterance is [disfmarker] There um there's often uh pause between it and a segmentation occurs. [speaker006:] So, the um [disfmarker] [speaker004:] um [speaker006:] So the idea is to uh [disfmarker] have a pa y y a particular [disfmarker] [speaker004:] yeah [speaker006:] Do you have a particular parser in mind? Is it uh [disfmarker] partic d I mean have you thought through [disfmarker]? Is it an HPSG parser? Is it a whatever? [speaker004:] No [disfmarker] no it's [disfmarker] uh I think it's it's totally complicated for it's just one [disfmarker] one person [speaker006:] OK. [speaker004:] and so I have to keep the [disfmarker] [speaker006:] Oh, you have to do it. You have to do it, yeah. [speaker004:] Yeah, ah and so [vocalsound] things must be simpler [speaker006:] I see, so [speaker004:] but uh, Miel syntactic analysis with um finite state transducers. [speaker006:] But the people at D F Yeah. People at DFKI have written a fair number of parsers. Other [disfmarker] you know, people over the years. uh have written various parsers at DFKI. None of them are suitable? I [disfmarker] I [disfmarker] I d I'm asking. I don't know. [speaker004:] Yeah, uh the problem is th that it has to be very fast because um if you want to for more than one path anywhere [speaker006:] OK. [speaker004:] what's in the lattices from the speech recognizer [speaker006:] Mm hmm. [speaker004:] so its speed is crucial. uh And they are not fast enough. [speaker006:] Mm hmm. [speaker004:] And they also have to be very robust. cuz of um speech recognition errors and [speaker006:] OK. So, um [disfmarker] So there was a chunk parser in Verbmobil, that was one of the uh branchers. You know they [disfmarker] d th I c There were these various uh, competing uh syntax modules.
And I know one of them was a chunk parser and I don't remember [pause] who did that. [speaker004:] I think it's that [speaker002:] A Alan? [speaker004:] might, at Tuebingen I thought. [speaker006:] Yeah I d I don't remember. [speaker004:] was [disfmarker] Do you know something about that? [speaker001:] Tubingen was at least involved in putting the chunks together [speaker004:] In Tub at [disfmarker] [speaker001:] I [disfmarker] can't quite recall whether they actually produced the chunks in the first place. [speaker004:] oh [speaker006:] Uh. I see. Yeah, that's right. [speaker001:] Or wh [speaker006:] There w [speaker004:] Oh from [disfmarker] from Stuttgart, [speaker006:] That's right. They w They had [disfmarker] There were [disfmarker] This was done with a two phase thing, where [comment] the chunk parser itself was pretty stupid [speaker004:] yeah, also [speaker006:] and then there was a kind of trying to fit them together that h used more context. [speaker001:] Right. Yeah [speaker006:] Right? [speaker001:] Well you s and [disfmarker] and especially you did some [disfmarker] some um, l um was a learning based approach which learned from a big corpus of [disfmarker] of trees. [speaker006:] Right. [speaker004:] Mm hmm. [speaker006:] Right. [speaker001:] And yes the [disfmarker] it [disfmarker] the chunk parser was a finite state machine that um Mark Light originally w worked on in [disfmarker] while he was in Tuebingen and then somebody else in Tuebingen picked that up. So it was done in Tuebingen, yeah. Definitely. [speaker006:] But is that the kind of thing y It sounds like the kind of thing that you were thinking of. [speaker004:] yeah. [speaker001:] Yeah I guess it's similar. [speaker004:] yeah that's In this direction, yes [speaker006:] What? [speaker004:] Yeah, it's in [disfmarker] in this direction. [speaker006:] Hmm. [speaker002:] The [disfmarker] From Michael Strube, I've heard very good stuff about the chunk parser that is done by FORWISS, uh, which is in embassy doing the parsing. [speaker001:] Mm hmm. [speaker002:] So this is sort of [disfmarker] came as a surprise to me that you know, embassy s [comment] is featuring a nice parser but it's [pause] what I hear. One could also look at that and see whether there is some synergy possible. [speaker004:] Mm hmm, yeah, it would be very interesting, Mm hmm. Mmm, yeah. [speaker002:] And they're doing chunk parsing and it's uh [disfmarker] I [disfmarker] I can give you the names of the people who do it there. But um. Then there is of course more ways of parsing things. [speaker006:] Of course. But [disfmarker] But uh given th the constraints, that you want it to be small and fast and so forth, my guess is you're probably into some kind of chunk parsing. And uh I'm not a big believer in this um statistical you know, cleaning up uh It [disfmarker] That seems to me kind of a last resort if uh you can't do it any other way. uh but I dunno. It may [disfmarker] i i may be that's what you guys finally decide do. [speaker004:] Hmm. [speaker006:] Uh. And have you looked [disfmarker] uh just [disfmarker] again for context [disfmarker] [speaker004:] Mm hmm. [speaker006:] There is this [disfmarker] this one that they did at SRI some years ago [disfmarker] Fastus? [speaker004:] um [speaker006:] a [disfmarker] [speaker004:] yeah, I've [disfmarker] I've looked at it but [disfmarker] but it's no [disfmarker] not much uh information available. [speaker006:] ah! [speaker004:] I found, but it's also finite state transducers, I thought. 
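The two-phase idea being recalled, a deliberately simple chunker followed by a repair pass that uses a little more context, can be caricatured in a few lines. The tags, rules, and attachment heuristic are toy inventions, not the Tuebingen or FORWISS systems.

```python
# Phase 1: a deliberately simple chunker over POS-tagged tokens.
def chunk(tagged):
    chunks, current = [], []
    for word, tag in tagged:
        if tag in ("DET", "ADJ", "NOUN"):
            current.append(word)
        else:
            if current:
                chunks.append(("NP", current))
                current = []
            chunks.append((tag, [word]))
    if current:
        chunks.append(("NP", current))
    return chunks

# Phase 2: glue "PREP + NP" onto the NP that precedes them.
def attach(chunks):
    out, i = [], 0
    while i < len(chunks):
        label, words = chunks[i]
        if (label == "PREP" and out and out[-1][0] == "NP"
                and i + 1 < len(chunks) and chunks[i + 1][0] == "NP"):
            out[-1] = ("NP", out[-1][1] + words + chunks[i + 1][1])
            i += 2
        else:
            out.append((label, words))
            i += 1
    return out

tagged = [("the", "DET"), ("tower", "NOUN"), ("on", "PREP"),
          ("the", "DET"), ("hill", "NOUN"), ("collapsed", "VERB")]
print(attach(chunk(tagged)))  # one NP "the tower on the hill", one VERB chunk
```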
[speaker006:] It is. Yeah. I mean [disfmarker] it's [disfmarker] it was pretty ambitious. [speaker004:] and [speaker006:] And of course it was English oriented, um w [speaker004:] Yeah, and [disfmarker] and Purely finite state transducers are not so good for German since there's um [speaker006:] Right. Yeah, I guess that's the point is [disfmarker] is all the morphology and stuff. [speaker004:] The word order is [disfmarker] is uh not fixed [speaker006:] And English is all th all word order. And it makes a lot more sense. [speaker004:] Yeah. [speaker006:] And [disfmarker] e Yeah, OK. Good point. So in [disfmarker] in [disfmarker] in German you've got uh most of this done with [speaker004:] Mm hmm. Also it's uh [disfmarker] it's um [disfmarker] Yes, uh the um choice between uh this processing and that processing and my template matcher. [speaker006:] Right. Right. So what about Um Did y like Morphix? a a e y you've got stemmers? Or is that something that [disfmarker] [speaker004:] Um, yeah but it's all in the [disfmarker] in the lexicon. [speaker006:] But did you have that? [speaker004:] So it's [disfmarker] Yeah th the information is available. [speaker006:] OK. I see. So, but [disfmarker] So y you just connect to the lexicon [speaker004:] So [disfmarker] Yeah [speaker006:] and uh at least for German you have all [disfmarker] all of the [disfmarker] uh the stemming information. [speaker004:] Yeah, we can, oh yeah. We have knowledge bases from [disfmarker] from Verbmobil system we can use [speaker006:] Yep. [speaker004:] and so. [speaker006:] Right. But it [disfmarker] it [disfmarker] it doesn't look like i you're using it. I didn't n see it being used in the current template uh parser. I [disfmarker] I didn't see any Uh [disfmarker] of course we l actually only looked at the English. [speaker004:] It [disfmarker] [speaker006:] Did we look at the German? [speaker004:] um [speaker006:] I don't remember. So w wha [speaker004:] Yeah, but [disfmarker] but it's used for [disfmarker] for stem forms. [speaker001:] n Well I think [disfmarker] I think there's some misunderstanding here [speaker006:] i [speaker004:] Oh, OK. [speaker001:] it's [disfmarker] Morphix is not used on line. s so the lexicon might be derived by Morphix [speaker004:] What? [speaker001:] but What [disfmarker] what's happening on line is just um um a [disfmarker] a retrieval from the lexicon which would give all the stemming information [speaker006:] Right. Right. [speaker004:] Hmm. [speaker001:] so it would be a full form lexicon. [speaker006:] And that's what you have. [speaker004:] Yeah [speaker001:] Yep. [speaker006:] OK. What [disfmarker] uh I didn't reme [speaker002:] We threw out all the forms. [speaker006:] Huh? [speaker002:] We threw out all the forms because, you know, English, well [disfmarker] [speaker006:] Oh OK, so it [disfmarker] yeah, s s I thought I'd [disfmarker] [speaker004:] Mm hmm. [speaker006:] So in German then you actually do case matching and things like in the [disfmarker] in the pattern matcher or not? [speaker004:] um Not yet but it's planned to do that. [speaker006:] OK. Cuz I r I didn't reme I didn't think I saw it. Have we looked at the German? [speaker004:] Yeah [speaker006:] Oh, I haven yeah that's [disfmarker] getting it from the lexicon is just fine. Yeah, yeah, yeah. [speaker001:] Sure, [speaker004:] Oh yes. [speaker001:] right. [speaker006:] No problem with that. um Yeah and here's the case where the English and the German might really be significantly different.
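What Ralf clarifies here, mocked up: Morphix runs offline to derive a full-form lexicon, and online processing is a plain lookup returning the stem plus morphological features such as case. The entries below are invented examples, not real Morphix output.

```python
# Mock full-form lexicon (derived offline, e.g. by a tool like Morphix);
# online processing is a plain lookup. Entries are invented examples.

FULL_FORM_LEXICON = {
    "turmes": {"stem": "turm", "pos": "NOUN", "number": "sg", "case": "gen"},
    "tuerme": {"stem": "turm", "pos": "NOUN", "number": "pl", "case": "nom"},
    "gehe":   {"stem": "gehen", "pos": "VERB", "person": "1", "number": "sg"},
}

def lookup(word):
    return FULL_FORM_LEXICON.get(word.lower(), {"stem": word, "pos": "UNK"})

print(lookup("Turmes"))  # -> stem "turm", genitive singular
```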
In terms of if you're trying to build some fast parser and so forth and [disfmarker] You really might wanna do it in a significantly different way. I don't know. So you've [disfmarker] you guys have looked at this? also? in terms of You know, w if you're doing this for English as well as German Um Do you think now that it would be this [disfmarker] doing it similarly? [speaker004:] um Yeah, it's um I think it's um yes, it's [disfmarker] it's um possible to [disfmarker] to do list processing. and Maybe this is um more adequate for English and in German um set processing is used. [speaker006:] Set. [speaker004:] Maybe yeah. Some extensions uh have to be made. For [disfmarker] for a English version [speaker006:] Mmm. OK. Interesting. Not easy. [speaker002:] Well there's m I'm sure there's gonna be more discussion on that after your talk. [speaker004:] Mm hmm, [speaker006:] Right. [speaker004:] yeah. [speaker002:] We're just gonna foreshadow what we saw that [speaker006:] Right. [speaker002:] and um [speaker006:] Now actually, um Are you guys free at five? Or [disfmarker] Do you have to go somewhere at five o'clock tonight? W in ten minutes? [speaker004:] Ah [disfmarker] mmm. No. [speaker001:] uh [disfmarker] uh [disfmarker] I think we're expect [disfmarker] [speaker004:] Or there was an [disfmarker] talk? [speaker001:] Yeah, there [disfmarker] there's the um practice talk. [speaker004:] uh Mmm, yeah. [speaker006:] Great. So you're going to that. [speaker001:] Yeah, that [disfmarker] that's what we were planning to do. [speaker006:] That's good, because that will uh tell you a fair amount about The form of semantic construction grammar that we're using. [speaker001:] Yeah. [speaker006:] so [disfmarker] So I th I think that probably as good an introduction as you'll get. [speaker001:] Mm hmm. [speaker004:] Ah. [speaker006:] Uh to the form of [disfmarker] of uh [disfmarker] conceptual grammar that [disfmarker] that w we have in mind for this. [speaker004:] Mmm, ah. [speaker006:] It won't talk particularly about how that relates to what uh Robert was saying at the beginning. But let me give you a very short version of this. So we talked about the fact that There're going to be a certain number of decisions That you want the knowledge modeler to make, that will be then fed to the function module, that does uh, route planning. It's called the "route planner" or something. [speaker001:] Mm hmm. [speaker006:] So there are these decisions. And then one half of this we talked about at little bit is how if you had the right information, if you knew something about what was said and about th the something about was the agent a tourist or a native or a business person or uh young or old, whatever. That information, and also about the Uh, what we're calling "the entity", Is it a castle, is it a bank? Is it a s town square, is it a statue? Whatever. So all that kind of information could be combined into decision networks and give you decisions. But the other half of the problem is How would you get that kind of information from the parsed input? So, um So what you might try to do is just build more templates, saying uh we're trying to build a templ you know build a template that w uh somehow would capture the fact that he wants to take a picture. [speaker004:] Mmm. [speaker006:] OK? And [disfmarker] and we could [disfmarker] you could do this. And it's a small enough domain that probably you, you know [disfmarker] [speaker004:] Mmm. [speaker006:] OK. You could do this.
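The list-versus-set contrast Tilman mentions can be made concrete: an ordered pattern suits English's fairly fixed word order, while an order-free set of required items suits German's freer word order. The patterns and sentences below are toy examples.

```python
# Toy contrast: order-sensitive (list) vs. order-free (set) template matching.

def match_list(tokens, pattern):
    """English-style: required items must appear in this relative order."""
    it = iter(tokens)
    return all(any(token == wanted for token in it) for wanted in pattern)

def match_set(tokens, required):
    """German-style: required items may appear in any order."""
    return set(required) <= set(tokens)

print(match_list("how do i get to the tower".split(), ["get", "to", "tower"]))
print(match_set("wie komme ich zum turm".split(), {"turm", "komme"}))
# both print True
```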
But uh from our point of view this is also a research project and there are a couple of people not here for various reasons who are doing doctoral dissertations on this, [speaker001:] Mm hmm. [speaker006:] and the idea that we're really after is a very deep semantics based on cognitive linguistics and the notion that there are a relatively small number of primitive conceptual schemas that characterize a lot of activity. So a typical one in this formulation is a container. So this is a static thing. And the notion is that all sorts of physical situations are characterized in terms of containers. Going in and out the portals and con [speaker004:] Mmm. [speaker006:] OK. But also, importantly for Lakoff and these guys is all sorts of metaphorical things are also characterized this way. You get in trouble and you know et cetera [speaker004:] Mmm. [speaker006:] and so [disfmarker] s So, what we're really trying to do is to map from the discourse to the conceptual semantics level. And from there to the appropriate decisions. So another one of these primitive, what are called "image schemas", is uh goal seeking. [speaker001:] Mm hmm. [speaker006:] So this a notion of a source, path, goal, trajector, possibly obstacles. [speaker001:] Mm hmm. [speaker006:] And the idea is this is another conceptual primitive. And that all sorts of things, particularly in the tourist domain, can be represented in terms of uh source, path and goal. [speaker001:] Mm hmm. [speaker006:] So the idea would be could we build an analyser that would take an utterance and say "Aha! th this utterance is talking about an attempt to reach a goal. The goal is this, the pers the, uh traveller is that, uh the sor w where we are at now is is this, they've mentioned possible obstacles, et cetera." So th the [disfmarker] and this is an [disfmarker] again attempt to get very wide coverage. So if you can do this, then the notion would be that across a very large range of domains, you could use this deep conceptual basis as the interface. [speaker001:] Mm hmm. Mm hmm. [speaker006:] And then, uh The processing of that, both on the input end, recognizing that certain words in a language talk about containers or goals, et cetera, and on the output end, given this kind of information, you can then uh make decisions about what actions to take. Provides, they claim, a very powerful, general notion of deep semantics. So that's what we're really doing. [speaker001:] Mm hmm. [speaker006:] And Nancy is going to [disfmarker] Her talk is going to be not about using this in applications, but about modeling how children might learn this kind of uh deep semantic grammar. [speaker001:] Mm hmm. Yep, yep. And how do you envision um the [disfmarker] the um this deep semantic to be worked with. Would it be highly ambiguous if and then there would be another module that takes that um highly underspecified deep semantic construction and map it onto the current context to find out what the person really was talking about in that context. [speaker006:] Well that's [disfmarker] that's [disfmarker] that's where the belief net comes in. [speaker001:] or [disfmarker] or a [disfmarker] [speaker006:] So th the idea is, let's take this business about going to the Powder Tower. [speaker001:] Mm hmm. [speaker006:] So part of what you'll get out of this will be the fact tha w if it works right, OK, that this is an agent that wants to go to this place and that's their goal and there will be additional situational information. [speaker001:] Mm hmm. Oh, OK. 
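The source-path-goal schema as just laid out, rendered as a data structure with the roles named in the discussion: trajector, source, path, goal, obstacles. Filling it from the Powder Tower utterance, and drawing the source from a situation model, are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SourcePathGoal:
    """Roles of the source-path-goal image schema described above."""
    trajector: str
    source: Optional[str] = None
    path: Optional[str] = None
    goal: Optional[str] = None
    obstacles: List[str] = field(default_factory=list)

# "How do I get to the Powder Tower?" analysed into the schema;
# the source comes from a situation model (e.g. GPS), not the utterance.
spg = SourcePathGoal(trajector="the speaker",
                     source="current position",
                     goal="Powder Tower")
print(spg)
```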
[speaker006:] Uh, OK, [speaker001:] th [speaker006:] part of it comes from the ontology. The tower is this kind of object. [speaker001:] Mm hmm. Yeah, OK. [speaker006:] Part of it comes from the user model. [speaker001:] Mm hmm. [speaker006:] And the idea of the belief net is it combines the information from the dialogue which comes across in this general way, [speaker001:] Mm hmm. [speaker006:] you know this is a [disfmarker] this is a goal seeking behavior, along with specific information from the ontology about the kinds of objects involved [speaker001:] Yeah OK, Yeah, yep yep yep yep [speaker006:] and about the situation about "Is it raining?" I don't know. Whatever it is. And so that's the belief net that we've laid out. [speaker001:] Mm hmm. [speaker006:] And so th the coupling to the situation comes in this model from, at th at th at the belief net, combining evidence from the dialogue with the ontology with the situation. [speaker001:] Yeah. [speaker004:] Hmm. [speaker006:] But Nancy isn't gonna talk about that, [speaker001:] Yeah, [speaker006:] just about the um [speaker001:] oh yeah, I see, yeah yeah, really. [speaker002:] First steps. [speaker006:] Right. The [disfmarker] the construction grammar. [speaker002:] And she's gonna start in a minute. [speaker006:] In a minute. [speaker004:] Ah, OK. [speaker006:] OK. [speaker007:] Is it i in, then, your place, in five [disfmarker] five A? [speaker001:] Alright.